\section{Introduction}
\label{Sec:Intro}
One of the major goals of the LHC is the study of the properties of the top
quark. In this respect single top production processes offer the unique
possibility of a direct measurement of the entry
$V_{tb}$ of the CKM matrix, allowing non-trivial tests of the properties
of this matrix in the Standard Model (SM)~\cite{SMsingle}. Moreover,
single top production processes allow for a test of the $V-A$ structure of the
charged current weak interaction of the top by looking at the polarization
of this quark~\cite{VAsingle}. Such processes are also interesting in the
hunt for physics beyond the SM; indeed, new physics can manifest itself
via loop effects,
by inducing non-SM weak interactions,
or by introducing new single top production channels~\cite{BSMchan}. \\
\noindent
Within the Standard Model single tops can be produced via three
different modes. At the LHC the t-channel production mode will be not only the dominant
one~\cite{TOPreview} but also the best measured: CMS
studies~\cite{Pumplin:2002vw} conclude that, with $10$ fb$^{-1}$ of
integrated luminosity, the (mostly systematic)
experimental uncertainty of the cross section is reduced below the ten percent
level. \\
\noindent
Such an experimental precision demands a similarly
accurate theoretical prediction of the observables of the process.
Achieving it requires the complete NLO
calculation. In the SM this has been done for the QCD component
of the t-channel, resulting in a relatively small (few percent)
effect~\cite{TCqcd}.
The electroweak effects have been computed very recently at the
complete one loop level within the SM and the MSSM~\cite{Beccaria:2006, Beccaria:2008}.
Such computation is the topic of this paper.\\
\noindent
In particular in sect.~\ref{Sec:EWcorrections} we
briefly describe the structure of the Electroweak (EW) corrections to the
process of t-channel single (anti-)top production
focusing on the partonic processes leading to such corrections.
In sect.~\ref{Sec:SUSY} we discuss the SUSY corrections to these processes
evaluated within the MSSM. Numerical results for the EW corrections to t-channel
single (anti-)top production at the LHC are presented in
sect.~\ref{Sec:Numerics}, where the numerical impact of the SUSY corrections
is discussed as well. Sect.~\ref{Sec:Conclusions} summarizes our results.
\section{Electroweak Corrections}
\label{Sec:EWcorrections}
Electroweak corrections to single (anti-)top production in the t-channel are
of $\mathcal{O}(\alpha^3)$ and, in an obvious notation, can be written as:
\begin{eqnarray}
\label{Eq:Main1l}
d \sigma_{t\mbox{\tiny-prod.}}(S) &=& \sum_{\mbox{\tiny{(q, q')}}} \int_{\tau_0}^1 d \tau
\frac{dL_{qb}}{d \tau} \Big(
d \sigma^{\mbox{\tiny ew}}_{q b \to q' t}(s) +
d \sigma^{\mbox{\tiny ew}}_{q b \to q' t
\gamma}(s) \Big), \\
d \sigma_{\bar{t}\mbox{\tiny-prod.}}(S) &=& \sum_{\mbox{\tiny{(q, q')}}}
\int_{\tau_0}^1 d \tau \frac{dL_{\bar q \bar b}}{d \tau} \Big(
d \sigma^{\mbox{\tiny ew}}_{\bar q \bar b \to \bar q' \bar t}(s) +
d \sigma^{\mbox{\tiny ew}}_{\bar q \bar b \to
\bar q' \bar t \gamma}(s) \Big).
\end{eqnarray}
Here $(q,q')=(u,d),(c,s),(\bar d, \bar u), (\bar s, \bar c)$, while $\tau_0 = m^2_t / S$ and $s = \tau S$. The
differential luminosity is defined as:
\be
\label{Eq:Lumi}
\frac{dL_{i j}}{d\tau}(\tau)= \frac{1}{1+\delta_{ij}}~
\int_{\tau}^1 \frac{dx}{x} \left[ f_i(x)f_j\left(\frac{\tau}{x}\right)+
f_j(x)f_i\left(\frac{\tau}{x}\right)\right] ,
\ee
$f_{i}(x)$ being the momentum distribution of the parton
$i$ in the proton. We perform our computation in the Feynman gauge, setting the CKM
matrix to unity. Due to CP invariance the unpolarized cross section of the
partonic process $\bar{q}\bar{b} \to \bar{q}' \bar{t}(\gamma)$ is equal to
that of the process
$q b \to q' t(\gamma)$,
so in the following we will analyse only the partonic processes contributing
to single top production.
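As an aside, the convolution in eq.~(\ref{Eq:Lumi}) is straightforward to evaluate numerically. The sketch below uses simple illustrative parton density shapes (not real PDF fits) and checks the manifest $i \leftrightarrow j$ symmetry of the definition.

```python
def f_q(x):
    # hypothetical valence-like shape, NOT a real PDF fit
    return x**-0.5 * (1.0 - x)**3

def f_b(x):
    # hypothetical sea-like shape, NOT a real PDF fit
    return 0.1 * x**-1.0 * (1.0 - x)**7

def dL_dtau(fi, fj, tau, n=2000):
    # midpoint rule for the x-integral in eq. (Eq:Lumi);
    # the 1/(1+delta_ij) factor is decided by function identity
    delta_ij = 1.0 if fi is fj else 0.0
    h = (1.0 - tau) / n
    total = 0.0
    for k in range(n):
        x = tau + (k + 0.5) * h
        total += (fi(x) * fj(tau / x) + fj(x) * fi(tau / x)) / x
    return total * h / (1.0 + delta_ij)

tau0 = 0.01
lum = dL_dtau(f_q, f_b, tau0)
# the definition is symmetric under i <-> j
assert abs(lum - dL_dtau(f_b, f_q, tau0)) < 1e-9 * lum
```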
\subsection{Virtual Corrections}
\label{SSec:Virtual}
The first class of corrections entering eq.~(\ref{Eq:Main1l}) consists of the virtual
corrections to the generic partonic process $qb\to tq'$.\\
\noindent
The starting point is the cross section for the $ub\to td$ process;
the $\mathcal{O}(\alpha^3)$ corrections to the (unpolarized) differential
cross section of this process read:
\begin{equation}
d \sigma^{\mbox{\tiny ew}}_{ub\to td} = \frac{dt}{64 \pi s^2} \sum_{\mbox{\tiny spin}}
2 \mbox{Re} \{ \mathcal{M}^{0 ~*} \mathcal{M}^1 \},
\label{Eq:Upp}
\end{equation}
where $\mathcal{M}^0$ is the tree level amplitude of the
partonic process $ub \to dt$ while
$\mathcal{M}^1$ describes the corresponding EW one loop amplitude. The Mandelstam
variables are defined as:
\be
s = (p_b+p_u)^2,~~~
t = (p_b-p_t)^2,~~~
u = (p_b-p_d)^2 .
\ee
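With massless light quarks, these invariants satisfy $s+t+u=m_t^2$; a quick numerical check of the kinematics (with illustrative values for $m_t$ and $\sqrt{s}$, not the inputs of the paper) is:

```python
import math

def mdot(p):
    # Minkowski square with metric (+,-,-,-); p = (E, px, py, pz)
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def add(p, q):
    return tuple(a + b for a, b in zip(p, q))

# u b -> t d kinematics in the partonic CM frame; mt and sqrt(s) are
# illustrative numbers, and the light quarks are taken massless
mt, rs, th = 173.0, 500.0, 0.7
s = rs**2
pf = (s - mt**2) / (2.0 * rs)              # final-state 3-momentum modulus
p_u = (rs/2, 0.0, 0.0,  rs/2)
p_b = (rs/2, 0.0, 0.0, -rs/2)
p_t = (math.hypot(pf, mt), pf*math.sin(th), 0.0, pf*math.cos(th))
p_d = (pf, -pf*math.sin(th), 0.0, -pf*math.cos(th))

s_hat = mdot(add(p_b, p_u))
t_hat = mdot(sub(p_b, p_t))
u_hat = mdot(sub(p_b, p_d))

# with one massive final-state particle: s + t + u = m_t^2
assert abs(s_hat + t_hat + u_hat - mt**2) < 1e-6 * s
```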
The diagrams related to $\mathcal{M}^{1}$ have been generated with
the help of \verb|FeynArts|~\cite{FeynArts}, the algebraic reduction of the
one loop integrals is performed with the help of \verb|FormCalc|~\cite{FormCalc} and
the scalar one loop integrals are numerically evaluated using \verb|LoopTools|~\cite{LoopTools}.
We treat UV divergences using dimensional reduction
while IR singularities are parametrized giving a small mass
$m_\gamma$ to the photon.
The masses of the light quarks are used as regulators of the collinear
singularities and are set to zero elsewhere. \\
\noindent
UV finite predictions can be obtained by renormalizing the parameters and the wavefunctions
appearing in $\mathcal{M}^{0}$. In our case we have to renormalize the wavefunction of the
external quarks, the mass of the W boson, the weak mixing angle $\theta_W$
and the electric
charge. We use
the on-shell scheme described in ref.~\cite{DennerHab}.
This scheme uses the fine structure constant evaluated in the Thomson limit
as input parameter. In order to avoid large logarithms arising from the running
of $\alpha$ to the electroweak scale $M_W$, we renormalize the fine structure
constant in the $G_\mu$ scheme, {\it i.e.} we define $\alpha$ in terms of
the Fermi constant $G_\mu$:
\be
\alpha= \frac{\sqrt{2}}{\pi}G_{\mu}M_W^2\sin^2 \theta_W.
\label{Eq:alfaGM}
\ee
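Numerically, eq.~(\ref{Eq:alfaGM}) gives $\alpha_{G_\mu} \simeq 1/132$, to be compared with $\alpha(0) \simeq 1/137$ in the Thomson limit. The snippet below uses illustrative PDG-like inputs and the on-shell definition of $\sin^2\theta_W$; these are not necessarily the exact values used in the paper.

```python
import math

# illustrative inputs (PDG-like, not necessarily those of the paper)
G_mu = 1.16637e-5                      # GeV^-2
M_W  = 80.40                           # GeV
M_Z  = 91.19                           # GeV
sin2_theta_W = 1.0 - M_W**2 / M_Z**2   # on-shell definition

# eq. (Eq:alfaGM)
alpha_Gmu = math.sqrt(2.0) / math.pi * G_mu * M_W**2 * sin2_theta_W
print(f"1/alpha_Gmu = {1.0/alpha_Gmu:.1f}")   # ~132, vs ~137 in the Thomson limit
```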
We consistently change the definition
of the renormalization constant of the fine structure constant following the
guidelines of refs.~\cite{WjetProd,WjetProd2}. \\
\noindent
As pointed out in ref.~\cite{HollikGMU}, in this scheme the
leading logarithms arising from the running of $\alpha$ are resummed
to all orders in perturbation theory and
absorbed in the definition of $\alpha$. Moreover, in the case of charged
current processes,
the universal enhanced terms of the
type $\alpha~m_t^2 / M_W^2$ are included as well.
\\
\noindent
The unpolarized differential cross section for the process $\bar d b \to \bar
u t$ can be obtained from
that of the process $ub\to td$ by crossing:
\begin{equation}
d \sigma^{\mbox{\tiny ew}}_{\bar d b \to \bar u t} = \frac{dt}{64 \pi s^2} \sum_{\mbox{\tiny spin}}
2 \mbox{Re} \{ \mathcal{M}^{0 ~*}(s\to u,~u\to s) \mathcal{M}^1(s\to u,~u\to
s) \} .
\label{Eq:Dbar}
\end{equation}
The differential cross sections of the processes involving
$c$ and $\bar{s}$ are,
in the massless limit, equal to those quoted in eq.~(\ref{Eq:Upp}) and in eq.~(\ref{Eq:Dbar}), respectively.
\subsection{Real Corrections}
\label{SSec:Real}
Another class of $\mathcal{O}(\alpha^3)$ corrections entering
eq.~(\ref{Eq:Main1l})
consists of the tree-level contributions to the partonic processes of t-channel single top production associated with the
emission of a photon, $qb\to tq' \gamma$. \\
\noindent
The amplitudes of these processes have been generated and squared using
\verb|FeynArts| and \verb|FormCalc|.
According to the KLN theorem~\cite{KLN} IR singularities and the
collinear singularities
related to the final state radiation cancel in sufficiently inclusive
observables while the collinear singularities related to initial state
radiation have to be absorbed into the Parton Distribution
Functions (PDF). \\
\noindent
In order to
handle these divergences
we use two different procedures: the dipole subtraction method and
the phase space slicing method. \\
In the subtraction approach one adds and subtracts
to the squared amplitude
an auxiliary function which matches the squared amplitude pointwise
in the singular region and
can be analytically integrated
over the photon phase space. Different functions fulfilling these
requirements are available in the literature; we
use the one quoted in ref.~\cite{Dipole}.
In this reference
explicit expressions for the subtraction function and for its
analytical integral are obtained within mass regularization using
the so-called Dipole Formalism~\cite{DipoleQCD}. \\
\noindent
According to the phase space slicing approach
the singular region of the phase
space is excluded by introducing a cutoff on the energy of the
photon and on the
angle between the photon and
the massless quarks. In the regular region the phase space
integration can be performed numerically while in the singular region
it can be done analytically in the eikonal approximation, provided that the
cutoffs are small enough.
The form of the differential cross section in the
singular region is universal; its explicit expression in the soft (collinear) region can be
found in ref.~\cite{DennerHab} (ref.~\cite{WjetProd2}). \\
\noindent
The two methods are in good numerical agreement.
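This agreement can be illustrated on a one-dimensional toy integral with a mass-regularized singularity, $I(m)=\int_0^1 f(x)/(x+m)\,dx$, where the small regulator $m$ plays the role of the photon mass. The integrand below is an arbitrary smooth test function, not a matrix element.

```python
import math

# arbitrary smooth test function (NOT related to any matrix element)
def f(x):
    return 1.0 + x + 0.5 * x**2

def midpoint(g, a, b, n=100000):
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

m = 1e-6        # small regulator, analogue of the photon mass

# slicing: analytic (eikonal-like) piece below the cutoff delta, numeric above
delta = 1e-3
I_slice = f(0.0) * math.log((delta + m) / m) \
        + midpoint(lambda x: f(x) / (x + m), delta, 1.0)

# subtraction: subtract the pointwise singular behaviour numerically,
# add back its analytic integral
I_sub = midpoint(lambda x: (f(x) - f(0.0)) / (x + m), 0.0, 1.0) \
      + f(0.0) * math.log((1.0 + m) / m)

# the two procedures agree up to O(delta) and numerical errors
assert abs(I_slice - I_sub) < 1e-2 * abs(I_sub)
```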
\subsection{Mass Factorization}
As pointed out in sect.~\ref{SSec:Real},
$\mathcal{O}(\alpha^3)$ corrections to partonic cross sections contain
universal
initial-state collinear singularities
that have to be absorbed into the PDFs choosing a factorization scheme.
We use the $\overline{\mbox{MS}}$ factorization scheme
at the scale $\mu_F = m_t$. \\
\noindent
Concerning
the choice of the parton distribution set,
we follow ref.~\cite{WjetProd}. The calculation of the full $\mathcal{O}(\alpha)$
corrections to any hadronic
observable should include
QED effects in the DGLAP evolution equations. Such effects are taken into
account in the
MRST2004QED PDFs~\cite{Martin:2004dh}, which are however NLO in QCD.
Since our computation is leading order in QCD and since the
QED effects are known to be small~\cite{Roth:2004ti}, we use the LO set CTEQ6L.
\section{SUSY Corrections}
\label{Sec:SUSY}
As already mentioned in sect.~\ref{Sec:Intro}, accurate knowledge of the cross
section of single top production processes allows a precise
determination of the $V_{tb}$ entry of the CKM matrix. Nevertheless some
non-standard physics could alter the prediction of the cross section biasing the
determination of this parameter. We calculate the impact of the one loop
corrections on t-channel single (anti-)top production within the MSSM. \\
\noindent
The one loop SUSY QCD corrections to t-channel single top production at the
LHC have been computed in ref.~\cite{Zhang:2006cx}.
We include these corrections, re-computing them from scratch.
Following a standard procedure in SUSY QCD, we
treat UV divergences
using dimensional regularization.
Moreover, in this case we have to renormalize
only the wavefunctions of the quarks, since the other renormalization constants do not
receive $\mathcal{O}(\alpha_s)$ corrections. These corrections are IR and
collinear safe. \\
\noindent
To obtain the genuine SUSY EW corrections one has to cope with the different
structure of the Higgs sector in the MSSM and in the SM. These
corrections were obtained by re-calculating the full
$\mathcal{O}(\alpha^3)$ corrections and then subtracting the SM
corrections computed setting the SM Higgs mass equal to the mass of the lightest
MSSM Higgs boson. The computation of the $\mathcal{O}(\alpha^3)$ corrections
within the MSSM was performed according to the procedure described in sect.~\ref{SSec:Virtual}.
\section{Numerical results}
\label{Sec:Numerics}
We consider the numerical impact of the corrections described previously by
looking at different observables.
\subsection{EW Corrections within the SM}
In the left panel of fig.~\ref{Fig:Fig01}
we show the NLO ({\it i.e.} LO + $\mathcal{O}(\alpha^3)$)
evaluation of the total cross section for
single (anti-)top production as a function of $p^{\mbox{\tiny cut}}_T$, a cut on the transverse momentum
of the (anti-)top. As expected, at the LHC, single top production dominates
over single anti-top production. Nevertheless, as can be inferred from the
right panel of
fig.~\ref{Fig:Fig01}, the relative contribution of the electroweak corrections
is similar in the two cases. Indeed in both cases EW corrections are negative
and become more important as the value of $p_T^{\mbox{\tiny cut}}$
increases. EW corrections are well below $10~\%$ for any reasonable
value of $p_T^{\mbox{\tiny cut}}$. \\
\noindent
The similar behaviour of the EW
corrections in the case of single top and single anti-top production makes the quantity
\be
R_{t\mbox{\tiny-prod.}/\bar{t}\mbox{\tiny-prod.}} =
\frac{\sigma_{t\mbox{\tiny-prod.}}}{\sigma_{\bar{t}\mbox{\tiny-prod.}}},
\label{Eq:Ratio}
\ee
independent of the EW corrections (fig.~\ref{Fig:Fig03}). This is an
interesting feature since the value of the ratio~(\ref{Eq:Ratio})
is used when looking for single top production processes at the
LHC~\cite{BSMchan}. \\
\noindent
In the left panel of fig.~\ref{Fig:Fig04}
we show the NLO evaluation of
the
transverse momentum ($p_T$)
distribution of the (anti-)top for the two production processes, while in the
right panel the relative impact of the EW corrections is shown.
This observable shows the same features as the previous one: in
particular the contribution of single top production is the leading one and
the relative contribution of the electroweak corrections is similar in both
single top and single anti-top production processes. Moreover EW corrections are
negative and their absolute value increases as the value of $p_T$ increases:
in particular they are larger than
$10~\%$ in the region $p_T >300$~{\rm GeV}.
\subsection{SUSY Corrections}
We present the numerical results for the two representative
ATLAS DC2 mSUGRA benchmark points SU1 and SU6~\cite{DC2'}.
In fig.~\ref{Fig:Fig06}
we show the relative contribution of the SUSY corrections to the total cross
section for single top and single anti-top production as a function of
$p_T^{\mbox{\tiny cut}}$
in the case of the SU1 point (left panel) and SU6 point (right panel). In both
cases SUSY QCD and SUSY EW
corrections are tiny. Moreover, they have opposite signs, so their sum is
further suppressed and in both cases below $0.1~\%$.
\section{Conclusions}
\label{Sec:Conclusions}
We have computed the one loop EW corrections to the process of single
(anti-)top
production in the t-channel.
The overall result is that the impact of these corrections on the total rate is small,
of the order of a few (negative) percent. EW corrections can play a more
important role in the shape of the distributions. \\
\noindent
Moreover we have studied the impact of the one loop SUSY corrections within the MSSM.
In the scenarios we have considered their impact is negligible, so their
possible presence would not spoil the measurement of $V_{tb}$ performed
within the SM.\\
\noindent
{\bf Acknowledgements}\\
\noindent
We would like to thank M.~Beccaria, C.M.~Carloni~Calame, G.~Macorini,
F.~Piccinini, F.M.~Renard and C.~Verzegnassi for the pleasant collaboration
in the work presented here. We gratefully acknowledge W.~Hollik for
valuable suggestions.
\newpage
\section{Introduction}
\subsection{Motivation}
Literary fiction attracts large reading audiences both in the United States and internationally. A National Endowment for the Arts survey reveals that, despite a long steady decline in literary reading in the United States, the number of American adults who read at least one work of fiction a year, even after excluding books read for school or work, still hovers around 43\%. Social media, although often criticized for contributing to the decline in literary reading, has also offered opportunities for communities of readers to interact and engage in ongoing conversations, perhaps thereby reducing the otherwise negative impact of social media on reading. Book forums on social media provide readers an opportunity to share their experiences of reading and can, for some works of fiction, engender long running conversations about nuanced aspects of the work in question. These discussions range from explorations of twists and turns in the plot, to simple declarations of admiration for or familiarity with certain actants (characters, places, things). \cite{2020bourier}\cite{lehnert1980narrative} \textit{Taken individually}, book commentaries and reviews provide a highly individualized perspective on a work of fiction, focusing only on a few actants and their relevance to the narrative. \textit{Taken together}, these comments provide insight into a broader reader consensus of a novel's overarching narrative framework, comprising a majority of the actants and their nuanced relationships.%
\subsection{Objectives and Challenges}
In our work, we assume that we are given thousands of user reviews of a particular novel from a social cataloging/review website such as Goodreads.com. Given such a corpus, we ask the following questions: (i) Can one \textit{automatically discover all the primary actants} as well as meta-actants (authors, actors and actresses from film adaptations, etc.) that are mentioned across all of the book reviews for a given novel? (ii) Can one also \textit{discover and meaningfully cluster all the inter-actant relationships} that these reviews include? The results of goals (i) and (ii) provide, when properly thresholded and weighted, a representation of the consensus model of the novel as perceived by those readers who review the book. Inspired by the actantial narrative model of Algirdas Greimas \cite{greimas1973actants}, we represent these results as an automatically generated narrative network, where nodes are actants and edges are directed multi-edges annotated with the extracted relationships. (iii) Finally, \textit{given an expert generated ground truth narrative network}, can one \textit{automatically compare that ground truth network with the auto-generated summary narrative framework network} and compute meaningful metrics such as recall and precision?
Solving the above problems is tantamount to
developing a view of the reviewers' consensus about a target novel, as readers recollect and review the actual cast of actants and their inter-actant relationships.
The more often that an actant or relationship appears in the corpus, the more heavily it is weighted in the network graph. Importantly, the related methodologies presented here can be extended well beyond the realm of literary fiction to derive narrative frameworks undergirding nearly any collection of documents. We focus on literary fiction because of the unusual (for cultural datasets) presence of a ground truth against which to measure the accuracy of our results.
To construct the actant relationship narrative graph, we start with a dependency tree parsing of the sentences in each review and extract various syntactic structures, such as the Subject (captured as noun argument phrases), Object (also captured as noun argument phrases), actions connecting them (captured as verb phrases), as well as their alliances and social relationships (captured as explicitly connected adjective and appositive phrases; see Table \ref{tab:appos}; see the Methodology section for the tools used and relationship patterns extracted in this paper). \textit{The task of aggregating these extracted phrases into a single narrative network poses unique computational challenges}.
First, as these extractions are both varied and extremely noisy, we need to reduce ambiguity across entity mentions. For example, in reviews of \textit{The Hobbit}, Bilbo Baggins is referred to in numerous ways, including ``Bilbo'' (and its misspelling ``Bilbos''), ``The Hobbit'', ``Baggins'' and ``the Burgler'' or ``the Burglar''. We refer to this disambiguation task as the \textit{Entity Mention Grouping} (EMG) problem. Humans solve the EMG problem by using context: for the different mentions of a character to be the same, they must have the same relationships with other characters. The human ability to disambiguate in this manner has proven difficult to replicate with computational tools.
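A minimal sketch of this context-based grouping idea: two mentions are merged when they share enough (relationship, partner) pairs in the extracted triplets. The triplets and the overlap threshold below are invented for illustration, not output of the actual pipeline.

```python
from collections import defaultdict

# invented illustrative triplets (subject, relationship, object)
triplets = [
    ("Bilbo",      "finds",  "Ring"),
    ("the Hobbit", "finds",  "Ring"),
    ("Bilbo",      "joins",  "dwarves"),
    ("the Hobbit", "joins",  "dwarves"),
    ("Gandalf",    "visits", "Bilbo"),
]

# collect each mention's relationship context
contexts = defaultdict(set)
for subj, rel, obj in triplets:
    contexts[subj].add((rel, obj))

def same_actant(a, b, min_shared=2):
    # relationship-context overlap as a proxy for coreference
    return len(contexts[a] & contexts[b]) >= min_shared

assert same_actant("Bilbo", "the Hobbit")
assert not same_actant("Bilbo", "Gandalf")
```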
Second, the same challenge applies to inter-actant relationships. For example, the relationship ``create'' between Dr. Frankenstein and the monster in the novel \textit{Frankenstein} can be referred to by a cloud of different phrases, including ``made'', ``assembled'', and ``constructed''. To solve this ambiguity, one must computationally recognize that these words are contextually synonymous and identify the group as constituting a single relationship. To make matters more challenging, there are often numerous different relationships between the same actant pair. The dependency tree parsing step produces an unordered list of phrases, which then has to be clustered into semantically similar groups, where each group captures one of the distinct relationships. For example, the extracted relationship phrases between Dr. Frankenstein and the monster include
\{\textit{created, destroying, kill, regretting, constructed, denied, hates, disgusted, made, assemble, blaming, abandon, runs away}\}. These phrases, however, contain sample phrases from at least three distinct relationships: \underline{Create:} [\textit{created, constructed, made, assemble}],
\underline{Destroy:} [\textit{destroying, kill}], and \underline{Deny}: [\textit{denied, hates, disgusted, blaming, abandon, runs away, regretting}]. We label this problem of reliably clustering relationships as the \textit{Inter-actant Relationship Clustering} (IARC) problem.
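The clustering step can be sketched as single-link clustering of phrases whose pairwise similarity exceeds a threshold. In the actual pipeline the similarities come from phrase embeddings (see the Methodology section); in the toy below they are hand-assigned stand-ins, and the threshold is arbitrary.

```python
phrases = ["created", "constructed", "made", "destroying", "kill", "hates"]

# hand-assigned stand-ins for embedding similarities (unlisted pairs -> 0.0)
sim = {
    frozenset(("created", "constructed")): 0.90,
    frozenset(("created", "made")):        0.85,
    frozenset(("destroying", "kill")):     0.90,
}

# single-link clustering via a tiny union-find
parent = {p: p for p in phrases}
def find(p):
    while parent[p] != p:
        p = parent[p]
    return p

THRESHOLD = 0.7
for a in phrases:
    for b in phrases:
        if a < b and sim.get(frozenset((a, b)), 0.0) > THRESHOLD:
            parent[find(a)] = find(b)

clusters = {}
for p in phrases:
    clusters.setdefault(find(p), []).append(p)
print(sorted(sorted(c) for c in clusters.values()))
# -> [['constructed', 'created', 'made'], ['destroying', 'kill'], ['hates']]
```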
Finally, the task of \textit{quantitative evaluation} -- comparison of the extracted networks with ground truth networks -- shares many of the same challenges as the previous two tasks. One has to \textit{semantically align} any expert-created network with the automatically created one. For example, one should be able to match an expert annotated relationship of ``X $\rightarrow$ Captured $\rightarrow$ Y,'' to an automatically aggregated relationship, such as ``Y $\rightarrow$\{ escaped, rescued\} from$\rightarrow$ X.''
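This alignment step can be made concrete in a few lines. The synonym sets and edges below are hypothetical stand-ins for the semantic matching; note the simplification that an undirected actant pair plus a canonical relationship label identifies an edge, which is what lets ``X captured Y'' match ``Y escaped from X''.

```python
# hypothetical synonym sets standing in for semantic matching
SYNONYMS = {
    "captured": {"captured", "escaped from", "rescued from", "imprisoned"},
    "created":  {"created", "made", "constructed"},
}

def canon(rel):
    for key, group in SYNONYMS.items():
        if rel in group:
            return key
    return rel

# illustrative networks, each edge = (actant, relationship, actant)
ground_truth = {("Frankenstein", "created", "monster"),
                ("goblins", "captured", "dwarves")}
extracted    = {("Frankenstein", "made", "monster"),
                ("dwarves", "escaped from", "goblins"),
                ("Bilbo", "finds", "Ring")}

def norm(edge):
    a, rel, b = edge
    # undirected pair + canonical label, so inverse phrasings match
    return (frozenset((a, b)), canon(rel))

gt = {norm(e) for e in ground_truth}
ex = {norm(e) for e in extracted}
precision = len(gt & ex) / len(ex)   # 2/3 on this toy data
recall    = len(gt & ex) / len(gt)   # 2/2 on this toy data
```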
\subsection{Related Work}
Numerous studies have explored book review collections while several other works have attempted to recreate story plots based on these reviews \cite{ucsd1, ucsd2, 2020bourier}. The sentence-level syntax relationship extraction task has been studied widely in work on Natural Language Processing and Open Information Extraction \cite{schmitz2012open, fader2011identifying, wu2010open, gildea2002automatic, baker1998berkeley, palmer2005proposition} as well as in relation to the discovery of actant-relationship models for corpora as diverse as conspiracy theories and national security documents \cite{mohr2013,samory2018a}.
There is considerable recent work on word and phrase embedding for encoding semantic similarity. While word embedding methods such as word2vec, fastText and GloVe \cite{fasttext,glove,word2vec} yield vectors that are context invariant, more recent models such as ELMo and BERT \cite{elmo, bert} allow for polysemy (context-dependent embedding). This polysemic feature allows entire phrases to be encoded with both word-level and phrase-level embeddings. We use BERT embeddings in this paper.
While there is work, such as Clusty \cite{clusty}, which categorizes entities into different categories in a semi-supervised manner, the category examples are fixed. Similarly, works such as ConceptNet \cite{conceptnet} use a fixed set of selected relations to generate their knowledge base. Other recent entity mention grouping work \cite{google_entity_m} seeks to map entity mentions via context vectors produced as an aggregated feature from high-level document metadata and proximal phrases to the mention within the text. Similar work in story graph applications \cite{chargraph1} creates co-scene presence character networks predicated on higher-level annotated knowledge, such as joint scene presence and/or duration of dialogue between a pair of characters. Moreover, these works assume perfect reliability in character mentions (thus obviating the need for the critical step of Entity Mention Grouping that is needed for social media reviews), an assumption we cannot make given our data or data from similarly informal domains.
A major challenge in work on reader reviews of novels is that predefined categories for novel characters and for the diverse inter-character relationships do not exist. In addition, document level features are missing while the proximal text is sparse due to the inherent size of a review (or tweet, comment, opinion, etc.). An unsupervised scheme such as ours for grouping entity mentions into characters and clustering of relationships into semantically distinct groups, as an approximate imitation of human processes, has not been addressed previously.
\subsection{Outline of the paper}
In Section 2, we describe our data, our selection of the four novels for analysis, and our method for generating ground truth narrative frameworks. In Section 3, we describe our methodology and how we solve the EMG and IARC problems. In Section 4, we provide an overview of the limitations of our pipeline. In Section 5, we present our results and evaluation, and in Section 6, we discuss the findings. Lastly, in Section 7, we suggest potential improvements that can be incorporated into the pipeline in future work.
\section{Resources}
We use reader reviews of four works of fiction from the community forums on Goodreads: \textit{Frankenstein} (1818); \textit{Of Mice and Men} (1937); \textit{The Hobbit} (1937); and \textit{To Kill a Mockingbird} (1960) \cite{shelley2015frankenstein,steinbeck1937mice,tolkien2012hobbit,lee1960kill}. The works were chosen from the list of the most frequently rated books on the Goodreads site (number of ratings $>500,000$). For highly rated novels, the number of reviews is also quite high, although significantly lower than the number of ratings. For example, \textit{The Hobbit} has been rated over $2.5$ million times, but has $44,831$ reviews (at the time of our data collection). For each of the novels, we downloaded the maximum allowed three thousand reviews given the Goodreads API limits on review requests.
\par
The reviews were harvested using a crawler specifically designed for this project. Not all reviews were useful since numerous posts were either spam, posts on different topics, or written in languages other than English. Other reviews were either too short to include meaningful content, or so garbled as to be unintelligible. After filtering the reviews, we were left with a corpus of 8693 usable reviews: \textit{Frankenstein} (2947), \textit{The Hobbit} (2897), \textit{Of Mice and Men} (2956), and \textit{To Kill a Mockingbird} (2893). We discovered two types of phrases in the reviews: (i) Opinion phrases that reflected the readers’ opinions about the book, the author, or the various characters and events. Relationships extracted from these phrases are the dominant ones when aggregated over all readers’ posts, which is not surprising given that these posts are intended to be reviews. (ii) Plot phrases that describe what happened to a subset of the actants, and how they interacted with each other. These phrases contain both the actants and their relationships, and are of primary interest to us.
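A crude filter along the lines described above can be sketched as follows; the length threshold and the ASCII-fraction heuristic (a rough proxy for non-English or garbled text) are arbitrary stand-ins, not the exact criteria used in our pipeline.

```python
def usable(review, min_words=5, min_ascii_frac=0.9):
    # drop reviews that are too short to carry meaningful content
    words = review.split()
    if len(words) < min_words:
        return False
    # crude non-English / garbled-text proxy: fraction of ASCII characters
    ascii_frac = sum(ch.isascii() for ch in review) / max(len(review), 1)
    return ascii_frac >= min_ascii_frac

assert usable("Bilbo's journey is a delightful hero's tale from start to finish.")
assert not usable("Great book!")
```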
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|}
\hline
&\textbf{\# of posts} & \textbf{\# of sentences} \\\hline \hline
\textbf{Frankenstein}&2947&38432\\\hline
\textbf{The Hobbit}&2897&37529\\\hline
\textbf{Of Mice and Men}&2956&30205\\\hline
\textbf{To Kill a Mockingbird}&2893&33000\\
\hline
\end{tabular}
\caption{Data description and size.}
\label{tab:my_label}
\end{table}
\par
Although our initial study corpus consisted of sixteen novels, we selected these four novels for detailed analysis on the basis of the broad disparity in their narrative structures, large variability in the number of characters, and a broad range of character relationships. For example, \textit{The Hobbit} can be characterized as a multi-episodic, linear narrative that takes place across many different settings in an elaborate fantasy world, and includes a large cast of both human and non-human characters, instantiating an elaborate version of a standard hero's journey plot. \textit{Of Mice and Men}, by way of contrast, is a short novella with a limited cast of characters that takes place in a highly localized, realistic setting, and represents a straightforward version of Vonnegut’s “From bad to worse” plot. \textit{Frankenstein}, although told partly in flashback, has a largely linear plot and a limited cast of characters, with a strong central figure and a relatively clear villain, although this is complicated by its use of nested narratives. Finally, \textit{To Kill a Mockingbird} has an overlapping set of complex characters with multiple subplots.
\par
For our ground truth narrative framework graphs, we relied on the online SparkNotes resource for each of the four chosen novels. SparkNotes is a corpus of freely available, professionally generated summaries of works of fiction, and provides us with a list of actants, as well as a chapter level plot summary. These fine-grained summaries allowed us to manually create an actant-relationship narrative framework graph for each novel. These ground truth graphs were coded independently by two experts in literature, and a third expert was used to adjudicate any inter-annotator disagreements.
\par
Reviewers who post to Goodreads have a variety of motivations for posting. The majority of reviewers use the site as part of a social network focused on reading, with the gender balance of active reviewers skewing slightly toward women \cite{thelwall2017goodreads}. There appear to be several categories of active reviewers on the Goodreads site, including students reviewing books as part of school assignments, members of book clubs, and people who aspire to become professional book reviewers. We make no discrimination as to classes of reviewers, but rather consider each review equally, as our goal is to understand the aggregate narrative model of a reviewed book. At the same time, we recognize that reviews of a book are often conditioned by the pre-existing reviews of that same book, including reviews such as those found in SparkNotes, Cliff Notes, and other similar resources. In certain cases, we recognize that these reviews may be influenced by the filmed adaptations of the target novels or professionally written summaries.
\section{Methodology}
\begin{figure}
\centering
\includegraphics[scale=0.3]{pipeline_final.png}
\caption{Pipeline to extract actant-relationship graphs. Our contributions introduce the Entity Mention Grouping and the Inter-actant Relationship Clustering blocks.}
\label{fig:pipeline}
\end{figure}
Our methodology focuses on the underlying structure of the narrative framework that captures how a storytelling instance emerges via a collective negotiation process. Each post to a forum describes relationships among only a subset of actants (which are not yet known to our automated algorithms).
To write a sentence, a reviewer first picks a context $C_i \in C$ and then samples an underlying context-dependent network $G_{C_i}(V_{C_i},E_{C_i})$ (to be estimated by the algorithm) by drawing a pair of actants $(A_k, A_j)$ according to a conditional actant recall distribution across all the actants, $p_{C_i}(A_j)$. A context could represent a particular situation in the plot. For example, when someone wants to recount the scene in \textit{Frankenstein} where Dr. Frankenstein creates the monster, then certain actants and relationships are described much more often than others.
Following this, the reviewer draws a relationship for the pair $(A_k, A_j)$ from a distribution associated with the context-dependent edges: $D_{(E_{C_i}, (j,k))}({\mathcal {R}})$. The writer then composes the review according to these outcomes by choosing the proper words and syntax. In particular, the reviewer chooses noun phrases (as mentions of the actants $A_j$ and $A_k$) and the associated verb/relationship phrases (or other syntactical constructs) for the sampled relationship.
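This generative process can be made concrete with a small sketch. The contexts, actant recall probabilities, and relationship lists below are invented placeholders for illustration, not estimates from data:

```python
import random

# Toy generative model of a review sentence, following the text: pick a
# context C_i, draw an ordered actant pair (A_j, A_k) from the context's
# recall distribution, then draw a relationship for that pair.
contexts = {
    "creation_scene": {
        "actant_probs": {"Frankenstein": 0.7, "Monster": 0.3},
        "relationships": {
            ("Frankenstein", "Monster"): ["creates", "abandons"],
            ("Monster", "Frankenstein"): ["haunts", "confronts"],
        },
    },
}

def sample_sentence(rng):
    ctx = contexts[rng.choice(list(contexts))]
    actants = list(ctx["actant_probs"])
    weights = [ctx["actant_probs"][a] for a in actants]
    subj = rng.choices(actants, weights=weights)[0]       # A_j
    obj = rng.choice([a for a in actants if a != subj])   # A_k
    rel = rng.choice(ctx["relationships"][(subj, obj)])   # draw from D
    return (subj, rel, obj)

print(sample_sentence(random.Random(0)))
```

The estimation problem addressed in the rest of this section is the inverse: recovering the contexts, actants, and relationship distributions from the observed sentences.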
Recall that we have neither any knowledge of the underlying actants nor of the contexts that define different semantic relationships among them. After syntax-based relationship extractions from the reviews, we have multiple mentions/noun-phrases for the same actants, and multiple semantically equivalent relationship phrases to describe different contexts. In order to accurately estimate the different contexts $C_i$, actant frequency distributions $p_{C_i}(A_j)$, and the relationships $D_{(E_{C_i}, (j,k))}({\mathcal {R}})$, we must aggregate the different mentions of the same actant into a single group. In order to do that, we need to consider relationships: two mentions refer to the same actant only if the key relationships with other actants are semantically identical. Thus, the estimations of entity mention groups and relationships need to be done jointly.
The following subsections describe our approach to the estimation of the aggregate narrative network in the three steps of our pipeline presented in figure \ref{fig:pipeline}: (i) Syntax-Based Relationship Extraction, (ii) Entity Mention Grouping (EMG), and (iii) Inter-actant Relationship Clustering (IARC). The resulting graph constitutes an end-state ranked consensus model of all actants and relationships. The evaluation of our results focuses on the similarity of the ground truth and learned narrative graph based on a matching of actants and their contextual relationships. The frequency distributions of the actants, $p$, and relationships, $D$, can be estimated based on the counts of the occurrences of the associated groups of phrases. Currently, we use a threshold to decide whether an actant or a relationship is included in the consensus narrative graph. We leave a more detailed study of these frequency distributions and their relationship to reader consensus to ongoing and future work. These probabilities encode the relative importance of the different actants and relationships in ways not captured by the thresholded network. For example, in The Hobbit, the actant node ``Ring'' has only a single relationship edge (i.e., ``Bilbo'' finds the ``Ring'') yet, due to the centrality of the ``Ring'' to the story, it has a frequency rank in the top ten among all noun phrases.
\noindent \textbf{Syntax-Based Relationship Extraction}:
Each sentence in the text corpus is processed to extract specific patterns of syntax relationship tuples in the form of ($arg_1$, $rel$, $arg_2$) where $arg_1$ and $arg_2$ are noun phrases, and $rel$ is a verb or other type of phrase. Our relation extraction combines dependency tree parsing and Semantic Role Labeling (SRL) \cite{gildea2002automatic}\cite{manning2014stanford}. As opposed to limiting our extractions to agent-action-target triplets, we design a set of patterns (for example, Subject-Verb-Object (SVO) and Subject-Verb-Preposition (SVP)) to mine extractions from dependency trees using the NLTK package and various extensions.
The patterns are based on extensions of Open Language Learning for Information Extraction (OLLIE) \cite{schmitz2012open} and ClauseIE \cite{del2013clausie}. Next, we form extractions from the SENNA SRL model. We combine dependency-based extraction techniques with SRL to increase the recall of our system.
A list of all the syntax relationship patterns, their definitions, and related examples are provided in the GitHub link for our research.
Following these steps, we apply cleaning and de-duplication techniques to select unique and high precision extractions. Relationship tuples scraped from reviews only include those entity mentions that match or exceed a frequency lower bound ($\geq 50$).
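The pattern-matching step can be sketched as follows. This is only an illustration of the SVO and SVP patterns over an already-parsed sentence: a real pipeline would obtain the dependency tree from an external parser and combine the result with SRL extractions, and the dependency labels (`nsubj`, `dobj`, etc.) are assumptions of this sketch:

```python
# Illustrative SVO / SVP pattern matching over a pre-parsed dependency
# tree. Tokens are (index, text, head_index, deprel) tuples.
def extract_tuples(tokens):
    by_head = {}
    for tok in tokens:
        by_head.setdefault(tok[2], []).append(tok)
    tuples = []
    for verb in (t for t in tokens if t[3] == "ROOT"):
        deps = by_head.get(verb[0], [])
        subj = next((t for t in deps if t[3] == "nsubj"), None)
        obj = next((t for t in deps if t[3] == "dobj"), None)
        prep = next((t for t in deps if t[3] == "prep"), None)
        if subj and obj:                       # SVO pattern
            tuples.append((subj[1], verb[1], obj[1]))
        elif subj and prep:                    # SVP pattern
            pobj = next((t for t in by_head.get(prep[0], [])
                         if t[3] == "pobj"), None)
            if pobj:
                tuples.append((subj[1], verb[1] + " " + prep[1], pobj[1]))
    return tuples

# "Bilbo finds the ring", given as (index, text, head_index, deprel):
sent = [(0, "Bilbo", 1, "nsubj"), (1, "finds", -1, "ROOT"),
        (2, "the", 3, "det"), (3, "ring", 1, "dobj")]
print(extract_tuples(sent))  # [('Bilbo', 'finds', 'ring')]
```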
\noindent \textbf{Entity Mention Grouping (EMG)}: As a semantically identifiable character in a book is expressed in reviews as diverse entity mentions, it is necessary to group these mentions and label them with the same character label.
Let the frequently-occurring set of entity mentions be $M$ and let $R_{ik}$ be the relationships between entity mentions $m_i$ and $m_k$, where $m_i$ is the Subject and $m_k$ is the Object. The set $R_{ki}$ then denotes the relationships when the roles are reversed. First, we note that if there is a relationship triplet $(\mbox{Subject}=m_i, \mbox{Verb}, \mbox{Object}=m_j)$ then clearly $m_i$ and $m_j$ are mentions of different actants and are not to be grouped together. In order to avoid any noise-induced exclusion of such a pairing, we consider a pair $m_i, m_j$ as incompatible only if $|R_{ij}| + |R_{ji}| \geq \gamma$. Based on our observation of the low frequency of noisy relationships, the hyperparameter $\gamma$ is set to 3 in this paper. In the following we assume that for each mention $m_i$ we have removed all incompatible nodes $m_j$.
Intuitively, two compatible mentions $m_i$ and $m_j$ correspond to the same actant if, for every other mention $m_k$, the relationships between the pair $(m_i, m_k)$ are semantically the same as the relationships between the pair $(m_j, m_k)$. In practice, different mentions of the same actant will share only a subset of the relationships when aggregated over all the extractions. In the following we provide an algorithm to quantify this intuitive idea that yields robust EMGs.
Let $T_{ik} = H(R_{ik})$ describe the set of headwords in $R_{ik}$. Also let $G$ be the directed bipartite graph from the entity mentions $M$ to $M$ (see Fig.~\ref{fig:bi_graph}) with the edges representing the relationships between the entity mentions. We would like to find an Entity Mention Grouping (EMG) function $g:M \to [1,...,N]$, $N \leq |M|$, where (i) if $g(m_i)=g(m_j) =k$ then entity mentions $(m_i, m_j)$ are grouped together to form the $k^{th}$ actant. Moreover, (ii) we want the groups to be complete: that is, for two groups $g^{-1}(k_1)$ and $g^{-1}(k_2)$ (with $k_1\neq k_2$ and $k_1,k_2 \in [1,...,N]$), the entity mentions are semantically similar within each set and are semantically differentiated across the sets. To measure semantic similarity between $m_i$ and $m_j$, we consider the following measure involving another mention $m_k$:
\begin{equation}
\begin{split}
s_{(ij)k} &= \Pr (T_{ik} | T_{jk}) + \Pr (T_{jk} | T_{ik})\ ,\\
\Pr (T_{ik} | T_{jk}) &= \frac{|H(R_{ik}) \cap H(R_{jk})|}{|H(R_{jk})|}.
\end{split}
\end{equation}
To understand why $s_{(ij)k}$ is an effective similarity measure, consider the following cases:
(i) If $H(R_{ik})=H(R_{jk})$, implying that $m_i$ and $m_j$ share the exact relationships with $m_k$ and hence should be grouped together, then $s_{(ij)k}$ achieves the maximum value of 2; (ii) if the mention $m_j$ of an actant occurs less frequently than $m_i$, as reflected by $H(R_{jk}) \subset H(R_{ik})$, then $s_{(ij)k} \geq 1$. This captures the case where $m_j$ shares all its relationships with $m_i$ but not vice versa; (iii) $m_i$ and $m_j$ are indeed mentions of different actants, in which case $|H(R_{ik}) \cap H(R_{jk})|$ is expected to be much smaller than both $|H(R_{ik})|$ and $|H(R_{jk})|$, so that $s_{(ij)k} \ll 1$.
To ensure that we compute similarity when $m_k$ is the Subject, we define an analogous similarity score:
\begin{equation}
\begin{split}
s_{k(ij)} &= \Pr (T_{ki} | T_{kj}) + \Pr (T_{kj} | T_{ki})\ ,\\
\Pr (T_{ki} | T_{kj}) &= \frac{|H(R_{ki}) \cap H(R_{kj})|}{|H(R_{kj})|}.
\end{split}
\end{equation}
Finally, the score matrix $S$ is computed where the score $S_{ij}$ between $m_i$ and $m_j$ aggregates the measure on all feasible $m_k \in M - \{m_i,m_j\}$ and provides a metric for similarity across all entity mentions:
\begin{equation}
S_{ij} = \sum_{m_k \in M - \{m_i,m_j\}} s_{(ij)k} + s_{k(ij)}.
\label{equ:similarity-metric}
\end{equation}
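As a concrete sketch, the score matrix above can be computed directly from the headword sets. Here $H[(i, k)]$ holds the relationship headwords with mention $i$ as Subject and mention $k$ as Object; the toy \textit{Hobbit}-like data is invented for illustration:

```python
# Sketch of the similarity score S_ij, assuming the headword sets
# H(R_ik) have already been extracted from the relationship tuples.
def pair_score(H, i, j, k):
    """s_{(ij)k} + s_{k(ij)}: headword overlap with m_k on both sides."""
    s = 0.0
    for a, b in (((i, k), (j, k)), ((k, i), (k, j))):  # object side, subject side
        Ta, Tb = H.get(a, set()), H.get(b, set())
        if Ta and Tb:
            inter = len(Ta & Tb)
            s += inter / len(Tb) + inter / len(Ta)
    return s

def score_matrix(H, mentions):
    """S_ij aggregates the pair scores over all other mentions m_k."""
    return {(i, j): sum(pair_score(H, i, j, k)
                        for k in mentions if k not in (i, j))
            for i in mentions for j in mentions if i != j}

H = {("bilbo", "ring"): {"find", "take"},
     ("hobbit", "ring"): {"find"},
     ("gandalf", "ring"): {"suspect"}}
S = score_matrix(H, ["bilbo", "hobbit", "gandalf", "ring"])
print(round(S[("bilbo", "hobbit")], 2))  # 1.5
```

The score of 1.5 for (bilbo, hobbit) arises because "hobbit" shares its single headword "find" with "bilbo" (contributing 1) while "bilbo" shares only half of its headwords with "hobbit" (contributing 0.5).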
The grouping function $g$ is now constructed as follows: For every entity mention $m_i$, the scores in the vector $S_i$ are ranked in descending order. We next introduce two hyperparameters for each novel, $\alpha, \beta \geq 0$, such that an entity mention $m_i$ is grouped with $m_j$ only if the score $S_{ij}$ satisfies: $S_{ij} \geq \alpha$ and $\frac{S_{i(j-1)}}{S_{ij}} \geq \beta$ (for $j\geq 2$).
We compute $\alpha$ from novel-specific distribution statistics. In particular, we compute the histogram of all non-zero $S_{ij}$ and compute $\alpha$ as the $75$th percentile (i.e. $25\%$ of $S_{ij}$'s are $\geq \alpha$). For all considered books (except \textit{To Kill a Mockingbird} where $\alpha = 2.6$), $\alpha = 2.0$. The hyperparameter $\beta$ is set to $2$.
The parameters $\alpha$ and $\beta$ are similar to those in methods such as the Elbow K-Means method \cite{kmean}: $\beta$ plays the role of an inertia ratio when the scores $S_i$ are viewed as distortions, while $\alpha$ provides a fallback resolution when the elbow is unreliable (common in our model for rarer entity mentions).
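The pruning rule can be sketched in a few lines. The ratio test is implemented exactly as stated in the text (keep $m_j$ while $S_{i(j-1)}/S_{ij} \geq \beta$); the scores are taken from the Bilbo row of the worked example for \textit{The Hobbit}, with ``elf'' as an invented low-scoring distractor:

```python
# Sketch of the grouping rule: prune each mention's ranked candidate
# list with the percentile threshold alpha and the successive-score
# ratio test with beta.
def percentile_75(scores):
    """alpha: value with roughly 25% of the non-zero scores above it."""
    xs = sorted(s for s in scores if s > 0)
    return xs[int(0.75 * (len(xs) - 1))]

def group_candidates(S_row, alpha, beta=2.0):
    """S_row: {mention m_j: score S_ij}; returns mentions grouped with m_i."""
    ranked = sorted(S_row.items(), key=lambda kv: -kv[1])
    kept, prev = [], None
    for m, s in ranked:
        if s < alpha:
            break
        if prev is not None and prev / s < beta:  # ratio test fails: stop
            break
        kept.append(m)
        prev = s
    return kept

row = {"baggins": 42.14, "hobbit": 14.47, "burglar": 3.80, "elf": 0.4}
print(group_candidates(row, alpha=2.0))  # ['baggins', 'hobbit', 'burglar']
```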
The entity mention groups, once found, are labeled with the most frequent mention in the respective groups. Empirically, these automatically computed labels match the ground truth entities as derived from SparkNotes.
\noindent \textbf{Inter-actant Relationship Clustering (IARC)}: The aggregated entity mentions captured in $g$ are fed back into the standard relationship extraction task. Then, the relationships aggregated between any pair of actants, represented by their respective entity mention groups (e.g., $A_1 = g^{-1}(k_1)$ and $A_2 = g^{-1}(k_2)$), are computed as:
\begin{equation}
R_{A_1A_2} = \underset{p \in A_1, \, q \in A_2}{\cup} R_{pq}.
\end{equation}
$R_{A_1A_2}$ is a richer and potentially multi-modal set of relationships. This process enables a form of transfer learning, aiding relationship extractors in identifying connections at a higher semantic level of characters and not merely at the level of entity mentions. The associated relationship clusters are found using the cosine similarity measure in the BERT embedding space (Algorithm 1).
\begin{algorithm}[ht]
\SetAlgoLined
\KwResult{$C_{A_1A_2}$}
$\hat{R}_{A_1A_2},C_{A_1A_2}$ = \{\}\;
\For{$r \in R_{A_1A_2}$}{$
\texttt{append }\, \texttt{BERT}(r) \texttt{ to }\, \hat{R}_{A_1A_2}$
}
$C_{A_1A_2}$ = \texttt{Elbow K-Means Method on } $\hat{R}_{A_1A_2}$ \\
\caption{Inter-actant Relationship Clustering}
\end{algorithm}
$C_{A_1A_2}$ is the set of clusters of relationships that describe the multi-modality in $R_{A_1A_2}$. For each cluster $C$ we compute its dispersion (using the cosine similarity measure), $\beta_C$. We retain only those clusters with $\beta_C$ greater than a threshold (here, we set it to $0.8$) as a valid semantic relationship group.
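Algorithm 1 can be sketched end to end with stand-ins: toy 2-d vectors in place of BERT embeddings, a plain k-means, the elbow taken as the $k$ with the largest relative drop in distortion, and the cluster dispersion $\beta_C$ computed as the mean cosine similarity of cluster members to their centroid. All of these are simplifications for illustration:

```python
import numpy as np

def kmeans(X, k, iters=25, seed=0):
    """Plain Lloyd's k-means; returns labels, centers, and distortion."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    distortion = ((X - centers[labels]) ** 2).sum()
    return labels, centers, distortion

def cluster_relationships(phrases, vectors, max_k=3, min_dispersion=0.8):
    X = np.array([vectors[p] for p in phrases], dtype=float)
    runs = {k: kmeans(X, k) for k in range(1, max_k + 1)}
    # elbow: k with the largest relative drop in distortion
    best_k = max(range(2, max_k + 1),
                 key=lambda k: runs[k - 1][2] / max(runs[k][2], 1e-9))
    labels, centers, _ = runs[best_k]
    clusters = []
    for c in range(best_k):
        members = [p for p, l in zip(phrases, labels) if l == c]
        V = X[labels == c]
        cos = (V @ centers[c]) / (np.linalg.norm(V, axis=1)
                                  * np.linalg.norm(centers[c]))
        if cos.mean() >= min_dispersion:  # keep only tight clusters
            clusters.append(members)
    return clusters

# Toy relationship phrases from George to Lennie, with hand-made
# vectors standing in for BERT embeddings.
vecs = {"warns": [1.0, 0.1], "cautions": [0.9, 0.2], "tells": [0.95, 0.15],
        "shoots": [0.1, 1.0], "kills": [0.15, 0.9]}
print(cluster_relationships(list(vecs), vecs))
```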
\noindent \textbf{Evaluation: }
We compare these relationship clusters to the ground truth relationships between characters (e.g.: $J_{A_1A_2}$). We aim to find a mapping $h_{A_1A_2}:J_{A_1A_2} \rightarrow C_{A_1A_2}$. This process is described in Algorithm $2$, where $f_{cos}(a,b)$ is the function to compute the cosine similarity between $a,b$, and $\beta_C$ is the dispersion of a cluster $C$ using the cosine similarity measure. Thus, a ground truth relationship phrase is mapped to an automatically clustered semantic group only if its embedding is close enough to the centroid of the cluster.
\begin{algorithm}[ht]
\SetAlgoLined
\KwResult{$h_{A_1A_2}$}
\For{$C \in C_{A_1A_2}$}{
\If{$\beta_C \geq 0.8$}{
\If{$\underset{r \in C, \, j \in J_{A_1A_2}}{\max} f_{cos}(r,\texttt{BERT}(j)) \geq 0.8$}{
$h_{A_1A_2}(j) = C$
}
}
}
\caption{Evaluation: Mapping Relationship Clusters to Ground Truth}
\end{algorithm}
Similar to the EMG task, the clusters are well differentiated, resulting in high-fidelity labels. Furthermore, Algorithm 2 seeks to approximate a maximum likelihood estimation problem, where $\mathcal{L}$ represents the cosine similarity $f_{cos}$ implemented with thresholds:
\begin{equation}
h_{A_1A_2}(j)= \underset{C \in C_{A_1A_2}}{\textrm{argmax}} \,\, \mathcal{L}(C,j), \,\, \forall \, j \in J_{A_1A_2}.
\end{equation}
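A minimal sketch of this mapping step follows, with toy vectors standing in for the BERT embeddings. A ground-truth label is assigned to the cluster containing its most similar retained relationship, but only when that best cosine similarity clears the $0.8$ threshold:

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def map_ground_truth(gt_vecs, clusters, threshold=0.8):
    """gt_vecs: {label: vec}; clusters: {name: [member vecs]}."""
    mapping = {}
    for label, jv in gt_vecs.items():
        best_name, best_sim = None, -1.0
        for name, members in clusters.items():
            sim = max(cosine(r, jv) for r in members)
            if sim > best_sim:
                best_name, best_sim = name, sim
        if best_sim >= threshold:  # unmatched labels count against recall
            mapping[label] = best_name
    return mapping

clusters = {"warn": [[1.0, 0.1], [0.9, 0.2]], "kill": [[0.1, 1.0]]}
gt = {"warns": [0.95, 0.12], "feeds": [0.6, -0.8]}
print(map_ground_truth(gt, clusters))  # {'warns': 'warn'}
```

Ground-truth labels left unmapped (like "feeds" above) are the missed relationships counted in the recall figures reported later.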
\section{Limitations}
Data can be noisy, particularly when social media posts, which are informal by nature, are the primary source. This informality creates noise in the relationship extraction phase. A missing punctuation mark, for example, can significantly change the dependency tree structure and lead to erroneous extractions of both the arguments and the relationship phrases.
Other parts of our pipeline are equally sensitive to noise, including pronoun resolution and BERT embeddings. While pronoun resolution is needed to improve coverage (that is, to capture relationships amongst entity mention references when they are expressed in terms of pronouns), the process adds additional noise by occasionally resolving pronouns to the wrong entity mentions. Error from pronoun resolution is more noticeable in relation to rare words. For example, in the sentence, ``The example their single father Atticus sets for \textit{them} is one all parents wish we could parallel,'' \textit{them} is mapped to the single character \textit{Dill}. \textit{Dill} is among the characters mentioned least frequently in reviews of \textit{To Kill a Mockingbird}. In such a scenario, the extracted relationships have a low fidelity because of the sparse sample space. In addition, while the BERT embeddings that we use for this paper provide useful vectors in cosine-measured k-means clustering, the approach also suffers from sensitivity to noise.
Using SparkNotes as a ground truth also raises some issues, as the summaries in these reader guides are less detailed than the novels that they summarize. Consequently, comparing our extractions to the limited relationships described in SparkNotes means that some of our discovered relationships, which may be in the novel but not in the SparkNotes summary, are improperly evaluated (i.e. the relationship exists in both the target novel and our extractions but is missing in SparkNotes). For example, while our extractions reveal that George cares for or loves Lennie in \textit{Of Mice and Men}, this relationship is missing from the SparkNotes summary. Similarly, certain actants or relationships that exist in the ground truth summaries may simply be absent from the reader review corpus, as is the case for certain Frankenstein actants such as M. Krempe. Our methods are not able to discover actants or relationships that do not appear in reader reviews--this elision of characters and relationships, however, may be indicative of interesting aspects of reader review practice.
\section{Results}
We first examine the syntactic method of establishing actant-actant relationships for clustering. In Table \ref{tab:appos}, the Appos and SVCop relationships capture not only limited sentence-level associations, but also semantically invariant associations mentioned explicitly in the reviews. While this syntactic approach may work in many situations, book reviewers often \textit{assume} a basic shared knowledge of the plot of a novel. This assumption dissuades reviewers from explicitly writing out the relationships between actants. In addition, book reviews are not very descriptive in general, focusing more on specific plot points or a character's trajectory. This tendency in book reviews further weakens direct Appos and SVCop actant-relationship extraction.
\begin{figure*}
\centering
\includegraphics[width=\textwidth, scale=0.2]{bipart_redo.png}
\caption{The pipeline of the EMG task shows the formation of the bipartite graph G with the computation of the Score Matrix $S$, along with hyperparameters $\alpha, \beta, \gamma$}
\label{fig:bi_graph}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth, scale=0.5]{dir_rels_final.png}
\caption{Directed and clustered relationships emergent after IARC between 2 actants per novel. In clockwise direction from top left: from Scout to School in \textit{To Kill a Mockingbird}, from Bilbo to Dwarves in \textit{The Hobbit}, from Frankenstein to Monster in \textit{Frankenstein} and from George to Lennie in \textit{Of Mice and Men}.}
\label{fig:dir_rels}
\end{figure*}
\begin{table}[]
\centering
\begin{tabular}{|p{1.8cm}|p{6cm}|}
\hline
\textbf{Entity} & \textbf{Descriptors} \\
\hline \hline
& \textbf{The Hobbit}\\
\hline
Bilbo& (a, the, simple, clean) hobbit, a burglar, baggins, hero, protagonist\\
\hline
Smaug & (a, the, horrible, vicious) dragon\\
\hline
Gandalf & (a, the, wise) wizard \\
\hline
& \textbf{Frankenstein}\\
\hline
Frankenstein & (a, the, fantasy) book, (the, a) creator, (a, the) doctor \\
\hline
Monster & (his, a, the) creation\\
\hline
& \textbf{Of Mice and Men}\\
\hline
George & a small (man,-, in height), Lennie's (caretaker, best friend, father figure, protector) \\
\hline
Lennie & (the, pitiful, unique, favorite) character, George's ( foil, best friend)\\
\hline
& \textbf{To Kill a Mockingbird}\\
\hline
Jem & (big, the older, strong) brother\\
\hline
Atticus & (the, loving, ordinary, her) father\\
\hline
Scout & (a, hotheaded, young, an interesting) Tomboy\\
\hline
\end{tabular}
\caption{Examples for Appos and SVcop candidate descriptors for entity mentions across the four novels.}
\label{tab:appos}
\end{table}
\begin{table}[]
\begin{minipage}[b]{1.0\linewidth}
\centering
\begin{tabular}{ | c | c | }
\hline
\textbf{Entity Mention} & \textbf{Ranked Similarity Scores} \\
($m_i$) & \textbf{for other Mentions ($m_j$)} \\
& \textbf{($S_{ij}$'s, see Eq.~\ref{equ:similarity-metric})} \\ \hline \hline
Bilbo & baggins,42.14\\
& hobbit,14.47 \\
& burglar,3.80 \\\hline
Burglar & bilbo,3.80\\
& dwarves,2.79\\ \hline
Wizard &gandalf,22.49\\
& gandolf,7.00\\
&grey,5.34\\
&thorin,3.32\\\hline
Hobbit&bilbo,14.47\\
&baggins,6.06\\\hline
\end{tabular}
\caption{Given two entity mentions $(m_i,m_j)$, the similarity score $S_{ij}$ (see Eq.~\ref{equ:similarity-metric}) measures the semantic ``fungibility'' of the mentions (i.e., whether both mentions are used interchangeably to refer to the same actant). The table shows several popular entity mentions ($m_i$'s) and the similarity scores of other candidate mentions, $m_j$'s, in \textit{The Hobbit}. Clearly, the mentions [Bilbo, baggins, Hobbit, Burglar] form a clique representing the same actant, \textit{Bilbo Baggins}. One can also see the emergence of another EMG [Wizard, Gandalf, Gandolf, Grey] for the actant \textit{The wizard}. }
\label{table:ranked candidates}
\end{minipage}
\begin{minipage}[b]{1.0\linewidth}
\centering
\includegraphics[scale=0.4]{boxplot2.png}
\captionof{figure}{A Box plot of the similarity scores, $S_{ij}$'s (see Eq.~\ref{equ:similarity-metric}), for all entity mention pairs $(m_i,m_j)$ in \textit{The Hobbit}. For any entity mention, $m_i$, its Entity Mention group (EMG) is first pruned to contain $m_j$'s with scores, $S_{ij}\geq \alpha$, where $\alpha$ is the $75^{th}$ percentile of the score distribution. From the plot we find $\alpha=2$. This EMG is further pruned by first sorting the list by their scores, and then ensuring that the ratio of any two successive scores is bounded below, i.e., $\frac{S_{i(j-1)}}{S_{ij}} \geq \beta$ (for $j\geq 2$). We found that $\beta =2$ provided a good cutoff. }
\label{fig:hist scores}
\end{minipage}
\end{table}
We applied our EMG algorithm to obtain the actants as documented in Table \ref{tab:entity groups}. Table~\ref{table:ranked candidates} and Fig.~\ref{fig:hist scores} provide example statistics obtained during the execution of the EMG algorithm. Each actant, once formed, aggregates relationships that the individual entity mentions imply.
The clustering of relationships aggregated under the now-formed entity mention groups yield higher granularity and confidence in the IARC task, as semantic connections between entity mentions reinforce the relationships from one actant to another. This effect is observed across the four books as shown in Fig. \ref{fig:dir_rels}. The relative size of words in the figure correlate to their frequency in the aggregated relationships between the entity mention groups.
The task of mapping relationship clusters to particular ground truth labels is shown for the ``converse'' and ``warn'' clusters from George to Lennie in \textit{Of Mice and Men} (Figure \ref{fig:cosine}). The rich clusters, in comparison to the ground truth labels from SparkNotes, suggest recall as a good measure of performance for our pipeline. A summary of our results for all four books including recall is presented in Table \ref{tab:table_2}.
In general, the relationships between actants reveal a high degree of consistency with the ground truth graph. The largest divergences consist of missed relationships rather than the identification of non-existent relationships, although these occur occasionally. This latter group of relationships is often the attribution of a relationship, such as the killing of Smaug (the dragon in \textit{The Hobbit}), to an important character such as Bilbo Baggins. In other words, many readers \textit{incorrectly believe} that Bilbo killed Smaug. Another small set of spurious relationships, including one that suggests that Jem killed Bob Ewell in \textit{To Kill a Mockingbird}, are caused by reader confusion, ``what-if'' scenarios or, more commonly, incorrect pronoun resolution and aggregation. Apart from the relatively infrequent misattribution of relationships, the reduction in relationships aligns with the corresponding reduction in the number of actants connected to the central component of the story graph.
\begin{figure}
\centering
\includegraphics[scale=0.3]{picture_ohman.png}
\caption{Evaluation phase: matching 2 clusters of relationships in \textit{Of Mice and Men}, from George to Lennie, to ground truth labels, in accordance with Algorithm 2. $\beta_C$ determines the set of edges.}
\label{fig:cosine}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.15]{story_hobit_all1.png}
\caption{Narrative Framework graph of \textit{The Hobbit}. Green nodes are extracted entities not part of the ground truth, red edges are ground truth edges which were not detected by the algorithm, blue edges are detected ground truth edges.}
\label{fig:story_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.15]{story_hobit_all1_2.png}
\caption{Narrative Framework graph of \textit{The Hobbit} \textit{after} thresholding on relationship frequency. Blue edges have at least 5 relationship instances.}
\label{fig:story_2}
\end{figure}
Figure \ref{fig:story_1} depicts the narrative framework graph for \textit{The Hobbit} with blue nodes representing ground truth actants or meta-actants. We also show four examples of resolved actants or meta-actants (colored green) not found in the ground truth: \textbf{Tolkien}:[tolkein, author], \textbf{novel}:[book, fantasy, story, novel], \textbf{Fili}:[fili] and \textbf{Film}:[film, movie, scene].
Blue edges represent relationships in the ground truth found by using our methods (frequency threshold $\geq 5$), while red edges represent undetected ground truth relationships. Green edges connecting to green nodes (frequency threshold $\geq 10$) are edges that cannot be verified; we include them to indicate the richness of the extracted graph as opposed to the ground truth. Figure \ref{fig:story_2} shows a graph similar to Figure \ref{fig:story_1} after the deletion of low frequency edges ($< 5$ instances), and represents the core structure of the narrative covered in the reviews conditioned on the SparkNotes ground truth.
There are shared structural properties (disregarding the specific relationships they encode) that can be used to automatically distinguish between actual characters in the novels and the various meta-actants. For example, the meta-actant \textbf{Tolkien} (the green node at the top center of Figure \ref{fig:story_1}) has only outgoing edges, indicating that Tolkien appears only as the subject in any inferred relationship triplet. This lack of incoming edges is a significant feature of meta-actants: An important character in a novel usually has bi-directional relationships with other characters. An author of the novel, on the other hand, usually ``acts'' on the characters; hence the corresponding node is directionally isolated. The incoming edges for the meta-actant ``Book'' are all attributable to phrases such as ``character XYZ is portrayed \textit{in the} book/novel''. A simple filtering of these preposition-induced relationships directionally isolates the meta-actant ``Book.'' Further structural explorations of the derived networks, such as measures of centrality and importance of different characters, are part of our ongoing work.
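This directional-isolation test is easy to operationalize. The sketch below flags nodes that have outgoing but no incoming edges in a toy edge list (the triplets shown are illustrative, not extracted data, and the preposition-induced edges are assumed to have been filtered already):

```python
# Flag candidate meta-actants: nodes that act on others but are never
# acted upon, i.e. nonzero out-degree and zero in-degree.
def directionally_isolated(edges):
    """edges: iterable of (src, rel, dst) relationship triplets."""
    out_deg, in_deg, nodes = {}, {}, set()
    for s, _, d in edges:
        nodes.update((s, d))
        out_deg[s] = out_deg.get(s, 0) + 1
        in_deg[d] = in_deg.get(d, 0) + 1
    return sorted(n for n in nodes
                  if out_deg.get(n, 0) > 0 and in_deg.get(n, 0) == 0)

edges = [("Tolkien", "writes", "Bilbo"), ("Tolkien", "describes", "Smaug"),
         ("Bilbo", "meets", "Gandalf"), ("Gandalf", "guides", "Bilbo")]
print(directionally_isolated(edges))  # ['Tolkien']
```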
\begin{center}
\begin{table*}[t]
\large
\begin{tabular}{|p{3cm}|p{14cm}|}
\hline
\textbf{Book Name} & \textbf{Entity Mention Groups} \\
\hline \hline
{\textbf{Of Mice and Men}} & \textbf{Lennie}:[Lennie, lenny],
\textbf{George}:[george, milton],
\textbf{Curley's Wife}:[curley's wife, tart, wife],
\textbf{Aunt Clara}:[aunt clara, aunt, clara],
\textbf{men}:[workers, men],
\textbf{ranch}:[ranch, farm],
\textbf{soft things}:[soft things, soft, things],
\textbf{mental disability}:[mental disability, mental, disability]\\
\hline
{\textbf{The Hobbit}} & \textbf{Bilbo}:[bilbo, baggins, burglar, hobbit],
\textbf{Rivendell}:[rivendell, middleearth],
\textbf{Gandalf}:[gandalf, wizard, gandolf, grey],
\textbf{dwarf}: [dwarf, dwarves],
\textbf{Thorin}: [thorin, company],
\textbf{trolls}:[trolls, orcs],
\textbf{elf}:[elf, elves],
\textbf{Hobbitown}:[hobbitown, shire, hobbiton],
\textbf{man}: [human, man, lakemen],
\textbf{dragon}:[dragon, smaug]\\
\hline
{\textbf{Frankenstein}} &
\textbf{monster}:[monster, creature, adam],
\textbf{Frankenstein}:[frankenstein, victor, doctor, creator],
\textbf{Mary Shelley}: [mary, shelley, author, mary shelley],
\textbf{Elizabeth}:[elizabeth, wife],
\textbf{Walton}:[walton, robert],
\textbf{Henry}:[henry, clerval],
\textbf{Justine}:[justine, moritz],
\textbf{Caroline}:[caroline, beaufort]
\\
\hline
{\textbf{To Kill a Mockingbird}} &
\textbf{Scout}:[scout, sister],
\textbf{Atticus}:[atticus, dad, father, finch],
\textbf{Jem}:[jem, brother],
\textbf{Harper Lee}: [lee, harper lee, author, harper],
\textbf{Tom}: [tom, robinson, negro, mockingbird, africanamerican],
\textbf{Bob}:[bob, ewell],
\textbf{Boo}: [boo, arthur, arthur radley, boo radley],
\textbf{Mayella}: [mayella, daughter],
\textbf{aunt}: [aunt, alexandra],
\textbf{Maycomb}: [maycomb, alabama, town],
\textbf{Heck}:[heck, tate],
\textbf{Cunningham}:[cunningham, walter]\\
\hline
\end{tabular}
\caption{Final actants after EMG per book. Each actant group is labeled with the most frequent mention in the group. Empirically, these automatically computed labels match the ground truth entities as derived from SparkNotes. }
\label{tab:entity groups}
\end{table*}
\end{center}
\begin{table*}[]
\resizebox{\textwidth}{!}{
\begin{tabular}{|l|l|l|l|l|}
\hline
& \textbf{Of Mice and} & \textbf{The Hobbit} & \textbf{Frankenstein} & \textbf{To Kill a} \\
& \textbf{Men} & & & \textbf{Mockingbird} \\
\hline \hline
\textbf{Recall (\%)} & \textbf{88.33} (83.33) &\textbf{82.61} (59.42) & \textbf{69.04} (66.66) & \textbf{90.16} (68.85) \\ \hline
\textbf{Edge detection rate (\%)} & \textbf{98.33} (96.66) & \textbf{92.75} (69.56) & \textbf{73.80} (73.80) & \textbf{93.44} (77.04) \\ \hline
\textbf{Average Number of Relationships} & \textbf{246.55} (209.15) & \textbf{139.34} (14.03) & \textbf{20.33} (13.38) & \textbf{72.09} (27.34) \\ \hline
\textbf{Median Number of Relationships} & \textbf{54} (48) & \textbf{43} (3) & \textbf{7} (7) & \textbf{36} (6) \\ \hline
\end{tabular}}
\caption{Performance on character relationship extraction with IARC after (\underline{in bold}) and before (\underline{within parentheses}) EMG. In the ``before'' scenario, an actant group consisted of only the mention used in the ground truth. Thus for actant ``Bilbo'' only the mention ``Bilbo'' was used to compute its relationships. Post EMG, the mentions in the group \textbf{Bilbo}:[bilbo, baggins, burglar, hobbit] were aggregated to compute the actant Bilbo's relationships.}
\label{tab:table_2}
\end{table*}
\section{Discussion}
The results support the idea that readers, when summarizing a novel, tend to reduce the scope of the story and to focus on the most memorable aspects of the plot, here modeled as inter-actant relationships. In the reviews we studied, people converge on a set of main actants and relationships that map well to a core set of actants and relationships in the ground truth summaries, suggesting that people are relatively adept at summarizing even complex novels. As part of their summaries, however, people tend to simplify. This simplification may be related to cognitive limits on the number of real-world relationships that a person can keep in mind.
Since reviews tend to be short, when compared to the length of the work summarized, it is not surprising that people reduce both the number of actants, particularly in works with very large casts of characters such as \textit{The Hobbit}, and the relationships between those actants. The inter-actant relationships are also simplified in the reader reviews. Readers can simplify complex plots, such as that in \textit{To Kill a Mockingbird}, into relatively straightforward stories of conflict, strategies to address that conflict, and the result of the use of those strategies. The reduction of plot complexity may also be influenced by the abstraction of the novel in other media. For certain books, such as \textit{The Hobbit}, recent films have been highly successful, and it is quite possible that movie watching has had some impact on reader reviews. The same may apply to the other books in this study given, for example, the numerous references to the actor Gregory Peck in the reviews of \textit{To Kill a Mockingbird}. Although we have not done so here, it may be interesting to compare reader reviews of novels adapted to film with the summary story graphs for those films.
\section{Conclusion}
The approach we describe here is widely applicable to other crowd-sourced review sites such as Rotten Tomatoes and Metacritic (for films) and LibraryThing and Love Reading (for literature) that, much like Goodreads, allow viewers or readers to present their own reviews of fiction, be it literature or film. An intriguing aspect of many of these sites is the propensity of reviewers to provide ``plot summaries'' as opposed to critical engagements with more sophisticated thematic analysis. While this plot-based approach to reviewing works of fiction may drive literary scholars to the brink of insanity, it does allow us to consider questions regarding the popular engagement with literature and other forms of artistic production. In future work, we expect to include actant-relationship sequencing so that we derive automatically a reader consensus model of plot, represented as a dynamic narrative framework graph. Given the responses that people post, we can use the scale of these sites to derive insight into how people (or groups of people) not only read but also remember. Turning the process around, it may be possible to develop a dynamically updated crowd-sourced summary of a novel or film--as more people write reviews, the consensus summary would update, capturing the emphasis on actants, relationships, and events that commentators add. Such a system could act as a cultural response barometer since what people remember, and what they forget (or choose to leave out), can be telling indicators of popular engagement with art.
\nocite{tangherlini2016mommy}
\section*{Abbreviations}
\begin{tabular}{@{}ll}
LS&Latin square\\
LSC&Latin Super Cube\\
$d$-LSC&$d$-dimensional Latin Super Cube\\
RBC&remote brick couple
\end{tabular}
\section{Introduction}
The \emph{Hamming distance} between two \mbox{$d$-tuples} is the number of positions at which the corresponding coordinates differ. Formally if $a=(a_1,\ldots,a_d)$ and $b=(b_1,\ldots,b_d)$ then $d(a,b)=\left|\{i\colon a_i\neq b_i\}\right|$.
A \emph{Latin square of order $n$} is an $n\times n$ array consisting of $n$ different symbols such that each symbol appears exactly once in each row and column. The property that no row or column contains any symbol more than once is known as the \emph{Latin property}.
From now on, the set of symbols is always $\mathbb{Z}_n=\{1,2,\ldots,n\}$, so a Latin square can also be regarded as a set of ordered triples of the form $(i,j,k)$, where $i$, $j$, $k\in\mathbb{Z}_n$, and the Hamming distance between any two distinct triples is at least~2.
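This triple view is easy to check directly. The following Python sketch (an illustration only, using 0-based coordinates and the standard cyclic square with symbol $(i+j) \bmod n$) verifies that distinct triples of a Latin square are at Hamming distance at least 2:

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance: number of positions where the tuples differ."""
    return sum(x != y for x, y in zip(a, b))

n = 5
# Cyclic Latin square of order n: cell (i, j) holds the symbol (i + j) mod n.
triples = [(i, j, (i + j) % n) for i in range(n) for j in range(n)]

# The Latin property is equivalent to: any two distinct triples
# differ in at least two of their three coordinates.
assert all(hamming(s, t) >= 2 for s, t in combinations(triples, 2))
```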
Two Latin squares are \emph{isotopic} if each can be obtained from the other by permuting the rows, columns, and symbols. This is an equivalence relation, whose equivalence classes are called \emph{isotopy classes}.
Each Latin square $Q$ has six \emph{conjugate} Latin squares obtained by uniformly permuting the coordinates in each of its triples. They are denoted by $Q(i,j,k)$, $Q(j,k,i)$, $Q(k,i,j)$, $Q(j,i,k)$, $Q(i,k,j)$, $Q(k,j,i)$, where $Q(i,j,k)=Q$.
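As a quick illustration (our own sketch, not part of the text), the six conjugates can be generated by uniformly permuting coordinate positions, and each conjugate is again the triple set of a Latin square:

```python
from itertools import permutations

n = 4
# Triples of a cyclic Latin square of order n.
Q = {(i, j, (i + j) % n) for i in range(n) for j in range(n)}

def conjugate(triples, perm):
    """Uniformly permute the coordinates of every triple."""
    return {tuple(t[p] for p in perm) for t in triples}

conjugates = [conjugate(Q, p) for p in permutations(range(3))]
assert len(conjugates) == 6
for C in conjugates:
    # Each (row, column) pair occurs exactly once, so C is again
    # the triple set of a Latin square.
    assert len({(i, j) for (i, j, k) in C}) == n * n
```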
Two Latin squares are called \emph{paratopic} if one of them is isotopic to a conjugate of the other. This is an equivalence relation, whose equivalence classes are called \emph{main classes}.
\section{d-LSC}
A $3$-dimensional chess-board of size $n\times n\times n$ is denoted by $H_n^3$ or simply $H^3$.
Each cell is identified by a triple $(i,j,k)$, where $i$, $j$, $k\in\mathbb{Z}_n$ and the distance between two cells is the Hamming distance.
The chess-board $H_n^3$ can be regarded as a Rubik's cube with $n^3$ cells (cubelets) and can be placed in a coordinate system with one vertex at the origin, as depicted in Figure~\ref{fig1_1}.
\begin{figure}[htb]
\centering\includegraphics[scale=0.3] {DoR_Figures/Fig1_1}
\caption{}\label{fig1_1}
\end{figure}
There is an immediate generalization of this structure to dimension $d$, where $d > 3$. The chess-board $H_n^d$ contains $n^d$ cells and each cell is identified by a \mbox{$d$-tuple} $(e_1,e_2,\ldots,e_d)$ where $e_i\in\mathbb{Z}_n$ for each $i \in\mathbb{Z}_d$. In the cases $d=2$ and $d=3$ we denote the coordinate axes by $x$, $y$ and $x$, $y$, $z$, respectively.
An illustration of the latter case is shown in Figure~\ref{fig1_1}.
If $d>3$, then the axes of the chess-board $H_n^d$ are denoted by $t_1$, $t_2$, \ldots, $t_d$.
If a cell is identified by a \mbox{$d$-tuple} $(e_1,e_2,\ldots,e_d)$, then we consider $e_i$ as a coordinate of the cell on the axis $t_i$ for each $i\in \mathbb{Z}_d$.
\begin{defi}
Let $H^d_n$ be a $d$-dimensional chess-board and $j$ be a fixed coordinate on the axis $t_i$, where $j\in \mathbb{Z}_n$ and $i\in \mathbb{Z}_d$. The set of cells of the chess-board whose coordinate on the axis $t_i$ is $j$ is called the \emph{$(d-1)$-dimensional subspace} of $H^d_n$ and is denoted by $H_i(j)$.
\end{defi}
\begin{cor}
For any axis $t_i$, the subspaces $H_i(j)$ form a partition of $H^d_n$ if $j$ goes from $1$ to $n$.
\end{cor}
\begin{defi}
A subspace of the chess-board of dimension 1 is called a \emph{file}, of dimension 2 is called a \emph{layer}.
\end{defi}
\begin{defi}
If the chess-board $H_n^d$ contains exactly $n^{d-1}$ non-attacking rooks, then the structure is called a \emph{Latin Super Cube of dimension $d$}, or a \emph{$d$-LSC} for short.
\end{defi}
A brief description of the concept can be found in~\cite{[1]}. Instead of the term ``Latin property'' we often say that the rooks are non-attacking, or that they \emph{do not see each other}. Each subspace of the chess-board of dimension $k$, where $k\in \mathbb{Z}_d$, is itself a $k$-LSC with $n^k$ cells and $n^{k-1}$ non-attacking rooks. Consequently, a file contains 1 rook and a layer contains $n$ non-attacking rooks.
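For concreteness, here is a small Python check (an illustrative sketch with 0-based coordinates; the cyclic rule $i+j+k\equiv 0 \pmod n$ is one standard way to build a 3-LSC). It verifies the rook counts for files and layers:

```python
from itertools import product

n = 5
# A cyclic 3-LSC: place a rook in (i, j, k) iff i + j + k = 0 (mod n).
rooks = {c for c in product(range(n), repeat=3) if sum(c) % n == 0}
assert len(rooks) == n ** 2              # n^(d-1) rooks for d = 3

# Every file (two coordinates fixed) contains exactly one rook.
for i in range(n):
    for j in range(n):
        assert sum((i, j, k) in rooks for k in range(n)) == 1

# Every layer (one coordinate fixed) contains exactly n rooks.
for axis in range(3):
    for v in range(n):
        assert sum(c[axis] == v for c in rooks) == n
```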
\section{One Cube Represents One Main Class}
The first element of the pair identifying the cell $(i,j)$ of a Latin square is the row coordinate on the $x$-axis, the second is the column coordinate on the $y$-axis, and the plane containing both axes is denoted by $[xy]$. The cube, in which rooks are placed according to symbols, lies on this plane with one vertex, denoted by $A$, at the origin of $[xy]$, while the two neighboring vertices, denoted by $B$ and $D$, are on the $x$-axis and $y$-axis, respectively.
The third neighboring vertex of $A$, denoted by $F$, is on the $z$-axis.
\begin{defi}
Building a $3$-LSC from a Latin square by placing a rook into the cubelet $(i,j,k)$ whenever the cell $(i,j)$ of the Latin square contains the symbol $k$ is called \emph{composition}.
\end{defi}
This implies that a 3-LSC derived by composition from a Latin square is always created in a right-handed coordinate system and the symbols no longer play a special role in the LSC (a symbol value is just one of the three coordinates).
\begin{defi}
Place a 3-LSC in a right-handed coordinate system with one vertex at the origin. Creating a Latin square by placing the symbol $k$ into the cell $(i,j)$ of the square whenever the cubelet $(i,j,k)$ contains a rook is called \emph{projection}.
\end{defi}
Since the rooks are non-attacking, the result of the projection of a \mbox{3-LSC} to any face of the cube is clearly a Latin square.
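A minimal sketch of projection (our illustration, 0-based coordinates): projecting the rooks of a 3-LSC onto the bottom face yields an array in which every row and every column contains each symbol exactly once.

```python
from itertools import product

n = 5
# A cyclic 3-LSC: rook in (i, j, k) iff i + j + k = 0 (mod n).
rooks = {c for c in product(range(n), repeat=3) if sum(c) % n == 0}

# Projection to the bottom face: the rook (i, j, k) puts symbol k in cell (i, j).
Q = {(i, j): k for (i, j, k) in rooks}

# Every row and every column of Q contains each symbol exactly once.
for i in range(n):
    assert sorted(Q[(i, j)] for j in range(n)) == list(range(n))
for j in range(n):
    assert sorted(Q[(i, j)] for i in range(n)) == list(range(n))
```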
\begin{defi}
Two 3-LSCs are \emph{isotopic} if each can be obtained from the other by permuting the row layers, column layers and symbol layers.
\end{defi}
\begin{rmrk}\label{rmrk240}
Let $L$ be a $3$-LSC derived from a Latin square $Q$ by composition. There is a one-to-one correspondence between the permutations of the rows, columns and symbols of $Q$ and the permutations of the row, column and symbol layers of $L$. Therefore, there is a one-to-one correspondence between the isotopes of $Q$ and the isotopes of $L$: if $\mathcal{P}$ is a series of permutations that transforms $Q$ to $Q^*$, then applying $\mathcal{P}$ to $L$, the resulting LSC $L^*$ can be derived from $Q^*$ by composition.
\end{rmrk}
\begin{defi}
Since the permutations of the layers and the rotations of the cube leave the non-attacking property of the rooks unchanged, they are called \emph{Latin transformations}.
\end{defi}
If we consider a $3$-LSC $L$ derived from a Latin square $Q$ by composition, then $L$ does not ``remember'' which face of the cube was on the plane when the composition was applied, so after applying Latin transformations, any face of the cube can be the face to which a projection is applied. To apply a projection to a specific face of the cube, without loss of generality, we always rotate the cube so that the given face is on the plane $[xy]$ and a pre-selected vertex is at the origin.
\begin{defi}
A face of the cube can be identified by 3 adjacent vertices, so the face that contains vertices $A$, $B$ and $D$ is denoted by $[BAD]$ where the middle vertex $A$ is at the origin, the first vertex $B$ is on the $x$-axis and the third vertex $D$ is on the $y$-axis, as shown in the Figure~\ref{fig1_1}. The face $[BAD]$ is called \emph{bottom face}, and the parallel face is called \emph{cover face} and can be denoted $[EFG]$ or $[GHE]$.
\end{defi}
\begin{rmrk}\label{rmrk270}
Note that in a right-handed coordinate system with the face $[EFG]$ on the plane $[xy]$, the edge $EF$ is on the $x$-axis and the edge $FG$ is on the $y$-axis; similarly, with the face $[GHE]$, the edge $GH$ is on the $x$-axis and the edge $HE$ is on the $y$-axis. The cube is located under the given faces, so the coordinates on the $z$-axis increase downwards.
\end{rmrk}
\begin{defi}
Let $Q[BAD]$ denote the Latin square derived from the $3$-LSC $L$ by projection to the bottom face $[BAD]$.
\end{defi}
\begin{theo}
If $L$ is a 3-LSC with $n^2$ non-attacking rooks and $Q$ is the Latin square derived from $L$ by projection to an arbitrary face of the cube, then the LSCs obtained from $L$ by Latin transformations are exactly those LSCs that can be derived from a paratope of $Q$ by composition.
\end{theo}
\begin{proof}
Let $L$ be a 3-LSC in a right-handed coordinate system with vertex $A$ at the origin, $B$ on the $x$-axis, $D$ on the $y$-axis and $F$ on the $z$-axis as indicated in Figure~\ref{fig1_1}. Let $Q$ be the Latin square on the face $BAD$ derived from $L$ by projection, hence $Q[BAD]=Q=Q(i,j,k)$, according to our notations.
Deriving $Q[DAF]$ from $L$ by projection onto the face $[DAF]$ in the right-handed coordinate system $(y,z,x)$, the rook in the cell originally identified by the triple $(i,j,k)$ yields the symbol $i$ in the cell $(j,k)$, so $Q[DAF] = Q(j,k,i)$.
In the same way, in the right-handed coordinate system $(z,x,y)$, the rook in the cell $(i,j,k)$ projected onto the face $[FAB]$ yields the symbol $j$ in the cell $(k,i)$, thus $Q[FAB] = Q(k,i,j)$. So, if we take the Latin squares derived from $L$ by projection onto the 3 faces that meet at the origin of the cube, we get the 3 conjugate Latin squares $Q=Q(i,j,k)$, $Q(j,k,i)$ and $Q(k,i,j)$.
\begin{defi}
The Latin squares $Q(i,j,k)$, $Q(j,k,i)$ and $Q(k,i,j)$, where $Q(i,j,k) = Q$ are called \emph{primary conjugates of $Q$}.
\end{defi}
Permute the layers on each axis of $L$ in reverse order. The resulting LSC and $L$ are isotopic, and the cell $(i,j,k)$ is moved to the cell $(n+1-i,n+1-j,n+1-k)$. Place a right-handed coordinate system with the origin at the vertex $H$ such that the $z$-axis is aligned with the edge $HC$ of the cube. Then the edge $HG$ is on the $x$-axis and the edge $HE$ is on the $y$-axis. So, the new $z$-axis is parallel to the old one; however, the new $x$-axis is parallel to the old $y$-axis and the new $y$-axis is parallel to the old $x$-axis, according to Remark~\ref{rmrk270}. Consequently, the new coordinates of the cell $(n+1-i,n+1-j,n+1-k)$ are $(j,i,k)$, so the projection to the face $[GHE]$ gives the Latin square $Q(j,i,k)$, the projection to the face $[EHC]$ gives the Latin square $Q(i,k,j)$, and the projection to the face $[CHG]$ gives the Latin square $Q(k,j,i)$.
\begin{defi}
The Latin squares $Q(j,i,k)$, $Q(i,k,j)$ and $Q(k,j,i)$ are called \emph{secondary conjugates of $Q$}.
\end{defi}
The primary conjugates of $Q$ can be derived from $L$ by projection, the secondary conjugates of $Q$ can be derived from a specific isotope of $L$ by projection. Consequently, based on Remark~\ref{rmrk240} each element of the main class of $Q$ can be derived from the proper isotope of $L$.
Now we prove that a Latin square $Q^*$ derived from an LSC $L^*$ produced from $L$ by Latin transformations is an element of the main class of $Q$.
All 6 faces of the cube can be moved to the bottom face, and all four vertices of the bottom face can be rotated to the origin without changing the bottom face. Rotating the bottom face by $-90$ degrees is called a \emph{face rotation}. If the face $ABCD$ in our example in Figure~\ref{fig1_1} is rotated by $-90$ degrees, the vertex $B$ is moved to the origin and the face $ABCD$ remains the bottom face.
It is easy to see that from each position of the cube all 24 positions can be reached by a sequence of face rotations.
Paratopy is an equivalence relation and hence transitive, so it is enough to prove that a Latin square $Q^*$ derived from an LSC $L^*$ produced from $L$ by a single face rotation is an element of the main class of $Q$.
Let $f$ be a face rotation, let $L^*$ be the resulting structure, and let $Q^*$ be the Latin square derived from $L^*$ by projection to the bottom face. We show that $Q^*$ is in the main class of $Q$. The transformation $f$ moves the row layer $i$ of $L$ into the column layer $(n+1-i)$, the column layer $j$ into the row layer $j$, and the coordinates of the symbol layers remain unchanged. The order of the column layers does not change, but the layers are moved to the other side of the origin, so their coordinates change from $1,2,\ldots ,n$ to $n,n-1,\ldots ,1$; therefore the new column coordinate is $(n+1-i)$.
So, the rows of the Latin square $Q^*$ derived from the resulting structure $L^*$ by projection to the bottom face contain the columns of $Q$ in the same order, the columns contain the rows of $Q$ in reverse order, and the order of the symbol layers is unchanged.
Denote the derived Latin square by $Q(j,n+1-i,k)$. In $Q(j,n+1-i,k)$, permuting the columns in reverse order yields the Latin square $Q(j,i,k)$, so $Q(j,n+1-i,k)$ is a paratope of $Q$.
\end{proof}
\begin{rmrk}
Permuting the layers of $H^3$ perpendicular to a given axis in reverse order corresponds to a plane reflection in the Euclidean sense. Permuting the layers on each axis of $H^3$ in reverse order corresponds to central symmetry.
\end{rmrk}
\begin{rmrk}
Secondary conjugates do not give us additional information, so from now on we only deal with primary conjugates.
\end{rmrk}
\goodbreak
\section{Hamming Bricks}
Let $X$ be a subsystem induced by a family of sets $\langle E_1,E_2,\ldots ,E_d\rangle$ over $\mathbb{Z}_n$. Then $X$ contains all cells identified by the \mbox{$d$-tuples} $(e_1,e_2,\ldots ,e_d)$ for which $e_i \in E_i$ for each $i\in \mathbb{Z}_d$.
\begin{defi}
The subsystem $X$ is \emph{visible} if $\max E_i - \min E_i = |E_i|-1$
for each $i\in \mathbb{Z}_d$. If $X$ is visible, it is called a \emph{Hamming brick} or simply a \emph{brick}.
\end{defi}
For each set $E_i$, there exists a permutation $p_i$ which places the different elements of the set $E_i$ as coordinates on the axis $t_i$ into the set of coordinates $1, 2,\ldots ,|E_i|$, leaving the coordinates on the other axes unchanged. This permutation only changes the order of the $(d-1)$-dimensional subspaces $H_i(j)$, where $j\in \mathbb{Z}_n$.
After executing all the permutations $p_1,p_2,\ldots ,p_d$, the subsystem $X$ is a brick permuted to the origin. We generally examine the subsystems in this form.
\begin{rmrk}
A brick permuted to the origin of the chess-board is a solid rectangular cuboid of size \mbox{$|E_1|\times |E_2|\times \ldots\times |E_d|$} in the Euclidean geometry.
\end{rmrk}
\begin{defi}
Let $X$ be a set of cells and $c$ be a cell of $H_n^d$. We define the Hamming distance between $X$ and $c$ as follows:
\[
d(X,c)=\min\{d(x,c)\mid x\in X\},
\]
where $d(x,c)$ is the Hamming distance between the cells $x$ and $c$.
\end{defi}
Clearly $d(X,c)>0$ if and only if $c\notin X$.
\begin{defi}
Let $T\subseteq H_n^d$ be a brick. The Hamming sphere of radius $r$, center $T$ is a set of cells for which
\[
S_r(T)=\{c\in H_n^d\mid d(T,c)=r\}.
\]
\end{defi}
Obviously $S_0(T)=T$.
So, we have generalized the cell-centered Hamming sphere to a brick-centered Hamming sphere. Figure~\ref{fig2_1} shows a Hamming sphere of radius $r=1$ around the cubelet $K$ and a Hamming sphere of radius $r=1$ around the brick $T$.
\begin{figure}[htb]
\centering
\begin{tabular}{c}
\hbox to 0.9\textwidth {\includegraphics[scale=.45]{DoR_Figures/fig2_1a}\hfill\includegraphics[scale=.45]{DoR_Figures/fig2_1b}}
\end{tabular}
\caption{}\label{fig2_1}
\end{figure}
Figure~\ref{fig2_2} shows Hamming spheres around the brick $T$ with $r=2$ and with $r=3$.
\begin{figure}[htb]
\centering
\begin{tabular}{c}
\hbox to 0.9\textwidth {\includegraphics[scale=.45]{DoR_Figures/fig2_2a}\hfill\includegraphics[scale=.45]{DoR_Figures/fig2_2b}}
\end{tabular}
\caption{}\label{fig2_2}
\end{figure}
\begin{defi}
Let $T$, $U\subseteq H_n^d$ be two bricks. The Hamming distance between these two bricks~is:
\[
d(T,U)=\min\{d(x,y)\mid x\in T, y\in U\}
\]
\end{defi}
It is evident that $d(T,U)=0$ if and only if $T\cap U\neq\emptyset$.
For $d=3$, $\dbinom{3}{0}=1$ brick has distance $0$ from the brick $T$ (namely $T$ itself), $\dbinom{3}{1}=3$ bricks have distance 1 from $T$, $\dbinom{3}{2}=3$ bricks have distance 2 from $T$, and $\dbinom{3}{3}=1$ brick has distance 3 from $T$.
\begin{defi}
A brick of size $e_1\times e_2\times \ldots\times e_d$ is a \emph{real brick}, if $1\leq e_1,e_2,\ldots,e_d<n$.
\end{defi}
Let $T_0$ be a brick of size $e_1\times e_2\times \ldots\times e_d$, where $1\leq e_1,e_2,\ldots,e_k<n$ and $e_{k+1}=\ldots=e_d=n$. In this case $k$ is the maximum distance between the brick $T_0$ and a cell, and hence also the maximum distance between $T_0$ and any other brick.
Let $I$ be a set of indices, where $I\subseteq\{1,2,\ldots,k\}$ and $r=|I|$. Let us define the set $T_I$ as follows
\[
T_I=\{x=(x_1,x_2,\ldots,x_d)\in H_n^d\mid n\geq x_i>e_i\text{ for }i\in I\text{ and }1\leq x_i\leq e_i\text{ for }i\notin I\}.
\]
Obviously, $T_I$ is a brick as well and $d(T_0,T_I)=r$.
Consequently, there are $\dbinom{k}{r}$ subsets of cardinality $r$, so there are $\dbinom{k}{r}$ distinct disjoint bricks that have distance $r$ from $T_0$. Using the Hamming distance from $T_0$, the brick $T_0$ thus generates a partition of $H_n^d$ into $2^k$ disjoint bricks indexed by the sets $I$. If $T_0$ is a real brick, then $k=d$, and the Hamming sphere of radius $r$ with center $T_0$ consists of $\dbinom{d}{r}$ disjoint bricks for each $r\in\{0,1,\ldots,d\}$.
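The partition generated by a real brick can be enumerated explicitly. The sketch below (our own check, for $d=3$ with coordinates $1,\ldots,n$) confirms that the bricks $T_I$ are pairwise disjoint and cover the chess-board:

```python
from itertools import combinations, product

n, d = 6, 3
e = (2, 3, 4)        # edge lengths of the real brick T_0 at the origin

def brick(I):
    """T_I: coordinates above e_i on the axes in I, at most e_i elsewhere."""
    ranges = [range(e[i] + 1, n + 1) if i in I else range(1, e[i] + 1)
              for i in range(d)]
    return set(product(*ranges))

bricks = [brick(set(I)) for r in range(d + 1)
          for I in combinations(range(d), r)]

assert len(bricks) == 2 ** d                           # 2^d bricks in total
assert sum(len(B) for B in bricks) == n ** d           # they are disjoint ...
assert set().union(*bricks) == set(product(range(1, n + 1), repeat=d))  # ... and cover H_n^d
```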
\begin{rmrk}\label{417}
A brick $T_k$ at a distance $k$ from a real brick $T_0$ has exactly $k$ edges that are obtained by taking the edge $(n-e_i)$ instead of the edge $e_i$ of $T_0$.
\end{rmrk}
\begin{rmrk}
Let $T$ be an arbitrary brick in the partition generated by $T_0$. Then $T$ generates the same bricks as $T_0$, so we get the same partition of the space $H_n^d$.
\end{rmrk}
\begin{rmrk}
Let $T_0$ be a real brick of size $e_1\times e_2\times \ldots\times e_d$ permuted to the origin. If we look at the partition generated by $T_0$ in Euclidean geometry, then each brick has a vertex that coincides with a vertex of the chess-board, different bricks fit on different chess-board vertices and the opposite (the only inner) vertex of each brick is the Euclidean point with coordinates $(e_1,e_2,\ldots ,e_d)$.
\end{rmrk}
\begin{defi}
Let $T,U\subseteq H_n^d$ be two bricks. The bricks $T$ and $U$ are \emph{remote} if $d(T,U)=d$. The pair of $T$ and $U$ is called a \emph{remote couple}, if $U$ contains all the cells that have a distance $d$ from $T$. If $T$ and $U$ form a remote couple, then $T$ is called the \emph{remote mate of $U$} and $U$ is called the \emph{remote mate of $T$}.
\end{defi}
If $T$ and $U$ are remote, then $U$ and $T$ are remote as well. If $T$ and $U$ are remote, then $T\cap U=\emptyset$.
\begin{defi}
Let $X$ be a set of cells. We define $V(X)$, the \emph{volume} of $X$ as the number of cells in $X$.
\end{defi}
If $T\subseteq H_n^d$ is a brick of size $e_1\times e_2\times \ldots\times e_d$, then the volume of $T$ is
$V(T)=\displaystyle\prod_{i=1}^de_i$.
\begin{defi}
Let $T_0\subseteq H_n^d$ be a brick of size $e_1\times e_2\times \ldots\times e_d$.
The \emph{area of $T_0$ orthogonal to $e_i$ or $t_i$} is
\[
A(T_0,e_i)=A(T_0,t_i)=V(T_0)/e_i=e_1e_2\ldots e_{i-1}e_{i+1}\ldots e_{d-1}e_d.
\]
\end{defi}
\begin{defi}
Let $T_0\subseteq H_n^d$ be a brick of size $e_1\times e_2\times \ldots\times e_d$.
The brick $T_i$, in the partition generated by $T_0$, is the \emph{auxiliary brick} of $T_0$ along $t_i$, if $T_i$ contains all of the cells $(x_1,x_2,\ldots,x_i,\ldots,x_d)$ for which $x_k\le e_k$ if $k\neq i$ and $x_k>e_k$ if $k=i$. $T_0$ and $T_i$ together are called \emph{auxiliaries} along $t_i$.
\end{defi}
Clearly, $d(T_0,T_i)=1$ and $T_0$ has $\dbinom{d}{1}=d$ auxiliary bricks. Figure~\ref{fig2_1} shows the case $d=3$.
\begin{rmrk}
If the bricks $U_1$ and $U_2$ are auxiliaries along an axis $t$, then $U_1$ and $U_2$ are disjoint, so
$V(U_1\cup U_2)=V(U_1)+V(U_2)$ and $U_1\cup U_2$ is a brick with an edge of length $n$ on the axis $t$.
\end{rmrk}
\begin{defi}
A brick $T$ is an \emph{$n$-brick}, if $T$ has at least one edge of length $n$. A brick $T$ is an \emph{$n^2$-brick}, if $T$ has at least two edges of length $n$.
\end{defi}
\begin{rmrk}
A subspace is always an $n$-brick.
\end{rmrk}
Let $L$ be a $d$-LSC and $T\subseteq H_n^d$ be an $n$-brick of size $e_1\times e_2\times \ldots\times e_d$, where $e_i=n$. It is clear from the construction that each file has exactly one rook. Therefore, the number of rooks in $T$ equals the area of $T$ orthogonal to $e_i$, i.e., $V(T)/n$. This observation yields the following.
Let $L$ be a $d$-LSC and $T_0$ be a brick of size $e_1\times e_2\times \ldots\times e_k\times e_{k+1}\times \ldots\times e_d$, which has $c_0$ rooks. Let $T_k$ be a brick with distance $k$ from $T_0$ in the partition generated by $T_0$ and contain $c_k$ rooks. Based on Remark~\ref{417} and without loss of generality, we assume that $T_k$ is a brick of size
\[
(n-e_1)\times (n-e_2)\times \ldots\times (n-e_k)\times e_{k+1}\times \ldots\times e_d,
\]
otherwise, we change the order of the axes.
\begin{theo}[Distribution Theorem]\label{theo2.17}
\begin{equation}\label{(201)}
c_k=\frac{V(T_k)-(-1)^kV(T_0)}{n}+(-1)^kc_0=\frac{V(T_k)}{n}-(-1)^k\left[\frac{V(T_0)}{n}-c_0\right]
\end{equation}
\end{theo}
\begin{proof}
Let $T_1$ be the auxiliary brick of $T_0$ along $t_1$ and let $T_1$ contain $c_1$ rooks. Since $T_0\cup T_1$ is an $n$-brick, it has $\dfrac{V(T_1\cup T_0)}{n}=\dfrac{V(T_1)+V(T_0)}{n}$ rooks, so $c_0+c_1=\dfrac{V(T_1)+V(T_0)}{n}$, and
\[
c_1=\dfrac{V(T_1)+V(T_0)}{n}-c_0.
\]
Let the brick $T_2$ be the auxiliary brick of $T_1$ along $t_2$ and let $T_2$ contain $c_2$ rooks. Since $T_1\cup T_2$ is an $n$-brick, it has $\dfrac{V(T_2\cup T_1)}{n}=\dfrac{V(T_2)+V(T_1)}{n}$ rooks, so $c_1+c_2=\dfrac{V(T_2)+V(T_1)}{n}$, hence
\begin{align*}
c_2&=\frac{V(T_2)+V(T_1)}{n}-c_1\\
&=\frac{V(T_2)+V(T_1)}{n}-\left(\frac{V(T_1)+V(T_0)}{n}-c_0\right)=
\frac{V(T_2)-V(T_0)}{n}+c_0.
\end{align*}
Let the brick $T_3$ be the auxiliary brick of $T_2$ along $t_3$ and let $T_3$ contain $c_3$ rooks. Since $T_2\cup T_3$ is an $n$-brick, it has $\dfrac{V(T_3\cup T_2)}{n}=\dfrac{V(T_3)+V(T_2)}{n}$ rooks, so $c_2+c_3=\dfrac{V(T_3)+V(T_2)}{n}$, hence
\begin{align*}
c_3&=\frac{V(T_3)+V(T_2)}{n}-c_2\\
&=\frac{V(T_3)+V(T_2)}{n}-\left(\frac{V(T_2)-V(T_0)}{n}+c_0\right)=\frac{V(T_3)+V(T_0)}{n}-c_0.
\end{align*}
In general, when we take $T_{i+1}$, the auxiliary brick of $T_i$, and calculate $c_{i+1}$, then $V(T_i)$ cancels and the signs of $V(T_0)$ and $c_0$ change; in the next step $V(T_{i-1})$ cancels and the signs change again, and so on:
\begin{align*}
c_{i+1}&=\frac{V(T_{i+1})+V(T_i)}{n}-c_i=\frac{V(T_{i+1})+V(T_i)}{n}-\left(\frac{V(T_i)+V(T_{i-1})}{n}-c_{i-1}\right)=\ldots\\
&=\frac{V(T_{i+1})-(-1)^{i+1}V(T_0)}{n}+(-1)^{i+1}c_0.
\end{align*}
So \eqref{(201)} holds, and we can write it in the following form:
\begin{equation}\label{(202)}
(-1)^k\left[\frac{V(T_0)}{n}-c_0\right]=\frac{V(T_k)}{n}-c_k.
\end{equation}
\end{proof}
\noindent
So, for a given $k$, $c_k$ depends only on $c_0$, the volumes of $T_0$ and $T_k$, and the parity of $k$.
\begin{defi}
The \emph{density} of a set of cells $X$ is $\varrho(X)=c/V(X)$, where $c$ is the number of rooks in $X$.
\end{defi}
In the case of a $d$-LSC the density of the entire chess-board is $\varrho(H_n^d)=n^{d-1}/n^d=1/n$.
\begin{defi}
The set of cells $X$ has \emph{standard density} if $\varrho(X)=1/n$.
\end{defi}
\begin{defi}
We define the \emph{deflection} of the set of cells $X$ from the standard density as follows
\[
\operatorname{df}(X) = V(X)/n - c
\]
where $c$ is the number of rooks in $X$.
\end{defi}
The set of cells $X$ has standard density if and only if $\operatorname{df}(X)=0$. Using the deflection, the equality in \eqref{(201)} can be written in the following two ways:
\begin{theo}[Deflection Theorem]
\begin{equation}\label{(203)}
c_k=\frac{V(T_k)}{n}-(-1)^k\operatorname{df}(T_0).
\end{equation}
\end{theo}
\begin{theo}[Main Theorem]
\begin{equation}\label{(204)}
\operatorname{df}(T_k)=(-1)^k\operatorname{df}(T_0).
\end{equation}
\end{theo}
If $T_0$ has standard density, then $\operatorname{df}(T_k)=0$, so each Hamming brick also has standard density. If $\operatorname{df}(T_0)\neq 0$, then each brick in the Hamming sphere $S_k(T_0)$ has the same deflection, namely $\operatorname{df}(T_0)$ if $k$ is even and $-\operatorname{df}(T_0)$ if $k$ is odd. Hence the sign of the deflection of the bricks alternates when we step to the bricks of the next Hamming sphere $S_{k+1}(T_0)$. Note that the deflection is not necessarily an integer.
If $T_0$ is not a real brick, then $T_0$ has at least one edge of length $n$, so $\operatorname{df}(T_0)=0$, thus, $T_0$ has a standard density. Consequently, in this case, each brick of the partition has a standard density.
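The Main Theorem can also be verified numerically on a concrete 3-LSC. The following sketch (our illustration, using the cyclic LSC on coordinates $1,\ldots,n$) checks $\operatorname{df}(T_k)=(-1)^k\operatorname{df}(T_0)$ for every brick of the partition generated by a real brick $T_0$:

```python
from itertools import combinations, product

n, d = 5, 3
# A cyclic 3-LSC on coordinates 1..n: rook in (i, j, k) iff i+j+k = 0 (mod n).
rooks = {c for c in product(range(1, n + 1), repeat=d) if sum(c) % n == 0}
e = (2, 2, 3)        # edge lengths of the real brick T_0

def brick(I):
    """T_I of the partition generated by T_0."""
    ranges = [range(e[i] + 1, n + 1) if i in I else range(1, e[i] + 1)
              for i in range(d)]
    return set(product(*ranges))

def df(X):
    """Deflection of a cell set X from the standard density 1/n."""
    return len(X) / n - len(X & rooks)

df0 = df(brick(set()))
for k in range(d + 1):
    for I in combinations(range(d), k):
        # Main Theorem: df(T_k) = (-1)^k df(T_0).
        assert abs(df(brick(set(I))) - (-1) ** k * df0) < 1e-9
```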
\begin{cor}
If $n$ is a prime number, then there is no real Hamming brick of standard density.
\end{cor}
\subsection*{Case $d=2$:}
The brick $T_0$, colored yellow in the Figure~\ref{fig2_3}, is a real brick of size $a\times b$. The bricks $T_{1a}$ and $T_{1b}$
are the two auxiliaries of $T_0$ and the brick $T_2$ is the remote mate of $T_0$.
Each file has exactly one rook; let $T_0$ have $c_0$ rooks.
Then $T_{1a}$ has
\[
\frac{V(T_0)+V(T_{1a})}{n}-c_0=b-c_0
\]
rooks and so $T_2$ has
\[
\frac{V(T_{1a})+V(T_2)}{n}-\left(\frac{V(T_0)+V(T_{1a})}{n}-c_0\right)=\frac{V(T_2)-V(T_0)}{n}+c_0
\]
rooks. So
\[
c_2=\frac{V(T_2)-V(T_0)}{n}+c_0=(n-a-b)+c_0.
\]
Another form of this equality is
\[
c_0-c_2=a+b-n.
\]
\begin{figure}[htb]
\centering\includegraphics[scale=.4]{DoR_Figures/Fig2_3}
\caption{}\label{fig2_3}
\end{figure}
\begin{defi}
If $T_0$ is a real brick of size $a\times b$, then the number $(a+b-n)$ is called the \emph{Ryser-number} of $T_0$ and is denoted by $\operatorname{Ry}(T_0)$.
\end{defi}
The Ryser-number of $T_0$ is the difference between the numbers of rooks in $T_0$ and $T_2$. If $c_0$ is known, then we know exactly how many rooks are in $T_2$, that is $c_2=c_0-\operatorname{Ry}(T_0)$.
It is clear that $\operatorname{Ry}(T_2)=-\operatorname{Ry}(T_0)=(n-a-b)$ and $c_2=c_0+\operatorname{Ry}(T_2)$.
\begin{cor}
If $a+b=n$, then the white bricks $T_{1a}$ and $T_{1b}$ are squares and $\operatorname{Ry}(T_0)=0$, so $T_0$ and $T_2$ have the same number of rooks.
\end{cor}
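A 2-LSC is simply a permutation matrix of rooks, so the Ryser-number identity $c_0-c_2=a+b-n$ is easy to test (an illustrative sketch, 0-based coordinates):

```python
n = 7
# A 2-LSC of order n: one rook per row, at column 3*i mod n (a permutation).
rooks = {(i, (3 * i) % n) for i in range(n)}

a, b = 3, 5                               # T_0 is the a-by-b brick at the origin
c0 = sum(1 for (x, y) in rooks if x < a and y < b)
c2 = sum(1 for (x, y) in rooks if x >= a and y >= b)   # the remote mate T_2
assert c0 - c2 == a + b - n               # the Ryser-number of T_0
```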
\subsection*{Case $d=3$:}
The brick $T_0$, colored yellow in the Figure~\ref{fig2_4}, is a real brick of size $a\times b\times c$. The bricks $T_{1a}$, $T_{1b}$ and $T_{1c}$ are the auxiliaries of $T_0$, the brick $T_3$ is the remote mate of $T_0$ and the bricks $T_{2ab}$, $T_{2bc}$ and $T_{2ac}$ are the auxiliaries of $T_3$.
Each file has exactly one rook; let $T_0$ have $c_0$ rooks.
Then $T_0\cup T_{1a}$ has $\dfrac{V(T_0)+V(T_{1a})}{n}=bc$ rooks, so $T_{1a}$ has
\[
\frac{V(T_0)+V(T_{1a})}{n}-c_0=bc-c_0
\]
rooks.
\begin{figure}[htb]
\centering\includegraphics[scale=.5]{DoR_Figures/Fig2_4}
\caption{}\label{fig2_4}
\end{figure}
\noindent
$T_{1a}\cup T_{2ab}$ has $\dfrac{V(T_{1a})+V(T_{2ab})}{n}=(n-a)c$ rooks, hence $T_{2ab}$ has
\[
\frac{V(T_{1a})+V(T_{2ab})}{n}-\left(\frac{V(T_0)+V(T_{1a})}{n}-c_0\right)=\frac{V(T_{2ab})-V(T_0)}{n}+c_0
\]
rooks.
$T_{2ab}\cup T_3$ has $\dfrac{V(T_{2ab})+V(T_3)}{n}=(n-a)(n-b)$ rooks, so $T_3$ has
\[
\frac{V(T_{2ab})+V(T_3)}{n}-\left(\frac{V(T_{2ab})-V(T_0)}{n}+c_0\right)=\frac{V(T_3)+V(T_0)}{n}-c_0.
\]
As a result,
\[
c_0+c_3=\frac{V(T_0)+V(T_3)}{n}=\frac{abc+(n-a)(n-b)(n-c)}{n}=n^2-(a+b+c)n+(ab+bc+ca).
\]
In the degenerate cases $c=0$ and $c=n$ the formula also gives the correct numbers.
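The identity for $c_0+c_3$ can be tested on a concrete 3-LSC (our own check, using the cyclic construction on coordinates $1,\ldots,n$):

```python
from itertools import product

n = 5
# A cyclic 3-LSC: rook in (x, y, z) iff x + y + z = 0 (mod n).
rooks = {t for t in product(range(1, n + 1), repeat=3) if sum(t) % n == 0}

a, b, c = 2, 3, 2                          # edges of the real brick T_0
c0 = sum(1 for (x, y, z) in rooks if x <= a and y <= b and z <= c)
c3 = sum(1 for (x, y, z) in rooks if x > a and y > b and z > c)   # remote mate
assert c0 + c3 == n**2 - (a + b + c) * n + (a*b + b*c + c*a)
```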
\begin{rmrk}
Let $M$ be a $d$-fold stochastic matrix of dimension $d$. Then the sum of the numbers in each file is exactly~1. Let $c_0$ be the sum of the numbers in $T_0$ and $c_k$ be the sum of the numbers in $T_k$. Then the Distribution Theorem~\ref{theo2.17} holds for $d$-fold stochastic matrices as well; one only has to change the expression ``number of rooks'' to ``sum of the numbers'' in the proof.
\end{rmrk}
\begin{cor}
The sum of the numbers in $T_k$ depends only on the sum of the numbers in $T_0$, the volumes of $T_0$ and $T_k$, and the parity of $k$.
\end{cor}
\begin{rmrk}
The Distribution Theorem~\ref{theo2.17} also holds for $d$-fold stochastic matrices with entries of any type, provided that the sum of the numbers is exactly~1 in each file, but we deal only with the non-negative case.
\end{rmrk}
\begin{rmrk}
For $d=3$ we get $c_0+c_3=n^2-(a+b+c)n+(ab+bc+ca)$, which is a result of Cruse~\cite{[2]} for triply stochastic matrices, including permutation cubes, which are the characteristic matrices of Latin squares. Consequently, for triply stochastic matrices $c_0+c_3$ is always an integer.
\end{rmrk}
\begin{rmrk}
For $d=2$ we get $c_2=(n-a-b)+c_0$, thus $c_0-c_2 = a+b-n$; consequently, for doubly stochastic matrices $c_2-c_0$ is always an integer.
\end{rmrk}
\begin{defi}
A \emph{file of brick} $T$ means the cells of a file, which are in $T$.
\end{defi}
\begin{defi}
A \emph{layer of brick} $T$ means the cells of a layer, which are in $T$.
\end{defi}
\begin{defi}
The brick $T$ is \emph{large} if $T$ has a layer of size $a\times b$ for which $a+b>n$.
\end{defi}
\begin{cor}
Let $T$ be a large brick in a $d$-fold stochastic matrix or in a $d$-LSC. Then $T$ has no layer, for which the sum of numbers or the number of rooks is $0$, respectively.
\end{cor}
\begin{cor}
If $T$ is a brick of size $a\times b\times c$ in a $3$-LSC and $a+b>n$, $b+c>n$, $a+c>n$, then each layer of the brick $T$ has at least one rook.
\end{cor}
\section{Taxicab Geometry}
In taxicab geometry the distance between two points is the sum of the absolute differences of their Cartesian coordinates, or simply $d(O,P)=r+c$, where $r$ and $c$ denote the two coordinate differences. $d(O,P)$ is the length of a shortest path from $O$ to~$P$, and there can be more than one shortest path. If you start from $O$ and always drive right or down (never left or up), you obtain a shortest path from $O$ to $P$ when you arrive at $P$, as indicated in Figure~\ref{fig3_1}.
\begin{figure}[htb]
\centering\includegraphics[scale=.3]{DoR_Figures/Fig3_1}
\caption{}\label{fig3_1}
\end{figure}
\begin{defi}
The \emph{diameter} of the rectangle $T$ is $\operatorname{diam}(T)=r+c$.
\end{defi}
The diameter is the distance between the two most remote points of $T$.
Let $T\subseteq H_n^d$ be a brick of size $e_1\times e_2\times \ldots\times e_d$. Based on the case of dimension 2, we define the diameter of a Hamming brick in general as follows:
\begin{defi}
The \emph{diameter} of the brick $T$ is $\operatorname{diam}(T)=e_1+e_2+\ldots+e_d$.
\end{defi}
\goodbreak
\section{Sum of Hamming Distances}
First, we prove an identity for binomial coefficients.
\begin{theo}\label{theo4.1}
\[
\sum_{k=0}^dk(-1)^k\binom{d}{k}=0
\]
for any $d\geq 2$.
\end{theo}
\begin{proof}
We use the two well-known identities
\begin{subequations}
\begin{equation}
\binom{k+1}{i}=\binom{k}{i}+\binom{k}{i-1}\label{(3a)}
\end{equation}
\begin{equation}
\sum_{k=0}^d(-1)^k\binom{d}{k}=0.\label{(3b)}
\end{equation}
\end{subequations}
We prove the theorem by induction on $d$. The assertion of the theorem is true if $d=2$, 3, 4.
\begin{align*}
0\cdot 1-1\cdot 2+2\cdot 1&=0\\
0\cdot 1-1\cdot 3+2\cdot 3-3\cdot 1&=0\\
0\cdot 1-1\cdot 4+2\cdot 6-3\cdot 4+4\cdot 1&=0
\end{align*}
We suppose that $d\geq 2$ and that the induction hypothesis holds for $d$, i.e.,
\[
A_d=\sum_{k=0}^dk(-1)^k\binom{d}{k}=0,
\]
then we prove that
\[
A_{d+1}=\sum_{k=0}^{d+1}k(-1)^k\binom{d+1}{k}=0.
\]
Using \eqref{(3a)}
\begin{align*}
A_{d+1}&=\sum_{k=0}^{d+1}k(-1)^k\binom{d}{k}+\sum_{k=0}^{d+1}k(-1)^k\binom{d}{k-1}\\
A_{d+1}&=\sum_{k=0}^dk(-1)^k\binom{d}{k}+(d+1)(-1)^{d+1}\binom{d}{d+1}+0\cdot (-1)^0\binom{d}{-1}+\sum_{k=1}^{d+1}k(-1)^k\binom{d}{k-1}.
\end{align*}
Since $\dbinom{d}{d+1}=\dbinom{d}{-1}=0$ we get
\begin{align*}
A_{d+1}&=\sum_{k=0}^{d}k(-1)^k\binom{d}{k}+\sum_{k=1}^{d+1}k(-1)^k\binom{d}{k-1}\\
A_{d+1}&=A_d+\sum_{k=1}^{d+1}k(-1)^k\binom{d}{k-1}.
\end{align*}
$A_d=0$ by the induction hypothesis, so
\[
A_{d+1}=\sum_{k=1}^{d+1}k(-1)^k\binom{d}{k-1}.
\]
Replacing $k$ with $(j+1)$:
\begin{align*}
A_{d+1}&=\sum_{j=0}^d(j+1)(-1)^{j+1}\binom{d}{j}=(-1)\sum_{j=0}^d(j+1)(-1)^j\binom{d}{j}\\
&=(-1)\left[\sum_{j=0}^dj(-1)^j\binom{d}{j}+\sum_{j=0}^d1\cdot(-1)^j\binom{d}{j}\right]\\
&=(-1)\left[A_d+\sum_{j=0}^d1\cdot(-1)^j\binom{d}{j}\right]=(-1)\left[\sum_{j=0}^d(-1)^j\binom{d}{j}\right]=0
\end{align*}
because of \eqref{(3b)}.
So the statement holds for $(d+1)$, which means it holds for every $d\ge 2$.%
\end{proof}
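The identity is also easy to confirm numerically (a quick sketch using Python's `math.comb`):

```python
from math import comb

# The sum of k * (-1)^k * C(d, k) vanishes for every d >= 2.
for d in range(2, 16):
    assert sum(k * (-1) ** k * comb(d, k) for k in range(d + 1)) == 0
```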
Let $L$ be a $d$-LSC and $T_0$ be a real brick. Each rook has a Hamming distance from $T_0$. Let $s_k$ denote the number of rooks in the Hamming sphere $S_k(T_0)$, and let $V(S_k(T_0))$ be the sum of the volumes of all the bricks of the Hamming sphere $S_k(T_0)$.
We sum up all Hamming distances from $T_0$ for all rooks. The sum is denoted by $h_n^d(T_0)$.
$S_k(T_0)$ has $\dbinom{d}{k}$ bricks, each of which has $c_k=\dfrac{V(T_k)}{n}-(-1)^k\operatorname{df}(T_0)$ rooks. Thus
\[
s_k=\dfrac{V(S_k(T_0))}{n}-\dbinom{d}{k}(-1)^k\operatorname{df}(T_0).
\]
\begin{align*}
h_n^d(T_0)&=\sum_{k=0}^dks_k=\sum_{k=0}^dk\left[\frac{V(S_k(T_0))}{n}-\binom{d}{k}(-1)^k\operatorname{df}(T_0)\right]\\
&=\sum_{k=0}^dk\frac{V(S_k(T_0))}{n}-\sum_{k=0}^dk\binom{d}{k}(-1)^k\operatorname{df}(T_0)\\
&=\sum_{k=0}^dk\frac{V(S_k(T_0))}{n}-\operatorname{df}(T_0)\sum_{k=0}^dk(-1)^k\binom{d}{k}.
\end{align*}
Because of
Theorem~\ref{theo4.1}
\begin{equation}\label{(401)}
h_n^d(T_0)=\sum_{k=0}^dks_k=\sum_{k=0}^dk\frac{V(S_k(T_0))}{n}.
\end{equation}
\begin{theo}[Distance Theorem]
\begin{equation}\label{(402)}
h_n^d(T_0)=n^{d-2}\left[(n-e_1)+(n-e_2)+\ldots+(n-e_d)\right]
\end{equation}
\end{theo}
\begin{proof}
We prove, that
\begin{equation}\label{(403)}
nh_n^d(T_0)=\sum_{k=0}^d k V(S_k(T_0))=n^{d-1}\left[(n-e_1)+(n-e_2)+\ldots+(n-e_d)\right]
\end{equation}
by induction on $d$.
In the case $d=2$, there are $c_0$ rooks with distance $0$, $c_{1a}+c_{1b}$ rooks with distance 1 and $c_2$ rooks with distance~2. The sum is:
\begin{align*}
h_n^2(T_0)&=0\cdot c_0+1\cdot (c_{1a}+c_{1b})+2\cdot c_2\\
&=(b-c_0)+(a-c_0)+2(c_0-a-b+n)=2n-a-b=(n-a)+(n-b),
\end{align*}
so
\[
nh_n^2(T_0)=n\big[(n-a) + (n-b)\big]
\]
Let us assume that \eqref{(403)} holds for $d$. We prove that
\[
\sum_{k=0}^{d+1}kV(S_k^{d+1}(T_0))=n^d\big[(n-e_1)+(n-e_2)+\ldots+(n-e_d)+(n-e_{d+1})\big]
\]
Because of \eqref{(401)}
\[
nh_n^{d+1}(T_0)=\displaystyle\sum_{k=0}^{d+1}kV(S_k^{d+1}(T_0))
\]
Let $T_0^{d+1}$ be a real brick in $H_n^{d+1}$. Let $T_k^{d+1}$ be a Hamming brick in the partition generated by $T_0^{d+1}$ with $d\big(T_0^{d+1}, T_k^{d+1}\big)=k$; then there exists an index set $I=\{i_1,i_2,\ldots,i_k\}$ such that we changed $e_i$ of $T_0^{d+1}$ to $(n-e_i)$ in $T_k^{d+1}$ for $i\in I$ and we kept $e_j$ of $T_0^{d+1}$ in $T_k^{d+1}$ for $j\notin I$. There are exactly $\dbinom{d+1}{k}$ index sets with this property and each of them defines a $T_k^{d+1}$ brick.
The volume of $T_0^{d+1}$ is $V(T_0^{d+1})=\displaystyle\prod_{i=1}^{d+1}e_i$ and the volume of $T_k^{d+1}$ is
\[
V(T_k^{d+1})=\prod_{i\in I}(n-e_i)\cdot \prod_{j\notin I}e_j.
\]
There are two types of bricks that have the distance $k$ from $T_0$. The first type has an edge $e_{d+1}$ on the axis $t_{d+1}$, the other has an edge $(n-e_{d+1})$ on the axis $t_{d+1}$. Take all bricks from the first type.
In this case we changed $k$ edges from the edges of $T_0^d$. The sum of the volume of these bricks is $e_{d+1}V(S_k(T_0^d))$. In the other case we changed $k-1$ edges from the edges of $T_0^d$ and the $e_{d+1}$ to $(n-e_{d+1})$.
The sum of the volume of these bricks is $(n-e_{d+1})V(S_{k-1}(T_0^d))$. So
\begin{align*}
V\big(S_k^{d+1}(T_0^{d+1})\big)&=e_{d+1}V\big(S_k^d(T_0^d)\big)+(n-e_{d+1})V\big(S_{k-1}^d(T_0^d)\big)\\
\sum_{k=0}^{d+1}kV\big(S_k^{d+1}(T_0^{d+1})\big)&=\sum_{k=0}^{d+1}k\left[e_{d+1}V(S_k^d(T_0^d))+(n-e_{d+1})V(S_{k-1}^d(T_0^d))\right]\\
&=e_{d+1}\sum_{k=0}^{d+1}kV(S_k^d(T_0^d))+(n-e_{d+1})\sum_{k=0}^{d+1}kV(S_{k-1}^d(T_0^d)).
\end{align*}
Because of $(d+1)V(S_{d+1}^d(T_0^d))=0$ and $0\cdot V(S_{-1}^d(T_0^d))=0$ we get
\begin{align*}
nh_n^{d+1}(T_0)&=\sum_{k=0}^{d+1}kV(S_k^{d+1}(T_0^{d+1}))=e_{d+1}\sum_{k=1}^{d+1}kV(S_k^d(T_0^d))+(n-e_{d+1})\sum_{k=1}^{d+1}kV(S_{k-1}^d(T_0^d))\\
&=e_{d+1}nh_n^d(T_0)+(n-e_{d+1})\left[\sum_{k=0}^d1\cdot V(S_k^d(T_0))+\sum_{k=0}^d kV(S_k^d(T_0))\right]\\
&=e_{d+1}nh_n^d(T_0)+(n-e_{d+1})\left[n^d+nh_n^d(T_0)\right]\\
&=e_{d+1}nh_n^d(T_0)+n\left[n^d+nh_n^d(T_0)\right]-e_{d+1}\left[n^d+nh_n^d(T_0)\right]\\
&=e_{d+1}nh_n^d(T_0)+n^{d+1}+n^2h_n^d(T_0)-e_{d+1}n^d-e_{d+1}nh_n^d(T_0)\\
&=(n-e_{d+1})n^d+ n^2h_n^d(T_0).
\end{align*}
Based on the induction hypothesis we replace $h_n^d(T_0)$ by the right-hand side of \eqref{(402)}:
\begin{align*}
nh_n^{d+1}(T_0)&=(n-e_{d+1})n^d+n^2n^{d-2}\left[(n-e_1)+(n-e_2)+\ldots+(n-e_d)\right]\\
&=n^d\left[(n-e_1)+(n-e_2)+\ldots+(n-e_d)+(n-e_{d+1})\right]
\end{align*}
so
\begin{equation}\label{(404)}
h_n^{d+1}(T_0)=n^{d-1}\left[(n-e_1)+(n-e_2)+\ldots+(n-e_d)+(n-e_{d+1})\right]=n^{d-1}\operatorname{diam}(T_{d+1})
\end{equation}
So \eqref{(402)} holds for any $d>1$.
If $T_0$ is not a real brick, then $T_0$ is an $n$-brick and $\operatorname{df}(T_0)=0$. Let us assume that $T_0$ has $m$ edges $e_1$, $e_2$, \ldots, $e_m<n$. Every $T_k$ brick in the partition generated by $T_0$ has $d-m$ edges of length $n$, so for $T_k$ of dimension $d$ we have $V(T_k^d)=n^{d-m}V(T_k^m)$, where $T_k^m$ denotes a brick of dimension $m$ with edges $e_1$, $e_2$, \ldots, $e_m$. In this case \eqref{(402)} holds for $T_0^m$, so $h_n^m(T_0^m)=n^{m-2}\left[(n-e_1)+(n-e_2)+\ldots+(n-e_m)\right]$ and
\[
n^{d-m} h_n^m(T_0^m)=n^{d-2}\left[(n-e_1)+(n-e_2)+\ldots+(n-e_m)+(n-n)+\ldots+(n-n)\right]
\]
The left-hand side is $h_n^d(T_0)$, therefore \eqref{(402)} holds for non-real bricks too.%
\end{proof}
\begin{cor}
If we take the sum of the Hamming distances of all rooks of a $d$-LSC from a brick $T_0$, then the result does not depend on the number of rooks in $T_0$.
\end{cor}
\begin{cor}
If $T_0$ is a real brick, then $T_d$ is the only brick that has a distance $d$ from $T_0$, so because of \eqref{(402)}
\begin{equation}\label{(405)}
\begin{aligned}
h_n^d(T_0)&=n^{d-2}\left[(n-e_1)+(n-e_2)+\ldots+(n-e_d)\right]=n^{d-2}\operatorname{diam}(T_d)\\
h_n^d(T_d)&=n^{d-2}\left[e_1+e_2+\ldots+e_d\right]=n^{d-2}\operatorname{diam}(T_0)
\end{aligned}
\end{equation}
\end{cor}
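The Distance Theorem and its corollaries can be checked numerically. The sketch below (my addition, not part of the paper) uses a random permutation matrix as a $2$-LSC and the cyclic Latin square $L(i,j)=(i+j)\bmod n$ as a $3$-LSC, with $T_0$ the brick $[0,a)\times[0,b)$, respectively $[0,a)\times[0,b)\times[0,c)$:

```python
import random

def h_d2(n, perm, a, b):
    """Sum of Hamming distances of the rooks (i, perm[i]) from T0 = [0,a) x [0,b)."""
    return sum((i >= a) + (perm[i] >= b) for i in range(n))

def h_d3(n, a, b, c):
    """Same sum for the cyclic 3-LSC with rooks (i, j, (i+j) mod n)."""
    return sum((i >= a) + (j >= b) + ((i + j) % n >= c)
               for i in range(n) for j in range(n))

n = 7
for a in range(1, n):
    for b in range(1, n):
        perm = list(range(n)); random.shuffle(perm)
        assert h_d2(n, perm, a, b) == (n - a) + (n - b)              # d = 2
        for c in range(1, n):
            assert h_d3(n, a, b, c) == n * ((n-a) + (n-b) + (n-c))   # d = 3
```

The assertions confirm that the sum of distances does not depend on the particular $d$-LSC, only on the edges of $T_0$, as the theorem states.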
\section{Remote Brick Couples (RBCs)}
From now on we are dealing with the cases $d=2$ and $d=3$.
Based on the Distribution Theorem~\ref{theo2.17}
\begin{align*}
c_0-c_2&=\frac{V(T_2)-V(T_0)}{n}=\frac{ab-(n-a)(n-b)}{n}=a+b-n\qquad\text{and}\\
c_0+c_3&=\frac{V(T_3)+ V(T_0)}{n}=\frac{abc+(n-a)(n-b)(n-c)}{n}=n^2-(a+b+c)n+(ab+bc+ca)
\end{align*}
\begin{defi}
For $d=2$ the RBC $(T_0,T_2)$ is called \emph{balanced}, if $c_0-c_2=a+b-n$ holds.
\end{defi}
\begin{defi}
For $d=3$ let the \emph{capacity} of RBC $(T_0,T_3)$ be
\[
\operatorname{cap}(T_0,T_3)=n^2-(a+b+c)n+(ab+bc+ca)
\]
\end{defi}
\begin{defi}
For $d=3$ the RBC $(T_0,T_3)$ is called \emph{stuffed}, if $c_0+c_3=\operatorname{cap}(T_0,T_3)$.
\end{defi}
\begin{cor}
For a 2-LSC $L$ it holds that each RBC $(T_0,T_2)$ of $L$ is balanced.
For a 3-LSC $M$ it holds that each RBC $(T_0,T_2)$ of any layer of $M$ is balanced and each RBC $(T_0,T_3)$ of $M$ is stuffed.
\end{cor}
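This corollary can be verified on a concrete 3-LSC. The sketch below (my addition) counts the rooks of the cyclic Latin square $L(i,j)=(i+j)\bmod n$ inside boxes and checks both the stuffed and the balanced property:

```python
def rooks_in(n, x0, x1, y0, y1, z0, z1):
    """Count rooks of the cyclic 3-LSC (i, j, (i+j) mod n) inside a box."""
    return sum(1 for i in range(x0, x1) for j in range(y0, y1)
               if z0 <= (i + j) % n < z1)

n, a, b, c = 8, 3, 5, 4
# Stuffed RBC: c0 + c3 equals the capacity n^2 - (a+b+c)n + (ab+bc+ca).
c0 = rooks_in(n, 0, a, 0, b, 0, c)
c3 = rooks_in(n, a, n, b, n, c, n)
assert c0 + c3 == n * n - (a + b + c) * n + (a * b + b * c + c * a)
# Balanced layers: in every layer z = k, c0(k) - c2(k) = a + b - n.
for k in range(n):
    c0k = rooks_in(n, 0, a, 0, b, k, k + 1)
    c2k = rooks_in(n, a, n, b, n, k, k + 1)
    assert c0k - c2k == a + b - n
```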
\begin{rmrk}
$T_0$ has different meanings for $d=2$ and $d=3$.
\end{rmrk}
\begin{rmrk}
The expression $\operatorname{cap}(T_0,T_3)$ is sometimes written as $\operatorname{cap}(n,a,b,c)$ to emphasize that $\operatorname{cap}(T_0,T_3)$ is a function of the integer variables $a$, $b$, $c$, where $a$, $b$, $c\in\{1,2,\ldots,n\}$ and the integer $n$ is fixed. Cruse~\cite{[2]} proved some properties of this function.
\end{rmrk}
\begin{defi}
The brick $T$ is \emph{degenerated}, if at least one edge of $T$ is~$0$.
\end{defi}
If $T$ is degenerated, then it contains no rooks.
We consider the remote bricks $(T_0,T_3)$ as a degenerated RBC if either $T_0$ or $T_3$ has one edge of size~0, for example $c = 0$ or $n-c = 0$. If $c=0$ then the size of $T_0$ is $a\times b\times 0$, and if $n-c=0$ then the size of $T_3$ is $(n-a)\times (n-b)\times 0$, as shown in Figure~\ref{fig5_1}.
In both cases the capacity function gives the correct result, namely the number of rooks that the $n$-brick can contain.
\begin{figure}[htb]
\centering
\hbox{\includegraphics[scale=.234]{DoR_Figures/fig5_1a}\hfill\includegraphics[scale=.234]{DoR_Figures/fig5_1b}\hfill\includegraphics[scale=.234]{DoR_Figures/fig5_1c}}
\caption{}\label{fig5_1}
\end{figure}
The capacity function can be written as follows:
\[
\operatorname{cap}(n,a,b,c)=n^2-(a+b+c)n+(ab+bc+ca)=(n-a)(n-b)+c(a+b-n).
\]
If the integer variable $c$ goes from $0$ to $n$, then the value of the function goes from $(n-a)(n-b)$, which is the area of $T_3$ orthogonal to $z$, to $ab$, which is the area of $T_0$ orthogonal to $z$, as depicted on the left-hand side of Figure~\ref{fig5_1} and on the right-hand side of Figure~\ref{fig5_1}, respectively. The change is $(a+b-n)$ in each step.
The layer $c$ is balanced, so the yellow brick of the layer has $c_0$ rooks and the remote brick in this layer has $c_2$ rooks, with $c_0-c_2=(a+b-n)$.
The step from $(c-1)$ to $c$ can be seen in the Figure~\ref{fig5_2}.
\begin{figure}[htb]
\centering
\includegraphics[scale=.45] {DoR_Figures/fig5_2}
\caption{}\label{fig5_2}
\end{figure}
After this step, the number of rooks in brick $T_3$ changes by $-c_2$ and the number of rooks in brick $T_0$ changes by $c_0$, so the total change is $c_0-c_2=(a+b-n)$.
If $a+b-n>0$, then $(n-a)(n-b)<ab$, so the capacity increases from $(n-a)(n-b)$ to $ab$; if $a+b-n<0$, then $(n-a)(n-b)>ab$, so the capacity decreases from $(n-a)(n-b)$ to $ab$; and if $a+b-n=0$, then $(n-a)(n-b)=ab$, so the capacity does not change, i.e., the RBCs have the same capacity for all~$c$.
The capacity of an RBC can be regarded as a generalization of the capacity of an $n$-brick (degenerated RBC).
\begin{rmrk}
In a $3$-LSC $c_0+c_3=\dfrac{V(T_3)+V(T_0)}{n}$ for each RBC $(T_0,T_3)$. \\$V(T_3)+V(T_0)=V(T_3\cup T_0)$ because of $T_0\cap T_3=\emptyset$, hence
\[\varrho(T_0\cup T_3)=\dfrac{c_0+c_3}{V(T_0\cup T_3)}=\dfrac{c_0+c_3}{V(T_0)+V(T_3)}=\dfrac{1}{n},\]
hence each RBC $(T_0,T_3)$ has the standard density.
\end{rmrk}
\begin{figure}[htb]
\centering
\includegraphics [scale=.5]{DoR_Figures/fig5_3}
\caption{}\label{fig5_3}
\end{figure}
\begin{defi}
An $n$-brick is called an \emph{axis} if it has exactly one edge of size $n$.
\end{defi}
If we consider the bricks $T_{1b}$ and $T_{2bc}$ together, like the brown $n$-brick in Figure~\ref{fig5_3}, then the structure of $T_0$, $T_{1b}$, $T_{2bc}$, $T_3$ combined looks like a hinge (door hinge).
The brown $n$-brick $T_{1b}\cup T_{2bc}$ is the axis of the hinge, the bricks $T_0$ and $T_3$ are the leaves of the hinge.
On one hand, a hinge can be considered as an axis with two leaves; on the other hand, a hinge is the union of two disjoint $n$-bricks $(T_0\cup T_{1b})$ and $(T_3\cup T_{2bc})$. That gives
\begin{cor}[Hinge Volume]
\begin{equation}\label{(502)}
V(T_0\cup T_3)=V(T_0\cup T_{1b})+V(T_3\cup T_{2bc})-V(T_{1b}\cup T_{2bc})
\end{equation}
\end{cor}
and dividing both sides of \eqref{(502)} by $n$ we get
\begin{cor}[Hinge Capacity]\label{theo5.10}
\begin{equation}\label{(503)}
\operatorname{cap}(T_0,T_3)=\frac{V(T_0)+ V(T_{1b})}{n}+\frac{V(T_3)+ V(T_{2bc})}{n}-\frac{V(T_{1b})+V(T_{2bc})}{n}
\end{equation}
\end{cor}
With the help of equality \eqref{(503)} we can prove, without using the Distribution Theorem~\ref{theo2.17}, that any RBC has as many rooks as its capacity.
\begin{theo}
For any RBC $(T_0,T_3)$ in an LSC
\[
c_0+c_3=n^2-(a+b+c)n+(ab+bc+ca)
\]
\end{theo}
\begin{proof}
The $n$-brick $T_0\cup T_{1b}$ contains $ac$ rooks, the $n$-brick $T_{2bc}\cup T_3$ contains $(n-b)(n-c)$ rooks and the $n$-brick $T_{1b}\cup T_{2bc}$ contains $a(n-b)$ rooks. So the RBC $(T_0,T_3)$ contains
\[
ac+(n-b)(n-c)-a(n-b)
\]
rooks, i.e.,
\[
c_0+c_3=ac+(n-b)(n-c)-a(n-b),
\]
but the right-hand side is equal to $n^2-(a+b+c)n+(ab+bc+ca)$.%
\end{proof}
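The three rook counts used in the proof can be confirmed numerically on the cyclic 3-LSC (an illustration of mine, not from the paper):

```python
def count(n, pred):
    """Count rooks (i, j, (i+j) mod n) of the cyclic 3-LSC satisfying pred."""
    return sum(1 for i in range(n) for j in range(n) if pred(i, j, (i + j) % n))

n, a, b, c = 9, 4, 6, 2
assert count(n, lambda x, y, z: x < a and z < c) == a * c                # T0 u T1b
assert count(n, lambda x, y, z: y >= b and z >= c) == (n - b) * (n - c)  # T2bc u T3
assert count(n, lambda x, y, z: x < a and y >= b) == a * (n - b)         # T1b u T2bc
cap = n * n - (a + b + c) * n + (a * b + b * c + c * a)
assert a * c + (n - b) * (n - c) - a * (n - b) == cap
```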
\begin{rmrk}
The complement of a hinge is also a hinge, so the chess-board $H^3$ consists of two disjoint hinges.
\end{rmrk}
\bibliographystyle{plain}
\section{Introduction}
Quantum pumping is a transport mechanism which induces dc charge and
spin currents in a nano-scale conductor in the absence of a bias
voltage by means of a time-dependent control of some system
parameters. Research on quantum pumping has attracted continued
interest since its prototypical proposal due to its importance in
quantum dynamic theory and potential application in various
fields\cite{Ref40, Ref2, Ref3, Ref4, Ref5, Ref6, Ref7, Ref8, Ref9,
Ref10, Ref11, Ref12, Ref13, Ref14, Ref15, Ref16, Ref17, Ref18,
Ref19, Ref20, Ref21, Ref22, Ref23, Ref24, Ref25, Ref26, Ref27,
Ref28, Ref29, Ref31, Ref33, Ref36, Ref37, Ref41, Ref44, Ref45,
Ref46, Ref47}. The pumped current (PC) and noise properties in
various nano-scale structures were investigated such as the
magnetic-barrier-modulated two dimensional electron gas\cite{Ref5},
mesoscopic one-dimensional wire\cite{Ref7, Ref23}, quantum-dot
structures\cite{Ref6, Ref12, Ref13, Ref29, Ref30, Ref37}, mesoscopic
rings with Aharonov-Casher and Aharonov-Bohm effect\cite{Ref8},
magnetic tunnel junctions\cite{Ref11}, chains of tunnel-coupled
metallic islands\cite{Ref26}, the nanoscale helical
wire\cite{Ref27}, the Tomonaga-Luttinger liquid\cite{Ref25}, and
graphene-based devices\cite{Ref21, Ref22, Ref41, Ref44, Ref45,
Ref46, Ref47}.
Graphene continues to attract intense interest, especially as an
electronic system in which charge carriers are Dirac-like particles
with linear dispersion and zero rest mass\cite{Ref42}. Quantum
pumping properties of graphene-based devices have been investigated
by several groups\cite{Ref21, Ref22, Ref41, Ref44, Ref45, Ref46,
Ref47}. It is found that the direction of the PC can be reversed
when a high potential barrier demonstrates stronger transparency
than a low one as an effect of the Klein paradox\cite{Ref21}. The
shot noise properties of a quantum pump are important in two
aspects: understanding the underlying mechanisms of the shot noise
may offer possible ways to improve pumping efficiency and achieve
optimal pumping. On the other hand, the shot noise reflects current
correlation and is sensitive to the pump source
configuration\cite{Ref43}. The pumped shot noise (PSN) properties
may provide further information of the correlation between the
transport Dirac Fermions of graphene governed by the Klein paradox
and electron chirality. However, this topic has not ever been looked
into. In this work, we focus on the PSN properties in adiabatically
modulated graphene-based double-barrier structures based on general
expressions we derived from the scattering approach. The effect of
the Klein paradox on the PSN is illuminated.
\section{Theoretical formulation}
The crystal structure of undoped graphene layers is that of a
honeycomb lattice of covalent-bond carbon atoms. One valence
electron corresponds to one carbon atom and the structure is
composed of two sublattices, labeled by A and B. In the vicinity of
the ${\bf{K}}$ point and in the presence of a potential $U$, the
low-energy excitations of the gated graphene monolayer are described
by the two-dimensional (2D) Dirac equation
\begin{equation}
v_F \left( {{\mathbf{\sigma}} \cdot {\bf{\hat p}}} \right)\Psi = \left( {E - U} \right)\Psi ,
\end{equation}
where the pseudospin matrix $\vec \sigma $ has components given by
Pauli's matrices and ${\bf{\hat p}} = (p_x ,p_y )$ is the momentum
operator. The ``speed of light'' of the system is $v_{F}$, i.e., the
Fermi velocity ($v_F \approx 10^6 $ m/s). The eigenstates of Eq.
(1) are two-component spinors $\Psi = [\psi _A ,\psi _B ]^T $,
where $\psi _A $ and $\psi _B $ are the envelope functions
associated with the probability amplitudes at the respective
sublattice sites of the graphene sheet.
In the presence of a one-dimensional confining potential $U=U(x)$,
we attempt solutions of Eq. (1) in the form $\psi _A (x,y) = \phi _A
(x)e^{ik_y y} $ and $\psi _B (x,y) =i \phi _B (x)e^{ik_y y} $ due to
the translational invariance along the $y$ direction. The resulting
coupled, first-order differential equations read as
\begin{equation}
d\phi _B /d\xi + \beta \phi _B = (\varepsilon - u )\phi _A ,
\end{equation}
\begin{equation}
d\phi _A /d\xi - \beta \phi _A = - (\varepsilon - u )\phi _B .
\end{equation}
Here $\xi =x/L$, $\beta =k_y L$, $u = UL/\hbar v_F $, and
$\varepsilon =EL/\hbar v_F $ ($L$ is the width of the structure).
The incident angle $\theta $ is given by $\sin (\theta)=
\beta / \varepsilon$. We consider a double-barrier structure with
two square potentials of height $U_1$ and $U_2$, which can be
modulated in time by ac gate voltages (see Fig. 1). Eqs. (2) and
(3) admit solutions which describe electron states confined across
the well and propagating along it. Typical values of $L/4$ for the
barrier widths and $L/2$ for the inter-barrier separation are used;
the transmission and reflection amplitudes $t$ and $s$ are determined by
matching $\phi _{A}$ and $\phi _{B}$ at the region interfaces.
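The matching procedure described above can be sketched numerically. The following Python fragment (an illustration of mine, not the authors' code, in the dimensionless units of Eqs. (2)-(3); the region layout follows the text, while all parameter values are arbitrary choices) builds a transfer matrix from the plane-wave spinor solutions of constant-potential regions and returns the transmission probability; exact unit transmission at normal incidence is the Klein-paradox signature discussed below:

```python
import numpy as np

def region_matrix(xi, eps, u, beta):
    """Columns are the two plane-wave spinor solutions (phi_A, phi_B) of
    Eqs. (2)-(3) in a region of constant potential u, evaluated at xi.
    The longitudinal wave number is k = sqrt((eps-u)^2 - beta^2)."""
    k = np.sqrt(complex((eps - u) ** 2 - beta ** 2))
    e_p, e_m = np.exp(1j * k * xi), np.exp(-1j * k * xi)
    return np.array([[e_p, e_m],
                     [(eps - u) / (beta + 1j * k) * e_p,
                      (eps - u) / (beta - 1j * k) * e_m]])

def transmission(eps, u1, u2, beta):
    """Transmission through barriers of width 1/4 (heights u1, u2) separated
    by a well of width 1/2, with leads at u = 0 (all lengths in units of L)."""
    xs = [0.0, 0.25, 0.75, 1.0]      # interface positions
    us = [0.0, u1, 0.0, u2, 0.0]     # potential in each of the five regions
    W = np.eye(2, dtype=complex)
    for i, x in enumerate(xs):       # continuity of phi_A and phi_B
        W = np.linalg.solve(region_matrix(x, eps, us[i + 1], beta),
                            region_matrix(x, eps, us[i], beta)) @ W
    r = -W[1, 0] / W[1, 1]           # (1, r) on the left maps to (t, 0) on the right
    t = W[0, 0] + W[0, 1] * r
    return abs(t) ** 2

print(transmission(eps=40.0, u1=60.0, u2=60.0, beta=0.0))  # -> 1.0 up to rounding
```

At $\beta = 0$ (normal incidence) the two components of Eqs. (2)-(3) decouple into pure phase accumulation, so the barriers are perfectly transparent regardless of their heights, which the sketch reproduces.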
Following the standard scattering approach\cite{Ref3, Ref4} we
introduce the fermionic creation and annihilation operators for the
carrier scattering states. The operator $ \hat a_{L }^\dag (E,
\theta,t ) $ or $ \hat a_{L} (E, \theta ,t) $ creates or annihilates
particles with total energy $E$ and incident angle $\theta $ in the
left lead at time $t$, which are incident upon the sample.
Analogously, we define the creation $ \hat b_{L }^\dag (E, \theta
,t) $ and annihilation $ \hat b_{L } (E, \theta ,t) $ operators for
the outgoing single-particle states. Considering a particular
incident energy $E$ and incident angle $\theta$, the scattering
matrix $s$ follows from the relation
\begin{equation}
\left( {\begin{array}{*{20}c}
{b_{L } } \\
{b_{R } } \\
\end{array}} \right) = \underbrace {\left( {\begin{array}{*{20}c}
R & {T'} \\
T & {R'} \\
\end{array}} \right)}_{\hat s}\left( {\begin{array}{*{20}c}
{a_{L } } \\
{a_{R } } \\
\end{array}} \right),
\end{equation}
where, $T$ and $R$ are the scattering elements of incidence from the
left reservoir and $T'$ and $R'$ are those from the right reservoir.
The frequency of the potential modulation is small compared to the
characteristic times for traversal and reflection of electrons and
the pump is thus adiabatic. In this case one can employ an instant
scattering matrix approach, i.e. ${\hat s} (t)$ depends only
parametrically on the time $t$. To realize a quantum pump one varies
simultaneously two system parameters, e.g. \cite{Ref3, Ref4}
\begin{equation}
\begin{array}{l}
X_1 \left( t \right) = X_{10} + X_{\omega ,1} e^{i\left( {\omega t - \varphi _1 } \right)} + X_{\omega ,1} e^{ - i\left( {\omega t - \varphi _1 } \right)} , \\
X_2 \left( t \right) = X_{20} + X_{\omega ,2} e^{i\left( {\omega t - \varphi _2 } \right)} + X_{\omega ,2} e^{ - i\left( {\omega t - \varphi _2 } \right)} . \\
\end{array}
\end{equation}
Here, $X_1$ and $X_2$ are measures for the two time-dependent
barrier heights $U_1$ and $U_2$ (see Fig. 1), which can be modulated
by applying two low-frequency ($\omega$)
alternating gate voltages. $X_{\omega ,1} $ and $X_{\omega ,2} $
are the corresponding oscillating amplitudes with phases
$\varphi_{1/2}$;
$X_{10}$ and $X_{20}$ are the static (equilibrium) components. The scattering
matrix $\hat s$
being a function of parameters $X_{j} (t)$ depends on time.
We suppose an adiabatic quantum pump, i.e., the external parameter
changes so slowly that up to corrections of order $\hbar \omega /
\gamma$ ( $\gamma$ measures the escape rate), we can apply an
instant scattering description using the scattering matrix $ {\hat
s} \left( t \right)$ frozen at some time $t$. Usually the varying of
the wave is sufficiently smooth on the scale of the dwell time. And
we assume that the amplitude ${X_{\omega ,j} }$ is small enough to
keep only the terms linear in ${X_{\omega ,j} }$ in an expansion of
the scattering matrix\cite{Ref4}
\begin{equation}
\hat s\left( t
\right) \approx \hat s^0 + \hat s^{ - \omega } e^{i\omega t} + \hat
s^{ + \omega } e^{ - i\omega t} .
\end{equation}
In the limit of small frequencies the amplitudes $\hat s^{ \pm
\omega } $ can be expressed in terms of parametric derivatives of
the on-shell scattering matrix $\hat s$,
\begin{equation}
\hat s^{ \pm \omega } = \sum\limits_j {X_{\omega ,j} e^{ \pm
i\varphi _j } \frac{{\partial \hat s}}{{\partial X_j }}} .
\end{equation}
The expansion, Eq. (6), is equivalent to the nearest sideband
approximation which implies that a scattered electron can absorb or
emit only one energy quantum $\hbar \omega$ before it leaves the
scattering region.
The problem of current noise in a quantum pump is closely connected
with the problem of quantization of the charge pumped in one cycle.
On the other hand, the noise in mesoscopic phase-coherent conductors
is interesting in itself because it is very sensitive to quantum
mechanical interference effects and can give additional information
about the scattering matrix\cite{Ref4}. To describe the
current-current fluctuations we will use the correlation
function\cite{Ref48}
\begin{equation}
S_{\alpha \beta } \left( {t,t'} \right) = \frac{1}{2}\left\langle
{\Delta \hat I_\alpha \left( t \right)\Delta \hat I_\beta \left(
{t'} \right) + \Delta \hat I_\beta \left( {t'} \right)\Delta \hat
I_\alpha \left( t \right)} \right\rangle ,
\end{equation}
with $\Delta \hat I = \hat I - \left\langle {\hat I} \right\rangle $
and $\hat I_\alpha \left( t \right)$ is the quantum-mechanical
current operator in the lead $\alpha $ as
\begin{equation}
\hat I_\alpha \left( t \right) = \frac{e}{h}\left[ {\hat b_\alpha
^\dag \left( t \right)\hat b_\alpha \left( t \right) - \hat
a_\alpha ^\dag \left( t \right)\hat a_\alpha \left( t \right)}
\right].
\end{equation}
The time-dependent operator is $\hat a_\alpha \left( t \right) =
\int {dE\hat a_\alpha \left( E \right)e^{{{ - iEt} \mathord{\left/
{\vphantom {{ - iEt} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } $ and $\hat b_\alpha \left( t \right) =
\sum\limits_\beta {s_{\alpha \beta } \left( t \right)\hat a_\beta \left( t \right)} $
with ${s_{\alpha \beta } }$ an element of the instant scattering
matrix $\hat s$. Note that in the case of a time-dependent scatterer
the correlation function depends on two times $t$ and $t'$. Here we
are interested in the noise averaged over a long time ($\Delta t \gg
{{2\pi } \mathord{\left/
{\vphantom {{2\pi } \omega }} \right.
\kern-\nulldelimiterspace} \omega }$) and we investigate
\begin{equation}
S_{\alpha \beta } \left( \tau \right) = \frac{\omega }{{2\pi }}\int_0^{{{2\pi } \mathord{\left/
{\vphantom {{2\pi } \omega }} \right.
\kern-\nulldelimiterspace} \omega }} {dt\,S_{\alpha \beta } \left( {t,t + \tau }
\right)}.
\end{equation}
In addition we restrict our consideration to the zero-frequency
component of the noise spectra $S_{\alpha \beta } = \int
{d\tau\, S_{\alpha \beta } \left( \tau \right)} $. Substituting the current
operator Eq. (9), and taking into account Eqs. (4) and (6) we can
write the time-averaged zero-frequency PSN as
\begin{equation}
\begin{array}{c}
S_{\alpha \beta } = \frac{{e^2 \omega }}{{2\pi }}\sum\limits_{\mu \nu j_1 j_2 } {X_{\omega ,j_2 } X_{\omega ,j_1 } s_{\nu \beta }^{\dag 0} \frac{{\partial s_{\alpha \nu } }}{{\partial X_{j_1 } }}\frac{{\partial s_{\beta \mu } }}{{\partial X_{j_2 } }}s_{\mu \alpha }^{\dag 0} \cos \left( {\varphi _{j_1 } - \varphi _{j_2 } } \right)} \\
 \hspace{0.8cm} + \frac{{e^2 \omega }}{{2\pi }}\sum\limits_{\mu \nu j_1 j_2 } {X_{\omega ,j_2 } X_{\omega ,j_1 } s_{\nu \beta }^{\dag 0} s_{\alpha \nu }^0 \frac{{\partial s_{\beta \mu } }}{{\partial X_{j_2 } }}\frac{{\partial s_{\mu \alpha }^\dag }}{{\partial X_{j_1 } }}\cos \left( {\varphi _{j_1 } - \varphi _{j_2 } } \right)} \\
 \hspace{0.8cm} + \frac{{e^2 \omega }}{{2\pi }}\sum\limits_{\mu \nu j_1 j_2 } {X_{\omega ,j_2 } X_{\omega ,j_1 } \frac{{\partial s_{\nu \beta }^\dag }}{{\partial X_{j_2 } }}\frac{{\partial s_{\alpha \nu } }}{{\partial X_{j_1 } }}s_{\beta \mu }^0 s_{\mu \alpha }^{\dag 0} \cos \left( {\varphi _{j_1 } - \varphi _{j_2 } } \right)} \\
 \hspace{0.8cm} + \frac{{e^2 \omega }}{{2\pi }}\sum\limits_{\mu \nu j_1 j_2 } {X_{\omega ,j_2 } X_{\omega ,j_1 } \frac{{\partial s_{\nu \beta }^\dag }}{{\partial X_{j_2 } }}s_{\alpha \nu }^0 s_{\beta \mu }^0 \frac{{\partial s_{\mu \alpha }^\dag }}{{\partial X_{j_1 } }}\cos \left( {\varphi _{j_1 } - \varphi _{j_2 } } \right)} \\
 \hspace{-0.4cm} + \frac{{e^2 \omega }}{{2\pi }}\sum\limits_{\mu \nu j_1 j_2 j_3 j_4 } {\left[ {X_{\omega ,j_1 } X_{\omega ,j_4 } X_{\omega ,j_2 } X_{\omega ,j_3 } \frac{{\partial s_{\beta \mu } }}{{\partial X_{j_4 } }}\frac{{\partial s_{\mu \alpha }^\dag }}{{\partial X_{j_1 } }}} \right.} \\
 \hspace{0.8cm} \left. { \times \frac{{\partial s_{\alpha \nu } }}{{\partial X_{j_2 } }}\frac{{\partial s_{\nu \beta }^\dag }}{{\partial X_{j_3 } }}\cos \left( {\varphi _{j_4 } - \varphi _{j_1 } + \varphi _{j_3 } - \varphi _{j_2 } } \right)} \right]. \\
\end{array}
\end{equation}
Eq. (11) is the central result of this manuscript, which can be used
to investigate the time-averaged zero-frequency PSN properties in
different nanoscale adiabatic pumping structures. Detailed
derivation is provided in the Appendix A.
The PC could be expressed in terms of the scattering matrix as
follows\cite{Ref4, Ref21}.
\begin{equation} I_\alpha =
\frac{{e\omega }}{{2\pi }}\sum\limits_{\beta j_1 j_2 } {X_{\omega
,j_1 } X_{\omega ,j_2 } \frac{{\partial s_{\alpha \beta }
}}{{\partial X_{j_1 } }}\frac{{\partial s_{\alpha \beta }^*
}}{{\partial X_{j_2 } }}2i\sin \left( {\varphi _{j_1 } - \varphi
_{j_2 } } \right)}.
\end{equation}
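To illustrate how Eq. (12) is evaluated in practice, the following sketch (my addition; the $2\times 2$ unitary family stands in for the actual graphene scattering matrix, and the generators $A$, $B$ as well as all parameter values are arbitrary illustrative choices) computes the parametric derivatives by finite differences and checks the $\sin \left( {\varphi _{j_1 } - \varphi _{j_2 } } \right)$ structure:

```python
import numpy as np

def smatrix(X1, X2):
    """Toy unitary 2x2 scattering matrix depending on two pump parameters
    (a stand-in for the frozen s-matrix; any smooth unitary family works)."""
    A = np.array([[0.3, 0.5], [0.5, -0.2]])           # fixed Hermitian generators
    B = np.array([[0.1, 0.2j], [-0.2j, 0.4]])
    w, V = np.linalg.eigh(X1 * A + X2 * B)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T   # s = exp(iH) is unitary

def pumped_current(phi, X0=(1.0, 1.0), dX=1e-6, amp=0.01):
    """Evaluate Eq. (12) for lead alpha = L (row 0) with finite-difference
    parametric derivatives; phases phi_1 = 0, phi_2 = phi; e*omega/2pi set to 1."""
    ds = [(smatrix(X0[0] + dX, X0[1]) - smatrix(X0[0] - dX, X0[1])) / (2 * dX),
          (smatrix(X0[0], X0[1] + dX) - smatrix(X0[0], X0[1] - dX)) / (2 * dX)]
    phases = [0.0, phi]
    I = 0.0 + 0.0j
    for j1 in range(2):
        for j2 in range(2):
            I += amp * amp * np.sum(ds[j1][0, :] * ds[j2][0, :].conj()) \
                 * 2j * np.sin(phases[j1] - phases[j2])
    return I.real

print(pumped_current(0.0))   # -> 0.0
```

As expected from Eq. (12), the current vanishes for in-phase driving and is odd in the phase lag, $I(\phi) = -I(-\phi)$.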
Due to current conservation, it can be seen that for a two-lead
(left and right) quantum pump (see Fig. 1), $I_{L}=I_{R}$ and
$S_{LL} = S_{RR} = - S_{LR} = - S_{RL} $. It is reasonable to
consider only the $I_{L}$ and $S_{LL}$. The symbols $I_{p}$ and
$S_{p}$ are used for the PC $I_{L}$ and PSN $S_{LL}$, respectively.
A convenient measure for the relative noise strength is the Fano
factor defined by $F_{p}=S_{p}/2eI_{p}$, which characterizes the
noise with respect to the Poisson processes. The Poissonian shot
noise in the configuration of a quantum pump is discussed in the
Appendix B.
\section{Numerical results and interpretations}
We consider the PSN properties in the graphene-based conductor
modulated by two ac gate voltages sketched in Fig. 1. In numerical
calculations, the parameters are $U_{10}=U_{20}=100$ meV, $L=200$ nm,
and $U_{1 \omega}=U_{2 \omega}=0.01$ meV. The phase difference of the
two oscillating gate potentials, $\phi = \varphi _2 - \varphi _1 $,
is given in radians.
The PC, PSN, and Fano factor as functions of the incident angle
$\theta $ for different Fermi energies are shown in Fig. 2.
Electrons at the Fermi levels of the reservoirs are driven to
flow in one direction by modulating the two barriers with a phase
lag, which results in a dc PC at zero bias. The
direction of the PC can be reversed when a high potential barrier
demonstrates stronger transparency than a low one, which results
from the Klein paradox\cite{Ref21}. The PSN is nonnegative as it
measures the PC-PC correlation flowing in the same direction. It can
be seen that the PSN increases when the PC is increased. The Poisson shot noise corresponds to a process governed by uncorrelated electrons and barrier gates without conduction structure (see Appendix B). In graphene conductors, quantum states below the potential barriers are hole states. Transmission from electron states outside the potential barriers into the hole states inside the potential barriers is characterized by the Klein paradox. For some incident angles and certain potential heights, when the chirality condition is met, the potential barrier is transparent. In other situations, violating the chirality alignment, the potential barrier is opaque. As the ac drivers modulate the potential barriers in time, the transmission is varied and a dc current is pumped from one reservoir to the other. The Klein paradox virtually correlates the hole states with the electron states. Therefore, the PSN is remarkably enhanced beyond the Poisson value, the latter of which indicates uncorrelated transport. The PSN relative to the Poisson value, measured by the Fano factor, is presented in Fig. 2 (c). It can be seen that the Fano factor is above 1: the Klein-paradox-induced virtual correlation between electrons and holes enhances the PSN beyond the Poisson value. It is also revealed in Fig. 2 that the PSN and the Fano factor are extremely large at the incident angles where the PC reverses direction. At those incident angles, the chirality alignment is reversed, which induces extraordinary correlation between electrons and holes in virtual transport processes.
The PC, PSN, and Fano factor as functions of the Fermi energy of the two reservoirs
$E $ for the incident angle $\theta =0.01$ are shown in Fig. 3. The absolute value of the PC reaches its maxima at the transmission peaks of the two-barrier graphene structure. Around the transmission peaks, the PC reverses direction. In our pumping configuration, ${\varphi _1} < {\varphi _2}$. The right gate opens in advance of the left gate. In quantum pumps constructed from other conductors, the PC always flows from the right to the left reservoir at the ${\varphi _1} < {\varphi _2}$ phase lag. As a result of the Klein paradox, a higher potential barrier demonstrates stronger transmission when the chirality alignment is met, and the PC reverses direction. The chirality consistency favoring transmission is different between incident energies above and below the peak energy. When the Fermi energy is smaller than the Dirac point 100 meV, above the peak energy a higher potential barrier demonstrates stronger transmission and the PC flows from the left reservoir to the right. Below the peak energy, a higher potential barrier demonstrates weaker transmission and the PC flows from the right reservoir to the left. When the Fermi energy is larger than the Dirac point, the PC direction is reversed as the transmission configuration is reversed. Larger PCs have relatively stronger current-current correlation. The shot noise demonstrates peaks at the PC peaks as shown in Fig. 3 (b). The shot noise is positive since the rightward current flow correlates with the rightward current flow and vice versa. The Fano factor is above 1 due to the Klein-paradox-induced virtual correlation between electrons and holes. At energies where the PC reverses direction, the shot noise is extraordinarily enhanced beyond the Poisson value. At those energies, the chirality alignment is reversed, which induces extraordinary correlation between electrons and holes in virtual transport processes.
The PC, PSN, and the Fano factor as
functions of the driving phase difference are shown in Fig. 4. The PC varies sinusoidally with the driving phase $\phi $ and the PSN cosinusoidally, as can already be seen in Eqs. (11) and (12). The last term of Eq. (11) is a product of four pumping amplitudes, four derivatives of the scattering-matrix elements relative to the oscillating parameters, and a $\cos 2\phi $ function. As small pumping amplitudes are considered in our approach, the magnitude of this term is negligible. Therefore, the PSN is a function of $\cos \phi $ and no $\cos 2\phi $-form modulation is observable. From Fig. 4 (c) we can see that for all the Fermi energies considered the Fano factor varies with $\phi $ in similar forms. When the Fermi energy $E$ and the incident angle $\theta $ are fixed, the transmission features of the conducting structure are fixed. The variation of the pumping phase lag would not change the transmission features. For all Fermi energies and incident angles, the pumping properties as functions of the driving phase difference are similar. For configurations of $E$ and $\theta $ for which higher potential barriers have stronger transmission, the PC and Fano factor are positive at ${\varphi _2} - {\varphi _1} \in \left[ {\pi ,2\pi } \right]$ and negative at ${\varphi _2} - {\varphi _1} \in \left[ {0 ,\pi } \right]$. And for configurations of $E$ and $\theta $ for which lower potential barriers have stronger transmission, the sign of the PC and Fano factor is reversed. At the phase lags $0$, $\pi $, and $2 \pi $, the PC changes direction as a result of the swap of the opening order of the two gates. When the PC changes direction, interaction of electrons and holes in virtual processes is enhanced and the Fano factor demonstrates a sharp rise.
\section{Conclusions}
In summary, the PSN properties of adiabatically modulated graphene-based double-barrier structures are investigated. Within the scattering-matrix framework, general expressions for the adiabatically pumped shot noise in phase-coherent
mesoscopic conductors are derived. Numerical results for the PC, PSN, and Fano factor as functions of the incident angle, the Fermi energy of the reservoirs, and the phase difference of the two oscillating parameters are presented and compared with uncorrelated Poisson processes. It is revealed that the PSN is greatly enhanced beyond the Poisson value due to interactions of electrons and holes in Klein-type virtual tunneling processes. In particular, the PSN is
dramatically enhanced at the energy and incident-angle configurations at which the dc pumped current changes flow
direction.
\section{Acknowledgements}
This project was supported by the National Natural Science
Foundation of China (No. 11004063), the Fundamental Research Funds
for the Central Universities, SCUT (No. 2009ZM0299), the Natural
Science Foundation of SCUT (No. x2lxE5090410) and the Graduate
Course Construction Project of SCUT (No. yjzk2009001 and No.
yjzk2010009).
\section{Appendix A: Derivation of the pumped shot noise}
To describe the current-current fluctuations we will use the
correlation function\cite{Ref48}
\begin{equation}
\begin{array}{l}
S_{\alpha \beta } \left( {t,t'} \right) = \frac{1}{2}\left\langle {\Delta \hat I_\alpha \left( t \right)\Delta \hat I_\beta \left( {t'} \right) + \Delta \hat I_\beta \left( {t'} \right)\Delta \hat I_\alpha \left( t \right)} \right\rangle \\
\hspace{1.7cm} = \frac{1}{2}\left[ {\left\langle {\hat I_\alpha \left( t \right)\hat I_\beta \left( {t'} \right)} \right\rangle + \left\langle {\hat I_\beta \left( {t'} \right)\hat I_\alpha \left( t \right)} \right\rangle } \right. \\
\hspace{1.7cm} \left. { - \left\langle {\hat I_\alpha \left( t \right)} \right\rangle \left\langle {\hat I_\beta \left( {t'} \right)} \right\rangle - \left\langle {\hat I_\beta \left( {t'} \right)} \right\rangle \left\langle {\hat I_\alpha \left( t \right)} \right\rangle } \right], \\
\end{array}
\end{equation}
with $\Delta \hat I = \hat I - \left\langle {\hat I} \right\rangle $
and $\hat I_\alpha \left( t \right)$ is the quantum-mechanical
current operator in the lead $\alpha $. The zero-frequency pumped
shot noise (PSN) averaged over a long time ($\Delta t \gg {{2\pi }
\mathord{\left/
{\vphantom {{2\pi } \omega }} \right.
\kern-\nulldelimiterspace} \omega }$) is the time integral of
$S_{\alpha \beta } \left( {t,t'} \right)$ as follows.
\begin{equation}
S_{\alpha \beta } = \frac{\omega }{{2\pi }}\int_{ - \infty }^{ +
\infty } {\int_0^{\frac{{2\pi }}{\omega }} {S_{\alpha \beta } \left(
{t,t'} \right)dt'dt} }
\end{equation}
The first term in the PSN is
\begin{equation}
\frac{1}{2}\frac{\omega }{{2\pi }}\int_{ - \infty }^{ + \infty }
{\int_0^{\frac{{2\pi }}{\omega }} {\left\langle {\hat I_\alpha
\left( t \right)\hat I_\beta \left( {t'} \right)} \right\rangle
dt'dt} }
\end{equation}
with
\begin{equation}
\hat I_\alpha \left( t \right) = \frac{e}{h}\left[ {\hat b_\alpha
^\dag \left( t \right)\hat b_\alpha \left( t \right) - \hat
a_\alpha ^\dag \left( t \right)\hat a_\alpha \left( t \right)}
\right],
\end{equation}
and
\begin{equation}
\hat I_\beta \left( {t'} \right) = \frac{e}{h}\left[ {\hat b_\beta
^\dag \left( {t'} \right)\hat b_\beta \left( {t'} \right) - \hat
a_\beta ^\dag \left( {t'} \right)\hat a_\beta \left( {t'} \right)}
\right].
\end{equation}
Therefore, we have
\begin{equation}
\begin{array}{c}
\hat I_\alpha \left( t \right)\hat I_\beta \left( {t'} \right) = \frac{{e^2 }}{{h^2 }}\left[ {\hat b_\alpha ^\dag \left( t \right)\hat b_\alpha \left( t \right)\hat b_\beta ^\dag \left( {t'} \right)\hat b_\beta \left( {t'} \right)} \right. \\
\hspace{1.2cm} - \hat b_\alpha ^\dag \left( t \right)\hat b_\alpha \left( t \right)\hat a_\beta ^\dag \left( {t'} \right)\hat a_\beta \left( {t'} \right) \\
\hspace{1.2cm} - \hat a_\alpha ^\dag \left( t \right)\hat a_\alpha \left( t \right)\hat b_\beta ^\dag \left( {t'} \right)\hat b_\beta \left( {t'} \right) \\
\hspace{1.6cm} \left. { + \hat a_\alpha ^\dag \left( t \right)\hat a_\alpha \left( t \right)\hat a_\beta ^\dag \left( {t'} \right)\hat a_\beta \left( {t'} \right)} \right]. \\
\end{array}
\end{equation}
Substituting $\hat b_\alpha \left( t \right) =
\sum\limits_\beta {s_{\alpha \beta } \left( t \right)\hat a_\beta \left( t \right)} $
into the above equation, we have
\begin{equation}
\begin{array}{c}
\hat I_\alpha \left( t \right)\hat I_\beta \left( {t'} \right) = \frac{{e^2 }}{{h^2 }}\sum\limits_{\mu \upsilon \xi \eta } {\hat a_\mu ^\dag \left( t \right)s_{\mu \alpha }^\dag \left( t \right)s_{\alpha \upsilon } \left( t \right)\hat a_\upsilon \left( t \right)\hat a_\xi ^\dag \left( {t'} \right)s_{\xi \beta }^\dag \left( {t'} \right)s_{\beta \eta } \left( {t'} \right)\hat a_\eta \left( {t'} \right)} \\
\hspace{-0.6cm} - \frac{{e^2 }}{{h^2 }}\sum\limits_{\mu \upsilon } {\hat a_\mu ^\dag \left( t \right)s_{\mu \alpha }^\dag \left( t \right)s_{\alpha \upsilon } \left( t \right)\hat a_\upsilon \left( t \right)\hat a_\beta ^\dag \left( {t'} \right)\hat a_\beta \left( {t'} \right)} \\
\hspace{-0.4cm} - \frac{{e^2 }}{{h^2 }}\sum\limits_{\mu \upsilon } {\hat a_\alpha ^\dag \left( t \right)\hat a_\alpha \left( t \right)\hat a_\mu ^\dag \left( {t'} \right)s_{\mu \beta }^\dag \left( {t'} \right)s_{\beta \upsilon } \left( {t'} \right)\hat a_\upsilon \left( {t'} \right)} \\
\hspace{-3.3cm} + \frac{{e^2 }}{{h^2 }}\hat a_\alpha ^\dag \left( t \right)\hat a_\alpha \left( t \right)\hat a_\beta ^\dag \left( {t'} \right)\hat a_\beta \left( {t'} \right), \\
\end{array}
\end{equation}
and
\begin{equation}
\begin{array}{c}
\left\langle {\hat I_\alpha \left( t \right)} \right\rangle \left\langle {\hat I_\beta \left( {t'} \right)} \right\rangle = \frac{{e^2 }}{{h^2 }}\sum\limits_{\mu \upsilon \xi \eta } {\left\langle {\hat a_\mu ^\dag \left( t \right)s_{\mu \alpha }^\dag \left( t \right)s_{\alpha \upsilon } \left( t \right)\hat a_\upsilon \left( t \right)} \right\rangle \left\langle {\hat a_\xi ^\dag \left( {t'} \right)s_{\xi \beta }^\dag \left( {t'} \right)s_{\beta \eta } \left( {t'} \right)\hat a_\eta \left( {t'} \right)} \right\rangle } \\
\hspace{0.5cm} - \frac{{e^2 }}{{h^2 }}\sum\limits_{\mu \upsilon } {\left\langle {\hat a_\mu ^\dag \left( t \right)s_{\mu \alpha }^\dag \left( t \right)s_{\alpha \upsilon } \left( t \right)\hat a_\upsilon \left( t \right)} \right\rangle \left\langle {\hat a_\beta ^\dag \left( {t'} \right)\hat a_\beta \left( {t'} \right)} \right\rangle } \\
\hspace{0.7cm} - \frac{{e^2 }}{{h^2 }}\sum\limits_{\mu \upsilon } {\left\langle {\hat a_\alpha ^\dag \left( t \right)\hat a_\alpha \left( t \right)} \right\rangle \left\langle {\hat a_\mu ^\dag \left( {t'} \right)s_{\mu \beta }^\dag \left( {t'} \right)s_{\beta \upsilon } \left( {t'} \right)\hat a_\upsilon \left( {t'} \right)} \right\rangle } \\
\hspace{-2.2cm} + \frac{{e^2 }}{{h^2 }}\left\langle {\hat a_\alpha ^\dag \left( t \right)\hat a_\alpha \left( t \right)} \right\rangle \left\langle {\hat a_\beta ^\dag \left( {t'} \right)\hat a_\beta \left( {t'} \right)} \right\rangle . \\
\end{array}
\end{equation}
Using $\hat a_\alpha \left( t \right) = \int {dE\hat a_\alpha
\left( E \right)e^{{{ - iEt} \mathord{\left/
{\vphantom {{ - iEt} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } $ and
$\hat a_{\alpha}^{\dag} \left( t \right) = \int {dE\hat a_{\alpha}^{\dag}
\left( E \right)e^{{{ iEt} \mathord{\left/
{\vphantom {{ - iEt} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } $, the first term in Eq. (19)
reads
\begin{equation}
\begin{array}{l}
\frac{{e^2 }}{{h^2 }}\sum\limits_{\mu \upsilon \xi \eta } {\int {dE_1 dE_2 dE_3 dE_4 } \hat a_\mu ^\dag \left( {E_1 } \right)e^{{{iE_1 t} \mathord{\left/
{\vphantom {{iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\mu \alpha }^\dag \left( t \right)s_{\alpha \upsilon } \left( t \right)\hat a_\upsilon \left( {E_2 } \right)e^{{{ - iE_2 t} \mathord{\left/
{\vphantom {{ - iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } \\
\hspace{0.6cm} \times \hat a_\xi ^\dag \left( {E_3 } \right)e^{{{iE_3 t'} \mathord{\left/
{\vphantom {{iE_3 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\xi \beta }^\dag \left( {t'} \right)s_{\beta \eta } \left( {t'} \right)\hat a_\eta \left( {E_4 } \right)e^{{{ - iE_4 t'} \mathord{\left/
{\vphantom {{ - iE_4 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} . \\
\end{array}
\end{equation}
Wick's theorem gives the quantum statistical expectation value of
products of four operators $\hat a$. For a Fermi gas at equilibrium
this expectation value is\cite{Ref48}
\begin{equation}
\begin{array}{l}
\left\langle {\hat a_\mu ^\dag \left( {E_1 } \right)\hat a_\upsilon \left( {E_2 } \right)\hat a_\xi ^\dag \left( {E_3 } \right)\hat a_\eta \left( {E_4 } \right)} \right\rangle - \left\langle {\hat a_\mu ^\dag \left( {E_1 } \right)\hat a_\upsilon \left( {E_2 } \right)} \right\rangle \left\langle {\hat a_\xi ^\dag \left( {E_3 } \right)\hat a_\eta \left( {E_4 } \right)} \right\rangle \\
= \delta _{\mu \eta } \delta _{\upsilon \xi } \delta \left( {E_1 - E_4 } \right)\delta \left( {E_2 - E_3 } \right)f_\mu \left( {E_1 } \right)\left[ {1 - f_\upsilon \left( {E_2 } \right)} \right]. \\
\end{array}
\end{equation}
$f_{\alpha } (E)$ is the Fermi distribution function of the $\alpha
$ reservoir connected to the adiabatically modulated conductor.
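As a sanity check of Eq. (22), the single-energy analogue of the Wick contraction can be verified numerically for two fermionic modes in a product thermal state (an illustrative sketch; the Jordan-Wigner construction and the occupations $f_1$, $f_2$ are our choices, not part of the derivation):

```python
import numpy as np

# Two fermionic modes via Jordan-Wigner; per-mode basis |0>, |1>.
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation on one mode
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

a1 = np.kron(sm, I2)                      # a_mu
a2 = np.kron(sz, sm)                      # a_nu (JW string on mode 1)

f1, f2 = 0.3, 0.8                         # occupations f_mu, f_nu (arbitrary)
rho = np.kron(np.diag([1 - f1, f1]), np.diag([1 - f2, f2]))

def ev(op):
    # thermal expectation value Tr(rho * op)
    return np.trace(rho @ op).real

# LHS of Eq. (22) for mu != nu (single-energy analogue, deltas -> 1)
lhs = (ev(a1.conj().T @ a2 @ a2.conj().T @ a1)
       - ev(a1.conj().T @ a2) * ev(a2.conj().T @ a1))
rhs = f1 * (1 - f2)                       # f_mu (1 - f_nu)
assert abs(lhs - rhs) < 1e-12
```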
Substituting Eq. (22) into the first term of $\left\langle {\hat
I_\alpha \left( t \right)\hat I_\beta \left( {t'} \right)}
\right\rangle - \left\langle {\hat I_\alpha \left( t \right)}
\right\rangle \left\langle {\hat I_\beta \left( {t'} \right)}
\right\rangle $, we have
\begin{equation}
\begin{array}{l}
\frac{{e^2 }}{{h^2 }}\sum\limits_{\mu \upsilon \xi \eta } {\int {dE_1 dE_2 dE_3 dE_4 } \delta _{\mu \eta } \delta _{\nu \xi } \delta \left( {E_1 - E_4 } \right)\delta \left( {E_2 - E_3 } \right)f_\mu \left( {E_1 } \right)\left[ {1 - f_\nu \left( {E_2 } \right)} \right]} \\
\hspace{0.6cm} \times e^{{{iE_1 t} \mathord{\left/
{\vphantom {{iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\mu \alpha }^\dag \left( t \right)s_{\alpha \upsilon } \left( t \right)e^{{{ - iE_2 t} \mathord{\left/
{\vphantom {{ - iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_3 t'} \mathord{\left/
{\vphantom {{iE_3 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\xi \beta }^\dag \left( {t'} \right)s_{\beta \eta } \left( {t'} \right)e^{{{ - iE_4 t'} \mathord{\left/
{\vphantom {{ - iE_4 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} . \\
\end{array}
\end{equation}
Integrating out $\eta $, $\xi $, $E_4$, and $E_3$, we obtain
\begin{equation}
\begin{array}{l}
\frac{{e^2 }}{{h^2 }}\sum\limits_{\mu \upsilon } {\int {dE_1 dE_2 } f_\mu \left( {E_1 } \right)\left[ {1 - f_\nu \left( {E_2 } \right)} \right]e^{{{iE_1 t} \mathord{\left/
{\vphantom {{iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\mu \alpha }^\dag \left( t \right)} \\
\hspace{0.6cm} \times s_{\alpha \upsilon } \left( t \right)e^{{{ - iE_2 t} \mathord{\left/
{\vphantom {{ - iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_2 t'} \mathord{\left/
{\vphantom {{iE_2 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\nu \beta }^\dag \left( {t'} \right)s_{\beta \mu } \left( {t'} \right)e^{{{ - iE_1 t'} \mathord{\left/
{\vphantom {{ - iE_1 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} . \\
\end{array}
\end{equation}
Applying similar procedures to all the other terms in Eq. (13), we
obtain
\begin{equation}
\begin{array}{l}
S_{\alpha \beta } \left( {t,t'} \right) = \frac{{e^2 }}{{2h^2 }}\sum\limits_{\mu \upsilon } {\left[ {\int {dE_1 dE_2 } f_\mu \left( {E_1 } \right)\left[ {1 - f_\nu \left( {E_2 } \right)} \right]e^{{{iE_1 t} \mathord{\left/
{\vphantom {{iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\mu \alpha }^\dag \left( t \right)} \right.} \\
\left. { \times s_{\alpha \upsilon } \left( t \right)e^{{{ - iE_2 t} \mathord{\left/
{\vphantom {{ - iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_2 t'} \mathord{\left/
{\vphantom {{iE_2 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\nu \beta }^\dag \left( {t'} \right)s_{\beta \mu } \left( {t'} \right)e^{{{ - iE_1 t'} \mathord{\left/
{\vphantom {{ - iE_1 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } \right] \\
- \frac{{e^2 }}{{2h^2 }}\int {dE_1 dE_2 } f_\beta \left( {E_1 } \right)\left[ {1 - f_\beta \left( {E_2 } \right)} \right]e^{{{iE_1 t} \mathord{\left/
{\vphantom {{iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\beta \alpha }^\dag \left( t \right)s_{\alpha \beta } \left( t \right)e^{{{ - iE_2 t} \mathord{\left/
{\vphantom {{ - iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_2 t'} \mathord{\left/
{\vphantom {{iE_2 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{ - iE_1 t'} \mathord{\left/
{\vphantom {{ - iE_1 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} \\
- \frac{{e^2 }}{{2h^2 }}\int {dE_1 dE_2 } f_\alpha \left( {E_1 } \right)\left[ {1 - f_\alpha \left( {E_2 } \right)} \right]e^{{{iE_1 t} \mathord{\left/
{\vphantom {{iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{ - iE_2 t} \mathord{\left/
{\vphantom {{ - iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_2 t'} \mathord{\left/
{\vphantom {{iE_2 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\alpha \beta }^\dag \left( {t'} \right)s_{\beta \alpha } \left( {t'} \right)e^{{{ - iE_1 t'} \mathord{\left/
{\vphantom {{ - iE_1 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} \\
+ \frac{{e^2 }}{{2h^2 }}\int {dE_1 dE_2 } f_\alpha \left( {E_1 } \right)\left[ {1 - f_\alpha \left( {E_2 } \right)} \right]e^{{{iE_1 t} \mathord{\left/
{\vphantom {{iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{ - iE_2 t} \mathord{\left/
{\vphantom {{ - iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_2 t'} \mathord{\left/
{\vphantom {{iE_2 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{ - iE_1 t'} \mathord{\left/
{\vphantom {{ - iE_1 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} \\
+ \frac{{e^2 }}{{2h^2 }}\sum\limits_{\mu \upsilon } {\left[ {\int {dE_1 dE_2 } f_\mu \left( {E_1 } \right)\left[ {1 - f_\nu \left( {E_2 } \right)} \right]e^{{{iE_1 t'} \mathord{\left/
{\vphantom {{iE_1 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\mu \beta }^\dag \left( {t'} \right)} \right.} \\
\left. { \times s_{\beta \upsilon } \left( {t'} \right)e^{{{ - iE_2 t'} \mathord{\left/
{\vphantom {{ - iE_2 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_2 t} \mathord{\left/
{\vphantom {{iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\nu \alpha }^\dag \left( t \right)s_{\alpha \mu } \left( t \right)e^{{{ - iE_1 t} \mathord{\left/
{\vphantom {{ - iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } \right] \\
- \frac{{e^2 }}{{2h^2 }}\int {dE_1 dE_2 } f_\alpha \left( {E_1 } \right)\left[ {1 - f_\alpha \left( {E_2 } \right)} \right]e^{{{iE_1 t'} \mathord{\left/
{\vphantom {{iE_1 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\alpha \beta }^\dag \left( {t'} \right)s_{\beta \alpha } \left( {t'} \right)e^{{{ - iE_2 t'} \mathord{\left/
{\vphantom {{ - iE_2 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_2 t} \mathord{\left/
{\vphantom {{iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{ - iE_1 t} \mathord{\left/
{\vphantom {{ - iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} \\
- \frac{{e^2 }}{{2h^2 }}\int {dE_1 dE_2 } f_\beta \left( {E_1 } \right)\left[ {1 - f_\beta \left( {E_2 } \right)} \right]e^{{{iE_1 t'} \mathord{\left/
{\vphantom {{iE_1 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{ - iE_2 t'} \mathord{\left/
{\vphantom {{ - iE_2 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_2 t} \mathord{\left/
{\vphantom {{iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} s_{\beta \alpha }^\dag \left( t \right)s_{\alpha \beta } \left( t \right)e^{{{ - iE_1 t} \mathord{\left/
{\vphantom {{ - iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} \\
+ \frac{{e^2 }}{{2h^2 }}\int {dE_1 dE_2 } f_\beta \left( {E_1 } \right)\left[ {1 - f_\beta \left( {E_2 } \right)} \right]e^{{{iE_1 t'} \mathord{\left/
{\vphantom {{iE_1 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{ - iE_2 t'} \mathord{\left/
{\vphantom {{ - iE_2 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_2 t} \mathord{\left/
{\vphantom {{iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{ - iE_1 t} \mathord{\left/
{\vphantom {{ - iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} . \\
\end{array}
\end{equation}
The first term of the above equation contains a product of four
scattering-matrix elements. We list the four scattering-matrix
factors expanded in the form of Eq. (6) as
\begin{equation}
\begin{array}{*{20}c}
1 & 2 & 3 \\
{\left( {s_{\mu \alpha }^{\dag 0} } \right.} & { + s_{\mu \alpha }^{\dag - \omega } e^{ - i\omega t} } & {\left. { + s_{\mu \alpha }^{\dag + \omega } e^{i\omega t} } \right)} \\
{\left( {s_{\alpha \nu }^0 } \right.} & { + s_{\alpha \nu }^{ - \omega } e^{i\omega t} } & {\left. { + s_{\alpha \nu }^{ + \omega } e^{ - i\omega t} } \right)} \\
{\left( {s_{\nu \beta }^{\dag 0} } \right.} & { + s_{\nu \beta }^{\dag - \omega } e^{ - i\omega t'} } & {\left. { + s_{\nu \beta }^{\dag + \omega } e^{i\omega t'} } \right)} \\
{\left( {s_{\beta \mu }^0 } \right.} & { + s_{\beta \mu }^{ - \omega } e^{i\omega t'} } & {\left. { + s_{\beta \mu }^{ + \omega } e^{ - i\omega t'} } \right)}
.\\
\end{array}
\end{equation}
We calculate the contribution of the column-1111 term of Eq. (26) to the
time-averaged zero-frequency PSN as
\begin{equation}
\begin{array}{l}
\frac{{e^2 }}{{h^2 }}\frac{\omega }{{4\pi }}\sum\limits_{\mu \nu } {\int {dE_1 dE_2 } \int_{ - \infty }^{ + \infty } {dt} \int_0^{\frac{{2\pi }}{\omega }} {dt'\left[ {f_\mu \left( {E_1 } \right)\left[ {1 - f_\upsilon \left( {E_2 } \right)} \right]e^{{{iE_1 t} \mathord{\left/
{\vphantom {{iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } \right.} } \\
\hspace{1cm} \left. { \times s_{\mu \alpha }^{\dag 0} s_{\alpha \upsilon }^0 s_{\upsilon \beta }^{\dag 0} s_{\beta \mu }^0 e^{{{ - iE_2 t} \mathord{\left/
{\vphantom {{ - iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_2 t'} \mathord{\left/
{\vphantom {{iE_2 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{ - iE_1 t'} \mathord{\left/
{\vphantom {{ - iE_1 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } \right]. \\
\end{array}
\end{equation}
From the relation $\frac{1}{{2\pi }} \int_{ - \infty }^{ + \infty }
{dte^{{{i\left( {E_1 - E_2 } \right)t} \mathord{\left/
{\vphantom {{i\left( {E_1 - E_2 } \right)t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } =\hbar \delta \left( {E_1 - E_2 }
\right)$, it can be seen that the two-fold integral over the energy
reduces to a single one. In the quantum-pump configuration no bias is
applied, so at zero temperature the Fermi distribution function
$f_\alpha \left( E \right)$ is simultaneously $1$ or $0$ for all
leads at any given energy. Hence,
$ {f_\mu \left( {E } \right)\left[ {1 - f_\nu \left( {E } \right)}
\right]=0} $ for any $\mu$ and $\nu$, and Eq. (27) vanishes. For the
same reason, all the 11** terms entering the PSN vanish, since the
$t'$ exponential $e^{\pm i \omega t'}$ does not affect the integral
over the time $t$. Next we evaluate the contribution of the 1211 term
to the time-averaged zero-frequency PSN:
\begin{equation}
\begin{array}{l}
\frac{{e^2 }}{{h^2 }}\frac{\omega }{{4\pi }}\sum\limits_{\mu \nu } {\int {dE_1 dE_2 } \int_{ - \infty }^{ + \infty } {dt} \int_0^{\frac{{2\pi }}{\omega }} {dt'\left[ {f_\mu \left( {E_1 } \right)\left[ {1 - f_\upsilon \left( {E_2 } \right)} \right]e^{{{iE_1 t} \mathord{\left/
{\vphantom {{iE_1 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } \right.} } \\
\hspace{1cm} \left. { \times s_{\mu \alpha }^{\dag 0} s_{\alpha \upsilon }^{ - \omega } e^{i\omega t} s_{\upsilon \beta }^{\dag 0} s_{\beta \mu }^0 e^{{{ - iE_2 t} \mathord{\left/
{\vphantom {{ - iE_2 t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{iE_2 t'} \mathord{\left/
{\vphantom {{iE_2 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} e^{{{ - iE_1 t'} \mathord{\left/
{\vphantom {{ - iE_1 t'} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } \right]. \\
\end{array}
\end{equation}
With the definition of the $\delta $ function
\begin{equation}
\frac{1}{{2\pi }} \int_{ - \infty }^{ + \infty } {dte^{{{i\left(
{E_1 + \hbar \omega - E_2 } \right)t} \mathord{\left/
{\vphantom {{i\left( {E_1 + \hbar \omega - E_2 } \right)t} \hbar }} \right.
\kern-\nulldelimiterspace} \hbar }} } = \hbar \delta \left( {E_1 + \hbar \omega - E_2 }
\right),
\end{equation}
we get
\begin{equation}
\frac{{e^2 }}{h}\frac{\omega }{{4 \pi }}\sum\limits_{\mu \nu }
{\int {dE_1 } \int_0^{\frac{{2\pi }}{\omega }} {dt'f_\mu \left(
{E_1 } \right)\left[ {1 - f_\upsilon \left( {E_1 + \hbar \omega }
\right)} \right]s_{\mu \alpha }^{\dag 0} s_{\alpha \upsilon }^{ -
\omega } s_{\upsilon \beta }^{\dag 0} s_{\beta \mu }^0 e^{i\omega
t'} } } .
\end{equation}
$e^{i\omega t'}$ is a periodic function of $t'$ with the period
$2\pi / \omega$, and its integral over one period is zero. Therefore
the whole term above is zero. Similarly, the 1212 term is zero: it
differs from the 1211 term only by an additional exponential
$e^{i \omega t'}$, whose one-period integral is again zero. Following
analogous procedures, we can derive the 1213 term as
\begin{equation}
\begin{array}{l}
\frac{{e^2 }}{h}\frac{\omega }{{4\pi }}\sum\limits_{\mu \nu } {\int {dE_1 } \int_0^{\frac{{2\pi }}{\omega }} {dt'f_\mu \left( {E_1 } \right)\left[ {1 - f_\upsilon \left( {E_1 + \hbar \omega } \right)} \right]s_{\mu \alpha }^{\dag 0} s_{\alpha \upsilon }^{ - \omega } s_{\upsilon \beta }^{\dag 0} s_{\beta \mu }^{ + \omega } } } \\
\hspace{1cm} = \frac{{e^2 }}{{2h}}\sum\limits_{\mu \nu } {\int {dE_1 } f_\mu \left( {E_1 } \right)\left[ {1 - f_\upsilon \left( {E_1 + \hbar \omega } \right)} \right]s_{\mu \alpha }^{\dag 0} s_{\alpha \upsilon }^{ - \omega } s_{\upsilon \beta }^{\dag 0} s_{\beta \mu }^{ + \omega } } . \\
\end{array}
\end{equation}
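These vanishing arguments rest on one-period integrals of the driving exponentials; a quick numerical check (illustrative, with an arbitrarily chosen $\omega$):

```python
import numpy as np

omega = 3.0                       # arbitrary pumping frequency (illustrative)
N = 100_000
T = 2.0 * np.pi / omega
dt = T / N
tp = (np.arange(N) + 0.5) * dt    # midpoint grid over one driving period

# One-period integrals of exp(i*omega*t') and exp(2i*omega*t') vanish,
# which kills the 1211- and 1212-type terms; the constant exponential
# integrates to the full period, which is why the 1213 term survives.
for k in (1, 2):
    val = np.sum(np.exp(1j * k * omega * tp)) * dt
    assert abs(val) < 1e-9

period = np.sum(np.exp(0j * tp)) * dt
assert abs(period - T) < 1e-12
```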
The quantum pumping configuration sets equal chemical potentials in
all reservoirs, i.e., for any $\alpha $, we have
\begin{equation}
f_\alpha \left( E \right) = \left\{ \begin{array}{l}
\begin{array}{*{20}c}
{1,} & {E \le \mu ,} \\
\end{array} \\
\begin{array}{*{20}c}
{0,} & {E > \mu .} \\
\end{array} \\
\end{array} \right.
\end{equation}
Hence, only the integral range $\int_{\mu - \hbar \omega }^\mu
{dE_1 } $ contributes to Eq. (31), which yields
\begin{equation}
\frac{{e^2 \omega }}{{4\pi }}\sum\limits_{\mu \nu } {s_{\mu \alpha
}^{\dag 0} s_{\alpha \upsilon }^{ - \omega } s_{\upsilon \beta
}^{\dag 0} s_{\beta \mu }^{ + \omega } }.
\end{equation}
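This reduction of the energy integral can be verified numerically with the zero-temperature step functions of Eq. (32): the integral $\int dE_1\, f_\mu(E_1)[1 - f_\nu(E_1 + \hbar\omega)]$ equals $\hbar\omega$, which converts the prefactor $e^2/(2h)$ into the $e^2\omega/(4\pi)$ of Eq. (33). A sketch with arbitrary illustrative values:

```python
import numpy as np

mu_chem = 0.5          # common chemical potential (arbitrary units)
hbar_omega = 0.1       # modulation quantum (arbitrary, within the grid range)

def f(E):
    # Zero-temperature Fermi function of Eq. (32)
    return np.where(E <= mu_chem, 1.0, 0.0)

E = np.linspace(mu_chem - 1.0, mu_chem + 1.0, 2_000_001)
dE = E[1] - E[0]
integral = np.sum(f(E) * (1.0 - f(E + hbar_omega))) * dE

# Only E in (mu - hbar*omega, mu] contributes, so the integral equals
# hbar*omega; with e^2/(2h) and h = 2*pi*hbar this gives e^2*omega/(4*pi).
assert abs(integral - hbar_omega) < 1e-4
```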
Analogously, the 1221 term is equal to
\begin{equation}
\frac{{e^2 \omega }}{{4\pi }}\sum\limits_{\mu \nu } {s_{\mu \alpha
}^{\dag 0} s_{\alpha \upsilon }^{ - \omega } s_{\upsilon \beta
}^{\dag - \omega } s_{\beta \mu }^0 }.
\end{equation}
Following similar algebra, one can see that the 1222, 1223, $
\cdots $ , 3111, and 3112 terms are all zero, while the 3113 term is
equal to
\begin{equation}
\frac{{e^2 \omega }}{{4\pi }}\sum\limits_{\mu \nu } {s_{\mu \alpha
}^{\dag + \omega } s_{\alpha \upsilon }^0 s_{\upsilon \beta }^{\dag
0} s_{\beta \mu }^{ + \omega } } .
\end{equation}
The 3121 term is equal to
\begin{equation}
\frac{{e^2 \omega }}{{4\pi }}\sum\limits_{\mu \nu } {s_{\mu \alpha
}^{\dag + \omega } s_{\alpha \upsilon }^0 s_{\upsilon \beta }^{\dag
- \omega } s_{\beta \mu }^0 } .
\end{equation}
The 3122, 3123, $ \cdots $ , 3221, 3222 terms are all zero. The 3223
term is equal to
\begin{equation}
\begin{array}{l}
\frac{{e^2 }}{h}\frac{\omega }{{4\pi }}\sum\limits_{\mu \nu } {\int {dE_1 } \int_0^{\frac{{2\pi }}{\omega }} {dt'f_\mu \left( {E_1 } \right)\left[ {1 - f_\upsilon \left( {E_1 + 2\hbar \omega } \right)} \right]} } \\
\hspace{2cm} \times s_{\mu \alpha }^{\dag + \omega } s_{\alpha \upsilon }^{ - \omega } s_{\upsilon \beta }^{\dag - \omega } s_{\beta \mu }^{ + \omega } \\
\hspace{1cm} = \frac{{e^2 \omega }}{{2\pi }}\sum\limits_{\mu \nu } {s_{\mu \alpha }^{\dag + \omega } s_{\alpha \upsilon }^{ - \omega } s_{\upsilon \beta }^{\dag - \omega } s_{\beta \mu }^{ + \omega } } . \\
\end{array}
\end{equation}
The remaining terms from 3231 to 3333 are all zero. Following similar
algebra, one can show that the two-scattering-matrix and
no-scattering-matrix terms are all equal to zero. The
contribution of $\left\langle {\hat I_\beta \left( {t'} \right)\hat
I_\alpha \left( t \right)} \right\rangle - \left\langle {\hat
I_\beta \left( {t'} \right)} \right\rangle \left\langle {\hat
I_\alpha \left( t \right)} \right\rangle $ follows analogously from that of
$\left\langle {\hat I_\alpha \left( t \right)\hat I_\beta \left(
{t'} \right)} \right\rangle - \left\langle {\hat I_\alpha \left( t
\right)} \right\rangle \left\langle {\hat I_\beta \left( {t'}
\right)} \right\rangle $. In total, five plus five terms contribute to
the time-averaged zero-frequency PSN. Collecting the above results
and using the expansion of the scattering matrix [Eqs. (6) and (7)],
we reach the general expression of the time-averaged zero-frequency
PSN:
\begin{equation}
\begin{array}{c}
S_{\alpha \beta } = \frac{{e^2 \omega }}{{2\pi }}\sum\limits_{\mu \nu j_1 j_2 } {X_{\omega ,j_2 } X_{\omega ,j_1 } s_{\upsilon \beta }^{\dag 0} \frac{{\partial s_{\alpha \nu } }}{{\partial X_{j_1 } }}\frac{{\partial s_{\beta \mu } }}{{\partial X_{j_2 } }}s_{\mu \alpha }^{\dag 0} \cos \left( {\varphi _{j_1 } - \varphi _{j_2 } } \right)} \\
\hspace{0.8cm} + \frac{{e^2 \omega }}{{2\pi }}\sum\limits_{\mu \nu j_1 j_2 } {X_{\omega ,j_2 } X_{\omega ,j_1 } s_{\upsilon \beta }^{\dag 0} s_{\alpha \nu }^0 \frac{{\partial s_{\beta \mu } }}{{\partial X_{j_2 } }}\frac{{\partial s_{\mu \alpha }^\dag }}{{\partial X_{j_1 } }}\cos \left( {\varphi _{j_1 } - \varphi _{j_2 } } \right)} \\
\hspace{0.8cm} + \frac{{e^2 \omega }}{{2\pi }}\sum\limits_{\mu \nu j_1 j_2 } {X_{\omega ,j_2 } X_{\omega ,j_1 } \frac{{\partial s_{\upsilon \beta }^\dag }}{{\partial X_{j_2 } }}\frac{{\partial s_{\alpha \upsilon } }}{{\partial X_{j_1 } }}s_{\beta \mu }^0 s_{\mu \alpha }^{\dag 0} \cos \left( {\varphi _{j_1 } - \varphi _{j_2 } } \right)} \\
\hspace{0.8cm} + \frac{{e^2 \omega }}{{2\pi }}\sum\limits_{\mu \nu j_1 j_2 } {X_{\omega ,j_2 } X_{\omega ,j_1 } \frac{{\partial s_{\upsilon \beta }^\dag }}{{\partial X_{j_2 } }}s_{\alpha \upsilon }^0 s_{\beta \mu }^0 \frac{{\partial s_{\mu \alpha }^\dag }}{{\partial X_{j_1 } }}\cos \left( {\varphi _{j_1 } - \varphi _{j_2 } } \right)} \\
\hspace{-0.4cm} + \frac{{e^2 \omega }}{{2\pi }}\sum\limits_{\mu \upsilon j_1 j_2 j_3 j_4 } {\left[ {X_{\omega ,j_1 } X_{\omega ,j_4 } X_{\omega ,j_2 } X_{\omega ,j_3 } \frac{{\partial s_{\beta \mu } }}{{\partial X_{j_4 } }}\frac{{\partial s_{\mu \alpha }^\dag }}{{\partial X_{j_1 } }}} \right.} \\
\hspace{0.8cm} \left. { \times \frac{{\partial s_{\alpha \upsilon } }}{{\partial X_{j_2 } }}\frac{{\partial s_{\upsilon \beta }^\dag }}{{\partial X_{j_3 } }}\cos \left( {\varphi _{j_4 } - \varphi _{j_1 } + \varphi _{j_3 } - \varphi _{j_2 } } \right)} \right]. \\
\end{array}
\end{equation}
\section{Appendix B: Discussion of the Poissonian pumped shot noise}
Schottky's result\cite{Ref48, Ref49} for the shot noise corresponds to the uncorrelated arrival of particles whose time intervals between arrivals follow a Poissonian distribution, $P\left( {\Delta t} \right) = {\tau ^{ - 1}}\exp \left( { - {{\Delta t} / \tau }} \right)$, with $\tau $ the
mean time interval between carriers. [$P\left( {\Delta t} \right)$ is normalized such that $\int_0^{ + \infty } {P\left( {\Delta t} \right)d\left( {\Delta t} \right)} = 1$ and $\int_0^{ + \infty } {\left( {\Delta t} \right)P\left( {\Delta t} \right)d\left( {\Delta t} \right)} = \tau $.] With this Poissonian time-interval distribution, we can construct the Poissonian current and shot noise. It is convenient to look at a single-electron tunneling process with $P\left( {\Delta t} \right)$ normalized to $1$ and the relevant time range of the order of $\tau $.
We take an infinitesimal time segment $\left[ {t,t + dt} \right]$ from the continuous time flow in $[0, + \infty )$. The time-dependent current generated by the reservoir can be expressed as
\begin{equation}
I\left( t \right) = \frac{{\int_t^{t + dt} {eP\left( {t'} \right)dt'} }}{{dt}} = \frac{e}{\tau }{e^{ - {t \mathord{\left/
{\vphantom {t \tau }} \right.
\kern-\nulldelimiterspace} \tau }}}.
\end{equation}
The mean current follows as
\begin{equation}
\overline {I\left( t \right)} = \mathop {\lim }\limits_{T \to \infty } \frac{1}{T}\int_0^T {I\left( t \right)dt} = \frac{1}{\tau }\int_0^{ + \infty } {I\left( t \right)dt} = \frac{e}{\tau }.
\end{equation}
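The result $\overline{I} = e/\tau$ can be cross-checked by directly simulating a Poissonian pulse train (an illustrative Monte Carlo sketch with an arbitrary $\tau$; each pulse shape integrates to a total charge $e$):

```python
import numpy as np

rng = np.random.default_rng(0)
e, tau = 1.0, 2.0                 # unit charge, mean arrival spacing (arbitrary)
n_pulses = 200_000

# Exponentially distributed intervals -> Poissonian arrival times
arrivals = np.cumsum(rng.exponential(tau, n_pulses))
T = arrivals[-1]

# Each pulse carries total charge e, so the time-averaged current is
# (total charge)/(total time), which converges to e/tau
mean_current = e * n_pulses / T
assert abs(mean_current - e / tau) / (e / tau) < 0.02
```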
Here the single-electron-tunneling picture is used. The mathematical object which allows us to characterize the duration of the current pulse is called the autocorrelation function and is defined by
\begin{equation}
{R_I}\left( {t'} \right) = \mathop {\lim }\limits_{T \to \infty } \frac{1}{T}\int_{ - {T \mathord{\left/
{\vphantom {T 2}} \right.
\kern-\nulldelimiterspace} 2}}^{{T \mathord{\left/
{\vphantom {T 2}} \right.
\kern-\nulldelimiterspace} 2}} {I\left( t \right)I\left( {t + t'} \right)dt} .
\end{equation}
From the time-dependent current, we can obtain the autocorrelation function as
\begin{equation}
{R_I}\left( {t'} \right) = {\left. {\overline {I\left( t \right)I\left( {t + t'} \right)} } \right|_t} = {\left. {\overline {\frac{{{e^2}}}{{{\tau ^2}}}{e^{ - \frac{{2t}}{\tau }}}} } \right|_t}{e^{ - \frac{{t'}}{\tau }}}.
\end{equation}
The subscript $t$ means that the mean value is evaluated with respect to the variable $t$.
Using the following relation, which follows from the result of Eq. (40),
\begin{equation}
{\left. {\overline {\frac{e}{\tau }{e^{ - \frac{{2t}}{\tau }}}} } \right|_t} = {\left. {\overline {\frac{1}{2}\frac{e}{{\frac{\tau }{2}}}{e^{ - \frac{t}{{\frac{\tau }{2}}}}}} } \right|_t} = \frac{1}{2}\frac{e}{{\frac{\tau }{2}}} = \frac{e}{\tau },
\end{equation}
we have
\begin{equation}
{R_I}\left( {t'} \right) = \frac{{{e^2}}}{{{\tau ^2}}}{e^{ - \frac{{t'}}{\tau }}}.
\end{equation}
The Wiener-Khinchin theorem states that the noise spectrum is the Fourier transform of the autocorrelation function:
\begin{equation}
{S_I}\left( f \right) = 2\int_0^\infty {{R_I}\left( {t'} \right){e^{ - i2\pi ft'}}dt'} .
\end{equation}
Therefore, the zero-frequency shot noise is
\begin{equation}
{S_I}\left( 0 \right) = 2\int_0^{ \infty } {\frac{{{e^2}}}{{{\tau ^2}}}{e^{ - \frac{{t'}}{\tau }}}dt'} = 2\frac{{{e^2}}}{\tau } = 2e\overline I ,
\end{equation}
which is just the Poisson shot noise.
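Both the autocorrelation integral and the counting-statistics version of Schottky's formula can be checked numerically (an illustrative sketch; the window rate and cutoff are arbitrary choices):

```python
import numpy as np

e, tau = 1.0, 2.0                      # arbitrary illustrative values

# (i) Zero-frequency integral: 2 * int_0^inf (e^2/tau^2) exp(-t'/tau) dt'
t = np.linspace(0.0, 50.0 * tau, 1_000_001)
dt = t[1] - t[0]
S_zero = 2.0 * np.sum((e**2 / tau**2) * np.exp(-t / tau)) * dt
assert abs(S_zero - 2.0 * e**2 / tau) < 1e-3   # equals 2 e^2/tau = 2 e * Ibar

# (ii) Counting statistics: for Poissonian arrivals, the variance of the
# carrier number in a window equals its mean (Fano factor 1), which is the
# counting-statistics statement of S_I(0) = 2 e * Ibar.
rng = np.random.default_rng(1)
counts = rng.poisson(lam=50.0, size=200_000)
fano = counts.var() / counts.mean()
assert abs(fano - 1.0) < 0.02
```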
Following that, we consider a pumping configuration that achieves the Poissonian quantum pumped shot noise. To achieve a pure Poisson process, we should exclude any scattering structure and let the conduction be governed entirely by two Poisson-distributed random emitters at the left and right leads, since any scattering structure would induce interactions and break the Poissonian picture. The pumping mechanism is thus reduced to a semi-classical one with two modulating gates and a single-particle level between the two gates. The two gates are modulated with a phase lag $\phi = {\pi / 2}$. We assume the gates to be two oscillating semi-classical potential barriers with the time dependence of their heights as follows.
\begin{equation}
\left\{ \begin{array}{l}
{U_1} = \sin \left( {t + \frac{\pi }{2}} \right),\\
{U_2} = \sin \left( t \right).
\end{array} \right.
\end{equation}
In typical quantum pumps, the oscillation period $T = 2\pi/\omega$ is much larger than the mean time interval $\tau$ between carriers. Here the pumping frequency $\omega$ is set to $1$ without blurring any physics. We divide one pumping period into four quarters. When $t \in \left[ 0, \pi/2 \right]$, $\sin \left( t \right)$ changes from 0 to 1 and $\sin \left( t + \pi/2 \right)$ changes from 1 to 0. Considering the integral effect, the two gates are equally high and the system can be approximated by two identical emitters shooting electrons at each other with a possible emission phase lag. The time-dependent current can be formulated as
\begin{equation}
I_{p}\left( t \right) = \frac{e}{\tau }{e^{ - \frac{{t - {t_{0L}}}}{\tau }}} - \frac{e}{\tau }{e^{ - \frac{{t - {t_{0R}}}}{\tau }}}.
\end{equation}
For two uncorrelated emitters, $t_{0L}$ and $t_{0R}$ are generally different.
When $t \in \left[ \pi/2, \pi \right]$, $\sin \left( t \right)$
changes from 1 to 0 and $\sin \left( t + \pi/2 \right)$ changes from 0 to $-1$. In this quarter, the gate $U_{1}$ is open and the gate $U_{2}$ is closed. The electron has some probability of being emitted from the left reservoir to fill the middle single-electron level. There is a current flow from the left reservoir to the middle level. The time-dependent current flow from the left emitter to the middle level can be formulated as
\begin{equation}
{I_p}\left( t \right) = \frac{e}{\tau }{e^{ - \frac{{t - t{'_{0L}}}}{\tau }}}.
\end{equation}
When $t \in \left[ \pi, 3\pi/2 \right]$, $\sin \left( t \right)$
changes from 0 to $-1$ and $\sin \left( t + \pi/2 \right)$ changes from $-1$ to
0. The integral effects of the two gates balance out, and the electron cannot tunnel out of the middle level. When $t \in \left[ 3\pi/2, 2\pi \right]$, $\sin \left( t \right)$ changes from $-1$ to 0 and $\sin \left( t + \pi/2 \right)$
changes from 0 to 1. $U_1$ remains higher than $U_2$:
the left gate is closed and the right gate is open, which drives the particle in the middle level to the right reservoir. As the right reservoir is a Poisson source and simultaneously a Poisson drain, the tunneling from the middle level is also time-dependent as
\begin{equation}
{I_p}\left( t \right) = \frac{e}{\tau }{e^{ - \frac{{t - t{'_{0R}}}}{\tau }}}.
\end{equation}
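The four-quarter gate cycle described above can be sketched numerically. Sampling the midpoint of each quarter (with $\omega = 1$) confirms when the two gates are balanced and which gate is lower, i.e.\ open, in the other quarters; the helper name is our own:

```python
import math

def gate_heights(t):
    """Heights of the two modulated gates: U1 = sin(t + pi/2), U2 = sin(t)."""
    return math.sin(t + math.pi / 2.0), math.sin(t)

# Midpoints of the four quarters of one pumping period T = 2*pi.
midpoints = [(k + 0.5) * math.pi / 2.0 for k in range(4)]
samples = [gate_heights(t) for t in midpoints]
```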
For adiabatic quantum pumps, $T/4 \gg \tau$. Therefore, the time average over one period can be approximated by the time average over the infinite interval $[0, + \infty )$. Following a derivation similar to that for the ordinary conductor, we obtain
\begin{equation}
\overline {{I_p}\left( t \right)} = \frac{e}{\tau }.
\end{equation}
The zero-frequency shot noise is then
\begin{equation}
{S_p}\left( 0 \right) = 2\frac{{{e^2}}}{\tau } = 2e\overline {{I_p}} ,
\end{equation}
which is the Poisson pumped shot noise.
\clearpage
\section{Introduction}
Treewidth, which measures how close a graph is to a tree,
is arguably one of the most powerful tools for designing efficient algorithms for graph problems.
The application of treewidth is quite wide and the general theory built there often gives
a very efficient algorithm (e.g.,~\cite{Bodlaender88,ArnborgLS91,Courcelle92}).
However, still many problems are found to be intractable on graphs of bounded treewidth (e.g.,~\cite{Szeider11arxiv}).
To cope with such problems, one may use pathwidth, which is always larger than or equal to treewidth.
Unfortunately, this approach did not quite work
as no natural problem was known to change its complexity with respect to treewidth and pathwidth,
until very recently~\cite{BelmonteKLMO20}.
Treedepth is a further restriction of pathwidth.
However, most problems still do not change their complexity,
except for some whose hardness depends on the existence of long paths
(e.g.,~\cite{DvorakK18,KellerhalsK20}).
One successful approach in this direction is parameterization by the vertex cover number,
which is a strong restriction of treedepth.
Many problems that are intractable parameterized by treewidth
have been shown to become tractable when parameterized by
vertex cover number~\cite{FellowsLMRS08,EncisoFGKRS09,FialaGK11,Abu-Khzam14,Lokshtanov15,BonnetS17}.
One drawback of the vertex-cover parameterization is its limitation to a very small class of graphs.
To overcome the drawback, we propose a new approach for parameterizing graph problems by vertex integrity~\cite{BarefootES87}.
The \emph{vertex integrity} of a graph $G$, denoted $\vi(G)$,
is the minimum integer $k$ satisfying that
there is $S \subseteq V(G)$ such that $|S| + |V(C)| \le k$ for each component $C$ of $G-S$.
We call such $S$ a \emph{$\vi(k)$-set} of $G$.
This parameter is bounded from above by vertex cover number${}+1$ and from below by treedepth.
As a structural parameter in parameterized algorithms,
vertex integrity (and its close variants) was used only in a couple of previous studies~\cite{DvorakEGKO17,GanianKO18,BodlaenderHOOZ19}.
Our goal is to fill some gaps between treedepth and vertex cover number
by presenting finer algorithmic and complexity results parameterized by vertex integrity.
Note that the parameterization by vertex integrity is equivalent to the one by $\ell$-component order connectivity${}+\ell$~\cite{DrangeDH16}.
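For intuition, the definition of vertex integrity admits a direct brute-force evaluation on small graphs by trying all candidate sets $S$; the sketch below (exponential in $n$, for illustration only, with names of our own choosing) follows the definition literally:

```python
from itertools import combinations

def components(vertices, adj, removed):
    """Connected components of the graph induced on vertices minus removed."""
    remaining = set(vertices) - set(removed)
    comps, seen = [], set()
    for v in remaining:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] & remaining)
        seen |= comp
        comps.append(comp)
    return comps

def vertex_integrity(vertices, adj):
    """vi(G): minimum over S of |S| + max component order of G - S."""
    vs = list(vertices)
    best = len(vs)  # S = V always achieves |S| + 0
    for k in range(len(vs) + 1):
        for S in combinations(vs, k):
            comps = components(vs, adj, S)
            worst = max((len(c) for c in comps), default=0)
            best = min(best, k + worst)
    return best
```

For example, on the path $P_4$ the optimum removes one middle vertex ($1 + 2 = 3$), and on the star $K_{1,3}$ it removes the center ($1 + 1 = 2$).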
\medskip
\noindent\textit{Short preliminaries.}
For the basic terms and concepts in the parameterized complexity theory,
we refer the readers to standard textbooks, e.g.~\cite{DowneyF99,CyganFKLMPPS15}.
For a graph $G$, we denote
its treewidth by $\tw(G)$,
pathwidth by $\pw(G)$,
treedepth by $\td(G)$, and
vertex cover number by $\vc(G)$.
(See Section~\ref{sec:graph-parameters} for definitions.)
It is known that $\tw(G) \le \pw(G) \le \td(G)-1 \le \vi(G)-1 \le \vc(G)$ for every graph $G$.
We say informally that a problem is fixed-parameter tractable ``parameterized by $\vi$'',
which means ``parameterized by the vertex integrity of the input graphs.''
We also say ``graphs of $\vi = c$ (or $\vi \le c$)''.
\medskip
\noindent\textit{Our results.}
The main contribution of this paper is to generalize several known FPT algorithms parameterized by $\vc$ to the ones by $\vi$.
We also show some results considering parameterizations by $\vc$, $\vi$, or $\td$
to tighten the complexity gaps between parameterizations by $\vc$ and by $\td$.
See Table~\ref{tbl:summary} for the summary of results.
Due to space limitations, we had to move most of the results to the appendix.
In the main text, we present full descriptions of selected results only.
(Even for the selected results, we still have to omit some proofs. They are marked with $\bigstar$.)
\textit{Extending FPT results parameterized by $\vc$.}
We show that
\textsc{Imbalance},
\textsc{Maximum Common (Induced) Subgraph},
\textsc{Capacitated Vertex Cover},
\textsc{Capacitated Dominating Set},
\textsc{Precoloring Extension},
\textsc{Equitable Coloring}, and
\textsc{Equitable Connected Partition}
are fixed-parameter tractable parameterized by vertex integrity.
We present the algorithms for \textsc{Imbalance} as a simple but still powerful example that generalizes known results (Section~\ref{sec:imbalance})
and for \textsc{Maximum Common Subgraph} as one of the most involved examples (Section~\ref{sec:mcs}).
See Section~\ref{sec:extending-vc} for the other problems.
A commonly used trick is to reduce the problem instance to a number of instances of integer linear programming,
while each problem requires a nontrivially tailored reduction depending on its structure.
It was the same for parameterizations by $\vc$,
but the reductions here are more involved because of the generality of $\vi$.
Finding the similarity among the reductions and algorithms would be a good starting point to develop a general way for handling problems
parameterized by $\vi$ (or $\vc$).
Additionally, we show that \textsc{Bandwidth} is W[1]-hard parameterized by $\td$,
while we were not able to extend the algorithm parameterized by $\vc$ to the one by $\vi$.
\textit{Filling some complexity gaps.}
We observe that \textsc{Graph Motif} and \textsc{Steiner Forest}
have different complexity with respect to $\vc$ and $\vi$ (Section~\ref{sec:hard-vi}).
In particular, we see that not all FPT algorithms parameterized by $\vc$ can be generalized to the ones by $\vi$.
\textsc{Min Max Outdegree Orientation} gives an example that a known hardness for $\td$ can be strengthened to the one for $\vc$
(Section~\ref{sec:min-max-outdeg}).
We additionally observe that some W[1]-hard problems parameterized by $\tw$
become tractable parameterized by $\td$. Such problems include \textsc{Metric Dimension}, \textsc{Directed} $(p,q)$-\textsc{Edge Dominating Set}, and \textsc{List Hamiltonian Path} (Section~\ref{sec:easy-td}).
\begin{table}[bt]
\centering
\caption{Summary. The results stated without references are shown in this paper.}
\begin{tabular}{l|l|l}
\textsc{Problem} & Lower bounds & Upper bounds\\\hline
\multirow{2}{*}{\textsc{Imbalance}} & \multirow{2}{*}{NP-h \cite{BiedlCGHW05}} & FPT by $\tw + \Delta$ \cite{LokshtanovMS13}\\
& & FPT by $\vi$\\\hline
\textsc{Max Common Subgraph} & NP-h for $\vi(G_2) = 3$ & \multirow{2}{*}{FPT by $\vi(G_1) + \vi(G_2)$}\\
\textsc{Max Common Ind.\ Subgraph} & NP-h for $\vc(G_2) = 0$ &\\\hline
\textsc{Capacitated Vertex Cover} & W[1]-h by $\td$ \cite{DomLSV08} & FPT by $\vi$\\\hline
\textsc{Capacitated Dominating Set} & W[1]-h by $\td + k$ \cite{DomLSV08} & FPT by $\vi$\\\hline
\textsc{Precoloring Extension} & W[1]-h by $\td$ \cite{FellowsFLRSST11} & FPT by $\vi$\\\hline
\textsc{Equitable Coloring} & W[1]-h by $\td$ \cite{FellowsFLRSST11} & FPT by $\vi$\\\hline
\textsc{Equitable Connected Part.} & W[1]-h by $\pw$ \cite{EncisoFGKRS09} & FPT by $\vi$\\\hline
\multirow{2}{*}{\textsc{Bandwidth}} & W[1]-h by $\td$ & FPT by $\vc$ \cite{FellowsLMRS08} \\
&NP-h for $\pw = 2$ \cite{Muradian03} & P for $\pw \le 1$~\cite{AssmannPSZ81}\\\hline
\multirow{2}{*}{\textsc{Graph Motif}} & \multirow{2}{*}{NP-h for $\vi = 4$} & FPT by $\vc$ \cite{BonnetS17}\\
&& P for $\vi \le 3$\\\hline
\textsc{Steiner Forest} & NP-h for $\vi = 5$ \cite{Gassner10} & XP by $\vc$\\
\textsc{Unweighted Steiner Forest} & NP-h for $\tw = 3$ \cite{Gassner10} & FPT by $\vc$\\\hline
\textsc{Unary Min Max Outdeg.\ Ori.} & W[1]-h by $\vc$ & XP by $\tw$ \cite{Szeider11} \\
\textsc{Binary Min Max Outdeg.\ Ori.} & NP-h for $\vc = 3$ & P for $\vc \le 2$ \\\hline
\multirow{2}{*}{\textsc{Metric Dimension}} & \multirow{2}{*}{W[1]-h by $\pw$ \cite{BonnetP19}} & FPT by $\tw + \Delta$ \cite{BelmonteFGR17}\\
&& FPT by $\td$\\\hline
\multirow{2}{*}{\textsc{Directed $(p, q)$-Edge Dom.\ Set}} & \multirow{2}{*}{W[1]-h by $\pw$ \cite{BelmonteHK0L18}} & FPT by $\tw + p + q$ \cite{BelmonteHK0L18}\\
&& FPT by $\td$\\\hline
\textsc{List Hamiltonian Path} & W[1]-h by $\pw$ \cite{MeeksS16} & FPT by $\td$
\end{tabular}
\label{tbl:summary}
\end{table}
\section{\textsc{Imbalance}}
\label{sec:imbalance}
In this section, we show that \textsc{Imbalance} is fixed-parameter tractable parameterized by $\vi$.
Let $G = (V,E)$ be a graph.
Given a linear ordering $\sigma$ on $V$,
the \emph{imbalance} $\im_{\sigma}(v)$ of $v \in V$
is the absolute difference of the numbers of the neighbors of $v$
that appear before $v$ and after $v$ in $\sigma$.
The \emph{imbalance} of $G$, denoted $\im(G)$,
is defined as $\min_{\sigma} \sum_{v \in V} \im_{\sigma}(v)$,
where the minimum is taken over all linear orderings on $V$.
Given a graph $G$ and an integer $b$,
\textsc{Imbalance} asks whether $\im(G) \le b$.
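For small instances, $\im(G)$ can be computed directly from the definition by enumerating all linear orderings; this brute force (our own illustration, not the algorithm of this section) is useful for checking examples:

```python
from itertools import permutations

def imbalance(vertices, edges):
    """im(G): minimize over orderings the sum of |#neighbors before - #neighbors after|."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = None
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        total = sum(abs(sum(1 for w in adj[v] if pos[w] < pos[v])
                        - sum(1 for w in adj[v] if pos[w] > pos[v]))
                    for v in vertices)
        best = total if best is None else min(best, total)
    return best
```

For instance, $\im(P_3) = 2$ (place the middle vertex between its neighbors), and $\im(K_{1,3}) = 4$ since the degree-3 center contributes at least 1 and each leaf contributes 1.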
Fellows et al.~\cite{FellowsLMRS08} showed that \textsc{Imbalance} is fixed-parameter tractable parameterized by $\vc$.
Recently, Misra and Mittal~\cite{MisraM20} have extended the result by showing that \textsc{Imbalance} is fixed-parameter tractable parameterized by
the sum of the twin-cover number and the maximum twin-class size.
Although twin-cover number is incomparable with vertex integrity,
the combined parameter in~\cite{MisraM20} is always larger than or equal to the vertex integrity of the same graph.
On the other hand, the combined parameter can be arbitrarily large for some graphs of constant vertex integrity
(e.g., disjoint unions of $P_{3}$'s).
Hence, our result here properly extends the result in~\cite{MisraM20} as well.
\smallskip
\textit{Key concepts.}
Before proceeding to the algorithm,
we need to introduce two important concepts
that are common in our algorithms parameterized by $\vi$.
1. \textit{ILP parameterized by the number of variables.}
It is known that the feasibility of an instance of integer linear programming (ILP)
parameterized by the number of variables is fixed-parameter tractable~\cite{Lenstra83}.
Using the algorithm for the feasibility problem as a black box,
one can show the same fact for the optimization version as well.
(See Section~\ref{sec:ilp} for the detail.)
This fact has been used heavily
for designing FPT algorithms parameterized by $\vc$ (see e.g.~\cite{FellowsLMRS08}).
We are going to see that some of these algorithms can be generalized for the parameterization by $\vi$,
and \textsc{Imbalance} is the first such example.
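As a toy stand-in for the interface such ILP results provide (Lenstra's algorithm itself is far more sophisticated than enumeration), the following sketch decides feasibility of $Ax \le b$ over a bounded integer box by exhaustive search; the instance and bounds in the test are made up:

```python
from itertools import product

def ilp_feasible(A, b, upper):
    """Toy feasibility check for A x <= b with integer 0 <= x_i <= upper[i].
    Exhaustive search over the box; a stand-in for Lenstra's algorithm,
    which runs in f(#variables) * poly(input size) time."""
    n = len(upper)
    for x in product(*(range(u + 1) for u in upper)):
        if all(sum(A[i][j] * x[j] for j in range(n)) <= b[i]
               for i in range(len(b))):
            return True
    return False
```

Equalities and maximization objectives reduce to this form by pairs of inequalities and binary search on the objective value, which is how the optimization version is handled as well.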
2. \textit{Equivalence relation among components.}
For a vertex set $S$ of $G$,
we define an equivalence relation $\sim_{G,S}$ among components of $G-S$
by setting $C_{1} \sim_{G,S} C_{2}$
if and only if
there is an isomorphism $g$ from $G[S \cup V(C_{1})]$ to $G[S \cup V(C_{2})]$
that fixes $S$; that is, $g|_{S}$ is the identity function.
When $C_{1} \sim_{G,S} C_{2}$, we say that $C_{1}$ and $C_{2}$ have the same \emph{$(G, S)$-type}
(or just the same \emph{type} if $G$ and $S$ are clear from the context).
See \figref{fig:type}.
We say that a component $C$ of $G - S$ is of \emph{$(G, S)$-type $t$} (or just \emph{type $t$})
by using a canonical form $t$ of the members of the $(G, S)$-type equivalence class of $C$.
We can set the canonical form $t$ in such a way that it can be computed from $S$ and $C$ in time depending only on
$|S \cup V(C)|$.\footnote{For example, by fixing the ordering of vertices in $S$ as $v_{1}, \dots, v_{|S|}$,
we can set $t$ to be the adjacency matrix of $G[S \cup V(C)]$
such that the $i$th row and column correspond to $v_{i}$ for $1 \le i \le |S|$
and under this condition the string $t[1,1], \dots, t[1, s], t[2,1], \dots, t[s,s]$
is lexicographically minimal, where $s = |S \cup V(C)|$.}
Observe that if $S$ is a $\vi(k)$-set of $G$,
then the number of $\sim_{G,S}$ classes depends only on $k$
since $|S \cup V(C)| \le k$ for each component $C$ of $G-S$.
Hence, we can compute for all types $t$
the number of type-$t$ components of $G-S$ in $O(f(k) \cdot n)$ total running time,
where $n = |V|$ and $f(k)$ is a computable function depending only on $k$.
Note that this information (the numbers of type-$t$ components for all $t$)
completely characterizes the graph $G$ up to isomorphism.
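The canonical form described in the footnote can be sketched directly: with the ordering of $S$ fixed, take the lexicographically smallest adjacency matrix of $G[S \cup V(C)]$ over orderings of the component vertices. The following unoptimized illustration (names are ours) makes two components compare equal exactly when they have the same $(G,S)$-type:

```python
from itertools import permutations

def component_type(S, comp, adj):
    """Canonical form of the (G, S)-type of component comp: the lexicographically
    smallest adjacency matrix of G[S + comp], with the ordering of S fixed
    and all orderings of comp tried (as in the footnote's convention)."""
    best = None
    for perm in permutations(sorted(comp)):
        verts = list(S) + list(perm)
        mat = tuple(tuple(1 if verts[j] in adj[verts[i]] else 0
                          for j in range(len(verts)))
                    for i in range(len(verts)))
        if best is None or mat < best:
            best = mat
    return best
```

Since $|S \cup V(C)| \le k$ for a $\vi(k)$-set $S$, the permutation enumeration takes time depending only on $k$ per component.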
\begin{figure}[htb]
\centering
\includegraphics[scale=.8]{./fig/type}
\caption{The components $C_{2}$ and $C_{3}$ of $G-S$ have the same $(G,S)$-type.}
\label{fig:type}
\end{figure}
\begin{theorem}
\label{thm:imb}
\textsc{Imbalance} is fixed-parameter tractable parameterized by $\vi$.
\end{theorem}
\begin{proof}
Let $S$ be a $\vi(k)$-set of $G$. Such a set can be found in $O(k^{k+1} n)$ time~\cite{DrangeDH16}.
We first guess and fix the relative ordering of $S$ in an optimal ordering.
There are only $k!$ candidates for this guess.
For each $v \in S$, let $\ell(v)$ and $r(v)$ be the numbers of vertices in $N(v) \cap S$
that appear before $v$ and after $v$, respectively, in the guessed relative ordering of $S$.
Observe that the imbalance of a vertex $v$ in a component $C$ of $G-S$ depends only on the relative ordering of $S \cup V(C)$
since $N(v) \subseteq S \cup V(C)$.
For each type $t$ and for each relative ordering $p$ of $S \cup V(C)$, where $C$ is a type-$t$ component of $G-S$,
we denote by $\im(t,p)$ the sum of imbalance of the vertices in $C$.
Similarly, the numbers of vertices in a type-$t$ component $C$
that appear before $v \in S$ and after $v$ depend only on the relative ordering $p$ of $S \cup V(C)$;
we denote these numbers by $\ell(v,t,p)$ and $r(v,t,p)$, respectively.
The numbers $\im(t,p)$, $\ell(v,t,p)$, and $r(v,t,p)$ can be computed from their arguments in time
depending only on $k$, and thus they are treated as constants in the following ILP\@.
We represent by a nonnegative variable $x_{t,p}$ the number of type-$t$ components
that have relative ordering $p$ with $S$. Note that the number of combinations of $t$ and $p$ depends only on $k$.
For each $v \in S$, we represent (an upper bound of) the imbalance of $v$ by an auxiliary variable $y_{v}$.
This can be done by the following constraints:
\begin{align*}
y_{v} &\ge \textstyle
(\ell(v) + \sum_{t,p} \ell(v,t,p) \cdot x_{t,p} )
-(r(v) + \sum_{t,p} r(v,t,p) \cdot x_{t,p} ), \\
y_{v} &\ge \textstyle
(r(v) + \sum_{t,p} r(v,t,p) \cdot x_{t,p} )
-(\ell(v) + \sum_{t,p} \ell(v,t,p) \cdot x_{t,p} ).
\end{align*}
Then the imbalance of the whole ordering, which is our objective function to minimize, can be expressed as
\[
\textstyle\sum_{v \in S} y_{v} + \sum_{t,p} \im(t,p) \cdot x_{t,p}.
\]
Now we need the following constraints to keep the total number of type-$t$ components right:
\[
\textstyle\sum_{p} x_{t,p} = c_{t} \quad \text{for each type} \ t,
\]
where $c_{t}$ is the number of components of type $t$ in $G - S$.
By finding an optimal solution to the ILP above for each guess of the relative ordering of $S$,
we can find an optimal ordering.
Since the number of guesses and the number of variables depend only on $k$,
the theorem follows.
\end{proof}
\section{\textsc{Maximum Common (Induced) Subgraph}}
\label{sec:mcs}
In this section, we show that \textsc{Maximum Common Subgraph} (MCS) and \textsc{Maximum Common Induced Subgraph} (MCIS)
are fixed-parameter tractable parameterized by $\vi$ of both graphs.
(See Section~\ref{asec:mcs} for the proof for MCIS.)
The results extend known results and fill some complexity gaps as described below.
A graph $Q$ is \emph{subgraph-isomorphic} to $G$, denoted $Q \preceq G$,
if there is an injection $\eta$ from $V(Q)$ to $V(G)$
such that $\{\eta(u),\eta(v)\} \in E(G)$ for every $\{u,v\} \in E(Q)$.
A graph $Q$ is \emph{induced subgraph-isomorphic} to $G$, denoted $Q \indsub G$,
if there is an injection $\eta$ from $V(Q)$ to $V(G)$
such that $\{\eta(u),\eta(v)\} \in E(G)$ if and only if $\{u,v\} \in E(Q)$.
Given two graphs $G$ and $Q$,
\textsc{Subgraph Isomorphism} (SI) asks whether $Q \preceq G$,
and \textsc{Induced Subgraph Isomorphism} (ISI) asks whether $Q \indsub G$.
The results of this section are on their generalizations.
Given two graphs $G_{1}$ and $G_{2}$,
MCS asks to find a graph $H$ with maximum $|E(H)|$ such that $H \preceq G_{1}$ and $H \preceq G_{2}$.
Similarly,
MCIS asks to find a graph $H$ with maximum $|V(H)|$ such that $H \indsub G_{1}$ and $H \indsub G_{2}$.
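For tiny graphs, MCIS can be solved by exhaustive search directly from the definition; the following sketch (our own, purely for illustration) tries all vertex subsets of equal size and all bijections between them:

```python
from itertools import combinations, permutations

def isomorphic(G1, A, G2, B):
    """Is G1[A] isomorphic to G2[B]?  Brute force over all bijections."""
    A, B = list(A), list(B)
    if len(A) != len(B):
        return False
    for perm in permutations(B):
        m = dict(zip(A, perm))
        # induced: edges and non-edges must both be preserved
        if all((m[u] in G2[m[v]]) == (u in G1[v]) for u, v in combinations(A, 2)):
            return True
    return False

def mcis_size(G1, G2):
    """Order of a maximum common induced subgraph of G1 and G2 (exhaustive)."""
    for k in range(min(len(G1), len(G2)), 0, -1):
        for A in combinations(G1, k):
            for B in combinations(G2, k):
                if isomorphic(G1, A, G2, B):
                    return k
    return 0
```

For example, a maximum common induced subgraph of $P_3$ and $K_3$ is a single edge, since $P_3$ itself is not an induced subgraph of $K_3$.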
If we restrict the structure of only one of the input graphs, then both problems remain quite hard.
Since \textsc{Partition Into Triangles}~\cite{GareyJ79} is a special case of SI
where the graph $Q$ is a disjoint union of triangles,
MCS is NP-hard even if one of the input graphs has $\vi = 3$.
Also, since \textsc{Independent Set}~\cite{GareyJ79} is a special case of ISI where $Q$ is an edge-less graph,
MCIS is NP-hard even if one of the input graphs has $\vc = 0$.
Furthermore, since SI and ISI generalize \textsc{Clique}~\cite{DowneyF99},
MCS and MCIS are W[1]-hard parameterized by the order of one of the input graphs.
When parameterized by $\vc$ of one graph,
an XP algorithm for (a generalization of) MCS is known~\cite{BodlaenderHJOOZ20}.
For parameters restricting both input graphs, some partial results were known.
It is known that SI is fixed-parameter tractable parameterized by $\vi$ of both graphs,
while it is NP-complete when both graphs have $\td \le 3$~\cite{BodlaenderHOOZ19}.
The hardness proof in~\cite{BodlaenderHOOZ19} can be easily adapted to ISI without increasing $\td$.
It is known that MCIS is fixed-parameter tractable parameterized by $\vc$ of both graphs~\cite{Abu-Khzam14}.
\begin{restatable}{theorem}{vimcsboth}
\label{thm:vi-mcs-both}
\textsc{Maximum Common Subgraph}
is fixed-parameter tractable parameterized by $\vi$ of both input graphs.
\end{restatable}
\begin{proof}
Let $G_{1} = (V_{1}, E_{1})$ and $G_{2} = (V_{2}, E_{2})$ be the input graphs of vertex integrity at most $k$.
We will find isomorphic subgraphs
$\Gamma_{1} = (U_{1}, F_{1})$ of $G_{1}$ and
$\Gamma_{2} = (U_{2}, F_{2})$ of $G_{2}$ with maximum number of edges,
and an isomorphism $\eta \colon U_{1} \to U_{2}$ from $\Gamma_{1}$ to $\Gamma_{2}$.
\smallskip
\noindent\textit{Step 1. Guessing matched $\vi(2k)$-sets $R_{1}$ and $R_{2}$.}
Let $S_{1}$ and $S_{2}$ be $\vi(k)$-sets of $G_{1}$ and $G_{2}$, respectively.
At this point, there is no guarantee that $S_{i} \subseteq U_{i}$ or $\eta(S_{1}) = S_{2}$.
To have such assumptions, we make some guesses about $\eta$
and find $\vi(2k)$-sets $R_{1}$ and $R_{2}$ of the graphs such that $\eta(R_{1}) = R_{2}$.
\textit{Step 1-1. Guessing subsets $X_{i}, Y_{i} \subseteq S_{i}$ for $i \in \{1,2\}$.}
We guess disjoint subsets $X_{1}$ and $Y_{1}$ of $S_{1}$ such that
$X_{1} = S_{1} \cap {\eta^{-1}(U_{2} \cap S_{2})}$ and
$Y_{1} = S_{1} \cap {\eta^{-1}(U_{2} \setminus S_{2})}$.
We also guess disjoint subsets $X_{2}$ and $Y_{2}$ of $S_{2}$
defined similarly as
$X_{2} = S_{2} \cap {\eta(U_{1} \cap S_{1})}$ and
$Y_{2} = S_{2} \cap {\eta(U_{1} \setminus S_{1})}$.
Note that $\eta(X_{1}) = X_{2}$.
There are $3^{|S_{1}|} \cdot 3^{|S_{2}|} \le 3^{2k}$ candidates for the combinations of $X_{1}$, $Y_{1}$, $X_{2}$, and $Y_{2}$.
Observe that the vertices in $S_{i} \setminus (X_{i} \cup Y_{i})$ do not contribute to the isomorphic subgraphs
and can be safely removed. We denote the resultant graphs by $H_{i}$.
\smallskip
\textit{Step 1-2. Guessing $\eta$ on $X_{1} \cup Y_{1}$ and $\eta^{-1}$ on $X_{2} \cup Y_{2}$.}
Given the guessed subsets $X_{1}$, $Y_{1}$, $X_{2}$, and $Y_{2}$,
we further guess how $\eta$ maps these subsets.
There are $|X_{1}|! \le k!$ candidates for the bijection $\eta|_{X_{1}}$
(equivalently for $\eta^{-1}|_{X_{2}} = (\eta|_{X_{1}})^{-1}$).
Now we guess $\eta|_{Y_{1}}$ from at most $2^{k^{3}}$ non-isomorphic candidates as follows.
Recall that $\eta(Y_{1}) \subseteq V_{2} \setminus S_{2}$.
Observe that each subset $A \subseteq V_{2} \setminus S_{2}$ is completely characterized up to isomorphism
by the numbers of ways $A$ intersects type-$t$ components
for all $(H_{2},S_{2})$-types $t$.
Since there are at most $2^{\binom{k}{2}}$ types and each component has order at most $k$,
the total number of non-equivalent subsets of components is at most $2^{\binom{k}{2}} \cdot 2^{k} \le 2^{k^{2}}$.
Since $\eta(Y_{1})$ is the union of at most $|Y_{1}|$ such subsets,
the number of non-isomorphic candidates of $\eta(Y_{1})$ is at most $(2^{k^{2}})^{|Y_{1}|} \le 2^{k^{3}}$.
In the analogous way, we can guess $\eta^{-1}|_{Y_{2}}$ from at most $2^{k^{3}}$ non-isomorphic candidates.
Now we set $Z_{1} = \eta^{-1}(Y_{2})$ and $Z_{2} = \eta(Y_{1})$.
Let $R_{1} = X_{1} \cup Y_{1} \cup Z_{1}$ and $R_{2} = X_{2} \cup Y_{2} \cup Z_{2}$.
Observe that each component $C$ of $H_{1} - R_{1}$ satisfies that
$|C| \le k - |S_{1}| \le k$ and
$|C| + |R_{1}| \le (k - |S_{1}|) + (|S_{1}| + |\eta^{-1}(Y_{2})|) \le 2k$.
Hence, $R_{1}$ is a $\vi(2k)$-set of $H_{1}$.
Similarly, we can see that $R_{2}$ is a $\vi(2k)$-set of $H_{2}$.
Furthermore, we know that $\eta(R_{1}) = R_{2}$.
\medskip
\noindent\textit{Step 2. Extending the guessed parts of $\eta$.}
Assuming that the guesses we made so far are correct, we now find the entire $\eta$.
Recall that we are seeking isomorphic subgraphs
$\Gamma_{1} = (U_{1}, F_{1})$ of $G_{1}$ and
$\Gamma_{2} = (U_{2}, F_{2})$ of $G_{2}$ with maximum number of edges,
and the isomorphism $\eta \colon U_{1} \to U_{2}$ from $\Gamma_{1}$ to $\Gamma_{2}$.
Since we already know the part $\eta|_{R_{1}} \colon R_{1} \to R_{2}$,
it suffices to find a bijective mapping from a subset of $V(H_{1} - R_{1})$ to a subset of $V(H_{2} - R_{2})$
that maximizes the number of matched edges where the connections to $R_{i}$ are also taken into account.
As we describe below, the subproblem we consider here can be solved by formulating it as an ILP instance with $2^{O(k^{3})}$ variables.
The trick here is that instead of directly finding the mapping,
we find which vertices and edges in $H_{i} - R_{i}$ are used in the common subgraph.
In the following, we are going to use a generalized version of \emph{types}
since the vertex set of a component of $H_{i} - R_{i}$ does not necessarily induce a connected subgraph of $\Gamma_{i}$.
It is defined in a similar way as $(H_{i}, R_{i})$-types
except that it is defined for each pair $(A,B)$ of a connected subgraph $A$ of $H_{i} - R_{i}$ and a subset $B$ of the edges between $A$ and $R_{i}$.
Let $(A_{1}, B_{1})$ and $(A_{2},B_{2})$ be such pairs in $H_{i} - R_{i}$.
We say that $(A_{1}, B_{1})$ and $(A_{2},B_{2})$ have the same \emph{g-$(H_{i}, R_{i})$-type} (or just \emph{g-type})
if there is an isomorphism from $H_{i}(A_{1}, B_{1})$ to $H_{i}(A_{2}, B_{2})$ that fixes $R_{i}$,
where $H_{i}(A_{j},B_{j})$ is the subgraph of $H_{i}$ formed by $B_{j}$ and the edges in $A_{j}$.
See \figref{fig:g-type}.
We say that a pair $(A,B)$ is of \emph{g-$(H_{i}, R_{i})$-type $t$} (or just \emph{g-type $t$})
by using a canonical form $t$ of the g-$(H_{i}, R_{i})$-type equivalence class of $(A,B)$.
Observe that all possible canonical forms of g-types can be computed in time depending only on $k$.
\begin{figure}[htb]
\centering
\includegraphics[scale=.8]{./fig/g-type}
\caption{The pairs $(A_{1}, B_{1})$ and $(A_{2},B_{2})$ have the same g-$(H_{i}, R_{i})$-type.}
\label{fig:g-type}
\end{figure}
\smallskip
\textit{Step 2-1. Decomposing components of $H_{i} - R_{i}$ into smaller pieces.}
We say that an edge $\{u,v\}$ in $H_{1}$ is \emph{used by $\eta$}
if $u, v \in U_{1}$ and $H_{2}$ has the edge $\{\eta(u),\eta(v)\}$.
Similarly, an edge $\{u,v\}$ in $H_{2}$ is \emph{used by $\eta$}
if $u, v \in U_{2}$ and $H_{1}$ has the edge $\{\eta^{-1}(u),\eta^{-1}(v)\}$.
Let $i \in \{1,2\}$, $t$ be an $(H_{i}, R_{i})$-type, and $T$ be a multiset of g-$(H_{i}, R_{i})$-types.
Let $C$ be a type $t$ component of $H_{i} - R_{i}$,
$C'$ the subgraph of $C$ formed by the edges used by $\eta$, and
$E'$ the subset of the edges between $C'$ and $R_{i}$ used by $\eta$.
If $T$ coincides with the multiset of g-types of the pairs $(A,B)$ such that $A$ is a component of $C'$
and $B$ is the subset of $E'$ connecting $A$ and $R_{i}$,
then we say that $\eta$ \emph{decomposes} the type-$t$ component $C$ into $T$.
We represent by a nonnegative variable $x^{(i)}_{t,T}$
the number of type-$t$ components of $H_{i} - R_{i}$
that are decomposed into $T$ by $\eta$.
We have the following constraint:
\begin{align}
\textstyle
\sum_{T} x^{(i)}_{t,T} = c^{(i)}_{t}
\quad \text{for each} \ (H_{i}, R_{i})\text{-type} \ t \ \text{and} \ i \in \{1,2\},
\nonumber
\end{align}
where the sum is taken over all possible multisets $T$ of g-$(H_{i}, R_{i})$-types,
and $c^{(i)}_{t}$ is the number of components of type $t$ in $H_{i} - R_{i}$.
Additionally, if there is no way to decompose a type-$t$ component into $T$,
we add a constraint $x^{(i)}_{t,T} = 0$.
As each component of $H_{i} - R_{i}$ has order at most $k$, $T$ contains at most $k$ elements.
Since there are at most $2^{\binom{2k}{2}}$ g-types,
there are at most $(2^{\binom{2k}{2}})^{k}$ options for choosing $T$.
Thus the number of variables $x^{(i)}_{t,T}$ is at most
$2 \cdot 2^{\binom{2k}{2}} \cdot (2^{\binom{2k}{2}})^{k+1}$.
Now we introduce a nonnegative variable $y_{t}^{(i)}$
that represents the number of pairs $(A,B)$ of g-type $t$
obtained from the components of $H_{i} - R_{i}$ by decomposing them by $\eta$.
The definition of $y_{t}^{(i)}$ gives the following constraint:
\begin{align}
\textstyle
y_{t}^{(i)} = \sum_{t', \, T} \mu(T, t) \cdot x^{(i)}_{t',T}
\quad \text{for each} \ \text{g-}(H_{i}, R_{i})\text{-type} \ t \ \text{and} \ i \in \{1,2\},
\nonumber
\end{align}
where $\mu(T, t)$ is the multiplicity of g-type $t$ in $T$
and the sum is taken over all possible $(H_{i}, R_{i})$-types $t'$ and
multisets $T$ of g-$(H_{i}, R_{i})$-types.
As in the previous case, we can see that the number of variables $y_{t}$ depends only on $k$.
\smallskip
\textit{Step 2-2. Matching decomposed pieces.}
Observe that for each g-$(H_{1}, R_{1})$-type $t_{1}$,
there exists a unique g-$(H_{2}, R_{2})$-type $t_{2}$
such that
there is an isomorphism $g$ from $H_{1}(A_{1}, B_{1})$ to $H_{2}(A_{2}, B_{2})$ with $g|_{R_{1}} = \eta|_{R_{1}}$,
where $(A_{i},B_{i})$ is a pair of g-$(H_{i}, R_{i})$-type $t_{i}$ for $i \in \{1,2\}$.
We say that such g-types $t_{1}$ and $t_{2}$ \emph{match}.
Since $\eta$ is an isomorphism from $\Gamma_{1}$ to $\Gamma_{2}$,
$\eta$ maps each g-$(H_{1}, R_{1})$-type $t_{1}$ pair
to a g-$(H_{2}, R_{2})$-type $t_{2}$ pair, where $t_{1}$ and $t_{2}$ match.
This implies that $y_{t_{1}}^{(1)} = y_{t_{2}}^{(2)}$, which we add as a constraint.
Now the total number of edges used by $\eta$ can be computed from $y_{t}^{(1)}$.
Let $m_{t}$ be the number of edges in $H_{1}(A, B)$, where $(A,B)$ is a pair of g-$(H_{1}, R_{1})$-type $t$.
Let $r$ be the number of matched edges in $R_{1}$;
that is, $r = |\{\{u,v\} \in E(H_{1}[R_{1}]) \mid \{\eta(u), \eta(v)\} \in E(G_{2}[R_{2}]) \}|$.
Then, the number of matched edges is $r + \sum_{t} m_{t} \cdot y_{t}^{(1)}$.
On the other hand, given an assignment to the variables,
it is easy to find isomorphic subgraphs with that many edges.
Since $r$ is a constant here,
we set $\sum_{t} m_{t} \cdot y_{t}^{(1)}$
to the objective function to be maximized.
Since the number of candidates in the guesses we made
and the number of variables in the ILP instances depend only on $k$,
the theorem follows.
\end{proof}
\section{\textsc{Min Max Outdegree Orientation}}
\label{sec:min-max-outdeg}
Given an undirected graph $G = (V,E)$, an edge weight function $w\colon E \to \mathbb{Z}^{+}$, and a positive integer $r$,
\textsc{Min Max Outdegree Orientation} (MMOO) asks whether
there exists an orientation $\Lambda$ of $G$ such that each vertex has outdegree at most $r$ under $\Lambda$,
where the outdegree of a vertex is the sum of the weights of out-going edges.
If each edge weight is given in binary, we call the problem \textsc{Binary MMOO},
and if it is given in unary, we call the problem \textsc{Unary MMOO}.
Note that in the binary version, the weight of an edge can be exponential in the input size,
whereas the unary version does not allow such weights.
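For small instances, the existence of such an orientation can be checked by enumerating all $2^{|E|}$ orientations; the brute-force sketch below (our own, for illustration only) makes the problem statement concrete:

```python
from itertools import product

def mmoo_feasible(vertices, weighted_edges, r):
    """Is there an orientation where every weighted outdegree is at most r?
    weighted_edges: list of (u, v, w); each edge is oriented u->v or v->u."""
    for choice in product((0, 1), repeat=len(weighted_edges)):
        out = {v: 0 for v in vertices}
        for (u, v, w), c in zip(weighted_edges, choice):
            out[u if c == 0 else v] += w  # weight charged to the tail vertex
        if max(out.values()) <= r:
            return True
    return False
```

For example, a unit-weight triangle admits a cyclic orientation with every outdegree 1, but no orientation with maximum outdegree 0.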
\textsc{Unary MMOO}
admits an $n^{O(\tw)}$-time algorithm~\cite{Szeider11},
but it is W[1]-hard parameterized by $\td$~\cite{Szeider11arxiv}.\footnote{%
In \cite{Szeider11arxiv}, W[1]-hardness was stated for $\tw$ but the proof shows it for $\td$ as well.}
In this section, we show a stronger hardness parameterized by $\vc$.
\textsc{Binary MMOO} is known to be NP-complete for graphs of $\vi = 4$~\cite{AsahiroMO11}.
In Section \ref{asec:min-max-outdeg}, we show a stronger hardness result that the binary version is NP-complete for graphs of $\vc = 3$.
This result is tight as we can show that the binary version is polynomial-time solvable for graphs of $\vc \le 2$.
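As a concrete illustration of the definition (not part of the results of this paper), MMOO can be decided by brute force on tiny instances. The following Python sketch — the helper name \texttt{mmoo} is ours — tries all $2^{|E|}$ orientations and checks the weighted outdegrees:

```python
from itertools import product

def mmoo(edges, weights, r):
    """Decide Min Max Outdegree Orientation by brute force.

    edges   -- list of vertex pairs (u, v)
    weights -- list of positive edge weights, aligned with edges
    r       -- target bound on the weighted outdegree
    Tries all 2^|E| orientations, so only suitable for tiny instances.
    """
    for choice in product([0, 1], repeat=len(edges)):
        outdeg = {}
        for (u, v), w, c in zip(edges, weights, choice):
            tail = u if c == 0 else v        # c selects the direction
            outdeg[tail] = outdeg.get(tail, 0) + w
        if max(outdeg.values()) <= r:
            return True
    return False

# A triangle with unit weights: some vertex must have outdegree >= 1,
# and the cyclic orientation gives outdegree exactly 1 everywhere.
triangle = [(0, 1), (1, 2), (2, 0)]
print(mmoo(triangle, [1, 1, 1], 1))  # True
print(mmoo(triangle, [1, 1, 1], 0))  # False
```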
\begin{theorem}
\label{thm:uminmaxod-vc}
\textsc{Unary MMOO} is W[1]-hard parameterized by $\vc$.
\end{theorem}
\begin{proof}
We give a parameterized reduction from \textsc{Unary Bin Packing}.
Given a positive integer $t$ and $n$ positive integers $a_{1}, a_{2}, \dots, a_{n}$ in unary,
\textsc{Unary Bin Packing} asks the existence of a partition $S_{1}, \dots, S_{t}$ of $\{1, 2, \dots, n\}$
such that $\sum_{i \in S_{j}} a_{i} = \frac{1}{t}\sum_{1 \le i \le n} a_{i}$ for $1 \le j \le t$.
\textsc{Unary Bin Packing} is W[1]-hard parameterized by $t$~\cite{JansenKMS13}.
We assume that $t \ge 3$ since otherwise the problem can be solved in polynomial time as the integers $a_{i}$ are given in unary.
Let $B = \frac{1}{t} \sum_{1 \le i \le n} a_{i}$ and $W = (t - 1)B = \sum_{1 \le i \le n} a_{i} - B$.
The assumption $t \ge 3$ implies that $B \le W/2$.
Observe that if $a_{i} \ge B$ for some $i$,
then the instance is a trivial no instance (when $a_{i} > B$)
or the element $a_{i}$ is irrelevant (when $a_{i} = B$).
Hence, we assume that $a_{i} < B$ (and thus $a_{i} < W/2$) for every $i$.
The reduction to \textsc{Unary MMOO} is depicted in \figref{fig:minmaxod-vc3}.
From the integers $a_{1}, a_{2}, \dots, a_{n}$,
we construct the graph obtained from
a complete bipartite graph on the vertex set $\{u, s_{1}, s_{2}, \dots, s_{t}\} \cup \{v_{1}, \dots, v_{n}\}$
by adding the edge $\{u, s_{1}\}$.
We set $w(\{v_{i}, s_{j}\}) = a_{i}$ for all $i,j$,
$w(\{v_{i}, u\}) = W - a_{i}$ for all $i$,
and $w(\{u, s_{1}\}) = W$.
The vertices $s_{1}, s_{2}, \dots, s_{t}, u$ form a vertex cover of size $t + 1$.
We set the target maximum outdegree $r$ to $W$.
We show that this instance of \textsc{Unary MMOO} is a yes instance
if and only if there exists a partition $S_{1}, \dots, S_{t}$ of $\{1, 2, \dots, n\}$ such that
$\sum_{i \in S_{j}} a_{i} = B$ for all $j$.
Intuitively, solutions of the two problems translate into each other: put $i$ into $S_{j}$ if $\{v_{i}, s_{j}\}$ is oriented from $v_{i}$ to $s_{j}$, and vice versa.
Assume that there exists a partition $S_{1}, \dots, S_{t}$ of $\{1, 2, \dots, n\}$ such that
$\sum_{i \in S_{j}} a_{i} = B$ for all $j$.
We first orient the edge $\{u,s_{1}\}$ from $u$ to $s_{1}$ and each edge $\{v_{i}, u\}$ from $v_{i}$ to $u$.
(See the thick edges in \figref{fig:minmaxod-vc3}.)
Then, we orient $\{v_{i}, s_{j}\}$ from $v_{i}$ to $s_{j}$ if and only if $i \in S_{j}$.
Under this orientation, all vertices have outdegree exactly $W$:
$a_{i} + (W-a_{i})$ for each $v_{i}$ and
$\sum_{i \notin S_{j}} a_{i} = \sum_{1 \le i \le n} a_{i} - B$ for each $s_{j}$.
Conversely, assume that there is an orientation
such that each vertex has outdegree at most $W$.
Since the sum of the edge weights is $(n+t+1)W$ and the graph has $n+t+1$ vertices,
the outdegree of each vertex has to be exactly $W$.
Since $a_{i} < W/2$ for all $i$, each edge $\{v_{i}, u\}$ has weight larger than $W/2$.
Hence, for $u$, the only way to obtain outdegree exactly $W$ is
to orient $\{u,s_{1}\}$ from $u$ to $s_{1}$
and $\{v_{i},u\}$ from $v_{i}$ to $u$ for all $i$.
Furthermore, for each $i$, there exists exactly one vertex $s_{j}$
such that $\{v_{i}, s_{j}\}$ is oriented from $v_{i}$ to $s_{j}$.
Let $S_{j} \subseteq \{1,2, \dots, n\}$ be the set of indices $i$ such that
$\{v_{i}, s_{j}\}$ is oriented from $v_{i}$ to $s_{j}$.
The discussion above implies that $S_{1}, \dots, S_{t}$ is a partition of $\{1,\dots,n\}$.
The outdegree of $s_{j}$ is $\sum_{i \notin S_{j}} a_{i}$, which is equal to $W = \sum_{1 \le i \le n} a_{i} - B$.
Thus, $\sum_{i \in S_{j}} a_{i} = \sum_{1 \le i \le n} a_{i} - W = B$.
\end{proof}
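The forward direction of the proof can be replayed on a small instance: given a balanced partition, the orientation described above gives every vertex outdegree exactly $W$. A Python sketch (the helper name is hypothetical, and the construction follows the reduction in the proof):

```python
def reduction_outdegrees(a, t, parts):
    """Orient the reduction graph for a Unary Bin Packing instance
    (a, t) according to a partition `parts` (a list of t sets of
    indices 1..n) and return the weighted outdegrees together with W.
    Vertices: 'u', ('s', j) for the bins, ('v', i) for the items."""
    n = len(a)
    B = sum(a) // t
    W = (t - 1) * B
    out = {v: 0 for v in ['u'] + [('s', j) for j in range(1, t + 1)]
                             + [('v', i) for i in range(1, n + 1)]}
    out['u'] = W                          # edge {u, s1} oriented u -> s1
    for i in range(1, n + 1):
        out[('v', i)] += W - a[i - 1]     # edge {v_i, u} oriented v_i -> u
    for j, S in enumerate(parts, 1):
        for i in range(1, n + 1):
            if i in S:
                out[('v', i)] += a[i - 1]   # v_i -> s_j
            else:
                out[('s', j)] += a[i - 1]   # s_j -> v_i
    return out, W

# A balanced instance: B = 3, t = 3, hence W = 6.
out, W = reduction_outdegrees([1, 2, 1, 2, 1, 2], 3,
                              [{1, 2}, {3, 4}, {5, 6}])
print(W, all(d == W for d in out.values()))  # 6 True
```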
\begin{figure}[tb]
\centering
\includegraphics[scale=.8]{./fig/new_minmaxod-vc3}
\caption{Reduction from \textsc{Unary Bin Packing} to \textsc{Unary MMOO}.}
\label{fig:minmaxod-vc3}
\end{figure}
\section{\textsc{Bandwidth}}
\label{sec:bandwidth}
Let $G = (V,E)$ be a graph.
Given a linear ordering $\sigma$ on $V$,
the \emph{stretch} of $\{u,v\} \in E$, denoted $\mathsf{str}_{\sigma}(\{u,v\})$, is $|\sigma(u) - \sigma(v)|$.
The \emph{bandwidth} of $G$, denoted $\bw(G)$,
is defined as $\min_{\sigma} \max_{e \in E} \mathsf{str}_{\sigma}(e)$,
where the minimum is taken over all linear orderings on $V$.
Given a graph $G$ and an integer $w$,
\textsc{Bandwidth} asks whether $\bw(G) \le w$.
\textsc{Bandwidth} is NP-complete on trees of $\pw = 3$~\cite{Monien86}
and on graphs of $\pw = 2$~\cite{Muradian03}.
Fellows et al.~\cite{FellowsLMRS08} presented an FPT algorithm for \textsc{Bandwidth} parameterized by $\vc$.
Here we show that \textsc{Bandwidth} is W[1]-hard parameterized by $\td$ on trees.
The proof is inspired by the one by Muradian~\cite{Muradian03}.
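For completeness, the definition above translates directly into a brute-force procedure; the following Python sketch (exact but factorial-time, so for tiny graphs only) computes $\bw(G)$ by minimizing the maximum stretch over all linear orderings:

```python
from itertools import permutations

def bandwidth(n, edges):
    """Exact bandwidth of a graph on vertices 0..n-1, found by trying
    all n! linear orderings; only feasible for very small n."""
    best = n  # the stretch of any edge is at most n - 1
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        best = min(best, max(abs(pos[u] - pos[v]) for u, v in edges))
    return best

# A path has bandwidth 1; the star K_{1,3} has bandwidth 2, since the
# center can have at most two neighbours at distance 1 in any ordering.
path = [(0, 1), (1, 2), (2, 3)]
star = [(0, 1), (0, 2), (0, 3)]
print(bandwidth(4, path), bandwidth(4, star))  # 1 2
```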
\begin{theorem}
\label{thm:bandwidth}
\textsc{Bandwidth} is W[1]-hard parameterized by $\td$ on trees.
\end{theorem}
\begin{proof}
Let $(a_{1}, \dots, a_{n}; t)$ be an instance of \textsc{Unary Bin Packing} with $t \ge 2$.
Let $B = \frac{1}{t}\sum_{1 \le i \le n} a_{i}$ be the target weight.
We construct an equivalent instance $(T =(V,E), w)$ of \textsc{Bandwidth} as follows~(see \figref{fig:bw-td}).
We start with a path $(z_{0}, x_{1}, y_{1}, z_{1}, \dots, x_{t}, y_{t}, z_{t})$ of length $3t$.
For $1 \le i \le t-1$, we attach $12tnB$ leaves to $z_{i}$.
To $z_{0}$ and $z_{t}$, we attach $12tnB + 4n + 1$ leaves.
For $1 \le i \le n$, we take a star with $6tn \cdot a_{i}-1$ leaves centered at $v_{i}$.
Finally, we connect each $v_{i}$ to $x_{1}$ with a path with $6t-4$ inner vertices.
We set the target width $w$ to $6tnB + 2n + 1$.
Note that $|V| = (3t + 2)w +1$.
\begin{figure}[b]
\centering
\includegraphics[width=\textwidth]{./fig/bw-td}
\caption{Reductions from \textsc{Unary Bin Packing} to \textsc{Bandwidth}.}
\label{fig:bw-td}
\end{figure}
We can bound $\td(T)$ from above as follows.
We remove $x_{1}$ and all the leaves from $T$.
This decreases treedepth by at most $2$.
The remaining graph is a disjoint union of paths
and a longest path has order $6t-3$.
Since $\td(P_{n}) = \lceil \log_{2} (n+1) \rceil$~\cite{NesetrilO2012},
we have $\td(T) \le 2 + \lceil \log_{2} (6t-2) \rceil \le \log_{2}t + 6$.
Now we show that
$(T, w)$ is a yes instance of \textsc{Bandwidth}
if and only if
$(a_{1}, \dots, a_{n}; t)$ is a yes instance of \textsc{Unary Bin Packing}.
\smallskip
($\implies$)
First assume that $\bw(T) \le w$ and
that $\sigma$ is a linear ordering on $V$ such that $\max_{e \in E}\mathsf{str}_{\sigma}(e) \le w$.
Since $\deg(z_{0}) = 12tnB+4n+2 = 2w$,
its closed neighborhood $N[z_{0}]$ has to appear in $\sigma$ consecutively,
where $z_{0}$ appears at the middle of this subordering.
Furthermore, no edge can connect a vertex appearing before $z_{0}$ in $\sigma$
and a vertex appearing after $z_{0}$ as such an edge has stretch larger than $w$.
Since the edges not incident to $z_{0}$ form a connected subgraph,
we can conclude that the vertices in $V - N[z_{0}]$ appear
either all before $N[z_{0}]$ or all after $N[z_{0}]$ in $\sigma$.
By symmetry, we can assume that those vertices appear after $N[z_{0}]$ in $\sigma$.
This implies that $\sigma(z_{0}) = w+1$.
By the same argument, we can show that all vertices in $N[z_{t}]$ appear consecutively in the end of $\sigma$
and $\sigma(z_{t}) = |V|-w = (3t+1)w+1$.
Since $\sigma(z_{t}) - \sigma(z_{0}) = 3tw$
and the path $(z_{0}, x_{1}, y_{1}, z_{1}, \dots, x_{t}, y_{t}, z_{t})$ has length $3t$,
each edge in this path has stretch exactly $w$ in $\sigma$.
Namely, $\sigma(x_{i}) = (3i-1)w + 1$, $\sigma(y_{i}) = 3iw + 1$, and $\sigma(z_{i}) = (3i+1)w + 1$.
For each leaf $\ell$ attached to $z_{i}$ ($1 \le i \le t-1$),
$\sigma(y_{i}) < \sigma(\ell) < \sigma(x_{i+1})$ holds.
Other than these leaves, there are $2(w-1) - 12tnB = 4n$ vertices placed between $y_{i}$ and $x_{i+1}$.
Let $V_{i}$ be the set consisting of $v_{i}$ and the leaves attached to it.
For $j \in \{1,\dots,t\}$, let $I_{j}$ be the set of indices $i$
such that $v_{i}$ is put between $z_{j-1}$ and $z_{j}$.
If $i \in I_{j}$, then all $6tn \cdot a_{i}$ vertices in $V_{i}$ are put between $y_{j-1}$ and $x_{j+1}$.
(We set $y_{0} \coloneqq z_{0}$.)
If $\sum_{i \in I_{j}} a_{i} \ge B+1$, then
$|\bigcup_{i \in I_{j}} V_{i}| \ge 6tn(B+1) > w+8n-1$ as $t \ge 2$.
This number of vertices cannot be put between $y_{j-1}$ and $x_{j+1}$
after putting the leaves attached to $z_{j-1}$ and $z_{j}$:
we can put at most $4n$ vertices between $y_{j-1}$ and $x_{j}$, at most $4n$ vertices between $y_{j}$ and $x_{j+1}$,
and at most $w-1$ vertices between $x_{j}$ and $y_{j}$.
Since $I_{1}, \dots, I_{t}$ form a partition of $\{1,\dots,n\}$
and $\sum_{1 \le i \le n} a_{i} = t B$,
we can conclude that $\sum_{i \in I_{j}} a_{i} = B$ for $1 \le j \le t$.
\smallskip
($\impliedby$)
Next assume that there exists a partition $S_{1}, \dots, S_{t}$ of $\{1,2,\dots,n\}$
such that $\sum_{i \in S_{j}} a_{i} = B$ for all $1 \le j \le t$.
We put $N[z_{0}]$ at the beginning of $\sigma$ and $N[z_{t}]$ at the end.
We set $\sigma(x_{i}) = (3i-1)w + 1$, $\sigma(y_{i}) = 3iw + 1$, and $\sigma(z_{i}) = (3i+1)w + 1$.
For $1 \le i \le t-1$, we put the leaves attached to $z_{i}$
so that a half of them have the first $6tnB$ positions between $y_{i}$ and $z_{i}$
and the other half have the first $6tnB$ positions between $z_{i}$ and $x_{i+1}$.
For each $S_{j}$, we put the vertices in $\bigcup_{i \in S_{j}} V_{i}$
so that they take the first $6tnB$ positions between $x_{j}$ and $y_{j}$.
Now we have $2n$ vacant positions at the end of each interval
between $x_{i}$ and $y_{i}$ for $1 \le i \le t$,
between $y_{i}$ and $z_{i}$ for $1 \le i \le t-1$, and
between $z_{i}$ and $x_{i+1}$ for $1 \le i \le t-1$.
To these positions, we need to put the inner vertices of the paths connecting $x_{1}$ and $v_{1}, \dots, v_{n}$.
Let $P_{i}$ be the inner part of $x_{1}$--$v_{i}$ path.
The path $P_{i}$ uses the $(2i-1)$st and $(2i)$th vacant positions in each interval as follows
(see \figref{fig:bw-td_path}).
Let $i \in S_{j}$.
Starting from $x_{1}$, $P_{i}$ proceeds from left to right and
visits the two positions in each interval consecutively
until it arrives at the interval between $x_{j}$ and $y_{j}$.
At the interval between $x_{j}$ and $y_{j}$, $P_{i}$ switches to the phase where it
only visits the $(2i)$th vacant position in each interval and still proceeds from left to right
until it reaches the interval between $x_{t}$ and $y_{t}$.
Then $P_{i}$ changes the direction and switches to the phase where
it visits the $(2i-1)$st vacant position only in each interval
until it reaches the interval between $x_{j}$ and $y_{j}$.
Now all the vertices are put at distinct positions
and it is easy to see that no edge has stretch more than $w$.
This completes the proof.
\end{proof}
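The identity $\td(P_{n}) = \lceil \log_{2} (n+1) \rceil$ used in the proof above can be verified independently: for a path, the treedepth recursion removes one vertex and recurses on the two resulting subpaths. A short Python sketch (exponential-time, but exact, and only meant as a sanity check):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def td_path(n):
    """Exact treedepth of the path P_n: remove one vertex, recurse on
    the two remaining subpaths, and minimize over the removed position."""
    if n == 0:
        return 0
    return 1 + min(max(td_path(k), td_path(n - 1 - k)) for k in range(n))

for n in range(1, 17):
    assert td_path(n) == math.ceil(math.log2(n + 1))
print("td(P_n) = ceil(log2(n+1)) verified for n <= 16")
```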
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{./fig/bw-td_path}
\caption{Embedding the path from $x_{1}$ to $v_{i}$.
The gray boxes are the occupied positions
and the white points are the vacant positions.
($n = 2$, $j = 2$, $t = 3$.)}
\label{fig:bw-td_path}
\end{figure}
\section{Conclusion}
Using vertex integrity as a structural graph parameter,
we presented finer analyses of the parameterized complexity of well-studied problems.
Although we needed a case-by-case analysis depending on individual problems,
the results in this paper would be useful for obtaining a general method to deal with vertex integrity.
Although we succeeded in extending many fixed-parameter algorithms parameterized by $\vc$ to ones parameterized by $\vi$,
we were not so successful on graph layout problems.
Fellows et al.~\cite{FellowsLMRS08} showed that \textsc{Imbalance}, \textsc{Bandwidth}, \textsc{Cutwidth}, and \textsc{Distortion}
are fixed-parameter tractable parameterized by $\vc$.
Lokshtanov~\cite{Lokshtanov15} showed that \textsc{Optimal Linear Arrangement} is fixed-parameter tractable parameterized by $\vc$.
Are these problems fixed-parameter tractable parameterized by $\vi$?
We answered only for \textsc{Imbalance} in this paper.
\section{Asymptotic behaviour of elliptic function}
Our goal is to construct an asymptotic formula for the Jacobi elliptic function $\hbox{sn}(t|m)$, as $m\to1-0$, which is uniform over the whole period of the function. Asymptotic expansions for the elliptic functions as $m\to1-0$ are given in numerous handbooks, see for example \cite{AbramowitzStegun}. However, those expansions are not uniform for very large $t$, when $t=O(T(k))$, where $T(k)$ is the period of the function. The obstacle to uniformity is the special behaviour of the elliptic functions in the neighbourhoods of the turning points $t=T(k)/4+nT(k)/2$, $n\in \mathbb{Z}$, which differs from their behaviour far from those points.
Let us consider an equation
\begin{equation}
(u')^2=(1-u^2)(1-(1-\epsilon)u^2),\quad 0<\epsilon\ll1.
\label{eqSnJacobi}
\end{equation}
with an initial condition $u(0)=0$. The solution of this Cauchy problem is the Jacobi elliptic function:
$$
u(t,\epsilon)=\hbox{sn}(t|m),\quad m=1-\epsilon.
$$
The handbook gives the following approximation (see \cite{AbramowitzStegun}, formula 16.15.1):
$$
\hbox{sn}(t|1-\epsilon)\sim \tanh(t)+\frac{1}{4}\epsilon\left(\sinh(t)\cosh(t) -t\right)\hbox{sech}^2(t).
$$
\begin{figure}
\vspace{-7cm}
\includegraphics[scale=0.5]{fig-sn-th-diverge.pdf}
\caption{The divergence between the asymptotic curve and the function $\hbox{sn}(t|1-\epsilon)$ near the turning point.}
\end{figure}
This approximation is non-periodic, but $\hbox{sn}(t|1-\epsilon)$ is a periodic function, with period
$$
T(\epsilon)=4\int_0^1\frac{dy}{\sqrt{(1-y^2)(1-(1-\epsilon)y^2)}}\equiv 4 K(1-\epsilon).
$$
The integral on the right-hand side of this formula is the complete elliptic integral of the first kind, typically denoted by $K(m)$. The handbook \cite{AbramowitzStegun} gives a polynomial approximation of this integral (formula (17.3.34)):
\begin{eqnarray}
K(m)=
&(1.38662943+
0.09666344259 \epsilon+
0.03590092383\epsilon^2+
\nonumber
\\
&
0.03742563713\epsilon^3+
0.01451196212\epsilon^4)+
\nonumber
\\
&
(0.5+
0.12498593597\epsilon+
0.06880248576\epsilon^2+
\nonumber
\\
&
0.03328355346\epsilon^3+
0.00441787012
\epsilon^4)\log(1/\epsilon)+e(m),
\nonumber
\\
&
|e(m)|<2\times10^{-8}.
\label{numericApproximationFromAbramowitzStegun}
\end{eqnarray}
In this work we derive the asymptotic expansion of the elliptic integral of the first kind and obtain a uniform asymptotic approximation for $\hbox{sn}(t|1-\epsilon)$. Due to the symmetries
$$
\hbox{sn}(t|m)=-\hbox{sn}(-t|m),\quad \hbox{sn}(t+T/2|m)=-\hbox{sn}(t|m)
$$
it suffices to construct the asymptotic approximation on a half of the period.
\subsection{The asymptotic behaviour of the period}
For small values of $\epsilon$ the elliptic integral can be reduced to an integral with a weak singularity at $y=1$.
Let us factor the integrand, separating the factor that is regular on $[0,1]$ from the factor that is singular at $y=1$:
\begin{eqnarray*}
\int_0^1\frac{dy}{\sqrt{(1-y^2)(1-(1-\epsilon)y^2)}}=
\int_0^1\frac{dy}{\sqrt{(1+y)(1+\sqrt{1-\epsilon}y)}}\frac{1}{\sqrt{(1-y)(1-\sqrt{1-\epsilon}y)}}.
\end{eqnarray*}
Denote $1-\sqrt{1-\epsilon}=\mu$; then
\begin{eqnarray*}
K(1-\epsilon)=\int_0^1\frac{dy}{\sqrt{(1+y)(1+y-\mu y)}}\frac{1}{\sqrt{(1-y)(1-y+\mu y)}}
\end{eqnarray*}
It is now convenient to expand the first factor into a series in $\mu$:
\begin{eqnarray*}
\frac{1}{\sqrt{(1+y)(1+y-\mu y)}}=
&
\frac{1}{y+1}+\frac{\mu y}{2(y+1)^2}+\frac{3\mu^2 y^2}{8(y+1)^3}+
\\
&
\frac{5\mu^3 y^3}{16(y+1)^4}+\frac{35\mu^4 y^4}{128(y+1)^5}+O(\mu^5).
\end{eqnarray*}
The next step is to substitute this expansion into the integral for $K(1-\epsilon)$, which represents the integral as a sum of simpler integrals. These integrals are readily evaluated by a computer algebra system such as Maxima \cite{maxima}. For example:
\begin{eqnarray*}
I_0=\int_0^1
\frac{dy}{( y+1)\sqrt{(1-y)( 1-( 1-\mu)y)}}=
\\
\frac{\sqrt{4-2\,\mu}\,\log( \mu)}{2\,\mu-4}-\frac{\log( -\mu+2\,\sqrt{4-2\,\mu}+4) \,\sqrt{4-2\,\mu}}{2\,\mu-4}.
\end{eqnarray*}
Similar formulas are obtained for the remaining integrals:
$$
I_k=\mu^k a_k\int_0^1
\frac{y^k dy}{( y+1)^{k+1}\sqrt{(1-y)( 1-( 1-\mu)y)}}, \quad k=1,2,3,4.
$$
Here $a_1=1/2,\, a_2=3/8,\, a_3=5/16,\, a_4=35/128$. As a result one obtains:
$$
K(1-\epsilon)\sim I_0+I_1+I_2+I_3+I_4,\quad \mu\to0.
$$
In terms of $\epsilon$ this formula has the following form:
\begin{eqnarray}
K(1-\epsilon)\sim
&
-\frac{\log(\epsilon)}{2}+2\log(2)+
\left(\frac{-1+2\log(2)}{4}-\frac{\log(\epsilon)}{8}\right) \,\epsilon+
\nonumber
\\
&
\left( \frac{-21+36\,\log( 2) }{128}-\frac{9\,\log( \epsilon)}{128}\right) \,{\epsilon}^{2}+
\nonumber\\
&
\left( \frac{-185+300\,\log( 2) }{1536}-\frac{25\,\log(\epsilon)}{512}\right) \,{\epsilon}^{3}+
\nonumber\\
&
\left( \frac{-18655+29400\,\log( 2) }{196608}-\frac{1225\,\log( \epsilon) }{32768}\right) \,{\epsilon}^{4},\quad \epsilon\to0.
\label{asymptoticOfEllipticIntegralOfFirstKind}
\end{eqnarray}
The same formula in numerical form:
\begin{eqnarray}
K(1-\epsilon)\sim
-0.5\,\mathrm{log}\left( \epsilon\right) +
1.386294361119891+
\nonumber
\\
\epsilon\,\left( 0.09657359027997264-0.125\,\mathrm{log}\left( \epsilon\right) \right) +
\nonumber
\\
{\epsilon}^{2}\,\left( 0.03088514453248459-0.0703125\,\mathrm{log}\left( \epsilon\right) \right)
+
\nonumber
\\
{\epsilon}^{3}\,\left( 0.01493760036978098-0.048828125\,\mathrm{log}\left( \epsilon\right) \right)
+
\nonumber
\\
{\epsilon}^{4}\,\left( 0.00876631219717606-0.037384033203125\,\mathrm{log}\left( \epsilon\right) \right).
\label{numericOfEllipticIntegralOfFirstKind}
\end{eqnarray}
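Both expansions can be checked against an independent evaluation of $K$ via the arithmetic--geometric mean, using the classical identity $K(m)=\pi/(2\,\mathrm{agm}(1,\sqrt{1-m}))$, so that $K(1-\epsilon)=\pi/(2\,\mathrm{agm}(1,\sqrt{\epsilon}))$. A short Python sketch (our own check; the function names are hypothetical):

```python
import math

def agm(a, b):
    """Arithmetic-geometric mean of positive a and b.
    Convergence is quadratic, so a fixed iteration count suffices."""
    for _ in range(60):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def K_exact(eps):
    """K(1-eps) via the identity K(m) = pi / (2 agm(1, sqrt(1-m)))."""
    return math.pi / (2 * agm(1.0, math.sqrt(eps)))

def K_series(eps):
    """The asymptotic expansion above of K(1-eps) for small eps."""
    L = math.log(eps)
    return (-0.5 * L + 1.386294361119891
            + eps    * (0.09657359027997264 - 0.125             * L)
            + eps**2 * (0.03088514453248459 - 0.0703125         * L)
            + eps**3 * (0.01493760036978098 - 0.048828125       * L)
            + eps**4 * (0.00876631219717606 - 0.037384033203125 * L))

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, abs(K_exact(eps) - K_series(eps)))
```

For $\epsilon=10^{-2}$ the discrepancy is tiny, consistent with the omitted $O(\epsilon^{5}\log\epsilon)$ terms.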
The differences between (\ref{numericOfEllipticIntegralOfFirstKind}) and (\ref{numericApproximationFromAbramowitzStegun}) are explained by the different nature of these formulas: (\ref{numericOfEllipticIntegralOfFirstKind}) is an asymptotic expansion, whereas (\ref{numericApproximationFromAbramowitzStegun}) is a polynomial fit to numerically computed values of the elliptic integral.
\subsection{Asymptotic behaviour on a regular section of trajectory}
Let us construct the solution in the form of an expansion in $\epsilon$:
\begin{equation}
u=\tanh(t)+\sum_{n=1}^\infty \epsilon^n u_n(t).
\label{AsymptoticsCloseToSeparatrix}
\end{equation}
Here the leading term of the asymptotic expansion is the separatrix solution of equation (\ref{eqSnJacobi}) at $\epsilon=0$.
Equations for the higher-order terms are obtained by substituting (\ref{AsymptoticsCloseToSeparatrix}) into (\ref{eqSnJacobi}) and collecting the coefficients of $\epsilon^k$, $k\in\mathbb{N}$. These equations form a recurrent system.
For example the equations for $u_1$ and $u_2$:
\begin{eqnarray*}
\frac{2}{\cosh^2(t)} u_1'+4\frac{\tanh(t)}{\cosh^2(t)}u_1 +\tanh^4(t)-\tanh^2(t)=0
\\
\frac{2}{\cosh^2(t)} u_2'+4\frac{\tanh(t)}{\cosh^2(t)}u_2 +(u_1')^2 +(- 6\tanh^2(t) +2)u_1^2
\\
+(4\tanh^3(t)-2\tanh(t)) u_1=0.
\end{eqnarray*}
The equation for the $n$-th order term has the form:
\begin{eqnarray}
\frac{2}{\cosh^2(t)} u_n'+4\frac{\tanh(t)}{\cosh^2(t)}u_n+\sum_{|\alpha|=n}A_\alpha \tanh^{\alpha_0}(t) u_1^{\alpha_1}\dots u_{n-1}^{\alpha_{n-1}}=0.
\label{EqSnJacobiUn}
\end{eqnarray}
Initial conditions for all corrections are $u_n|_{t=0}=0$.
The equations for the higher-order terms have solutions of the form:
$$
u_n=\frac{a_n(t)}{\cosh^2(t)}.
$$
In particular, for $a_1$ one obtains:
$$
a_1'=\frac{1}{2}\sinh^2(t),
$$
It yields:
$$
a_1=\frac{1}{8}\sinh(2t)-\frac{1}{4} t.
$$
The equation for $a_2$:
$$
a_2'=-\frac{1}{64}\cosh(4t)+\frac{5}{32}\cosh(2t)-\frac{1}{8}\, t \tanh(t)-\frac{t^2}{16\cosh^2(t)}-\frac{9}{64}.
$$
It yields:
$$
a_2=-\frac{t^2}{16}\tanh(t)-\frac{1}{256}\sinh(4t)+\frac{5}{64}\sinh(2t)-\frac{9}{64}t.
$$
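The explicit corrections can be checked against the original equation. The following Python sketch (our own verification, using the closed-form $a_1$, $a_2$ above and their hand-computed derivatives) evaluates the residual of the truncated expansion in (\ref{eqSnJacobi}) and confirms that including $u_2$ reduces the residual from $O(\epsilon^2)$ to $O(\epsilon^3)$:

```python
import math

def residual(t, eps, order):
    """Residual (u')^2 - (1-u^2)(1-(1-eps)u^2) of the truncated
    expansion u = tanh t + eps*u1 (+ eps^2*u2), with u_n = a_n / cosh^2."""
    ch, th = math.cosh(t), math.tanh(t)
    sech2 = 1.0 / ch**2
    a1  = math.sinh(2*t)/8 - t/4
    da1 = math.cosh(2*t)/4 - 0.25            # = (cosh 2t - 1)/4
    a2  = -t*t*th/16 - math.sinh(4*t)/256 + 5*math.sinh(2*t)/64 - 9*t/64
    da2 = (-t*th/8 - t*t*sech2/16 - math.cosh(4*t)/64
           + 5*math.cosh(2*t)/32 - 9.0/64)
    u  = th + eps*a1*sech2
    du = sech2 + eps*sech2*(da1 - 2*a1*th)
    if order >= 2:
        u  += eps**2 * a2 * sech2
        du += eps**2 * sech2 * (da2 - 2*a2*th)
    return du**2 - (1 - u*u) * (1 - (1 - eps)*u*u)

eps = 1e-4
r1 = abs(residual(1.0, eps, 1))   # truncation after u1: O(eps^2)
r2 = abs(residual(1.0, eps, 2))   # truncation after u2: O(eps^3)
print(r2 < r1)  # True
```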
The higher-order terms can be obtained in the same way using (\ref{EqSnJacobiUn}). In particular, for the $n$-th order term one obtains:
$$
a_n'=-\frac{1}{2\cosh^{2n-6}(t)}\sum_{|\alpha|=n}A_\alpha \tanh^{\alpha_0}(t)a_1^{\alpha_1}\dots a_{n-1}^{\alpha_{n-1}}
$$
The explicit forms of the solutions are rather lengthy. What matters here is their asymptotic behaviour as $t\to\pm\infty$:
$$
a_n=O(e^{\pm 2nt}).
$$
Then the interval of validity for the constructed expansion is:
$$
\epsilon e^{\pm 2t}\ll1,\quad t\ll\mp\frac{1}{2}\log(\epsilon).
$$
As a result, the asymptotic expansion of the elliptic function is valid when $\log(\epsilon)/2\ll t\ll -\log(\epsilon)/2$.
\begin{figure}
\vspace{-10cm}
\includegraphics[scale=0.5]{fig-sn-th.pdf}
\caption{Asymptotic curve and function $\hbox{sn}(t|1-\epsilon)$ near the separatrix.}
\end{figure}
The constructed expansion is valid on less than a half of the period of the elliptic function $\hbox{sn}$; it fails in the neighbourhoods of the turning points $u\sim1$, $u'=0$.
To match this expansion with the one constructed below near the turning points, we need the asymptotic behaviour of this expansion near the boundary of its domain of validity.
Let us change the variable:
$$
t=-1/2\log(\epsilon)+\tau.
$$
Here $\tau$ is the new independent variable. After the substitution one obtains, for $\tau\ll-1$, the asymptotic expansion
\begin{eqnarray*}
u\sim 1+\epsilon\left(-\frac{e^{2\tau}}{128}-2 e^{-2\tau}+\frac{1}{4}\right)+
\\
\epsilon^2\left(\frac{\tau e^{2\tau}}{256}-\frac{\log(\epsilon)e^{2\tau}}{512} -\frac{5 e^{2\tau}}{512}-\tau e^{-2\tau} +\frac{\log(\epsilon)e^{-2\tau}}{2}-\frac{e^{-2\tau}}{2}+2e^{-4\tau}+\frac{11}{64}\right)
\end{eqnarray*}
The same asymptotic expansion can be obtained for the neighbourhood of the lower separatrix by using the formula $u(t+T/2,\epsilon)=-u(t,\epsilon)$.
\subsection{Asymptotic behaviour near turning point}
The elliptic function $\hbox{sn}$ has an asymptotic behaviour of another type near the turning points. Here we construct the asymptotic expansion for $\hbox{sn}$ near the saddle point $(1,0)$:
\begin{equation}
u(t,\epsilon)=1+\sum_{n=1}^\infty \epsilon^nv_n(\tau).
\label{asymptotoicsNearTurningPoint}
\end{equation}
Here $v_n=v_n(\tau)$.
The equations for the coefficients of the expansion are obtained in the usual way, by collecting the terms of the same order in $\epsilon$. As a result one obtains a recurrent system of equations:
\begin{eqnarray*}
(v_1')^2=4v_1^2-2 v_1,
\\
2v_1'v_2'=8v_1 v_2-2v_2+4 v_1^3-5v_1^2,
\\
2v_1'v_n'=8v_1 v_n-2v_n+ P_n(v_1,\dots,v_{n-1}).
\end{eqnarray*}
Here $P_n$ is a polynomial of degree four, containing products $v_{k_1}v_{k_2} v_{k_3}v_{k_4}$ with $k_1+k_2+k_3+k_4=n$.
\begin{figure}
\vspace{-9cm}
\includegraphics[scale=0.5]{fig-sn-turning-point.pdf}
\caption{The neighbourhood of the turning point $u=1$: the asymptotic curve and the function $\hbox{sn}(t-\log(\epsilon)/2,\sqrt{1-\epsilon})$ for $\epsilon=0.01$. The turning point does not coincide with $\tau=0$, because $\tau$ was defined using only the leading term of the period, without the shift term $2\log(2)$.
}
\end{figure}
The solution for $v_1$ has the form:
$$
v_1=\frac{e^{2\tau}}{16} c_1+\frac{e^{-2\tau}}{4c_1}+\frac{1}{4}.
$$
Here $c_1$ is a free parameter of the solution.
The solution for the second-order term is:
$$
v_2=e^{2\tau}c_2-256 e^{-2\tau}c_2+\frac{e^{4\tau}}{32768}+\frac{\tau e^{2\tau}}{256}-\tau e^{-2\tau}-3 e^{-2\tau}+2 e^{-4\tau}+\frac{11}{64}.
$$
Here $c_2$ is also a free parameter of the solution.
The higher-order terms are solutions of first-order linear equations. Their solutions can be presented in the form:
$$
v_n=e^{2\tau}c_n-256 e^{-2\tau}c_n+ O(e^{\pm2n\tau}),\quad \tau\to\pm\infty.
$$
Here $c_n$ is a parameter; it is determined by matching with the asymptotic expansion valid outside the small neighbourhoods of the turning points.
The validity of this expansion is defined by the condition:
$$
\epsilon^{n+1}v_{n+1}=o(\epsilon^{n} v_n).
$$
Using the estimates for the growth of the terms one obtains:
$$
|\tau|\ll-1/2\log(\epsilon).
$$
The intervals of validity of (\ref{AsymptoticsCloseToSeparatrix}) and (\ref{asymptotoicsNearTurningPoint}) intersect when $t\gg1$ and $\tau\ll-1$. In this overlap region one can match the parameters of the asymptotic expansions; here these are the parameters $c_n$ of the terms $v_n$. In particular, the matching gives $c_1=-1/8$ and $c_2=-(\log(\epsilon)+5)/512$.
As a result:
\begin{eqnarray*}
u(t,\epsilon)\sim 1+\epsilon\left(-\frac{e^{2\tau}}{128} -2e^{-2\tau}+\frac{1}{4}\right)+
\\
\epsilon^2
\left(
-e^{2\tau}\frac{\log(\epsilon)+5}{512}+ e^{-2\tau}\frac{\log(\epsilon)+5}{2}+\right.
\\
\left. \frac{e^{4\tau}}{32768}+\frac{\tau e^{2\tau}}{256}-\tau e^{-2\tau}-3 e^{-2\tau}+2 e^{-4\tau}+\frac{11}{64}
\right).
\end{eqnarray*}
\subsection{Uniform asymptotic expansion}
\begin{figure}[t]
\vspace{-8cm}
\includegraphics[scale=0.5]{fig-sn-matching-asymptotics-1.pdf}
\vspace{-1cm}
\caption{The combined asymptotic approximation, which is valid over more than a half of the period of the elliptic function. $\epsilon=0.01$.}
\label{fig-sn-matching-asymptotics-1}
\end{figure}
Now we are ready to construct a combined approximation of the function $u(t,\epsilon)\equiv\hbox{sn}(t|1-\epsilon)$ which is uniform over more than a half of the period, $t\in(\log(\sqrt{\epsilon}),-3\log(\sqrt{\epsilon}))$, as $\epsilon\to0$. To this end we use the asymptotic device proposed by S.~Kaplun \cite{Kaplun} for combined approximations: we sum the constructed asymptotic expansions and subtract their common part. As a result we obtain the following formula (see figure \ref{fig-sn-matching-asymptotics-1}):
\begin{equation}
u(t,\epsilon)\sim \tanh(t)+\epsilon\frac{1}{\cosh^2(t)}\left(\frac{1}{8}\sinh(2t)-\frac{1}{4} t\right)+\frac{\epsilon}{4}-\epsilon^2\frac{e^{2t}}{128}.
\label{jacobi-sn-asymptotics-1}
\end{equation}
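The quality of (\ref{jacobi-sn-asymptotics-1}) can be probed numerically by integrating the defining equation (\ref{eqSnJacobi}) directly. A Python sketch (our own check; fourth-order Runge--Kutta on $u'=\sqrt{(1-u^2)(1-(1-\epsilon)u^2)}$, valid while $u$ stays below the turning point):

```python
import math

def combined(t, eps):
    """The combined approximation above of sn(t | 1 - eps)."""
    a1 = math.sinh(2*t)/8 - t/4
    return (math.tanh(t) + eps*a1/math.cosh(t)**2
            + eps/4 - eps**2 * math.exp(2*t)/128)

def sn_numeric(t_end, eps, steps=20000):
    """Integrate u' = sqrt((1-u^2)(1-(1-eps)u^2)), u(0)=0, by RK4."""
    f = lambda u: math.sqrt(max((1 - u*u) * (1 - (1 - eps)*u*u), 0.0))
    h, u = t_end / steps, 0.0
    for _ in range(steps):
        k1 = f(u)
        k2 = f(u + h*k1/2)
        k3 = f(u + h*k2/2)
        k4 = f(u + h*k3)
        u += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return u

eps = 0.01
print(abs(combined(2.0, eps) - sn_numeric(2.0, eps)))  # about 2.5e-3
```

For $\epsilon=0.01$ the deviation at $t=2$ is of the order of $10^{-3}$, i.e.\ small on the scale of the figure.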
In the neighbourhood of the left saddle point $u=-1$, $u'=0$ one could construct an analogous expansion directly. However, using the formula $u(t+T/2,\epsilon)=-u(t,\epsilon)$, the corresponding asymptotic expansion is obtained automatically from (\ref{jacobi-sn-asymptotics-1}) (see figure \ref{fig-sn-matching-asymptotics-2}).
\begin{eqnarray}
u(t,\epsilon)\sim -\tanh(t-T/2)-
\nonumber
\\
\epsilon\frac{1}{\cosh^2(t-T/2)}\left(\frac{1}{8}\sinh(2(t-T/2))-
\right.
\nonumber
\\
\left.
\frac{1}{4} (t-T/2)\right)+
\frac{\epsilon}{4}-\epsilon^2\frac{e^{2(t-T/2)}}{128}.
\label{jacobi-sn-asymptotics-2}
\end{eqnarray}
\begin{figure}[t]
\vspace{-8cm}
\includegraphics[scale=0.5]{fig-sn-matching-asymptotics-2.pdf}
\caption{The combined asymptotic approximation, valid on the second half of the period; $\epsilon=0.01$.}
\label{fig-sn-matching-asymptotics-2}
\end{figure}
The combined asymptotic approximation valid over the whole period of the elliptic function can be constructed in the same way, by summing the formulas (\ref{jacobi-sn-asymptotics-1}) and (\ref{jacobi-sn-asymptotics-2}) and subtracting their common part. However, the resulting formula is lengthy and is not written out here.
\pagebreak
\section{Introduction}\label{intro}
We are interested in higher dimensional spacetimes for which the curvature invariants of all orders vanish (VSI spacetimes); contained within this class are the higher dimensional pp-waves. In general, higher dimensional VSI spacetimes are of Ricci and Weyl type III \cite{Higher}. However, it is desirable to obtain explicit metric functions, as has been done in four dimensions \cite{4DVSI}. We present metrics for Ricci type N, Weyl type III VSI spacetimes.
\section{Higher dimensional VSI spacetimes}
Any $N$-dimensional VSI metric can be written in the form \cite{CMPPPZ,CSI}
\begin{eqnarray} \mathrm{d} s^2=2\mathrm{d} u\left[\mathrm{d}
v+H(u,v,x^k)\;\mathrm{d} u+W_{i}(u,v,x^k)\;\mathrm{d} x^i\right]+\mathrm{d} x^i\mathrm{d}
x^i \label{Kundt}\end{eqnarray}
where $u$, $v$ are light-cone coordinates and $x^i$, $i=1,\dots,N-2$, are real spatial coordinates. The functions $H$, $W_i$ are real-valued. Note that (\ref{Kundt}) is a subclass of the higher dimensional Kundt metrics \cite{CMPPPZ}. Restricting (\ref{Kundt}) to be of Ricci type N results in the Einstein equation $R_{uu}=\Phi$, where $\Phi$ is determined by the matter field (see Appendix). Two distinct cases arise depending on whether the functions $W_i$ depend on the light-cone coordinate $v$ ($\epsilon=1$) or not ($\epsilon=0$):
\begin{equation}
W_1 = -\frac{2\epsilon}{x^1}\;v +W^{(0)}_1(u,x^k),\;\;\;
W_j = W^{(0)}_j(u,x^k) \nonumber
\end{equation}
Here $j=2,\dots,N-2$; the superscript $(0)$ indicates functions without $v$-dependence. The case $\epsilon=W^{(0)}_i=0$ corresponds to the higher dimensional pp-waves. It is perhaps surprising that only one of the $W_i$ functions is allowed to depend on $v$, as in the $4$-D case. The resulting spacetimes are summarized in Table 1. An important remark is that the function $W^{(0)}_1$ can be gauged away by a coordinate transformation; the corresponding results have been presented recently \cite{vsipaper}. In this note we treat all the $W^{(0)}_i$ functions equally, thereby revealing additional algebraic symmetries in the metric functions of the solutions. This complements the paper by Coley et al., wherein further details and explanations may be found.
\begin{table}[h] \label{table:1}
\begin{center}
\scriptsize{
\begin{tabular}{|l|c|c|c|}
\hline
&&&\\
$\epsilon$ & {\bf Weyl} & {\bf Metric functions} & {\bf Eq.} \\
&&&\\
\hline
&&&\\
& III & $\begin{array}{c}
W_{i} = W^{(0)}_{i}(u,x^{k}) \\ \\
H = H^{(0)}(u,x^{i})+\frac{1}{2}\left(F-W^{(0)i}_{\quad\quad,i}\right)v,\;\; F(u,x^i)\;\mbox{defined by}\; F,_i=\Delta W^{(0)}_{i}
\end{array}$ & \ref{app1} \\
&&&\\
\cline{2-4}
&&&\\
0 & N & $ \begin{array}{c}
W_{i} = x^{k}B_{ki}(u),\;\;B_{ki}(u)\;\mbox{antisymmetric} \\ \\
H = H^{(0)}(u,x^{i}) \end{array}$ & \ref{app2} \\
&&&\\
\cline{2-4}
&&&\\
& O & $\begin{array}{c} W_i\;\mbox{as in type N} \\ \\ H^{(0)}=\frac{1}{2}W^iW_i+x^if_i(u)+H^{(0)}_{\Phi} \end{array}$ & $\begin{array}{c} \ref{app2} \\ \ref{app3} \end{array}$ \\
&&&\\
\hline
&&&\\
& III & $\begin{array}{c} W_1 = -\frac{2}{x^1}v + W^{(0)}_{1}(u,x^{k}) \\ \\
W_{j} = W^{(0)}_{j}(u,x^{k}) \\ \\
H = H^{(0)}(u,x^{i})+\frac{1}{2}\left(\tilde{F}-W^{(0)i}_{\quad\quad,i}-\frac{2W^{(0)}_{1}}{x^1}\right)v+\frac{v^2}{2(x^1)^2},\\ \tilde{F}(u,x^i)\;\mbox{defined by} \; \tilde{F},_1=\frac{2}{x^1}W^{(0)i}_{\quad\quad,i}+\Delta W^{(0)}_{1},\; \tilde{F},_j=\Delta W^{(0)}_{j} \end{array}$ & \ref{app4} \\
&&&\\
\cline{2-4}
&&&\\
1 & N & $\begin{array}{c} W_{1} = -2\frac{v}{x^{1}} + x^{j}B_{j1}(u)+C_{1}(u) \\ \\
W_{j} = x^{i}B_{ij}(u)+C_{j}(u),\;\; B_{j1}(u),B_{ij}(u)\; \mbox{antisymmetric} \\ \\
H = H^{(0)}(u,x^{i}) -\frac{W^{(0)}_{1}}{x^1}v+\frac{v^2}{2(x^{1})^{2}} \end{array}$ & \ref{app5} \\
&&&\\
\cline{2-4}
&&&\\
& O & $\begin{array}{c} W_i\;\mbox{as in type N} \\ \\ H^{(0)}=\frac{1}{2}(W^{(0)}_1)^2 +\frac{1}{2}\underset{j}{\sum} (W_j-x^1B_{1j})^2+x^1 g_0(u)+x^1x^ig_{i}(u)-\frac{1}{16}\Phi_0(u)x^1x^ix_i, \\ \Phi=\Phi_0(u)x^1 \end{array}$ & \ref{app5} \\
&&&\\
\hline
\end{tabular}
}
\caption{All higher dimensional VSI spacetimes of Ricci type N. Ricci type O (vacuum) for $\Phi=0$ in (\ref{app1})-(\ref{app5}).}
\end{center}
\end{table}
\section{Outlook: VSI's in supergravity}
We can consider the embedding of VSI spacetimes in supergravity. The idea is to construct bosonic solutions of the supergravity equations of motion in which the metric is that of a VSI spacetime \cite{gyrsugra,ortin,alanlett}. This has been extensively studied in the literature in the case of (Weyl type N) pp-waves, which are exact string solutions to all orders in $\alpha'$; the corresponding proof relies on the vanishing of the pp-wave curvature invariants. Furthermore, supergravity pp-wave solutions preserve supersymmetry. The VSI metrics presented here generalize the pp-waves while the vanishing of the invariants is maintained. Therefore it is natural to expect VSI supergravity solutions to share some of the features of the pp-wave solutions. In fact, a certain class of VSI supergravity solutions has already been proved to be exact to all orders in $\alpha'$ \cite{alanlett}. Supersymmetry properties have only been studied in the case of VSI's with a covariantly constant null vector \cite{ortin}. In conclusion, these results provide an opportunity to explore VSI supergravity solutions and their associated supersymmetries. \\
\section{Introduction}
In loop quantum gravity, the field variable is a self-dual connection instead of the metric; this new field variable is called the Ashtekar connection ~\cite{a1, a2}. The Wheeler-DeWitt equation is then reformulated in terms of the traces of the holonomies of the Ashtekar connection ~\cite{b1, b2}. In loop quantum gravity, the area and volume are represented by operators with discrete eigenvalues ~\cite{c1}. This discretization occurs near the Planck scale, and it modifies the usual energy momentum dispersion relation at that scale ~\cite{lqg1, lqg2}. Even though this modified dispersion relation reduces to the usual one in the IR limit, it deviates considerably from the usual energy momentum dispersion relation in the UV limit. Such a UV modification is also observed in Horava-Lifshitz gravity, which is motivated by Lifshitz scaling between space and time ~\cite{12a, 12b}. In fact, such Lifshitz scaling also deforms the standard energy momentum dispersion relation in the UV limit of the theory ~\cite{12d, 12e}. The Lifshitz deformation of supergravity theories in the UV limit has also been studied ~\cite{lf1, lf2, lf4, lf5}. It is also possible to motivate a different form of modified dispersion relation ~\cite{6, 7, 6ab, 7ab} using the high energy cosmic ray anomalies ~\cite{1, 2}. The modification of the standard dispersion relation has motivated the construction of double special relativity, where the Planck energy acts as a second universal constant ~\cite{4,5}. This theory of double special relativity has been constructed using a non-linear modification of the Lorentz group. Such modifications to the dispersion relation have also been obtained from string field theory ~\cite{7a,kuch,kuch1}.
Thus, along with these phenomenological reasons, there are strong theoretical reasons to modify the energy momentum dispersion relation ~\cite{6, 7, 6ab, 7ab}.
It is possible to generalize double special relativity to curved space-time, and the resultant theory is called gravity's rainbow ~\cite{gr, gr12}. In this theory, the metric depends on the energy of the probe used to analyze the geometry. The energy dependence is introduced into the metric using rainbow functions. As these rainbow functions depend on the energy of the probe, which in turn depends implicitly on the coordinates, they cannot be removed by rescaling ~\cite{gr14, gr14ab, gr17ab, gr18ab}. In fact, this is expected, as gravity's rainbow is related to the Lifshitz deformation of geometries ~\cite{gr14}. Here the energy of the probe is converted into the length scale at which the probe investigates the geometry, and this energy in turn has an upper bound. Hence in gravity's rainbow we cannot probe space-time below the Planck scale, as it is not possible to obtain an energy greater than the Planck energy. Experimental constraints on the rainbow functions from various experiments have been proposed ~\cite{w1}. The effect of these rainbow functions on the black hole information paradox has also been investigated ~\cite{w2}. The modification of a higher curvature gravity by gravity's rainbow has been studied, and used to analyze its quasinormal modes ~\cite{w5}. Gravity's rainbow geometries have been used to study modifications to the physics of neutron stars ~\cite{w7}. In all these systems, gravity's rainbow only changes the UV Planck-scale behavior; in the IR limit, the system behaves as the original undeformed one. Furthermore, the effect of the rainbow functions on a system depends on the kind of rainbow functions used to deform it ~\cite{w1}. Thus, in this paper, we will use the rainbow functions obtained from the modification of the energy momentum dispersion relation due to loop quantum gravity ~\cite{y1, y2}.
It has been proposed that a naked singularity will not form due to loop quantum gravitational effects; this was shown by analyzing the non-perturbative semi-classical modifications to a collapsing system near the singularity ~\cite{singu}. Here we will demonstrate that these results can be obtained by analyzing the modification of the energy momentum dispersion relation from loop quantum gravity ~\cite{gr, gr12}. To analyze the effect of such loop quantum gravitational modifications on the formation of naked singularities, we will use gravity's rainbow. It has been proposed that naked singularities cannot form due to the weak cosmic censorship conjecture ~\cite{cc12, cc14}. However, several violations of the cosmic censorship conjecture have been studied, and thus it seems possible for a naked singularity to form ~\cite{cc16, cc18, cc19, cc20}. In fact, it has been argued that the accretion properties of a collapsing system can distinguish between a naked singularity, a wormhole and a black hole ~\cite{si12}. It has been suggested that a naked singularity can form during a critical collapse of a scalar field ~\cite{si14}. The gravitational lensing by a strongly naked null singularity has been investigated ~\cite{si15}, and it has been demonstrated that the nature of this divergence is not logarithmic. It has also been suggested that the formation of a naked singularity can be tested using astrophysical observations ~\cite{si17}. The shadow of a naked singularity without a photon sphere has been analyzed ~\cite{si16}. It has been argued that naked singularities cannot form due to quantum gravitational effects ~\cite{si18, si19}. Thus, it becomes important to analyze the effect of loop quantum gravity on the formation of naked singularities. As such modifications to black hole geometries have already been studied ~\cite{gr14, gr14ab, gr17ab, gr18ab}, we will use gravity's rainbow to analyze the effect of loop quantum gravity on the formation of naked singularities.
The effect of rainbow functions on the formation of naked singularities has also been studied, and it was observed that the rainbow deformation can violate the cosmic censorship conjecture ~\cite{cca12}. However, we will argue that this cannot be the case: due to the rainbow deformation, it is not possible to probe space-time below the Planck scale, and this prevents the formation of naked singularities.
\section{Collapse in gravity's rainbow}
To properly analyze the formation of naked singularities and the weak cosmic censorship conjecture, we will first analyze the deformation of a solution of the Einstein equations by rainbow functions consistent with loop quantum gravity ~\cite{lqg1, lqg2}. It has been proposed that the Einstein equations acquire an energy dependence due to the rainbow deformation, $G_{\mu \nu } (E / E_p)=\mathcal{R}_{\mu\nu}(E / E_p)-\frac{1}{2}g_{\mu\nu}(E / E _p)\mathcal{R}(E / E_p)=8\pi T_{\mu \nu }$ (in standard units $G=c=1$). Here $E_p$ is the Planck energy, and $E$ is the energy at which the system is probed. Thus, the geometry depends on the energy used to probe it; for $E\ll E_p$, we can neglect the rainbow deformation. It is possible to incorporate this rainbow deformation into the spherically symmetric line element in comoving coordinates $(t, \; r,\; \theta, \; \phi)$ as (with $(-,+,+,+)$ signature)
\begin{align}\label{1}
ds^{2}= -\frac{e^{2 \lambda (t,r)}}{f(E)^2} dt^2 +\frac{e^{2 \psi (t,r)}}{g(E)^2} dr^2 +\frac{R(t,r)^{2}}{g(E)^2} d\theta^{2} +\frac{R(t,r)^{2} \sin^{2}{\theta}}{g(E)^2} d\phi^{2}
\end{align}
where $R(t,r)$ is the physical radius at time $t$ of the shell labeled by $r$. Here $f(E)$ and $g(E)$ are rainbow functions which make the metric depend on the energy of the probe $E$. As gravity's rainbow has to reduce to the usual general relativity in the IR limit, we expect these rainbow functions to satisfy
\begin{align}
\lim_{E/E_p\to 0} f(E/E_p) &= 1 & \lim_{E/E_p \to 0} g(E/E_p) & = 1
\end{align}
It is possible to obtain a specific form of these rainbow functions from loop quantum gravity ~\cite{lqg1,lqg2}. We will use these specific rainbow functions, and demonstrate that they prevent the formation of naked singularities. However, before that, we observe that any rainbow functions will limit the scale to which we can probe the system. It is possible to translate the uncertainty relation $\Delta p \geq 1/ \Delta x$ into a bound on the energy of the probe, $E \geq 1/\Delta x$. Here $\Delta x$ corresponds to the scale to which any length in the system can be measured. This length cannot be taken below the order of the Planck length, as such a bound follows from black hole physics ~\cite{uncer1, uncer2,quant}.
As there is a minimal length in the system, there is a bound on the maximum energy needed to probe it. If $E< E_p$ is this maximum probe energy, then we have to analyze the rainbow modifications to the system when $E$ is of the same order as $E_p$ (they can be neglected for $E\ll E_p$).
The formation of a naked singularity from a collapsing spherically symmetric object has already been studied ~\cite{Singh:1994tb, Singh:1997wa}. Here we will investigate the rainbow deformation of a collapsing spherically symmetric object.
To explicitly analyze such a system, we express the energy momentum tensor for a spherically symmetric object in comoving coordinates as $T^{\mu}_{~{\nu}}=\text{diag}(-\rho,\; p_r, \; p_\theta, \;p_\theta)$, where $\rho$, $p_r$ and $p_\theta$ are functions of $t$ and $r$.
Solving the non-zero components of the Einstein equations, suitably deformed by rainbow functions (using the \emph{OGRE} package ~\cite{Shoshany:2021iuc}), we obtain the following set of equations,
\begin{align}
G^t{}_t &= -\frac{g(E )^2\left(1-e^{-2\psi }\left(R'^2+2R R''-2R R'\psi '\right)\right)+e^{-2\lambda }f(E )^2\left(\dot{R}^2+2
R \dot{R}\dot{\psi }\right)}{R^2} \nonumber \\ & = -8\pi \rho \label{4.1} \\
G^r{}_r &= -\frac{e^{-2\psi }g(E )^2}{R^2}\left(e^{2\psi }-R'^2-2R R'\lambda '\right)-\frac{e^{-2\lambda }f(E )^2}{R^2}\left(\dot{R}^2+2R
\ddot{R}-2R \dot{R}\dot{\lambda }\right) \nonumber \\& =8\pi p_r \label{5.2} \\
G^{\theta }{}_{\theta } &= G^{\phi }{}_{\phi }=\frac{e^{-2\psi }g(E )^2}{R}\left(R''+R'\left(\lambda '-\psi '\right)+R\left(\lambda ''+\lambda
'^2-\lambda '\psi '\right)\right) \nonumber \\& -\frac{e^{-2\lambda }f(E )^2}{R}\left(\ddot{R}-\dot{R}\left(\dot{\lambda }-\dot{\psi }\right)+R\left(\ddot{\psi
}+\dot{\psi }^2-\dot{\lambda }\dot{\psi }\right)\right)\nonumber \\& = 8\pi p_\theta \label{8} \\
G^r{}_t& = \frac{2e^{-2\psi }g(E )^2\left(\dot{R}\lambda '+R'\dot{\psi }-\dot{R}'\right)}{R}=0\label{7}
\end{align}
where a dot and a prime denote derivatives with respect to the time and radial coordinates, respectively. Now, using the definition of the Misner–Sharp mass $F(t,r)$, $ 1-(F(t,r)/R)=g^{\mu \nu }\nabla _{\mu } R \nabla _{\nu }R$ ~\cite{Bambi:2019xzp}, we can write the rainbow deformation of the
Misner–Sharp mass as
\begin{align}\label{4}
F(t,r)=R\left(1+f(E )^2e^{-2\lambda }\dot{R}^2-g(E )^2e^{-2\psi }R'^2\right)
\end{align}
It would be useful to define $j=1-g(E)^2$. Using this, we observe $F'=j R'+8\pi \rho R^2R'$ and $\dot{F}=j \dot{R}-8\pi p_rR^2\dot{R}$. Now due to the conservation of energy momentum tensor ${T^{\mu }{}_{\nu ;\mu }=0}$, we obtain $\dot{\rho }+\left(\rho +p_r\right)\dot{\psi }+{2\left(\rho +p_{\theta }\right)\dot{R}}/{R}=0$ and $p_{\theta }'+\lambda '\left(\rho +p_r\right)+{2\left(p_r-p_{\theta }\right)R'}/{R}=0$.
The acceleration equation can be obtained by defining $h(t,r)=1-g(E)^2e^{-2\psi }R'^2$. Since $h(t,r)$ depends on the rainbow function $g(E)$, it is an energy-dependent function. Thus, the initial density and velocity profiles of the collapsing dust ~\cite{Singh:1994tb} depend on the maximum energy of the system, and get deformed in the UV limit. Using $h(t,r)$, we can obtain
\begin{align}\label{12}
{F(t,r)}/{R}=f(E )^2e^{-2\lambda}\dot{R}^2+h(t,r)
\end{align}
and ${\dot{h}}/{(1-h)}=- {2\dot{R}\lambda '}/{R'}$. Now we can write the equation for acceleration as
\begin{align}\label{14}
\ddot{R}&=\dot{R}\dot{\lambda }+\frac{e^{2\lambda }}{f(E )^2}\left(\frac{j }{2R}-4\pi R p_r-\frac{F}{2R^2}+\frac{(1-h)}{f(E
)^2R'}\left(-\frac{p_r'}{\rho +p_r}+\frac{2R'\left(p_{\theta }-p_r\right)}{R\left(\rho +p_r\right)}\right)\right)
\end{align}
We define the proper time by $d\tau =\frac{e^{\lambda }}{f(E)}\,dt$, and use it to obtain
\begin{align}\label{16}
\frac{d^2R}{d \tau ^2} &= \frac{\left(1-f(E )^2\right)\dot{R}\dot{\lambda }}{e^{2\lambda }}+\frac{1}{f(E )^2}\bigg(\frac{j}{2R}-4\pi R p_r-\frac{F}{2R^2}+\frac{(1-h)}{f(E )^2R'}\bigg(-\frac{p_r'}{\rho +p_r}\nonumber\\&+\frac{2R'\left(p_{\theta }-p_r\right)}{R\left(\rho
+p_r\right)}\bigg)\bigg)
\end{align}
This modified equation depends on the maximum energy of the system through the rainbow functions $f(E)$ and $g(E)$, so the conditions for collapse derived from it are implicitly energy dependent. This energy dependence can have important consequences for the formation of a naked singularity.
Let us take the example of a perfect fluid, where the radial and tangential pressures are equal ($p_r=p_\theta=p$), and write the acceleration equation for this perfect fluid as
\begin{align}\label{17}
\frac{d^2R}{d \tau ^2}=\frac{\left(1-f(E )^2\right)\dot{R}\dot{\lambda }}{e^{2\lambda }}+\frac{1}{f(E )^2}\left(\frac{j}{2R}-4\pi R p-\frac{F}{2R^2}-\frac{(1-h)}{f(E )^2R'}\frac{p'}{\rho +p}\right)
\end{align}
Now, setting the acceleration and velocity equal to zero, we obtain the Oppenheimer–Volkoff equation for hydrostatic equilibrium ~\cite{Oppenheimer:1939ne}
\begin{align}\label{18}
{-R^2p'=\frac{f(E )^2(\rho +p)}{1-\frac{F}{R}}\left(\frac{F}{2}+4\pi R^3 p-\frac{R(1 - g(E)^2) }{2}\right)}
\end{align}
where we have used $h={F}/{R}$ (after setting the velocity equal to zero). The derivative of the radial pressure appears in the acceleration equation, and the fate of the collapse is determined by the radial and tangential pressures. It is noted from Eq.\eqref{16} that positive and negative tangential pressures oppose and support the collapse, respectively. The rainbow functions appearing in Eqs.\eqref{14} and \eqref{16} can reduce the magnitude of the acceleration, but cannot change its overall sign, because these rainbow functions are always less than or equal to one ($f(E) \leq 1$ and $g(E) \leq 1$).
Thus, they only change the relative magnitude of these forces, and the effect these forces have on the collapsing system.
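As a consistency check, note that in the IR limit $f(E)=g(E)=1$ (so that $j=1-g(E)^2=0$), Eq.\eqref{18} reduces to the standard Oppenheimer–Volkoff equation of hydrostatic equilibrium,
\begin{align}
-R^2p'=\frac{\rho +p}{1-\frac{F}{R}}\left(\frac{F}{2}+4\pi R^3 p\right)
\end{align}
so the rainbow functions rescale, but do not qualitatively alter, the pressure gradient required for equilibrium.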
\section{Conditions for collapse from acceleration equation}
In this section, we will discuss different cases of spherical collapse, and the effect of the rainbow functions on them. For the dust case, we set the tangential and radial pressures to zero ($p_r=p_\theta=0$). Let us assume that the collapse starts from rest at $t=0$, and set the initial conditions as $
R(t,r)\big|_{t=0}=r ~ \text{and}
~ F(t,r)\big|_{t=0}=F_c(r)$.
Using these initial conditions in the expression for $\dot{F}$, the Misner–Sharp mass becomes
\begin{align}
F(t,r)=F_c(r) + (1-g (E)^2)(R-r) \label{18.1}
\end{align}
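Eq.\eqref{18.1} follows directly from the relation $\dot{F}=j \dot{R}-8\pi p_rR^2\dot{R}$ given above: for dust $p_r=0$, so at fixed $r$,
\begin{align}
\dot{F}=j\,\dot{R}\;\;\Rightarrow\;\; F(t,r)-F(0,r)=j\left(R(t,r)-R(0,r)\right)=(1-g(E)^2)(R-r),
\end{align}
where the initial conditions $R\big|_{t=0}=r$ and $F\big|_{t=0}=F_c(r)$ have been used.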
Here $p_{\theta}=p_{r}=0$ for the dust case, and $\lambda$ is a function of $t$ only. Thus, we can redefine $t$ and set $\lambda=0$. Similarly, for the dust case, $h(t,r)$ is a function of $r$ only. With these redefinitions of the variables, we can write the rainbow-deformed metric for the dust case as
\begin{align}\label{20}
{{ds}^2=\frac{-{dt}^2}{f(E )^2}+\frac{R^{{\prime 2}}}{1-h(r)}{dr}^2+\frac{R^2}{g(E )^2}{d\Omega }^2}
\end{align}
where $d\Omega^2=d\theta^2+\sin^2\theta ~ d\phi^2$. Here $h(r)$ is less than $1$, so that the coefficient of $dr^2$ remains spacelike.
From Eq.(\ref{12}), we find the equation governing the behaviour of dust in the framework of rainbow functions consistent with loop quantum gravity,
\begin{align}\label{21}
\frac{F_c(r)+ j(R-r)}{R}=f(E)^2\dot{R}^2+h(r)
\end{align}
Eq.\eqref{21} is discussed in detail in the next section.
For collapse to start, the acceleration has to be negative (inward). We will use this fact to derive the condition for collapse in the different cases of tangential pressure, radial pressure and a perfect fluid.
\subsection{Tangential Pressure}\label{tangential}
If the tangential pressure $p_{\theta}$ is non-zero while the radial pressure $p_{r}$ vanishes, Eq.(\ref{16}) simplifies. The resulting acceleration equation can be written as
\begin{align}\label{22}
\frac{d^2R}{d \tau ^2}=\frac{\left(1-f(E )^2\right)\dot{R}\dot{\lambda }}{e^{2\lambda }}+\frac{1}{f(E )^2}\left(\frac{j}{2R}-\frac{F}{2R^2}+\frac{(1-h)}{f(E )^2R'}\frac{2R'p_{\theta }}{\rho R}\right)
\end{align}
For collapse to begin, the acceleration has to be negative. Assuming the collapse starts from rest ($\dot{R}=0$), Eq.(\ref{12}) gives $h=\frac{F_c}{r}$, and the condition for collapse to begin at $t=0$ turns out to be,
\begin{align}
\frac{F_c}{2r}>\frac{\frac{j }{2}+\frac{2p_{\theta }}{f(E )^2\rho }}{1+\frac{4p_{\theta }}{f(E )^2\rho }}
\label{23}
\end{align}
If the condition in Eq.\eqref{23} is satisfied at $t=0$ and continues to hold at all later times, the collapse will proceed. In this case, the singularity will form at $r=0$, provided the acceleration remains negative throughout the later time evolution of the system. We also observe from Eq.\eqref{23} that this condition depends on the energy through the rainbow functions.
\begin{figure}[htp]
\centering
\includegraphics[height=6cm,width=8cm]{pf.pdf}
\includegraphics[height=6cm,width=8cm]{pg.pdf}
\caption{Plots of the R.H.S. of Eq.\eqref{23} versus $f(E)$ and $g(E)$ for different values of the ratio $p_\theta/\rho$.}
\label{pfg}
\end{figure}
The ratio $p_{\theta}/\rho$ generally evolves with time. However, to examine the dependence of the collapse on the rainbow functions, we assume that the ratio $p_{\theta}/\rho$ remains constant in time. For positive tangential pressure and density, we assume that this ratio lies in the range $0 \leq p_\theta/\rho \leq 1$. In Fig.~\ref{pfg}, we have plotted the condition of Eq.\eqref{23}. It follows from both graphs that, as the energy of the system tends towards the Planck scale, $f(E)$ and $g(E)$ tend towards zero. This makes it harder for the collapse to occur, which in turn has consequences for singularity formation. Hence we conclude that the rainbow functions modify the collapsing system. In the IR limit, $f(E) = g(E)=1$, we recover the condition described in ~\cite{Barve:1999ph},
\begin{align}
{\frac{F_c}{r}>\frac{4\left.p_{\theta }\right/\rho }{1+4\left.p_{\theta }\right/\rho }}
\end{align}
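The behaviour shown in Fig.~\ref{pfg} can be checked directly from Eq.\eqref{23}. A minimal numerical sketch (the function name and parameter values are illustrative, not taken from the paper), with $w=p_\theta/\rho$ held constant:

```python
def collapse_threshold(f, g, w):
    """Right-hand side of Eq. (23): the lower bound on F_c/(2r) that must be
    exceeded for collapse to begin, with w = p_theta/rho and j = 1 - g(E)^2."""
    j = 1.0 - g ** 2
    return (j / 2.0 + 2.0 * w / f ** 2) / (1.0 + 4.0 * w / f ** 2)

w = 0.3
# IR limit f = g = 1: recovers the general-relativistic condition
# F_c/r > 4w/(1 + 4w), i.e. F_c/(2r) > 2w/(1 + 4w).
assert abs(collapse_threshold(1.0, 1.0, w) - 2 * w / (1 + 4 * w)) < 1e-12

# Near the Planck scale (f, g -> 0) the threshold tends to 1/2, i.e. F_c > r.
assert abs(collapse_threshold(0.05, 0.05, w) - 0.5) < 1e-3
```

In particular, the threshold tends to $1/2$ at the Planck scale, i.e. collapse requires $F_c>r$, consistent with the observation above that the rainbow deformation makes it harder for the collapse to begin.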
\subsection{Radial Pressure}
Here we analyze the effect of a non-zero radial pressure $p_{r}$ with vanishing tangential pressure $p_{\theta}$. This assumption again simplifies Eq.(\ref{16}), and the resulting acceleration equation is given by
\begin{align}\label{48}
\frac{d^2R}{d \tau ^2} &= \frac{\left(1-f(E )^2\right)\dot{R}\dot{\lambda }}{e^{2\lambda }}+\frac{1}{f(E )^2}\bigg(\frac{j}{2R}-4\pi R p_r-\frac{F}{2R^2}-\frac{(1-h)}{f(E )^2R'}\left(\frac{2R'p_r}{R\left(\rho +p_r\right)}+\frac{p_r'}{\rho +p_r}\right)\bigg)
\end{align}
The condition on the radial pressure for collapse to begin at $t=0$ turns out to be,
\begin{align}\label{49}
\frac{F_c}{2 r}>\frac{\frac{j }{2}-4\pi r^2 p_r-\frac{2p_r+r p_r'}{f(E)^2(\rho +p_r)}}{1-\frac{4p_r+2 r p_r'}{f(E)^2(\rho +p_r)}}
\end{align}
Again, in the IR limit $f(E) = g(E)=1$, we obtain the condition for collapse derived from general relativity ~\cite{Barve:1999ph}, which depends on the density $\rho$, the radial pressure and its derivative,
\begin{align}
\frac{F_c}{2 r}>\frac{4\pi r^2 p_r+\frac{2p_r+r p_r'}{\rho +p_r}}{-1+\frac{4p_r+2 r p_r'}{\rho +p_r}}
\end{align}
\subsection{The Perfect Fluid}
In the perfect fluid approximation, the radial and tangential pressures are set equal, $p_r = p_\theta = p$. In this case, Eq.(\ref{16}) becomes,
\begin{align}\label{24}
{\frac{d^2R}{d \tau ^2}=\frac{\left(1-f(E )^2\right)\dot{R}\dot{\lambda }}{e^{2\lambda }}+\frac{1}{f(E )^2}\left(\frac{j}{2R}-4\pi R p-\frac{F}{2R^2}-\frac{(1-h)}{f(E )^2R'}\frac{p'}{\rho +p}\right)}
\end{align}
For collapse to begin, the acceleration has to be negative. Using Eq.(\ref{12}), we obtain $h=\frac{F_c}{r}$, and the condition for collapse to begin at $t=0$ turns out to be,
\begin{align}\label{24.01}
{\frac{F_c}{2r}>\frac{\frac{j }{2}-4\pi r^2 p-\frac{r p'}{f(E )^2(\rho +p)}}{1-\frac{2r p'}{f(E )^2(\rho +p)}}}
\end{align}
In the IR limit, $f(E) = g(E)=1$, we obtain the condition of general relativity,
\begin{align}
\frac{F_c}{2r}>\frac{4\pi r^2 p+\frac{r p'}{\rho +p}}{-1+\frac{2r p'}{\rho +p}}
\end{align}
We have investigated the dependence of these conditions on different physical values of the pressure: vanishing tangential pressure, vanishing radial pressure, and equal pressures. We are primarily interested in the dust case, and in the effect of loop quantum gravitational modifications on the formation of a naked singularity in the gravity's rainbow framework.
\section{The Dust Solution}\label{dust}
The Tolman-Bondi dust collapse has been investigated in general relativity ~\cite{Singh:1994tb,Barve:1999ph,Singh:1997wa,Gundlach:1997wm}. The results for the marginally bound and non-marginally bound cases have been analyzed in these studies. Here we restrict our discussion to the marginally bound case, i.e. $h(r) = 0$; the results for the non-marginally bound case can be derived using the same procedure. We now explicitly use the rainbow functions motivated from loop quantum gravity ~\cite{y1, Amelino-Camelia:2008aez}
\begin{eqnarray}
f(E )=1, &&\,\,\,\,\,\,\,\,\,\,\,\,\,\, g(E )=\sqrt{1-\eta \frac{E}{E_p}}
\end{eqnarray}
Using these rainbow functions, we can explicitly write the metric as
\begin{align}\label{29}
{ds}^2=-{dt}^2+R^{{\prime 2}}(t,r){dr}^2+\frac{R^2(t,r)}{g(E )^2}{d\Omega }^2
\end{align}
Here $\dot{R}$ also depends on the energy of the probe,
\begin{align}\label{30}
{\dot{R}=-\sqrt{j +\frac{F_c(r)-rj }{R}}}
\end{align}
We note that, along a radial null geodesic, we can write
\begin{align}\label{29.1}
\frac{\partial t}{\partial r}=R'
\end{align}
Solving Eq.(\ref{30}) at constant $r$ with the boundary condition $R\big|_{t=0}=r$, we obtain
\begin{align}\label{31}
t&= \frac{\sqrt{r F_c}}{j }-\frac{F_c-rj}{j^{3/2}}\tanh ^{-1}\bigg(\sqrt{\frac{F_c}{rj}}\bigg) -\frac{R}{j }\sqrt{j +\frac{F_c-rj}{R}}\nonumber\\&+\frac{F_c-rj}{j^{3/2}}\tanh
^{-1}\bigg(\sqrt{1+\frac{F_c-rj}{Rj}}\bigg)
\end{align}
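Eq.(\ref{30}) can also be checked numerically. A minimal sketch (with illustrative values of $r$, $F_c$ and $j$, not those used in the figures below) of the proper time for a shell to collapse from $R=r$ to $R=0$:

```python
import numpy as np

def collapse_time(r, Fc, j, n=200001):
    """Proper time T = int_0^r dR / sqrt(j + (Fc - r*j)/R) for a shell to
    collapse from R = r to R = 0, from Eq. (30) (trapezoidal integration)."""
    R = np.linspace(0.0, r, n)
    v = np.zeros_like(R)
    v[1:] = 1.0 / np.sqrt(j + (Fc - r * j) / R[1:])  # integrand -> 0 as R -> 0
    return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(R)))

r, Fc = 1.0, 0.1  # illustrative values with Fc > j*r, so the shell reaches R = 0
# GR limit j = 0: reproduces the marginally bound dust result T = (2/3) r^{3/2}/sqrt(Fc)
T_gr = collapse_time(r, Fc, 0.0)
assert abs(T_gr - (2.0 / 3.0) * r ** 1.5 / np.sqrt(Fc)) < 1e-4

# Rainbow-deformed case j > 0: collapse still completes in finite time, but more slowly
T_rainbow = collapse_time(r, Fc, 0.05)
assert T_rainbow > T_gr
```

The shell still reaches $R=0$ in finite proper time for $j>0$, but more slowly than in the GR limit, since for $R<r$ the rainbow term reduces the infall speed $\sqrt{j+(F_c-rj)/R}$.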
Following the standard procedure, we introduce the auxiliary variables $u$ and $X$ ~\cite{Dadhich:2003gw},
\begin{eqnarray}
u=r^\alpha ,\; \alpha>0, &&
\; X=\frac{R}{u}
\end{eqnarray}
In order for the singularity at $r = 0$ to be naked, radial null geodesics must be able to propagate outwards, starting from the singularity. A necessary and sufficient condition for this is that the area radius $R$ increases along an outgoing geodesic, because $R$ becomes negative in the unphysical region. Thus, in the limit of approach to the singularity, we write
\begin{align}\label{ghosh}
X_0 &= \underset{R\to 0,u\to 0}{\lim}\frac{R}{u}=\underset{R\to 0,u\to 0}{{\lim}}\frac{{dR}}{{du}}
\nonumber \\ & =\underset{R\to 0,r\to 0}{{\lim}}\frac{1}{\alpha r^{\alpha -1}}\frac{{dR}}{{dr}}=\underset{R\to 0,r\to 0}{{\lim}}\frac{1}{\alpha r^{\alpha -1}}\bigg(R'+\frac{\partial t}{\partial r}\dot{R}\bigg)\nonumber\\&=\underset{R\to 0,r\to 0}{{\lim}}\frac{1}{\alpha r^{\alpha -1}}R'\left(1+\dot{R}\right)
\end{align}
We evaluate $R'$ from Eq.\eqref{31}, substitute $R=Xr^\alpha$ in the resulting expression, and divide by $r^{\alpha-1}$ to obtain
\begin{align}\label{32}
\frac{R'}{r^{\alpha -1}}&=X\big\{-2r^{3/2} A_1 A_2F'_c\sqrt{j } +r \sqrt{F_c} \big(A_2+r^{\alpha /2} A_1\sqrt{X} \sqrt{j} +2A_1A_2\big(\tanh ^{-1}\big(\sqrt{{F_c}{(rj)^{-1} }}\big)\nonumber\\&-\tanh ^{-1}A_1\big) \big) \big(-j +F'_c\big)\big\}\big\{\sqrt{F_c} \big(r A_2j +r^{\alpha /2} A_1\sqrt{X} \big(r-2 r^{\alpha } X\big)j^{3/2}
\nonumber\\& +F_c \big(A_2-r^{\alpha /2} A_1\sqrt{X} \sqrt{j } \big)\big)\big\}^{-1}
\end{align}
where $A_1^2= 1-r^{-\alpha } (j r+F_c)(j X)^{-1} \; \text{and} \;
A_2^2=j (r-r^{\alpha } X )+F_c$. Taking $u=r^\alpha$ along the radial null geodesic, we can write
\begin{align}
\frac{dR}{du}=\frac{1}{{\alpha r}^{\alpha -1}}\frac{{dR}}{{dr}}=\frac{1}{{\alpha r}^{\alpha -1}}\left(R'+\frac{\partial
t}{\partial r}\dot{R}\right)
\end{align}
Using the expression from Eq. \eqref{29.1}, the above equation takes the form,
\begin{align}\label{33}
{\alpha \frac{{dR}}{{du}}=\left(1-\sqrt{j +\frac{F_c-rj }{R}}\right)\frac{R'}{r^{\alpha -1}}}
\end{align}
where $\frac{R'}{r^{\alpha -1}}$ is given by Eq.\eqref{32}. The root analysis of this equation is performed numerically, as it is not possible analytically. To proceed, we consider the power series form of $F_c(r)$,
\begin{align}\label{seriesF}
F_c(r)=F_0+F_1 r+F_2 r^2+F_3 r^3+ F_4 r^4+\cdots
\end{align}
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{53Xr.pdf}
\includegraphics[scale=0.4]{73Xr.pdf}
\includegraphics[scale=0.4]{93Xr.pdf}
\caption{Plots of $X$ versus $r$ for various values of the energy, $E=0, 0.1, 10$ and $100$, with the rainbow functions $f(E )=1$, $g(E )=\sqrt{1-\eta \frac{E}{E_p}}$. Here $E_p$ is taken as $10^{19}$ and $\eta=10^{17}$.}
\label{fig:Xr}
\end{figure}
We have plotted the value of $dR/du=R/u$ versus $r$ to see how it behaves as $r\to0$, i.e. to investigate whether $dR/du=R/u$ has a solution in this limit. As shown by the red curves in Fig.(\ref{fig:Xr}), in general relativity there is a real and positive value of $X$ in the limit $r\to 0$, while in gravity's rainbow the curves depart farther from the vertical axis as the value of $E$ increases. This suggests that no real and positive value of $X$ exists as $r\to 0$ once the deformation by gravity's rainbow is included, in contrast to general relativity. In fact, one can check numerically that the equation $dR/du=R/u$ has only complex roots for all values of $r<r_0$ near $r=0$. This can be checked for different values of the probe energy $E$; we observe that as long as $E<E_p$ but of the same order (so that $E/E_p$ cannot be neglected), the naked singularity does not form. This has been explicitly demonstrated for $E=0.1,10$ and $100$. The values of all model parameters used in the numerical analysis are given in the plots. We conclude from the above analysis that, due to the deformation by the rainbow functions motivated from loop quantum gravity ~\cite{y1, Amelino-Camelia:2008aez}, the naked singularity will not form.
\section{Conclusion}
It is known that the energy momentum dispersion relation is modified at the Planck scale by loop quantum gravitational effects. This modified dispersion relation can be used to obtain suitable rainbow functions. We have used those rainbow functions to analyze the effect of loop quantum gravity on a collapsing system, and demonstrated that the resulting modifications prevent the formation of a naked singularity. This was expected: in loop quantum gravity the Planck-scale structure of space-time is modified, so one would expect singularities to be removed by these effects. This is explicitly demonstrated in this paper using gravity's rainbow. The maximum energy of the probe is fixed using the uncertainty principle; thus, the energy in the rainbow functions is expressed in terms of a distance scale in the collapsing system.
It would be interesting to analyze the collapse in different modified theories of gravity using this Planck-scale modification. Thus, one could analyze this system in a gravity theory with higher curvature terms, and then deform that system by gravity's rainbow. We also point out that the deformation by gravity's rainbow depends on the choice of rainbow functions. Here the rainbow functions were obtained using results from loop quantum gravity; however, it is possible to obtain rainbow functions from other motivations. It is expected that the formation of naked singularities also depends critically on the kind of rainbow functions used to deform the system.
\section{Acknowledgement}
M. V. Akram would like to thank Sukratu Barve for useful discussions. I.A.Bhat would like to thank the Centre for Theoretical Physics, Jamia Millia Islamia, New Delhi for its hospitality where a major part of this work was carried out.
\section{Introduction}
The origin of the emission from the Galactic Center (GC) at keV--TeV energies
has been extensively discussed in the literature over last few years. In
their recent paper, \citet{Hooper10} claimed that the $\gamma$-ray emission
from the Galactic Center region, measured with the \emph{Fermi}\ LAT
instrument~\citep{FermiOverview} cannot be described by a combination of
spectra of known point sources, diffuse emission from the Galactic plane and
diffuse spherically symmetric component (changing on the scales much larger
than $1^\circ$). An additional spherically symmetric component was suggested
to be needed in the central several degrees. This component was then
interpreted as a dark matter annihilation signal with the dark matter
distribution having power law density profile $\rho(r) \propto r^{-\alpha}$,
$\alpha \approx 1.34$. The observed excess is at energies between $\sim
600$~MeV and $\sim 6$~GeV, and the mass of the proposed DM particle was
suggested to be in the GeV range.
In this work we analyze the \emph{Fermi}\ data, used in \citet{Hooper10}, utilizing
the data analysis tool, provided by the \emph{Fermi}\ team.
\section{Data}
For our analysis we consider two years of \emph{Fermi}\ data collected between
August 4th, 2008 and August 18th, 2010. The standard event selection for source
analysis, resulting in the strongest background-rejection power (\emph{diffuse
event class}) was applied.\footnote{See e.g.
\url{http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools}} In addition,
photons coming from zenith angles larger than $105^\circ$ were rejected to
reduce the background from gamma rays produced in the atmosphere of the Earth.
The \emph{Fermi}'s point-spread function (PSF) is non-Gaussian and strongly depends
on energy \citep{FermiCalibration,FermiOverview}. In order to properly take it
into account and better constrain the contributions from Galactic and
Extragalactic diffuse backgrounds we analyze a $10^\circ \times 10^\circ$
region around the Galactic Center.
\subsection{Model}
\label{sec:Model}
To describe emission in the $10^\circ \times 10^\circ$ region we use the model
containing two components -- point sources and diffuse backgrounds.
To model the contribution from the point sources we include 19 sources from 11
months \emph{Fermi}\ catalog \citep{1FGLcat} falling into the selected region plus 4
additional sources described in \citet{Chernyakova10}. We fix the positions of
the sources to the coordinates given in the catalog. We model their spectra as
power laws (in agreement with \citealt{1FGLcat}). Thus we have 46 free
parameters (power law index and norm for each of the sources) to describe the
point-source component of the model.
To describe the diffuse component of emission, we use the models for the
Galactic diffuse emission (\texttt{gll\_iem\_v02.fit}) and isotropic
(\texttt{isotropic\_iem\_v002.txt}) backgrounds that were developed by the LAT
team and recommended for the high-level
analysis~\citep{FermiEGB}\footnote{\url{http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood\_tutorial.html}}.
These models describe the contributions from the Galactic and extragalactic
diffuse backgrounds, respectively. The number of free parameters for the diffuse
background model is 2 (the norms for each of the backgrounds). The total
number of free parameters in our model is thus 48.
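The parameter bookkeeping and the power-law spectral form can be sketched as follows; this is an illustrative fragment, not the actual \texttt{gtlike} model definition:

```python
def power_law(E_GeV, N0, gamma, E0=1.0):
    """Differential photon flux dN/dE = N0 * (E/E0)**(-gamma)."""
    return N0 * (E_GeV / E0) ** (-gamma)

# 19 catalog sources + 4 additional sources, each with a free
# normalization and photon index ...
n_sources = 19 + 4
n_source_params = 2 * n_sources        # 46
# ... plus one free normalization for each diffuse template
n_diffuse_params = 2
n_total = n_source_params + n_diffuse_params
print(n_total)  # 48
```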
This model is similar to the one described in \citet{Chernyakova10}.
\subsection{Analysis}
\label{sec:analysis}
The data analysis was performed using the LAT Science Tools package with the
P6\_V3 post-launch instrument response function \citep{Rando09}.
We find the best-fit values of all parameters of the model of
Section~\ref{sec:Model} (using \texttt{gtlike} likelihood fitting tool) and
determine resulting log-likelihood \citep{Mattox96} of the model. Best fit
values for the obtained fluxes agree within statistical uncertainties with
fluxes reported in \emph{Fermi}\ Catalog~\citep{1FGLcat} and in
\citet{Chernyakova10} (e.g. for the central source we obtained the flux
$5.68\times 10^{-8}~\mathrm{cts/cm^2/s}$ while the catalog gives
$(5.77\pm0.3)\times 10^{-8}~\mathrm{cts/cm^2/s}$).
We then freeze the values of the free parameters of our model and simulate
spatial distribution of photons at energies above 1~GeV (using
\texttt{gtmodel} tool). The significance of residuals, (Observation - Model)/
statistical error, is shown in Fig.~\ref{fig:rel_residuals}. We see the absence
of structures in the central $2^\circ$ region. The average value of residuals
is about 10\% in the $2^\circ$ region around the GC, compatible with estimated
systematic errors (10-20\%) of \emph{Fermi}\ LAT at 1~GeV.\footnote{See e.g.
\url{http://fermi.gsfc.nasa.gov/ssc/data/analysis/LAT\_caveats.html}}
Thus we see that the adopted model (point sources plus galactic and
extragalactic diffuse components) explains the emission from the GC region and
no additional component is required.
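The residual-significance map defined above can be computed per pixel as follows; a minimal sketch, assuming Poisson statistics so that the statistical error of the predicted counts is $\sqrt{\mathrm{model}}$ (the exact error model of the LAT tools may differ):

```python
import numpy as np

def significance_map(observed, model):
    """Per-pixel (Observed - Model) / statistical error, approximating
    the statistical error of the predicted counts by sqrt(model)
    (Poisson regime)."""
    observed = np.asarray(observed, dtype=float)
    model = np.asarray(model, dtype=float)
    return (observed - model) / np.sqrt(model)

# toy 2x2 count maps
obs = np.array([[110.0, 95.0], [100.0, 102.0]])
mod = np.full((2, 2), 100.0)
print(significance_map(obs, mod))
```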
\begin{figure}
\includegraphics[width=.5\textwidth]{resid_significance}
\caption{The map of significance of residuals for the region around the
Galactic Center.}
\label{fig:rel_residuals}
\end{figure}
\begin{figure}
\includegraphics[width=.5\textwidth]{spectrum-v5}
\caption{Spectrum of the point source at the GC reported in
\citet{Chernyakova10} (green points) together with the HG10 total spectrum
from $1.25^\circ$ (black points), excess (blue squares) and GC point
source flux from HG10 (red open circles). Continuation of the HESS data
\citep{vanEldik:07,Aharonian:04} (blue points) with a power law is
shown with the dashed black line.}
\label{fig:gc_spectrum}
\end{figure}
\begin{figure*}
\centering
\begin{minipage}{.5\textwidth}
\includegraphics[width=\linewidth]{GalPlane}
\end{minipage}~\begin{minipage}{.5\textwidth}
\includegraphics[width=\linewidth]{galactic_tot1_3}
\end{minipage}
\caption{\textbf{Left:} the ``inner'' ($5^\circ$ around the Galactic plane)
and ``outer'' regions. \textbf{Right:} Effects of the energy dependence of
the effective area for the spectra of the ``inner'' and ``outer'' regions.
}
\label{fig:aeff}
\end{figure*}
\section{Discussion}
We conclude that the signal within the central $1^\circ{-}2^\circ$, containing the
``excess'' found by \citet{Hooper10} (\textbf{HG10} hereafter), can be well
described by our model (point sources plus Galactic and extragalactic
diffuse background components). The discrepancy is then due to a different
interpretation of the data.
The spectrum of the central point source (1FGL J1745.6-2900c, probably
associated with the Galactic black hole Sgr A$^*$) was taken in HG10 to be a
featureless power-law starting from energies of about 10~TeV (results of HESS
measurements, blue points with error bars in Fig.~\ref{fig:gc_spectrum},
\citealt{Aharonian:04,vanEldik:07}) and continuing all the way down to $\sim
1$~GeV. The flux attributed in this way to the central point source is
significantly weaker than in the previous works. For comparison, the (PSF
corrected) spectrum of the GC point source reported in~\citet{Chernyakova10}
is shown in Fig.~\ref{fig:gc_spectrum} in green points. Its spectral
characteristics are fully consistent with the results of the 11-month \emph{Fermi}\
catalog~\citep{1FGLcat} ($\sim 6\times 10^{-8}~\mathrm{cts/cm^2/s}$ above
1~GeV, compared to the $\sim 5\times 10^{-9}~\mathrm{cts/cm^2/s}$ at the same
energies in HG10). The change of the slope of the source spectrum below $\sim
100$~GeV, as compared with the HESS data is explained by \citet{Chernyakova10}
with the model of energy dependent diffusion of protons in the few central
parsecs around the GC. Alternatively, the spectrum can be explained with the
model developed in \citet{Aharonian05}. The low-energy (GeV) component of the
spectra in this model is explained by synchrotron emission from accelerated
electrons, while high-energy (TeV) one by inverse Compton radiation of the
same particles. According to the analysis of \citet{1FGLcat,Chernyakova10}
the central point source provides a significant contribution to the flux in the
1.25$^\circ$ central region. HG10 suggest, apparently, a different
interpretation. They assume that there is no significant change in the
spectrum of the central source at $\sim 100$~GeV and the spectrum observed by
HESS at high energies continues to lower energies. Then, a large fraction of
the flux between the energies $\sim 600$ MeV and $\sim 6$~GeV has to be
attributed to the ``DM excess''. One of the reasons in favor of such an
interpretation could be the feature in the total spectrum from the central
region (rise between $\sim 600$~MeV and several GeV) discussed in HG10. Such
a feature would also be consistent with a possible contribution from
millisecond pulsars \citep{Abazajian10a}, which is also expected to have a
maximum at $\sim 2{-}3$~GeV.
To illustrate the nature of the spectral shape at these energies we collected
``front converted'' (\textsc{front}) photons from a region of width
$5^\circ$ around the Galactic Plane (the ``\textit{inner}'' region) and from
the ``\textit{outer}'' region, as shown in the left panel of
Fig.~\ref{fig:aeff}. The count rate from each of these regions was divided by
the \emph{constant} effective area ($3500~\mathrm{cm}^2$) to obtain the
flux.\footnote{The effective area of \emph{Fermi}\ LAT is strongly energy dependent.
The number $3500~\mathrm{cm}^2$, roughly corresponding to the effective area
at $\sim 1$~GeV, is used here as a quick expedient (see below).} One sees
that the total emission from both regions demonstrates the same spectral
behavior as the excess of HG10, suggesting that this spectral shape is
\emph{not} related to the physics of the several central degrees. This drop
of flux at low energies is mainly due to the decreasing effective area of the
satellite.\footnote{\url{http://www-glast.slac.stanford.edu/software/IS/glast_lat_performance.htm}}
If we properly take into account the dependence of the effective area on
energy, we obtain a spectrum that ``flattens'' at low energies and exceeds
the flux from the central point source by a significant factor, as it should
(compare red and magenta points in the right panel of Fig.~\ref{fig:aeff}).
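The bias introduced by a constant effective area can be illustrated with a toy calculation; the $A_{\mathrm{eff}}(E)$ curve, counts, and exposure below are schematic placeholders, not the real LAT response:

```python
import numpy as np

# Toy effective-area curve: rises steeply below ~1 GeV, flat above.
# Schematic only -- NOT the real LAT instrument response.
def aeff_cm2(E_GeV):
    return 3500.0 * np.minimum(1.0, E_GeV ** 1.5)

E = np.array([0.3, 0.6, 1.0, 3.0])            # bin centers, GeV
counts = np.array([900.0, 700.0, 500.0, 200.0])
exposure_s = 6.3e7                            # ~2 years of livetime
dE = np.array([0.1, 0.2, 0.4, 1.0])           # bin widths, GeV

flux_const = counts / (3500.0 * exposure_s * dE)       # constant Aeff
flux_ecorr = counts / (aeff_cm2(E) * exposure_s * dE)  # energy dependent

# Below 1 GeV the constant-Aeff flux is biased low, mimicking a spectrum
# that drops toward low energies.
print(flux_ecorr / flux_const)
```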
Another reason for the decrease of the HG10 spectrum is the increase of
\emph{Fermi}\ LAT PSF at low ($\lesssim 1$ GeV) energies.\footnote{For example, for
normal incidence 95\% of the photons are contained within $\sim 1.6^\circ$ at
$1$ GeV and within $2.8^\circ$ at $500$ MeV.} This means that if one collects
photons from a relatively small region, such that a contribution from its
boundary (with the PSF width) is comparable to the flux from the whole region,
the spectrum would artificially decline, due to increasing loss of photons at
low energies. To disentangle properly which photons in the PSF region
originated from a localized source and which are part of the diffuse
background, special modeling is needed. In the monotonic spectrum of the GC
obtained by~\citet{Chernyakova10}, both these effects (effective area and PSF)
were taken into account, as it was derived from the $10^\circ\times10^\circ$
region using the \emph{Fermi}\ software.
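The boundary-loss effect can be illustrated with a toy Monte Carlo; a Gaussian PSF is assumed purely for illustration (the real LAT PSF is non-Gaussian with broader tails):

```python
import math
import random

def contained_fraction(R_deg, r95_deg, n=100_000, seed=42):
    """Monte Carlo fraction of photons emitted uniformly inside a disk of
    radius R_deg that still fall inside it after PSF smearing.  A Gaussian
    PSF with 95% containment radius r95_deg is assumed for illustration."""
    # 2-D Gaussian: 95% containment at r = sigma * sqrt(2 ln 20)
    sigma = r95_deg / math.sqrt(2.0 * math.log(20.0))
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        r = R_deg * math.sqrt(rng.random())      # uniform in the disk
        phi = 2.0 * math.pi * rng.random()
        x = r * math.cos(phi) + rng.gauss(0.0, sigma)
        y = r * math.sin(phi) + rng.gauss(0.0, sigma)
        inside += x * x + y * y <= R_deg * R_deg
    return inside / n

# 1.25 deg aperture: r95 ~ 1.6 deg at 1 GeV vs ~2.8 deg at 500 MeV
f_1000 = contained_fraction(1.25, 1.6)
f_500 = contained_fraction(1.25, 2.8)
print(f_1000, f_500)  # more photons leak out of the aperture at 500 MeV
```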
To further check the nature of the emission from the central several degrees,
we took a fiducial model, that contained the same galactic and extragalactic
diffuse components plus all the same point sources, but \emph{excluding the
point source in the center}. We then fit our data to this new model. Such a
fit attempts to attribute as many photons as possible from the region around
the GC to the emission of diffuse components. The procedure leaves strong
positive residuals within the central $1{-}2^\circ$. The spectrum of these
residuals is consistent with the spectrum of the central point source
of~\citet{Chernyakova10} (green points in Fig.~\ref{fig:gc_spectrum}). To
demonstrate, that the spatial distribution of these residuals is fully
consistent with the PSF of \emph{Fermi}, we compare their radial distribution in
various energy bins with the radial distribution around the Crab pulsar (as it
was done e.g. in~\citet{Neronov:10b}). The pulsar wind nebula associated with
the Crab has an angular size of $\sim 0.05^\circ$~\citep{Hester:08}. Thus, for
\emph{Fermi}\ LAT, Crab is a point source. The radial profile of residuals at all
energies has the same shape as Crab, as Fig.~\ref{fig:crab} clearly
demonstrates. As an additional check, we repeated the above test using only
\textsc{front} photons (as in this case the PSF is narrower) and arrived at
the same conclusion.
The above analysis demonstrates that the emission around the GC in excess of
diffuse components (galactic and extragalactic) is fully consistent with being
produced by the point source with the power-law spectrum, obtained
in~\citep{1FGLcat,Chernyakova10}, \emph{and no additional component is
required.}
\begin{figure*}
\centering
\begin{minipage}{.5\textwidth}
\includegraphics[width=\linewidth]{1000MeV_front_profile}
\end{minipage}~\begin{minipage}{.5\textwidth}
\includegraphics[width=\linewidth]{1500MeV_front_profile}
\end{minipage}
\begin{minipage}{.5\textwidth}
\includegraphics[width=\linewidth]{2500MeV_front_profile}
\end{minipage}~\begin{minipage}{.5\textwidth}
\includegraphics[width=\linewidth]{5000MeV_front_profile}
\end{minipage}
\caption{Radial profile of residuals at different energies around the GC as
compared to the radial profile of the Crab emission (renormalized so that the
total fluxes in each energy range coincide). In both cases only \textsc{front}
photons were used.}
\label{fig:crab}
\end{figure*}
A different question however is whether such an additional component may be
ruled out. To this end we have added to our model of Section~\ref{sec:Model} an
additional spherically symmetric component, whose intensity is distributed
around the center as $\rho^2(r)$ (where $\rho(r) \propto r^{-1.34}$, as found
in HG10). We observe that such a procedure does improve the fit (change in
the log-likelihood is 25 with only one new parameter added). The resulting
spectral component is shown in Fig.~\ref{fig:dm}. Some of the photons from the
galactic diffuse background were attributed by the fit procedure to the new
component, concentrated in several central degrees (within the Galactic
Plane). This phenomenon is probably related to the complicated and highly
non-uniform Galactic diffuse background in the central region\footnote{See
``\textit{Description and Caveats for the LAT Team Model of Diffuse
Gamma-Ray Emission}'' by the Diffuse and Molecular Clouds Science Working
Group, Fermi LAT Collaboration,
\url{http://fermi.gsfc.nasa.gov/ssc/data/access/lat/ring_for_FSSC_final4.pdf}.}
(cf. also the right panel of the Fig.~\ref{fig:model_cnt_map}).
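By Wilks' theorem, the quoted improvement of 25 in log-likelihood with one added parameter corresponds to a test statistic $TS = 2\,\Delta\ln L = 50$, i.e. roughly a $7\sigma$ effect if the parameter is treated as unconstrained:

```python
import math

# Wilks' theorem: adding one free parameter, TS = 2 * Delta(lnL) is
# asymptotically chi-squared distributed with one degree of freedom.
delta_lnL = 25.0
ts = 2.0 * delta_lnL                       # TS = 50

# For 1 d.o.f. the equivalent Gaussian significance is sqrt(TS),
# and the chance probability is p = erfc(sqrt(TS / 2)).
sigma = math.sqrt(ts)
p_value = math.erfc(math.sqrt(ts / 2.0))

print(f"TS = {ts:.0f}  ->  ~{sigma:.1f} sigma (p = {p_value:.1e})")
```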
\begin{figure}
\centering
\includegraphics[width=\linewidth]{DM2_5_2}
\caption{Spectrum of an additional spherically symmetric component,
distributed around the GC as the HG10 excess.}
\label{fig:dm}
\end{figure}
We should also note that HG10 modeled the diffuse background differently. They
considered contributions from the Galactic disk and spherically symmetric
emission in the region \emph{outside} central $2^\circ$ and then extrapolated
the diffuse model into the innermost $1^\circ-2^\circ$, arguing that the
contribution does not vary significantly in the range $2^\circ - 10^\circ$
off-center. The background model we used (see \citealt{1FGLcat,FermiEGB} for
the detailed description) is different from that of HG10, especially in the central
1-2$^\circ$, where the model flux is higher than the one extrapolated from larger
galactic longitudes, as one can clearly see on the right panel of the
Fig.~\ref{fig:model_cnt_map}.
\begin{figure*}
\includegraphics[width=.45\textwidth, angle=-90]{model_cntmap}
\caption{Left: $10^\circ\times10^\circ$ count map of the best-fit model. Right: only
the contribution from the Galactic and extragalactic backgrounds is shown.}
\label{fig:model_cnt_map}
\end{figure*}
\bigskip
Having the above considerations in mind, we think that the spectrum of the
central region, changing monotonically with energy, is well described by a
purely astrophysical model of the central point source, and therefore the
present data do not require any additional physical ingredients, such as a DM
annihilation signal or additional contributions from millisecond pulsars.
However, to firmly rule out emission from DM annihilation in the GC, a more
detailed model of the Galactic diffuse background is required. Additionally,
with future data, better statistics will reduce the error bars on the data
points around $\sim 100$~GeV, which will be helpful to better understand the
physics of the central point source.
\subsubsection*{Acknowledgments} We would like to thank M.~Chernyakova,
J.~Cohen-Tanugi, D.~Hooper, I.~Moscalenko, A.~Neronov, I.~Vovk for useful
comments. This work of A.B. and O.R. was supported in part by Swiss National
Science Foundation and by the SCOPES grant No.~IZ73Z0\_128040. The work of
D.M. was supported by grant 07/RFP/PHYF761 from Science Foundation Ireland
(SFI) under its Research Frontiers Programme.
\def\apj{ApJ}%
\def\apjl{ApJ}%
\def\apjs{ApJS}%
\def\araa{ARA\&A}%
\def\aap{A\&A}%
\section{Introduction}
Effective temperature
is a fundamental stellar parameter because
it defines the physical conditions of the stellar atmosphere and
it directly relates to the physical properties of the star: mass, radius and luminosity.
Its measurement is essential to determine the evolutionary state of the stars, to perform detailed chemical
abundance analysis, and to characterize exoplanets.
Among a variety of model-dependent techniques used to derive $T_{\mathrm{eff}}$ in F, G, and K type stars,
fitting Balmer lines offers two important advantages: it
is insensitive to reddening and only weakly
sensitive to other stellar parameters, such as metallicity
([Fe/H]\footnote{[A/B] = log $\text{N(A)/N(B)}_{\text{star}} -$ log $\text{N(A)/N(B)}_{\text{Sun}}$,
where N denotes the number abundance of a given element.})
and surface gravity (\mbox{log~\textit{g}}) \citep{Fu1993,Fu1994,BPO2000,BPO2002}.
For instance, variations of about 0.1 dex in either of these parameters induce
3 to 35 K variations in $T_{\mathrm{eff}}$,
depending on the metallicity of the star (see Table 4 in \citealt{BPO2002}, hereafter BPO02).
Thanks to this, the degeneracy between $T_{\mathrm{eff}}$ and [Fe/H]
when both parameters are simultaneously constrained with the
excitation and ionization balance of iron lines
(the parameters measured with this technique will be referred to as ``spectroscopic'' hereafter)
can be reduced by fixing the first to subsequently derive the second.
Thus, it is possible to distinguish minute differences in chemical abundances, as done e.g.
by \citet{porto2008} and \citet{ram2011}.
In spite of these advantages, the use of Balmer profiles fitting
remains sporadic because:
\begin{enumerate}[label=(\roman*)]
\item The normalization of wide line-profiles is complex,
especially in cross-dispersed echelle spectra, because of
the instrumental blaze and the fragmentation of the spectrum into multiple orders.
\item The accuracy of the models of Balmer lines is not well established,
which is partially a consequence of (i).
A clear example is the two ranges of $T_{\mathrm{eff}}$
derived for the Sun using the model of
BPO02 and spectra from different instruments, including the two versions of the
Kitt Peak National Observatory solar atlas of \citet{kurucz1984} and \citet{kurucz2005}
(hereafter KPNO1984 and KPNO2005, respectively).
A ``cool'' value of $\sim\!\!5670$ K is found by
\citet{pereira2013}\footnote{The authors used a different
implementation of self-broadening with later model atmospheres and different input physics.}
and \citet{onehag2014} from KPNO2005 and KPNO1984, respectively,
while a ``hot'' value of $\sim\!\!5730$ K is found by
BPO02, \citet{ram2011,ram2014} and \citet{cor2012} from other
spectra; precise values are listed in Table~\ref{zero-point}.
\end{enumerate}
The problem of normalizing H$\alpha$ in echelle spectra has been approached
by making use of fiber-fed spectra, whose blaze function is efficiently
removed by the flat field procedure
\citep[e.g.][]{fuhrmann1997,korn2003,korn2006,korn2007,lind2008,onehag2014}.
Also, a complex normalization method explained by BPO02 (hereafter 2D-normalization)
has been applied by some authors to remove the blaze
\citep[e.g.][]{fuhrmann1997,allende2004,ram2011,ram2014,matsu2017b,matsu2017}.
Briefly, the method consists of interpolating the blaze function from the echelle orders contiguous
to that containing H$\alpha$.
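A minimal sketch of this interpolation idea with a synthetic blaze; the parabolic blaze shape and order geometry below are illustrative only:

```python
import numpy as np

pix = np.arange(2048, dtype=float)

def toy_blaze(order):
    """Synthetic blaze shape: a parabola whose peak drifts with order."""
    center = 1024.0 + 15.0 * (order - 40)
    return 1.0 - 1.2e-7 * (pix - center) ** 2

# Blaze of the H-alpha order (here order 40) estimated by averaging the
# blaze shapes of the two contiguous, line-free orders 39 and 41.
blaze_interp = 0.5 * (toy_blaze(39) + toy_blaze(41))

# For this toy geometry the interpolation is accurate to a few 1e-5
max_err = np.max(np.abs(blaze_interp - toy_blaze(40)))
print(max_err)
```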
It is recognized
that the introduction of the self-broadening theory of hydrogen atoms by
BPO02 constitutes a significant advance in the completeness of the physics of Balmer line formation;
however, the tests on the Sun performed by the authors quoted above
indicate that the model, or its application, is not accurate enough.
As a consequence, subsequent works concentrated on improving the model
by adding more transitions in the self-broadening
\citep{allard2008,cayrel2011}, and replacing
LTE and 1D by non-LTE and 3D model atmospheres
\citep{barklem2007, ludwig2009, pereira2013, amarsi2018}
but the solar $T_{\mathrm{eff}}$ has not yet been recovered.
The large discrepancies in the solar temperatures derived using the same model and
different instruments suggest that
the treatment of observational spectra is the dominant source
of uncertainty;
H$\alpha$ profiles are so sensitive that a minute error in the continuum location
may significantly change the derived temperature.
The continuum location problem was already identified by BPO02,
who also estimated the errors induced by this process in the derived temperature.
In this work we aim to minimize these errors by a meticulous analysis of
spectra of F, G, and K stars.
We first eliminate instrumental blaze and spectral fragmentation inherent to echelle spectra by
using a long-slit single order spectrograph.
The continuum location is then optimized by a normalization-fitting iterative procedure,
and it is also fine tuned during the process by identifying telluric features that
contaminate the spectra.
As a first step of our program of chemical tagging,
mainly based on HARPS spectra, we establish the methodology
to derive $T_{\mathrm{eff}}$ from H$\alpha$ profiles.
We determine the accuracy of the temperature diagnostics
with H$\alpha$ profiles from 1D + LTE model atmospheres
and the self-broadening theory of BPO02
(these profiles will be referred henceforth as profiles from 1D model atmospheres
and their temperatures will be represented by $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$)
by comparing them with the accurate $T_{\mathrm{eff}}$'s of the \textit{Gaia Benchmark Stars} derived by interferometry.
The method we present is further validated by comparing the temperatures of the same stars
from \mbox{MUSICOS} spectra normalized by the \mbox{2D-normalization}, which is an independent method.
Finally, we prove the absence of residual blaze features in HARPS spectra
by processing them in the same way as the coud\'{e} spectra,
and obtaining compatible $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$'s.
This paper is organized as follows. In section~\ref{data} the selection of the sample is described
together with the characteristics of the spectroscopic observations.
In section~\ref{norm} we describe the normalization method.
In section~\ref{fitting} we describe the fitting procedure.
In section~\ref{2DN} we validate the normalization method.
The results are presented from section~\ref{accuracy} on.
In this section the accuracy of H$\alpha$ profiles from 1D models is determined.
In section~\ref{consistency} $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~is compared against temperature diagnostics from other frequent techniques.
In section~\ref{otherHa} we compare our H$\alpha$ temperature scale with others from the same and different models.
In section~\ref{3D} the effect of replacing 3D by 1D models is tested.
In section~\ref{Reliability} the suitability of HARPS for the use of this technique is tested.
Finally, in section~\ref{resume} we summarize our results and conclusions.
\section{Data}
\label{data}
\subsection{Sample selection}
The sample stars are presented in Table~\ref{objects}.
These are 43 F, G, and K type stars, including the Sun observed by means of the proxies Ganymede, Ceres, Calisto, and the Moon.
They were selected from the HARPS/ESO archive of reduced and calibrated data
to be brighter than $V = 7$, in order to obtain spectra of good quality with
the MUSICOS and coud\'{e} instruments.
Thus, three samples of spectra were collected
(named according to the spectrograph of acquisition).
More stars were observed with coud\'{e} in order to cover
as much as possible the \mbox{$T_{\mathrm{eff}}$--[Fe/H]--\mbox{log~\textit{g}}} parameter space.
Therefore, every object in the HARPS and MUSICOS subsamples has associated coud\'{e} spectra.
The parameter space covered by the sample stars is presented in Fig.~\ref{sample_space}.
Stellar parameters were extracted from a compilation of catalogs from the literature coded henceforth as follows:
(Sousa08) \citet{sousa2008},
(Ghezzi10) \citet{gh2010},
(Tsantaki13) \citet{tsa2013},
(Ramirez13) \citet{ram2013},
(Bensby14) \citet{bensby2014},
(Ramirez14a) \citet{ram_2014},
(Ramirez14b) \citet{ram2014},
(Maldonado15) \citet{maldo2015},
(Heiter15) \citet{heiter2015}.
In order to compare literature $T_{\mathrm{eff}}$ scales
with ours, we selected works that derived $T_{\mathrm{eff}}$ with three different techniques:
excitation and ionization of Fe lines (Sousa08, Ghezzi10, Tsantaki13, Bensby14,
Ramirez14a, Ramirez14b, Maldonado15),
photometric calibrations based in the Infrared flux method (Ramirez13)
and interferometry (Heiter15).
Most of the parameters in Table~\ref{objects} belong to Ramirez13 because our selection started
with this catalog, which has a large number of stars from H{\scriptsize IPPARCOS}
observable with the southern telescopes.
We added Ceres to the HARPS sample
to expand the data in time in order to
check the temporal stability of the instrument.
The solar proxies analyzed are listed in Table~\ref{proxies} together with their date of observation, S/N ratio, and
the temperatures derived in this work. We extracted 10 random spectra of the same object per day/year.
The only 6 spectra available for 2010/10 were complemented with
spectra from the nearby date 2010/12, and for 2007 and 2009 only the available spectra were used.
\begin{table*}
\small
\centering
\caption{ \small Sample stars.
Column 4 specifies the spectrograph of acquisition: coud\'{e} (Co), HARPS (HA) and MUSICOS (MU).
Columns 5, 6, and 7 list the atmospheric parameters used to select the sample.
The last column indicates the catalogs
that provide parameters of the star, with which we compare our results in Sect.~\ref{accuracy} and \ref{consistency}.
The identification code is:
(1) \citet{sousa2008}, (2) \citet{gh2010},
(3) \citet{tsa2013}, (4) \citet{ram2013}, (5) \citet{bensby2014}, (6) \citet{ram_2014},
(7) \citet{ram2014}, (8) \citet{maldo2015}, (9) \citet{heiter2015}.
The catalog from which the parameters in columns 5, 6 and 7 were taken is highlighted in bold.}
\label{objects}
\begin{tabular}{l c c c c c c l}
\hline\hline \\
Name & HD & HIP & spectrum & $T_{\mathrm{eff}}$ (K) & \mbox{log~\textit{g}} & [Fe/H] & \;\;\;ctlg \\
\hline \\
Moon & & & Co/HA/MU & 5771 & 4.44 & 0.00 & \\
Ganymede & & & Co/HA/MU & 5771 & 4.44 & 0.00 & \\
Calisto & & & Co & 5771 & 4.44 & 0.00 & \\
Ceres & & & HA & 5771 & 4.44 & 0.00 & \\
$\zeta$ Tuc & 1581 & 1599 & Co/HA & 5947 & 4.39 & $-0.22$ & 1,2,3,\textbf{4},5,8 \\
$\beta$ Hyi & 2151 & 2021 & Co & 5819 & 3.95 & $-0.13$ & 3,\textbf{4},9 \\
& 3823 & 3170 & Co/HA & 5963 & 4.05 & $-0.24$ & 1,2,3,\textbf{5},8 \\
$\tau$ Cet & 10700 & 8102 & Co/HA & 5390 & 4.52 & $-0.50$ & 1,2,3,\textbf{4},8,9 \\
$\epsilon$ For & 18907 & 14086 & Co/HA & 5065 & 3.50 & $-0.62$ & \textbf{4},9 \\
$\alpha$ For & 20010 & 14879 & Co & 6073 & 3.91 & $-0.30$ & \textbf{4},5 \\
$\kappa$ Cet & 20630 & 15457 & Co & 5663 & 4.47 & 0.00 & 2,\textbf{4},8 \\
$10$ Tau & 22484 & 16852 & Co & 5971 & 4.06 & $-0.09$ & 2,\textbf{4},5,8 \\
$\delta$ Eri & 23249 & 17378 & Co/HA & 5012 & 3.76 & 0.06 & 1,3,\textbf{4},8,9 \\
40 Eri & 26965 & 19849 & Co/HA & 5202 & 4.55 & $-0.28$ & 1,3,\textbf{4},8 \\
& 100623 & 56452 & Co/HA & 5241 & 4.59 & $-0.37$ & \textbf{4},5,8 \\
$\beta$ Vir & 102870 & 57757 & Co/MU & 6103 & 4.08 & 0.11 & 2,\textbf{4},9 \\
& 114174 & 64150 & Co & 5723 & 4.37 & 0.05 & \textbf{4},7 \\
$59$ Vir & 115383 & 64792 & Co & 5995 & 4.24 & 0.11 & 2,\textbf{4},5,8 \\
$61$ Vir & 115617 & 64924 & Co/HA/MU & 5571 & 4.42 & $-0.02$ & 1,2,3,\textbf{4},5,8 \\
$\eta$ Boo & 121370 & 67927 & Co/HA & 6047 & 3.78 & 0.26 & \textbf{4},9 \\
& 126053 & 70319 & Co & 5691 & 4.44 & $-0.36$ & 2,\textbf{4},8 \\
$\alpha$ Cen A & 128620 & 71683 & Co/HA & 5809 & 4.32 & 0.23 & \textbf{4},8,9 \\
$\psi$ Ser & 140538 & 77052 & Co/HA & 5750 & 4.66 & 0.12 & 7,\textbf{8} \\
& 144585 & 78955 & Co/HA & 5940 & 4.40 & 0.37 & 1,3,\textbf{5},6 \\
18 Sco & 146233 & 79672 & Co/HA/MU & 5789 & 4.43 & 0.02 & 1,3,\textbf{4},7,8,9 \\
& 147513 & 80337 & Co & 5855 & 4.50 & 0.03 & 1,2,3,\textbf{4},5,8 \\
$\zeta$ TrA & 147584 & 80686 & Co/HA & 6030 & 4.43 & $-0.08$ & \textbf{4},5 \\
12 Oph & 149661 & 81300 & Co/HA & 5248 & 4.55 & 0.01 & \textbf{4},5,8 \\
& 150177 & 81580 & Co/HA & 6112 & 3.77 & $-0.66$ & \textbf{5} \\
& 154417 & 83601 & Co/HA & 6018 & 4.38 & $-0.03$ & \textbf{4},5 \\
$\mu$ Ara & 160691 & 86796 & Co/HA/MU & 5683 & 4.20 & 0.27 & 2,\textbf{4},6,9 \\
70 Oph & 165341 & 88601 & Co & 5394 & 4.56 & 0.07 & \textbf{4},8 \\
$\iota$ Pav & 165499 & 89042 & Co & 5914 & 4.27 & $-0.13$ & \textbf{8} \\
& 172051 & 91438 & Co & 5651 & 4.52 & $-0.24$ & \textbf{4},5,8 \\
& 179949 & 94645 & Co/HA & 6365 & 4.56 & 0.24 & 1,2,3,\textbf{5},6 \\
31 Aql & 182572 & 95447 & Co/MU & 5639 & 4.41 & 0.41 & \textbf{5} \\
& 184985 & 96536 & Co/HA & 6309 & 4.03 & 0.01 & \textbf{2},5 \\
$\delta$ Pav & 190248 & 99240 & Co/HA & 5517 & 4.28 & 0.33 & 1,2,3,\textbf{4},8 \\
15 Sge& 190406 & 98819 & Co & 5961 & 4.42 & 0.05 & 2,\textbf{4},8 \\
$\phi^2$ Pav & 196378 & 101983 & Co & 5971 & 3.82 & $-0.44$ & \textbf{8} \\
$\gamma$ Pav & 203608 & 105858 & Co/HA/MU & 6150 & 4.35 & $-0.66$ & \textbf{4},5,8 \\
& 206860 & 107350 & Co/HA & 5961 & 4.45 & $-0.06$ & \textbf{4},8 \\
$\xi$ Peg & 215648 & 112447 & Co/MU & 6178 & 3.97 & $-0.27$ & \textbf{2} \\
49 Peg & 216385 & 112935 & Co/HA & 6292 & 3.99 & $-0.22$ & \textbf{4} \\
51 Peg & 217014 & 113357 & Co/HA/MU & 5752 & 4.32 & 0.19 & \textbf{4},5,8 \\
$\iota$ Psc & 222368 & 116771 & Co/HA & 6211 & 4.11 & $-0.12$ & 2,\textbf{4},8 \\
\hline \\
\end{tabular}
\end{table*}
\begin{figure}
\centering
{\includegraphics[width=.4\textwidth]{observed_stars_HR-eps-converted-to}}
\caption{Parameter space covered by the sample stars. The values are listed in Table~\ref{objects}.}
\label{sample_space}
\end{figure}
\subsection{do Pico dos Dias observations}
We used coud\'{e} and MUSICOS in 2016 and 2017.
Both spectrographs are fed by the 1.60 m Perkin-Elmer telescope of
do Pico dos Dias Observatory (OPD, Braz\'{o}polis, Brazil),
operated by Laboratório Nacional de Astrofísica (LNA/CNPq).
In the coud\'{e} spectrograph the slit width was adjusted to give a two-pixel resolving power
$R = \lambda/ \Delta \lambda = 45\,000$.
A 1800 l/mm diffraction grating was employed in the first order,
projecting onto a 13.5 $\mu$m, 2048 pixels CCD.
The spectral region is centered on the H$\alpha$ line \mbox{$\lambda = 6562.797$~\AA},
with a spectral coverage of $155$~\AA.
MUSICOS is a fiber-fed echelle spectrograph \citep[e.g.][]{1992A&A...259..711B}
(on loan from Pic du Midi Observatory since 2012)
available at the OPD/LNA. We employed the red channel,
covering $\lambda$5400-8900~\AA ~approximately, comprising about 50 spectral orders,
at \mbox{R $\sim 40~000$} and 0.05~\AA/pix dispersion in the H$\alpha$ wavelength range.
The exposure times were chosen to obtain S/N ratios of at least 250 for the faintest stars ($V \sim 7$)
and 300 on average for the other stars.
\section{Normalization}
\label{norm}
The challenge in normalizing H$\alpha$ profiles
arises from the uncertainty of the continuum location,
which is estimated by defining ``continuum windows''.
Thus, the success of the normalization
resides in the capability of identifying many wide
windows that allow one to determine the shape of the spectrograph response.
Frequently, the continuum windows are determined
using automatic or semiautomatic procedures, as the
IRAF\footnote{\textit{Image Reduction and Analysis Facility} (IRAF) is distributed by
the National Optical Astronomical Observatories (NOAO), which is operated by the
Association of Universities for Research in Astronomy (AURA), Inc., under contract to
the National Science Foundation (NSF).} task \textit{``continuum''},
selecting the wavelength bins with the highest fluxes by applying clipping.
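The clipping-based selection of continuum bins can be sketched as follows; the asymmetric clipping thresholds below are illustrative choices, not those of the IRAF task:

```python
import numpy as np

def continuum_mask(flux, n_iter=5, k_low=2.0, k_high=3.0):
    """Iterative asymmetric sigma clipping: bins well below the running
    mean (absorption/telluric lines) are rejected, while bins near the
    continuum are kept.  The thresholds are illustrative."""
    mask = np.ones(flux.size, dtype=bool)
    for _ in range(n_iter):
        mu = flux[mask].mean()
        sd = flux[mask].std()
        mask = (flux > mu - k_low * sd) & (flux < mu + k_high * sd)
    return mask

rng = np.random.default_rng(0)
spec = 1.0 + 0.005 * rng.standard_normal(500)   # flat continuum + noise
spec[100:110] -= 0.3                            # a strong absorption line
mask = continuum_mask(spec)
print(mask[100:110].any(), round(mask.mean(), 2))  # line rejected
```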
We improve this procedure by iterating on the normalization and fitting processes;
in this way, the compatibility at the extremes of the wings is checked after every fit.
This check is fundamental for consistent temperature measurements because,
although the spectrograph response may be well described by a low order polynomial
(as is the case of coud\'{e}),
the normalization by interpolation may be highly imprecise close to the line-core.
This occurs because the continuum regions available to interpolate the polynomial are short compared to the
fitted region;
thus, small errors in the outer profile wings
trigger larger errors close to the line-core, where the H$\alpha$ profile is most sensitive to the temperature.
With this method, explained below in detail, we minimize the main source of uncertainty,
as demonstrated by the very low dispersion of $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~values obtained with many solar
spectra in Sect.~\ref{zero-p} and \ref{Reliability}.
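The iterative normalize-fit loop can be illustrated with a toy model; the Lorentzian ``H$\alpha$'' profile, linear instrument response, and $T_{\mathrm{eff}}$ grid below are synthetic stand-ins for the real synthetic spectra and fitting code:

```python
import numpy as np

wave = np.linspace(6480.0, 6640.0, 801)

def toy_profile(teff):
    """Stand-in for a synthetic H-alpha profile: a Lorentzian whose depth
    grows with Teff (hotter stars -> stronger wings in this toy)."""
    depth = 0.2 + 1.0e-4 * (teff - 5000.0)
    return 1.0 - depth / (1.0 + ((wave - 6562.8) / 8.0) ** 2)

# "observed" spectrum: true profile times an unknown smooth response
true_teff = 5770.0
response = 1000.0 * (1.0 + 2.0e-4 * (wave - wave[0]))
obs = response * toy_profile(true_teff)

grid = np.arange(4500.0, 6501.0, 2.0)
teff = 5000.0                                   # starting guess
for _ in range(10):
    # (1) re-estimate the continuum where the current model is near 1
    win = toy_profile(teff) > 0.995
    cont = np.polyval(np.polyfit(wave[win], obs[win], 1), wave)
    norm = obs / cont
    # (2) re-fit Teff by least squares against the toy grid
    chi2 = [np.sum((norm - toy_profile(t)) ** 2) for t in grid]
    teff = grid[int(np.argmin(chi2))]

print(teff)  # close to the true value of 5770 K
```

The small residual bias of the recovered temperature in this toy illustrates the paper's point: even a sub-percent continuum error propagates into a measurable $T_{\mathrm{eff}}$ shift.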
Normalization is more complex in echelle spectra,
because of the correction of the blaze
and order merging.
As discussed by \citet{skoda2004}, distortions in the spectra, such as
discontinuities of the orders and ripple-like patterns
\citep[see][Fig.~11]{skoda2008}
are often produced in slit echelle spectrographs but possibly also in fiber-fed instruments.
When this occurs, the spectra are useless and a new reduction from raw data should be applied
following the recipe recommended by \citet{skoda2008}.
Of course, empirical corrections on the reduced spectra could recover the profiles,
but their goodness must be tested by recovering the $T_{\mathrm{eff}}$~accuracy obtained with non-distorted profiles.
On the other hand, spectra with no obvious distortions also need to be tested,
because subtle residual blaze features may remain and systematically impact the
$T_{\mathrm{eff}}$~estimate.
Residual blaze features distort the profiles, making them shallower
(more strongly close to the center of the spectral order);
thus, the distorted spectra mimic profiles of cooler temperatures.
In order to investigate this effect in HARPS, the 1D pipeline-reduced HARPS spectra were analyzed
in the same way as the coud\'{e} ones, and the derived $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$'s were compared.
The results of this analysis are presented in Sect.~\ref{Reliability}.
The normalization method applied to coud\'{e} and HARPS is independently validated by
deriving $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$'s from MUSICOS spectra normalized with the 2D-normalization.
These results are presented in Sect.~\ref{2DN}.
\begin{figure*}
\centering
{\includegraphics[width=14cm]{continuum0coude-eps-converted-to}}
{\includegraphics[width=14cm]{continuum1coude-eps-converted-to}}
{\includegraphics[width=14cm]{continuum2coude-eps-converted-to}}
{\includegraphics[width=14cm]{continuum3coude-eps-converted-to}}
{\includegraphics[width=14cm]{continuum4coude-eps-converted-to}}
{\includegraphics[width=14cm]{continuum5coude-eps-converted-to}}
{\includegraphics[width=14cm]{continuum6coude-eps-converted-to}}
{\includegraphics[width=.4\textwidth]{Moon2_clean_1d_norm2-eps-converted-to}}
\caption{Coud\'{e} H$\alpha$ profile of one of the solar proxies in Table~\ref{proxies}.
The red and black lines represent the synthetic and observed profiles.
The shaded regions are the windows of fits and
the circles represent the continuum bins
color-coded according to their frequency of appearance
in all coud\'{e} spectra.
The most frequent continuum windows are observed at [6500.25, 6500.50], [6504.50, 6505.00], [6619.70, 6620.50],
[6625.60, 6625.80] and [6626.50, 6626.80].
\textit{Bottom panel:} Histogram of temperatures related to the wavelength bins within the windows of fits.
A Gaussian is fitted to its median and robust standard deviation.}
\label{sun_normal}
\end{figure*}
\begin{figure*}[!]
\centering
{\includegraphics[width=14cm]{continuum0HARPS2-eps-converted-to}}
{\includegraphics[width=14cm]{continuum1HARPS2-eps-converted-to}}
{\includegraphics[width=14cm]{continuum2HARPS2-eps-converted-to}}
{\includegraphics[width=14cm]{continuum3HARPS2-eps-converted-to}}
{\includegraphics[width=14cm]{continuum4HARPS2-eps-converted-to}}
{\includegraphics[width=14cm]{continuum5HARPS2-eps-converted-to}}
{\includegraphics[width=14cm]{continuum6HARPS2-eps-converted-to}}
{\includegraphics[width=.4\textwidth]{HARPS_ceres-eps-converted-to}}
\caption{Analogous to Fig.~\ref{sun_normal} with a HARPS spectrum of one of the solar proxies
from Table~\ref{proxies}.
The gray line represents the spectrum in its original resolution and the black line represents the spectrum degraded to the
resolution of coud\'{e}.
Continuum bins in the degraded spectrum are highlighted in green; notice that they mostly match those of
Fig.~\ref{sun_normal}.}
\label{sun_normal_HARPS}
\end{figure*}
\begin{figure*}
\centering
{\includegraphics[width=.8\textwidth]{continuumBB+mute2151-eps-converted-to}}
{\includegraphics[width=.1265\textwidth]{CONTINUUM_HD2151_M104_n8_1d2_norm2_m_TAC-eps-converted-to}}
{\includegraphics[width=.8\textwidth]{continuumBB+mute10700-eps-converted-to}}
{\includegraphics[width=.1265\textwidth]{CONTINUUM_HD10700_M102_n1_1d2_NRM4-eps-converted-to}}
{\includegraphics[width=.8\textwidth]{Xtitle-eps-converted-to}}
{\includegraphics[width=.1265\textwidth]{continuum_Xtitle-eps-converted-to}}
\caption{\textit{Left panels:} Fitting of two coud\'{e} spectra (gray line)
with synthetic spectra of PWV with concentrations of 10 and 20 mm (red and blue lines).
The circles are the continuum wavelength bins on 1 $\pm~\sigma$(noise).
The shades represent 3 of the 5 continuum windows selected in Fig.~\ref{sun_normal}.
The arrows point to the windows contaminated by telluric features.
\textit{Right panels:} Flux histograms of the spectra on the left panels
with the same flux scale. The black horizontal line marks the continuum,
the dashed line is the average flux of the 5 continuum windows of Fig.~\ref{sun_normal},
and the shades are the spread.}
\label{telluric}
\end{figure*}
\subsection{Normalization of coud\'{e} and HARPS spectra}
\label{coude}
The normalization is applied by interpolating low order polynomials with
the IRAF task \textit{``continuum''}, integrated with the fitting code described in Sect.~\ref{fitting} in an iterative procedure:
\begin{enumerate}
\item A first gross normalization is performed neglecting
the region $6514-6610$~\AA~in the interpolation.
Although the extension of the H$\alpha$ wings is variable,
this region is kept the same for all the sample stars
with the purpose of keeping enough room to apply weights in nearby regions
to modulate the normalizing curve.
\item The obtained profile is used to fit a precipitable water vapor (PWV) spectrum
that will be used to verify the continuum level after every iteration,
see Sect.~\ref{Continuum_tuning}.
\item The same normalized profile is compared with the grid of synthetic profiles
using the fitting code described in Sect.~\ref{fitting} to find the most compatible one.
\item The compatibility between the normalized and synthetic profiles
must be visually checked at the
``transition regions'' ($\lambda < 6536$~\AA~and $\lambda > 6590$~\AA)
in which the continuum turns into line wings.
The regions of the line interior are very sensitive to
temperature, hence they are predominant in the fittings.
For this reason, if distortions are artificially introduced in the profile during the normalization,
they become more evident in the transition regions.
This procedure makes our normalizations weakly model-dependent,
because metallicity and
surface gravity (the parameters set beforehand) do not greatly influence the shape of the line,
especially in the transition regions.
We verified that changes as large as \mbox{$\sim\!\!\pm0.3$ dex} do not significantly modify
the shape of the normalized profiles, while larger changes may halt the procedure.
For consistency, HARPS spectra were degraded to the resolution of coud\'{e} in this step
(only for this step, not for the fitting procedure), see Fig.~\ref{sun_normal_HARPS}.
In Fig.~\ref{transition_regions} examples of transition regions at the red wing of
H$\alpha$ in solar spectra normalized by different authors are provided.
In it, the fit of the coud\'{e} spectrum of Fig.~\ref{sun_normal}
is compared with fits of KPNO2005, and the solar atlas of \citet{wall2011} (KPNO2011)
to show how this method improves the normalization.
\item Usually the first normalization is deficient; in this case
a second one is performed \textit{from scratch}, applying weights to the wings around 6514 and $6610$~\AA~to
make the profile deeper or shallower, as required to match the flux of the synthetic profile.
Then, another fit is applied and the matching check
described in step 4 is repeated.
The procedure finishes when the observed and synthetic profiles are
compatible in the transition regions, as shown in
Fig.~\ref{sun_normal} and \ref{sun_normal_HARPS}.
An example of the difference between the first gross normalization and the final normalization
is shown in Fig.~\ref{gross_norm}.
\end{enumerate}
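Steps 1 and 4 of the procedure above can be sketched in a few lines. The following is a minimal, illustrative numpy version, not the actual IRAF \textit{``continuum''} task; the window limits, the cubic polynomial degree, and the function names are assumptions taken from the text:

```python
import numpy as np

def normalize_halpha(wave, flux, degree=3, window=(6514.0, 6610.0)):
    # Step 1: fit a low-order polynomial to the pseudo-continuum,
    # excluding the H-alpha region from the interpolation.
    cont = (wave < window[0]) | (wave > window[1])
    x0 = wave.mean()  # center wavelengths to keep the fit well conditioned
    coeffs = np.polyfit(wave[cont] - x0, flux[cont], degree)
    return flux / np.polyval(coeffs, wave - x0)

def transition_residual(wave, norm, synth):
    # Step 4: compatibility check against the synthetic profile in the
    # "transition regions" (6514-6536 A and 6590-6610 A), where
    # normalization distortions become most evident.
    trans = ((wave > 6514.0) & (wave < 6536.0)) | \
            ((wave > 6590.0) & (wave < 6610.0))
    return float(np.mean(norm[trans] - synth[trans]))
```

A real application would then iterate, re-weighting the wing regions (step 5) until this residual becomes negligible.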
\subsection{Continuum fine-tuning}
\label{Continuum_tuning}
The solar KPNO2005 atlas and the lines catalog of
\citet{moore} were used to select windows free from metallic lines to check the continuum
during the normalization procedure.
However, the availability of these windows diminishes progressively in cool and metal-rich stars
and because of the presence of telluric lines.
Since the humidity at do Pico dos Dias Observatory often exceeded $90\%$ during our observations,
the contribution of many minute telluric lines is relevant in the coud\'{e} spectra.
To fine-tune the continuum level, as part of the procedure described in Sect.~\ref{coude},
we separated telluric features from noise by fitting the observed spectra with synthetic telluric spectra,
as shown in Fig.~\ref{telluric}.
The fits were attempted both with the \textit{Molecfit} software package,
described in detail in Sect.~\ref{telluric_correction},
and with the PWV library of \citet{Moehler2014}\footnote{\url{ftp://ftp.eso.org/pub/dfs/pipelines/skytools/telluric\_libs}}.
The former proved precise for fitting strong features,
but the latter contains many more weak features, which makes it more suitable for this
analysis.
The PWV library is available at resolutions R = 300~000 and R = 60~000,
for the air-masses 1.0, 1.5, 2.0, 2.5, 3.0
and water content of 0.5, 1.0, 1.5, 2.5, 3.5, 5.0, 7.5, 10.0, and 20.0 mm.
The fitting is performed by degrading the resolution of the original PWV spectra to match
that of the spectrograph used, and selecting the set of PWV spectra with the air-mass closest to that of the observation.
We quantified the displacement of the continuum due to the presence of telluric features as follows.
After normalizing all coud\'{e} spectra, continuum wavelength bins were identified in the
solar spectrum of Fig.~\ref{sun_normal} by applying \mbox{$\sigma$-clipping}.
The fluxes of these wavelength bins were then checked in all other normalized coud\'{e} spectra,
and none of them was found to remain as continuum in the whole sample.
The color code of the plot in the figure represents the frequency of appearance;
the windows at [6500.25, 6500.50], [6504.50, 6505.00], [6619.70, 6620.50], [6625.60, 6625.80],
[6626.50, 6626.80]~\AA~are the most frequent.
Fig.~\ref{telluric} shows two cases where two of these windows are affected by the presence of minute telluric lines,
and how much the average flux of the five mentioned windows decreases.
Analyzing all the sample spectra, we find that when the content of PWV is high, say between 7.5 and 20.0 mm,
minute telluric features are almost omnipresent and displace the continuum flux by about $0.5\%$.
In our experience, this issue may lead to underestimating the stellar temperature
by 30 to 100 K.
It is however difficult to provide a precise estimate because
the displacement produced is often not homogeneous, but a distortion of the continuum shape.
We stress that no correction is applied during this procedure, only a visual check.
The correction of strong features is done later, as explained in Sect.~\ref{telluric_correction}.
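As an illustration, the continuum depression described above can be quantified as a simple average over the five continuum windows. This is a hypothetical numpy sketch of the check (the window limits are the ones quoted in the text; the function name is our own):

```python
import numpy as np

# The five most frequent continuum windows (in Angstrom) of Fig. "sun_normal".
CONTINUUM_WINDOWS = [(6500.25, 6500.50), (6504.50, 6505.00),
                     (6619.70, 6620.50), (6625.60, 6625.80),
                     (6626.50, 6626.80)]

def continuum_displacement(wave, norm_flux, windows=CONTINUUM_WINDOWS):
    """Return the depression of the continuum (1 - mean window flux);
    a value of ~0.005 corresponds to the ~0.5% displacement quoted
    for high PWV content."""
    means = [norm_flux[(wave >= lo) & (wave <= hi)].mean()
             for lo, hi in windows]
    return 1.0 - float(np.mean(means))
```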
\section{Profiles fitting}
\begin{figure*}[t]
\centering
\includegraphics[width=14cm]{fitting_Mute_AXISHD2151-eps-converted-to}
\includegraphics[width=3.5cm]{Mute_HD2151_M104_n8_1d2_norm2_m_TAC-eps-converted-to}\\
\caption{Telluric correction and profile fitting of the coud\'{e} spectrum of \mbox{HD 2151}.
\textit{Left panel: }Corrected and non-corrected spectra are represented by
the black and blue lines, respectively. The windows of fits are represented by the shades,
and the arrows point to those where the relative flux was perfectly recovered.
The red line represents the fitted synthetic profile.
\textit{Right panel: }
Histogram of temperatures related to the wavelength bins inside the windows of fits.
The most probable $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~is shown in the top part of the plots; the \mbox{log~\textit{g}}~and [Fe/H] values used for the fits,
along with their sources in the literature, are also shown.}
\label{t_correction}
\end{figure*}
\label{fitting}
This study is based on the grid of synthetic profiles of BPO02
computed using the self-broadening theory developed in \citet{BPO2000}
and the 1D LTE plane-parallel model atmospheres from the MARCS code
\citep{asplund1997}.
The atmospheric parameters of the grid are
$T_{\mathrm{eff}}$: 4400 to 7500 K with steps of 100 K,
[Fe/H]: $-3.0$ to $+0.5$ dex with steps of 0.5 dex,
\mbox{log~\textit{g}}: 3.4 to 5.0 dex with steps of 0.5 dex and
microturbulence velocity of 1.5 km/s.
In order to derive very precise $T_{\mathrm{eff}}$'s around solar parameters,
a more detailed grid based on the same theoretical recipe used by \citet{ram2011}
(kindly provided by the first author via private communication) is also used here;
its parameters are
$T_{\mathrm{eff}}$: 5500 to 6100 K with steps of 10 K,
[Fe/H]: $-3.0$ to $+0.3$ dex with steps of 0.05 dex,
\mbox{log~\textit{g}}: 4.2 to 4.65 dex with steps of 0.05 dex and
microturbulence velocity of 1.5 km/s.
The fitting between the observed and synthetic profiles is performed using
the ``windows of fits'' free from metallic lines: [6556.45, 6556.55], [6559.00, 6559.20], [6559.86, 6560.08],
[6561.30, 6561.60], [6566.00, 6566.30], [6567.90, 6568.10], [6577.10, 6577.40],
[6589.55, 6589.80]\footnote{No more windows in the blue wing of the profile were included
because our spectra appear systematically contaminated by telluric features in this region}.
A program in IDL\footnote{Interactive Data Language, version 7.0} was written to perform the fits
while eliminating the influence of contaminated wavelength bins.
It first interpolates the resolution of the grids to 1 K, 0.01 dex and 0.01 dex
in $T_{\mathrm{eff}}$, [Fe/H] and \mbox{log~\textit{g}}.
Then, for each wavelength bin, the temperature related to the interpolated synthetic profile
with the closest flux value is chosen, with [Fe/H] and \mbox{log~\textit{g}}~fixed beforehand by the user.
The most probable temperature and its uncertainty are determined by the median and the robust
standard deviation (1.4826 times the median absolute deviation) of the histogram,
see e.g. Fig.~\ref{sun_normal} and \ref{sun_normal_HARPS}.
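The per-bin scheme can be sketched as follows. This is a hedged Python re-implementation of the idea behind the IDL program, not the original code, assuming the grid has already been interpolated to 1 K steps with [Fe/H] and \mbox{log~\textit{g}}~fixed:

```python
import numpy as np

def teff_from_bins(flux_obs, grid_teffs, grid_fluxes):
    """For each wavelength bin inside the windows of fits, pick the grid
    temperature whose synthetic flux is closest to the observed flux.
    The estimate is the median of the per-bin temperatures; the error is
    the robust standard deviation (1.4826 x the median absolute deviation).

    flux_obs    : (n_bins,) observed normalized fluxes
    grid_teffs  : (n_teff,) temperatures of the interpolated grid
    grid_fluxes : (n_teff, n_bins) synthetic fluxes at the same bins
    """
    flux_obs = np.asarray(flux_obs)
    grid_fluxes = np.asarray(grid_fluxes)
    best = np.argmin(np.abs(grid_fluxes - flux_obs[None, :]), axis=0)
    t_bins = np.asarray(grid_teffs)[best]
    teff = float(np.median(t_bins))
    sigma = 1.4826 * float(np.median(np.abs(t_bins - teff)))
    return teff, sigma
```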
\subsection{Telluric correction}
\label{telluric_correction}
The resolution and sampling of the coud\'{e} spectra allow a total of 26 to 27 wavelength bins
inside the windows of fits, enough to perform the fitting procedure described in Sect.~\ref{fitting}.
In order to optimize $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~and its error determination when windows of fits are contaminated
and to provide a spectral library clean from telluric features,
we corrected the normalized coud\'{e} spectra with the \textit{Molecfit} software package
\citep{smette_molecfit_2015, kausch_molecfit:_2015}.
This software computes the transmission of the Earth's atmosphere at the time of the observations
with the radiative transfer code LBLRTM \citep{clough_atmospheric_2005},
taking into account spectroscopic parameters from the HITRAN database
\citep{rothman_hitran2012_2013} and an atmospheric profile.
The atmospheric transmission is fitted to the observed spectrum,
and the telluric correction is done dividing the observed
spectrum by the atmospheric transmission.
We used the average equatorial atmospheric profile, which is \textit{Molecfit}'s
default profile. We chose to fit H$_2$O (the main absorber in this wavelength region),
O$_2$, and O$_3$. The line shape is fitted by a boxcar profile; as starting value for
the boxcar FWHM we used 0.36 times the slit width.
The wavelength solution of the atmospheric transmission is adjusted with a first degree polynomial.
First, we ran \textit{Molecfit} automatically on all spectra,
avoiding the center of the H$\alpha$ line from 6560 to 6566~\AA.
If the residuals of this first telluric correction were larger than 2\% of the continuum,
we adapted the starting value of the water abundance and performed a second fit.
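The correction step itself amounts to a division by the fitted transmission. The following sketch shows the principle only (\textit{Molecfit} performs this internally; the masking threshold is our own illustrative assumption, not a \textit{Molecfit} parameter):

```python
import numpy as np

def telluric_correct(flux_obs, transmission, floor=0.05):
    # Divide the observed spectrum by the fitted atmospheric transmission.
    # Pixels where the transmission is very low are masked with NaN,
    # since the stellar flux cannot be recovered reliably there.
    t = np.asarray(transmission, dtype=float)
    safe = np.where(t > floor, t, np.nan)
    return np.asarray(flux_obs, dtype=float) / safe
```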
This telluric correction allowed us
to recover with precision the stellar flux inside the contaminated windows of fits in most cases.
An example is shown in Fig.~\ref{t_correction} where the corrected
and non-corrected spectra of \mbox{HD 2151} are over-plotted.
The telluric corrected and non-corrected normalized coud\'{e} spectra of the sample stars in
Table~\ref{objects} can be accessed at an on-line repository\footnote{\url{https://github.com/RGiribaldi/Halpha-FGKstars}},
or by contacting the first author.
\section{Validation of the normalization method}
\label{2DN}
BPO02 found the 2D-normalization efficient in removing the spectral blaze;
the method is described in detail in their paper.
It is referred to as 2D-normalization because it depends on the two spatial dimensions of the CCD detector.
Namely, the normalization curve of the spectral order of
interest is found by interpolating the normalization curves of the adjacent orders in the pixel domain.
We validate the normalization method described in Sect.~\ref{coude}
used on coud\'{e} and HARPS spectra,
deriving $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~with MUSICOS spectra normalized by the 2D-normalization.
The comparison in Fig.~\ref{coude_MUSICOS} shows that
$T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~derived with coud\'{e} and MUSICOS
are compatible for all stars.
We find no trend with respect to the
atmospheric parameters, a negligible offset of $-1$ K and a
low scatter of 25 K.
Solar spectra reflected by the Moon and Ganymede
were also normalized with this method, from which we derive the average value $5745 \pm 16$ K
(see comparative values in Table~\ref{proxies}, the profile fits are shown in Fig.~\ref{MUSICOS_fittings})
consistent with $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$'s listed in Table~\ref{zero-point}
derived from coud\'{e} and HARPS spectra.
\begin{figure}
\centering
{\includegraphics[width=.45\textwidth]{coude_vs_MUSICOS_teff-eps-converted-to}}
{\includegraphics[width=.45\textwidth]{coude_vs_MUSICOS_fe_h-eps-converted-to}}
{\includegraphics[width=.45\textwidth]{coude_vs_MUSICOS_log_g-eps-converted-to}}
\caption{Temperature diagnostics from MUSICOS with respect to those of coud\'{e} vs.
atmospheric parameters.
[Fe/H] and \mbox{log~\textit{g}}~values from Table~\ref{objects} were used here.
The \mbox{$-1$ K} offset and its 25 K scatter are represented by the dashed lines and the shades, respectively.}
\label{coude_MUSICOS}
\end{figure}
\section{Accuracy of 1D model atmospheres}
\label{accuracy}
\subsection{The zero-point}
\label{zero-p}
\begin{figure*}[t]
\centering
{\includegraphics[width=0.8\textwidth]{Temperatures_literature-eps-converted-to}}
\caption{Graphic representation of solar $T_{\mathrm{eff}}$~values in Table~\ref{zero-point}.
The horizontal line represents the solar $T_{\mathrm{eff}}$~measured by the Stefan-Boltzmann equation.
Works that used theoretical models based on 1D atmosphere models are represented by circles,
and those that used 3D models by triangles. Gray circles represent works that used the theoretical model of BPO02, and
green circles represent a different/enhanced recipe.
Works that used KPNO solar atlases are labeled in blue. For these, for comparison purposes, our measurements from
the corresponding KPNO spectra are included as red crosses on the same line.}
\label{temperatures_literature}
\end{figure*}
We used the 6 blaze-free coud\'{e} solar spectra listed in Table~\ref{proxies}
to determine the accuracy of H$\alpha$ profiles from 1D model atmospheres for the Sun.
The fitted profiles are shown in Fig.~\ref{coude_fitted};
we obtain the average value $5744 \pm 7$ K.
Since we find good agreement between the determinations
from coud\'{e}, MUSICOS, and HARPS spectra
(Sec.~\ref{2DN} and \ref{Reliability}),
we determine the zero-point of the model by averaging the
inferred $T_{\mathrm{eff}}$~values from all solar spectra,
resulting in an offset of $-28 \pm 1$ K
with respect to the 5772 K \citep{Prsa2016,heiter2015}
measured by the Stefan-Boltzmann equation.
Our zero-point supports the temperature values initially found by
BPO02 with their \mbox{MUSICOS} spectrum and the KPNO1984 atlas,
and those found later by \citet{ram2011,cor2012} and \citet{ram2014}
with MIKE spectra.
On the other hand, it disagrees with any value derived from
KPNO solar atlases, including our own determinations.
These values are presented in Table~\ref{zero-point} and Fig.~\ref{temperatures_literature}
along with those derived by other authors using
enhanced theories from BPO02 on.
Fig.~\ref{temperatures_literature} shows that none of the models recovered the solar $T_{\mathrm{eff}}$,
including the most sophisticated ones, i.e. \citet{pereira2013} based on
3D models and \citet{amarsi2018} based on 3D models and NLTE conditions.
The plot also shows that the determinations from KPNO spectra are systematically cooler than those from other spectra,
except for the first one of BPO02.
Notice that this determination disagrees with that of \citet{onehag2014}, although both were
obtained with the same version of the KPNO atlas and
the same broadening recipe. This is explained by the synthetic profiles being computed from different versions of MARCS model atmospheres,
which use distinct mixing-length parameters.
It is not satisfactory that such dispersion remains
for the Sun, our reference star from which spectra of supreme
quality are not difficult to obtain.
Thus, in an attempt to identify the origin of the problem, we fitted the KPNO atlases with the theoretical profiles
of BPO02 (fits with no further normalization).
From these fits, we first computed the temperature differences that other models of
H$\alpha$ produce with respect to that of BPO02 for the Sun;
these are provided in Table~\ref{zero-point}.
Secondly, we compared these fits with those of coud\'{e}/HARPS/MUSICOS to
analyze the goodness of their normalizations.
The fits are shown in Fig.~\ref{kurucz_fittings}; they are very precise in the inner profile regions
thanks to their high temperature sensitivity and to the high spectral quality in S/N and sampling.
However, when the outer regions are scrutinized, evident departures appear, see Fig.~\ref{transition_regions}.
We observed similar departures after the first iteration of our normalization procedure,
i.e. the custom normalization by polynomial interpolation (see Fig.~\ref{gross_norm}),
whose causes were explained in Sect.~\ref{norm}.
From KPNO2005 we obtain a value 30 K
cooler than what we obtain with coud\'{e}/HARPS/MUSICOS spectra.
This atlas version was normalized by polynomial fitting of the observed spectral fluxes,
considering also the presence of broad $\mathrm{O_{3}}$ and $[\mathrm{O_{2}}]_{2}$ atmospheric
features produced by synthetic spectra.
The differences between the temperature values derived by us and by the two authors
who used profiles from 1D models are
entirely explained by the different physics of the models.
H$\alpha$ profiles of \citet{cayrel2011} were synthesized by \mbox{ATLAS9}, \mbox{BALMER9} codes \citep{castelli2004} and the
impact-broadening of \citet{allard2008} that includes more transitions than the self-broadening of BPO02.
The profiles of \citet{pereira2013} were also synthesized with slightly different input physics and
a more recent model atmosphere than those in BPO02.
From KPNO2011 we obtain a similar value to that obtained with KPNO2005,
meaning that the relative flux of both spectra in the innermost regions of the profile agree.
On the other hand, significant differences are observed in the outer wings, see Fig.~\ref{transition_regions}.
No information is provided about the normalization method of this atlas,
but we suspect that the custom method was applied because
we observe significant flux disparities
around the continuum regions [6500.25, 6500.50], [6504.50, 6505.00] and [6625.60, 6625.80],
see Fig.~\ref{normalization_errors}.
If their flux excess of $\sim\!0.2\%$ were constant over the whole wavelength range,
it would imply a temperature underestimate of at least \mbox{20 K}.
This analysis shows that the systematically low temperatures
from solar spectra in Table~\ref{zero-point} are associated with
disparities with the synthetic spectra and/or the continuum,
which may indicate minute normalization errors.
We show that when special care is taken in the continuum placement and in fitting the outermost
profile regions, consistent results are obtained.
These results are further supported by the agreement with all other $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~measurements
from spectra other than KPNO, as Fig.~\ref{temperatures_literature} shows.
The temperature differences listed in the last column of Table~\ref{zero-point}
are computed subtracting the diagnostics by the BPO02 model
to those by the H$\alpha$ models of the authors listed in the first column,
both obtained from the same solar spectra listed in second column.
Hence, they give the zero-points of the H$\alpha$ models relative to that of BPO02 \mbox{($-28 \pm 1$ K)};
adding the two quantities gives the absolute zero-point of each model.
Remarkably, we find that the two approaches based on 3D atmospheric models improve
the agreement with the actual solar $T_{\mathrm{eff}}$, and
that of \citet{amarsi2018}, which also considers NLTE,
reproduces the solar $T_{\mathrm{eff}}$~almost exactly.
\begin{table}
\caption{The third column lists $T_{\mathrm{eff}}$ values derived for the Sun
with H$\alpha$ profiles from 1D model atmospheres (top table) and
from 3D model atmospheres (bottom table).
The fourth column lists the temperature differences given by different models of H$\alpha$
with respect to the BPO02-grid based analysis for the same solar spectrum.
Fits of the spectra are shown in the appendix.}
\label{zero-point}
\centering
\small
\begin{tabular}{c c c c}
\hline\hline
Author & spectrum & $T_{\mathrm{eff}}$ (K) & $\Delta$$T_{\mathrm{eff}}$~(K) \\
\hline
BPO02 & KPNO1984 & $5733$ & ---\\
BPO02 & MUSICOS & $5743$ & ---\\
\citet{ram2011} & MIKE & $5732 \pm 32$ & ---\\
\citet{cayrel2011} & KPNO2005 & $5678 \pm 5$ & $-29$\\
\citet{cor2012} & MIKE & $5752 \pm 16$ & ---\\
\citet{ram2014} & MIKE & $5731 \pm 21$ & ---\\
\citet{pereira2013} & KPNO2005 & $5674$ & $-33$\\
\citet{onehag2014} & KPNO1984 & $5670$ & ---\\
\citet{amarsi2018} & KPNO2011 & $5681 \pm 40$ & $-14$\\
This work & coud\'{e} & $5744 \pm 7$ & ---\\
This work & HARPS & $5744 \pm 10$ & ---\\
This work & MUSICOS & $5745 \pm 16$ & ---\\
This work & KPNO2005 & $5707 \pm 6$ & ---\\
This work & KPNO2011 & $5695 \pm 18$ & ---\\
\hline
\citet{pereira2013} & KPNO2005 & $5722$ & 15\\
\citet{amarsi2018} & KPNO2011 & $5721 \pm 40$ & 26\\
\hline
\end{tabular}
\end{table}
\subsection{Accuracy for non solar stars}
\label{interferomety_section}
Atmospheric parameters of 34 Gaia Benchmark stars with a wide range of temperature and metallicity were published by Heiter15.
Their $T_{\mathrm{eff}}$'s were derived by measuring angular diameters with interferometry,
which is the least model-dependent technique.
We acquired coud\'{e} spectra of 9 Gaia Benchmark stars and
derived their $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~using the [Fe/H] and \mbox{log~\textit{g}}~values given by the authors.
The plot in Fig.~\ref{interferometry} shows the comparison of
$T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~with $T_{\mathrm{eff}}$~from interferometry.
We find a constant offset of 30 K between the two scales, which confirms
the $-28$ K zero-point found with the solar spectra in Sect.~\ref{zero-p}.
No dependence on \mbox{log~\textit{g}}~is found, but a trend with metallicity is present.
The right panel of the figure shows that H$\alpha$ underestimates
$T_{\mathrm{eff}}$~by \mbox{$\sim\!100$ K} at [Fe/H] = $-0.5$.
In the plots, the temperature values of \mbox{$\mu$ Ara} \mbox{(HD 160691)} appear highly discrepant
and were ignored when computing the trend.
Its interferometric $T_{\mathrm{eff}}$ is flagged by the authors
as not reliable because its angular diameter is not directly measured (see Sect.~3.2 of their paper);
also its mass measurements derived by evolutionary and seismic techniques disagree. On the other hand,
we find its $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~to be consistent with IRFM and all
the spectroscopic values in following sections.
The other star with a high discrepancy is $\delta$ Eri (HD 23249).
It also appears discrepant in the comparison with IRFM in Sect.~\ref{IRFM_section},
and even our temperatures from coud\'{e} and HARPS disagree.
However, its values in the plots of Fig.~\ref{interferometry}
were not ignored when computing the trends,
in order to make a homogeneous comparison with the trends in Fig.~\ref{IRFM}.
Having determined with high precision the offset of $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~with respect to $T_{\mathrm{eff}}$~at solar
parameters in the previous subsection,
we combine it with the metallicity trend shown in the right panel of
Fig.~\ref{interferometry} to obtain the correction relation
\mbox{$T_{\mathrm{eff}} = T_{\mathrm{eff}}^{H\alpha}$ $-159(\pm80)$[Fe/H] $+ 28(\pm1)$ K}
(68 K scatter).
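This relation can be applied directly; here is a trivial helper encoding it (the coefficients are the ones quoted above, valid only over the metallicity range analyzed, and the function name is our own):

```python
def teff_corrected(teff_halpha, fe_h):
    """Correct an H-alpha temperature for the zero-point and the
    metallicity trend: Teff = Teff(Halpha) - 159*[Fe/H] + 28 K
    (68 K scatter over the range analyzed)."""
    return teff_halpha - 159.0 * fe_h + 28.0
```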
\begin{figure}
\centering
{\includegraphics[width=.24\textwidth]{T10-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe10_-eps-converted-to}}
\caption{\textit{Left panel: }Comparison of $T_{\mathrm{eff}}^{H\alpha}$ with $T_{\mathrm{eff}}$~from interferometry of
the Gaia Benchmark stars (Heiter15). The red dashed line represents the offset.
\textit{Right panel: }Relative temperatures as a function of [Fe/H]. The red line and the shade represent
the trend and its scatter.
The corresponding function and the errors of its coefficients (in brackets) are shown in the legends.
The cross symbol in both plots marks \mbox{$\mu$ Ara} \mbox{(HD 160691)},
considered an outlier.}
\label{interferometry}
\end{figure}
\section{Consistency with other $T_{\mathrm{eff}}$~scales}
\label{consistency}
We used 10 catalogs from literature to determine the consistency of the
H$\alpha$ profile diagnostics with other techniques.
Among them, Sousa08, Ghezzi10, Tsantaki13, Bensby14, Ramirez14a, Ramirez14b, and Maldonado15
determine spectroscopic $T_{\mathrm{eff}}$'s,
while Ramirez13 has $T_{\mathrm{eff}}$'s derived by photometric calibrations from IRFM.
In this section, as well as in Sect.~\ref{interferomety_section}, $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$'s were
derived for comparison purposes using as stellar input the \mbox{log~\textit{g}}~and [Fe/H] parameters provided by each author,
so that the comparisons are consistent as far as the stellar parameters are concerned.
In the next subsections $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~determinations are separately compared with the results
obtained with each method.
\subsection{IRFM effective temperatures}
\label{IRFM_section}
\begin{figure*}
\centering
{\includegraphics[width=.24\textwidth]{T4-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe4_-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe4_correct-eps-converted-to}}
\caption{\textit{Left and middle panels:} Same as in Fig.~\ref{interferometry} for the $T_{\mathrm{eff}}^{IRFM}$'s of Ramirez13.
\textit{Right panel:} Relative temperatures as a function of [Fe/H] after
applying the correction relation given in Sect.~\ref{interferomety_section} to $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$.}
\label{IRFM}
\end{figure*}
The comparison with IRFM is performed with the temperatures of Ramirez13,
which were derived from the metallicity-dependent color--$T_{\mathrm{eff}}$
calibrations of \citet{casagrande2010} using the available \mbox{Johnson-Cousins}, 2MASS, Tycho2
and Str\"{o}mgren photometry.
To obtain these temperatures, represented by $T_{\mathrm{eff}}^{IRFM}$,
the authors used a homogeneous set of metallicities derived from Fe lines,
in which $T_{\mathrm{eff}}$~is not obtained simultaneously with the other parameters
but is fixed from photometric calibrations. In this way, both techniques are combined iteratively,
minimizing the $T_{\mathrm{eff}}$--[Fe/H] degeneracy.
The plot in Fig.~\ref{IRFM} shows the comparison between $T_{\mathrm{eff}}^{IRFM}$
and our coud\'{e} $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$.
There is a constant offset of +34 K between the two scales
with a \mbox{59 K} scatter.
Their difference shows a trend with metallicity according to the equation displayed in the middle panel.
This trend is practically the same as that found in the comparison with the interferometric measurements,
confirming the equivalence of the two scales \citep{casagrande2014}.
After applying the relation given in Sect.~\ref{interferomety_section} to
$T_{\mathrm{eff}}^{\mathrm{H}\alpha}$, the trend is indeed fully removed, as shown in the right panel of the figure.
The remaining \mbox{45 K} scatter is close to the average formal error of
$T_{\mathrm{eff}}^{IRFM}$ for the stars compared \mbox{(52 K)},
which implies that it is dominated by the uncertainties
of the color measurements. Therefore, the contribution of random errors of $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~related
to the normalization is negligible, supporting the precision of our method.
\subsection{Spectroscopic effective temperatures}
\label{spectroscopic}
\begin{figure*}
\centering
{\includegraphics[width=.24\textwidth]{T1-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe1_-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe1_correct-eps-converted-to}}\\
{\includegraphics[width=.24\textwidth]{T2-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe2_-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe2_correct-eps-converted-to}}\\
{\includegraphics[width=.24\textwidth]{T3-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe3_-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe3_correct-eps-converted-to}}\\
{\includegraphics[width=.24\textwidth]{T5-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe5_-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe5_correct-eps-converted-to}}\\
{\includegraphics[width=.24\textwidth]{T7-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe7_-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe7_correct-eps-converted-to}}\\
{\includegraphics[width=.24\textwidth]{T8-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe8_-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T_fe8_correct-eps-converted-to}}
\caption{Same as in Fig.~\ref{IRFM} for spectroscopic $T_{\mathrm{eff}}$'s.
The authors are indicated in the plots on the left panels.
In all plots, the black lines represent perfect agreement and the red lines the trends.
When the trends are not significant, the offsets are drawn with dashed red lines.
$T_{\mathrm{eff}}$'s from Ramirez14a (plus symbols) and Ramirez14b (green circles),
derived with the same method,
are compared in the same plots.}
\label{temperature-scales}
\end{figure*}
\begin{figure*}
\centering
{\includegraphics[width=.24\textwidth]{IRFM_vs_Sousa08-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{Sousa08_Ramirez13_fe_h_-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{fe_h_Sousa08_Ramirez13_vs_Teff-eps-converted-to}}
\caption{\textit{Left and middle panels:} Similar to Fig.~\ref{interferometry}
for the $T_{\mathrm{eff}}^{IRFM}$'s of Ramirez13 against
the spectroscopic $T_{\mathrm{eff}}$'s of Sousa08.
\textit{Right panel:} $\Delta$[Fe/H] represents the metallicity values of Sousa08 with respect to those of Ramirez13.
The blue symbols are the stars with super-solar $T_{\mathrm{eff}}$'s.}
\label{IRFM_vs_spec}
\end{figure*}
The need to derive accurate stellar atmospheric parameters received more attention
with the discovery of exoplanets, because their characterization depends directly on
how accurately and precisely the physical parameters of the host stars are known.
Other studies also require a refined determination of $T_{\mathrm{eff}}$,
for instance, establishing the nature of the connection between stellar metallicity and planetary presence
\citep[e.g.][]{santos2003,fischer2005,sousa2008,gh2010},
the detection of diffusion effects in stellar atmospheres \citep[e.g.][]{korn2006,korn2007} and
the search for chemical signatures of planetary formation \citep[e.g.][]{Melendez2009,ram2009}.
Some of them deal with a large number of stars, for which automatic spectroscopic
procedures that provide results with high internal precision have been developed.
However, as shown by \citet{ryab2015} in their Fig.~1, when results from different spectroscopic procedures
are compared, significant discrepancies may appear.
In this work we considered for comparison catalogs with small internal errors.
Among them, Ramirez14a and Ramirez14b are the most precise with \mbox{$\sim\!10$ K}. They are followed by
Sousa08, Tsantaki13 and Maldonado15 with \mbox{$\sim\!20$} K, then Ghezzi10 and Heiter15 with \mbox{$\sim\!30$} K, and
Bensby14 with \mbox{$\sim\!70$} K.
The plots in Fig.~\ref{temperature-scales} show the comparison of our temperature
determination from coud\'{e} with those derived by the different sources.
Sousa08, Ghezzi10 and Tsantaki13:
all derive $T_{\mathrm{eff}}$~assuming LTE and 1D geometry with the Kurucz Atlas 9 \citep{kurucz1993} model atmospheres.
They used the 2002 version of MOOG \citep{sneden1973}
and the ARES code for the automatic measurement of equivalent widths \citep{sousa2007}.
They differ in the line-lists used and in the atomic data adopted.
Tsantaki13's line-list is an upgrade of Sousa08's list selected with HARPS,
where ``bad'' lines were suppressed to correct the $T_{\mathrm{eff}}$~overestimation in cooler stars.
Both works computed log \textit{gf} values from an inverted solar analysis using equivalent widths measured
in solar spectra.
Ghezzi10's list is short in comparison with those of Sousa08 and Tsantaki13; it was selected for
the FEROS spectrograph \citep{1999Msngr..95....8K} at lower resolution, and
the log \textit{gf} values they used were obtained in the laboratory.
The comparison with these three works shows a trend with $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$:
the larger $T_{\mathrm{eff}}$, the larger the discrepancy.
For Ghezzi10, the comparison between our measurements and theirs shows a
positive trend with [Fe/H], while for Sousa08 and Tsantaki13
no trend with [Fe/H] is found, but offsets of 48 and 33 K, respectively.
Bensby14: derived $T_{\mathrm{eff}}$~considering NLTE corrections on spectral lines measured manually.
The 1D MARCS model atmospheres \citep{asplund1997} were used with
their own code for the convergence of the atmospheric parameters.
They used a large line-list and spectra from different instruments of medium and high resolution,
with log \textit{gf} values obtained in the laboratory.
The comparison of their $T_{\mathrm{eff}}$~scale against $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~is similar to those of Sousa08 and Tsantaki13.
Indeed, Sousa08 find their scale to be compatible with an offset of $+18$ K with respect to Bensby14's
(see Fig. 3 in their paper).
We find a slightly significant positive trend with [Fe/H].
Ramirez14a and Ramirez14b: used a differential method \citep{Mel2006}
with which atmospheric parameters of high internal precision are
obtained. By means of the ``$\textit{q}^2$''
package\footnote{The Python package ``$\textit{q}^2$'' \url{https://github.com/astroChasqui/q2}}
both groups of authors used the 2013 version of MOOG and 1D-LTE model atmosphere grids.
They measured spectral lines manually and used atomic data from the laboratory.
There are two main differences between the procedures of Ramirez14a and Ramirez14b.
Firstly, Ramirez14a used the ``odfnew'' version of Kurucz, while Ramirez14b used
the MARCS model atmospheres \citep{gustafson2008}.
However, according to Ramirez14b, the use of different models does not
significantly affect the parameter diagnostics because of the differential method applied.
Secondly, the stars analyzed in the two works differ in [Fe/H]: Ramirez14b analyzed
solar twins, while Ramirez14a analyzed more metal-rich stars, i.e. [Fe/H] $\gtrsim 0.2$.
Thus, Ramirez14b naturally used the Sun as standard for the solar twins, while in Ramirez14a
the differential method was applied with respect to every star of the sample.
For Ramirez14b's scale of solar twins we find an offset of
$+42 \pm 13$ K with respect to H$\alpha$, which agrees with
the \mbox{$28 \pm 1$ K} needed to correct the H$\alpha$ zero-point.
For Ramirez14a's scale we find an offset of $+72 \pm 17$ K.
Considering Ramirez14a and Ramirez14b as a single sample, we find a positive trend with [Fe/H].
Maldonado15: assumed LTE and 1D geometry with the Kurucz Atlas 9 model atmospheres,
as Sousa08, Ghezzi10, and Tsantaki13, but they used the line-list from \citet{G_S1999} and
spectra from several sources including HARPS.
For the convergence of the atmospheric parameters they used
TGVIT \citep{takeda2005}. The comparison of their $T_{\mathrm{eff}}$~scale against H$\alpha$ does not show a
significant trend, but an offset of +34 K.
We found the same offset for IRFM against H$\alpha$ (Sect.~\ref{IRFM_section}),
which confirms the agreement\footnote{Maldonado et al. find an offset of 41 K,
which is not significant considering the $\sim100$ K error bar relative to their IRFM calculations.}
between this $T_{\mathrm{eff}}$~scale and IRFM reported by the authors.
On the other hand, we find a positive trend with [Fe/H].
The spectroscopic scales analyzed in this section show, in general,
agreement with H$\alpha$ up to $\sim\!\!5700$ K and hotter diagnostics for hotter $T_{\mathrm{eff}}$'s.
The trends with [Fe/H] are opposite to those we observe with interferometry and IRFM.
After applying the correction relation for metallicity of Sect.~\ref{interferomety_section} to $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$,
the H$\alpha$ scale can be considered in the same frame as the interferometric scale, allowing us to
study the accuracy of the spectroscopic scales.
This is shown in the right panels of Fig.~\ref{temperature-scales}:
the common pattern shows that spectroscopic temperatures are underestimated by \mbox{100-200 K}
at [Fe/H] = $-0.6$ dex and overestimated by
\mbox{$\sim\!\!100$ K} at [Fe/H] = $+0.4$ dex.
The most accurate [Fe/H] range is around the solar value, say between $-0.3$ and $+0.1$ dex.
The relations presented in the plots can be used to empirically correct spectroscopic scales.
These corrections become important as $T_{\mathrm{eff}}$~departs from solar,
in order to derive unbiased [Fe/H] values.
An example of the impact of the $T_{\mathrm{eff}}$~scale on [Fe/H]
is provided in Fig.~\ref{IRFM_vs_spec}.
The plots compare the temperature and metallicity scales of Sousa08 and Ramirez13.
No offset between the two temperature scales appears,
but their difference plotted against [Fe/H] replicates the trend
obtained in the top right panel of Fig.~\ref{temperature-scales}.
The difference between the metallicity scales also shows a trend with $T_{\mathrm{eff}}$,
associating larger [Fe/H] discrepancies with $T_{\mathrm{eff}}$~farther from solar.
\section{Comparison with other H$\alpha$ scales}
\label{otherHa}
\begin{figure*}
\centering
{\includegraphics[width=.24\textwidth]{BPO02_Ramirez13_fe_h_-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{Cayrel_Ramirez13_Fe_h-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{Amarsi_Ramirez13_Fe_h-eps-converted-to}}
\caption{From left to right, analogous comparisons to the middle panel in Fig.~\ref{IRFM} for the H$\alpha$ scales of
BPO02, \citet{cayrel2011}, and \citet{amarsi2018}.
In all plots, for a quick comparison, the trend with [Fe/H] of Fig.~\ref{IRFM} is represented by the dotted line.
Green symbols represent interferometric $T_{\mathrm{eff}}$~replacing $T_{\mathrm{eff}}^{IRFM}$.}
\label{Ha_scales}
\end{figure*}
In Sect.~\ref{zero-p} we determined $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~for the Sun and compared it with
the values of other authors that use the same diagnostic.
In this section we compare not only zero-points but the full temperature scales.
We again discuss the possible sources of the differences between them
and how the enhanced models improve the results.
These works have stars in common with the IRFM catalog of Ramirez13,
but few or no stars in common with this work.
Accordingly, the comparisons are performed with respect to IRFM as a function of [Fe/H],
as done in Sect.~\ref{IRFM_section} with our H$\alpha$ scale.
See the plots in Fig.~\ref{Ha_scales} to follow the discussions below.
\textit{BPO02 scale:}
10 stars are in common with Ramirez13.
A plot analogous to that in Fig.~\ref{IRFM} shows a similar
slope shifted by \mbox{$\sim 70$ K} for the metallicity range we analyze.
A probable cause for the shift is that the fitted synthetic spectra seem slightly biased towards lower relative fluxes,
see e.g. the profiles of \mbox{HR 22879} and \mbox{HR 5914} at 6566-6568~\AA~in Fig. 6 of their paper.
It may be due to the $\chi^2_{min}$ fitting method without sigma clipping
applied to low S/N spectra;
e.g. Ramirez14b find systematically high $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~values for larger $\chi^2_{min}$.
It is nevertheless worth mentioning that BPO02's results are consistent with ours,
considering that the quality of their spectra and their fitting method were not
conceived to reach the precision that this work attempts.
\textit{Cayrel et al. (2011) scale:}
The comparison against $T_{\mathrm{eff}}^{IRFM}$~as a function of [Fe/H] shows a slightly significant trend.
In the comparison against $T_{\mathrm{eff}}$~from interferometry the trend disappears, leaving a flat offset of \mbox{$\sim100$ K}
(see the green symbols in the plot), as shown by the authors.
It appears that the H$\alpha$ model of \citet{allard2008} enhances the difference between the model of BPO02
and interferometry around the solar [Fe/H].
We obtain the same result in Sect.~\ref{zero-p} for the Sun,
i.e. the zero-point of the model is nearly twice that of BPO02.
\textit{Ramirez14b scale:} Precise $T_{\mathrm{eff}}$~values were derived for 88 solar analogs
\citep[i.e. stars that share the same atmospheric parameters with the Sun within
an arbitrary narrow range of errors, according to the definition in][]{porto2014}
by the photometric calibrations of \citet{casagrande2010} (IRFM) and by H$\alpha$ profiles using the model of BPO02,
in addition to the spectroscopic technique described in Sect.~\ref{spectroscopic}.
In their Fig. 13, these authors compare their determinations from H$\alpha$ with spectroscopy and find, after a
zero-point correction, a small trend, as we did in Sect.~\ref{spectroscopic}
comparing our H$\alpha$ scale with
their spectroscopic scale and several others.
No comparison is presented against [Fe/H], which is to be expected, given that the range of their sample
is very narrow around the solar metallicity ($\pm0.1$ dex).
\textit{Amarsi et al. (2018) scale:} spectra of six templates were used to test the model.
Two of these stars, the Sun and Procyon, lie within the [Fe/H] range of our sample,
while the other four, with [Fe/H] between $-2.8$ and $-1.2$ dex,
exceed our range.
The comparison with $T_{\mathrm{eff}}^{IRFM}$~as a function of [Fe/H] shows a trend, which disappears when interferometric
$T_{\mathrm{eff}}$~is compared instead.
The change in slope is mainly driven by Procyon's interferometric measurement, which precisely agrees with that from H$\alpha$.
The comparison with interferometry then shows perfect agreement with this H$\alpha$ scale over the [Fe/H] range of analysis.
Further, we also found perfect agreement for the Sun from a differential analysis in Sect.~\ref{zero-p},
i.e. the zero-point of the model is practically null.
\section{H$\alpha$ profiles from 3D models}
\label{3D}
The previous sections have shown that the comparison with
the accurate interferometric and IRFM scales
is quite robust and free of biases,
with the only exception of a trend with metallicity in both cases.
In order to further investigate this trend, we have produced and analyzed
eight H$\alpha$ profiles from 3D models, with which we expect to understand whether
the 1D approximation is indeed the main culprit.
The eight 3D profiles are from the CIFIST grid of CO5BOLD models \citep{2009MmSAI..80..711L,freytag},
calculated using the spectral synthesis code Linfor3D (version 6.2.2) in the LTE approximation.
Self-resonance broadening followed BPO02 and Stark broadening followed \citet{griem1967}.
We chose the atmospheric parameters of four profiles to bracket the solar model $T_{\mathrm{eff}}$~and \mbox{log~\textit{g}}.
The four bracketing models were accompanied by four further models of sub-solar metallicity with [Fe/H]$=-0.5$ dex.
The chemical composition follows \citet{grevesse1998} with the exception of the CNO
elements, which were updated following \citet{asplund2005}. For the metal-depleted
models an $\alpha$-enhancement of $+0.2$ dex was assumed.
The variation of the continuum across the H$\alpha$ profile was modeled by
assuming a parabolic dependence of the continuum intensity on
wavelength.
Doppler shifts stemming from the underlying velocity field were
fully taken into account -- albeit they have a minor effect on the overall
profile shape. The final flux profiles were horizontal and temporal averages
over typically 20 instants in time; the center-to-limb variation of the line
was calculated using three limb-angles.
To estimate the effects of 3D models on $T_{\mathrm{eff}}$, we analyzed the synthetic H$\alpha$ profiles in the
same way as the observed ones. The synthetic profiles were resampled with the same pixel size as HARPS
and 0.1\% white noise was added.
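This degradation step can be sketched as follows. The snippet is an illustrative reconstruction under stated assumptions (linear interpolation, a uniform output grid), not the actual pipeline code; the pixel spacing passed in would be the HARPS value.

```python
# Hypothetical sketch of the degradation step described in the text:
# resample a synthetic Halpha profile onto a coarser (HARPS-like) pixel
# grid and add 0.1% Gaussian white noise. The interpolation scheme and
# grid construction are illustrative assumptions.
import random

def degrade_profile(wave, flux, new_step, noise_frac=0.001, seed=1):
    """Linearly resample (wave, flux) onto a uniform grid of spacing
    new_step and add white noise of fractional amplitude noise_frac."""
    rng = random.Random(seed)
    new_wave, new_flux = [], []
    w = wave[0]
    j = 0
    while w <= wave[-1]:
        # advance to the input pixel pair bracketing w
        while wave[j + 1] < w:
            j += 1
        # linear interpolation between the bracketing input pixels
        t = (w - wave[j]) / (wave[j + 1] - wave[j])
        f = (1 - t) * flux[j] + t * flux[j + 1]
        new_wave.append(w)
        new_flux.append(f + rng.gauss(0.0, noise_frac))
        w += new_step
    return new_wave, new_flux

# Example usage on a toy profile (arbitrary wavelength units):
nw, nf = degrade_profile([0.0, 1.0, 2.0, 3.0, 4.0],
                         [1.0, 0.8, 0.4, 0.8, 1.0], 0.5)
```

With `noise_frac=0.001` the added scatter matches the 0.1\% level quoted above; setting it to zero recovers a pure resampling.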
The fits are shown in Fig.~\ref{3D_1D_fits} and the temperatures retrieved from 1D models
are compared with their nominal temperatures in Fig.~\ref{3D_1D_},
as done in Sect.~\ref{accuracy}.
In this figure, in the plot as a function of [Fe/H],
the improvement given by the 3D models (continuous red line) can be estimated by
how closely the trend of Fig.~\ref{interferometry} (dotted line here)
is reproduced.
The comparison shows that temperatures from 1D models are practically reproduced by 3D models at [Fe/H] = 0 dex,
but at [Fe/H] = $-0.5$ dex 3D models produce \mbox{100-200 K} hotter temperatures depending on \mbox{log~\textit{g}}.
Hence, temperatures from 3D models are significantly closer to those from interferometry at [Fe/H] = $-0.5$ dex;
they particularly agree for low \mbox{log~\textit{g}}~values.
\begin{figure}
\centering
{\includegraphics[width=.24\textwidth]{T3D_1D-eps-converted-to}}
{\includegraphics[width=.24\textwidth]{T3D_1D_fe_h-eps-converted-to}}
\caption{Same as in Fig.~\ref{interferometry} for 3D models.
In the right panel, different symbols and colors are used for the two \mbox{log~\textit{g}}~values
according to the legends. The accuracy relation of 1D models,
\mbox{$T_{\mathrm{eff}} = T_{\mathrm{eff}}^{H\alpha}$ $-159$[Fe/H] + 28}, found in
Sect.~\ref{accuracy}, is represented by the dotted line.}
\label{3D_1D_}
\end{figure}
We therefore conclude that the most likely cause for the trend with metallicity
of our H$\alpha$ diagnostics with respect to interferometric and IRFM
measurements is the use of 1D models.
On the other hand, we consider the use of 1D models, which are easily available, together
with the correction for metallicity given in Sect. 6.2, an excellent approximation.
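For practical use, the metallicity correction can be applied directly to a 1D + LTE H$\alpha$ temperature. The function below is an illustrative sketch of the published relation, not part of the paper's pipeline; the range check reflects the stated validity interval.

```python
# Illustrative sketch (not the authors' code): apply the empirical
# metallicity correction  T_eff = T_eff(Halpha) - 159*[Fe/H] + 28,
# valid for -0.7 <= [Fe/H] <= +0.45 dex.

def correct_teff_halpha(teff_halpha, fe_h):
    """Return the corrected effective temperature (K).

    teff_halpha : T_eff from 1D + LTE Halpha fitting (K)
    fe_h        : stellar metallicity [Fe/H] (dex)
    """
    if not -0.7 <= fe_h <= 0.45:
        raise ValueError("[Fe/H] outside the calibrated range")
    return teff_halpha - 159.0 * fe_h + 28.0

# e.g. tau Cet from the results table: 5311 - 159*(-0.49) + 28
# = 5416.91, i.e. ~5417 K, matching the tabulated best estimate.
teff_tau_cet = correct_teff_halpha(5311.0, -0.49)
```

At solar metallicity the relation reduces to the +28 K zero-point correction.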
\section{Suitability of HARPS}
\label{Reliability}
\begin{table}[!]
\small
\caption{Solar proxies. The list is ordered by date of observation and includes the
S/N of the spectra and the effective temperature
derived from the H$\alpha$ profiles.}
\label{proxies}
\centering
\begin{tabular}{c c c c}
\hline\hline
Date & object & S/N & $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~(K) \\
\hline \multicolumn{4}{c}{\textbf{coud\'{e}}}\\
\hline
2014/10 & Moon & 300 & $5741 \pm 32$\\
2017/07 & Moon & 400 & $5748 \pm 25$\\
2017/07 & Moon & 400 & $5746 \pm 28$\\
2017/07 & Moon & 400 & $5751 \pm 25$\\
2017/07 & Callisto & 350 & $5740 \pm 28$\\
2017/07 & Ganymede & 350 & $5732 \pm 35$\\
\hline \multicolumn{4}{c}{\textbf{MUSICOS}}\\
\hline
2017/11 & Ganymede & 300 & $5726\pm 28$ \\
2017/11 & Moon & 250 & $5756\pm 45$ \\
2017/11 & Moon & 250 & $5759\pm 41$ \\
2017/11 & Moon & 250 & $5753\pm 30$ \\
\hline \multicolumn{4}{c}{\textbf{HARPS}}\\
\hline
2007/04 & Ganymede & 174 & $5746\pm 52$ \\
2007/04 & Ganymede & 172 & $5750\pm 76$ \\
2007/04 & Ganymede & 171 & $5745\pm 88$ \\
2007/04 & Ganymede & 173 & $5745\pm 68$ \\
2007/04 & Ganymede & 174 & $5735\pm 99$ \\
2007/04 & Ganymede & 391 & $5747\pm 54$ \\
2009/03 & Moon & 532 & $5747 \pm 32$ \\
2010/10 & Moon & 263 & $5741 \pm 65$ \\
2010/10 & Moon & 307 & $5759 \pm 43$ \\
2010/10 & Moon & 288 & $5755 \pm 53$ \\
2010/10 & Moon & 299 & $5743 \pm 74$ \\
2010/10 & Moon & 308 & $5753 \pm 60$ \\
2010/10 & Moon & 304 & $5759 \pm 66$ \\
2010/12 & Moon & 578 & $5746 \pm 29$ \\
2010/12 & Moon & 408 & $5735 \pm 38$ \\
2010/12 & Moon & 412 & $5744 \pm 38$ \\
2010/12 & Moon & 494 & $5732 \pm 36$ \\
2012/06 & Moon & 479 & $5742 \pm 45$ \\
2012/06 & Moon & 478 & $5737 \pm 48$ \\
2012/06 & Moon & 488 & $5746 \pm 38$ \\
2012/06 & Moon & 487 & $5742 \pm 43$ \\
2012/06 & Moon & 485 & $5735 \pm 44$ \\
2012/06 & Moon & 486 & $5735 \pm 42$ \\
2012/06 & Moon & 488 & $5739 \pm 39$ \\
2012/06 & Moon & 490 & $5742 \pm 33$ \\
2012/06 & Moon & 478 & $5734 \pm 33$ \\
2012/06 & Moon & 476 & $5753 \pm 35$ \\
2014/02 & Ganymede & 119 & $5765\pm 98$ \\
2014/02 & Ganymede & 107 & $5750\pm 103$ \\
2014/02 & Ganymede & 117 & $5760\pm 105$ \\
2014/02 & Ganymede & 118 & $5750\pm 107$ \\
2014/02 & Ganymede & 109 & $5767\pm 97$ \\
2014/02 & Ganymede & 117 & $5757\pm 108$ \\
2014/02 & Ganymede & 116 & $5744\pm 139$ \\
2014/02 & Ganymede & 109 & $5757\pm 138$ \\
2014/02 & Ganymede & 109 & $5759\pm 118$ \\
2014/02 & Ganymede & 122 & $5760\pm 98$ \\
2015/07 & Ceres & 89 & $5754\pm 134$ \\
2015/07 & Ceres & 87 & $5748\pm 140$ \\
2015/07 & Ceres & 88 & $5751\pm 111$ \\
2015/07 & Ceres & 89 & $5745\pm 145$ \\
2015/07 & Ceres & 91 & $5755\pm 137$ \\
2015/07 & Ceres & 103 & $5753\pm 143$ \\
2015/07 & Ceres & 87 & $5751\pm 126$ \\
2015/07 & Ceres & 100 & $5753\pm 110$ \\
2015/07 & Ceres & 115 & $5754\pm 116$ \\
2015/07 & Ceres & 128 & $5746\pm 92$ \\
\hline
\end{tabular}
\end{table}
Having shown the suitability of the method with the coud\'{e} spectra,
we apply it to HARPS \citep{Mayor2003}.
HARPS has been chosen because, in order to achieve high radial velocity precision,
the instrument has very stable field and pupil injection.
It is also thermally stable and operates in vacuum.
In addition, the HARPS archive contains many observations of solar type stars,
including a rich set of solar spectra taken by observing solar system bodies over many years.
All these characteristics make HARPS the ideal instrument to investigate
the precision of the H$\alpha$ method we have developed.
The fact that the solar proxy observations have been repeated
for several years allows us to also investigate the stability of this instrument in time,
and to determine to what extent the HARPS H$\alpha$ profile has remained constant.
The test is performed with all solar spectra
set out in Table~\ref{proxies},
for which $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$'s were derived.
The plot in the top panel of Fig.~\ref{temporal} visually summarizes the results displayed in the table.
For each date, $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~values are represented by plus symbols. Their
weighted mean and corresponding spread values are drawn with bars.
Next to them, the number of spectra used and their average S/N ratio are noted
to show the precision reached when measurements from several spectra are combined.
The weighted mean and spread of all measurements
are represented by the horizontal line and the shade at \mbox{$5744 \pm 10$ K}.
Evidently, there is no trend with time and the scatter is very low, which
confirms the blaze stability of HARPS.
This value is in perfect agreement with that of coud\'{e} (see values in Table~\ref{zero-point}),
which implies that not only is the blaze stable, but it is also
fully removed by the flat-field procedure.
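For reference, the combination of repeated measurements into the quoted weighted means can be written as a standard inverse-variance weighted mean. This is an illustrative sketch, not the authors' pipeline code; the example values are taken from the coud\'{e} entries of Table~\ref{proxies}.

```python
# Illustrative sketch: inverse-variance weighted mean of repeated
# T_eff(Halpha) measurements, as used to combine solar-proxy spectra.

def weighted_mean(values, errors):
    """Return the inverse-variance weighted mean and its formal error."""
    weights = [1.0 / e ** 2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5

# e.g. the three 2017/07 Moon coude spectra of the solar-proxy table:
m, e = weighted_mean([5748.0, 5746.0, 5751.0], [25.0, 28.0, 25.0])
```

The formal error of the combined value shrinks roughly as $1/\sqrt{N}$, which is why the global mean over all solar spectra reaches the $\pm 10$ K level.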
In the bottom panel of Fig.~\ref{temporal} we plot
the precision obtained from individual spectra as a function of S/N.
It shows that a precision of $\sim\!\!\!40$ K can be obtained from spectra with
\mbox{S/N = 400-500}.
Finally, we compare the temperatures derived from HARPS
with those derived from coud\'{e} spectra for the other stars in common.
The comparison is shown in Fig.~\ref{coude_harps} against the three main stellar parameters.
It shows an excellent agreement, with a negligible offset between the two samples of \mbox{$-13 \pm 34$ K} and no trends.
The temperatures of all stars agree within 1$\sigma$ errors,
with the exception of two ($\delta$ Eri and HD 184985)
that agree within 2$\sigma$.
\begin{figure}
\centering
{\includegraphics{temporal_change2-eps-converted-to}}
{\includegraphics{temporal_change_SNR-eps-converted-to}}
\caption{\textit{Top panel:} Temperatures of the HARPS solar proxies in Table~\ref{proxies}
plotted versus date.
Daily values are represented by plus symbols and
weighted means and errors for each month are drawn in red.
{The weighted mean and error of all the measurements are represented by the continuous
line and the shade at 5744 $\pm 10$ K.
Next to the bars, the number of spectra analyzed and the mean S/N are noted.
\textit{Bottom panel:} The errors of individual measurements in the top panel are plotted versus
S/N. The exponential curve given by the equation in the plot is the best fit to the points.}}
\label{temporal}
\end{figure}
\begin{figure}
\centering
{\includegraphics[width=.45\textwidth]{coude_vs_harps_teff_NO_HD-eps-converted-to}}
{\includegraphics[width=.45\textwidth]{coude_vs_harps_fe_h-eps-converted-to}}
{\includegraphics[width=.45\textwidth]{coude_vs_harps_log_g-eps-converted-to}}
\caption{Temperature diagnostics from HARPS with respect to those of coud\'{e} vs.
the atmospheric parameters.
[Fe/H] and \mbox{log~\textit{g}}~values from Table~\ref{objects} were used here.
The \mbox{$-13$ K} offset and its 34 K scatter are represented by the dashed lines and the shades, respectively.}
\label{coude_harps}
\end{figure}
\section{Summary and conclusions}
\label{resume}
With the aim of better understanding and minimizing the errors that affect H$\alpha$
measurements of effective temperature, we have developed a new method to analyze the spectra and
tested it extensively. The results are quite consistent, and they allow us also to test the accuracy
of the temperature diagnostics with
H$\alpha$ profiles from 1D model atmospheres in LTE conditions \citep{BPO2002}.
The core of this work is the special effort made to recover realistic H$\alpha$ profiles
free from instrumental signatures, namely the blaze function of echelle spectrographs
and the signatures induced by random normalization errors.
We eliminated the blaze by using the single-order coud\'{e} instrument at
the do Pico dos Dias Observatory. With it, spectra of 44 F, G, and K stars, including the Sun,
spanning a wide parameter range in
\mbox{$T_{\mathrm{eff}}$--[Fe/H]--\mbox{log~\textit{g}}} (see Fig.~\ref{sample_space}), were acquired.
We minimized the normalization errors of the H$\alpha$ profiles
by integrating normalization and fitting into an iterative procedure, with which we derive precise $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$'s.
This procedure additionally uses synthetic spectra of telluric features of PWV
to optimize the continuum location. PWV features may be very small and are nearly omnipresent
around H$\alpha$, so they can easily be confused with spectral noise
and shift the continuum to lower flux values.
The accuracy of H$\alpha$ lines from 1D model atmospheres
is found to follow the relation \mbox{$T_{\mathrm{eff}} = T_{\mathrm{eff}}^{H\alpha}$ $-159$[Fe/H] + 28}
within the metallicity range $-0.7$ to $+0.45$ dex.
It was determined at solar parameters from the
$T_{\mathrm{eff}}^{\mathrm{H}\alpha}$'s of 57 coud\'{e}/HARPS/MUSICOS solar spectra (Table~\ref{proxies})
compared with the reference solar $T_{\mathrm{eff}}$~= 5772 K \citep{Prsa2016,heiter2015},
and at non-solar parameters by comparing the $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$'s of 10 \textit{Gaia Benchmark Stars}
\citep{heiter2015} with their $T_{\mathrm{eff}}$'s from interferometric measurements.
The consistency of our results with effective temperature scales from IRFM and from the excitation and
ionization equilibrium of Fe lines
was also investigated.
The comparison with IRFM using the photometric calibrations of \citet{casagrande2010}
shows exactly the same trend as the interferometric one of \citet{heiter2015}
(compare Fig.~\ref{IRFM} with Fig.~\ref{interferometry}),
confirming the equivalence of the two scales.
As for spectroscopic measurements, the results vary slightly with the authors, but in general they show agreement
with H$\alpha$ up to 5700 K.
A trend with metallicity is present and is opposite to that observed with interferometry and IRFM,
implying that the spectroscopic scale, in general,
underestimates/overestimates $T_{\mathrm{eff}}$~by \mbox{100 K} at [Fe/H] = $-0.6/$+0.4 dex
with respect to interferometry and IRFM (see Fig.~\ref{temperature-scales}).
In order to investigate the observed trend with metallicity
when comparing our measurements with the
interferometric and IRFM ones, we tested 3D model atmospheres.
H$\alpha$ profiles from 3D models
produce diagnostics quite similar to those of
1D models at solar parameters (we obtain a \mbox{$-15$ K} zero point),
while in the metal-poor range, at [Fe/H] = $-0.5$ dex,
they almost fully correct the underestimates of the 1D models (see Fig.~\ref{3D_1D_}).
This therefore indicates that the
trend with metallicity is largely due to the use of 1D models.
The correction we provide by the equation above, however, brings
the three scales, H$\alpha$ (1D + LTE), interferometry and IRFM, onto the same basis.
We further find that the systematically ``cool''
solar temperature determinations from H$\alpha$ models in the literature
are associated with normalization errors in the different versions of the Kitt Peak National Observatory solar atlases.
We quantified the impact of these errors on $T_{\mathrm{eff}}$~and find that models
enhanced by 3D atmosphere geometry and NLTE conditions do improve
the accuracy of 1D + LTE models, leading to practically null differences with the
solar $T_{\mathrm{eff}}$~derived from the Stefan-Boltzmann equation, 5772 K.
We tested the suitability of HARPS for temperature determination with H$\alpha$ profiles.
The tests were performed by analyzing spectra of 26 stars in common with the coud\'{e} sample and
47 solar spectra from the period 2007-2015. The solar spectra show consistent results, to better than $\pm$ 10 K,
demonstrating the stability of the HARPS blaze and the quality of the de-blazing process.
The very small ($-13$ K) offset resulting from the comparison of the stars in common with the coud\'{e} sample
confirms that the integrated normalization-fitting method minimizes random normalization errors.
Hence, when this method is applied, the internal errors of the H$\alpha$ profile
fitting are entirely due to the spectral noise.
Finally, in Table~\ref{final_teff} we list $T_{\mathrm{eff}}^{\mathrm{H}\alpha}$~as measured
(by combining all measurements from coud\'{e}, HARPS, and MUSICOS spectra)
and our best $T_{\mathrm{eff}}$~estimate obtained by applying the correction for
metallicity.
The [Fe/H] and log \textit{g} values used for deriving $T_{\mathrm{eff}}^{H\alpha}$
follow the hierarchy Heiter15, Ramirez13, Ramirez14b,
Ramirez14a, Maldonado15, Ghezzi10, Sousa08, Tsantaki13, Bensby14.
\begin{table}
\tiny
\centering
\caption{\small $T_{\mathrm{eff}}$ of the sample stars.
Column 4 lists the [Fe/H] values used to derive $T_{\mathrm{eff}}^{H\alpha}$; their sources are
shown in the last column in the same way as in Table~\ref{objects}:
(1) \citet{sousa2008}, (2) \citet{gh2010},
(3) \citet{tsa2013}, (4) \citet{ram2013}, (5) \citet{bensby2014}, (6) \citet{ram_2014},
(7) \citet{ram2014}, (8) \citet{maldo2015}, (9) \citet{heiter2015}.
Column 5 lists the weighted mean of the temperatures derived with coud\'{e}, HARPS, and MUSICOS
spectra.
Column 6 lists $T_{\mathrm{eff}}$ corrected from the H$\alpha$ diagnostics following the
relation \mbox{$T_{\mathrm{eff}} = T_{\mathrm{eff}}^{H\alpha} -159$[Fe/H] + 28}.
The errors presented are internal and are associated with the dispersion of the fit.
These are our best estimates.
}
\label{final_teff}
\begin{tabular}{l c c c c c c}
\hline\hline \\
Name & HD & HIP & [Fe/H] &$T_{\mathrm{eff}}^{H\alpha}$ (K) & best $T_{\mathrm{eff}}$ (K) & ctlg\\
\hline \\
$\zeta$ Tuc & 1581 & 1599 & $-0.22$ & $5866 $ & $5930 \pm 17$ & 4\\
$\beta$ Hyi & 2151 & 2021 & $-0.04$ & $5813 $ & $5848 \pm 20$ & 9\\
& 3823 & 3170 & $-0.34$ & $5947$ & $6030 \pm 18$ & 8\\
$\tau$ Cet & 10700 & 8102 & $-0.49$ & $5311 $ & $5417 \pm 22$ & 9\\
$\epsilon$ For & 18907 & 14086 & $-0.60$ & $4984 $ & $5108 \pm 48$ & 9\\
$\alpha$ For & 20010 & 14879 & $-0.30$ & $6112 $ & $6188 \pm 23$ & 4\\
$\kappa$ Cet & 20630 & 15457 & $~~~0.00$ & $5675 $ & $5704 \pm 22$ & 4\\
$10$ Tau & 22484 & 16852 & $-0.09$ & $5947 $ & $5990 \pm 25$ & 4\\
$\delta$ Eri & 23249 & 17378 & $+0.06$ & $5090 $ & $5110 \pm 12$ & 9\\
40 Eri & 26965 & 19849 & $-0.28$ & $5109 $ & $5182 \pm 33$ & 4\\
& 100623 & 56452 & $-0.37$ & $5101 $ & $5188 \pm 17$ & 4\\
$\beta$ Vir & 102870 & 57757 & $+0.24$ & $6096 $ & $6087\pm 18$ & 9\\
& 114174 & 64150 & $+0.05$ & $5703 $ & $5724 \pm 32$ & 4\\
$59$ Vir & 115383 & 64792 & $+0.11$ & $5975 $ & $5987 \pm 23$ & 4\\
$61$ Vir & 115617 & 64924 & $-0.02$ & $5557 $ & $5589 \pm 18$ & 4\\
$\eta$ Boo & 121370 & 67927 & $+0.32$ & $6042 $ & $6020 \pm 25$ & 9\\
& 126053 & 70319 & $-0.36$ & $5663$ & $5749 \pm 58$ & 4\\
$\alpha$ Cen A & 128620 & 71683 & $+0.26$ & $5765 $ & $5753 \pm 12$ & 9\\
$\psi$ Ser & 140538 & 77052 & $+0.12$ & $5653 $ & $5663 \pm 21$ & 8\\
& 144585 & 78955 & $+0.29$ & $5816 $ & $5799 \pm 27$ & 6\\
18 Sco & 146233 & 79672 & $+0.06$ & $5760 $ & $5780 \pm 20$ & 9\\
& 147513 & 80337 & $+0.03$ & $5805 $ & $5829 \pm 24$ & 4\\
$\zeta$ TrA & 147584 & 80686 & $-0.08$ & $6012 $ & $6054 \pm 17$ & 4\\
12 Oph & 149661 & 81300 & $+0.01$ & $5209 $ & $5236 \pm 34$ & 4\\
& 150177 & 81580 & $-0.66$ & $6056 $ & $6189 \pm 60$ & 5\\
& 154417 & 83601 & $-0.03$ & $5950 $ & $5984 \pm 12$ & 4\\
$\mu$ Ara & 160691 & 86796 & $+0.35$ & $5690 $ & $5664 \pm 13$ & 9\\
70 Oph & 165341 & 88601 & $+0.07$ & $5305 $ & $5323 \pm 33$ & 4\\
$\iota$ Pav & 165499 & 89042 & $-0.13$ & $5891 $ & $5941 \pm 32$ & 8\\
& 172051 & 91438 & $-0.24$ & $5565 $ & $5632 \pm 71$ & 4\\
 & 179949 & 94645 & $+0.20$ & $6134 $ & $6131 \pm 32$ & 6\\
31 Aql & 182572 & 95447 & $+0.41$ & $5581 $ & $5545 \pm 14$ & 5\\
& 184985 & 96536 & $+0.01$ & $6255 $ & $6282 \pm 21$ & 2\\
$\delta$ Pav & 190248 & 99240 & $+0.33$ & $5633$ & $5610 \pm 14$ & 4\\
15 Sge& 190406 & 98819 & $+0.05$ & $5904 $ & $5925 \pm 16$ & 4\\
$\phi^2$ Pav & 196378 & 101983 & $-0.44$ & $5979$ & $6078 \pm 28$ & 8\\
$\gamma$ Pav & 203608 & 105858 & $-0.66$ & $5991$ & $6124 \pm 31$ & 4\\
& 206860 & 107350 & $-0.06$ & $5878 $ & $5916 \pm 27$ & 4\\
$\xi$ Peg & 215648 & 112447 & $-0.27$ & $6125$ & $6197 \pm 21$ & 2\\
49 Peg & 216385 & 112935 & $-0.22$ & $6193 $ & $6257 \pm 35$ & 4\\
51 Peg & 217014 & 113357 & $+0.19$ & $5785 $ & $5784 \pm 15$ & 4\\
$\iota$ Psc & 222368 & 116771 & $-0.12$ & $6150$ & $6198 \pm 25$ & 4\\
\hline \\
\end{tabular}
\end{table}
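The metallicity correction quoted in the caption of Table~\ref{final_teff} can be checked numerically. The following Python sketch (an illustration of the published relation, not part of the analysis pipeline) reproduces the tabulated best $T_{\mathrm{eff}}$ for $\tau$ Cet:

```python
def corrected_teff(teff_halpha, feh):
    """Metallicity correction from the Table caption:
    T_eff = T_eff(Halpha) - 159 * [Fe/H] + 28   (all quantities in K)."""
    return teff_halpha - 159.0 * feh + 28.0

# tau Cet (HD 10700): T_eff(Halpha) = 5311 K, [Fe/H] = -0.49
teff = round(corrected_teff(5311.0, -0.49))  # -> 5417, matching the tabulated 5417 +/- 22 K
```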
\begin{acknowledgements}
R.E.G. acknowledges a ESO PhD studentship.
R.E.G. and M.L.U.M. acknowledge CAPES studentships.
G.F.P.M. acknowledges grant 474972/2009-7 from CNPq/Brazil.
D.L.O. acknowledges the support from FAPESP (2016/20667-8).
S.U. acknowledges the support of the Funda\c{c}\~ao para a Ci\^{e}ncia e Tecnologia (FCT) through national funds
and of the FEDER through COMPETE2020 by these grants UID/FIS/04434/2013 \& POCI-01-01-145-FEDER-007672 and PTDC/FIS-AST/1526/2014
\& POCI-01-0145-FEDER-016886.
H.G.L. acknowledges financial support by the Sonderforschungsbereich SFB\,881
``The Milky Way System'' (subprojects A4) of the German Research Foundation (DFG).
We thank the staff of the OPD/LNA for considerable support in the observing runs needed to complete this project.
Use was made of the Simbad database, operated at the CDS, Strasbourg, France,
and of NASA Astrophysics Data System Bibliographic Services.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Acknowledgment} \par
We thank Prof. Dr. med. Friedrich Köhler and his team for the provisioning of the dataset and Prof. Dr. med. Alexander Meyer for helping and supporting us in analyzing the dataset. We are grateful to Alexander Acker for the fruitful discussions, Volker Möller for his help with the dataset, and Boris Pfahringer for his valuable feedback on our evaluation. This research and the Telemed5000 project have been supported by the Federal Ministry for Economic Affairs and Energy of Germany as part of the program ``Smart Data'' (project
number 01MD19014C).
\section{Conclusion}
Based on the dataset of daily recordings of vital parameters, medical interventions, hospitalizations, and deaths, we developed a machine learning model to predict the risk of a patient requiring an intervention.
We showed that our approach outperforms the rule-based model used in the Fontane project. The DNN may help medical practitioners provide timely and valuable assessment to the most critical patients.
To ensure that no patient is overlooked, investigations in the TMC could be scheduled in addition to the model's sorting. This would reduce the capacity available to the model but ensure that each patient is seen within a defined time period (e.g. 14 days).
In further research we will investigate Recurrent Neural Networks (RNNs) because of the time-series nature of the dataset. The model devised in this research is not patient-specific but generalized across all patients. It can be assumed that there is some variance between patients which could be used to adapt the model to each patient individually, thus boosting its performance.
\section{Introduction} \par
According to the World Health Organization, cardiovascular diseases (CVDs) are the main cause of a non-communicable disease mortality in the world~\cite{WHO_2020}.
It is important to detect a patient's critical condition early to enable a timely intervention. One way to ensure this is to monitor patients remotely in their homes from telemedical centers (TMCs). Modern technology makes it possible to provide patients with monitoring services even in areas without comprehensive medical infrastructures. In recent years, it was shown that telemedical interventions reduce the mortality in patients with heart failures~\cite{koehler-timhf2,telemedicine_effectiveness}.
\par This paper is a part of the Telemed5000 project and follows our previous work on clinical decision support systems for heart failure which was a part of the Fontane project in collaboration with Charité, Berlin~\cite{fontane,heinze_hybrid_remote_monitoring}. Our aim is to scale up the TMC capacity to ensure that up to 5,000 patients will be cared for in the future utilizing Artificial Intelligence (AI).
\par In this paper we describe the development and evaluation of a machine learning model for the prediction of the daily per-patient risk of being in a medically critical condition.
The patients are sorted by this estimated risk, enabling the TMC to concentrate on the most severe cases.
To accomplish this we use a database with daily vital parameters from the TIM-HF 2 study~\cite{koehler-timhf2}.
\section{Materials and Methods}
In this research we used the Telemedical Interventional Management in Heart Failure II (TIM-HF2) database, which was created by Charité, Berlin during the Fontane project~\cite{koehler-timhf2}. The trial was conducted in Germany between 2013 and 2018. TIM-HF2 included 1,538 patients (773 usual care) whose stage of heart failure was class II or III according to the New York Heart Association (NYHA) classification. Additionally, they had been admitted to a hospital at most 12 months prior to the study due to heart failure and had a left ventricular ejection fraction (LVEF) of $<$ 45\%.
The dataset includes daily measurements performed by the patients themselves using a weight scale, a blood pressure monitor, a pulse oximeter, a small ECG device and a tablet-like device for the self-reporting of their well-being. In total the unprocessed dataset consists of records from 763 patients out of which 100 died before the end of the study (66 within 7 days after their last measurement). The database also contains clinical events like 387 endpoint-adjudicated hospitalizations and 4,329 interventions performed by the TMC. Patients were asked to participate for one year.
We included the following features in the data: age, weight, blood pressure, oxygen saturation, gender, diabetes, the NYHA class, several symptoms and signs of heart failure (e.g. AV block, LBBB), automatically extracted data from the ECG (heart rate, sinus rhythm, ventricular tachycardia, atrial fibrillation), self-assessed state of health, weight difference (1, 3 and 8 days difference), and social variables (e.g. living alone, anxiety). The binary target variable is the union of TMC intervention, hospitalization, and death events.
Missing values were linearly imputed for up to 2 consecutive missing days; longer gaps were dropped. The positive class forms only approximately 2\% of the dataset, so we oversampled observations from the minority class in the training set to balance the classes.
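As an illustration of the gap-handling rule described above (the function name and exact implementation are our own sketch, not the project code), short runs of missing values can be filled in linearly while longer runs are left for removal:

```python
import numpy as np

def impute_short_gaps(series, max_gap=2):
    """Linearly impute runs of up to `max_gap` consecutive NaNs;
    longer runs (and gaps touching the ends) stay NaN and are
    dropped later in the pipeline."""
    x = np.asarray(series, dtype=float)
    isnan = np.isnan(x)
    i = 0
    while i < len(x):
        if isnan[i]:
            j = i
            while j < len(x) and isnan[j]:
                j += 1
            gap = j - i
            if 0 < i and j < len(x) and gap <= max_gap:
                # interpolate between the bracketing observed values
                x[i:j] = np.interp(range(i, j), [i - 1, j], [x[i - 1], x[j]])
            i = j
        else:
            i += 1
    return x

vals = impute_short_gaps([80.0, np.nan, np.nan, 83.0,
                          np.nan, np.nan, np.nan, 90.0])
# the 2-day gap is filled (81, 82); the 3-day gap stays NaN
```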
The dataset was split into three sets: train, validation, and test in a proportion 4:1:1 respectively. The distribution of samples and events per patient was preserved across all sets. Each patient was assigned to exactly one set. To evaluate model performance we took the following metrics into consideration: Receiver Operating Characteristic (ROC) curve, area under ROC curve (AUCROC), Precision - Recall curve, and area under PR curve (AUCPR).
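The ROC area used above can be computed without explicitly tracing the curve, via the standard equivalence between AUCROC and the Mann-Whitney U statistic. The helper below is a sketch of that identity, not the evaluation code used in the study:

```python
def auc_roc(scores, labels):
    """AUCROC as the probability that a randomly drawn positive
    sample receives a higher score than a randomly drawn negative
    one (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc_roc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])  # -> 1.0 (perfect separation)
```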
We investigate different deep neural network (DNN) models and compare them to a rule-based baseline. The rule-based model (RB) is based on the TIM-HF2 study~\cite{koehler-timhf2}. The rules are heart-related and consist of engineered features and thresholds defined by an expert group at the Charité \cite{koehler-studydesign}. All DNN models had an output layer with \textit{a sigmoid activation} function, \textit{binary cross-entropy} as the loss function, and \textit{Adam} as the optimization algorithm. We tested between 2 and 5 hidden layers with 5 to 150 neurons each. Additionally, we tested linear, sigmoid, and ReLU activation functions, and dropout rates between 0 and 0.5.
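For concreteness, the selected architecture (three hidden layers of 35, 20, and 35 ReLU units with a sigmoid output, as reported in the Results section) can be sketched as a plain NumPy forward pass. The input width of 20 is a placeholder, since the exact feature count is not spelled out here, and dropout is omitted because it acts only during training:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hidden widths follow the final model (35, 20, 35); the input width
# of 20 is an assumed placeholder, and weights here are random
sizes = [20, 35, 20, 35, 1]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def risk_score(x):
    """ReLU hidden layers, sigmoid output: a risk score in (0, 1).
    Dropout (0.25 / 0.15 / 0.3) is a training-time regularizer and
    is therefore left out of this inference sketch."""
    h = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return sigmoid(h @ weights[-1] + biases[-1]).item()

score = risk_score(rng.normal(size=20))  # strictly between 0 and 1
```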
\section{Related Work} \par
Decision Support Systems (DSSs) have been used in the medical field since the seventies~\cite{shortliffe1975computer}. Seto et al. applied a rule-based model to monitor patients with heart failure~\cite{rulebased_telemedicine_hf}. A rule-based model was implemented for the Fontane project, which had to prioritize patients based on their daily vital parameters~\cite{koehler-timhf2,fontane}.
Groccia et al. proposed a linear Support Vector Machine (SVM) model which predicts major cardiovascular worsening events for patients with heart failure~\cite{worsening_hf_ml}.
Stehlik et al. studied the potential efficiency of noninvasive remote monitoring in predicting heart failure re-hospitalizations~\cite{stehlik_hf_hosp_pred}.
Heinze et al. proposed a Hybrid AI model as an improvement for the rule-based model in Fontane~\cite{heinze_hybrid_remote_monitoring}. The hybrid model consists of a Neural Network (NN) with one hidden layer and two rules which were handcrafted by medical experts.
\section{Results}
Fig. \ref{fig:roc_curve} shows the ROC curves for the selected DNN and the rule-based model. The DNN outperforms the rule-based model having better sensitivity at any specificity and an AUCROC of 0.84 as compared to 0.73. Fig. \ref{fig:pr_curve} shows the Precision/Recall curve for both models. The DNN outperforms the rule-based model in precision at any recall rates. The plots in Fig. \ref{prob_distr} show the distributions of the predicted risk-scores
for both classes, as predicted by the DNN and rule-based models. It can be seen that the DNN model performs better in detecting events than the rule-based model, as there is a clearer cut between the distributions. The final DNN model was trained for 453 epochs using a batch size of 4096 and a learning rate of 0.001. It has 3 hidden layers with 35, 20, and 35 neurons respectively. All neurons in the hidden layers use ReLU as their activation function and have dropout rates of 0.25, 0.15, and 0.3. The patient's self-assessment, weight differences, pulse rate, and complaints had the highest impact on the model's decision making.
\begin{figure}[h!]
\centering
\hspace{\fill}%
\subfloat[ROC Curves\label{fig:roc_curve}]{
\includegraphics[width=0.4\textwidth]{roc_curve.pdf}
}
\hspace{\fill}%
\subfloat[PR Curves\label{fig:pr_curve}]{
\includegraphics[width=0.4\textwidth]{pr_curve.pdf}
}
\hspace*{\fill}%
\caption{The figures show the (a) ROC curves and (b) PR curves for both the DNN and the rule-based model. The dashed lines represent what performance a purely random classifier would achieve.} \label{roc_pr_curve}
\end{figure}
\begin{figure}[h!]
\centering
\hspace{\fill}%
\subfloat[Deep Neural Network]{
\includegraphics[width=0.45\textwidth]{prob_distr_model_ocolor.pdf}
}
\hspace{\fill}%
\subfloat[Rule-Based Model]{
\includegraphics[width=0.47\textwidth]{prob_distr_rule_ocolor.pdf}
}
\hspace*{\fill}%
\caption{The depicted plots show the distribution of the predicted risk-scores in the test set, separated by the true label.} \label{prob_distr}
\end{figure}
\section{Introduction}
In atomic physics, the mean-field method describing the
motion of electrons confined in the three-dimensional spherically symmetric
Coulomb potential of the nucleus provides an impressively powerful tool to
explain the chemical inertness and special stability of the noble gases.
The well-known atomic shell structure is a consequence of the fact
that the atomic levels 1s, 2s, 2p, 3s, 3p, ... show a ``bunchiness'' in their
distribution as a function of energy. Particular stability of the electronic
system is reached when a bunch of such levels is fully occupied. If then one
more electron is added, the electron configuration would involve a singly
occupied orbital from the next higher shell, and consequently, the system is
then less stable. Shell filling is thus reflected by large maxima in the
ionization energy for atomic numbers 2, 10, 18,~..., corresponding to the
noble gas atoms He, Ne, Ar, ...~. In the mid-shell regions, large level
degeneracies occur as a consequence of the spherical symmetry of the
confining potential of the atomic nucleus. The mid-shell levels are then
filled according to Hund's rules, in particular maximizing the total electron
spin for half-filled orbitals~\cite{weissbluth}.
A shell structure is not only unique to atoms, but actually is a
recurring property in finite fermion systems with high symmetry~\cite{bm}.
It equally explains the occurrence of ``magic'' proton and
neutron numbers in the binding energies of nuclei, and more recently the
discovery of ``magic'' atom numbers in metal clusters~\cite{clusters} -- small
aggregates of metal atoms in which delocalized valence electrons move in the
positive charge background of the ions. Fundamentally, in contrast to atoms,
however, both mid-shell nuclei and clusters deform their mean field rather
than obey Hund's rules.
The two-dimensional analogue to the atom with its static $1/r$
radial Coulomb confinement due to the nucleus can be realized in small
semiconductor devices: artificial semiconductor atoms based on quantum
dot technology. Clean, well-defined, and highly symmetric vertical quantum
dots (``islands'') can now be made so small that the dot size is comparable to
the Fermi-wavelength~\cite{AUSTING,AUSTING2,TARUCHA2}. Typical micrographs of
micron-sized device mesas incorporating these dots are shown in
Fig.~\ref{fig:1}. The lateral electrostatic confinement originates from
side-wall depletion, and this (and the effective dot size) can be controlled
or ``squeezed'' by the action of a Schottky gate wrapped around the mesa in
the vicinity of the dot to the degree that the number of electrons trapped
on the dot can be changed one-by-one. Also the few-electron regime is readily
accessible, and then the resulting confinement is well approximated by a
parabolic $r^2$ potential. Electron phenomena in related semiconductor
quantum dot structures continue to attract much attention
~\cite{KASTNER,MEIRAV,ASHOORI}. As they exhibit atomic-like properties,
such as a shell structure and shell filling in accordance with Hund's first
rule, the vertical quantum dots, whose heterostructure barriers are both
abrupt and thin, can be regarded as artificial atoms whose ground and
excited states can be probed electrically by single electron tunneling
spectroscopy in order to perform novel ``atomic physics'' experiments in the
few-electron regime~\cite{TARUCHA,KOUWENHOVEN}.
When an arbitrarily small bias, $V$, is applied across the dot between
the metal contact on top of the device mesa and the substrate contact (these
are often referred to as the source and drain contacts), the ground states of
an $N-$electron quantum dot weakly coupled to the contacts can be investigated
directly by monitoring the current flowing vertically through the dot at or
below 0.3 K as the voltage on a single gate, $V_g$, surrounding the dot is
varied. When no current flows (Coulomb blockade), $N$ is well defined. On the
other hand, when current flows the number of electrons can oscillate between
$N$ and $N+1$. With the gate, $N$ can be increased one-by-one starting from
zero by making $V_g$ more positive, so a series of sharp current peaks due
to the charging of the dot (Coulomb oscillations) can be observed. For a
large dot containing many electrons, the Coulomb oscillations are usually
periodic because the single electron charging energy is determined classically
just by the total dot capacitance. For a dot containing just a few electrons
both quantum effects reflecting the underlying symmetry of the confining
potential, and the details of the electron-electron interactions become
important as the dot size is reduced. This leads to modifications of the
Coulomb oscillations, so they are no longer expected to be periodic
~\cite{TARUCHA2,TARUCHA}.
To date, we have mainly focused on the properties of dots in circular
mesas which have diameters of typically 0.4 to 0.7 microns. For a magnetic
field parallel to the current, the measured ground states between 0 T and
about 4 T for $N<20$ in these disk-shaped dots can be well accounted for by a
single-particle picture based on the Darwin-Fock spectrum for a circular
two-dimensional harmonic confining potential, a constant interaction, and
corrections at 0 T due to exchange, i.e. Hund's first rule
~\cite{TARUCHA2,TARUCHA}. At higher fields beyond about 4 T, the evolution
of ground states (and also the excited states) for $N<6$ can be understood
in terms of many-body effects~\cite{KOUWENHOVEN}.
The main theme of this article concerns the effect of geometrically
distorting a circular dot into an elliptical (anisotropic) dot. Previously,
we have briefly reported some properties of elliptical dots
~\cite{TARUCHA2,SASAKI}. Here, we present a more detailed study of the
addition energies and include their magnetic-field dependencies. The
experimental data are compared to model calculations. We survey general
trends, and examine basic assumptions about the nature of the deformed dots.
A perfectly circular dot possesses full rotational symmetry. This high
symmetry leads to maximal level degeneracy of the single-particle
two-dimensional states for parabolic confinement, and this emphasises atomic-like
properties~\cite{TARUCHA2}. This level degeneracy at 0 T for a circular dot
is evident in the single-particle spectrum in Fig.~\ref{fig:2}(a), and
consecutive filling of each set of degenerate states is directly responsible
for the characteristic shell structure with ``magic'' numbers
$N= $2, 6, 12, 20,~... . Furthermore, Hund's first rule accounts
for the parallel filling of electrons amongst half-filled degenerate
states in a shell at numbers $N= $4, 9, 16,~... due to an exchange effect.
Breaking the circular symmetry by deforming the lateral confining potential
lifts the degeneracies of the single-particle levels present in a disk-shaped
dot. This destroys the shell structure for a circle, and modifies other
atomic-like properties~\cite{MADHAV}.
The sequence of spectra in Fig.~\ref{fig:2} also introduces two key
points in our subsequent arguments. Firstly, as the deformation is
gradually increased, (a) to (d), degeneracies of the single-particle states
at 0 T are generally removed. Nevertheless, accidental degeneracies
can occur at certain ``magic'' deformations, e.g. (b) and (c), leading to
subshell closures, provided the confining potential is still perfectly
parabolic. The resulting patterns, however, are very different from
that for the circular case, (a), and in practice may be hard to observe.
Secondly, a weak magnetic field parallel to the current can also induce
level degeneracies in both circular and elliptical dots when single-particle
levels cross at finite field, but here too, any shell structure at a
particular field is of a lower order and less apparent than that for the
circle at 0 T~\cite{MADHAV}.
While illustrative, ultimately any modelling of the behavior of real
dots must go beyond a system of $N$ non-interacting electrons confined by a
two-dimensional harmonic oscillator, i.e.~a single-particle picture, as
employed to generate the spectra in Fig.~\ref{fig:2}~\cite{MADHAV}, and
include Coulomb interactions which can lift certain degeneracies at 0 T.
Numerical diagonalization of the full Hamiltonian matrix has recently been
successfully employed to calculate basic electronic properties of dots with
anisotropic confining potentials~\cite{EZAKI,EZAKI2}. Such ``exact'' numerical
calculations, however, are limited to only a few confined particles. In order
to study dots confining a larger number of electrons
we apply spin-density functional theory at 0 T. This powerful technique,
which explicitly incorporates the electron-spin interactions, has lead to a
number of interesting predictions for the ground state structure of quantum
dots, although there is a continuing discussion as to the interpretation
of so-called spin-density
waves (SDW)~\cite{LEE,koskinen,steffens,serra,reimann1,Hirose}. Both ``exact''
numerical calculations and spin-density functional theory predict subtle
changes in the addition energy spectra, and transitions in the spin-states
as deformation is varied -- even for a weak deformation. An example of the
latter is the breakdown of the conditions for which Hund's first rule
applies for four electrons, and this marks a transition from a spin-triplet to
a spin-singlet configuration, i.e. states are consecutively filled by spin-up
and spin-down electrons.
\section{Experimental setup}
The vertical quantum dots under focus in the following are fabricated
by electron-beam lithography, and a two step etching technique to make
circular or rectangular sub-micron mesas from one special
GaAs/Al$_{0.22}$Ga$_{0.78}$As/\-In$_{0.05}$Ga$_{0.95}$As/\-Al$_{0.22}$
Ga$_{0.78}$As/\-GaAs double barrier heterostructure (DBH). Full details of
the device fabrication, and the material parameters are given elsewhere
~\cite{AUSTING,AUSTING2,TARUCHA2,TARUCHA}. A single Schottky gate is placed
around the side of the mesa close to the DBH. We discuss one circular mesa
with a nominal top contact diameter, $D$, of 0.5 $\mu$m (W), and three
rectangular mesas with a top contact area $(L\times S)$ 0.55 $\times 0.4$
$\mu$m$^2$ (X), 0.65 $\times 0.45$ $\mu$m$^2$ (Y), and 0.6 $\times 0.4$
$\mu$m$^2$ (Z). $L(S)$ is the nominal dimension of longest (shortest) side
of the top contact. Fig.~\ref{fig:1} shows typical scanning electron
micrographs of a circular mesa, and a rectangular mesa taken immediately
after the deposition of the Schottky gate metal surrounding the mesa. For
the rectangular mesas, an intuitively simple way to classify them is to
define a geometric parameter, $\beta $, to be the ratio $L/S$. For X, Y, and
Z respectively $\beta $ is nominally 1.375, 1.44 and 1.5. Due to a slight
isotropic undercut resulting from the light wet etch during the formation of
the mesa~\cite{AUSTING,AUSTING2}, the area of the mesas, as revealed by the
micrographs, is a little less than that of the top contact, so realistic
values for $\beta $ are estimated to be about 5\% larger than the values
quoted.
Fig.~\ref{fig:1} also schematically shows the slabs of semiconductor
between the two Al$_{0.22}$Ga$_{0.78}$As tunneling barriers, and the resulting
dots bounded by the shaded depletion region for the circular and rectangular
mesas. The thickness of the In$_{0.05}$Ga$_{0.95}$As slab is determined by
the separation between the well defined heterostructure tunneling barriers
(approximately 100~{\AA}). The slab is sufficiently thin that all electrons
are in the lowest state in the vertical direction parallel to the current.
The lateral confining potential due to the side wall depletion further
restricts electrons to the center of the slab, thus defining the dot region.
We note that in our devices, the extent of the lateral depletion region in the
vicinity of the dot is largely determined by the electron density in the
n-doped GaAs regions above and below.
The lateral harmonic confining potential of the dot in the circular
mesa has circular symmetry of a sufficiently high degree that degenerate sets
of states can systematically form in the disk-shaped dot
~\cite{TARUCHA2}. These states can be labelled by the quantum numbers
($n$,$l$), where $n$ is the radial quantum number (=0, 1, 2, ...), and $l$ is
the angular momentum quantum number $(=0,\pm 1,\pm 2, ...)$. Each state can
hold a spin-up electron and a spin-down electron. At 0 T the $2n+|l|+1$-th
shell is made up of $2n+|l|+1$ degenerate single-particle states. Each
degenerate set of states can be regarded as a shell of an artificial atom,
and this is the origin of the 2, 6, 12, 20, ... ``magic'' numbers. The first
shell consists of the (0,0) level, the second shell of the (0,1) and (0,-1)
levels, the third shell of the (0,2), (1,0) and (0,-2) levels, and so on.
For the circular dots we typically study, the lateral electrostatic
confinement energy separating these degenerate sets of single-particle
states, $E_Q$, can be as large as 5 meV in the few-electron
limit~\cite{KOUWENHOVEN}. Neglecting an arbitrary constant, the energy of
single-particle state ($n$,$l$) is $(2n+|l|+1)E_Q$. The effective lateral
diameter can be ``squeezed'' from a few thousand Angstroms for $N$ of
approximately 100 down to 0 {\AA} for $N=0$ by making the gate voltage more
negative~\cite{TARUCHA2,TARUCHA,KOUWENHOVEN}. We stress that crucially the
``squeezing'' action of the gate, and indeed application of a magnetic field
parallel to the current, preserves the circular symmetry of a disk-shaped
dot. Consequently, atomic-like properties should be particularly robust and
evident in circular dots.
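The ``magic'' sequence follows directly from the counting described above: the $k$-th shell contains $k$ orbital states, each holding two electrons. A short Python check (illustrative only):

```python
def magic_numbers(n_shells):
    """Cumulative fillings of a 2D isotropic harmonic oscillator:
    shell k holds k orbital states, i.e. 2k electrons with spin,
    giving the magic numbers 2, 6, 12, 20, ..."""
    magic, total = [], 0
    for k in range(1, n_shells + 1):
        total += 2 * k
        magic.append(total)
    return magic

magic_numbers(4)  # -> [2, 6, 12, 20]
```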
For a rectangular mesa, the lateral confining potential of the dot is
expected to be elliptical-like due to rounding at the corners provided the
number of electrons in the dot is not too large (in which case it may be more
rectangular-like with rounded corners), or too small. Right at ``pinch-off'',
$(N\rightarrow 0)$, it may even become more circular-like, i.e. the
elliptical-shape of the confining potential may be changing in a complex way
~\cite{TARUCHA2,SASAKI}. Assuming the confining potential is perfectly
parabolic, we can choose to characterize the ``ellipticity''
by a deformation parameter, $\delta =E_S/E_L$. Here, $E_S (E_L)$ is
the confinement energy at 0 T along the minor (major) axis ($E_S>E_L$). The
states in the elliptical dot are now labelled by the quantum
numbers ($n_L$,$n_S$), where $n_L$ ($n_S$) is a quantum number (=0, 1, 2, ...)
associated with the energy parabola along the major (minor) axis
~\cite{MADHAV}. Again neglecting an arbitrary constant, the energy of
single-particle state ($n_L$,$n_S$) is ($n_L$+1/2)$E_L$+($n_S$+1/2)$E_S$.
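Under the perfectly parabolic assumption, the accidental degeneracies at ``magic'' deformations are easy to exhibit numerically from this level formula (energies in units of $E_L$; the script is a sketch, not a fit to the devices discussed here):

```python
from itertools import product

def elliptic_levels(delta, n_max=6, n_levels=10):
    """Lowest single-particle energies (n_L + 1/2) + (n_S + 1/2)*delta
    of an elliptic parabolic dot, in units of E_L, with
    delta = E_S / E_L >= 1."""
    energies = sorted((nL + 0.5) + (nS + 0.5) * delta
                      for nL, nS in product(range(n_max), repeat=2))
    return energies[:n_levels]

# at the "magic" deformation delta = 2, accidental degeneracies appear
elliptic_levels(2.0)  # -> [1.5, 2.5, 3.5, 3.5, 4.5, 4.5, 5.5, 5.5, 5.5, 6.5]
```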
For a perfectly circular mesa, we can trivially generalize our
definition of the deformation parameter so that $\delta =\beta =1$.
On the other hand, for the rectangular mesas, there is no simple
correspondence between $\beta $, a ratio of lengths characteristic of the
top metal contact which is independent of gate voltage (or $N$), and
$\delta $, a ratio of energies characteristic of the dot in the mesa
which is in fact dependent on the gate voltage (or equivalently $N$), i.e.
``accidental'' degeneracies at ``magic'' deformations will be hard to see
over an extended range of $N$, and in any case may be lifted if the
confinement potential is not completely parabolic. Nevertheless, at this
stage, we start by assuming that $\beta $ is a measure of $\delta $, and
thus one might expect $\delta_Z>\delta_Y>\delta_X>\delta_W$. We are not
saying that $\delta =\beta $ for the ellipses, and indeed even for the
simplest possible model of uniform depletion spreading due to the action of
the gate, we would expect $\beta $ to underestimate $\delta $. We furthermore
assume in the following model calculations, for simplicity, that the
``squeezing'' action of the gate does not alter $\delta $. We will
examine these assumptions in light of the experimental and theoretical
data presented. Note that the application of a magnetic field parallel
to the current effectively reduces $\delta $ as seen by the confined
electrons in the limit of a very high field, where it approaches
unity.
\section{Addition energy spectra for circular and deformed dots}
In Fig.~\ref{fig:3}, the change (formally the second difference) in
the electro-chemical potential, $\mu(N+1)-\mu(N)=\Delta _2(N)$, which can also
be regarded as a capacitive energy~\cite{LEE}, is plotted as a function of
electron number, $N$, up to $N=17$ for (a) W, (b) X, (c) Y, and (d) Z at 0 T.
The traces are offset vertically by 3~meV for clarity. The $N$th current
peak position in gate voltage, V$_g$, at a very small bias ($\ll 1$ mV), i.e.
measured in the linear conductance regime, at or below 0.3 K reflects
$\mu (N)$, the electro-chemical potential of the ground state for $N$
electrons, or equivalently the ``addition energy'' to place an extra electron
on a dot with $N-1$ electrons. $\Delta _2(N)$ then mirrors directly the
spacing in gate voltage between the $N+1$th and the $N$th current
peaks~\cite{TARUCHA}. $\Delta _2(N)$ is actually the half-width of the
$N$th Coulomb diamond, the diamond-shaped region in the $V-V_g$ plane
in which current is blocked between the $N$th and the $N+1$th current
peaks. $\Delta _2$ contains contributions from the single-electron charging
energy and changes in the single-particle energy, $E_Q$
~\cite{TARUCHA,KOUWENHOVEN}.
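In the simplest constant-interaction picture this reads $\Delta_2(N) = E_C + \varepsilon_{N+1} - \varepsilon_N$, with $E_C$ the charging energy and $\varepsilon_k$ the $k$-th lowest single-particle level (spin included). The toy Python illustration below (with arbitrary energies, not fitted to device W, and ignoring the exchange effects behind the $N=4,9,16$ peaks) reproduces the extra peaks at the shell closures $N=2,6,12$:

```python
def delta2(e_c, e_q, n_max=12):
    """Delta_2(N) = E_C + eps_{N+1} - eps_N for a circular parabolic
    dot in the constant-interaction picture: shell k holds k orbitals
    at energy k*E_Q, each accommodating two spins."""
    eps = sorted(k * e_q for k in range(1, 8) for _ in range(2 * k))
    return {n: e_c + eps[n] - eps[n - 1] for n in range(1, n_max + 1)}

d2 = delta2(e_c=3.0, e_q=3.0)
# d2[2] == d2[6] == d2[12] == 6.0 at the shell closures; 3.0 elsewhere
```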
At 0 T, for the circle W, $\Delta _2(N)$ is strongly
dependent on $N$, and a very clear characteristic shell structure
is evident in Fig.~\ref{fig:3}(a)~\cite{TARUCHA}.
Particularly large peaks ($N=2,6,12$), and relatively
large peaks ($N=4,9,16$) are indicated. The result from a local spin density
approximation (LSDA) calculation discussed below is also included for
comparison~\cite{reimann1}. $N=$2,6, and 12 are the first three ``magic''
numbers for a circular two-dimensional harmonic potential which mark
completion of the first three shells (containing respectively 1, 2 and 3
degenerate zero-dimensional single-particle states or equivalently 2, 4 and 6
electrons). The peaks at $N=$ 4, 9, 16 arise as a consequence of exchange
effects which are enhanced at half-full shell filling with same-spin
electrons for the 2nd, 3rd, and 4th shells respectively
~\cite{TARUCHA}. This shell structure should be clear (and this is generally
the case in practice for $N<20$) as long as: $i)$ the two-dimensional lateral
potential remains radially parabolic, and rotationally symmetric to a fairly
high degree, $ii)$ $E_Q$ is comparable to, or larger than, the Coulomb
interaction energy, and $iii)$ the effect of screening is not significant.
For the circular mesa W, it is also evident that as $N$ is decreased,
$\Delta _2(N)$ generally becomes larger due to the increase of the Coulomb
interaction when the dot is ``squeezed''. This observation also holds for the
rectangular mesas, but there are no prominent maxima at $\Delta _2(2,6,12)$.
The shell structure for the disk-shaped dot has now become disrupted or
``smeared out'', and this can be attributed directly to the lifting of the
degeneracies of the single-particle states that are present in a circular dot
~\cite{TARUCHA2,TARUCHA,SASAKI}. In other words, deformation kills the shell
structure for a circle, and even quite a small deformation can make a
big difference. This is evident from the three traces, (b) to (d) in
Fig.~\ref{fig:3}, but there are major difficulties in discussing specific
details. As noted earlier, in practice, right at ``pinch-off'', $\delta $
may actually tend towards unity~\cite{TARUCHA2,SASAKI}, but more generally
$\delta \approx \beta $ may be unreliable. Also, even for two circular dots
which have a clear shell structure in the few-electron limit, the absolute
values of $\Delta_2(N)$ can vary from dot to dot, i.e. the precise
details are device dependent, and beyond the third shell only a few devices
show the expected behavior clearly~\cite{TARUCHA}. Lastly, even if
$\delta $ could be determined accurately, $\Delta _2(N)$ strictly
speaking can only be fairly compared if the ``areas'' of the dots are
comparable, as in the classical limit $\Delta _2(N)$ is determined by the
overall dot capacitance~\cite{TARUCHA2}. Based on the nominal sizes
of the mesas, and in line with the trends of the ``pinch-off'' gate voltage
as identified by the position of the first current peak, elliptical dots X,
Y, and Z respectively may have ``areas'' 1.1, 1.5 and 1.2 larger than that
of the circular dot W. Thus to sensibly discuss details, like the spin-states,
even generally, we first calculate $\Delta_2(N)$ at 0 T for a range of
$\delta $ values in line with those suggested by the $\beta $ values of the
mesas X, Y, and Z. We then compare with, and look for patterns in, the
experimental data at 0 T, before looking at the magnetic field dependence
for confirmation of trends, and whether $\delta \approx \beta $ is reasonable.
\section{Mean-field model for circular and elliptical quantum dots}
We next aim to model the changes due to the deformation of the
lateral confinement to the shell structure of the quantum dots at 0 T by
applying the methods of spin-density functional theory (SDFT). We will briefly
address different aspects of the spin structure relevant to the deformed
quantum dots.
\subsection{The method}
To obtain the ground-state energies and densities for $N$ electrons
confined in an externally imposed potential, we solve the spin-dependent
single-particle Kohn-Sham (KS) equations~\cite{kohnsham}
\begin{equation}
\left[
-{{\hbar^2}\over{2m^*}}\nabla^2 _{\bf r} +
V_{\rm eff}^\sigma ({\bf r})\right]
\psi_{i,\sigma}({\bf r})
=\epsilon_{i,\sigma}\psi_{i,\sigma}({\bf r})
\label{kseq}
\end{equation}
in a plane-wave basis to avoid any symmetry restrictions.
In Eq.(\ref{kseq}), the index $\sigma $ accounts
for the spin ($\uparrow$ or $\downarrow$), and ${\bf r}=(x,y)$.
The effective mean-field potential,
$V_{\rm eff}^\sigma ({\bf r})$, contains contributions from the
external harmonic confining potential,
the Hartree potential of the electrons,
and the functional derivative of the local
exchange-correlation energy, for which we use
the approximation of Tanatar and Ceperley~\cite{tantar}
(see also~\cite{koskinen,reimann1} for details).
The electrostatic confinement due to the lateral depletion region imposed
by the side wall and the Schottky gate is approximated by a two-dimensional
anisotropic harmonic oscillator with
frequencies $\omega _x=\omega \sqrt{\delta }$ and
$\omega _y=\omega /\sqrt{\delta }$,
\begin{equation}
V_{ext}(x,y)={1\over 2}m^*
\omega ^2 \left( \delta x^2+ {1\over \delta }y^2
\right).
\end{equation}
The ratio of the oscillator frequencies,
$\delta = \omega _x/\omega _y$, thus
defines the ratio of semiaxes of the ellipsoidal equipotentials. We
impose the constraint, $\omega^2$=$\omega _x\omega _y$, which is equivalent to
conserving the area of the quantum dot with deformation~\cite{reimann1}.
The $x$ and $y$-axes are indicated in the schematic diagram for the elliptical
dot in Fig.~\ref{fig:1}. With this convention, the above defined $E_S$ and
$E_L$ respectively correspond to ${\hbar}\omega _x$ and ${\hbar}\omega _y$.
In the model we present the dot is assumed to be well isolated from its
surroundings, so any effects due to the presence of the gate and the
neighboring conducting regions are neglected. Likewise, screening and
non-parabolicity effects inside the dot, which become more important for
large $N$, are not considered.
For $\delta =1$, a circular shape for the quantum dot is obtained,
whereas $\delta > 1$ corresponds to an ellipsoidally deformed quantum dot. The
strength, $\omega $, of the external parabolic confinement leading to an
average particle density, $n_0=1/(\pi r_s^2)$,
in a circular dot is approximated
by $\omega ^2 = e^2 / (4 \pi \epsilon _0 \epsilon
m^* r_s^3\sqrt{N})$~\cite{koskinen}.
Minimizing the energy density functional by self-consistently solving the
above KS equations, Eq.~(\ref{kseq}), ground state energies,
$E(N,\delta)$, are obtained for different electron numbers and deformation
parameters. Full technical details are given elsewhere
~\cite{koskinen,reimann1}, and here we report only the results. We
emphasise that from recent measurements, it is clear that as $N$ increases
the confinement weakens in such a way that the particle density tends to a
constant~\cite{AUSTING4}. This is implicit in our model, as for any given
value of $r_s$, the oscillator frequency $\omega $, and the related
frequencies $\omega _x$ and $\omega _y$, decrease with increasing $N$.
$\delta $ is also kept constant for simplicity, although $\delta $ is
expected to vary with $N$ in practice.
Although strictly speaking the dot is located in
In$_{0.05}$Ga$_{0.95}$As, we take for values of the effective mass, $m^*$,
and dielectric constant, $\epsilon $, those for GaAs -- namely 0.067
and 13.1 respectively. There are no fitting parameters in the equations, and
only a suitable choice for $r_s$ is required to generate the addition energy
spectra. The value of $r_s=1.5a_B^*$ used in the model calculations is
realistic as the value estimated experimentally for a circular quantum dot
is 1.3 to 1.4$a_B^*$~\cite{AUSTING4}.
$a_B^*=\hbar ^2 (4\pi \epsilon _0 \epsilon )/m^* e^2$ is an effective atomic
unit, which for GaAs is about 103~\AA. $r_s=1.5a_B^*$ in the model
presented here corresponds to an effective confinement energy, $E_Q$, for
$N=1$ of about 5.7~meV. This value is consistent with the upper limit of
$E_Q$ observed in practice (about 5~meV), and justifies the $E_Q$=3~meV
value as a reasonable average for calculating the simple single-particle
spectra shown in Fig.~\ref{fig:2} for the first ten levels.
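As a consistency check on the numbers quoted above, the effective units and the $N=1$ confinement energy follow directly from the stated GaAs parameters. The sketch below (our code, with standard values for the hartree and Bohr radius) reproduces $a_B^*\approx 103$~\AA\ and gives $\hbar\omega\approx 5.8$~meV for $r_s=1.5a_B^*$, close to the 5.7~meV quoted:

```python
import math

HARTREE_MEV = 27211.4     # 1 hartree in meV
BOHR_ANGSTROM = 0.529177  # Bohr radius in Angstrom

m_star = 0.067  # GaAs effective mass, in units of the free-electron mass
eps = 13.1      # GaAs dielectric constant

# Effective atomic units for GaAs
a_B_star = BOHR_ANGSTROM * eps / m_star   # effective Bohr radius (Angstrom)
Ha_star = HARTREE_MEV * m_star / eps**2   # effective hartree (meV)

# omega^2 = e^2/(4 pi eps0 eps m* r_s^3 sqrt(N)) reads, in effective
# atomic units, omega = r_s^(-3/2) * N^(-1/4), with r_s in units of a_B*.
def hbar_omega_meV(r_s, N):
    return Ha_star * r_s ** -1.5 * N ** -0.25

print(round(a_B_star, 1))                # -> 103.5 (Angstrom)
print(round(hbar_omega_meV(1.5, 1), 2))  # -> 5.78 (meV)
```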
We point out that the SDFT calculations described here, as well as
those performed by Hirose and Wingreen~\cite{Hirose}, are strictly
two-dimensional, so the strength of the Coulomb interactions may be
overestimated, i.e. the possibility of charge spreading out in both the
$x$-$y$--plane, and along the vertical direction parallel to the current to
minimize the Coulomb energy is neglected. Equivalently, anisotropic extension
of the electron wavefunctions along the major axis is ignored~\cite{TARUCHA2}.
In practice, screening by the metal contacts surrounding a dot is also
believed to reduce the influence of Coulomb interactions. The three-dimensional
model of Lee et al.~\cite{LEE} does incorporate self-consistent solution of
the Poisson equation into a SDFT calculation, but because they use different
expressions for the exchange-correlation energy, and considerably higher
values for $E_Q$, $E_L$, and $E_S$ (up to 20~meV), a direct comparison with
their results is not easy. Nevertheless, they find that electrons strongly
confined in the vertical direction have a very strong two-dimensional
character, and both approaches lead to the same qualitative conclusions.
Namely, the distinct shell-structure for a circle, as well as the spin-states,
at 0 T are strongly modified with deformation.
\subsection{$\Delta _2(N)$ for elliptical dots: Results from LSDA calculations}
We now make a simple comparison between the experimentally measured
traces for the change in the electro-chemical potential, $\Delta _2(N)$,
with those modelled theoretically. Fig.~\ref{fig:4} shows $\Delta _2(N)$,
derived from the self-consistent ground-state energies, $E(N,\delta )$. The
energies are obtained by self-consistently solving the KS-equations starting
from different initial guesses for the effective KS-potential for the spin-up
and spin-down particles. The initial potentials
are chosen completely arbitrarily by
just putting small random numbers onto the lattice points. The calculations
are started from four such guesses. For two of them, the spin-up and spin-down
initial guesses are shifted in value in order to search for states with
non-zero total spin for even-$N$. This is important in order to find the
ground state amongst all possible spin configurations with a high degree of
certainty~\cite{koskinen,reimann1}.
The lowest trace in Fig.~\ref{fig:4} gives $\Delta _2(N)$ for the
circular dot ($\delta $=1, i.e. zero deformation). As expected, the
circular-shaped confinement produces a spectrum with the familiar shell structure
for a two-dimensional harmonic oscillator, with shell closures at the
``magic'' numbers $2, 6, 12,$ and 20. At the average particle density
corresponding to $r_s=1.5a_B^*$, these ``magic'' numbers arise from large
gaps at the Fermi surface and paired spins in each non-degenerate level, so
the total spin is zero $(S=0)$. We note that within this mean-field model,
spin-density wave (SDW) states are not expected for these particular
spin-zero states~\cite{koskinen,reimann1}.
In Fig.~\ref{fig:3}(a), for $\delta =1$, the experimental and
theoretical traces can be directly compared. The agreement is strikingly good,
given that no parameters are fitted to reproduce the experimental data. Not
only are the principal peaks 2, 6 and 12 well reproduced, but the
relatively large peaks at 4, 9, and 16 for the high-spin states at half-shell
filling are also clear~\cite{koskinen,reimann1}. For $N=4$, Hund's first
rule correctly predicts the calculated $S=1$ spin-triplet state in
which spins are aligned in the two highest partially occupied degenerate
single-particle levels ($n$,$l$)=(0,1) and (0,-1), rather than the $S=0$
spin-singlet state in which the paired-spin electrons reside in either
the (0,1) or (0,-1) levels.
Deforming the confinement slightly by changing the deformation
parameter to $\delta =1.1$ (see trace (b) in Fig.~\ref{fig:4}), the
calculation still predicts fairly clear shell closures at $N=2,6$ and 12.
These numbers can still be considered as ``magic'', but the actual values of
$\Delta _2(2,6,12)$ are noticeably suppressed, because degeneracies have
been lifted~\cite{TARUCHA2}. The $N=20$ peak has become very weak.
Also values of $\Delta _2(N)$ neighboring $N= $2, 6, and 12 start to become
comparable to the values for $N=2,6,$ and 12, i.e. there is less contrast.
Overall, the shell structure is much less pronounced compared to that for
the circle. Already it is clear that even a very small deviation from perfect
circular symmetry can have a very noticeable effect, even when single-particle
level degeneracies are lifted by just a small amount.
As the deformation increases further, the pronounced peaks for
$N=$2, 6, 12, and 20 evident for the disk-shaped dot are further suppressed.
This is a simple consequence of the removal of the level ``bunching'' with
deformation. Even for the cases where ``accidental'' subshell closures occur
at certain ``magic'' deformations (e.g. $\delta =1.5$ and 2 as seen in
Fig.~\ref{fig:2}), the reduced separation between degenerate single-particle
energy levels ($E_L$) would make any shell structure less clear to observe,
and the sequence of ``magic'' numbers would be very different (e.g. for
$\delta =2$ it would be 2, 4, 8, 12, 18, ...) compared to those for
$\delta =1$. From Fig.~\ref{fig:4} we can see essentially that for
$\delta \ge 1.2$, the circular shell structure has been completely
eliminated. Traces (a) to (f) thus illustrate the dramatic destruction of the
familiar shell structure for a circular dot with deformation.
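The altered ``magic'' sequences at rational frequency ratios can be verified with a short enumeration of the non-interacting levels (an illustrative sketch of ours, not the SDFT calculation):

```python
import math

def shell_fillings(delta, n_shells=5, n_max=10, tol=1e-9):
    """Electron counts at which gaps open for the anisotropic oscillator
    omega_x = sqrt(delta), omega_y = 1/sqrt(delta) (area conserved, omega = 1)."""
    wx, wy = math.sqrt(delta), 1.0 / math.sqrt(delta)
    levels = sorted(wx * (nx + 0.5) + wy * (ny + 0.5)
                    for nx in range(n_max) for ny in range(n_max))
    magic, filled, i = [], 0, 0
    while i < len(levels) and len(magic) < n_shells:
        j = i
        while j < len(levels) and levels[j] - levels[i] < tol:
            j += 1                      # group degenerate orbitals
        filled += 2 * (j - i)           # two spin states per orbital
        magic.append(filled)
        i = j
    return magic

print(shell_fillings(1))  # -> [2, 6, 12, 20, 30]
print(shell_fillings(2))  # -> [2, 4, 8, 12, 18]
```

The $\delta=2$ output reproduces the sequence 2, 4, 8, 12, 18 quoted above, with the reduced gaps between closures making any remaining shell structure harder to observe.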
Also apparent is that a systematic one-to-one correspondence of
$\Delta _2(N)$ between traces (b) to (d) in Fig.~\ref{fig:3} and traces
(b) to (f) in Fig.~\ref{fig:4} is impossible to make. Although the experimental
data for mesa X partly resembles the theoretical data for $\delta $= 1.1
to 1.3, the data for mesas Y and Z do not seem to resemble that for
$\delta >1.3$, except perhaps for a weak tendency to oscillate between
even-$N$ and odd-$N$. We have already stated many reasons why, in comparison
to a circular dot, a good correspondence between experiment and theory for
the elliptical dots is less likely. We stress that ultimately, except for
circle W, $\delta $ is not known, and equating $\delta $ with
$\beta $ may not be reliable. To progress we must look for other clues.
Theoretically, Fig.~\ref{fig:4} shows that there are transitions in
the ground state spin-configurations with deformation~\cite{reimann1}. The
total spin, $S$, is identified by different symbols in the figure. These
transitions are particularly numerous for, but are not restricted to,
the even-$N$ systems, and are clearly very sensitive to the actual value of
the deformation. For example, in the case of $N=6$ electrons, the total spin
is predicted to change from $S=0$ (i.e. a paramagnetic state) at $\delta =1$,
through an $S=0$ SDW state, to $S=1$ at $\delta =1.5$ --- an indication of
``piezo-magnetic'' behavior~\cite{reimann1,mss8},~i.e.~changes of the
dot magnetization with deformation. Although experimentally we are not in a
position to differentiate between an $S=0$ ``normal'' state and an $S=0$
SDW state showing a spatial variation in the polarization as a consequence
of broken spin symmetry in the internal coordinates
~\cite{ringschuck} (indeed the interpretation of a SDW is still debated in
the literature~\cite{Hirose}), the SDFT calculations described here predict
that the latter becomes more prevalent for even-$N$ systems as $\delta $
increases, particularly for small average particle densities
~\cite{koskinen,reimann1}.
Another interesting case, and in practice the simplest one we can
focus on, is what happens to the $N=4$ ground state. The inset in
Fig.~\ref{fig:4} shows $\Delta _2(N=4)$ versus deformation up to
$\delta =1.5$. Starting with the circular dot, Hund's first rule gives a
total spin of $S=1$ for the triplet state favoring spin alignment of the two
electrons in the second shell rather than a total spin of $S=0$ for the
singlet state in which the spins are paired. As the deformation is initially
increased, the energy separation between the two levels ($n_L$,$n_S$)= (1,0)
and (0,1) -- the two originally degenerate levels ($n$,$l$)= (0,1) and (0,-1)
in the second shell of the circular dot -- increases (see (a) and (b) in
Fig.~\ref{fig:2}), and so the spin-triplet state becomes progressively less
favorable. $\Delta _2(4)$ continuously decreases with $\delta $, and at a value
between 1.2 and 1.3, a spin-zero state (actually predicted by the SDFT
described here to be a SDW) appears, i.e. a spin triplet-singlet transition
is expected. For higher values of $\delta $ beyond this transition,
$\Delta _2(N=4)$ starts to increase.
Other recent calculations employing numerical diagonalization
for elliptical dots moderately deformed up to $ \delta =2$ have also predicted
that $\Delta _2(N)$ is sensitive to deformation, and that the spin-states can
be modified~\cite{EZAKI,EZAKI2}. Those calculations, for $N$ up to 10, and
performed at 0 T with $E_Q$=3 meV, also reveal a spin triplet-singlet
transition at $\delta \approx 1.2$ for $N=4$, and, more generally, a
consecutive filling of states by spin-up and spin-down electrons at higher
deformation is favored.
Inspection of Fig.~\ref{fig:3} gives values of $\Delta _2(N=4)$ for
mesas W, X, Y, and Z respectively of 3.1, 2.7, 3.1, and 2.5 meV. Whilst it is
reassuring that these energies lie in the range predicted by SDFT, it is
tempting to attribute, for a $\delta $ value equated to the $\beta $
value, the apparently anomalously low value for mesa Z to sample specific
fluctuations, and say that the trend for mesas W, X, and Y is consistent
with that predicted in Fig.~\ref{fig:4}~({\it inset}),~i.e.~$N=4$ is a
spin-triplet for W, and a spin-singlet for X, Y, and Z. However, as we do not
really know $\delta$ for the elliptical dots, we can not even be confident
that the actual $\delta$ values lie in the $\delta $=1.0 to 1.5 range, i.e.
the values might be higher, or even that the order X, Y, and Z for increasing
deformation as suggested by the $\beta $ values is correct. Fortunately,
we can apply a $B$-field, and as we will shortly show this goes a long way
to resolving these difficult issues.
In case the actual $\delta$ values for the elliptical dots exceed 1.5,
traces (g) and (h) in Fig.~\ref{fig:4} respectively show $\Delta _2(N)$ for
the higher deformation parameters $\delta =2$ and $\delta =3.2$. We have no
reason to believe that $\delta $ experimentally will be exactly 2 or exactly
3.2, but the numbers are representative of the two situations where,
respectively, many or no single-particle levels are degenerate at 0 T for
non-interacting electrons, as illustrated by the spectra in Fig.~\ref{fig:2}.
As expected, traces (g) and (h) show no circular-like shell structure, and no
particularly large values of $\Delta _2(N)$. Indeed, apart from the
``classical'' background trend, i.e. $\Delta _2(N)$ increasing as $N$
decreases, there is little one can say about the traces except that for $N>5$
there is a tendency for a weak even-odd oscillation in $\Delta _2(N)$, and
this oscillation is perhaps clearer for larger $\delta $. The model here
actually predicts small peaks for odd-$N$, and small valleys for even-$N$.
For odd-$N$ the spin-state is nearly always S=1/2, and for even-$N$ the
spin-state is usually $S=0$ (SDW). At least for $\delta =2$, where
in the single-particle picture there can be accidental degeneracies
at 0 T (see Fig.~\ref{fig:2} trace (c)), one might naively expect
some non-zero even-$N$ spin-states, but it is possible that in the model
calculations, for the parameters given, the interactions modify the spectrum
so dramatically that expected degeneracies are lifted reducing the visibility
of any potential shell structure, e.g.~$N=6$ and 10 are predicted here to be
$S=0$ (SDW) rather than $S=1$ as might be expected from Hund's first rule.
On the other hand, for $\delta =3.2$, where in the single-particle picture
there are no accidental degeneracies at 0 T (see Fig.~\ref{fig:2} trace (d)),
perhaps surprisingly some non-zero even-$N$ spin-states, for example for
$N$=12 and 16, are predicted -- this too may be due to interactions.
For Y and Z, the $\Delta _2(N)$ traces in Fig.~\ref{fig:3}
seem to show a weak tendency to oscillate between a slightly larger
even-$N$ value, and a slightly smaller odd-$N$ value, and this oscillation
seems clearer for Y than for Z. For the moment we do not try to account
for the clarity of this oscillation in dots Y and Z, but try to explain the
origin of the oscillation, although we are now being forced to entertain the
idea that $\delta $ for Y and Z may be much larger than 1.5. Starting from
the over simple single-particle picture with a fixed confinement energy, and
then including a constant interaction which is the same for even-$N$ and
odd-$N$, a larger even-$N$ value is expected because only
$\Delta _2($even-$N)$ can contain a finite contribution due to the
single-particle energy level spacing. A slightly more advanced model,
which is more realistic in principle, would be to have a constant
interaction for odd-$N$ (next electron added to an $S=1/2$ state already
containing one electron) that is stronger than the constant interaction
for even-$N$ (next electron added to an empty state).
If the former is larger than
the latter plus the single-particle spacing (more likely in practice as
$N$ increases), a weak tendency to oscillate between smaller even-$N$ and
larger odd-$N$ could occur. This pattern is what the SDFT calculations
predict in Fig.~\ref{fig:4} for $\delta =2$ and $\delta =3.2$. The fact that
$\Delta _2(N)$ for Y and Z is often a little larger for even-$N$ than
odd-$N$ should not be taken to mean that the constant interaction model is
more accurate. Rather, the Coulomb interactions may not be so strong in
practice, due to screening by the leads for example, as those in our
model, a model that also does not include the self-consistent calculation
of the electrostatic confining potential. Indeed, in the SDFT
calculations of Lee et al.~\cite{LEE}, the electrostatic confining potential
is much stronger (e.g. $E_S$=20 meV, $E_L$=10 meV), and they find that
$\Delta _2(N)$ is generally a little larger for even-$N$ than for
odd-$N$. Finally, we note that eventually, for a much stronger deformation
(e.g. $\delta $ exceeding 10), the addition energy spectrum would become
smoother as it tends towards that for a quasi-one-dimensional quantum
wire~\cite{reimann1,reimann2}.
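The even-odd argument above can be made concrete in a toy constant-interaction sketch; all numerical values below are illustrative placeholders of ours, not fitted parameters:

```python
def delta2(N, U_empty, U_paired, spacing):
    """Toy addition-energy difference. For even N the (N+1)-th electron
    enters an empty orbital (interaction U_empty plus the level spacing);
    for odd N it pairs up in a half-filled orbital (interaction U_paired)."""
    return U_empty + spacing if N % 2 == 0 else U_paired

# Equal interactions: even-N values exceed odd-N by the level spacing.
print([delta2(N, 2.0, 2.0, 0.5) for N in range(2, 8)])
# -> [2.5, 2.0, 2.5, 2.0, 2.5, 2.0]
# Pairing interaction stronger than U_empty + spacing: the parity flips.
print([delta2(N, 2.0, 2.7, 0.5) for N in range(2, 8)])
# -> [2.5, 2.7, 2.5, 2.7, 2.5, 2.7]
```

The second case corresponds to the weak odd-over-even oscillation predicted by the SDFT calculations at large $\delta$, where the level spacing is small compared to the interactions.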
\section{Magnetic Field Dependence}
Application of a magnetic field is a powerful tool with which to
identify the quantum numbers of states in our vertical quantum dots
~\cite{TARUCHA2,TARUCHA,KOUWENHOVEN}. Fig.~\ref{fig:2} instructively shows
the expected evolution of the first ten single-particle energy levels
with $B$-field up to 6 T for a circular dot ($\delta $=1), and for
elliptical dots with $\delta=1.5, 2,$ and 3.2. The energy level spectra are
calculated according to the simple single-particle model employed by
Madhav and Chakraborty\cite{MADHAV} in which Coulomb interactions are
neglected, and the confining potential is assumed to be perfectly parabolic.
The spectrum for the circular dot is the familiar Darwin-Fock
spectrum for a circular two-dimensional harmonic confining potential. The
confinement energy for the circular dot, $E_Q$, is taken to be 3 meV,
in practice a reasonable average value in the few-electron limit, and is
assumed to be independent of $N$. The confinement energies for the
elliptical dots are simply derived from the relation $E_L E_S=E_Q^2$.
For the case of the circle and the $\delta=3.2$ ellipse, quantum numbers
($n$,$l$) and ($n_L$,$n_S$) respectively for some of the states we discuss
are indicated. Each single-particle energy level can accommodate a spin-up
and spin-down electron, so current peaks should normally come in pairs in
a constant interaction model neglecting exchange~\cite{MADHAV}. ``Wiggles''
in the position of pairs of current peaks are expected because the
$B$-field induces crossings between single-particle states
~\cite{TARUCHA2,TARUCHA,MADHAV}. The lowest-energy ``wiggle''
originates from the crossing marked by a black triangle in each of the four
spectra. $\Delta _2$(even-$N$) is expected to be strongly dependent on
$B$-field as it can contain contributions from single-particle energy level
spacings, whereas $\Delta _2$(odd-$N$) is essentially independent of
$B$-field at weak-field, and is determined only by the effect of Coulomb
repulsion. Any detailed discussion on the actual $B$-field dependence of the
current peaks requires the inclusion of Coulomb interactions
~\cite{KOUWENHOVEN}. The four calculated spectra nevertheless clearly serve
to demonstrate three simple points: $i)$ the $B$-field lifts all degeneracies
present at 0 T at the ``magic'' deformations, e.g. $\delta =$1, 1.5, 2, ...
($\delta$=3.2 is not a ``magic'' deformation); $ii)$ a $B$-field can always
induce degeneracies at finite field when single-particle levels cross,
provided the confinement potential is perfectly parabolic; and $iii)$ as
$\delta $ increases, the single-particle energy level spacing generally
decreases ($\le E_L$).
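For the circular dot ($\delta=1$) the underlying formula is the standard Fock--Darwin result, $E(n,l)=(2n+|l|+1)\hbar\Omega-\tfrac{1}{2}l\hbar\omega_c$ with $\Omega=\sqrt{\omega_0^2+\omega_c^2/4}$, which is not written out in the text. With $\hbar\omega_0=E_Q=3$~meV the sketch below (our code and sign convention) locates the $(0,-1)$/$(0,2)$ crossing near 1.2~T, reasonably close to the $\approx$1.4~T at which the experimental peaks turn down; the residual difference is unsurprising since Coulomb interactions are neglected here:

```python
import math

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg
E_CH = 1.602176634e-19   # C
MEV = 1e-3 * E_CH        # J per meV

m_star = 0.067 * M_E     # GaAs effective mass
hw0 = 3.0                # meV, confinement energy E_Q of the circular dot

def fock_darwin(n, l, B):
    """Fock-Darwin level (meV) of a circular parabolic dot at field B (T)."""
    hwc = HBAR * E_CH * B / m_star / MEV          # hbar*omega_c in meV
    hW = math.sqrt(hw0 ** 2 + 0.25 * hwc ** 2)    # hbar*Omega in meV
    return (2 * n + abs(l) + 1) * hW - 0.5 * l * hwc

# Scan for the lowest crossing, between (n,l) = (0,-1) and (0,2)
B = 0.0
while fock_darwin(0, 2, B) > fock_darwin(0, -1, B):
    B += 0.001
print(round(B, 2))  # -> 1.23 (T)
```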
Fig.~\ref{fig:5} shows the $B$-field dependence, for a weak field
applied parallel to the current, of the Coulomb oscillation peak positions for
the circular mesa W, (a), and the rectangular mesas X, Y and Z, (b) to (d).
The data consists of current vs. $V_g$ traces taken at a very small bias
($\ll 1$~mV) at different $B$-fields at or below 0.3~K.
For circle W, only the third, fourth, fifth and sixth current peaks
(belonging to the second shell at 0 T) are shown. The pairing of the third
peak with the fifth peak, and the fourth peak with the sixth peak from 0 T
to 0.4 T, as opposed to the more usual pairing of the third peak with the
fourth, and the fifth peak with the sixth (due to consecutive filling of
electrons into spin-degenerate single-particle states) for B$>$0.4~T, is a
consequence of Hund's first rule: the $N=4$ state is a spin-triplet so two
parallel-spin electrons fill the two different but originally degenerate
states ($n$,$l$)=(0,1) and (0,-1) in the half-filled second shell
~\cite{TARUCHA2,TARUCHA}. For B$>$0.4~T, the fifth and sixth peaks, as a pair,
first move up, as indicated by the thick arrow, and then start to move down
at about 1.4 T due to the crossing of the single-particle states ($n$,$l$)=
(0,-1) and (0,2). This lowest single-particle level crossing, which is also
clear in Fig.~\ref{fig:2}(a), is marked by a black triangle. The spins of
the added electrons are also shown pictorially at 0 T and 2 T.
To explain why Hund's first rule is obeyed in a simple way,
we can introduce an energy, $E_{EX}$, to represent the reduction in energy
due to exchange between electrons in the half-filled second shell, and this is
estimated to be about 0.7 meV for circle W~\cite{TARUCHA2,TARUCHA}. The $N=4$
triplet state is thus lower in energy than the $N=4$ spin-singlet state
by $E_{EX}$, and as a consequence $\Delta _2(3),\Delta _2(5)<\Delta _2(4)$ by
about 2$E_{EX}$. This exchange-related effect persists in a weak B-field as
long as the splitting between states $(0,1)$ and $(0,-1)$ is less than
$E_{EX}$. At 1.4 T this splitting exceeds $E_{EX}$, and the ground state
becomes a spin-singlet, i.e. there is a B-field induced triplet-singlet
transition.
For rectangles X, Y, and Z, the first ten current peaks are shown
in Fig.~\ref{fig:5},(b) to (d). Peaks are paired, and there are no obvious
deviations close to 0 T for $N=4$ which can be attributed to exchange effects,
i.e. Hund's first rule. Quantum numbers ($n_L$,$n_S$) of the single-particle
states are assigned, and the first up-moving pair of peaks is marked by a
thick arrow. With increasing deformation, the first up-moving pair of
peaks, and the lowest energy single-particle level crossing, identified by a
black triangle in each of the Fig.~\ref{fig:2} spectra, are simply expected to
move systematically to higher $N$ (or equivalently to higher energy)
~\cite{MADHAV}.
For the elliptical dots, normal peak pairing, even from 0 T, occurs
so Hund's first rule is not obeyed. This suggests that the spin-state
for $N=4$ is a singlet. The exchange effect is maximal for a circular dot at
$N=4$ because the ($n$,$l$)=(0,1) and (0,-1) states are degenerate, but with
deformation these states become the ($n_L$,$n_S$)=(1,0) and (0,1) states in
an elliptical dot which are split at 0 T. This energy splitting, $\gamma $,
increases with $\delta$. If $\gamma < E_{EX}$ at 0 T, exchange can still
operate to lower the energy, and thus the $N=4$ ground state remains a
spin-triplet. On the other hand, if $\gamma > E_{EX}$ at 0 T, the energy gain
due to exchange is not sufficiently large to compensate for the splitting,
so normal pairing occurs. Thus, as $\delta$ increases, we can expect a
triplet-singlet transition at some critical deformation~\cite{SASAKI}.
Note that $E_{EX}$ itself decreases with increasing deformation, as it has
its maximum value only when the orbitals involved
have the same symmetry.
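Taking the splitting as $\gamma(\delta)=\hbar\omega(\sqrt{\delta}-1/\sqrt{\delta})$ for the area-conserving confinement, together with the quoted values $E_Q=3$~meV and $E_{EX}\approx 0.7$~meV, a back-of-the-envelope estimate (ours, neglecting the $\delta$-dependence of $E_{EX}$) places the critical deformation where $\gamma=E_{EX}$ at $\delta_c\approx 1.26$, consistent with the transition between $\delta=1.2$ and 1.3 found by SDFT and exact diagonalization:

```python
import math

hw = 3.0    # meV, average confinement energy E_Q used for Fig. 2
E_ex = 0.7  # meV, exchange energy estimated for circle W

def gamma(delta):
    """0 T splitting (meV) of the (1,0) and (0,1) levels, degenerate at
    delta = 1: gamma = hbar*omega_x - hbar*omega_y = hw*(sqrt(d) - 1/sqrt(d))."""
    return hw * (math.sqrt(delta) - 1.0 / math.sqrt(delta))

# N = 4 stays a spin-triplet while gamma(delta) < E_ex. Solving
# sqrt(d) - 1/sqrt(d) = E_ex/hw in closed form gives the critical value:
r = E_ex / hw
delta_c = ((r + math.sqrt(r * r + 4.0)) / 2.0) ** 2
print(round(delta_c, 2))  # -> 1.26
```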
This transition is clear in the inset of Fig.~\ref{fig:4} according to SDFT,
and has also been predicted by exact numerical diagonalization
~\cite{EZAKI,EZAKI2}. The tell-tale pattern in the trend of $\Delta _2(N=4)$
at 0 T with deformation should be an initial decrease while the state
remains a spin-triplet, a turning point at the transition, and a rise
thereafter when the state is a spin-singlet. As noted before, it is hard to
judge from the absolute values of $\Delta_2(4)$ at 0 T alone shown in
Fig.~\ref{fig:3} whether the $N=4$ state is a triplet or singlet.
$\Delta_2(4)$ can be relatively large either side of the turning point if
either $E_{EX}$ or $\gamma $ is large, i.e. a large $\Delta_2(4)$ can mean
Hund's first rule is operating for nearly degenerate states, or there is a
large separation between non-degenerate states. This potential ambiguity is
apparent when we see that $\Delta_2(4)$ for circle W and ellipse Y are
essentially equal, so it is vitally important to examine carefully the
$B$-field dependence. The absence of deviations to the normal peak pairing at
$N=4$ in Fig.~\ref{fig:5}, traces (b) to (d), nevertheless does apparently
confirm that $\delta $ is indeed greater than 1.2--1.3, which is in line with the
$\beta $ values for mesas X, Y, and Z. For completeness, we note that
normally we probe the spin-states in our high symmetry dot structures via
the orbital effect with the $B$-field parallel to the current. The spin-states
in the elliptical dot X have also been confirmed directly by measuring the
Zeeman effect alone by applying a $B$-field perpendicular to the current, and
the results are again consistent with a spin-singlet interpretation for $N=4$
~\cite{SASAKI}.
The next most striking feature about traces (b) to (d) in
Fig.~\ref{fig:5} is the position of the first up-moving pair of peaks. For
mesas X, Y, and Z respectively, it is the 3rd, 5th, and 4th pair of peaks.
As revealed by the sequence of spectra in Fig.~\ref{fig:2}, in a simple
single-particle picture, the first up-moving state is ($n_L$,$n_S$)=(0,1),
which is actually the lowest energy state of the
second Landau-level~\cite{MADHAV}.
Inspection of these calculated spectra shows that this state is, in the
weak-field limit, from the bottom, the 3rd, 4th, and 5th state respectively for
$1\le\delta <2$, $2\le \delta <3$, $3\le\delta<4$. Thus, starting from no
deformation, the first up-moving pair of peaks should go from the 3rd to
4th, 4th to the 5th, ... at certain ``magic'' deformations as $\delta $ is
increased. Remembering that Coulomb effects are neglected in this simple
picture, and that in practice $\delta $ is expected to vary with $N$,
nonetheless, with these simple arguments it looks as if $1<\delta <2$ for X,
$3<\delta <4$ for Y, $2<\delta <3$ for Z. If we believe this, then even
though ellipses X, Y, and Z are all deformed beyond the triplet-singlet
transition, we are forced to conclude the following: $i)$ $\delta $ can be
much higher than that suggested by the $\beta $ values (especially for Y and
Z); and $ii)$ the ordering given by increasing $\beta $ values may not
reflect the true ordering in $\delta $, i.e. the deformation in Y seems to be
stronger than in Z, so the true sequence may be W--X--Z--Y for the four mesas
considered. Given our earlier comments, the former is not so unexpected
since we have no independent way of measuring $\delta $, but the latter is
perhaps more surprising.
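The level counting behind the 3rd/4th/5th assignment of the first up-moving state can be checked directly in the non-interacting picture (our sketch; exactly ``magic'' $\delta$ values are avoided since there the extra degeneracies make the ordering ambiguous):

```python
import math

def rank_of_first_upmover(delta, n_max=12):
    """Position (1 = lowest) of the (n_L, n_S) = (0, 1) level in the 0 T
    single-particle spectrum of the anisotropic oscillator (omega = 1)."""
    wS, wL = math.sqrt(delta), 1.0 / math.sqrt(delta)

    def energy(nL, nS):
        return wL * (nL + 0.5) + wS * (nS + 0.5)

    target = energy(0, 1)
    below = sum(1 for nL in range(n_max) for nS in range(n_max)
                if energy(nL, nS) < target)
    return below + 1

print([rank_of_first_upmover(d) for d in (1.5, 2.5, 3.2)])  # -> [3, 4, 5]
```

The three deformations sampled fall in the ranges $1<\delta<2$, $2<\delta<3$, and $3<\delta<4$, reproducing the assignments used for mesas X, Z, and Y respectively.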
If the true ordering of the ellipses is X, Z, and Y, even though the
reason for the deformation in Y being stronger than in Z is unclear to us,
at least other attributes of Y and Z, and trends in Fig.~\ref{fig:3} and
Fig.~\ref{fig:5} are consistent with this interpretation. For instance,
$\Delta _2(N=4)$ for a W-X-Z-Y ordering respectively of 3.1, 2.7, 2.5, and
3.1 meV is more in line with the predicted trend shown in the inset in
Fig.~\ref{fig:4}, although the value for Z still seems a little low. This
reordering does not contradict our earlier conclusion that, in the absence
of deviations to normal pairing, $N=4$ is a spin-singlet state for all
three ellipses. Thus $\Delta_2(4)$ increases after the triplet-singlet
transition, because with deformation the degeneracy of single-particle states
is strongly lifted. The reversed ordering of
ellipses Y and Z, as well as higher $\delta $ values than suggested by the
$\beta $ values, also fits with the observations made earlier about
traces (c) and (d) in Fig.~\ref{fig:3}, and traces (g) and (h) in
Fig.~\ref{fig:4}. Both mesas show a tendency for $\Delta _2(N)$ to oscillate
between slightly higher and slightly lower values respectively for even-$N$
and odd-$N$ and this seems more pronounced for Y than Z.
\section{Conclusions}
We have experimentally and theoretically investigated the effect of
ellipsoidal deformation on the shell structure, addition energies, and
spin-states in vertical quantum dot atoms on going from circular- to
rectangular-shaped mesas. The familiar and distinctive shell structure
as determined from the addition energy spectra at 0 T for the circular dot is
absent in the elliptical dots, and even small deviations breaking circular
symmetry have a dramatic effect. Measurements with a magnetic field applied
parallel to current confirm that the $N=4$ spin state at 0 T has undergone a
transition due to the moderate deformation: for the circular dot it is a
spin-triplet in accordance with Hund's first rule when the second shell is
half-filled, and for the elliptical dots it is a spin-singlet. These
observations are in agreement with recent theory, as is well demonstrated here
by the application of spin-density functional theory at 0 T with a wide
range of deformation parameters. The $B$-field dependence strongly suggests
that the anisotropy of an elliptical dot in practice can be significantly
higher than that given by simply considering the geometry of the mesa in
which the dot is situated. In the future it will be interesting to experiment
with even more strongly- and extremely-deformed dots (to clarify for
instance the existence of $S=0$ SDW states), quasi-one-dimensional wire-like
dots~\cite{reimann1,reimann2}, and possibly other exotically-shaped dots,
for example ring-shaped dots~\cite{reimann2}, and triangular-shaped
dots~\cite{EZAKI,EZAKI2}. Finally, with these goals in mind, ultimately
better control and in-situ manipulation of the lateral potential geometry of
a quantum dot is highly desirable, and this may be achieved by fully
exploiting a multiple-gated vertical single electron transistor we have
recently developed~\cite{AUSTING3,AUSTING5}.
\acknowledgments
We would like to acknowledge the considerable assistance of Takashi
Honda in the fabrication of the devices, and useful discussions with
Yasuhiro Tokura, and Hiroyuki Tamura. This work is partly supported by
grant 08247101 from the Ministry of Education, Science, Culture and Sports,
Japan, the NEDO joint research program (NTDP-98), the Academy of Finland,
and the TMR program of the European Community under contract
ERBFMBICT972405. Our understanding of the circular dots has benefitted from
a long term collaboration with Leo Kouwenhoven at the Delft University of
Technology and his coworkers.
\vskip-12pt
\newcommand{\mysection}[1]{\section{#1}\setcounter{equation}{0}}
\title{\bf Boundary singularities of $N$-harmonic functions
\footnote{To appear in {\it Communications in Partial Differential Equations}}
\author{{\bf Rouba Borghol}, {\bf Laurent V\'eron}\\
{\small Department of Mathematics,}\\
{\small University of Tours, FRANCE}}
\date{}
\begin{document}
\maketitle
\newcommand{\txt}[1]{\;\text{ #1 }\;
\newcommand{\textbf}%% Bold face. Usage: \tbf{...}{\textbf
\newcommand{\tit}{\textit
\newcommand{\tsc}{\textsc
\newcommand{\textrm}{\textrm}
\newcommand{\mbf}{\mathbf
\newcommand{\mrm}{\mathrm
\newcommand{\bsym}{\boldsymbol
\newcommand{\scs}{\scriptstyle
\newcommand{\sss}{\scriptscriptstyle
\newcommand{\textstyle}{\textstyle}
\newcommand{\displaystyle}{\displaystyle}
\newcommand{\footnotesize}{\footnotesize}
\newcommand{\scriptsize}{\scriptsize}
\newcommand{\be}{
\begin{equation}
}
\newcommand{\bel}[1]{
\begin{equation}
\label{#1}}
\newcommand{\ee}{
\end{equation}
\newcommand{\eqnl}[2]{
\begin{equation}
\label{#1}{#2}
\end{equation}
\newtheorem{subn}{Proposition}
\renewcommand{\thesubn}{}
\newcommand{\bsn}[1]{\defProposition{#1}
\begin{subn}}
\newcommand{\esn}{
\end{subn}}
\newtheorem{sub}{Proposition}[section]
\newcommand{\dn}[1]{\defProposition{#1}}
\newcommand{\bs}{
\begin{sub}}
\newcommand{\es}{
\end{sub}}
\newcommand{\bsl}[1]{
\begin{sub}\label{#1}}
\newcommand{\bth}[1]{\defProposition{Theorem}
\begin{sub}\label{t:#1}}
\newcommand{\blemma}[1]{\defProposition{Lemma}
\begin{sub}\label{l:#1}}
\newcommand{\bcor}[1]{\defProposition{Corollary}
\begin{sub}\label{c:#1}}
\newcommand{\bdef}[1]{\defProposition{Definition}
\begin{sub}\label{d:#1}}
\newcommand{\bprop}[1]{\defProposition{Proposition}
\begin{sub}\label{p:#1}}
\newcommand{\eqref}{\eqref}
\newcommand{\rth}[1]{Theorem~\ref{t:#1}}
\newcommand{\rlemma}[1]{Lemma~\ref{l:#1}}
\newcommand{\rcor}[1]{Corollary~\ref{c:#1}}
\newcommand{\rdef}[1]{Definition~\ref{d:#1}}
\newcommand{\rprop}[1]{Proposition~\ref{p:#1}}
\newcommand{\BA}{
\begin{array}}
\newcommand{\EA}{
\end{array}}
\newcommand{\renewcommand{\arraystretch}{1.2}{\renewcommand{\arraystretch}{1.2}
\setlength{\arraycolsep}{2pt}
\begin{array}}
\newcommand{\BAV}[2]{\renewcommand{\arraystretch}{#1}
\setlength{\arraycolsep}{#2}
\begin{array}}
\newcommand{\BSA}{
\begin{subarray}}
\newcommand{\ESA}{
\end{subarray}}
\newcommand{\BAL}{
\begin{aligned}}
\newcommand{\EAL}{
\end{aligned}}
\newcommand{\BALG}{
\begin{alignat}}
\newcommand{\EALG}{
\end{alignat}
\newcommand{\BALGN}{
\begin{alignat*}}
\newcommand{\EALGN}{
\end{alignat*}
\newcommand{\note}[1]{\textit{#1.}\hspace{2mm}}
\newcommand{\note{Proof}}{\note{Proof}}
\newcommand{\hspace{10mm}\hfill $\square$}{\hspace{10mm}\hfill $\square$}
\newcommand{\qed}{\\
${}$ \hfill $\square$}
\newcommand{\note{Remark}}{\note{Remark}}
\newcommand{\modin}{$\,$\\
[-4mm] \indent}
\newcommand{\quad \forall}{\quad \forall}
\newcommand{\set}[1]{\{#1\}}
\newcommand{\setdef}[2]{\{\,#1:\,#2\,\}}
\newcommand{\setm}[2]{\{\,#1\mid #2\,\}}
\newcommand{\longrightarrow}{\longrightarrow}
\newcommand{\longleftarrow}{\longleftarrow}
\newcommand{\longleftrightarrow}{\longleftrightarrow}
\newcommand{\Longrightarrow}{\Longrightarrow}
\newcommand{\Longleftarrow}{\Longleftarrow}
\newcommand{\Longleftrightarrow}{\Longleftrightarrow}
\newcommand{\rightharpoonup}{\rightharpoonup}
\newcommand{
\paran}[1]{\left (#1 \right )
\newcommand{\sqbr}[1]{\left [#1 \right ]
\newcommand{\curlybr}[1]{\left \{#1 \right \}
\newcommand{\abs}[1]{\left |#1\right |
\newcommand{\norm}[1]{\left \|#1\right \|
\newcommand{
\paranb}[1]{\big (#1 \big )
\newcommand{\lsqbrb}[1]{\big [#1 \big ]
\newcommand{\lcurlybrb}[1]{\big \{#1 \big \}
\newcommand{\absb}[1]{\big |#1\big |
\newcommand{\normb}[1]{\big \|#1\big \|
\newcommand{
\paranB}[1]{\Big (#1 \Big )
\newcommand{\absB}[1]{\Big |#1\Big |
\newcommand{\normB}[1]{\Big \|#1\Big \|
\newcommand{\rule[-.5mm]{.3mm}{3mm}}{\rule[-.5mm]{.3mm}{3mm}}
\newcommand{\thknorm}[1]{\rule[-.5mm]{.3mm}{3mm} #1 \rule[-.5mm]{.3mm}{3mm}\,}
\newcommand{\trinorm}[1]{|\!|\!| #1 |\!|\!|\,}
\newcommand{\bang}[1]{\langle #1 \rangle
\def\angb<#1>{\langle #1 \rangle
\newcommand{\vstrut}[1]{\rule{0mm}{#1}}
\newcommand{\rec}[1]{\frac{1}{#1}}
\newcommand{\opname}[1]{\mbox{\rm #1}\,}
\newcommand{\opname{supp}}{\opname{supp}}
\newcommand{\opname{dist}}{\opname{dist}}
\newcommand{\myfrac}[2]{{\displaystyle \frac{#1}{#2} }}
\newcommand{\myint}[2]{{\displaystyle \int_{#1}^{#2}}}
\newcommand{\mysum}[2]{{\displaystyle \sum_{#1}^{#2}}}
\newcommand {\dint}{{\displaystyle \int\!\!\int}
\newcommand{\quad}{\quad}
\newcommand{\qquad}{\qquad}
\newcommand{\hsp}[1]{\hspace{#1mm}}
\newcommand{\vsp}[1]{\vspace{#1mm}}
\newcommand{\infty}{\infty}
\newcommand{\prt}{
\partial}
\newcommand{\setminus}{\setminus}
\newcommand{\emptyset}{\emptyset}
\newcommand{\times}{\times}
\newcommand{^\prime}{^\prime}
\newcommand{^{\prime\prime}}{^{\prime\prime}}
\newcommand{\tilde}{\tilde}
\newcommand{\subset}{\subset}
\newcommand{\subseteq}{\subseteq}
\newcommand{\noindent}{\noindent}
\newcommand{\indent}{\indent}
\newcommand{\overline}{\overline}
\newcommand{\underline}{\underline}
\newcommand{\not\in}{\not\in}
\newcommand{\pfrac}[2]{\genfrac{(}{)}{}{}{#1}{#2}
\def\alpha} \def\gb{\beta} \def\gg{\gamma{\alpha} \def\gb{\beta} \def\gg{\gamma}
\def\chi} \def\gd{\delta} \def\ge{\epsilon{\chi} \def\gd{\delta} \def\ge{\epsilon}
\def\theta} \def\vge{\varepsilon{\theta} \def\vge{\varepsilon}
\def\phi} \def\vgf{\varphi} \def\gh{\eta{\phi} \def\vgf{\varphi} \def\gh{\eta}
\def\iota} \def\gk{\kappa} \def\gl{\lambda{\iota} \def\gk{\kappa} \def\gl{\lambda}
\def\mu} \def\gn{\nu} \def\gp{\pi{\mu} \def\gn{\nu} \def\gp{\pi}
\def\varpi} \def\gr{\rho} \def\vgr{\varrho{\varpi} \def\gr{\rho} \def\vgr{\varrho}
\def\sigma} \def\vgs{\varsigma} \def\gt{\tau{\sigma} \def\vgs{\varsigma} \def\gt{\tau}
\def\upsilon} \def\gv{\vartheta} \def\gw{\omega{\upsilon} \def\gv{\vartheta} \def\gw{\omega}
\def\xi} \def\gy{\psi} \def\gz{\zeta{\xi} \def\gy{\psi} \def\gz{\zeta}
\def\Gamma} \def\Gd{\Delta} \def\Gf{\Phi{\Gamma} \def\Gd{\Delta} \def\Gf{\Phi}
\def\Theta{\Theta}
\def\Lambda} \def\Gs{\Sigma} \def\Gp{\Pi{\Lambda} \def\Gs{\Sigma} \def\Gp{\Pi}
\def\Omega} \def\Gx{\Xi} \def\Gy{\Psi{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}
\def{\mathcal S}} \def\CM{{\mathcal M}} \def\CN{{\mathcal N}{{\mathcal S}} \def\CM{{\mathcal M}} \def\CN{{\mathcal N}}
\def{\mathcal R}} \def\CO{{\mathcal O}} \def\CP{{\mathcal P}{{\mathcal R}} \def\CO{{\mathcal O}} \def\CP{{\mathcal P}}
\def{\mathcal A}} \def\CB{{\mathcal B}} \def\CC{{\mathcal C}{{\mathcal A}} \def\CB{{\mathcal B}} \def\CC{{\mathcal C}}
\def{\mathcal D}} \def\CE{{\mathcal E}} \def\CF{{\mathcal F}{{\mathcal D}} \def\CE{{\mathcal E}} \def\CF{{\mathcal F}}
\def{\mathcal G}} \def\CH{{\mathcal H}} \def\CI{{\mathcal I}{{\mathcal G}} \def\CH{{\mathcal H}} \def\CI{{\mathcal I}}
\def{\mathcal J}} \def\CK{{\mathcal K}} \def\CL{{\mathcal L}{{\mathcal J}} \def\CK{{\mathcal K}} \def\CL{{\mathcal L}}
\def{\mathcal T}} \def\CU{{\mathcal U}} \def\CV{{\mathcal V}{{\mathcal T}} \def\CU{{\mathcal U}} \def\CV{{\mathcal V}}
\def{\mathcal Z}} \def\CX{{\mathcal X}} \def\CY{{\mathcal Y}{{\mathcal Z}} \def\CX{{\mathcal X}} \def\CY{{\mathcal Y}}
\def\CW{{\mathcal W}} \def\CQ{{\mathcal Q}}
\def\mathbb A} \def\BBb {\mathbb B} \def\BBC {\mathbb C {\mathbb A} \def\BBb {\mathbb B} \def\BBC {\mathbb C}
\def\mathbb D} \def\BBE {\mathbb E} \def\BBF {\mathbb F {\mathbb D} \def\BBE {\mathbb E} \def\BBF {\mathbb F}
\def\mathbb G} \def\BBH {\mathbb H} \def\BBI {\mathbb I {\mathbb G} \def\BBH {\mathbb H} \def\BBI {\mathbb I}
\def\mathbb J} \def\BBK {\mathbb K} \def\BBL {\mathbb L {\mathbb J} \def\BBK {\mathbb K} \def\BBL {\mathbb L}
\def\mathbb M} \def\BBN {\mathbb N} \def\BBO {\mathbb O {\mathbb M} \def\BBN {\mathbb N} \def\BBO {\mathbb O}
\def\mathbb P} \def\BBR {\mathbb R} \def\BBS {\mathbb S {\mathbb P} \def\BBR {\mathbb R} \def\BBS {\mathbb S}
\def\mathbb T} \def\BBU {\mathbb U} \def\BBV {\mathbb V {\mathbb T} \def\BBU {\mathbb U} \def\BBV {\mathbb V}
\def\mathbb W} \def\BBX {\mathbb X} \def\BBY {\mathbb Y {\mathbb W} \def\BBX {\mathbb X} \def\BBY {\mathbb Y}
\def\mathbb Z {\mathbb Z}
\def\mathfrak A} \def\GTB {\mathfrak B} \def\GTC {\mathfrak C {\mathfrak A} \def\GTB {\mathfrak B} \def\GTC {\mathfrak C}
\def\mathfrak D} \def\GTE {\mathfrak E} \def\GTF {\mathfrak F {\mathfrak D} \def\GTE {\mathfrak E} \def\GTF {\mathfrak F}
\def\mathfrak G} \def\GTH {\mathfrak H} \def\GTI {\mathfrak I {\mathfrak G} \def\GTH {\mathfrak H} \def\GTI {\mathfrak I}
\def\mathfrak J} \def\GTK {\mathfrak K} \def\GTL {\mathfrak L {\mathfrak J} \def\GTK {\mathfrak K} \def\GTL {\mathfrak L}
\def\mathfrak M} \def\GTN {\mathfrak N} \def\GTO {\mathfrak O {\mathfrak M} \def\GTN {\mathfrak N} \def\GTO {\mathfrak O}
\def\mathfrak P} \def\GTR {\mathfrak R} \def\GTS {\mathfrak S {\mathfrak P} \def\GTR {\mathfrak R} \def\GTS {\mathfrak S}
\def\mathfrak T} \def\GTU {\mathfrak U} \def\GTV {\mathfrak V {\mathfrak T} \def\GTU {\mathfrak U} \def\GTV {\mathfrak V}
\def\mathfrak W} \def\GTX {\mathfrak X} \def\GTY {\mathfrak Y {\mathfrak W} \def\GTX {\mathfrak X} \def\GTY {\mathfrak Y}
\def\mathfrak Z} \def\GTQ {\mathfrak Q {\mathfrak Z} \def\GTQ {\mathfrak Q}
\font\Sym= msam10
\def\SYM#1{\hbox{\Sym #1}}
\newcommand{\prt\Omega} \def\Gx{\Xi} \def\Gy{\Psi\xspace}{\prt\Omega} \def\Gx{\Xi} \def\Gy{\Psi\xspace}
\medskip
\mysection {Introduction}
Let $\Omega$ be a domain in $\BBR^N$ ($N\geq 2$) with a $C^2$ compact boundary $\prt\Omega$. A function $u\in W_{loc}^{1,p}(\Omega)$ is $p$-harmonic if
\begin{equation}\label {p-harm}
\int_\Omega\abs {Du}^{p-2}\langle Du,D\phi\rangle\,dx=0
\end {equation}
for any $\phi\in C^1_0(\Omega)$. Such functions are locally $C^{1,\alpha}$ for some $\alpha\in (0,1)$. In the case $p=N$, the function $u$ is called $N$-harmonic. The $N$-harmonic functions play an important role as a natural extension of classical harmonic functions. They also appear in the theory of bounded distortion mappings \cite {Re}. One of the main properties of the class of $N$-harmonic functions is its invariance under conformal transformations of the space $\BBR^N$. This article is devoted to the study of $N$-harmonic functions which admit an isolated boundary singularity. More precisely, let $a\in\prt\Omega$
and $u\in W_{loc}^{1,N}(\Omega)\cap C(\overline\Omega\setminus\{a\})$ be an $N$-harmonic function vanishing on $\prt\Omega\setminus\{a\}$; then $u$ may develop a singularity at the point $a$. Our goal is to show the existence of such singular solutions, and then to classify all the positive $N$-harmonic functions with an isolated boundary singularity. We denote by ${\bf n}_a$ the outward normal unit vector to $\Omega$ at $a$.
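Before stating the results, let us recall a basic example illustrating the special role of the exponent $p=N$: the function $u(x)=\ln \abs x$ is $N$-harmonic in $\BBR^N\setminus\{0\}$. Indeed, writing $r=\abs x$, one has $Du=x/r^2$ and $\abs {Du}=r^{-1}$, hence
$$\abs {Du}^{N-2}Du=r^{1-N}\,\frac{x}{r},$$
and the divergence of this radial field equals $(1-N)r^{-N}+(N-1)r^{-N}=0$.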
The main results we prove are presented below:\\
\noindent {\it There exists a unique positive $N$-harmonic function $u=u_{1,a}$ in $\Omega$, vanishing on $\prt\Omega\setminus\{a\}$, such that
\begin{equation}\label {behav}
\lim_{\scriptsize\BA{c}x\to a\\
\frac{x-a}{\abs {x-a}}\to\sigma\EA}\abs {x-a}u(x)=-\langle\sigma,{\bf n_a}\rangle
\end {equation}
uniformly
on $S^{N-1}\cap \overline{\Omega}=\{\sigma\in S^{N-1}: \langle\sigma,{\bf n_{a}}\rangle <0\}$.}\\
The function $u_{1,a}$ plays a fundamental role in the description of all the positive singular $N$-harmonic functions, since the next result holds\\
\noindent {\it Let $u$ be a positive $N$-harmonic function in $\Omega$, vanishing on $\prt\Omega\setminus\{a\}$. Then there exists $k\geq 0$ such that
\begin{equation}\label {charact}
u=ku_{1,a}.
\end {equation}
}
When $u$ is no longer assumed to be positive, we obtain a classification result provided its growth is limited, as the following shows\\
\noindent {\it Let $u$ be an $N$-harmonic function in $\Omega$, vanishing on $\prt\Omega\setminus\{a\}$ and verifying
$$\abs u\leq Mu_{1,a},
$$
for some $M\geq 0$. Then there exists $k\in\BBR$ such that
\begin{equation}\label {charact2}
u=ku_{1,a}.
\end {equation}
}
In the last section we present a procedure to construct regular $p$-harmonic functions ($p>1$) or singular $N$-harmonic functions as products of functions of one variable. Starting from the existence of $p$-harmonic functions in the plane under the form $u(x)=u(r,\theta)=r^{\gb}\gw(\theta)$ (see \cite {Kr}), our method, by induction on $N$, allows us to produce separable solutions of the {\it spherical $p$-harmonic spectral equation}
\begin {equation}\label {psrad-p*}
- div_{\sigma}\left(\left(\gb^2v^2+\abs {\nabla _{\sigma}v}^2\right)^{(p-2)/2}\nabla _{\sigma}v\right)
=\gl_{N,\gb}\left(\gb^2v^2+\abs {\nabla _{\sigma}v}^2\right)^{(p-2)/2}v
\end {equation}
on $S^{N-1}$, where $\gl_{N,\gb}=\gb\left(N-1+(\gb-1)(p-1)\right)$. This equation is naturally associated with the existence of $p$-harmonic functions under the form $u(x)=\abs x^{\gb}v(x/\abs x)$.
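The weight appearing in (\ref {psrad-p*}) comes from the following elementary computation: if $u(x)=r^{\gb}v(\sigma)$ with $r=\abs x$ and $\sigma=x/\abs x$, then, since $\sigma$ is orthogonal to the tangential gradient $\nabla_{\sigma}v$,
$$Du=r^{\gb-1}\left(\gb v\,\sigma+\nabla_{\sigma}v\right)\Longrightarrow
\abs {Du}^2=r^{2\gb-2}\left(\gb^2v^2+\abs {\nabla_{\sigma}v}^2\right),$$
so that $\abs {Du}^{p-2}$ produces exactly the factor $\left(\gb^2v^2+\abs {\nabla_{\sigma}v}^2\right)^{(p-2)/2}$.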
As a consequence, we express $p$-harmonic functions under the form of a product of $N$ explicit functions of one real variable. {\it If we represent $\BBR^N$ as the set of $\{x=(x_{1},..., x_{N})\}$ where
$x_{1}=r\sin\theta_{N-1}\sin\theta_{N-2}...\sin\theta_{2}\sin\theta_{1}$,
$x_{2}=r\sin\theta_{N-1}\sin\theta_{N-2}...\sin\theta_{2}\cos\theta_{1}$, ...,
$x_{N-1}=r\sin\theta_{N-1}\cos\theta_{N-2}$ and
$x_{N}=r\cos\theta_{N-1}$
with $\theta_{1}\in [0,2\gp]$ and $\theta_{k}\in [0,\gp]$, for $k=2,...,N-1$, then, for any integer $k$ the function
\begin{equation}\label {charact-p}
u(x)=(r\sin\theta_{N-1}\sin\theta_{N-2}...\sin\theta_{2})^{\gb_{k}}\gw_{k}(\theta_{1})
\end {equation}
is $p$-harmonic in $\BBR^N$, in which expression $\gb_{k}>1$ is an algebraic number depending on $k$ and
$\gw_{k}$ is a $\gp/k$-antiperiodic solution of a completely integrable homogeneous differential equation. Moreover, singular $N$-harmonic
functions are also obtained under the form}
\begin{equation}\label {charact-p'}
u(x)=r^{-\gb_{k}}(\sin\theta_{N-1}\sin\theta_{N-2}...\sin\theta_{2})^{\gb_{k}}\gw_{k}(\theta_{1}).
\end {equation}
Our paper is organized as follows: 1- Introduction. 2- Construction of fundamental singular $N$-harmonic functions. 3- The classification theorem.
4- Separable solutions of the $p$-harmonic spectral problem.
\mysection {Construction of fundamental singular $N$-harmonic functions}
We denote by $\CH_N$ the group of conformal transformations of $\BBR^N$. This group is generated by homotheties, inversions and isometries. Our first result is classical, but we repeat the proof for the sake of completeness.
\bprop{inv} Let $u$ be an $N$-harmonic function in a domain $G\subset\BBR^N$
and $h\in \CH_N$. Then $u_h=u\circ h$ is $N$-harmonic in $h^{-1}(G)$.
\es
\note{Proof} Because for any $p>1$ the class of $p$-harmonic functions is invariant under
homotheties and isometries, it is sufficient to prove the result when $h$ is the inversion
$\CI_0^1$ with center the origin in $\BBR^N$ and power $1$. We set
$y=\CI_0^1(x)$ and $v(y)=u(x)$. For any $i=1,...,N$
$$u_{x_i}(x)=\sum_j\left(\gd_{ij}\abs x^{-2}-2\abs x^{-4}x_ix_j\right)v_{y_j}(y).
$$
Then
$$\abs{Du}^2(x)=\abs x^{-4}\abs{Dv}^2(y)=\abs y^{4}\abs{Dv}^2(y).
$$
If $\phi} \def\vgf{\varphi} \def\gh{\eta$ is a test function, we denote similarly $\psi(y)=\phi(x)$, thus
$$\langle Du,D\phi\rangle=\abs x^{-4}\langle Dv,D\psi\rangle=
\abs y^{4}\langle Dv,D\psi\rangle,
$$
and
$$\int_G\abs {Du}^{N-2}\langle Du,D\phi\rangle\,dx
=\int_{\CI_0^1(G)}\abs y^{2N}\abs {Dv}^{N-2}\langle Dv,D\psi\rangle\abs {D\CI_0^1}\,dy
$$
Because $\abs {D\CI_0^1}=\abs {\det(\prt x_i/\prt y_j)}=\abs y^{-2N}$, the result follows.
\hspace{10mm}\hfill $\square$
\\
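Let us point out where the exponent $p=N$ enters: for a general $p>1$ the same computation gives
$$\int_G\abs {Du}^{p-2}\langle Du,D\phi\rangle\,dx
=\int_{\CI_0^1(G)}\abs y^{2p-2N}\abs {Dv}^{p-2}\langle Dv,D\psi\rangle\,dy,$$
and the weight $\abs y^{2p-2N}$ is constant precisely when $p=N$. This explains why conformal invariance is specific to the class of $N$-harmonic functions.\\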
\bprop {ball} Let $N\geq 2$, $B=B_1(0)$ and $a\in \prt B$. Then there exists a unique positive $N$-harmonic function $U^i$ in $B$ which vanishes on $\prt B\setminus\{a\}$ and satisfies
\begin {equation}\label {behav1}
U^i(x)=\myfrac {1-\abs{x}}{\abs {x-a}^2}(1+o (1))\quad \mbox {as }x\to a.
\end {equation}
\es
\note{Proof} We first observe that the coordinate functions $x_i$ are $N$-harmonic and
positive in the half-space $H_i=\{x\in \BBR^N:x_i>0\}$ and vanish on
$\prt H_i$. Therefore, the functions $\chi_i(x)=x_i/\abs x^2$ are also $N$-harmonic and singular at $0$. Without loss of generality we can assume that
$a$ is the origin of coordinates,
and that $B$ is the ball with radius $1$ and center $(-1,0,...,0)$. Let
$\gw$ be the point with coordinates $(-2,0,...,0)$. By
the inversion $\CI_\gw^4$, $a$ is invariant and $B$ is transformed into
the half space $H_1$. Since $\chi_1$ is $N$-harmonic in $H_1$, the function
$$x\mapsto \chi_1\circ\CI_\gw^4(x)=-\myfrac{\abs x^2+2x_1}{2\abs x^2}$$
is $N$-harmonic and positive in $B=\{x:\abs x^2+2x_1<0\}$, vanishes
on $\prt B$ and is singular at $x=0$. If we set
$x'_1=x_1+1$, $x'_i=x_i$ for $i=2,...,N$ and
$U^i(x')=\chi_1\circ\CI_\gw^4(x)$, then the $x'$ coordinates of $a$ are
$(1,0,...,0)$ and
$$U^i(x')=\myfrac {1-\abs {x'}^2}{2\abs {x'-a}^2}=\myfrac {1-\abs {x'}}{\abs {x'-a}^2}(1+o (1))\quad \mbox {as }x'\to a.
$$
Let $\tilde U^i$ be another positive $N$-harmonic function in $B$ which verifies
(\ref {behav1}) and vanishes on $\prt B\setminus\{a\}$. Thus, for any $\gd>0$, $(1+\gd)\tilde U^i$ is positive and $N$-harmonic, and $U^i-(1+\gd)\tilde U^i$ is negative near $a$. By the maximum principle, $U^i\leq (1+\gd)\tilde U^i$. Letting $\gd\to 0$ and permuting $U^i$ and $\tilde U^i$ yields $\tilde U^i=U^i$.\hspace{10mm}\hfill $\square$
\\
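The first observation of the proof is worth isolating: since $Dx_i=e_i$ is a constant unit vector, $\abs {Dx_i}^{p-2}Dx_i=e_i$ and
$$\int_{H_i}\abs {Dx_i}^{p-2}\langle Dx_i,D\phi\rangle\,dx=\int_{H_i}\myfrac {\prt \phi}{\prt x_i}\,dx=0\quad \forall\phi\in C^1_0(H_i),$$
so each coordinate function $x_i$ is in fact $p$-harmonic for every $p>1$, and not only for $p=N$.\\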
By performing the inversion $\CI_0^1$, we derive the dual result
\bprop {extball} Let $N\geq 2$, $G=B^c_1(0)$ and $a\in \prt B$. Then there exists a unique positive $N$-harmonic function $U^e$ in $G$ which vanishes on $\prt B\setminus\{a\}$ and satisfies
\begin {equation}\label {behav2}
U^e(x)=o (\ln\abs x)\quad \mbox {as }\abs x\to \infty,
\end {equation}
and
\begin {equation}\label {behav3}
U^e(x)=\myfrac {\abs{x}-1}{\abs {x-a}^2}(1+o (1))\quad \mbox {as }x\to a.
\end {equation}
\es
\note{Proof} The assumption (\ref{behav2}) implies that the function $U=U^e\circ \CI_0^1$, which is $N$-harmonic in $B\setminus\{0\}$, verifies
$$U(x)=o (\ln(1/\abs x))\quad \mbox {near }0.
$$
By \cite {Se1}, $0$ is a removable singularity and thus $U$ can be extended as a positive $N$-harmonic function in $B$ which satisfies
(\ref{behav1}). This implies the claim.\hspace{10mm}\hfill $\square$ \\
We denote by $\dot\gr(x)$ the signed distance from $x$ to $\partial \Omega$.
Since $\partial \Omega $ is $C^{2}$, there exists $\beta_{0}>0$ such that if $x\in \mathbb{R}^{N}$ verifies $-\beta_{0} \leq \dot\gr(x)\leq \beta_{0}$, there exists a unique $\xi_{x}\in\prt\Omega$ such that
$\abs{x-\xi_{x}}=\abs{\dot\gr(x)}$. Furthermore, if $\gn_{\xi_{x}}$ is the outward unit vector to $\prt\Omega$ at $\xi_{x}$, then $x=\xi_{x}-\dot\gr(x)\gn_{\xi_{x}}$. In particular $\xi_{x}-\dot\gr(x)\gn_{\xi_{x}}$ and $\xi_{x}+\dot\gr(x)\gn_{\xi_{x}}$ have the same orthogonal projection $\xi_{x}$ onto $\prt\Omega$. \\ \smallskip
Let $T_{\gb_{0}}(\Omega)=\{x\in\BBR^N:-\gb_{0}\leq\dot\gr(x)\leq\gb_{0}\}$; then the mapping
$\Gp:[-\gb_{0},\gb_{0}]\times\prt\Omega\mapsto T_{\gb_{0}}(\Omega)$ defined by $\Gp(\gr,\xi)=\xi-\gr{\bf\gn}(\xi)$ is a $C^2$ diffeomorphism. Moreover $D\Gp(0,\xi)(1,e)=e-\gn_{\xi}$ for any $e$ belonging to the tangent space $T_{\xi}(\prt\Omega)$ to $\prt\Omega$ at $\xi$. If $x\in T_{\gb_{0}}(\Omega)$, we define the reflection of $x$ through $\prt\Omega$ by
$\psi(x)=\xi_{x}+\dot\gr(x)\gn_{\xi_{x}}$. Clearly $\psi$ is an involutive diffeomorphism from $\overline\Omega\cap T_{\gb_{0}}(\Omega)$ to $\Omega^c\cap T_{\gb_{0}}(\Omega)$, and $D\psi (x)=I$ for any $x\in\prt\Omega$.
If a function $v$ is defined in $\Omega\cap T_{\gb_{0}}(\Omega)$, we define $\tilde v$ in $ T_{\gb_{0}}(\Omega)$ by\\
\begin {equation}\label {ext}
\tilde v(x)=\left\{\BA{l}v(x)\quad\qquad\mbox {if }x\in \Omega\cap T_{\gb_{0}}(\Omega)\\[2mm]
-v\circ\psi(x)\quad\mbox {if }x\in \Omega^c\cap T_{\gb_{0}}(\Omega).
\EA\right.\end {equation}
\\
\blemma{ext} Assume that $0\in \partial \Omega$. Let $v\in C^{1,\alpha}(\overline\Omega\cap T_{\gb_{0}}(\Omega)\setminus\{0\})$ be a solution of (\ref {p-harm}) in $\Omega\cap T_{\gb_{0}}(\Omega)$ vanishing on $\prt\Omega\setminus\{0\}$. Then
$\tilde v\in C^{1,\alpha}(T_{\gb}(\Omega)\setminus\{0\})$ is a solution of a quasilinear equation
\begin {equation}\label {ext-equ}
\mysum{j}{}\myfrac {\prt }{\prt x_{j}}\tilde A_{j}(x,D\tilde v)=0
\end {equation}
in $T_{\gb}(\Omega)\setminus\{0\}$, where the $\tilde A_{j}$ are $C^1$ functions defined in $T_{\gb}(\Omega)$ which verify
\begin {equation}\label {ext1}\left\{\BA {l}
(i)\quad \tilde A_{j}(x,0)=0\\[2mm]
(ii) \quad\mysum{i,j}{}\myfrac {\prt \tilde A_{j}}{\prt \eta_{i}}(x,\eta)\xi_{i}\xi_{j}
\geq \Gamma\abs\eta^{p-2}\abs \xi^2
\\
(iii) \quad\mysum{i,j}{}\abs {\myfrac {\prt \tilde A_{j}}{\prt \eta_{i}}(x,\eta)}\leq \Gamma\abs\eta^{p-2}\\
\EA\right.
\end {equation}
for all $x\in T_{\gb}(\Omega)\setminus\{0\}$, $\eta\in \BBR^N$ and $\xi\in\BBR^N$, for some $\gb\in (0,\gb_{0}]$ and some
$ \Gamma>0$.
\es
\note{Proof} The assumptions (\ref {ext1}) imply that weak solutions of (\ref {ext-equ}) are $C^{1,\alpha}$, for some $\alpha>0$ \cite {To1}, and satisfy the standard a priori estimates. As defined, the function $\tilde v$ is clearly $C^1$ in $T_{\gb_{0}}(\Omega)\setminus\{0\}$. Writing
$Dv(x)=-D(\tilde v\circ \psi(x))=-D\psi(x)(D\tilde v(\psi(x)))$ and $\tilde x=\psi(x)=\psi^{-1}(x)$
$$\BA {l}
\myint {\Omega\cap T_{\gb}(\Omega)}{}\abs {Dv}^{p-2}Dv.D\gz\, dx\\
\phantom {-----}
=\myint {\overline\Omega^c\cap T_{\gb}(\Omega)}{}\abs {D\psi(D\tilde v)}^{p-2}
D\psi (D\tilde v).D\psi(D\gz)\abs {D\psi}d\tilde x.
\EA$$
But
$$\BA {l}D\psi (D\tilde v).D\psi(D\gz)=\mysum{k}{}
\left(\mysum{i}{}\myfrac {\prt \psi_{i}}{\prt x_{k}}\myfrac {\prt \tilde v}{\prt x_{i}}\right)
\left(\mysum{j}{}\myfrac {\prt \psi_{j}}{\prt x_{k}}\myfrac {\prt \gz}{\prt x_{j}}\right)\\
\phantom{D\psi (D\tilde v).D\psi(D\gz)}
=\mysum{j}{}\left(\mysum{i,k}{}\myfrac {\prt \psi_{i}}{\prt x_{k}}\myfrac {\prt \psi_{j}}{\prt x_{k}}\myfrac {\prt \tilde v}{\prt x_{i}}\right)\myfrac {\prt \gz}{\prt x_{j}}.
\EA$$
We set $b(x)=\abs{D\psi}$,
\begin {equation}\label {ext2}
A_j(x,\eta)=\abs {D\psi}\abs {D\psi(\eta)}^{p-2}\mysum{i}{}\left(\mysum{k}{}\myfrac {\prt \psi_{i}}{\prt x_{k}}\myfrac {\prt \psi_{j}}{\prt x_{k}}\right)\eta_i,
\end {equation}
and
\begin {equation}\label {ext3}
A(x,\eta)=(A_1(x,\eta),...,A_N(x,\eta))=\abs {D\psi}\abs {D\psi(\eta)}^{p-2}(D\psi)^tD\psi(\eta).
\end {equation}
For any $\xi\in\prt\Omega$, the mapping
$D\psi_{\prt\Omega}(\xi)$ is the symmetry with respect to the hyperplane $T_{\xi}(\prt\Omega)$ tangent to $\prt\Omega$ at $\xi$, so $\abs {D\psi(\xi)}=1$. Since $D\psi$ is continuous, a lengthy but standard computation leads to the existence of some $\gb\in (0,\gb_{0}]$ such that (\ref {ext1}) holds in $T_{\gb}(\Omega)\cap\overline\Omega^c$. If we define $\tilde A$ to be $\abs\eta^{p-2}\eta$ on $T_{\gb}(\Omega)\cap\overline\Omega$ and $A$ on $T_{\gb}(\Omega)\cap\overline\Omega^c$, then inequalities (\ref {ext1}) are satisfied in $T_{\gb}(\Omega)$.\hspace{10mm}\hfill $\square$
\\
These three results allow us to prove our main result.
\bth{gen} Let $\Omega$ be an open subset of $\BBR^N$ with a compact $C^2$ boundary, $\gr(x)=\opname{dist} (x,\prt\Omega)$ and $a\in\prt \Omega$. Then there exists one and only one positive $N$-harmonic function $u$ in $\Omega$, vanishing on
$\prt\Omega\setminus\{a\}$ and verifying
\begin {equation}\label {behav4}
\lim_{\scriptsize\BA{c}x\to a\\
\frac{x-a}{\abs {x-a}}\to\sigma\EA}\abs {x-a}u(x)=-\langle\sigma,{\bf n_a}\rangle
\end {equation}
uniformly on $S^{N-1}\cap \overline{\Omega}$, and
\begin {equation}\label {behav5}
u(x)=o (\ln\abs x)\quad \mbox {as }\abs x\to \infty,
\end {equation}
if $\Omega$ is not bounded.
\es
\note{Proof} Uniqueness follows from (\ref{behav4}) by the same technique as in the previous propositions.\\
{\it Step 1 (Existence). }If $\Omega$ is not bounded, we perform an inversion
$\CI_m^{\abs{m-a}^2}$ with center some $m\in\Omega$. Because of (\ref{behav5}), the new function $u\circ \CI_m^{\abs{m-a}^2}$ is $N$-harmonic in $\Omega'=\CI_m^{\abs{m-a}^2}(\Omega)$ and satisfies (\ref{behav4}). Thus we are reduced to the case where $\Omega$ is bounded. Since $\Omega$ is $C^2$, it satisfies the interior and exterior sphere condition at $a$. By dilating $\Omega$, we can assume that the exterior and interior tangent spheres at $a$ have radius $1$. We denote them by $B_1(\gw^e)$ and $B_1(\gw^i)$, their respective centers being
$\gw^i=a-\bf n_a$ and $\gw^e=a+\bf n_a$. We set
$V^i(x)=U^i(x-\gw^i)$ and $V^e(x)=U^e(x-\gw^e)$ where $U^i$ and
$U^e$ are the two singular $N$-harmonic functions described in
\rprop {ball} and \rprop {extball}, respectively in $B_1(\gw^i)$ and
$B^c_1(\gw^e)$, with singularity at point $a$. For
$\ge>0$, we put $\Omega_\ge=\Omega\setminus B_\ge(a)$,
$\Gs_\ge=\Omega\cap\prt B_\ge(a)$ and
$\prt^* \Omega_\ge=\prt \Omega\cap B^c_\ge(a)$. Let $u_\ge$ be the solution of
\begin {equation}\label {approx}\left\{\BA{l}
{div}(\abs {Du_\ge}^{N-2}Du_\ge)=0\quad\mbox {in }\Omega_\ge\\
\phantom {-------;,}
u_\ge=0\quad\mbox {on }\prt^* \Omega} \def\Gx{\Xi} \def\Gy{\Psi_\ge\\
\phantom {-------;,}
u_\ge=V^e\quad\mbox {on }\Gs_\ge.
\EA\right.\end {equation}
This solution is obtained classically by minimisation of a convex functional over a class of functions with prescribed boundary value on
$\prt\Omega_\ge$. For any $x\in B_1(\gw^i)$, there holds
$$\opname{dist} (x,\prt B_1(\gw^e))=\abs {x-\gw^e}-1\geq
\opname{dist} (x,\prt \Omega)\geq \opname{dist} (x,\prt B_1(\gw^i))=1-\abs {x-\gw^i},
$$
thus
$$V^i(x)\leq V^e(x)\quad \forall x\in B_1(\gw^i),
$$
by using (\ref {behav1}), (\ref {behav3}) and the maximum principle. Therefore
$$V^i(x)\leq u_\ge (x)\leq V^e(x)\quad \forall x\in B_1(\gw^i)\cap \Omega} \def\Gx{\Xi} \def\Gy{\Psi_\ge
$$
and
$$u_\ge (x)\leq V^e(x)\quad \forall x\in \Omega} \def\Gx{\Xi} \def\Gy{\Psi_\ge.
$$
Finally, for $0<\ge'<\ge$,
$u_{\ge'}\vline_{\Gs_\ge}\leq V^e \vline_{\Gs_\ge}= u_{\ge}\vline_{\Gs_\ge}$. Thus
$$u_{\ge'}(x)\leq u_{\ge}(x)\quad \forall x\in\Omega} \def\Gx{\Xi} \def\Gy{\Psi_\ge.
$$
The sequence $\{u_\ge\}$ is increasing with respect to $\ge$. By classical a priori estimates concerning quasilinear equations, it converges as $\ge\to 0$ to some positive $N$-harmonic function $u$ in $\Omega$ which vanishes on
$\prt\Omega\setminus\{a\}$ and verifies
$$V^i(x)\leq u (x)\quad \forall x\in B_1(\gw^i),
$$
and
$$u (x)\leq V^e(x)\quad \forall x\in \Omega.
$$
This implies
\begin {equation}\label {approx1}
\myfrac {1-\abs {x-\gw^i}^2}{2\abs {x-a}^2}\leq u (x)\quad \forall x\in B_1(\gw^i),
\end {equation}
\begin {equation}\label {approx2}
u (x)\leq\myfrac {\abs {x-\gw^e}^2-1}{2\abs {x-a}^2}\quad \forall x\in \Omega.
\end {equation}
By scaling we can prove the following estimate
\begin {equation}\label {approx3}
u (x)\leq C\myfrac {\gr(x)}{\abs {x-a}^2}\quad \forall x\in \Omega
\end {equation}
for some $C>0$. To see this, we may assume that $a$ is the origin of coordinates and, for $r>0$, set
$u_r(y)=u(r y)$. Clearly $u_r$ is $N$-harmonic in $\Omega/r$ and
$$\max \{\abs {Du_r(y)}:y\in \Omega/r\cap (B_{3/2}\setminus B_{2/3}) \}
\leq C\max \{\abs {u_r(z)}:z\in \Omega/r\cap (B_2\setminus B_{1/2})\},
$$
where $C$, which depends on the curvature of $\prt\Omega} \def\Gx{\Xi} \def\Gy{\Psi/r$, remains bounded as long as $r\leq 1$.
Since $Du_r(y)=r Du(r y)$, we obtain, by taking $r y=x$ with $\abs y=1$ and using (\ref{approx2}) with general $a$, that $ \abs {Du(x)}\leq C\abs {x-a}^{-2}$. By the mean value theorem, since $u$ vanishes on $\prt\Omega\setminus \{a\}$, (\ref{approx3}) holds.\\
{\it Step 2. } In order to give a simple proof of the estimate (\ref{behav4}), we fix the origin of coordinates at $a=0$ and the normal outward unit vector at $a$ to be $-\mathbf{e}_{N}$. If $\tilde{u}$ is the extension of $u$ by reflection through $\prt \Omega$, it satisfies (\ref{ext-equ}) in $T_{\gb}(\Omega)\backslash\{0\}$ (see lemma \ref{ext}). For $r>0$, set $\tilde{u}^{r}(x)=r \tilde{u}(rx)$. Then $\tilde{u}^{r}$ is a solution of
\begin{equation}\label{eq}
\mysum{j}{}\myfrac {\prt }{\prt x_{j}}\tilde A_{j}(rx,D\tilde u^{r})=0
\end{equation}
in $T_{\gb/r}(\Omega/r)\backslash\{0\}$. By the construction of $\tilde{A}_{j}(x,\eta)$, we can note that\\
\begin{center}
$\displaystyle{\lim_{r\to 0} \tilde{A}_{j}(rx,\eta)=|\eta|^{p-2}\eta_{j},\hspace{1cm}\forall \eta \in \mathbb{R}^{N}}$.
\end{center}
Furthermore, since for any $x\in T_{\beta}(\Omega)\backslash\{0\}$ one has $\rho(x)=\rho(\psi(x))$ and $c\abs{x}\leq \abs{\psi(x)}\leq c^{-1} \abs{x}$ for some $c>0$, the estimate (\ref{approx3}) remains valid if $u$ is replaced by $\tilde{u}^{r}$, $\Omega$ by $T_{\gb/r}(\Omega/r)$ and $\rho(x)$ by $\rho_{r}(x):=\opname{dist}(x,\prt\Omega/r)$, i.e.
\begin{center}
$\abs{\tilde{u}^{r}(x)}\leq C |x|^{-2} \rho_{r}(x)$ $\forall x\in T_{\gb/r}(\Omega} \def\Gx{\Xi} \def\Gy{\Psi/r)$.
\end{center}
For fixed $0<s<t$ and for some $0< r_{0}\leq 1$, the spherical shell $\Gamma_{s,t}=\{x\in \mathbb{R}^{N} : s\leq \abs{x}\leq t \}$ is included in $T_{\beta/r}(\Omega/r)$ for all $0<r\leq r_{0}$. By the classical regularity theory for quasilinear equations \cite{To1} and lemma \ref{ext}, there holds
\begin{center}
$\left\| D\tilde{u}^{r} \right\|_{C^{\alpha}(\Gamma_{2/3,3/2})}\leq C_{r} \left\|\tilde{u}^{r}\right\|_{L^{\infty}(\Gamma_{1/2,2})},$
\end{center}
where $C_{r}$ remains bounded because $r\leq 1$. By Ascoli's theorem, (\ref{approx1}) and (\ref{approx3}), $\tilde{u}^{r}(x)$ converges to $x_{N}|x|^{-2}$ in the $C^{1}(\Gamma_{2/3,3/2})$-topology. This implies in particular that $ r^{2}D\tilde{u}(rx)$ converges uniformly in $\Gamma_{2/3,3/2}$ to $-2 x_{N}|x|^{-4}x+|x|^{-2} \mathbf{e}_{N}$. Using the expression of $D\tilde{u}$ in spherical coordinates we obtain
\begin{center}
$r^{2}\tilde{u}_{r} \mathbf{i}- r \tilde{u}_{\phi} \mathbf{e}+ \myfrac{r}{\sin \phi}\nabla_{\sigma'}\tilde{u} \to\ -2 \sigma_{N} \mathbf{i} + \mathbf{e}_{N}$ uniformly on $S^{N-1}$ as $r\to\ 0$,
\end{center}
where $\cos \phi = x_{N}|x|^{-1}$, $\mathbf{i}=x/|x|$, $\mathbf{e}$ is derived from $x/|x|$ by a rotation of angle $\pi/2$ in the plane through $0$, $x$ and $N$ ($N$ being the North pole), and $\nabla_{\sigma'} $ is the covariant gradient on $S^{N-2}$. Since $\mathbf{i}$, $\mathbf{e}$ and $\nabla_{\sigma'}$ are mutually orthogonal, the components of $\mathbf{e}_{N}$ in this frame are $\cos \phi$, $\sin \phi$ and $0$, thus
\begin{center}
$r \tilde{u}_{\phi}(r,\sigma',\phi) \to\ - \sin \phi $ as $r \to\ 0$.
\end{center}
Since
\begin{center}
$\tilde{u}(r,\sigma',\phi)=\myint{\pi/2}{\phi}\tilde{u}_{\phi}(r,\sigma',\theta)d\theta$,
\end{center}
the previous convergence estimate establishes (\ref{behav4}).\hspace{10mm}\hfill $\square$ \\
\bdef{fund} We shall denote by $u_{1,a}$ the unique positive $N$-harmonic function satisfying (\ref{behav4}), and call it the fundamental solution with a point singularity at $a$.
\es
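For the reader's convenience we record a direct verification, ours and not carried out in the original text, that the limit profile $x_{N}\abs x^{-2}$ obtained in Step 2 is indeed $N$-harmonic in $\BBR^N\setminus\{0\}$:

```latex
\noindent Set $u(x)=x_{N}\abs x^{-2}$ and $r=\abs x$. Then
$$Du=r^{-2}\mathbf{e}_{N}-2x_{N}r^{-4}x,\qquad
\abs{Du}^2=r^{-4}-4x_{N}^2r^{-6}+4x_{N}^2r^{-6}=r^{-4},$$
so that $\abs{Du}^{N-2}=r^{-2(N-2)}$. Since $Du\cdot x=-x_{N}r^{-2}$ and
$$\Delta u=x_{N}\Delta (r^{-2})+2\,\mathbf{e}_{N}\cdot D(r^{-2})
=-2(N-4)x_{N}r^{-4}-4x_{N}r^{-4}=-2(N-2)x_{N}r^{-4},$$
we obtain
$${div}\left(\abs{Du}^{N-2}Du\right)
=D\big(r^{-2(N-2)}\big)\cdot Du+r^{-2(N-2)}\Delta u
=2(N-2)x_{N}r^{-2N}-2(N-2)x_{N}r^{-2N}=0.$$
```

This is consistent with the convergence obtained in Step 2, where the blow-ups $\tilde u^{r}$ were shown to converge to precisely this profile.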
\mysection {The classification theorem}
In this section we characterize all the positive $N$-harmonic functions
vanishing on the boundary of a domain except at one point. The next statement is an immediate consequence of \rth {gen} and \cite [Th. 2.11]{BBV}.
\bth {est}. Let $\Omega$ be a bounded domain with a $C^2$ boundary and $a\in\prt\Omega$. If $u$ is a positive $N$-harmonic function in $\Omega$ vanishing on $\prt\Omega\setminus\{a\}$, there exists $M\geq 0$ such that
\begin {equation}\label {est1}
u(x)\leq Mu_{1,a}(x)\quad \forall x\in\Omega} \def\Gx{\Xi} \def\Gy{\Psi
\end {equation}
\es
In the next theorem, which extends \cite [Th. 2.13]{BBV}, we characterize all the signed $N$-harmonic functions with a moderate growth near the singular point.
\bth {charact}. Let $\Omega$ be a bounded domain with a $C^2$ boundary and $a\in\prt\Omega$. Assume that $u_{1,a}$ has only a finite number of critical points in $\Omega$. If $u$ is an $N$-harmonic function in $\Omega$ vanishing on $\prt\Omega\setminus\{a\}$ and verifying
$\abs {u(x)}\leq Mu_{1,a}(x)$ for some $M>0$ and any $x\in\Omega$, there
exists $k\in[-M,M]$ such that $u=ku_{1,a}$.
\es
\note{Proof} We define $k$ as the minimum of the $\ell$ such that $u\leq \ell u_{1,a}$ in $\Omega$. Without any loss of generality we can assume $k>0$. Then either the tangency of the graphs of the functions $u$ and $ku_{1,a}$ is achieved in $\overline\Omega\setminus \{a\}$, or it is achieved asymptotically at the singular point $a$. In the first case we consider two sub-cases:\\[1mm]
(i) The coincidence set $G$ of $u$ and $ku_{1,a}$ has a connected component $\gw$ isolated in $\Omega} \def\Gx{\Xi} \def\Gy{\Psi$. In this case there exists a smooth domain $\CU$ such that $\overline\gw\subset\CU$ and $\gd>0$ such that $ku_{1,a}-u\geq\gd$ on $\prt\CU$. The maximum principle implies that $ku_{1,a}-u\geq\gd$ in $\CU$, a contradiction.\\[1mm]
(ii) In the second sub-case any connected component $\gw$ of the coincidence set touches $\prt\Omega\setminus\{a\}$, or the two graphs admit a tangency point on
$\prt\Omega\setminus\{a\}$. If $m\in\gw\cap\prt\Omega\setminus\{a\}$ or is such a tangency point, the regularity theory implies $\prt u(m)/\prt{\bf n}_m=k\prt u_{1,a}(m)/\prt{\bf n}_m$. By the Hopf boundary lemma, $\prt u_{1,a}(m)/\prt{\bf n}_m<0$. By the mean value theorem, the function $w=ku_{1,a}-u$ satisfies an equation
\begin {equation}\label {lin}
Lw=0
\end {equation}
which is elliptic and non degenerate near $m$ (see \cite {FV}, \cite {KV}). It follows that
$w$ vanishes in a neighborhood of $m$, and hence the two graphs cannot be tangent only on
$\prt\Omega\setminus\{a\}$. Assuming that $\gw\neq\Omega$, let $x_0\in\Omega\setminus\gw$
be such that $\opname{dist} (x_0,\gw)=r_0<\gr(x_0)=\opname{dist} (x_0,\prt\Omega)$, and let $y_0\in\gw$ be such that $\abs {x_0-y_0}=r_0$. Since $u_{1,a}$ has at most a finite number of critical points, we can choose $x_0$ such that $y_0$ is not one of these critical points. By assumption $w=ku_{1,a}-u$ is positive in $B_{r_0}(x_0)$ and vanishes at the boundary point $y_0$. Since the equations are not degenerate at $y_0$, there holds
$$k\prt u_{1,a}(y_0)/\prt {\bf \gn}-\prt u(y_0)/\prt {\bf \gn}< 0
$$
where $\gn=(y_0-x_0)/r_0$, which contradicts the fact that the two graphs are tangent at $y_0$.\smallskip
\noindent Next we are reduced to the case where the graphs of $u$ and $ku_{1,a}$ are separated in $\Omega$ and asymptotically tangent at the singular point $a$. There exists a sequence $\{\xi_n\}\subset \Omega$ such that
$\lim_{n\to\infty} u(\xi_n)/u_{1,a}(\xi_n)=k$. We set
$\abs {\xi_{n}-a}=r_{n}$, $u_{n}(y)=r_{n}u(a+r_{n}y)$ and $v_{n}(y)=r_{n}u_{1,a}(a+r_{n}y)$. Both $u_{n}$ and $ v_{n}$ are $N$-harmonic in $\Omega_{n}=(\Omega-a)/r_{n}$. The functions $u_{n}$ and $v_{n}$ are locally uniformly bounded in $\overline\Omega_{n}\setminus \{0\}$. It follows, by using classical regularity results, that there exist subsequences $\{u_{n_k}\}$ and $\{v_{n_k}\}$ which converge respectively to $U$ and $V$
in the $C^1_{loc}$-topology of $\overline\Omega_{n_{k}}\setminus\{0\}$. The functions $U$ and $V$ are $N$-harmonic in $H\approx\BBR^N_+=\{x=(x_1,x_2,...,x_N):x_N>0\}$ and vanish on $\prt H\setminus\{0\}$. Since it can be assumed that $(\xi_{n_k}-a)/r_{n_k}\to \xi$, there holds
$U\leq kV$ in $H$, $U(\xi)=kV(\xi)$, if $\xi\in H$, and $\prt U(\xi)/\prt x_N=k\prt V(\xi)/\prt x_N>0$, if
$\xi\in \prt H$ (notice that $\abs {\xi}=1$). If $\xi\in\prt H$, Hopf's lemma applies to $V$ at $\xi$ and, using the same linearization with the linear operator $L$ as in the previous part of the proof, it yields
$U=kV$. If $\xi\in H$, we use the fact that $\abs {Du_{1,a}(x)}\geq \gb>0$ for $\abs {x-a}\leq \alpha$, for some $\gb,\alpha>0$. Thus $\abs {Dv_n(\xi)}\geq \gb$. The non-degeneracy of $V$ and the strong maximum principle lead again to $U=kV$. Whatever the position of $\xi$, the equality between $U$ and $kV$ and the convergence in
$C^1_{loc}$ leads to the fact that for any $\ge>0$ there exists $n_\ge\in\BBN$ such that $n\geq n_\ge$ implies
$$(k-\ge)u_{1,a}(x)\leq u(x)\leq (k+\ge)u_{1,a}(x)\quad \forall x\in\Omega} \def\Gx{\Xi} \def\Gy{\Psi\cap \prt B_{r_n}(a).
$$
By the comparison principle between $N$-harmonic functions this inequality holds true in $\Omega\setminus B_{r_n}(a)$. Since $r_n\to 0$ and $\ge$ is arbitrary, this ends the proof.
\hspace{10mm}\hfill $\square$ \\
\noindent\note{Remark} The assumption that $u_{1,a}$ has only isolated critical points in
$\Omega$ is clearly satisfied in the case of a ball, a half-space or the complement of a ball, where no critical point exists. It is likely that this assumption always holds but we cannot prove it. However the Hopf maximum principle for $p$-harmonic functions (see \cite {To}) implies that $u_{1,a}$ cannot have a local extremum in $\Omega$.
\mysection {Separable solutions of the $p$-harmonic spectral problem}
In this section we present a technique for constructing signed $N$-harmonic functions, regular or singular,
as products of functions depending only on one real variable. Some of the results were sketched in \cite {Ve4}. The starting point is a result of Krol \cite {Kr} dealing with the existence of $2$-dimensional separable $p$-harmonic functions (the construction of singular separable $p$-harmonic functions was performed in \cite {KV}).
\bth {Krth} {\rm (Krol)} Let $p>1$. For any positive integer $k$ there exists a unique $\gb_{k}>0$ and
$\gw_{k}:\BBR\mapsto\BBR$, with least antiperiod $\gp/k$, of class $C^\infty $ such that
\begin {equation}\label {psrad1}
u_{k}(x)=\abs x^{\gb_{k}}\gw_{k}(x/\abs x)
\end {equation}
is $p$-harmonic in $\BBR^2$; $\gb_{k}$ is the unique root $\geq 1$ of
\begin {equation}\label {psrad2}
(2k-1)X^2-\myfrac {pk^2+(p-2)(2k-1)}{p-1}X+k^2=0.
\end {equation}
$(\gb_{k},\gw_{k})$ is unique up to translation and homothety over $\gw_{k}$.
\es
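As a quick sanity check, ours and not in the original, the value of $\gb_{1}$ can be read directly from (\ref {psrad2}), which for $k=1$ reduces to a perfect square:

```latex
\noindent For $k=1$ one has $2k-1=1$ and $pk^2+(p-2)(2k-1)=2(p-1)$, so (\ref {psrad2}) becomes
$$X^2-\myfrac {2(p-1)}{p-1}X+1=X^2-2X+1=(X-1)^2=0,$$
whose unique root is $X=1$, for every $p>1$.
```

This recovers $\gb_{1}=1$, in agreement with the explicit solution $\gw_{1}(\theta)=\sin\theta$ recalled below.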
This result is obtained by solving the homogeneous differential equation satisfied by $\gw_{k}=\gw$:
\begin {equation}\label {psrad-1}
-\left(\left(\gb^2\gw^2+\gw^2_{\theta}\right)^{(p-2)/2}\gw_{\theta}\right)_{\theta}
=\gb\left(1+(\gb-1)(p-1)\right)\left(\gb^2\gw^2+\gw^2_{\theta}\right)^{(p-2)/2}\gw.
\end {equation}
In the particular case $k=1$, $\gb_{1}=1$ and $\gw_{1}(\theta)=\sin\theta$. For the other values of $k$ the $\gb_{k}$ are algebraic numbers and the $\gw_{k}$ are not trigonometric functions, except if $p=2$. More generally, if one looks for $p$-harmonic functions in $\BBR^N\setminus\{0\}$ under the form
$u(x)=u(r,\sigma)=r^\gb v(\sigma)$, $r=\abs x>0$, $\sigma=x/\abs x\in S^{N-1}$, one obtains that $v$ verifies
\begin {equation}\label {psrad-p}
- div_{\sigma}\left(\left(\gb^2v^2+\abs {\nabla _{\sigma}v}^2\right)^{(p-2)/2}\nabla _{\sigma}v\right)
=\gl_{N,\gb}\left(\gb^2v^2+\abs {\nabla _{\sigma}v}^2\right)^{(p-2)/2}v
\end {equation}
on $S^{N-1}$, where $\gl_{N,\gb}=\gb\left(N-1+(\gb-1)(p-1)\right)$ and $div_{\sigma}$ and $\nabla_{\sigma}$ are respectively the divergence and the gradient operators on $S^{N-1}$ (endowed with the Riemannian structure induced by the imbedding of the sphere into $\BBR^N$). This equation, called the {\it spherical $p$-harmonic spectral problem}, is the natural generalization of the spectral problem of the Laplace-Beltrami operator on $S^{N-1}$. Since it does not correspond to a variational form (except if $p=2$), it is difficult to obtain solutions. In the range $1<p\leq N-1$, Krol proved in \cite {Kr} the existence of solutions of (\ref {psrad-p}), not on the whole sphere, but on a spherical cap (which reduces (\ref {psrad-p}) to a non-autonomous nonlinear second order differential equation). His methods combined ODE estimates and shooting arguments. Later on, Tolksdorf \cite {To} introduced an entirely new method for proving the existence of solutions on any $C^2$ spherical domain $S$, with Dirichlet boundary conditions. Only the case $\gb>0$ was treated in \cite {To}, and, by a small adaptation of Tolksdorf's approach, the case $\gb<0$ was considered in \cite {Ve4}. We develop below a method which allows one to express solutions as products of explicit one-variable functions.
\subsection {The $3$-D case}
Let $(r,\theta,\phi)\in (0,\infty)\times[0,2\gp]\times [0,\gp]$ be the spherical coordinates in $\BBR^3$:
$$\left\{\BA {l}
x_{1}=r\cos\theta\sin\phi\\
x_{2}=r\sin\theta\sin\phi\\
x_{3}=r\cos\phi
\EA\right.$$
Then (\ref {psrad-p}) turns into
\begin {equation}\label {psrad-p-3}\BA {c}
- \myfrac {1}{\sin\phi}\left[\sin\phi\left(\gb^2v^2+v_{\phi}^2+\myfrac {v^2_{\theta}}{\sin^2\phi}\right)^{(p-2)/2}v_{\phi}\right]_{\phi}-\myfrac {1}{\sin^2\phi}
\left[\left(\gb^2v^2+v_{\phi}^2+\myfrac {v^2_{\theta}}{\sin^2\phi}\right)^{(p-2)/2}
v_{\theta}\right]_{\theta}\\[6mm]
\phantom {--------}
=\gb\left(2+(\gb-1)(p-1)\right)\left(\gb^2v^2+v_{\phi}^2+\myfrac {v^2_{\theta}}{\sin^2\phi}\right)^{(p-2)/2}v
\EA\end {equation}
We look for a function $v$ under the form
\begin {equation}\label {psrad-4}
v(\theta,\phi)=(\sin\phi)^\gb \gw(\theta)
\end {equation}
then
$$\gb^2v^2+v_{\phi}^2+\myfrac {v^2_{\theta}}{\sin^2\phi}
=(\sin\phi)^{2\gb-2}(\gb^2 \gw^2+\gw_{\theta}^2),
$$
$$\myfrac {1}{\sin^2\phi}
\left[\left(\gb^2v^2+v_{\phi}^2+\myfrac {v^2_{\theta}}{\sin^2\phi}\right)^{(p-2)/2}
v_{\theta}\right]_{\theta}
=(\sin\phi)^{(\gb-1)(p-1)-1}\left((\gb^2 \gw^2+\gw_{\theta}^2)^{(p-2)/2}\gw_{\theta}\right)_{\theta},
$$
$$\left(\gb^2v^2+v_{\phi}^2+\myfrac {v^2_{\theta}}{\sin^2\phi}\right)^{(p-2)/2}v
=(\sin\phi)^{(\gb-1)(p-1)+1}(\gb^2 \gw^2+\gw_{\theta}^2)^{(p-2)/2}\gw,
$$
and
$$\BA {l}
\myfrac {1}{\sin\phi}\left[\sin\phi\left(\gb^2v^2+v_{\phi}^2+\myfrac {v^2_{\theta}}{\sin^2\phi}\right)^{(p-2)/2}v_{\phi}\right]_{\phi}\\[6mm]
=\gb(\sin\phi)^{(\gb-1)(p-1)-1}
\left[((\gb-1)(p-1)+1)-\sin^2\phi\;((\gb-1)(p-1)+2)\right](\gb^2 \gw^2+\gw_{\theta}^2)^{(p-2)/2}\gw.
\EA
$$
It follows that $\gw$ satisfies the same equation (\ref {psrad-1}). The next result follows immediately from \rth {Krth}.
\bth {septh} Assume $N=3$ and $p>1$. Then for any positive integer $k$ there exists a $p$-harmonic function $u$ in $\BBR^3$ under the form
\begin {equation}\label {psrad-5}
u(x)=u(r,\theta,\phi)=r^{\gb_{k}}(\sin\phi)^{\gb_{k}}\gw_{k}(\theta)
\end {equation}
where $\gb_{k}$ and $\gw_{k}$ are as in \rth {Krth}.
\es
In the case $p=3$ we can use the conformal invariance of the $3$-harmonic equation in $\BBR^3$ to derive
\bth {septh2} Assume $p=N=3$. Then for any positive integer $k$ there exists a $p$-harmonic function $u$ in $\BBR^3\setminus\{0\}$ under the form
\begin {equation}\label {psrad-5'}
u(x)=u(r,\theta,\phi)=r^{-\gb_{k}}(\sin\phi)^{\gb_{k}}\gw_{k}(\theta)
\end {equation}
where $\gb_{k}$ and $\gw_{k}$ are as in \rth {Krth} with $p=3$.
\es
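The passage from the regular solutions (\ref {psrad-5}) to the singular ones (\ref {psrad-5'}) can be made explicit; the following one-line computation is ours and assumes only the conformal invariance invoked above. In spherical coordinates the inversion $\CI(x)=x/\abs x^2$ reads $(r,\theta,\phi)\mapsto (1/r,\theta,\phi)$, hence

```latex
$$\left(u\circ\CI\right)(r,\theta,\phi)=u\left(1/r,\theta,\phi\right)
=r^{-\gb_{k}}(\sin\phi)^{\gb_{k}}\gw_{k}(\theta),$$
```

which is exactly (\ref {psrad-5'}), and is $3$-harmonic away from the origin since $u\circ\CI$ is.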
As a consequence of \rth {septh} we obtain signed $3$-harmonic functions under the form
(\ref {psrad-5}) in the half space $\BBR^3_{+}=\{x:x_2>0\}$, vanishing on $\prt\BBR^3_{+}\setminus\{0\}$, with a singularity at $x=0$. They correspond to even integers $k$. The extension to general smooth domains $\Omega$ is a deep challenge. In the particular case $k=1$, we have seen that $\gb_{1}=1$ and $\gw_{1}(\theta)=\sin\theta$, so that $u=r\sin\phi\sin\theta=x_{2}$, a function we already know.
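The $p$-harmonicity of the $k=1$ solution $u=r\sin\phi\sin\theta=x_{2}$ mentioned above can be checked in one line (our remark):

```latex
$$u(x)=x_{2}\;\Longrightarrow\; Du=\mathbf{e}_{2},\qquad
\abs{Du}^{p-2}Du=\mathbf{e}_{2},\qquad {div}\left(\abs{Du}^{p-2}Du\right)=0,$$
```

for every $p>1$, the flux field being constant.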
\subsection {The general case}
We assume that $N>3$ and write the spherical coordinates in $\BBR^N$ under the form
\begin {equation}\label {repres0}
x=(r,\sigma)\in (0,\infty)\times S^{N-1},\qquad \sigma=\left(\sin\phi\,\sigma',\,\cos\phi\right),\quad \sigma'\in S^{N-2},\ \phi\in [0,\gp].
\end {equation}
The main result concerning separable $p$-harmonic functions is the following.
\bth {repres} Let $N>3$ and $p>1$. For any positive integer $k$ there exist $p$-harmonic functions
in $\BBR^N$ under the form
\begin{equation}\label{psrad-N}
u(x)=u(r,\sigma',\phi)=(r\sin\phi)^{\gb_{k}}\, w(\sigma').
\end{equation}
where $\gb_{k}$ is the unique root $\geq 1$ of (\ref {psrad2}) and $w$ is solution of (\ref {psrad-N-1})
with $\gb=\gb_{k}$. Furthermore, if $p=N$ there exists a singular $N$-harmonic function under the form
\begin{equation}\label{psrad-N=p}
u(x)=u(r,\sigma',\phi)=r^{-\gb_{k}}(\sin\phi)^{\gb_{k}}\, w(\sigma').
\end{equation}
\es
\note{Proof} We first recall (see \cite {Vi} for details) that the $SO(N)$-invariant unit measure on $S^{N-1}$ is $d\sigma=a_{N}
\sin^{N-2}\phi\,d\sigma'\,d\phi$ for some $a_{N}>0$, and
$$\nabla_{\sigma}v=-v_{\phi}{\bf e}+\myfrac {1}{\sin\phi}\nabla_{\sigma'}v,
$$
where $\bf e$ is derived from $x/\abs x$ by the rotation of center $0$ and angle $\gp/2$ in the plane going through $0$, $x/\abs x$ and the north pole. The weak formulation of (\ref{psrad-p}) reads
\begin {equation}\label {psrad-p2}\BA{l}
\myint{0}{\gp}\myint{S^{N-2}}{}\!\!
\left(\gb^2v^2+v_{\phi}^2+\myfrac {1}{\sin^2\phi}\abs{\nabla_{\sigma'}v}^2\right)^{(p-2)/2}\!\!\left(v_{\phi}\gz_{\phi}+
\myfrac {1}{\sin^2\phi}\nabla_{\sigma'}v.\nabla_{\sigma'}\gz
\right)\sin^{N-2}\phi\,d\sigma'\,d\phi\\[4mm]
\phantom{-------}
=\gl_{N,\gb}\myint{0}{\gp}\myint{S^{N-2}}{}\!\!
\left(\gb^2v^2+v_{\phi}^2+\myfrac {1}{\sin^2\phi}\abs{\nabla_{\sigma'}v}^2\right)^{(p-2)/2}\!\!v\,\gz\sin^{N-2}\phi\,d\sigma'\,d\phi
\EA\end {equation}
or, equivalently
\begin {equation}\label {psrad-p3}\BA{l}
-\myfrac {1}{\sin^{N-2}\phi}\left[\sin^{N-2}\phi\left(\gb^2v^2+v_{\phi}^2+\myfrac {1}{\sin^2\phi}\abs{\nabla_{\sigma'}v}^2\right)^{(p-2)/2}\!\!v_{\phi}\right]_{\phi}
\\[6mm]
\phantom{---}
-\myfrac {1}{\sin^2\phi}div_{\sigma'}\left[\left(\gb^2v^2+v_{\phi}^2+\myfrac {1}{\sin^2\phi}\abs{\nabla_{\sigma'}v}^2\right)^{(p-2)/2}\!\!\nabla_{\sigma'}v\right]
\\[6mm]
\phantom{----------}
=\gl_{N,\gb}
\left(\gb^2v^2+v_{\phi}^2+\myfrac {1}{\sin^2\phi}\abs{\nabla_{\sigma'}v}^2\right)^{(p-2)/2}\!\!v
\EA\end {equation}
where $div_{\sigma'}$ is the divergence operator acting on vector fields on $S^{N-2}$. We look again for $p$-harmonic functions under the form
\begin{equation}\label{psrad-N0}
u(r,\sigma)=u(r,\sigma',\phi)=r^\gb v(\sigma',\phi)=r^\gb\sin^\gb\phi\, w(\sigma').
\end{equation}
Then
$$\left(\gb^2v^2+v_{\phi}^2+\myfrac {1}{\sin^2\phi}\abs{\nabla_{\sigma'}v}^2\right)^{(p-2)/2}
=(\sin\phi)^{(\gb-1)(p-2)}\left(\gb^2w^2+\abs{\nabla_{\sigma'}w}^2\right)^{(p-2)/2},
$$
thus
$$\BA{l}
\myfrac {1}{\sin^{N-2}\phi}\left[\sin^{N-2}\phi\left(\gb^2v^2+v_{\phi}^2+\myfrac {1}{\sin^2\phi}\abs{\nabla_{\sigma'}v}^2\right)^{(p-2)/2}\!\!v_{\phi}\right]_{\phi}\\[6mm]
=\gb (\sin\phi)^{(\gb-1)(p-1)-1}
\left(\left(N-2+(\gb-1)(p-1)\right)-\left(N-1+(\gb-1)(p-1)\right)\sin^2\phi\right)\\[4mm]
{\times}\left(\gb^2w^2+\abs{\nabla_{\sigma'}w}^2\right)^{(p-2)/2}w,
\EA
$$
and
$$\BA {l}
\myfrac {1}{\sin^2\phi}div_{\sigma'}\left[\left(\gb^2v^2+v_{\phi}^2+\myfrac {1}{\sin^2\phi}\abs{\nabla_{\sigma'}v}^2\right)^{(p-2)/2}\!\!\nabla_{\sigma'}v\right]\\[6mm]
\phantom {---}
=(\sin\phi)^{(\gb-1)(p-1)-1}div_{\sigma'}\left[\left(\gb^2w^2+\abs{\nabla_{\sigma'}w}^2\right)^{(p-2)/2}\nabla_{\sigma'}w\right]
\EA$$
Finally $w$ satisfies
\begin {equation}\label {psrad-N-1}
-div_{\sigma'}\left[\left(\gb^2w^2+\abs{\nabla_{\sigma'}w}^2\right)^{(p-2)/2}\nabla_{\sigma'}w\right]
=\gl_{N-1,\gb}\left(\gb^2w^2+\abs{\nabla_{\sigma'}w}^2\right)^{(p-2)/2}w
\end {equation}
on $S^{N-2}$, which is the desired induction. \hspace{10mm}\hfill $\square$
\\
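For the reader's convenience we record the bookkeeping behind the last reduction; this computation is ours and only makes explicit what is implicit in the proof. Write $A=(\gb-1)(p-1)$ and $Q=\left(\gb^2w^2+\abs{\nabla_{\sigma'}w}^2\right)^{(p-2)/2}$; using $(\gb-1)(p-2)+\gb=A+1$ and dividing (\ref {psrad-p3}) by $(\sin\phi)^{A-1}$ one gets

```latex
$$-\gb\left[(N-2+A)-(N-1+A)\sin^2\phi\right]Qw
-{div}_{\sigma'}\left[Q\nabla_{\sigma'}w\right]
=\gb(N-1+A)\sin^2\phi\,Qw,$$
so the two terms containing $\sin^2\phi$ cancel and there remains
$$-{div}_{\sigma'}\left[Q\nabla_{\sigma'}w\right]=\gb(N-2+A)\,Qw=\gl_{N-1,\gb}\,Qw,$$
```

which is (\ref {psrad-N-1}) on $S^{N-2}$.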
In order to be more precise, we can completely represent the preceding solutions by introducing the generalized Euler angles in $\BBR^N=\{x=(x_{1},...,x_{N})\}$
\begin{equation}\label{Euler}\left\{\BA {l}
x_{1}=r\sin\theta_{N-1}\sin\theta_{N-2}...\sin\theta_{2}\sin\theta_{1}\\
x_{2}=r\sin\theta_{N-1}\sin\theta_{N-2}...\sin\theta_{2}\cos\theta_{1}\\[-1mm]
.\\[-1mm]
.\\[-1mm]
.\\[-1mm]
x_{N-1}=r\sin\theta_{N-1}\cos\theta_{N-2}\\
x_{N}=r\cos\theta_{N-1}
\EA\right.\end{equation}
where $\theta_{1}\in [0,2\gp]$ and $\theta_{k}\in [0,\gp]$ for $k=2,...,N-1$. Notice that $\theta_{N-1}$ is the variable $\phi$ in the representation (\ref {repres0}). The above theorem combined with the induction process yields the following.
\bth {repres2} Let $N>3$ and $p>1$. For any positive integer $k$ there exist $p$-harmonic functions
in $\BBR^N$ under the form
\begin{equation}\label{psrad-N1}
u(x)=(r\sin\theta_{N-1}\sin\theta_{N-2}...\sin\theta_{2})^{\gb_{k}}\gw_{k}(\theta_1)
\end{equation}
where $(\gb_{k},\gw_{k})$ are obtained in \rth {Krth}. Furthermore, if $p=N$ there exists a singular $N$-harmonic function under the form
\begin{equation}\label{psrad-N2}
u(x)=r^{-\gb_{k}}(\sin\theta_{N-1}\sin\theta_{N-2}...\sin\theta_{2})^{\gb_{k}}\gw_{k}(\theta_1).
\end{equation}
\es
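As an elementary illustration, ours and not in the original, take $k=1$ in (\ref {psrad-N1}): since $\gb_{1}=1$ and $\gw_{1}(\theta_{1})=\sin\theta_{1}$, the Euler angles (\ref {Euler}) give

```latex
$$u(x)=r\sin\theta_{N-1}\sin\theta_{N-2}...\sin\theta_{2}\sin\theta_{1}=x_{1},$$
```

a linear, hence trivially $p$-harmonic, function; the solutions with $k\geq 2$ are no longer linear.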
Flavor changing neutral current (FCNC) interactions serve as an important probe of the standard model (SM) and of possible new interactions beyond it. The reason is that these interactions are forbidden at the tree level within the SM and can occur only via one or more loops. Hence such processes are highly suppressed within the SM. The high statistics experiments at the Large Hadron Collider (LHC) and the Super-B factories will measure FCNC interactions with high accuracy and hence will provide tests of higher order corrections to the SM and its possible extensions.
The quark level transition $b \to s \mu^+ \mu^-$ serves as an important probe to test the SM at the loop level and also constrain many of its possible extensions. The $b \to s \mu^+ \mu^-$ transition is responsible for the (i) inclusive semi-leptonic decay $B \to X_s \mu^+ \mu^-$,
(ii) exclusive semi-leptonic decays $B \to
(K, K^*) \mu^+ \mu^-$, and (iii) purely leptonic decay ${B}_s \to \mu^+ \mu^-$. Both the exclusive and inclusive semi-leptonic decays have been observed in experiments \cite{babar04_incl,belle05_incl,babar-03,babar-06,belle-03,hfag} with a branching ratio close to their SM predictions \cite{ali-02,lunghi,kruger-01,isidori,Asatryan:2001zw,Asatrian:2001de}.
The decay ${B}_s \to \mu^+ \mu^-$ is helicity suppressed in the SM and its branching ratio is predicted to be
$(3.35\pm 0.32)\times 10^{-9}$ \cite{blanke,buchalla,buras-01}. This decay is yet to be observed experimentally. Recently the upper
bound on its branching ratio has been improved to \cite{cdf-07}
\begin{equation}
Br({B}_s \to \mu^+ \mu^-)<5.8 \times 10^{-8} \qquad (2\sigma\, \rm C.L.)
\end{equation}
which is still more than an order of magnitude above its SM prediction. ${B}_s \to \mu^+ \mu^-$ will be one of the important rare B
decays to be studied at the LHC.
LHCb can exclude the region between $10^{-8}$ and the SM prediction with very little luminosity, $\sim 0.5\, fb^{-1}$. It has the potential for a
$3 \sigma$ ($5 \sigma$) observation (discovery) of the SM prediction with $\sim 2\, fb^{-1}$ ($\sim 6\, fb^{-1}$) of data \cite{Lenzi:2007nq}. Hence, with
a sensitivity exceeding the SM prediction of $Br(B_s \to \mu^+ \mu^-)$, LHCb will be able to observe both enhancement and suppression of the branching ratio of ${B}_s \to \mu^+ \mu^-$.
The general purpose detectors, ATLAS and CMS can reconstruct the ${B}_s \to \mu^+ \mu^-$ signal with $3 \sigma$ significance after three years of running at a luminosity of $10^{33}\,{\rm cm}^{-2}{\rm s}^{-1}$ \cite{Smizanska:2008qm}.
As ${B}_s \to \mu^+ \mu^-$ is highly suppressed within the SM, it can serve as an
important probe to test many new physics models. New physics in the form
of magnetic dipole and tensor operators does not contribute to $Br(B_s \to \mu^+ \mu^-)$. In
\cite{Alok:2005ep}, it was shown that new physics in the form of
vector/axial-vector operators are constrained by the present measurement
of the branching ratios of the exclusive semi-leptonic decays $B \to
(K,K^*)\mu^+ \mu^-$ and an order of magnitude enhancement in $Br(B_s \to \mu^+ \mu^-)$ due
to such operators is not possible. However if new physics is in the form
of scalar/pseudoscalar operators then the measurement of $B \to K^*
\mu^+ \mu^-$ fails to put any useful constraint on the new physics
couplings and allows an order of magnitude or more enhancement in $Br(B_s \to \mu^+ \mu^-)$.
Therefore $Br(B_s \to \mu^+ \mu^-)$ is sensitive to the new physics models with non standard
scalar particles like the multi Higgs doublet models and supersymmetric
models. This is the reason why the decay ${B}_s \to \mu^+ \mu^-$ has been studied in
literature in great detail in context of multi-Higgs doublet models as
well as supersymmetry (SUSY) \cite{Skiba:1992mg}-\cite{Allanach:2009ne}.
SUSY is among the leading candidates for extensions of the SM. Besides providing a natural
solution to the electroweak hierarchy problem, it has many attractive features: it suppresses flavor changing neutral currents, provides a
candidate for cold dark matter, and is a common ingredient of superstrings/M-theory. Because SUSY is broken, its superpartners are yet to be observed in experiments.
In this paper we study ${B}_s \to \mu^+ \mu^-$ in the context of the R-parity violating (RPV) minimal supergravity (mSugra) framework for large $\tan\beta$. Though ${B}_s \to \mu^+ \mu^-$ has been studied in RPV supersymmetry in \cite{Chen:2005kt,Xu:2006vk,Wang:2007sp}, these studies focus on the contributions of the RPV terms only. Here, we present the contributions of the two Higgs doublet and R-parity conserving (RPC) terms along with the RPV terms and study the enhancement as well as the suppression of $Br(B_s \to \mu^+ \mu^-)$ as compared to its SM prediction. We also study the constraints on the mSugra parameter space coming from the present upper bound on the branching ratio of ${B}_s \to \mu^+ \mu^-$. In addition, we also see how the constraints change if the upper bound on $Br(B_s \to \mu^+ \mu^-)$ is brought down to a value close to its SM prediction.
The paper is organized as follows. In Sec. \ref{bsbr}, we discuss ${B}_s \to \mu^+ \mu^-$
in the effective theory.
In Sec. \ref{rpvsusy}, we present the theoretical expressions for the branching ratio of ${B}_s \to \mu^+ \mu^-$ in the RPV mSugra.
In Sec. \ref{res}, we discuss our results.
Finally in Sec. \ref{concl}, we present our conclusions.
\section{${B}_s \to \mu^+ \mu^-$ in the effective theory }
\label{bsbr}
The most general model independent form of the effective
Lagrangian for the quark level transition $b \to s \mu^+ \mu^-$ that
contributes to the decay ${B}_s \to \mu^+ \mu^-$ has the form \cite{Grossman:1996qj,Guetta:1997fw}
\begin{eqnarray}
L & = & \frac{G_F \alpha}{2 \sqrt{2} \pi}
\left( V_{ts}^\ast V_{tb} \right) \,
\left\{
R_A
(\bar{s} \, \gamma_\mu \gamma_5 \, b)
(\bar{\mu} \, \gamma^\mu \gamma_5 \, \mu)
\right. \nonumber \\
& & \left. \; \; \; \; \; \; \; \; \; \; \; \; \; \;
+
R_S
(\bar{s} \, \gamma_5 \, b)
(\bar{\mu} \, \mu)
+
R_P
(\bar{s} \, \gamma_5 \, b)
(\bar{\mu} \, \gamma_5 \, \mu)
\right\} \; ,
\label{eqn:heff1}
\end{eqnarray}
where $R_P, R_S$ and $R_A$ are the strengths of the scalar,
pseudoscalar and axial vector operators respectively.
$R_A$ in eq.~(\ref{eqn:heff1}) is the sum of SM and new physics contributions.
Here we assume $R_A \simeq R_A^{SM}$.
In SM, the
scalar and pseudoscalar couplings $R_S^{\rm SM}$ and $R_P^{\rm
SM}$ receive contributions from the penguin diagrams with physical
and unphysical neutral scalar exchange and are highly suppressed:
\begin{equation}
R_S^{\rm SM} = R_P^{\rm SM} \propto \frac {(m_{\mu} m_b)}{m_{W}^{2}} \sim 10^{-5} \; .
\end{equation}
Also, $R_A^{\rm SM } = {Y(x)}/{\sin^2 \theta_W}$, where $Y(x)$ is
the Inami-Lim function \cite{inamilim}
\begin{equation}
Y(x)=\frac{x}{8}\, \left[\frac{x-4}{x-1}+ \frac{3x}{(x-1)^2} \ln x
\right] \; ,
\end{equation}
with $x =({m_t}/{M_W})^2$. Thus, $R_A^{\rm SM }\simeq 4.3$.
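As a quick numerical cross-check of the quoted value $R_A^{\rm SM}\simeq 4.3$, the standard form of the Inami-Lim function $Y(x)$ can be evaluated with the mass inputs of Table \ref{table1}. This is only a sketch: the text does not fix the value of $\sin^2\theta_W$, so $\sin^2\theta_W \simeq 0.231$ is an assumption made here, which reproduces the quoted number to within a few percent.

```python
import math

def inami_lim_Y(x):
    # Inami-Lim function Y(x) in its standard form
    return (x / 8.0) * ((x - 4.0) / (x - 1.0)
                        + 3.0 * x * math.log(x) / (x - 1.0) ** 2)

m_t, M_W = 172.3, 80.403       # GeV, as in Table 1
sin2_thetaW = 0.2312           # assumed value, not fixed in the text

x = (m_t / M_W) ** 2
Y = inami_lim_Y(x)
R_A_SM = Y / sin2_thetaW
print(Y, R_A_SM)               # Y close to 1, R_A^SM close to 4.3
```

With these inputs $Y(x)\approx 1.03$ and $R_A^{\rm SM}\approx 4.4$; the precise value depends on the scheme chosen for $\sin^2\theta_W$.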
The calculation of the branching ratio gives \cite{Alok:2008hh}
\begin{equation}
B({B}_s \to \mu^+ \mu^-)=a_s\left[|b_{SM}-b_P|^2\,+\,|b_S|^2\right]\; ,
\label{blep_gen}
\end{equation}
where
\begin{equation}
a_s \equiv \frac{G_F^2 \alpha^2}{64 \pi^3} \,
\left| V_{ts}^\ast V_{tb} \, \right|^2 \tau_{B_s} f_{B_s}^2 m^3_{B_s} \,
\sqrt{ 1 - \frac{4 m_\mu^2 }{m_{B_s}^2} } \; ,
\end{equation}
\begin{equation}
b_{SM}=2 \frac{m_\mu}{m_{B_s}} R_A^{SM}\;, \quad
b_{P}=\frac{m_{B_s} } {m_b + m_s} R_P\;, \quad
b_{S}=\sqrt{1 - \frac{4 m_\mu^2 }{m_{B_s}^2}}
\frac{m_{B_s}}{m_b + m_s} R_S\;.
\label{b-def}
\end{equation}
Here $\tau_{B_s}$ is the lifetime of $B_s$.
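To illustrate how eqs.~(\ref{blep_gen})--(\ref{b-def}) are used, the following sketch evaluates the SM limit ($R_S=R_P=0$, $R_A=R_A^{\rm SM}\simeq 4.3$) with the inputs of Table \ref{table1}. This tree-level evaluation lands around $4\times 10^{-9}$, in reasonable agreement with the quoted prediction $(3.35\pm 0.32)\times 10^{-9}$, which includes higher-order corrections and different input choices.

```python
import math

# Table 1 inputs, all masses in GeV; hbar converts the lifetime to GeV^-1
G_F, alpha = 1.166e-5, 1.0 / 137.0
hbar = 6.582e-25                       # GeV s
tau_Bs = 1.45e-12 / hbar               # lifetime in GeV^-1
m_Bs, m_mu, m_b, m_s = 5.366, 0.105, 4.20, 0.100
Vts, Vtb, f_Bs = 40.6e-3, 1.0, 0.259

beta = math.sqrt(1.0 - 4.0 * m_mu**2 / m_Bs**2)
a_s = (G_F**2 * alpha**2 / (64.0 * math.pi**3)
       * (Vts * Vtb)**2 * tau_Bs * f_Bs**2 * m_Bs**3 * beta)

R_A_SM, R_S, R_P = 4.3, 0.0, 0.0       # SM: scalar/pseudoscalar negligible
b_SM = 2.0 * m_mu / m_Bs * R_A_SM
b_P = m_Bs / (m_b + m_s) * R_P
b_S = beta * m_Bs / (m_b + m_s) * R_S

Br = a_s * (abs(b_SM - b_P)**2 + abs(b_S)**2)
print(Br)                              # a few times 1e-9
```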
\section{${B}_s \to \mu^+ \mu^-$ in the R-parity violating mSugra}
\label{rpvsusy}
To begin with, the MSSM superpotential, written in terms of the Higgs, quark and
lepton superfields, is given by
\begin{equation}
W_{MSSM} = Y^l_{ij} L_i H_d E^c_j + Y^d_{ij} Q_i H_d D^c_j
+ Y^u_{ij} Q_i H_u U^c_j + \mu H_d H_u
\end{equation}
\noindent
where $L$, $Q$ are the left-chiral lepton and quark superfields
respectively, $E$, $U$, $D$ are the right-chiral lepton, up-type and down-type quark superfields respectively, and $H_d$ and $H_u$ are the down-type and up-type Higgs superfields respectively. $i$, $j$ are family indices. $Y$'s denote the Yukawa matrices and $\mu$ is the Higgsino mass parameter.
Now, if $R=(-)^{(3B+L+2S)}$ is not conserved, the superpotential
admits the following additional terms~\cite{rp}:
\begin{equation}
W_{R_p{\!\!\!\!\!\!/\ }} = \lambda_{ijk} L_i L_j E^c_k
+ \lambda^{'}_{ijk} L_i Q_j D^c_k
+ \lambda^{''}_{ijk} \bar{U}_i D^c_j \bar{D}_k
+ \epsilon_{i} L_{i} H_u
\end{equation}
\noindent
where the terms proportional to $\lambda_{ijk}$, $\lambda_{ijk}^{'}$
and $\epsilon_i$ violate lepton number, and those proportional to
$\lambda_{ijk}^{''}$ violate baryon number. The constants
$\lambda_{ijk}$ ($\lambda_{ijk}^{''}$) are antisymmetric in the first
(last) two indices.
If SUSY were exact, superparticles would have the same masses as their SM counterparts. Since no superparticle has been observed in experiments to date, SUSY must be broken. SUSY breaking can be parametrized by introducing a soft mass term for each superparticle, which leaves SUSY with a large number of free parameters ($\sim$ 150). In the mSugra framework, where particles of the same spin share a common (soft) mass at a high scale ($\Lambda \sim 10^{16}\, GeV$), the whole SUSY spectrum can be described in terms of five free parameters at
the high scale $\Lambda$. These are
as follows:
\begin{itemize}
\item A universal scalar mass, $m_0$
\item A universal fermion mass, $m_{1/2}$
\item A universal trilinear coupling, $A$
\item $\tan\beta=\langle H_u \rangle/\langle H_d \rangle$, the ratio of the vacuum expectation values (vevs) of the up- and down-type Higgses, and
\item $sgn(\mu)$, sign of the Higgsino mass parameter.
\end{itemize}
We now compute $R_S$ and $R_P$, the Wilson coefficients of the scalar and pseudoscalar
operators, in the $b\to s \mu^+ \mu^-$ transition, within the context of RPV mSugra. We focus on
the large $\tan\beta $ scenario ($\tan\beta > 30$). There are three contributions to the $R_S$ and $R_P$: (i) Contribution from the Higgs sector,
(ii) Contribution from the RPC sector and, (iii) Contribution from the RPV sector.
\subsection{Contribution from the Higgs sector}
In SUSY, up-type quarks couple to one Higgs doublet ($H_u$) while the
down-type quarks couple to the other Higgs doublet ($H_d$), the same structure as in the type-II two Higgs doublet model (2HDM).
Retaining only leading terms in $\tan\beta$ (which is a valid approximation for $\tan\beta > 30$), the Higgs contribution to
$R_S$ and $R_P$ is given by \cite{Bobeth:2001sq}
\begin{equation}
R_S^{\rm 2HDM} = -R_P^{\rm 2HDM} = \frac{m_b\,m_l\,\tan^2\beta}{4M_W^2\sin^2\theta_W}
\frac{\ln r}{1-r},\quad r=\frac{m_{H^{\pm}}^2}{m_t^2}.
\label{constrained:2HDM:res}
\end{equation}
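The size of this piece can be gauged with a small sketch of eq.~(\ref{constrained:2HDM:res}). The sample point $\tan\beta=50$, $m_{H^{\pm}}=500$ GeV and the value $\sin^2\theta_W\simeq 0.231$ are assumptions made here for illustration only, not taken from the text.

```python
import math

def R_S_2HDM(tan_beta, m_Hpm, m_b=4.20, m_l=0.105,
             m_t=172.3, M_W=80.403, sin2_thetaW=0.2312):
    # leading tan^2(beta) Higgs-sector contribution; R_P^2HDM = -R_S^2HDM
    r = (m_Hpm / m_t) ** 2
    return (m_b * m_l * tan_beta**2
            / (4.0 * M_W**2 * sin2_thetaW)) * math.log(r) / (1.0 - r)

rs = R_S_2HDM(tan_beta=50.0, m_Hpm=500.0)   # illustrative sample point
print(rs)                                    # negative, of order 0.05
```

Since $\ln r/(1-r)<0$ for all $r$, $R_S^{\rm 2HDM}$ is negative, and its magnitude grows as $\tan^2\beta$, which is why the large-$\tan\beta$ regime is the interesting one here.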
\subsection{Contribution from the RPC sector}
We consider a scenario with minimal flavour violation, i.e. we assume
flavour-diagonal sfermion mass matrices. The contributing
SUSY diagrams, in addition to those of the 2HDM,
then involve only the two chargino states.
The RPC contribution to
$R_S$ and $R_P$ is given by \cite{Bobeth:2001sq}
\begin{equation}
R_S^{RPC}=R_{S}^{\text{box}}+R_{S}^{\text{peng}}+R_S^{\text{count}}\;,
\end{equation}
\begin{equation}
R_P^{RPC}=R_{P}^{\text{box}}+R_{P}^{\text{peng}}+R_P^{\text{count}}\;,
\end{equation}
where
\begin{eqnarray}\label{box}
R_{S,P}^{\text{box}} = &\mp& \frac{m_b m_l\tan^2\beta}
{2M_W}\sum_{i,j=1}^{2}\sum_{a=1}^{6}\sum_{k,m,n=1}^{3}
\frac{1}{m_{\tilde{\chi}_i^{\pm}}^2}
\Bigg\{
(R^\dagger_{\tilde{\nu}})_{lk}(R_{\tilde{\nu}})_{kl}
(\Gamma^{U_L})_{am}U_{j2}\Gamma^a_{imn}\nonumber\\
&\times&\Bigg(y_{ai} U_{j2}^{\ast}
V_{i1}^{\ast}\pm\frac{m_{\tilde{\chi}_j^{\pm}}}{m_{\tilde{\chi}_i^{\pm}}}
U_{i2}V_{j1}\Bigg)D_1(x_{ki},y_{ai},z_{ji})\Bigg\},
\end{eqnarray}
\begin{eqnarray}\label{peng}
R_{S,P}^{\text{peng}} = &\pm&\frac{m_b m_l\tan^2\beta}{M_W^2(m_{H^{\pm}}^2-M_W^2)}
\sum_{i,j=1}^{2}\sum_{a,b=1}^{6}\sum_{k,m,n=1}^{3}
\Gamma^a_{imn}(\Gamma^{U_L})_{bm}U_{j2}\nonumber \\
&\times&\Bigg\{M_W\Bigg(y_{aj}
U_{j2}^{\ast}V_{i1}^{\ast}\pm\frac{m_{
\tilde{\chi}_i^{\pm}}}{m_{\tilde{\chi}_j^{\pm}}}U_{i2}V_{j1}\Bigg)D_2(y_{aj},z_{ij})\delta_{ab}\delta_{km}\nonumber\\
&-&\frac{ (M_U)_{kk}}{\sqrt{2}m_{\tilde{\chi}_i^{\pm}}}[\mu^*(\Gamma^{U_R})_{ak}(\Gamma^{U_L\dagger})_{kb}
\pm\mu(\Gamma^{U_L})_{ak}(\Gamma^{U_R\dagger})_{kb}]
D_2(y_{ai},y_{bi})\delta_{ij}\Bigg\},
\end{eqnarray}
\begin{eqnarray}\label{count}
R_{S,P}^{\text{count}} &=&\mp
\frac{m_b m_l\tan^3\beta}{\sqrt{2}M_W^2
(m_{H^{\pm}}^2-M_W^2)}\sum_{i=1}^{2}\sum_{a=1}^{6}\sum_{m,n=1}^{3}
[m_{\tilde{\chi}_i^{\pm}} D_3(y_{ai})U_{i2}(\Gamma^{U_L})_{am}\Gamma^a_{imn}],
\end{eqnarray}
where
\begin{equation}
M_U\equiv {\mathrm{diag}}(m_u, m_c, m_t),
\end{equation}
\begin{equation}\label{gamma}
\Gamma^a_{imn}= \frac{1}{2\sqrt{2}\sin^2\theta_W}
[\sqrt{2}M_W V_{i1}(\Gamma^{U_L\dagger})_{na}-(M_U)_{nn}V_{i2}
(\Gamma^{U_R\dagger})_{na}]\lambda_{mn}\;.
\end{equation}
The mass ratios are defined as
\begin{equation}
x_{ki}=\frac{m_{\tilde{\nu}_k}^2}{m_{\tilde{\chi}_i^{\pm}}^2},\quad
y_{ai}=\frac{m_{\tilde{u}_a}^2}{m_{\tilde{\chi}_i^{\pm}}^2},\quad
z_{ij}=\frac{m_{\tilde{\chi}_i^{\pm}}^2}{m_{\tilde{\chi}_j^{\pm}}^2}\;,
\end{equation}
with $\tilde{\nu}_k$, $\tilde{u}_a$, and $\tilde{\chi}^{\pm}_i$ denoting sneutrinos,
up-type squarks, and charginos.
The ratio of CKM factors
$\lambda_{mn}\equiv V_{mb}^{}V_{ns}^{\ast}/V_{tb}^{}V_{ts}^{\ast}$, and the
functions $D_{1,2,3}$ are listed in Appendix A of ref. \cite{Bobeth:2001sq}.
\subsection{Contribution from the RPV sector }
The RPV contribution to
$R_S$ and $R_P$ is given by \cite{Mir:2008zz,Dreiner:2006gu}
\begin{equation}
R_S^{RPV}=\frac{1}{4}\left(B^{bq'}_{\beta \beta}-C^{bq'}_{\beta \beta}\right)\;,
\end{equation}
\begin{equation}
R_P^{RPV}=\frac{1}{4}\left(B^{bq'}_{\beta \beta}+C^{bq'}_{\beta \beta}\right)\;,
\end{equation}
where
\begin{equation}
B_{\beta \beta }^{bq'}=\frac{2\sqrt{2}\pi}{G_{F}\alpha }\underset{i=1}{\overset{3%
}{\sum }}\frac{1}{V_{tb}V_{tq'}^{\ast }}\frac{2\lambda _{i\beta \beta }^{\ast
}\lambda _{iq'3}^{\prime }}{m_{\widetilde{\nu }_{iL}}^{2}}
\end{equation}%
\begin{equation}
C_{\beta \beta }^{bq'}=\frac{2\sqrt{2}\pi}{G_{F}\alpha }\underset{i=1}{\overset{3%
}{\sum }}\frac{1}{V_{tb}V_{tq'}^{\ast }}\frac{2\lambda _{i\beta \beta
}\lambda _{i3q'}^{\prime \ast }}{m_{\widetilde{\nu }_{iL}}^{2}}
\end{equation}%
Here $q'=s$ and $\beta=\mu$.
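The size of the RPV piece can also be gauged numerically. In the sketch below the couplings are taken real, the coupling products are set at the bound $|\lambda . \lambda^{'}| = 10^{-5}$ used later in the numerical analysis, only the magnitude of the CKM factor $|V_{tb}V_{ts}^{\ast}| = 40.6\times 10^{-3}$ from Table \ref{table1} is kept, and a sneutrino mass of $500$ GeV is an illustrative assumption not fixed by the text.

```python
import math

def rpv_RS_RP(lamB, lamC, m_snu, G_F=1.166e-5,
              alpha=1.0 / 137.0, VtbVts=40.6e-3):
    # B and C coefficients from sneutrino exchange (single generation,
    # real couplings, CKM magnitudes only); m_snu in GeV
    pref = 2.0 * math.sqrt(2.0) * math.pi / (G_F * alpha * VtbVts)
    B = pref * 2.0 * lamB / m_snu**2
    C = pref * 2.0 * lamC / m_snu**2
    return 0.25 * (B - C), 0.25 * (B + C)   # (R_S^RPV, R_P^RPV)

RS, RP = rpv_RS_RP(1e-5, 1e-5, 500.0)
print(RS, RP)   # RS vanishes for equal real products; RP of order 0.1
```

For equal real products the scalar piece cancels and only the pseudoscalar survives, with $R_P^{RPV}\approx 0.1$ at this point, i.e. comparable in magnitude to the Higgs-sector contribution at large $\tan\beta$.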
\begin{table}[t]
\begin{center}
\begin{tabular}{|l|}
\hline
$G_F = 1.166 \times 10^{-5} \; {\text{GeV}}^{-2}$ \\ $\alpha = 1.0/137.0$ \\
$\tau_{B_s} = 1.45 \times 10^{-12}\; s$ \\ $m_{B_s}=5.366 \; {\text{GeV}}$ \\
$m_{\mu}=0.105 \;{\text{GeV}}$ \\ $m_b=4.20\; {\text{GeV}} $ \\
$m_s=0.100\; {\text{GeV}} $\\ $m_t=172.3\; {\text{GeV}} $ \\
$m_W=80.403\; {\text{GeV}} $\\ $V_{tb}= 1.0 $ \\
$|V_{ts}|= (40.6 \pm 2.7) \times 10^{-3}$ \\$f_{B_s}=(259\pm 27)\;$ MeV\;\cite{Mackenzie:2006un}\\
\hline
\end{tabular}
\caption{\small\sf Numerical inputs used in our
analysis. Unless explicitly specified, they are taken from the
Review of Particle Physics~\cite{PDG}. }
\label{table1}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|}
\hline
Parameter & Range\\ \hline
$m_0$ & $100$ -- $2000$ {\text{GeV}}\\
$m_{1/2}$ & $100$ -- $2000$ {\text{GeV}}\\
$A$ & $-3000$ -- $3000$ {\text{GeV}}\\
$\tan\beta$& $30$ -- $80$\\
$sgn(\mu)$ & $+$, $-$\\
\hline
\end{tabular}
\caption{\small\sf Ranges of various Sugra parameters used in our analysis.}
\label{sparam}
\end{center}
\end{table}
\section{Numerical Analysis and Results}
\label{res}
We have already seen in Sec.~\ref{rpvsusy} that the low energy
SUSY mass spectrum can be fully obtained by specifying the mSugra parameter set
\{$m_0, m_{1/2}, A, sgn(\mu)$ and $\tan\beta$\} at the high scale, once R-parity is conserved.
In order to generate the low energy SUSY mass spectrum, we use {\bf
Suspect2.0} \cite{suspect}. The ranges of the various high scale
mSugra parameters used in our analysis are summarized in Table \ref{sparam}. Finally, for the sake of simplicity, in the case of RPV Sugra,
we assume $\lambda_{ijk} = \lambda$, $\lambda^{'}_{ijk} = \lambda^{'}$ and $\lambda^{''}_{ijk} = \lambda^{''}$ at the electroweak scale.
While scanning the parameter space we have taken into account the upper bound on the supersymmetric contribution to the $\rho$-parameter~\cite{Barbieri:1983wy,Lim:1983re,Eliasson:1984yu,Drees:1990dx,Djouadi:1996pa,Djouadi:1998sq}, the anomalous magnetic moment of the muon~\cite{Davier:2003pw,Hagiwara:2003da,de Troconiz:2004tr,Passera:2004bj},
the bounds on the $b\to s \gamma$ branching fraction~\cite{PDG,Kagan:1998ym,Gambino:2001au,Gambino:2000fz,Gambino:2003zm}
and the LEP bounds on the lightest neutral Higgs mass
and superparticle masses~\cite{PDG,lep}.
Further constraints have been imposed in the case of RPC Sugra in order to
avoid a charged lightest superparticle (LSP) since here, unlike the RPV case, the
LSP has to be stable as a consequence of R-parity conservation. Finally,
in order to respect the existing bounds on the products of RPV couplings
from earlier analyses
\cite{rp,Bhattacharyya:1996nj,Dreiner:1997uz,Allanach:1999ic}, we use
$|\lambda . \lambda^{'}| < 10^{-5}$. It is to be noted that the products
involving $\lambda^{''}$ do not affect our analysis. This is simply because
the pure baryon number violating terms do not contribute to the ${B}_s \to \mu^+ \mu^-$
branching ratio.
\subsection{Predictions for ${B}_s \to \mu^+ \mu^-$ branching ratio}
\begin{figure}
\centerline{
{\label{fig:mhbrv0}\includegraphics[angle=-90, width=0.5\textwidth]{v0mhbr}}
{\label{fig:mhbrsv0}\includegraphics[angle=-90, width=0.5\textwidth]{sv0mhbr}}
}
\caption{\small\sf RPV branching ratio for ${B}_s \to \mu^+ \mu^-$ vs $m_{H^\pm}$. The left panel of the Figure corresponds to
$\mu>0$ whereas the right panel corresponds to $\mu<0$.}
\label{fig:rpv5}
\end{figure}
\begin{figure}
\centerline{
{\label{fig:c0c1v0}\includegraphics[angle=-90, width=0.5\textwidth]{v0cc1}}
{\label{fig:c0c1sv0}\includegraphics[angle=-90, width=0.5\textwidth]{sv0cc1}}
}
\caption{\small\sf RPV branching ratio for ${B}_s \to \mu^+ \mu^-$ vs $|\lambda\lambda^\prime|$. The left panel of the Figure corresponds to
$\mu>0$ whereas the right panel corresponds to $\mu<0$.}
\label{fig:rpv6}
\end{figure}
The possible values of $Br(B_s \to \mu^+ \mu^-)$ in the RPV mSugra as functions of $m_{H^\pm}$ and $|\lambda\lambda^\prime|$ are shown in Figures \ref{fig:rpv5} and \ref{fig:rpv6} respectively.
It is obvious from the Figures that the branching ratio of
${B}_s \to \mu^+ \mu^-$ in the RPV mSugra can be enhanced by more than an order of magnitude above its
SM expectation. In that case ${B}_s \to \mu^+ \mu^-$ can even be observed at the Tevatron.
On the other hand, $Br(B_s \to \mu^+ \mu^-)$ can also be suppressed to a value as low
as $2 \times 10^{-12} $ for $\mu >0$ ($3 \times 10^{-10} $ for $\mu<0$),
which is well below the present LHCb sensitivity for ${B}_s \to \mu^+ \mu^-$. In such a situation ${B}_s \to \mu^+ \mu^-$ can be invisible to the LHC.
The possibility of invisibility of ${B}_s \to \mu^+ \mu^-$ at the LHC due to new physics scalar/pseudoscalar
operators was first pointed out in ref. \cite{Alok:2008hh}, whereas in \cite{Dedes:2008iw}
it was shown that, in the minimal supersymmetric standard model,
$Br(B_s \to \mu^+ \mu^-)$ can go well below the SM prediction in the low $\tan\beta$ regime ($\tan\beta< 10$).
Thus we see that even in the large $\tan \beta$ regime, the lowest
value of $Br(B_s \to \mu^+ \mu^-)$ in the RPV mSugra can go several orders of magnitude
below the present LHCb sensitivity. Hence ${B}_s \to \mu^+ \mu^-$ can even be invisible to the LHC.
\subsection{Constraints on the RPV mSugra parameter space from the upper bound on $Br(B_s \to \mu^+ \mu^-)$ }
We now study the constraints imposed on the RPV mSugra parameter space from the upper bound on ${B}_s \to \mu^+ \mu^-$.
In the limit $|\lambda . \lambda'| \rightarrow 0$, which is the case of the RPC mSugra, we find our results consistent with previous works in the literature \cite{Babu:1999hn, Chankowski:2000ng, Dedes:2001fv, Buras:2002vd, Ellis:2007ss}. Hence we discuss constraints on the RPV mSugra parameter space only.
\begin{figure}
\centerline{
\subfloat[$5.8\times10^{-8}$]{\label{fig:msmfv0}\includegraphics[angle=-90, width=0.35\textwidth]{v0msmf}}
\subfloat[$5.0\times10^{-9}$]{\label{fig:msmfv1}\includegraphics[angle=-90, width=0.35\textwidth]{v1msmf}}
\subfloat[$1.0\times10^{-9}$]{\label{fig:msmfv2}\includegraphics[angle=-90, width=0.35\textwidth]{v2msmf}}
}
\caption{\small\sf RPV $(m_0-m_{1/2})$ plane for $\mu>0$ for $Br(B_s \to \mu^+ \mu^-)=(5.8,\,0.50,\,0.10) \times 10^{-8}$.}
\label{fig:rpv1}
\end{figure}
Figure \ref{fig:rpv1} shows the allowed RPV $(m_0,m_{1/2})$ plane for $\mu>0$ for several values of
the branching ratio of ${B}_s \to \mu^+ \mu^-$. The present upper bound on the branching ratio of ${B}_s \to \mu^+ \mu^-$ rules out some parameter space for
$m_{1/2} \lesssim 400\, {\text{GeV}}$. The situation remains almost the same if the upper bound on $Br(B_s \to \mu^+ \mu^-)$ is brought down to $5.0 \times 10^{-9}$.
However if the upper bound on $Br(B_s \to \mu^+ \mu^-)$ is as low as $1.0 \times 10^{-9}$,
which is the LHCb sensitivity for $Br(B_s \to \mu^+ \mu^-)$, a large $(m_0,m_{1/2})$ parameter space is ruled out.
In this case all parameter space in the region
$m_0 \lesssim 900\, {\text{GeV}}$ and $m_{1/2} \lesssim 600\, {\text{GeV}}$ is ruled out.
\begin{figure}
\centerline{
\subfloat[$5.8\times10^{-8}$]{\label{fig:msmfsv0}\includegraphics[angle=-90, width=0.35\textwidth]{sv0msmf}}
\subfloat[$1.0\times10^{-8}$]{\label{fig:msmfsv1}\includegraphics[angle=-90, width=0.35\textwidth]{sv1msmf}}
\subfloat[$5.0\times10^{-9}$]{\label{fig:msmfsv2}\includegraphics[angle=-90, width=0.35\textwidth]{sv2msmf}}
}
\caption{\small\sf RPV $(m_0-m_{1/2})$ plane for $\mu<0$ for $Br(B_s \to \mu^+ \mu^-)=(5.8,\,1.0,\,0.50) \times 10^{-8}$.}
\label{fig:rpvs1}
\end{figure}
The constraints on the RPV plane for $\mu<0$ is shown in Figure \ref{fig:rpvs1}.
We see that in this case the upper bound on the branching ratio of ${B}_s \to \mu^+ \mu^-$ puts more
stringent constraint on $(m_0,m_{1/2})$ plane as compared to the case when $\mu>0$.
It can be seen from the Figure \ref{fig:msmfsv0} that the present upper bound on $Br(B_s \to \mu^+ \mu^-)$ rules out all parameter space for
$m_0 \lesssim 1000\, {\text{GeV}}$. All parameter space above $m_{1/2} \gtrsim 1100\, {\text{GeV}}$ is also ruled out.
For an upper bound on $Br(B_s \to \mu^+ \mu^-)$ close to the SM prediction, almost all parameter space is ruled out.
\begin{figure}
\centerline{
\subfloat[$5.8\times10^{-8}$]{\label{fig:asmsv0}\includegraphics[angle=-90, width=0.35\textwidth]{v0asms}}
\subfloat[$5.0\times10^{-9}$]{\label{fig:asmsv1}\includegraphics[angle=-90, width=0.35\textwidth]{v1asms}}
\subfloat[$1.0\times10^{-9}$]{\label{fig:asmsv2}\includegraphics[angle=-90, width=0.35\textwidth]{v2asms}}
}
\caption{\small\sf RPV $(A-m_0)$ plane for $\mu>0$ for $Br(B_s \to \mu^+ \mu^-)=(5.8,\,0.50,\,0.10) \times 10^{-8}$.}
\label{fig:rpv2}
\end{figure}
\begin{figure}
\centerline{
\subfloat[$5.8\times10^{-8}$]{\label{fig:asmssv0}\includegraphics[angle=-90, width=0.35\textwidth]{sv0asms}}
\subfloat[$1.0\times10^{-8}$]{\label{fig:asmssv1}\includegraphics[angle=-90, width=0.35\textwidth]{sv1asms}}
\subfloat[$5.0\times10^{-9}$]{\label{fig:asmssv2}\includegraphics[angle=-90, width=0.35\textwidth]{sv2asms}}
}
\caption{\small\sf RPV $(A-m_0)$ plane for $\mu<0$ for $Br(B_s \to \mu^+ \mu^-)=(5.8,\,1.0,\,0.50) \times 10^{-8}$.}
\label{fig:rpvs2}
\end{figure}
The constraints on the allowed RPV $(A-m_{0})$ plane for $\mu>0$ and $\mu<0$
are shown in Figure \ref{fig:rpv2} and \ref{fig:rpvs2} respectively.
For $\mu>0$, the present upper bound on the branching ratio of ${B}_s \to \mu^+ \mu^-$
fails to put any useful constraint on the RPV $(A-m_{0})$ plane whereas
for $\mu<0$ all parameter space for $m_0 \lesssim 1000\, {\text{GeV}}$ is ruled out.
If the upper bound on $Br(B_s \to \mu^+ \mu^-)$ is improved to a value close to its SM prediction, the constraints on $(A-m_{0})$ plane
for $\mu>0$ remains almost the same whereas for $\mu<0$, almost all parameter space is ruled out.
\begin{figure}
\centerline{
\subfloat[$5.8\times10^{-8}$]{\label{fig:asmfv0}\includegraphics[angle=-90, width=0.35\textwidth]{v0asmf}}
\subfloat[$5.0\times10^{-9}$]{\label{fig:asmfv1}\includegraphics[angle=-90, width=0.35\textwidth]{v1asmf}}
\subfloat[$1.0\times10^{-9}$]{\label{fig:asmfv2}\includegraphics[angle=-90, width=0.35\textwidth]{v2asmf}}
}
\caption{\small\sf RPV $(A-m_{1/2})$ plane for $\mu>0$ for $Br(B_s \to \mu^+ \mu^-)=(5.8,\,0.50,\,0.10) \times 10^{-8}$.}
\label{fig:rpv3}
\end{figure}
\begin{figure}
\centerline{
\subfloat[$5.8\times10^{-8}$]{\label{fig:asmfsv0}\includegraphics[angle=-90, width=0.35\textwidth]{sv0asmf}}
\subfloat[$1.0\times10^{-8}$]{\label{fig:asmfsv1}\includegraphics[angle=-90, width=0.35\textwidth]{sv1asmf}}
\subfloat[$5.0\times10^{-9}$]{\label{fig:asmfsv2}\includegraphics[angle=-90, width=0.35\textwidth]{sv2asmf}}
}
\caption{\small\sf RPV $(A-m_{1/2})$ plane for $\mu<0$ for $Br(B_s \to \mu^+ \mu^-)=(5.8,\,1.0,\,0.50) \times 10^{-8}$.}
\label{fig:rpvs3}
\end{figure}
Figure \ref{fig:rpv3} shows the allowed RPV $(A-m_{1/2})$ plane for
$\mu>0$ corresponding to several possible values of the branching ratio of ${B}_s \to \mu^+ \mu^-$. It is obvious
from Figure \ref{fig:asmfv0} that the present upper bound on the branching ratio of ${B}_s \to \mu^+ \mu^-$ fails to put any useful constraint on the $(A-m_{1/2})$ plane. The situation remains almost the same even if the upper bound is improved by an order of magnitude. However, a large parameter space is ruled out if the upper bound on ${B}_s \to \mu^+ \mu^-$ is as low as $1.0 \times 10^{-9}$. For $\mu<0$, the allowed RPV $(A-m_{1/2})$ plane is shown in Figure \ref{fig:rpvs3}. It can be seen that the constraints on
$(A-m_{1/2})$ plane are more severe as compared to the case when $\mu>0$. For the present upper bound on $Br(B_s \to \mu^+ \mu^-)$, there is no allowed region for $m_{1/2} \gtrsim 1000\, {\text{GeV}}$. For $Br(B_s \to \mu^+ \mu^-)$ close to the SM prediction, almost all $(A-m_{1/2})$ parameter space is ruled out.
\begin{figure}
\centerline{
\subfloat[$5.8\times10^{-8}$]{\label{fig:tbmhv0}\includegraphics[angle=-90, width=0.35\textwidth]{v0tbmh}}
\subfloat[$5.0\times10^{-9}$]{\label{fig:tbmhv1}\includegraphics[angle=-90, width=0.35\textwidth]{v1tbmh}}
\subfloat[$1.0\times10^{-9}$]{\label{fig:tbmhv2}\includegraphics[angle=-90, width=0.35\textwidth]{v2tbmh}}
}
\caption{\small\sf RPV $(m_{H^\pm}-\tan\beta)$ plane for $\mu>0$ for $Br(B_s \to \mu^+ \mu^-)=(5.8,\,0.50,\,0.10) \times 10^{-8}$.}
\label{fig:rpv4}
\end{figure}
\begin{figure}
\centerline{
\subfloat[$5.8\times10^{-8}$]{\label{fig:tbmhsv0}\includegraphics[angle=-90, width=0.35\textwidth]{sv0tbmh}}
\subfloat[$1.0\times10^{-8}$]{\label{fig:tbmhsv1}\includegraphics[angle=-90, width=0.35\textwidth]{sv1tbmh}}
\subfloat[$5.0\times10^{-9}$]{\label{fig:tbmhsv2}\includegraphics[angle=-90, width=0.35\textwidth]{sv2tbmh}}
}
\caption{\small\sf RPV $(m_{H^\pm}-\tan\beta)$ plane for $\mu<0$ for $Br(B_s \to \mu^+ \mu^-)=(5.8,\,1.0,\,0.50) \times 10^{-8}$.}
\label{fig:rpvs4}
\end{figure}
Figure \ref{fig:rpv4} shows the allowed $(m_{H^\pm}-\tan\beta)$ parameter space in the RPV mSugra for $\mu>0$.
We see that the present upper bound on the branching ratio of ${B}_s \to \mu^+ \mu^-$ puts a useful constraint on the $(m_{H^\pm}-\tan\beta)$ parameter space.
The constraints on the RPV Sugra parameter space become slightly weaker in comparison to the R-parity conserving case due to the additional (RPV) parameters. The constraints are severe if the upper bound on $Br(B_s \to \mu^+ \mu^-)$ is close to the LHCb sensitivity. In this case all $(m_{H^\pm}-\tan\beta)$ parameter space is ruled out for $\tan\beta \gtrsim 50$. If $\mu<0$, then it can be seen from Figure \ref{fig:rpvs4} that for the present upper bound on $Br(B_s \to \mu^+ \mu^-)$, the whole $(m_{H^\pm}-\tan\beta)$ parameter space for $\tan\beta \gtrsim 45$ is ruled out. For $Br(B_s \to \mu^+ \mu^-)$ close to the SM prediction, almost all $(m_{H^\pm}-\tan\beta)$ parameter space is ruled out.
We note that $\mu<0$ is more strongly constrained than $\mu>0$. This is due to the fact that the contribution of the threshold corrections depends on $sign(\mu)$ \cite{Allanach:2009ne}. For $\mu<0$ the Higgs mass gets a negative contribution from the threshold corrections, making it inconsistent with the existing bound on the charged Higgs mass in most of the $\mu<0$ region. We therefore observe a relatively
much smaller allowed parameter space for $\mu<0$ (see Figures \ref{fig:rpv4} and \ref{fig:rpvs4}).
\section{Conclusions}
\label{concl}
In this paper we study the decay ${B}_s \to \mu^+ \mu^-$ in the context of the RPV mSugra in
the high $\tan\beta$ regime. In a minimal flavour violating scenario, we
consider the contributions from the two Higgs doublet and the RPC terms along
with the RPV terms. The results may be summarized as follows:
We show that even in the case of large $\tan\beta$, the lowest possible value of the branching
ratio of ${B}_s \to \mu^+ \mu^-$ in the RPV mSugra can go several
orders of magnitude below the present LHCb sensitivity, and hence ${B}_s \to \mu^+ \mu^-$ may be invisible to the LHC.
We find that the present upper bound on $Br(B_s \to \mu^+ \mu^-)$ puts strong constraints on
the $\mu<0$ mSugra parameter space. Almost the whole RPV mSugra parameter space, except for a
few points below $\tan\beta = 50$, becomes disfavored. Once the upper bound on ${B}_s \to \mu^+ \mu^-$ is
brought down to a value close to the projected LHCb
sensitivity, there is hardly any allowed region. For $\mu>0$, the constraints are
relatively weaker.
\subsection*{Acknowledgements}
We thank A. Dighe and S. Uma Sankar for helpful discussions and S.
Raichaudhuri for technical help. SKG acknowledges the Tata Institute of
Fundamental Research, Mumbai for their hospitality during initial stage
of the work. This work was partially supported by funding available from
the US Department of Energy (DOE) under the contract number
DE-FG02-01ER41155.
\section{Introduction}
The theory of equinormalizable deformations was initiated by B. Teissier (\cite{Tei1}) in the late 1970's for deformations of reduced curve singularities over $(\c,0)$. It was generalized to higher dimensional base spaces by M. Raynaud and Teissier himself (\cite{Tei2}; some insight into the background of Raynaud's argument might be gleaned from the introduction to \cite{GrS}). More recently, it was developed by Chiang-Hsieh and Lipman (\cite{Ch-Li}, 2006) for projective deformations of reduced complex spaces over normal base spaces, and studied by Koll\'{a}r (\cite{Ko}, 2011) for projective deformations of generically reduced algebraic schemes over semi-normal base spaces.
Each reduced curve singularity is associated with a $\delta$ number (see Definition \ref{dn4.2}), which is a finite number and a topological invariant of reduced curve singularities. Teissier, Raynaud and Chiang-Hsieh-Lipman (\cite{Tei1}, \cite{Tei2}, \cite{Ch-Li}) showed that a deformation of a reduced curve singularity over a normal base space is equinormalizable (see Definition \ref{dn4.1}) if and only if it is $\delta$-constant, that is, the $\delta$ numbers of all of its fibers are the same. This is the so-called \textit{$\delta$-constant criterion} for equinormalizability of deformations of reduced curve singularities.
For isolated curve singularities with embedded components, Br\"{u}cker and Greuel (\cite{BG}, 1990) gave a similar $\delta$-constant criterion (with a new definition of the $\delta$ number, see Definition \ref{dn4.2}) for equinormalizability of deformations of isolated (not necessarily reduced) curve singularities over $(\c,0)$. The author considered in \cite{Le} (2012) deformations of \textit{plane curve singularities} with embedded components over smooth base spaces of dimension $\geq 1$, and gave a similar $\delta$-constant criterion for equinormalizability of these deformations, using special techniques (e.g. a corollary of the Hilbert-Burch theorem) which are effective only for plane curve singularities.
The first purpose of this paper is to generalize the $\delta$-constant criterion given in \cite{BG} and \cite{Le} to deformations of isolated (not necessarily reduced) curve singularities over normal or smooth base spaces of dimension $\geq 1$. In Proposition \ref{pro4.1} we show that equinormalizability of deformations of isolated curve singularities over normal base spaces implies the constancy of the $\delta$ number of fibers of these deformations. Moreover, in Theorem \ref{thr4.1} we show that if the normalization of the total space of
a deformation of an isolated curve singularity over $(\c^{k},0)$, $k\geq 1$, is Cohen-Macaulay, then the converse holds. The assumption of Cohen-Macaulayness of the normalization of the total space ensures flatness of the composition map. Moreover, Cohen-Macaulayness of the normalization of the total space is always satisfied for deformations over $(\c,0)$, because in this case the total space is a normal surface singularity, which is Cohen-Macaulay.
In all of known results for the $\delta$-constant criterion for equinormalizability of deformations of isolated curve singularities, the total spaces of these deformations are always assumed to be reduced and pure dimensional. It is necessary to weaken the hypothesis on reducedness or purity of the dimension of total spaces. In section 2 we study the relationship between reducedness of the total space and that of the \textit{generic fibers} of a flat morphism, and show in Theorem \ref{thr2.1} that if the generic fibers of a flat morphism over a reduced Cohen-Macaulay space are reduced then the total space is reduced. In particular, if there exists a representative of a deformation of an isolated singularity over a reduced Cohen-Macaulay base space such that \textit{the total space is generically reduced over the base space} then the total space is reduced (see Corollary \ref{coro2.3}). This gives a way to check reducedness of the total space of a deformation, and to weaken the hypothesis on reducedness of the total space of a deformation.
For families of isolated curve singularities, one of the most important questions is whether these families admit weak simultaneous resolutions (\cite{Tei2}). Buchweitz and Greuel (\cite{B-G}, 1980) gave a list of criteria for the existence of weak simultaneous resolutions of one-parametric families of reduced curve singularities, namely, the constancy of the Milnor number, the constancy of the $\delta$ number as well as of the number of branches of all fibers, and the topological triviality of these families (see Theorem \ref{thr5.1}). In the last section, we use a recent result of Bobadilla, Snoussi and Spivakovsky (2014) to show that these criteria remain true for one-parametric families of isolated (not necessarily reduced) curve singularities (see Theorem \ref{thr5.2}).
\vspace{0.5cm}
\hspace{-0.6cm} \textbf{Notation:} Let $f : (X,x) \mtn (S,0)$ be a morphism of complex germs. Denote by $(X^{red},x)$ the reduction of $(X,x)$ and $i: (X^{red},x) \hookrightarrow (X,x)$ the inclusion. Let $\nu^{red}: (\gt{X},\gt{x}) \mtn (X^{red},x) $ be the normalization of $(X^{red},x)$, where $\gt{x}:=(\nu^{red})^{-1}(x)$. Then the composition $\nu: (\gt{X},\gt{x})\overset{\nu^{red}}{\mtn} (X^{red},x) \overset{i}{\hookrightarrow} (X,x)$ is called the \textit{normalization of $(X,x)$}. Denote $ \bar{f}:=f\circ \nu : (\gt{X}, \gt{x}) \mtn (S,0).$ For each $s\in S$, we denote
$$ X_s:=f^{-1}(s), \quad \gt{X}_s:=\bar{f}^{-1}(s). $$
\section{Generic reducedness}
Let $f: (X,x)\mtn (S,0)$ be a flat morphism of complex germs. In this section we study the relationship between reducedness of the total space $(X,x)$ and that of the generic fibers of $f$. This gives a way to check reducedness of the total space of a flat morphism.
\df Let $f: X \mtn S $ be a morphism of complex spaces. Denote by $\Red(X)$ the set of all reduced points of $X$ and
$$ \Red(f) = \{x \in X| f \mbox{ is flat at } x \mbox{ and } f^{-1}(f(x)) \mbox{ is reduced at } x\} $$
the \emph{reduced locus} of $f$. We say
\ite
\item[(1)] $X$ is \emph{generically reduced} if $\Red(X)$ is open and dense in $X$;
\item[(2)] $X$ is \emph{generically reduced over} $S$ if there is an analytically open dense set $V$ in $S$ such that $f^{-1}(V)$ is contained in $\Red(X)$;
\item[(3)] the \emph{generic fibers of $f$ are reduced} if there is an analytically open dense set $V$ in $S$ such that $X_s:=f^{-1}(s)$ is reduced for all $s$ in $V$.
\hite
\edf
We show in the following that under properness of the restriction of a flat morphism $f: (X,x) \mtn (S,0)$ to its non-reduced locus, generic reducedness of $X$ over $S$ implies reducedness of the generic fibers of $f$.
\pro \label{pro2.4} Let $f: (X,x) \mtn (S,0)$ be flat with $(S,0)$ reduced. Assume that there is a representative $f: X \mtn S$ such that its restriction to the non-reduced locus $\NRed(f):= X \tru \Red(f)$ is proper and $X$ is generically reduced over $S$. Then the generic fibers of $f$ are reduced.
\epro
\pf The set $\NRed(f)$ is analytically closed in $X$ (cf. \cite[Corollary I.1.116]{GLS}). Moreover, since $X$ is generically reduced over $S$, there exists an analytically open dense set $U$ in $S$ such that $f^{-1}(U) \subseteq \Red(X)$. By properness of the restriction $\NRed(f) \mtn S$, the image $f(\NRed(f))$ is analytically closed and nowhere dense in $S$ by \cite[Theorem 2.1(3), p.56]{BF}. This implies that $V:=S\tru f(\NRed(f))$ is analytically open and dense in $S$, and for all $s \in V$, $X_s := f^{-1}(s)$ is reduced. Therefore the generic fibers of $f$ are reduced.
\epf
\coro \label{coro2.5} Let $f: (X,x) \mtn (S,0)$ be flat with $(S,0)$ reduced. Assume that $X_0\tru \{x\}$ is reduced and there exists a representative $f: X \mtn S$ such that $X$ is generically reduced over $S$. Then the generic fibers of $f$ are reduced. \\
In particular, if $X_0 \tru \{x\}$ and $(X,x)$ are reduced, then the generic fibers of $f$ are reduced.
\ecoro
\pf Since $f$ is flat, we have
$$ \NRed(f) \cap X_0 = \NRed(X_0) \subseteq \{x\}, $$
where $\NRed(X_0)$ denotes the set of non-reduced points of $X_0$. This implies that the restriction $f: \NRed(f) \mtn S$ is finite, hence proper. The first assertion then follows from Proposition \ref{pro2.4}. Moreover, if $(X,x)$ is reduced, then there exists a representative $X$ of $(X,x)$ which is reduced. Then $X$ is obviously generically reduced over some representative $S$ of $(S,0)$. Hence we have the latter assertion.
\epf
\rem \label{rem2.1} \rm The assumption on reducedness of $X_0 \tru \{x\}$ in Corollary \ref{coro2.5} is necessary for reducedness of generic fibers, even for the case $S=\c$. In fact, let $(X_0,0)\subseteq (\c^{3},0)$ be defined by the ideal
$$I_0=\seq{x^{2},y}\cap \seq{y^{2},z}\cap \seq{z^{2},x}\subseteq \c\{x,y,z\}$$
and $(X,0)\subseteq (\c^{4},0)$ defined by the ideal
$$I=\seq{x^{2}-t^{2},y}\cap \seq{y^{2}-t^{2},z} \cap \seq{z^{2},x}\subseteq \c\{x,y,z,t\}.$$
Let $f: (X,0) \mtn (\c,0)$ be the restriction to $(X,0)$ of the projection onto the fourth component $\pi: (\c^{4},0) \mtn (\c,0), ~(x,y,z,t)\mapsto t$. Then $f$ is flat and $X\tru X_0$ is reduced, hence $X$ is generically reduced over some representative $T$ of $(\c,0)$. However, the fiber $(X_t,0)$ is not reduced for any $t\not = 0$. Note that in this case $X_0\tru \{0\}$ is not reduced.
\erem
As we have seen from Corollary \ref{coro2.5}, if the total space of a flat morphism over a reduced base space is reduced, then the generic fibers of that morphism are reduced. In the following we show that over a reduced Cohen-Macaulay base space the converse is also true. This generalizes \cite[Proposition 3.1.1 (3)]{BG} to deformations over higher dimensional base spaces.
\thr \label{thr2.1} Let $f : (X,x) \mtn (S,0)$ be flat with $(S,0)$ reduced Cohen-Macaulay of dimension $k\geq 1$. If there exists a representative $f: X \mtn S$ whose generic fibers are reduced, then $(X,x)$ is reduced.
\ethr
\pf
We divide the proof into two steps.\\
\textbf{Step 1:} $\bold{S=\c^k}.$ Then $f=(f_1,\cdots,f_k): (X,x) \mtn (\c^k,0)$ is flat. \\
For $k=1$, assume that there exists a representative $f: X \mtn T$ such that $X_t:=f^{-1}(t)$ is reduced for every $t\not =0$. Then for any $y \in X\tru X_0$ we have $(X_{f(y)},y)$ is reduced. It follows that $(X,y)$ is reduced (cf. \cite[Theorem I. 1.101]{GLS}). Thus $X\tru X_0$ is reduced. To show that $(X,x)$ is reduced, let $g$ be a nilpotent element of $\ohoa_{X,x}$. Then we have
$$ supp(g) = V(\Ann(g)) \subseteq X_0 = V(f).$$
It follows from Hilbert-R\"{u}ckert's Nullstellensatz (cf. \cite[Theorem I.1.72]{GLS}) that $f^n \in \Ann(g)$ for some $n\in \z_{+}$.
Hence $f^ng = 0$ in $\ohoa_{X,x}$. Since $f$ is flat, it is a non-zerodivisor in $\ohoa_{X,x}$, and hence so is $f^n$. It follows that $g=0$. Thus $(X,x)$ is reduced, and the statement is true for $k=1$.\\
For $k\geq 2$, suppose there is a representative $f: X \mtn S$ and an analytically open dense set $V$ in $S$ such that $X_s$ is reduced for all $s\in V$. Let us denote by $H$ the line
$$H:= \{(t_1,\cdots,t_k) \in \c^k| t_1 = \cdots = t_{k-1}= 0 \}.$$
Denote by $A$ the complement of $V$ in $S$. Then $A$ is analytically closed and nowhere dense in $S$. We can choose coordinates $t_1,\cdots, t_k$ and a representative of $(\c^k,0)$ such that $A \cap H = \{0\}$. \\
Denote $f':=(f_1,\cdots,f_{k-1})$.
Since $f$ is flat, $f_1,\cdots, f_{k-1}$ is an $\ohoa_{X,x}$-regular sequence, hence $f': (X,x) \mtn (\c^{k-1},0)$ is flat with the special fiber $(X',x): = (f'^{-1}(0),x) = (f^{-1}(H),x)$. Since $f$ is flat, $f_k$ is a non-zerodivisor in $\ohoa_{X,x}/f'\ohoa_{X,x} = \ohoa_{X',x}$, hence the morphism $f_k: (X',x) \mtn (\c,0)$ is flat. For any $t\in \c\tru \{0\}$ close to $0$,
we have $(0,\cdots,0,t) \not \in A$, hence $f_k^{-1}(t) = f^{-1}(0,\cdots,0,t)$ is reduced. It follows from the case $k=1$ that the total space $(X',x)$ of $f_k$ is reduced. Since $f': (X,x) \mtn (\c^{k-1},0)$ is flat with reduced special fiber, $(X,x)$ is reduced (cf. \cite[Theorem I.1.101]{GLS}), which completes this step. \\
{\bf Step 2:} $\bold{(S,0)}$ {\bf is Cohen-Macaulay of dimension } $\bold{k\geq 1}.$ Since $(S,0)$ is Cohen-Macaulay, there exists an $\ohoa_{S,0}$-regular sequence $g_1, \cdots, g_k$, where $g_i \in \ohoa_{S,0} $ for every $i = 1,\cdots, k$. Then the morphism
$$g=(g_1,\cdots,g_k): (S,0) \sr (\c^k,0), t \longmapsto \big(g_1(t),\cdots,g_k(t)\big)$$
is flat.
We have
$$\dim (g^{-1}(0),0) = \dim \ohoa_{S,0}/(g_1,\cdots,g_k)\ohoa_{S,0} = 0$$ (cf. \cite[Prop. I.1.85]{GLS}).
This implies that $g$ is finite. Let $g: S \mtn T$ be a representative which is flat and finite, where $T$ is an open neighborhood of $0\in \c^k$. Then the composition $h=g\circ f: X \sr T$ (for some representative) is flat. To apply Step 1 to $h$, we need to show the existence of an analytically open dense set $U$ in $T$ such that all fibers of $h$ over $U$ are reduced. In fact, since $S$ is reduced, its singular locus $\Sing(S)$ is closed and nowhere dense in $S$ (cf. \cite[Corollary I.1.111]{GLS}). It follows that $A \cup \Sing(S)$, where $A$ denotes the complement in $S$ of the open dense set $V$ over which the fibers of $f$ are reduced, is closed and nowhere dense in $S$. Then the set $U:=T\tru g(A\cup \Sing(S))$ is open and dense in $T$ by the finiteness of $g$. Furthermore, for any $t\in U$, $g^{-1}(t) = \{t_1,\cdots,t_r\}$ with $t_i \in V \cap (S\tru \Sing(S))$. It follows that $h^{-1}(t) = f^{-1}(t_1) \cup \cdots \cup f^{-1}(t_r)$ is reduced. \\
Now applying Step 1 to the flat map $h: X \mtn T$, we obtain reducedness of $(X,x)$. The proof is complete.
\epf
The following result is a direct consequence of Corollary \ref{coro2.5} and Theorem \ref{thr2.1}.
\coro \label{coro2.3} Let $f: (X,x) \mtn (S,0)$ be flat with $(S,0)$ reduced Cohen-Macaulay of dimension $k\geq 1$. Suppose $X_0\tru \{x\}$ is reduced and there exists a representative $f: X \mtn S$ such that $X$ is generically reduced over $S$. Then $(X,x)$ is reduced.
\ecoro
Since normal surface singularities are reduced and Cohen-Macaulay, we have
\coro \label{coro2.4}\rm Let $f: (X,x) \mtn (S,0)$ be flat with $(S,0)$ a normal surface singularity. If there exists a representative $f: X \mtn S$ whose generic fibers are reduced, then $(X,x)$ is reduced.
\ecoro
\section{Equinormalizable deformations of isolated curve singularities over smooth base spaces }
In this section we focus on equinormalizability of deformations of isolated (not necessarily reduced) curve singularities over smooth base spaces of dimension $\geq 1$. Since the singularities of the special fibers of these deformations are isolated, Corollary \ref{coro2.3} allows us to assume only that the total space is generically reduced over the base space, instead of assuming reducedness of the total space.
First we recall a definition of equinormalizable deformations which follows Chiang-Hsieh-Lipman (\cite{Ch-Li}) and Koll\'{a}r (\cite{Ko}).
\df \label{dn4.1}\rm Let $f: X\sr S$ be a morphism of complex spaces. A \emph{simultaneous normalization of $f$ } is a morphism
$n: \nga{X} \sr X$ such that
\ite
\item[(1)] $n$ is finite,
\item[(2)] $\tilde{f}:=f\circ n: \nga{X}\mtn S$ is \emph{normal}, i.e., for each $z\in \nga{X}$, $\tilde{f}$ is flat at $z$ and the fiber $\nga{X}_{\tilde{f}(z)}:=\tilde{f}^{-1}(\tilde{f}(z))$ is normal,
\item[(3)] the induced map $n_s: \nga{X}_s:=\tilde{f}^{-1}(s) \sr X_s$ is bimeromorphic for each $s\in f(X)$.
\hite
The morphism $f$ is called \emph{equinormalizable} if the normalization $\nu: \gt{X}\mtn X$ is a simultaneous normalization of $f$. It is called \emph{equinormalizable at $x\in X$} if the restriction of $f$ to some neighborhood of $x$ is equinormalizable.\\
If $f: (X,x) \sr (S,s)$ is a morphism of germs, then a \emph{simultaneous normalization of $f$} is
a morphism $n$ from a multi-germ $(\nga{X}, n^{-1}(x))$ to $(X,x)$ such that some representative of $n$ is a simultaneous normalization of a representative of $f$. The germ $f$ is \emph{equinormalizable} if some representative of $f$ is equinormalizable.
\edf
The following lemma allows us to do base change, reducing deformations over higher dimensional base spaces to those over smooth 1-dimensional base spaces with similar properties.
\lm \label{lm4.1}
Let $f: (X,x) \mtn (S,0)$ be a deformation of an isolated singularity $(X_0,x)$ with $(S,0)$ normal. Suppose that there exists some representative $f: X \mtn S$ such that $X$ is generically reduced over $S$. Then there exists an open and dense set $U$ in $S$ such that $X_s:=f^{-1}(s)$ is reduced and $\gt{X}_s:=\bar{f}^{-1}(s)$ is normal for all $s\in U$. Moreover, for each $s\in U$, the induced morphism on the fibers $\nu_s:\gt{X}_s \mtn X_s$ is the normalization of $X_s$.
\elm
Here, we recall that $\nu: (\gt{X},\gt{x}) \mtn (X,x)$ is the normalization of $(X,x)$ and $\bar{f}:=f\circ \nu: (\gt{X},\gt{x}) \mtn (S,0)$.
\pf Since $X_0\tru \{x\}$ is reduced, it follows from the proof of Corollary \ref{coro2.5} that the set $f(\NRed(f))$ is closed and nowhere dense in $S$. Denote by $\NNor(f)$ (resp. $\NNor(\bar{f})$) the \textit{non-normal locus of $f$ (resp. $\bar{f}$}), the set of points $z$ in $X$ (resp. $\gt{X}$) at which either $f$ (resp. $\bar{f}$) is not flat or $X_{f(z)}$ (resp. $\gt{X}_{\bar{f}(z)}$) is not normal. Since $f$ is flat and $S$ is normal, we have $\nu(\NNor(\bar{f}) \cap \gt{X}_0) \subseteq \NNor(f) \cap X_0 = \NNor(X_0)$. Equivalently, $\NNor(\bar{f}) \cap \gt{X}_0 \subseteq \nu^{-1}(\NNor(X_0))$, which is finite since $\nu$ is finite and $X_0$ has an isolated singularity at $x$. It follows that the restriction of $\bar{f}$ to $\NNor(\bar{f})$ is finite. Then $\bar{f}(\NNor(\bar{f}))$ is closed and nowhere dense in $S$ by \cite[Theorem 2.1(3), p.56]{BF}. The set $U:=S\tru \big(f(\NRed(f)) \cup \bar{f}(\NNor(\bar{f}))\big)$ satisfies all the required properties.
\epf
For deformations of isolated curve singularities we have the following necessary condition for equinormalizability, in terms of the constancy of the $\delta$-invariant of the fibers. For the reader's convenience we recall the definition of the $\delta$-invariant of isolated (not necessarily reduced) curve singularities, as defined by Br\"{u}cker and Greuel in \cite{BG}.
\df \label{dn4.2} \rm Let $X$ be a complex curve and $x\in X$ an isolated singular point. Denote by $X^{red}$ its reduction and let $\nu^{red}: \gt{X} \mtn X^{red}$ be the normalization of the reduced curve $X^{red}$. The number
$$\delta(X^{red},x):=\dim_\c (\nu^{red}_*\ohoa_{\gt{X}})_x/\ohoa_{X^{red},x} $$
is called the \emph{delta-invariant of $X^{red}$ at $x$},
$$\epsilon(X,x):=\dim_\c H_{\{x\}}^0(\ohoa_X) $$
is called the \emph{epsilon-invariant of $X$ at $x$}, where $H_{\{x\}}^0(\ohoa_X)$ denotes local cohomology, and
$$\delta(X,x):=\delta(X^{red},x) - \epsilon(X,x) $$
is called the \emph{delta-invariant of $X$ at $x$}.\\
If $X$ has only finitely many singular points then the number
$$\delta(X):=\sum_{x\in \Sing(X)} \delta(X,x) $$
is called the \emph{delta-invariant } of $X$.
\edf
It is easy to see that $\delta(X^{red},x)\geq 0$, and $\delta(X^{red},x) = 0$ if and only if $x$ is an isolated point of $X$ or the germ $(X^{red},x)$ is smooth. Hence, if $x\in X$ is an isolated point of $X$ then $\delta(X,x) = -\dim_\c \ohoa_{X,x} = - \epsilon(X,x)$.
In particular, $\delta(X,x) = -1$ for $x$ an isolated and reduced (hence normal) point of $X$.
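To illustrate the definition, we record two standard computations. For the node $(X,0)=\{xy=0\}\subseteq (\c^2,0)$, which is reduced, the normalization consists of two smooth branches and
$$ \delta(X,0) = \delta(X^{red},0) = \dim_\c (\nu^{red}_*\ohoa_{\gt{X}})_0/\ohoa_{X,0} = 1, \qquad \epsilon(X,0)=0. $$
For a line with an embedded point, $(X,0)\subseteq (\c^2,0)$ defined by $\seq{x^2,xy}\subseteq \c\{x,y\}$, the reduction $(X^{red},0)=\{x=0\}$ is smooth, so $\delta(X^{red},0)=0$, while $H_{\{0\}}^0(\ohoa_X)$ is the one-dimensional ideal generated by the class of $x$, so $\epsilon(X,0)=1$ and $\delta(X,0)= 0 - 1 = -1$.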
\pro \label{pro4.1}
Let $f: (X,x) \mtn (S,0)$ be a deformation of an isolated curve singularity $(X_0,x)$ with $(X,x)$ pure dimensional, $(S,0)$ normal.
Suppose that there exists some representative $f: X \mtn S$ such that $X$ is generically reduced over $S$. If $f$ is equinormalizable, then it is \textit{$\delta$-constant}, that is, $\delta(X_s) = \delta(X_0)$ for every $s\in S$ close to $0$.
\epro
\pf (Compare to the proof of \cite[Theorem 4.1 (2)]{Le})\\
It follows from Lemma \ref{lm4.1} that there exists an open and dense set $U$ in $S$ such that $X_s$ is reduced and $\gt{X}_s$ is normal for all $s\in U$. \\
We first show that $f$ is $\delta$-constant on $U$, i.e. $\delta(X_s) = \delta(X_0)$ for any $s \in U$. In fact, for any $s\in U$, $s\not =0$, there exists an irreducible reduced curve singularity $C \subseteq S$ passing through $0$ and $s$. Let $\alpha: T \sr C \subseteq S$ be the normalization of this curve singularity such that $\alpha(T\tru \{0\}) \subseteq U$, where $T\subseteq \c$ is a small disc with center
at $0$. Denote
$$X_T:=X\times_S T, ~ \gt{X}_T:= \gt{X}\times_S T.$$
Then we have the following Cartesian diagram:
$$\xymatrix@C=12pt@R=10pt@M=8pt{
&&\ar @{} [dr] |{\Box} \gt{X}_T \ar[r] \ar[d]_{\nu_T} \ar@/_2pc/[dd]_{\bar{f}_T} & \gt{X} \ar[d]^{\nu} \ar@/^2pc/[dd]^{\bar{f}}\\
&&\ar @{} [dr] |{\Box} X_T\ar[d]_{f_T} \ar[r]& X \ar[d]^f\\
&&T \ar[r] & S}$$
For any $t\in T, s = \alpha(t) \in S$, we have
\begin{equation}\label{equ4.1}
\ohoa_{(X_T)_t}:= \ohoa_{f_T^{-1}(t)} \cong \ohoa_{X_s}, ~\ohoa_{(\gt{X}_T)_t}:= \ohoa_{\bar{f}_T^{-1}(t)} \cong \ohoa_{\gt{X}_s}.
\end{equation}
Since $f$ is flat by hypothesis and $\bar{f}$ is flat by equinormalizability, it follows from the preservation of flatness under base change
(cf. \cite[Prop. I. 1.87]{GLS}) that the induced morphisms $f_T$ and $ \bar{f}_T$ are flat over $T$. Hence, it follows from equinormalizability of $f$ and (\ref{equ4.1}) that $f_T: X_T \mtn T$ is equinormalizable. \\
For any $t\in T\tru \{0\}$, $s=\alpha(t)\in U$, hence $(X_T)_t \cong X_s$ is reduced by the existence of $U$. It follows from Theorem \ref{thr2.1} that $X_T$ is reduced. On the other hand, since $X$ and $S$ are pure dimensional, all fibers of $f$, hence of $f_T$, are pure dimensional by the dimension formula (\cite[Lemma, p.156]{Fi}). Then $X_T$ is also pure dimensional because $T$ is pure 1-dimensional. Therefore it follows from \cite[Korollar 2.3.5]{BG} that $f_T: X_T \sr T$ is $\delta$-constant, hence $f: X \sr S$ is $\delta$-constant on $U$. \\
Let us now take $s_0 \in S\tru U$. Since $U$ is dense in $S$, there always exists a point $s_1 \in U$ close to $s_0$. It follows from the
semi-continuity of the $\delta$-function (cf. \cite[Lemma 4.2]{Le}) that
$$ \delta(X_0) \geq \delta(X_{s_0}) \geq \delta(X_{s_1}).$$
Moreover, $\delta(X_0) = \delta(X_{s_1})$ as shown above. This implies that $\delta(X_{s_0})=\delta(X_0)$. Hence $f : X \sr S$ is $\delta$-constant.
\epf
\rem \label{rem4.1} \rm The complex spaces $X_T$ and $\gt{X}_T$ appearing in the proof of Proposition \ref{pro4.1} have the following properties:
\ite
\item[(1)] $X_T$ is reduced; $\gt{X}_T$ is reduced if $\bar{f}_T$ is flat;
\item[(2)] they have the same normalization $\nga{X_T}$;
\item[(3)] fibers of the compositions $\nga{X_T} \overset{\mu_T}{\mtn} \gt{X}_T \overset{\bar{f}_T}{\mtn} T$ and $\nga{X_T} \overset{\theta_T}{\mtn} X_T \overset{f_T}{\mtn} T$ coincide.
\hite
In fact, as we have seen in the proof of Proposition \ref{pro4.1}, $X_T$ is reduced. Moreover, if $\bar{f}_T$ is flat, since its generic fibers are reduced (actually normal), $\gt{X}_T$ is reduced by Theorem \ref{thr2.1}. Therefore we have (1). \\
Now we show (2). Since finiteness and surjectivity are preserved under base change, $\nu_T$ is finite and surjective. Let us denote by $\mu_T:\nga{X_T}\mtn \gt{X}_T$ the normalization of $\gt{X}_T$. Then the composition $\theta_T:=\nu_T \circ \mu_T$ is finite and surjective. \\
Denote $A:=\NNor(f_T)$. Since $X_T$ is reduced, $A$ is nowhere dense in $X_T$. Moreover, since $\nu_T$ is finite and surjective, it follows from Ritt's lemma (cf. \cite[Chapter 5, \S 3, p.102]{GR}) that the preimage $A':=\nu_T^{-1}(A)$ is nowhere dense in $\gt{X}_T$. Furthermore, for any $z\not \in A'$, $y=\nu_T(z) \not \in A$, hence the fiber $(X_T)_t$ resp. $ X_s$ is normal at $y$ resp. $\alpha_T(y)$, where $t=f_T(y), s=\alpha(t)$. Thus $(X,\alpha_T(y))\cong (\gt{X},\bar{\alpha}_T(z))$. It follows that $(X_T,y) \cong (\gt{X}_T,z)$. Therefore $\gt{X}_T\tru A' \cong X_T\tru A$. Then $(\nu_T\circ \mu_T)^{-1}(A)$ is nowhere dense in $\nga{X_T}$ and we have the isomorphism
$$ \nga{X_T}\tru (\nu_T\circ \mu_T)^{-1}(A) = \nga{X_T}\tru \mu_T^{-1}(A') \cong \gt{X}_T\tru A' \cong X_T\tru A. $$
Therefore $\theta_T$ is bimeromorphic, whence it is the normalization of $X_T$. (3) is obvious.
\erem
The following theorem is the main result of this section, which asserts that under certain conditions, the $\delta$-criterion is sufficient for equinormalizability of deformations of isolated curve singularities over smooth base spaces of dimension $\geq 1$. This gives a generalization of \cite[Korollar 2.3.5]{BG}.
\thr \label{thr4.1} Let $f: (X,x) \mtn (\c^{k},0)$, $k\geq 1$, be a deformation of an isolated curve singularity $(X_0,x)$ with $(X,x)$ pure dimensional. Suppose that there exists a representative $f: X \mtn S$ such that $X$ is generically reduced over $S$.
If the normalization $\gt{X}$ of $X$ is Cohen-Macaulay \footnotemark \footnotetext{This always holds for $k=1$, since normal surfaces are Cohen-Macaulay.} and $f$ is $\delta$-constant, then $f$ is equinormalizable.
\ethr
\pf First we show that Cohen-Macaulayness of $\gt{X}$ implies flatness of the composition $\bar{f}$. Since $\gt{X}$ is Cohen-Macaulay and $S$ is smooth, it is sufficient to check that the dimension formula holds for $\bar{f}$ (cf. \cite[Proposition, p.158]{Fi}).
This is always the case, since for any $z\in \nu^{-1}(x)$, we have
\begin{align*}
\dim (\gt{X},z) &= \dim (X,x) = \dim (X_0,x) + k \quad \quad \mbox{by flatness of $f$}\\
&= \dim (\gt{X}_0,z) + k.
\end{align*}
The latter equality follows from finiteness and surjectivity of $\nu_0:(\gt{X}_0,z) \mtn (X_0,x)$.
Let $U\subseteq S$ be the open dense set with properties described as in Lemma \ref{lm4.1}. For any $s\in U$, let $C\subseteq S$ be an irreducible reduced curve singularity passing through $s$ and $0$ such that $C\cap (S\tru U) = \{0\}$. Let $\alpha: T \sr C \subseteq S$ be the normalization of this curve singularity such that $\alpha(T\tru \{0\}) \subseteq U$, where $T\subseteq \c$ is a small disc with center at $0$. Denote $X_T$ and $\gt{X}_T$ as in the proof of Proposition \ref{pro4.1}. Then, since $\bar{f}$ is flat, it follows from Remark \ref{rem4.1} that $X_T$ and $\gt{X}_T$ are reduced and they have the same normalization $\nga{X}_T$. Consider the following Cartesian diagram:
$$\xymatrix@C=12pt@R=10pt@M=8pt{
&&\ar @{} [dr] \nga{X_T} \ar[d]^{\mu_T} \ar@/_1pc/[dd]_{\theta_T}\ar@/_3pc/[ddd]_{\nga{f}_T} & \\
&&\ar @{} [dr] |{\Box} \gt{X}_T\ar[r]^{\bar{\alpha}_T} \ar[d]^{\nu_T} \ar@/_1pc/[dd]_{\bar{f}_T} & \gt{X} \ar[d]_\nu \ar@/^1pc/[dd]^{\bar{f}}\\
&&\ar @{} [dr] |{\Box} X_T\ar[d]^{f_T} \ar[r]^{\alpha_T}& X \ar[d]_f\\
&&T\ar[r]_\alpha & S}$$
Since the fibers of $f$ and $f_T$ are isomorphic, $f_T$ is $\delta$-constant and $X_T$ is pure dimensional. Then it follows from \cite[Korollar 2.3.5]{BG} that $f_T$ is equinormalizable. Therefore, by definition, for each $t\in T$, $(\nga{X_T})_t:=(\nga{f}_T)^{-1}(t)$ is normal, and it is the normalization of $(X_T)_t$.
Let us consider the flat map $\bar{f}_T : \gt{X}_T \mtn T$ and the normalization $\mu_T: \nga{X_T} \mtn \gt{X}_T$ of $\gt{X}_T$. It follows from \cite[Proposition 1.2.2]{BG} that the composition $\bar{f}_T\circ \mu_T : \nga{X_T}\mtn T$ is flat. Moreover, by the same argument as given in Remark \ref{rem4.1}, we can show that $(X_T)_t$ and $(\gt{X}_T)_t$ have the same normalization for each $t\in T$. Hence the induced map on the fibers $(\nga{X_T})_t \mtn (\gt{X}_T)_t$ is the normalization. Thus, by definition, $\bar{f}_T$ is equinormalizable.
Then $\bar{f}_T$ is $\delta$-constant by Proposition \ref{pro4.1} (or by \cite[Korollar 2.3.5]{BG}). This implies that for any $t\in T\tru \{0\}$, we have
$$ \delta(\gt{X}_0) = \delta((\gt{X}_T)_0) = \delta ((\gt{X}_T)_t) = 0 \mbox{ (since } (\gt{X}_T)_t \mbox{ is normal).}$$
Now we show that $\gt{X}_0$ is reduced. First we show that $\nu(\NNor(\gt{X}_0)) \subseteq \NNor(X_0)$. In fact, if $y \not \in \NNor(X_0)$ then $X_0$ is normal at $y$. Since $f$ is flat and $S$ is normal at $0$, $X$ is normal at $y$ (cf. \cite[Theorem I.1.101]{GLS}). Therefore we have the isomorphism $ (\gt{X},z) \overset{\cong}{\sr} (X,y)$ for every $z\in \nu^{-1}(y)$. It induces an isomorphism on the fibers $(\gt{X}_0,z) \overset{\cong}{\sr} (X_0,y)$, hence $\gt{X}_0$ is normal at every point $z\in \nu^{-1}(y)$. It follows that $y\not \in \nu(\NNor(\gt{X}_0))$.\\
Then, for any $z\in \NNor(\gt{X}_0)$, since $\NNor(X_0)$ is nowhere dense in $X_0$, by Ritt's lemma (cf. \cite[Chapter 5, \S 3, 2, p.103]{GR}) and by the dimension formula (since $f$ is flat) we have
\begin{align*}
&\dim (\nu(\NNor(\gt{X}_0)), \nu(z)) \leq \dim (\NNor(X_0),\nu(z)) < \dim (X_0,\nu(z))\\
&= \dim (X,\nu(z)) - \dim (S,0) = \dim (\gt{X},z) - \dim (S,0) \leq \dim (\gt{X}_0,z).
\end{align*}
Furthermore, the restriction $\nu_0: \gt{X}_0\sr X_0$ is finite. Hence
$$\dim (\nu(\NNor(\gt{X}_0)), \nu(z)) =\dim (\NNor(\gt{X}_0),z) ~\mbox{(cf. \cite[Corollary, p.141]{Fi})}. $$
It follows that for any $z\in \NNor(\gt{X}_0)$ we have
$\dim (\NNor(\gt{X}_0),z) < \dim (\gt{X}_0,z)$, i.e., $\NNor(\gt{X}_0)$ is nowhere dense in $\gt{X}_0$ by Ritt's lemma. This implies that $\gt{X}_0$ is generically normal, whence generically reduced. \\
Moreover, for each $z\in \nu^{-1}(x)$, since $\bar{f}$ is flat and $\dim (\gt{X},z) = \dim (X,x) = k+1$, we have
$$ \depth (\ohoa_{\gt{X}_0,z}) = \depth(\ohoa_{\gt{X},z}) - k \geq (k+1) - k =1. $$
On the other hand, we have
$$ \dim (\gt{X}_0,z) = \dim (\gt{X},z) - k = 1. $$
Hence $ \depth (\ohoa_{\gt{X}_0,z}) \geq 1 = \min \{1, \dim (\gt{X}_0,z)\}$, i.e. $\gt{X}_0$ satisfies $(S_1)$ at every point $z\in \nu^{-1}(x)$. This implies that $\gt{X}_0$ is reduced at every point of $\nu^{-1}(x)$. Then $\gt{X}_0$ is normal, and it is the normalization of $X_0$. It follows that $f$ is equinormalizable. The proof is complete.
\epf
The following example illustrates our main theorem.
\ex[{\cite{St}, cf. \cite[Example 4.2]{Le}}] \label{ex4.1}\rm Let us consider the curve singularity $ (X_0,0)\subseteq (\c^4,0)$ defined by the ideal
$$I_0:= \seq{x^2 - y^3,z,w} \cap \seq{x,y,w} \cap \seq{x,y,z,w^2} \subseteq \c\{x,y,z,w\}.$$
The curve singularity $(X_0,0)$ is a union of a cusp
$C$ in the plane $z=w=0,$ a straight line $L = \{x = y = w = 0\}$ and
an embedded non-reduced point $O = (0,0,0,0)$. Now we consider the restriction $f: (X,0)\mtn (\c^2,0)$ of the projection $\pi:(\c^6,0)\mtn (\c^2,0), ~ (x,y,z,w,u,v)\mapsto (u,v),$ to the complex germ $(X,0)$ defined by the ideal
$$I=\seq{x^2-y^3+uy^2,z,w} \cap \seq{x,y,w-v}\subseteq \c\{x,y,z,w,u,v\}.$$
It is easy to check that $f$ is flat, $f^{-1}(0,0) = (X_0,0)$, the total space $(X,0)$ is reduced and pure $3$-dimensional, with two 3-dimensional irreducible components.
We have $\delta((X_0)^{red}) = 2$, $\epsilon(X_0)=1$, hence $\delta(X_0)=1$. Moreover, for each $u,v\in \c\tru \{0\}$, we have
$$\delta(X_{(u,v)}) = \delta((X_{(u,v)})^{red}) - \epsilon(X_{(u,v)})= 1-0=1; $$
$$\delta(X_{(u,0)})= 2-1=1;\quad \delta(X_{(0,v)}) = 1-0 =1.$$
Hence $f$ is $\delta$-constant.
Moreover, the normalizations of the first component $(X_1,0)$ and the second component $(X_2,0)$ of $(X,0)$ are given respectively by
$$ \nu_1: (\c^3,0) \mtn (X_1,0), \quad (T_1,T_2,T_3) \mapsto (0,0,T_1,T_3,T_2,T_3) $$
and
$$ \nu_2: (\c^3,0) \mtn (X_2,0), \quad (T_1,T_2,T_3) \mapsto (T_3^3+T_1T_3,T_3^2+T_1,0,0,T_1,T_2). $$
Hence the composition maps are given respectively by
$$ \bar{f}_1: (\c^3,0) \mtn (\c^2,0), \quad (T_1,T_2,T_3) \mapsto (T_2,T_3)$$
and
$$ \bar{f}_2: (\c^3,0) \mtn (\c^2,0), \quad (T_1,T_2,T_3) \mapsto (T_1,T_2).$$
On both components, $\bar{f}$ is flat with normal fibers, hence $f$ is equinormalizable. Note that, in this example, the normalization of $(X,0)$ is smooth. All the computation given above can be easily done by \textbf{SINGULAR} (\cite{DGPS}).
\eex
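The value $\delta\big((X_0)^{red}\big)=2$ can also be verified using the classical formula (due to Hironaka) for the $\delta$-invariant of a union of two reduced curve germs without common components,
$$ \delta(C_1\cup C_2,0) = \delta(C_1,0)+\delta(C_2,0)+(C_1\cdot C_2)_0, $$
where $(C_1\cdot C_2)_0$ denotes the intersection number at $0$. For the cusp $C$ we have $\delta(C,0)=1$, for the smooth line $L$ we have $\delta(L,0)=0$, and
$$ (C\cdot L)_0 = \dim_\c \c\{x,y,z,w\}/\big(\seq{x^2-y^3,z,w}+\seq{x,y,w}\big) = \dim_\c \c\{x,y,z,w\}/\seq{x,y,z,w} = 1, $$
so $\delta\big((X_0)^{red}\big) = 1+0+1 = 2$. Together with $\epsilon(X_0)=1$, coming from the embedded point, this recovers $\delta(X_0) = 2-1 = 1$.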
\section{Topological triviality of one-parametric families of isolated curve singularities}
In this section we consider one-parametric families of isolated (not necessarily reduced) curve singularities and show that the topological triviality of these families is equivalent to the admission of weak simultaneous resolutions (\cite{Tei2}).
Let $f: (X,x) \mtn (\c,0)$ be a deformation of an isolated curve singularity $(X_0,x)$ with $(X,x)$ pure dimensional. Let $f: X \mtn T$ be a \textit{good representative} (in the sense of \cite[\S 2.1, p.248]{B-G}) such that $X$ is generically reduced over $T$. Then $X$ is reduced by Corollary \ref{coro2.3}. Let $\nu: \gt{X} \mtn X$ be the normalization of $X$. Denote $\bar{f}:=f\circ \nu: \gt{X} \mtn T$.
\df[{cf. \cite{BG}}] \rm
\ite
\item[(1)] $f$ is said to be \textit{topologically trivial } if there is a homeomorphism $h: X \overset{\approx}{\mtn} X_0 \times T$ such that $f=\pi \circ h$, where $\pi: X_0 \times T \mtn T$ is the projection.
\item[(2)] Assume that $f$ admits a section $\sigma: T \mtn X$ such that $X_t\tru \sigma(t)$ is smooth for all $t\in T$. Then $f$ admits a \textit{weak simultaneous resolution} if $f$ is equinormalizable and
$$ \big(\nu^{-1}(\sigma(T))\big)^{red} \cong \big(\nu^{-1}(\sigma(0))\big)^{red} \times T \quad (\mbox{over $T$}).$$
\hite
\edf
\rem[{cf. \cite{Tei2}}] \label{rem5.1}\rm $f$ admits a weak simultaneous resolution if and only if $f$ is equinormalizable and the number of branches $r(X_t,\sigma(t)) $ of $(X_t,\sigma(t))$ is constant for all $t\in T$.
\erem
Buchweitz and Greuel (1980) proved the following result for families of reduced curve singularities.
\thr[{\cite[Theorem 5.2.2]{B-G}}] \label{thr5.1} Let $f: X \mtn T$ be a good representative of a flat family of reduced curves with section $\sigma: T\mtn X$ such that $X_t\tru \sigma(t)$ is smooth for each $t\in T$. Then the following conditions are equivalent:
\ite
\item[(1)] $f$ admits a weak simultaneous resolution;
\item[(2)] the delta number $\delta(X_t,\sigma(t))$ and the number of branches $r(X_t,\sigma(t))$ are constant for $t\in T$;
\item[(3)] the Milnor number $\mu(X_t, \sigma(t))$ is constant for $t\in T$;
\item[(4)] $f$ is topologically trivial.
\hite
\ethr
We shall show that this result is also true for families of isolated (not necessarily reduced) curve singularities. Following Br\"{u}cker and Greuel (\cite{BG}), we define the \textit{Milnor number} of a curve singularity $C$ at an isolated singular point $c\in C$ by
$$ \mu(C,c):= 2 \delta(C,c) - r(C,c) +1. $$
The Milnor number of $C$ is defined to be
$$ \mu(C):=\sum_{c\in \Sing(C)} \mu(C,c). $$
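For reduced plane curve singularities this definition agrees with the classical Milnor number via Milnor's formula $\mu = 2\delta - r + 1$. For instance, for the node $\{xy=0\}$ we have $\delta=1$, $r=2$, hence $\mu = 2\cdot 1 - 2 + 1 = 1$, while for the cusp $\{x^2=y^3\}$ we have $\delta=1$, $r=1$, hence $\mu = 2\cdot 1 - 1 + 1 = 2$, which are the classical values.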
To state and prove a similar result to Theorem \ref{thr5.1} we need the following result of Bobadilla, Snoussi and Spivakovsky (2014).
\lm[{\cite[Theorem 4.4]{BSS}}] \label{lm5.1} Let $f: (X,x) \mtn (\c,0)$ be a deformation of an isolated curve singularity $(X_0,x)$ with $(X,x)$ reduced. Assume that the singular locus $\Sing(X,x)$ of $(X,x)$ is smooth of dimension 1. If $f$ is topologically trivial, then for any $z \in \nu^{-1}(x)$, $\bar{f}: (\gt{X},z) \mtn (\c,0)$ is topologically trivial, and the normalization $(\gt{X},\nu^{-1}(x))$ of $(X,x)$ is smooth.
\elm
The following theorem is the main result of this section.
\thr \label{thr5.2} Let $f: (X,x) \mtn (\c,0)$ be a deformation of an isolated curve singularity $(X_0,x)$ with $(X,x)$ pure dimensional.
Let $f: X \mtn T$ be a good representative with section $\sigma: T\mtn X$ such that $X_t\tru \sigma(t)$ is smooth for each $t\in T$ and
$X$ is generically reduced over $T$. Assume that $\Sing(X,x)$ is smooth of dimension 1. Then the following conditions are equivalent:
\ite
\item[(1)] $f$ admits a weak simultaneous resolution;
\item[(2)] the delta number $\delta(X_t,\sigma(t))$ and the number of branches $r(X_t,\sigma(t))$ are constant for $t\in T$;
\item[(3)] the Milnor number $\mu(X_t, \sigma(t))$ is constant for $t\in T$;
\item[(4)] $f$ is topologically trivial.
\hite
\ethr
\pf The equivalence of (1) and (2) follows from Theorem \ref{thr4.1} (for $k=1$) and Remark \ref{rem5.1}. (2) $\td (3)$ by the definition of the Milnor number. The implication $(1) \Sr (4)$ is proved in the same way as for families of reduced curve singularities, as in the proof of the implication $(4) \Sr (6)$ of \cite[Theorem 5.2.2]{B-G}. Now we prove that $(4) \Sr (1)$.
For convenience, let us assume that $\nu^{-1}(x) = \{z_1,\cdots,z_r\}$. Note that $\gt{X}_0:=\bar{f}^{-1}(0)$ is reduced, $\gt{X}_t:=\bar{f}^{-1}(t)$ is smooth for every $t\not =0$ by \cite[Lemma 2.1.1]{BG}. Therefore for every $i=1,\cdots, r$, $\bar{f}: (\gt{X},z_i) \mtn (\c,0)$ is a family of reduced curve singularities with smooth general fibers, and there exist sections $\bar{\sigma}_1, \cdots, \bar{\sigma}_r: T \mtn \gt{X}$ such that $\bar{\sigma}_i(0)=z_i$, $\nu^{-1}(\sigma(t)) =\{\bar{\sigma}_1(t), \cdots, \bar{\sigma}_r(t)\}$, and $\gt{X}_t\tru \bar{\sigma}_i(t)$ is smooth for every $t\in T$ and for every $i=1,\cdots, r$. \\
Assume that $f$ is topologically trivial. Then it follows from Lemma \ref{lm5.1} that the deformation $\bar{f}: (\gt{X}, z_i) \mtn (\c,0)$ of $(\gt{X}_0,z_i)$ is also topologically trivial for every $i=1,\cdots, r$. Hence it follows from Theorem \ref{thr5.1}, applied to the flat family of reduced curve singularities $\bar{f}: (\gt{X},z_i) \mtn (\c,0)$ with section $\bar{\sigma}_i : (\c,0) \mtn (\gt{X},z_i)$, that the delta number $\delta(\gt{X}_t,\bar{\sigma}_i(t))$ and the number of branches $r(\gt{X}_t,\bar{\sigma}_i(t))$ are constant for $t\in T$. Then for $t\not =0$ we have
$$ \delta(\gt{X}_0) = \delta(\gt{X}_t) = 0. $$
Hence $\gt{X}_0$ is normal. It follows that $f$ is equinormalizable. On the other hand, the equinormalizability of $f$ over the smooth base space $(\c,0)$ implies that for every $t\in T$ and for each $i=1,\cdots, r$, the induced map of $\nu$ on the fibers $\nu_t: (\gt{X}_t, \bar{\sigma}_i(t)) \mtn (X_t,\sigma(t))$ is the normalization of the corresponding irreducible component of $(X_t,\sigma(t))$.
It follows that the number of irreducible components of $(X_t, \sigma(t))$ is equal to the cardinality of $\nu^{-1}(\sigma(t))$, which is equal to $r$ for every $t\in T$. Hence $r(X_t,\sigma(t))$ is constant for every $t\in T$. It follows that $f$ admits a weak simultaneous resolution, and we have (1).
\epf
\ex \rm Let us consider again the curve singularity $ (X_0,0)\subseteq (\c^4,0)$ from Example \ref{ex4.1}, which is defined by the ideal
$$I_0:= \seq{x^2 - y^3,z,w} \cap \seq{x,y,w} \cap \seq{x,y,z,w^2} \subseteq \c\{x,y,z,w\}.$$
Now we consider the restriction $f: (X,0)\mtn (\c,0)$ of the projection $\pi:(\c^5,0)\mtn (\c,0), ~ (x,y,z,w,t)\mapsto t,$ to the complex germ $(X,0)$ defined by the ideal
$$I=\seq{x^2-y^3+ty^2,z,w} \cap \seq{x,y,w-t}\subseteq \c\{x,y,z,w,t\}.$$
We can check the following (all of which can be verified easily with SINGULAR):
\ite
\item[(1)] $f$ is flat;
\item[(2)] $(X,0)$ is reduced and pure $2$-dimensional, with two 2-dimensional irreducible components;
\item[(3)] $f$ is $\delta$-constant with $\delta(X_t) = 1$ for all $t\in \c$ close to $0$;
\item[(4)] $r(X_t)=2$ for all $t\in \c$ close to $0$;
\item[(5)] $f$ is equinormalizable;
\item[(6)] the normalization of each component of $(X,0)$ is $(\c^2,0)$, which is smooth.
\hite
By Theorem \ref{thr5.2}, $f$ is topologically trivial.
\eex
\textbf{Acknowledgements.} The author would like to express his gratitude to Professor Gert-Martin Greuel for his valuable discussions, careful proof-reading and a lot of precise comments. He would also like to thank the anonymous referees for their careful proof-reading and suggestions. This research is funded by Vietnam National Foundation for Science and Technology Development
(NAFOSTED) under the grant number 101.99-2013.24. This work is finished during the author's postdoctoral fellowship at the Vietnam
Institute for Advanced Study in Mathematics (VIASM). He thanks VIASM for financial support and hospitality.
\makeatletter
\renewcommand\section{\@startsection{section}{1}%
\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}%
{\normalfont\LARGE\bfseries}}
\makeatother
\makeatletter
\def\@seccntformat#1{%
\protect\textup{%
\protect\@secnumfont
\expandafter\protect\csname format#1\endcsname
\csname the#1\endcsname
\protect\@secnumpunct
}%
}
\newcommand{\formatsection}{\bfseries}
\newcommand{\formatsubsubsection}{\bfseries}
\titleformat*{\subsubsection}{\bfseries}
\newcommand{\sect}
{
\setcounter{equation}{0}
\setcounter{figure}{0}
\section
}
\renewcommand{\theenumi}{\textnormal{A\arabic{enumi}}}
\renewcommand{\labelenumi}{(\theenumi)}
\newcommand{\enum}[1]{\textnormal{(\textbf{#1})}}
\theoremstyle{plain}
\newtheorem{definition}{Definition}[section]
\newtheorem{theorem}[definition]{Theorem}
\newtheorem{lemma}[definition]{Lemma}
\newtheorem{corollary}[definition]{Corollary}
\newtheorem{assumption}[definition]{Assumption}
\newtheorem{remark}[definition]{Remark}
\theoremstyle{definition}
\newtheorem*{remarks}{Remarks}
\newtheorem{properties}[definition]{Properties}
\newtheorem{example}[definition]{Example}
\newtheorem{proposition}[definition]{Proposition}
\allowdisplaybreaks
\begin{document}
\title[Freezing Traveling and Rotating Waves\\in Second Order Evolution Equations]{Freezing Traveling and Rotating Waves\\in Second Order Evolution Equations}
\setlength{\parindent}{0pt}
\begin{center}
\normalfont\huge\bfseries{\shorttitle}\\
\vspace*{0.25cm}
\end{center}
\vspace*{0.8cm}
\noindent
\begin{minipage}[t]{0.99\textwidth}
\begin{minipage}[t]{0.48\textwidth}
\hspace*{1.8cm}
\textbf{Wolf-J{\"u}rgen Beyn}\footnotemark[1]${}^{,}$\footnotemark[4] \\
\hspace*{1.8cm}
\textbf{Denny Otten}\footnotemark[2]${}^{,}$\footnotemark[4] \\
\hspace*{1.8cm}
Department of Mathematics \\
\hspace*{1.8cm}
Bielefeld University \\
\hspace*{1.8cm}
33501 Bielefeld \\
\hspace*{1.8cm}
Germany
\end{minipage}
\begin{minipage}[t]{0.48\textwidth}
\hspace*{1.8cm}
\textbf{Jens Rottmann-Matthes}\footnotemark[3]${}^{,}$\footnotemark[5] \\
\hspace*{1.8cm}
Institut für Analysis \\
\hspace*{1.8cm}
Karlsruhe Institute of Technology \\
\hspace*{1.8cm}
76131 Karlsruhe \\
\hspace*{1.8cm}
Germany
\end{minipage}
\end{minipage}\\
\footnotetext[1]{e-mail: \textcolor{blue}{[email protected]}, phone: \textcolor{blue}{+49 (0)521 106 4798}, \\
fax: \textcolor{blue}{+49 (0)521 106 6498}, homepage: \url{http://www.math.uni-bielefeld.de/~beyn/AG\_Numerik/}.}
\footnotetext[2]{e-mail: \textcolor{blue}{[email protected]}, phone: \textcolor{blue}{+49 (0)521 106 4784}, \\
fax: \textcolor{blue}{+49 (0)521 106 6498}, homepage: \url{http://www.math.uni-bielefeld.de/~dotten/}.}
\footnotetext[3]{e-mail: \textcolor{blue}{[email protected]}, phone: \textcolor{blue}{+49 (0)721 608 41632}, \\
fax: \textcolor{blue}{+49 (0)721 608 46530}, homepage: \url{http://www.math.kit.edu/iana2/~rottmann/}.}
\footnotetext[4]{supported by CRC 701 'Spectral Structures and Topological Methods in Mathematics', Bielefeld University}
\footnotetext[5]{supported by CRC 1173 'Wave Phenomena: Analysis and Numerics', Karlsruhe Institute of Technology}
\vspace*{0.6cm}
\noindent
\hspace*{4.2cm}
Date: \today
\normalparindent=12pt
\vspace{0.4cm}
\begin{center}
\begin{minipage}{0.9\textwidth}
{\small
\textbf{Abstract.}
In this paper we investigate the implementation of the so-called {\it freezing method} for second order wave equations in one and several space dimensions.
The method converts the given PDE into a partial differential algebraic
equation which is then solved numerically. The reformulation aims at
separating the motion of a solution into a co-moving frame
and a profile which varies as little as possible.
Numerical examples demonstrate the feasibility of this approach for
semilinear wave equations with sufficient damping. We treat the case of a traveling
wave in one space dimension and of a rotating wave in two space dimensions.
In addition, we investigate in arbitrary space dimensions
the point spectrum and the essential spectrum of operators obtained by linearizing
about the profile, and we indicate the consequences for the nonlinear stability of
the wave.
}
\end{minipage}
\end{center}
\noindent
\textbf{Key words.} Systems of damped wave equations, traveling waves, rotating waves, freezing method, second order evolution equations, point spectra, essential spectra.
\noindent
\textbf{AMS subject classification.} 35K57, 35Pxx, 65Mxx (35Q56, 47N40, 65P40).
\sect{Introduction}
\label{sec:1}
The topic of this paper is the numerical computation and stability of
waves occurring in nonlinear second order evolution equations with damping terms. Our main object of study is the damped wave equation
in one or several space dimensions with a nonlinearity of semilinear
type (see \eqref{equ:1.1}, \eqref{equ:1.6} below).
In the literature there are many approaches to the numerical solution of the Cauchy problem for such equations by various types of spatial and temporal discretizations. We refer, for example,
to the recent papers \cite{Brunner2013}, \cite{Glowinski2014}, \cite{Alonso2015}, \cite{Rincon2016}.
Most of the results concern finite time error estimates,
and a few studies address the detection of blow-up solutions
or the shape of a developing solitary wave.
In our work we take a different numerical approach which
emphasizes the longtime behavior and tries to determine the
shape and speed of traveling and rotating waves from a reformulation of the
original PDE.
More specifically, we transfer the so called {\it freezing method}
(see \cite{BeynThuemmler2004}, \cite{RowleyKevrekidisMarsden2003},
\cite{BeynOttenRottmannMatthes2013}) from first order to second order
evolution equations, and we investigate its relation to the stability
of the waves. Generally speaking, the method tries to separate the solution
of a Cauchy problem into the motion of a co-moving frame and of a profile,
where the latter is required to vary as little as possible or even become
stationary. This is achieved by transforming the original PDE into a
partial differential algebraic equation (PDAE). The PDAE involves extra
unknowns specifying the frame, and extra constraints (so called
{\it phase conditions}) enforcing the freezing principle for the profile.
This methodology has been successfully applied to a wide range of PDEs
which are of first order in time and of hyperbolic, parabolic or of mixed
type, cf. \cite{Thuemmler2006}, \cite{Thuemmler2008}, \cite{Thuemmler2008b},
\cite{BeynThuemmlerSelle2008}, \cite{RottmannMatthes2012a}, \cite{RottmannMatthes2012b},
\cite{RottmannMatthes2012c}, \cite{BeynOttenRottmannMatthes2013}. One aim
of the theoretical underpinning is to prove that waves which are (asymptotically)
stable with asymptotic phase for the PDE, become stable in the classical
Lyapunov sense for the PDAE. While this has been rigorously proved
for many systems in one space dimension and confirmed numerically
in higher space dimensions, the corresponding theory for the multi-dimensional
case is still in its early stages, see \cite{BeynLorenz2008},
\cite{BeynOtten2016}, \cite{BeynOtten2016b}, \cite{Otten2015}.
In this paper we develop the freezing formulation and perform
the spectral calculations in an informal way, for the one-dimensional
as well as the multi-dimensional case. Rigorous stability results for
the one-dimensional damped wave equation may be found in
\cite{GallayRaugel1997}, \cite{GarrayJoly2009},
\cite{beynottenrottmann-matthes2016}.
Here we consider a nonlinear wave equation of the form
\begin{equation}
\label{equ:1.1}
Mu_{tt} = Au_{xx} + f(u,u_x,u_t),\,x\in\R,\,t\geqslant 0,
\end{equation}
where $u(x,t)\in \R^m, A,M \in \R^{m,m}$ and $f:\R^{3m}\to \R^m$ is
sufficiently smooth. In addition, we assume the matrix $M$ to be nonsingular
and $M^{-1}A$ to be positive diagonalizable, which will lead to local
wellposedness of the Cauchy problem associated with \eqref{equ:1.1}.
Our interest is in traveling waves
\begin{equation*}
u_{\star}(x,t) = v_{\star}(x-\mu_{\star}t),\,x\in\R,\,t\geqslant 0,
\end{equation*}
with constant limits at $\pm \infty$, i.e.
\begin{equation}
\label{equ:1.3}
\lim_{\xi\to\pm\infty}v_{\star}(\xi)=v_{\pm}\in\R^m,\;\lim_{\xi\to\pm\infty}v_{\star,\xi}(\xi)=0,\quad f(v_{\pm},0,0)=0.
\end{equation}
Transforming \eqref{equ:1.1} into a co-moving frame via
$u(x,t)=v(\xi,t), \xi=x-\mu_{\star}t$ leads to the system
\begin{equation}
\label{equ:1.4}
Mv_{tt} = (A-\mu_{\star}^2 M)v_{\xi\xi} + 2\mu_{\star}Mv_{\xi t} + f(v,v_{\xi},v_t-\mu_{\star}v_{\xi}),\,\xi\in\R,\,t\geqslant 0.
\end{equation}
This system has $v_{\star}$ as a steady state,
\begin{equation}
\label{equ:1.5}
0 = (A-\mu_{\star}^2 M)v_{\star,\xi\xi} + f(v_{\star},v_{\star,\xi},-\mu_{\star}v_{\star,\xi}),\,\xi\in\R.
\end{equation}
In Section \ref{sec:2} we work out the details of the freezing PDAE
based on the ansatz $u(x,t)=v(x-\gamma(t),t)$, $x\in \R, t\ge 0$ with the
additional unknown function $\gamma(t), t\ge 0$. Solving this PDAE
numerically will then be demonstrated for a special semilinear case,
for which damping occurs and for which the nonlinearity is of quintic
type with $5$ zeros.
We will also discuss in Section \ref{subsec:2.2} the spectral properties
of the linear operator obtained by linearizing the right-hand side of
\eqref{equ:1.4} about the profile $v_{\star}$. First, there is the eigenvalue
zero due to shift equivariance, and then we analyze the dispersion curves
which are part of the operator's essential spectrum. If there is sufficient
damping in the system (depending on the derivative $D_3f$), one can expect
the whole nonzero spectrum to lie strictly to the left of the imaginary axis.
We refer to \cite{beynottenrottmann-matthes2016} for a rigorous proof of
nonlinear stability in such a situation, both stability of the wave with
asymptotic phase for equation \eqref{equ:1.4} and Lyapunov stability of
the wave and its speed for the freezing equation.
The subsequent section is devoted to study corresponding problems
for multi-dimensional wave equations
\begin{equation}
\label{equ:1.6}
M u_{tt} + Bu_t= A \Delta u +f(u),\,x\in \R^d,\,t\geqslant 0,
\end{equation}
where the matrices $A,M$ are as above, the damping matrix $B \in \R^{m,m}$ is
given and $f:\R^m\to \R^m$ is again
sufficiently smooth. We look for rotating waves of the form
\begin{equation*}
u_{\star}(x,t) = v_{\star}(e^{-tS_{\star}}(x-x_{\star})),\,x\in\R^d,\,t\geqslant 0,
\end{equation*}
where $x_{\star}\in \R^d$ denotes the center of rotation, $S_{\star}\in \R^{d,d}$
is a skew-symmetric matrix, and $v_{\star}:\R^d \to \R^m$ describes
the profile. Transforming \eqref{equ:1.6} into a co-rotating frame
via $u(x,t)=v(e^{-tS_{\star}}(x-x_{\star}),t)$ now leads to the equation
\begin{equation}
\label{equ:1.8}
\begin{aligned}
Mv_{tt}+Bv_t =& A\triangle v - Mv_{\xi\xi}(S_{\star}\xi)^2 +2Mv_{\xi t}S_{\star}\xi - Mv_{\xi}S_{\star}^2\xi
+ Bv_{\xi}S_{\star}\xi + f(v),\,\xi\in\R^d,\,t\geqslant 0,
\end{aligned}
\end{equation}
where our notation for derivatives uses multilinear calculus, e.g.
\begin{align}
\label{equ:1.8a}
(v_{\xi\xi}h_1 h_2)_i = \sum_{j=1}^{d}\sum_{k=1}^{d}v_{i,\xi_j\xi_k}(h_1)_j(h_2)_k,\quad (\triangle v)_i = \sum_{j=1}^{d}v_{i,\xi_j\xi_j} = \sum_{j=1}^{d}v_{i,\xi\xi}(e^j)^2.
\end{align}
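In coordinates, these contractions are plain tensor sums and can be checked with a short script (our own illustration; the array layout $v_{\xi\xi}\in\R^{m\times d\times d}$ is a convention we choose here, not taken from the text):

```python
import numpy as np

# Multilinear derivative notation of (1.8a): store the second derivative
# v_{xi xi} as an array of shape (m, d, d), v2[i, j, k] = v_{i, xi_j xi_k}.
m, d = 3, 2
rng = np.random.default_rng(0)
v2 = rng.standard_normal((m, d, d))
v2 = 0.5 * (v2 + v2.transpose(0, 2, 1))      # Hessians are symmetric
h1, h2 = rng.standard_normal(d), rng.standard_normal(d)

# (v_{xi xi} h1 h2)_i = sum_{j,k} v_{i, xi_j xi_k} (h1)_j (h2)_k
bilinear = np.einsum('ijk,j,k->i', v2, h1, h2)

# (Delta v)_i = sum_j v_{i, xi_j xi_j} = sum_j v_{xi xi} (e^j)^2
laplacian = np.einsum('ijj->i', v2)
unit_sum = sum(np.einsum('ijk,j,k->i', v2, e, e) for e in np.eye(d))
print(np.allclose(laplacian, unit_sum))      # True: the two formulas agree
```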
The profile $v_{\star}$ of the wave is then a steady state solution of \eqref{equ:1.8}, i.e.
\begin{equation}
\label{equ:1.9}
0 = A\triangle v_{\star} - Mv_{\star,\xi\xi}(S_{\star}\xi)^2-
Mv_{\star,\xi}S_{\star}^2\xi +Bv_{\star,\xi}S_{\star}\xi + f(v_{\star}),\,\xi\in\R^d.
\end{equation}
As is known from first order in time PDEs, there are several eigenvalues of
the linearized operator on the imaginary axis caused by the Euclidean symmetry,
see e.g. \cite{Metafune2001}, \cite{MetafunePallaraPriola2002},
\cite{FiedlerScheel2003}, \cite{BeynLorenz2008}, \cite{Otten2014}.
The computations become more involved for the wave equation \eqref{equ:1.8},
but we will show that the eigenvalues on the imaginary axis are
the same as in the parabolic case.
Further, determining the dispersion relation, and thus curves
in the essential spectrum, now amounts to solving a parameterized quadratic
eigenvalue problem which in general can only be solved numerically. Finally,
we present a numerical example of a rotating wave for the cubic-quintic
Ginzburg-Landau equation. The performance of the freezing method will be
demonstrated, and we investigate the numerical eigenvalues approximating
the point spectrum on (and close to) the imaginary axis as well as the
essential spectrum in the left half-plane.
\sect{Traveling waves in one space dimension}
\label{sec:2}
\subsection{Freezing traveling waves.}
\label{subsec:2.1}
Consider the Cauchy problem associated with \eqref{equ:1.1}
\begin{subequations}
\label{equ:2.1}
\begin{align}
& Mu_{tt} = Au_{xx} + f(u,u_x,u_t), &&\,x\in\R,\,t\geqslant 0, \label{equ:2.1a} \\
& u(\cdot,0) = u_0,\quad u_t(\cdot,0) = v_0, &&\,x\in\R,\,t=0, \label{equ:2.1b}
\end{align}
\end{subequations}
for some initial data $u_0,v_0:\R\rightarrow\R^m$ and some nonlinearity $f\in C^3(\R^{3m},\R^m)$.
Introducing new unknowns $\gamma(t)\in\R$ and $v(\xi,t)\in\R^m$ via the \begriff{freezing ansatz for traveling waves}
\begin{equation}
\begin{aligned}
\label{equ:2.2}
u(x,t) & = v(\xi,t),\quad\xi:=x-\gamma(t),\,x\in\R,\,t\geqslant 0,
\end{aligned}
\end{equation}
and inserting \eqref{equ:2.2} into \eqref{equ:2.1a} by taking
\begin{align}
\label{equ:2.3}
u_t = -\gamma_t v_{\xi}+v_t,\quad u_{tt}=-\gamma_{tt}v_{\xi}+\gamma_t^2v_{\xi\xi}-2\gamma_t v_{\xi t} + v_{tt}
\end{align}
into account, we obtain the equation
\begin{equation}
\label{equ:2.4}
Mv_{tt} = (A-\gamma_t^2 M)v_{\xi\xi} + 2\gamma_t M v_{\xi t} + \gamma_{tt}M v_{\xi} + f(v,v_{\xi},v_t-\gamma_t v_{\xi}),\;\xi\in\R,\,t\geqslant 0.
\end{equation}
Now it is convenient to introduce time-dependent functions $\mu_1(t)\in\R$ and $\mu_2(t)\in\R$ via
\begin{equation*}
\mu_1(t):=\gamma_t(t),\quad \mu_2(t):=\mu_{1,t}(t)=\gamma_{tt}(t)
\end{equation*}
which allows us to transfer \eqref{equ:2.4} into a coupled PDE/ODE-system
\begin{subequations}
\label{equ:2.5}
\begin{align}
&Mv_{tt} = (A-\mu_1^2 M)v_{\xi\xi} + 2\mu_1 M v_{\xi t} + \mu_2 M v_{\xi} + f(v,v_{\xi},v_t-\mu_1 v_{\xi}), &&\xi\in\R,\,t\geqslant 0, \label{equ:2.5a}\\
&\mu_{1,t} = \mu_2, &&t\geqslant 0,\label{equ:2.5b}\\
&\gamma_t = \mu_1, &&t\geqslant 0.\label{equ:2.5c}
\end{align}
\end{subequations}
The quantity $\gamma(t)$ denotes the \begriff{position}, $\mu_1(t)$ the \begriff{velocity} and $\mu_2(t)$ the \begriff{acceleration} of the profile $v(\xi,t)$ at time $t$.
We next specify initial data for the system \eqref{equ:2.5} as follows,
\begin{equation}
\label{equ:2.6}
v(\cdot,0) = u_0,\quad v_t(\cdot,0)=v_0+\mu_1^0 u_{0,\xi},\quad \mu_1(0)=\mu_1^0,\quad \gamma(0)=0.
\end{equation}
Note that if we require $\gamma(0)=0$ and $\mu_1(0)=\mu_1^0$, then the first equation in \eqref{equ:2.6} follows from \eqref{equ:2.2} and \eqref{equ:2.1b},
while the second equation in \eqref{equ:2.6} follows from \eqref{equ:2.3}, \eqref{equ:2.1b} and \eqref{equ:2.5c}. Suitable values for $\mu_1^0$ depend on
the choice of phase condition to be discussed next.
We compensate the extra variable $\mu_2$ in the system \eqref{equ:2.5} by imposing an additional scalar algebraic constraint, also known as a phase condition,
of the general form
\begin{equation}
\label{equ:2.7}
\psi(v,v_t,\mu_1,\mu_2) = 0,\;t\geqslant 0.
\end{equation}
Two possible choices are the \begriff{fixed phase condition} $\psi_{\mathrm{fix}}$ and the \begriff{orthogonal phase condition} $\psi_{\mathrm{orth}}$ given by
\begin{align}
\psi_{\mathrm{fix}}(v) = \langle v-\hat{v},\hat{v}_{\xi}\rangle_{L^2},\;t\geqslant 0, \label{equ:2.8} \\
\psi_{\mathrm{orth}}(v_t) = \langle v_t,v_{\xi}\rangle_{L^2},\;t\geqslant 0. \nonumber
\end{align}
These two types and their derivation are discussed in \cite{beynottenrottmann-matthes2016}. The function $\hat{v}:\R\rightarrow\R^m$ denotes a
time-independent and sufficiently smooth template (or reference) function, e.g. $\hat{v}=u_0$. Suitable values for $\mu_1(0)=\mu_1^0$ can be derived
from requiring consistent initial values for the PDAE. For example, consider \eqref{equ:2.8} and take the time derivative at $t=0$. Together with \eqref{equ:2.6}
this leads to $0=\langle v_t(\cdot,0),\hat{v}_{\xi}\rangle_{L^2}=\langle v_0,\hat{v}_{\xi}\rangle_{L^2}+\mu_1^0\langle u_{0,\xi},\hat{v}_{\xi}\rangle_{L^2}$.
If $\langle u_{0,\xi},\hat{v}_{\xi}\rangle_{L^2}\neq 0$ this determines a unique value for $\mu_1^0$.
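For the fixed phase condition this consistency computation is a one-liner; the following sketch (our own illustration, discretizing the $L^2$ products by Riemann sums on a uniform grid, with template $\hat{v}=u_0$) reproduces $\mu_1^0=0$ for initial data with $v_0=0$:

```python
import numpy as np

# Consistent initial speed mu_1^0 from d/dt psi_fix(v(t)) = 0 at t = 0:
#   0 = <v_0, vhat_xi> + mu_1^0 <u_{0,xi}, vhat_xi>.
# Our own discretization: Riemann sums on a uniform grid, vhat = u_0.
x = np.linspace(-50.0, 50.0, 2001)
dx = x[1] - x[0]
u0 = 0.5 * (1.0 + np.tanh(x / 2.0))
u0x = np.gradient(u0, x)
vhatx = u0x                                   # template vhat = u_0

def mu1_0(v0):
    inner = lambda f, g: np.sum(f * g) * dx   # crude L^2 inner product
    return -inner(v0, vhatx) / inner(u0x, vhatx)

print(mu1_0(np.zeros_like(x)) == 0.0)         # True: mu_1^0 = 0 for v_0 = 0
```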
Let us summarize the set of equations obtained by the freezing method of the original Cauchy problem \eqref{equ:2.1}. Combining the differential equations
\eqref{equ:2.5}, the initial data \eqref{equ:2.6} and the phase condition \eqref{equ:2.7}, we arrive at the following partial differential algebraic evolution
equation (short: PDAE) to be solved numerically:
\begin{subequations}
\label{equ:2.10}
\begin{align}
\label{equ:2.10a}
&\begin{aligned}
Mv_{tt} &= (A-\mu_1^2 M)v_{\xi\xi} + 2\mu_1 M v_{\xi t} + \mu_2 M v_{\xi} + f(v,v_{\xi},v_t-\mu_1 v_{\xi}),\\
\mu_{1,t} &= \mu_2, \quad \gamma_t=\mu_1,
\end{aligned}&t\geqslant 0,\\
& 0 = \psi(v,v_t,\mu_1,\mu_2), &t\geqslant 0,\label{equ:2.10b}\\
&\begin{aligned}
v(\cdot,0) &= u_0,\quad v_t(\cdot,0) = v_0+\mu_1^0 u_{0,\xi}, \quad \mu_1(0) = \mu_1^0, \quad \gamma(0) = 0.
\end{aligned}\label{equ:2.10c}
\end{align}
\end{subequations}
The system \eqref{equ:2.10} depends on the choice of phase condition $\psi$ and is to be solved for $(v,\mu_1,\mu_2,\gamma)$
with given initial data $(u_0,v_0,\mu_1^0)$. It consists of a PDE for $v$ coupled to two ODEs for $\mu_1$ and $\gamma$ in
\eqref{equ:2.10a}, and an algebraic constraint \eqref{equ:2.10b} which closes the system. A consistent initial value $\mu_1^0$
for $\mu_1$ is computed from the phase condition and the initial data. Further initialization of the algebraic variable $\mu_2$
is usually not needed for a PDAE-solver but can be provided if necessary (see \cite{beynottenrottmann-matthes2016}).
The ODE for $\gamma$ is called the \begriff{reconstruction equation} in \cite{RowleyKevrekidisMarsden2003}. It decouples from
the other equations in \eqref{equ:2.10} and can be solved in a postprocessing step. The ODE for $\mu_1$ is the new feature of
the PDAE for second order systems when compared to the first order parabolic and hyperbolic equations in
\cite{BeynThuemmler2004,RottmannMatthes2010,BeynOttenRottmannMatthes2013}.
Finally, note that $(v,\mu_1,\mu_2)=(v_{\star},\mu_{\star},0)$ satisfies
\begin{align*}
0 & = (A-\mu_{\star}^2 M)v_{\star,\xi\xi} + f(v_{\star},v_{\star,\xi},-\mu_{\star}v_{\star,\xi}),\;\xi\in\R,\\
0 & = \mu_2, \\
0 & = \psi(v_{\star},0,\mu_{\star},0),
\end{align*}
and hence is a stationary solution of \eqref{equ:2.10a},\eqref{equ:2.10b}. Here we assume that $v_{\star},\mu_{\star}$ have been selected
to satisfy the phase condition. Obviously, in this case we have $\gamma(t)=\mu_{\star}t$.
For a stable traveling wave we expect that solutions $(v,\mu_1,\mu_2,\gamma)$ of \eqref{equ:2.10} show the limiting behavior
\begin{align*}
v(t)\rightarrow v_{\star},\quad \mu_1(t)\rightarrow\mu_{\star},\quad \mu_2(t)\rightarrow 0\quad\text{as}\quad t\to\infty,
\end{align*}
provided the initial data are close to their limiting values.
\begin{example}[Freezing quintic Nagumo wave equation]\label{exa:1}
Consider the quintic Nagumo wave equation,
\begin{equation}
\label{equ:2.22}
\varepsilon u_{tt} = A u_{xx} + f(u,u_x,u_t),\; x\in \R,\, t\geqslant 0,
\end{equation}
with $u=u(x,t)\in\R$, $\varepsilon>0$, $0<\alpha_1<\alpha_2<\alpha_3<1$,
and the nonlinear term
\begin{equation} \label{equ:2.22a}
f:\R^3\rightarrow\R,\quad f(u,u_x,u_t)=-u_t+u(1-u)\prod_{j=1}^{3}(u-\alpha_j).
\end{equation}
\begin{figure}[ht]
\centering
\subfigure[]{\includegraphics[height=3.8cm] {Paper_DampedQuinticNagumo_1D_TravelingFront_Nonfrozen_TravelingFront.png}\label{fig:2.1a}}
\subfigure[]{\includegraphics[height=3.8cm] {Paper_DampedQuinticNagumo_1D_TravelingFront_Nonfrozen_SpaceTime.png} \label{fig:2.1b}}
\caption{Traveling front of quintic Nagumo wave equation \eqref{equ:2.22} at different time instances (a) and its time evolution (b) for parameters from \eqref{equ:2.23}.}
\label{fig:2.1}
\end{figure}
For the parameter values
\begin{equation}
\label{equ:2.23}
M=\varepsilon=\frac{1}{2},\quad A=1,\quad\alpha_1=\frac{2}{5},\quad \alpha_2=\frac{1}{2},\quad \alpha_3=\frac{17}{20},
\end{equation}
equation \eqref{equ:2.22} admits a traveling front solution connecting the asymptotic states $v_-=0$ and $v_+=1$.
Figure \ref{fig:2.1} shows a numerical simulation of the solution $u$ of \eqref{equ:2.22} on the spatial domain $(-50,50)$ with homogeneous
Neumann boundary conditions, with initial data
\begin{align}
\label{equ:2.24}
u_0(x)=\tfrac{1}{2}\left(1+\tanh\left(\tfrac{x}{2}\right)\right),\quad v_0(x)=0
\end{align}
and parameters taken from \eqref{equ:2.23}. For the space discretization we use continuous piecewise linear finite
elements with spatial stepsize $\triangle x=0.1$. For the time discretization we use the BDF method of order $2$
with absolute tolerance $\mathrm{atol}=10^{-3}$, relative tolerance $\mathrm{rtol}=10^{-2}$, temporal stepsize $\triangle t=0.2$
and final time $T=800$. Computations are performed with the help of the software COMSOL 5.2.
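The traveling front can also be reproduced with a few lines of plain Python (our own coarse sketch: second-order finite differences and a first-order semi-implicit time stepping instead of the finite element/BDF2 setup used above; grid sizes and the level-set speed measurement are our choices):

```python
import numpy as np

# Coarse direct simulation of the damped quintic Nagumo wave equation
#   eps*u_tt = u_xx - u_t + u(1-u)(u-a1)(u-a2)(u-a3)
# with parameters (2.23), Neumann boundary conditions, and initial data
# (2.24). Symplectic-Euler-type step for (u, u_t), damping implicit.
eps = 0.5
a1, a2, a3 = 0.4, 0.5, 0.85
N = 401
x = np.linspace(-50.0, 50.0, N)
dx = x[1] - x[0]
dt = 0.02

def g(u):
    return u * (1.0 - u) * (u - a1) * (u - a2) * (u - a3)

def lap(u):                     # second difference, Neumann boundaries
    uxx = np.empty_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    uxx[0] = 2.0 * (u[1] - u[0]) / dx**2
    uxx[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return uxx

def front(u):                   # position of the level u = 0.5
    i = int(np.argmax(u >= 0.5))
    return x[i - 1] + dx * (0.5 - u[i - 1]) / (u[i] - u[i - 1])

u = 0.5 * (1.0 + np.tanh(x / 2.0))   # u_0, front initially at x = 0
w = np.zeros_like(u)                 # w = u_t, v_0 = 0
pos = {}
for n in range(1, 20001):            # integrate up to t = 400
    w = (w + (dt / eps) * (lap(u) + g(u))) / (1.0 + dt / eps)
    u = u + dt * w
    if n in (10000, 20000):
        pos[n] = front(u)

speed = (pos[20000] - pos[10000]) / (10000 * dt)
print(f"front speed ~ {speed:.3f}")  # roughly the reported 0.0709
```

After a short transient the 0.5-level of $u$ moves to the right with a nearly constant speed close to the value observed in the frozen computation below.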
Let us now consider the frozen quintic Nagumo wave equation resulting from \eqref{equ:2.10}
\begin{subequations}
\label{equ:2.25}
\begin{align}
\label{equ:2.25a}
&\begin{aligned}
\varepsilon v_{tt} + v_t &= (1-\mu_1^2 \varepsilon)v_{\xi\xi} +
2\mu_1 \varepsilon v_{\xi t} +
(\mu_2 \varepsilon + \mu_1)v_{\xi} + \tilde{f}(v),\\
\mu_{1,t} &= \mu_2, \quad \gamma_t=\mu_1,
\end{aligned}&t\geqslant 0,\\
& 0 = \bigl\langle v_t(\cdot,t),\hat{v}_\xi\bigr\rangle_{L^2(\R,\R)}, &t\geqslant 0,\label{equ:2.25b}\\
&\begin{aligned}
v(\cdot,0) &= u_0,\quad v_t(\cdot,0) = v_0+\mu_1^0 u_{0,\xi}, \quad
\mu_1(0) = \mu_1^0, \quad
\gamma(0) = 0.
\end{aligned}\label{equ:2.25c}
\end{align}
\end{subequations}
\begin{figure}[ht]
\centering
\subfigure[]{\includegraphics[height=3.8cm] {Paper_DampedQuinticNagumo_1D_TravelingFront_Frozen_Profile.png}\label{fig:2.2a}}
\subfigure[]{\includegraphics[height=3.8cm] {Paper_DampedQuinticNagumo_1D_TravelingFront_Frozen_Velocity.png} \label{fig:2.2b}}
\subfigure[]{\includegraphics[height=3.8cm] {Paper_DampedQuinticNagumo_1D_TravelingFront_Frozen_SpaceTime.png}\label{fig:2.2c}}
\caption{Solution of the frozen quintic Nagumo wave equation \eqref{equ:2.25}: approximation of profile $v(x,1000)$ (a) and time evolutions of velocity $\mu_1$
and acceleration $\mu_2$ (b) and of the profile $v$ (c) for parameters from \eqref{equ:2.23}.}
\label{fig:2.2}
\end{figure}
Figure \ref{fig:2.2} shows the solution $(v,\mu_1,\mu_2,\gamma)$ of \eqref{equ:2.25} on the spatial domain $(-50,50)$ with homogeneous Neumann
boundary conditions, initial data $u_0$, $v_0$ from \eqref{equ:2.24}, and reference function $\hat{v}=u_0$. For the computation we used the
fixed phase condition $\psi_{\mathrm{fix}}(v)$ from \eqref{equ:2.8} with consistent initial data $\mu_1^0=0$, see above.
The spatial discretization data are taken as in the nonfrozen case. For the time discretization we used the BDF method of order $2$
with absolute tolerance $\mathrm{atol}=10^{-3}$, relative tolerance $\mathrm{rtol}=10^{-2}$, temporal stepsize $\triangle t=0.6$
and final time $T=3000$. The diagrams show that after a very short transition phase the profile becomes stationary, the acceleration $\mu_2$
converges to zero, and the speed $\mu_1$ approaches an asymptotic value $\mu_{\star}^{\mathrm{num}}= 0.0709$.
\end{example}
\subsection{Spectra of traveling waves.}
\label{subsec:2.2}
Consider the linearized equation
\begin{equation}
\label{equ:2.11}
M v_{tt} - (A-\mu_{\star}^2 M)v_{\xi\xi} - 2\mu_{\star}Mv_{\xi t} - (D_2 f_{\star} - \mu_{\star} D_3 f_{\star})v_{\xi} - D_3 f_{\star} v_t - D_1 f_{\star} v = 0
\end{equation}
which is obtained from the co-moving frame \eqref{equ:1.4} linearized at the profile $v_{\star}$. In \eqref{equ:2.11} we use the short form
$D_j f_{\star}=D_j f(v_{\star},v_{\star,\xi},-\mu_{\star}v_{\star,\xi})$.
Looking for solutions of the form $v(\xi,t)=e^{\lambda t}w(\xi)$ to \eqref{equ:2.11} yields the quadratic eigenvalue problem
\begin{equation}
\label{equ:2.12}
\mathcal{P}(\lambda)w = \left(\lambda^2 P_2 + \lambda P_1 + P_0\right)w = 0,\,\xi\in\R
\end{equation}
with differential operators $P_j$ defined by
\begin{align*}
P_2 = M,\quad
P_1 = -2\mu_{\star}M\partial_\xi - D_3 f_{\star},\quad
P_0 = -(A-\mu_{\star}^2 M)\partial^2_\xi - (D_2 f_{\star} - \mu_{\star}D_3 f_{\star})\partial_\xi - D_1 f_{\star}.
\end{align*}
We are interested in solutions $(\lambda,w)$ of \eqref{equ:2.12} which are
candidates for eigenvalues $\lambda\in\C$ and eigenfunctions $w:\R\to\C^m$
in suitable function spaces.
In fact, it is usually impossible to determine the spectrum $\sigma(\mathcal{P})$ analytically, but one is able to analyze certain
subsets. Let us first calculate the symmetry set $\sigma_{\mathrm{sym}}(\mathcal{P})$, which belongs to the point spectrum $\sigma_{\mathrm{pt}}(\mathcal{P})$
and is affected by the underlying group symmetries. Then, we calculate the dispersion set $\sigma_{\mathrm{disp}}(\mathcal{P})$, which belongs to the
essential spectrum $\sigma_{\mathrm{ess}}(\mathcal{P})$ and is affected by the far-field behavior of the wave. We begin with the symmetry set of $\mathcal{P}$.
This is a simple task for traveling waves but becomes more involved for rotating waves (see Section \ref{subsubsec:3.2.1}).
\subsubsection{Point Spectrum and symmetry set.}
Applying $\partial_\xi$ to the traveling wave equation \eqref{equ:1.5} yields $P_0 v_{\star,\xi}=0$ which proves the following result.
\begin{proposition}[Point spectrum of traveling waves]\label{prop:2.1}
Let $f\in C^1(\R^{3m},\R^m)$ and let $v_{\star}\in C^3(\R,\R^m)$ be a nontrivial classical solution of \eqref{equ:1.5} for some $\mu_{\star}\in\R$.
Then, $w=v_{\star,\xi}$ and $\lambda=0$ is a classical solution of the eigenvalue problem \eqref{equ:2.12}. In particular, the symmetry set
\begin{align*}
\sigma_{\mathrm{sym}}(\mathcal{P})=\{0\}
\end{align*}
belongs to the point spectrum $\sigma_{\mathrm{pt}}(\mathcal{P})$ of $\mathcal{P}$.
\end{proposition}
Of course, a rigorous statement of this kind requires to specify the
function spaces involved, e.g. $L^2(\R,\R^m)$ or $H^1(\R,\R^m)$, see
\cite{GallayRaugel1997}, \cite{GarrayJoly2009}, \cite{beynottenrottmann-matthes2016}.
\subsubsection{Essential Spectrum and dispersion set.}
\label{subsubsec:2.2.2}
\begin{enumerate}[label=\bf{\arabic*.},leftmargin=*]
\item \textbf{The far-field operator.}
It is a well known fact that the essential spectrum is affected by the limiting equation obtained from \eqref{equ:2.12} as $\xi\to\pm\infty$.
Therefore, we let formally $\xi\to\pm\infty$ in \eqref{equ:2.12}
and obtain
\begin{equation}
\label{equ:2.15}
\left(\lambda^2 P_2 + \lambda P_1^{\pm} + P_0^{\pm}\right)w=0,\;\xi\in\R,
\end{equation}
with the constant coefficient operators
\begin{align*}
P_2 = M,\quad
P_1^{\pm} = -2\mu_{\star}M\partial_\xi - D_3 f_{\pm},\quad
P_0^{\pm} = -(A-\mu_{\star}^2 M)\partial^2_\xi - (D_2 f_{\pm} - \mu_{\star}D_3 f_{\pm})\partial_\xi - D_1 f_{\pm},
\end{align*}
where $v_{\pm}$ are from \eqref{equ:1.3} and
$D_j f_{\pm}=D_j f(v_{\pm},0,0)$.
We may then write equation \eqref{equ:2.12} as
\begin{equation*}
\left(\lambda^2 P_2 + \lambda (P_1^{\pm}+Q_1^{\pm}(\xi)) + (P_0^{\pm} + Q_2^{\pm}(\xi)\partial_\xi + Q_3^{\pm}(\xi))\right)w=0,\;\xi\in\R
\end{equation*}
with the perturbation operators defined by
\begin{equation*}
Q_1^{\pm}(\xi)=D_3 f_{\pm}-D_3f_{\star},\;\;
Q_2^{\pm}(\xi)=D_2 f_{\pm} -D_2f_{\star} + \mu_{\star}(D_3 f_{\star}- D_3 f_{\pm}),\;\;
Q_3^{\pm}(\xi)=D_1 f_{\pm}-D_1f_{\star}.
\end{equation*}
Note that $v_{\star}(\xi)\to v_{\pm}$ and $v_{\star,\xi}(\xi)\to 0$ imply $Q_j^{\pm}(\xi)\to 0$ as $\xi\to\pm\infty$ for $j=1,2,3$.
\item \textbf{Spatial Fourier transform.}
For $\omega\in\R$, $z\in\C^m$, $|z|=1$ we apply the spatial Fourier ansatz $w(\xi)=e^{i\omega\xi}z$ to equation \eqref{equ:2.15}, which leads
to the $m$-dimensional quadratic eigenvalue problem
\begin{equation}
\label{equ:2.16}
\left(\lambda^2 A_2 + \lambda A_1^{\pm}(\omega) + A_0^{\pm}(\omega)\right)z = 0
\end{equation}
with matrices $A_2\in\R^{m,m}$ and $A_1^{\pm}(\omega),A_0^{\pm}(\omega)\in\C^{m,m}$ given by
\begin{equation}
\label{equ:2.16a}
A_2 = M,\;
A_1^{\pm}(\omega) = -2i\omega\mu_{\star}M - D_3 f_{\pm},\;
A_0^{\pm}(\omega) = \omega^2(A-\mu_{\star}^2 M) - i\omega(D_2 f_{\pm} - \mu_{\star} D_3 f_{\pm}) - D_1 f_{\pm}.
\end{equation}
\item \textbf{Dispersion relation and dispersion set.} The dispersion relation for traveling waves of second order evolution equations states the following:
Every $\lambda\in\C$ satisfying
\begin{equation}
\label{equ:2.17}
\det\left(\lambda^2 A_2 + \lambda A_1^{\pm}(\omega) + A_0^{\pm}(\omega)\right)=0
\end{equation}
for some $\omega\in\R$ belongs to the essential spectrum of $\mathcal{P}$, i.e. $\lambda\in\sigma_{\mathrm{ess}}(\mathcal{P})$. Solving \eqref{equ:2.17} is equivalent
to finding all zeros of a polynomial of degree $2m$. Note that the limiting case $M=0$ in \eqref{equ:2.17} leads to the dispersion relation for traveling
waves of first order evolution equations, which is well-known in the literature, see \cite{Sandstede2002}.
\end{enumerate}
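For fixed $\omega$, finding the roots of \eqref{equ:2.17} amounts to a quadratic matrix eigenvalue problem, which can be solved by companion linearization. The following Python sketch (our own illustration, not part of the text; it assumes that $A_2=M$ is nonsingular) samples the dispersion set over a grid of real frequencies:

```python
import numpy as np

def quadratic_eigs(A2, A1, A0):
    """Roots of det(lam^2 A2 + lam A1 + A0) = 0 via companion linearization:
    the quadratic problem is rewritten as the linear pencil
    lam [[I, 0], [0, A2]] y = [[0, I], [-A0, -A1]] y with y = (z, lam z).
    Assumes A2 (here: A2 = M) is nonsingular."""
    m = A2.shape[0]
    Z, I = np.zeros((m, m), dtype=complex), np.eye(m, dtype=complex)
    L = np.block([[Z, I], [-A0, -A1]])
    R = np.block([[I, Z], [Z, A2]])
    return np.linalg.eigvals(np.linalg.solve(R, L))

def dispersion_set(A2, A1, A0, omegas):
    """Sample the dispersion set: A1, A0 are callables omega -> matrix,
    cf. (2.16a); returns the 2m eigenvalues for each frequency."""
    return np.concatenate([quadratic_eigs(A2, A1(w), A0(w)) for w in omegas])
```

Plotting the sampled values over a sufficiently large frequency interval traces out the curves of the dispersion set.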
\begin{proposition}[Essential spectrum of traveling waves]\label{prop:2.2}
Let $f\in C^1(\R^{3m},\R^m)$ with $f(v_{\pm},0,0)=0$ for some $v_{\pm}\in\R^m$. Let $v_{\star}\in C^2(\R,\R^m)$, $\mu_{\star}\in\R$
be a nontrivial classical solution of \eqref{equ:1.5} satisfying $v_{\star}(\xi)\to v_{\pm}$ as $\xi\to\pm\infty$. Then, the dispersion set
\begin{align*}
\sigma_{\mathrm{disp}}(\mathcal{P}) = \{\lambda\in\C : \text{$\lambda$ satisfies \eqref{equ:2.17} for some $\omega\in\R$, and $+$ or $-$}\}
\end{align*}
belongs to the essential spectrum $\sigma_{\mathrm{ess}}(\mathcal{P})$ of $\mathcal{P}$.
\end{proposition}
\begin{example}[Spectrum of quintic Nagumo wave equation]\label{exa:2}
As shown in Example \ref{exa:1} the quintic Nagumo wave equation \eqref{equ:2.22} with coefficients and parameters \eqref{equ:2.23}
has a traveling front solution $u_{\star}(x,t)=v_{\star}(x-\mu_{\star}t)$ with velocity $\mu_{\star}\approx 0.0709$, whose
profile $v_{\star}$ connects the asymptotic states $v_{-}=0$ and $v_{+}=1$ according to \eqref{equ:1.3}.\\
We solve numerically the eigenvalue problem for the quintic Nagumo wave equation
\begin{align}
\label{equ:2.18a}
\left(\lambda^2\varepsilon + \lambda\left(-2\mu_{\star}\varepsilon\partial_\xi - D_3 f_{\star}\right) + \left(-(1-\mu_{\star}^2 \varepsilon)\partial^2_\xi
- (D_2 f_{\star} - \mu_{\star}D_3 f_{\star})\partial_\xi - D_1 f_{\star}\right)\right)w = 0.
\end{align}
Both approximations of the profile $v_{\star}$ and the velocity $\mu_{\star}$ in \eqref{equ:2.18a} are chosen from the solution of \eqref{equ:2.25} at time $t=3000$ in Example \ref{exa:1}.
Due to Proposition \ref{prop:2.1} we expect $\lambda=0$ to be an isolated eigenvalue belonging to the point spectrum. Let us next discuss
the dispersion set from Proposition \ref{prop:2.2}. The quintic Nagumo
nonlinearity \eqref{equ:2.22a} satisfies
\begin{equation*}
f_{\pm}=0,\quad D_3f_{\pm}=-1,\quad D_2f_{\pm}=0,\quad D_1f_{-}=-\alpha_1\alpha_2\alpha_3,\quad D_1f_{+}=-\prod_{j=1}^{3}(1-\alpha_j).
\end{equation*}
The matrices $A_2$, $A_1^{\pm}(\omega)$, $A_0^{\pm}(\omega)$ from \eqref{equ:2.16a} of the quadratic problem \eqref{equ:2.16} are given by
\begin{equation*}
A_2=\varepsilon,\quad A_1^{\pm}(\omega)=-2i\omega\mu_{\star}\varepsilon+1,\quad A_0^{\pm}(\omega)=\omega^2(1-\mu_{\star}^2\varepsilon)-i\omega\mu_{\star}-D_1 f_{\pm}.
\end{equation*}
The dispersion relation \eqref{equ:2.17} for the quintic Nagumo front states that every $\lambda\in\C$ satisfying
\begin{equation}
\label{equ:2.20}
\lambda^2\varepsilon + \lambda(-2i\omega\mu_{\star}\varepsilon+1) + (\omega^2(1-\mu_{\star}^2\varepsilon)-i\omega\mu_{\star}-D_1f_{\pm})=0
\end{equation}
for some $\omega\in\R$, and for $+$ or $-$, belongs to $\sigma_{\mathrm{ess}}(\mathcal{P})$. We introduce a new unknown $\tilde{\lambda}\in\C$
via $\lambda=\tilde{\lambda}+i\omega\mu_{\star}$ and solve the transformed equation
\begin{equation*}
\tilde{\lambda}^2 + \frac{1}{\varepsilon}\tilde{\lambda} + \frac{1}{\varepsilon}(\omega^2-D_1 f_{\pm}) = 0
\end{equation*}
obtained from \eqref{equ:2.20}. Thus, the quadratic eigenvalue problem \eqref{equ:2.20} has the solutions
\begin{align*}
\lambda = -\frac{1}{2\varepsilon} + i\omega\mu_{\star} \pm \frac{1}{2\varepsilon}\sqrt{1-4\varepsilon(\omega^2-D_1 f_{\pm})},\,\omega\in\R.
\end{align*}
These solutions lie on the line $\mathrm{Re}\,\lambda=-\frac{1}{2\varepsilon}$ and on two ellipses if $-4D_1 f_{\pm}\varepsilon<1$ (cf. Figure \ref{fig:2.3_alt}(a)).
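As a quick consistency check (our own, outside the text; the values of $\varepsilon$ and $D_1 f_{\pm}$ below are placeholders, not those from \eqref{equ:2.23}, while $\mu_{\star}$ is the computed front velocity), the closed-form roots indeed satisfy \eqref{equ:2.20}, and for frequencies with negative discriminant they lie on the vertical line with real part $-\frac{1}{2\varepsilon}$:

```python
import numpy as np

eps, mu, d1f = 0.25, 0.0709, -0.5   # eps and D_1 f are placeholder values

def lam_root(omega, sign):
    """Closed-form root lam = -1/(2 eps) + i omega mu +/- sqrt(disc)/(2 eps)."""
    disc = np.sqrt(complex(1 - 4*eps*(omega**2 - d1f)))
    return -1/(2*eps) + 1j*omega*mu + sign*disc/(2*eps)

def residual(lam, omega):
    """Left-hand side of the dispersion relation (2.20)."""
    return (lam**2*eps + lam*(-2j*omega*mu*eps + 1)
            + omega**2*(1 - mu**2*eps) - 1j*omega*mu - d1f)

for omega in np.linspace(-5.0, 5.0, 41):
    for sign in (+1, -1):
        lam = lam_root(omega, sign)
        assert abs(residual(lam, omega)) < 1e-10
        if 1 - 4*eps*(omega**2 - d1f) < 0:      # line part of the spectrum
            assert abs(lam.real + 1/(2*eps)) < 1e-12
```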
\begin{figure}[H]
\centering
\subfigure[]{\includegraphics[height=3.9cm] {Paper_DampedQuinticNagumo_1D_Traveling1Front_EssentialSpectrum_View2.png}\label{fig:2.3a_alt}}
\subfigure[]{\includegraphics[height=3.9cm] {Paper_DampedQuinticNagumo_1D_TravelingFront_Frozen_Spectrum_View2.png} \label{fig:2.3b_alt}}
\subfigure[]{\includegraphics[height=3.9cm] {Paper_DampedQuinticNagumo_1D_TravelingFront_Frozen_Spectrum_LargeInterval_View2.png}\label{fig:2.3c_alt}}
\caption{Spectrum of the quintic Nagumo wave equation for parameters \eqref{equ:2.23} (a)
and the numerical spectrum on the spatial domain $[-R,R]$ for $R=50$ (b) and $R=400$ (c) both for spatial stepsize $\triangle x=0.1$.}
\label{fig:2.3_alt}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[]{\includegraphics[height=3.9cm] {Paper_DampedQuinticNagumo_1D_TravelingFront_Frozen_Eigenfunction.png}\label{fig:2.4a}}
\subfigure[]{\includegraphics[height=3.9cm] {Paper_DampedQuinticNagumo_1D_TravelingFront_Frozen_Eigenfunction2.png} \label{fig:2.4b}}
\subfigure[]{\includegraphics[height=3.9cm] {Paper_DampedQuinticNagumo_1D_TravelingFront_Frozen_Spectrum_LargeInterval_View3.png} \label{fig:2.4c}}
\caption{Eigenfunctions of the quintic Nagumo wave equation for parameters \eqref{equ:2.23} belonging to the isolated eigenvalues
$\lambda_1\approx 0$ (a), $\lambda_2\approx -0.011274$ (b), and a zoom into the spectrum from Figure~\ref{fig:2.3_alt}(c) in (c).
\label{fig:2.4}
\end{figure}
Figure \ref{fig:2.3_alt}(a) shows the part of the spectrum of the quintic Nagumo wave which is guaranteed by Proposition \ref{prop:2.1} and \ref{prop:2.2}.
It is subdivided into the symmetry set $\sigma_{\mathrm{sym}}(\mathcal{P})$ (blue circle), which is determined by Proposition \ref{prop:2.1} and belongs to
the point spectrum $\sigma_{\mathrm{pt}}(\mathcal{P})$, and the dispersion set $\sigma_{\mathrm{disp}}(\mathcal{P})$ (red lines), which is determined by Proposition \ref{prop:2.2}
and belongs to the essential spectrum $\sigma_{\mathrm{ess}}(\mathcal{P})$. In general, there may be further essential spectrum in
$\sigma_{\mathrm{ess}}(\mathcal{P})\setminus \sigma_{\mathrm{disp}}(\mathcal{P})$ and further isolated eigenvalues in
$\sigma_{\mathrm{pt}}(\mathcal{P}) \setminus \sigma_{\mathrm{sym}}(\mathcal{P}) $. In fact,
for the quintic
Nagumo wave equation we find an extra eigenvalue with negative real part, cf. Figure \ref{fig:2.4}(c). The numerical spectrum
of the quintic Nagumo wave equation on the spatial domain $[-R,R]$ equipped with periodic boundary conditions is shown in Figure \ref{fig:2.3_alt}(b) for
$R=50$ and in Figure \ref{fig:2.3_alt}(c) for $R=400$. Each of them consists of the approximations of the point spectrum subdivided into the symmetry set
(blue circle) and an additional isolated eigenvalue (blue plus sign), and of the essential spectrum (red dots).
The missing line inside the ellipse in Figure \ref{fig:2.3_alt}(b) gradually appears numerically when enlarging the spatial domain,
see Figure \ref{fig:2.3_alt}(c). The second ellipse only develops on even larger domains.
\end{example}
\sect{Rotating waves in several space dimensions}
\label{sec:3}
\subsection{Freezing rotating waves.}
\label{subsec:3.1}
Consider the Cauchy problem associated with \eqref{equ:1.6}
\begin{subequations}
\label{equ:3.1}
\begin{align}
& Mu_{tt}+Bu_t = A\triangle u + f(u), && \,x\in\R^d,\,t>0, \label{equ:3.1a} \\
& u(\cdot,0) = u_0,\quad u_t(\cdot,0) = v_0, && \,x\in\R^d,\,t=0, \label{equ:3.1b}
\end{align}
\end{subequations}
for some initial data $u_0,v_0:\R^d\rightarrow\R^m$, where $u_0$ denotes the \begriff{initial displacement} and
$v_0$ the \begriff{initial velocity}. Note that, in contrast to the one-dimensional case, the nonlinearity $f(u)$ in the damped wave equation \eqref{equ:3.1}
depends on $u$ only, see \eqref{equ:1.6}; this will simplify some of the computations below.
In the following, let $\SE(d)=\SO(d)\ltimes\R^d$ denote the \begriff{special Euclidean group} and $\SO(d)$ the special orthogonal group.
Let us introduce new unknowns $(Q(t),\tau(t))\in\SE(d)$ and $v(\xi,t)\in\R^m$ via the \begriff{rotating wave ansatz}
\begin{equation}
\begin{aligned}
\label{equ:3.2}
u(x,t) & = v(\xi,t),\quad\xi:=Q(t)^{\top}(x-\tau(t)),\,x\in\R^d,\,t\geqslant 0.
\end{aligned}
\end{equation}
Inserting \eqref{equ:3.2} into \eqref{equ:3.1a} and suppressing arguments
of $u$ and $v$ leads to
\begin{align}
\triangle_x u =& \triangle_{\xi} v,\quad f(u)=f(v), \quad
u_t = v_{\xi}\left(Q_t^{\top}(x-\tau)-Q^{\top} \tau_t\right) + v_t, \label{equ:3.3}\\
u_{tt} =& v_{\xi\xi}\left(Q_t^{\top}(x-\tau)-Q^{\top} \tau_t\right)^2 + v_{\xi}\left(Q_{tt}^{\top}(x-\tau)-2Q_t^{\top} \tau_t - Q^{\top} \tau_{tt}\right) \nonumber \\
+& 2v_{\xi t}\left(Q_t^{\top}(x-\tau)-Q^{\top} \tau_t\right) + v_{tt}.
\nonumber
\end{align}
Hence equation \eqref{equ:3.1a} turns into
\begin{equation}
\begin{aligned}
\label{equ:3.5}
\begin{split}
&Mv_{tt} + Bv_t = A\triangle v - Mv_{\xi\xi}\left(Q_t^{\top} Q\xi - Q^{\top}\tau_t\right)^2 - 2Mv_{\xi t}\left(Q_t^{\top} Q\xi - Q^{\top}\tau_t\right) \\
&\quad\quad\quad\quad\quad\quad- Mv_{\xi}\left(Q_{tt}^{\top} Q\xi - 2Q_t^{\top}\tau_t - Q^{\top}\tau_{tt}\right) - Bv_{\xi}\left(Q_t^{\top} Q\xi-Q^{\top}\tau_t\right) + f(v).
\end{split}
\end{aligned}
\end{equation}
It is convenient to introduce time-dependent functions $S_1(t),S_2(t)\in\R^{d,d}$, $\mu_1(t),\mu_2(t)\in\R^d$ via
\begin{equation*}
\begin{aligned}
S_1 := Q^{\top} Q_t,\quad S_2 := S_{1,t},\quad \mu_1 := Q^{\top}\tau_t,\quad \mu_2 := \mu_{1,t}.
\end{aligned}
\end{equation*}
Obviously, $S_1$ and $S_2$ satisfy $S_1^{\top}=-S_1$ and $S_2^{\top}=-S_2$, which follows from $Q^{\top}Q=I_d$ by differentiation.
Moreover, we obtain
\begin{align*}
&Q_t^{\top}Q = -S_1,\quad Q^{\top}\tau_t = \mu_1,\quad Q_t^{\top}\tau_t+Q^{\top}\tau_{tt} = \mu_2, \\
&Q_{tt}^{\top}Q = -S_{1,t} - S_1^{\top} S_1 = -S_2+S_1^2,\quad -Q_t^{\top}\tau_t = -Q_t^{\top} Q Q^{\top} \tau_t = S_1\mu_1,
\end{align*}
which transforms \eqref{equ:3.5} into the system
\begin{subequations}
\label{equ:3.6}
\begin{align}
&Mv_{tt} + Bv_t = A\triangle v - Mv_{\xi\xi}\left(S_1\xi+\mu_1\right)^2
+2Mv_{\xi t}\left(S_1\xi+\mu_1\right)
\label{equ:3.6a}\\
&\quad\quad\quad\quad\quad\quad + Mv_{\xi}\left((S_2-S_1^2)\xi - S_1\mu_1 + \mu_2\right) + Bv_{\xi}\left(S_1\xi+\mu_1\right) + f(v), \nonumber\\
&\begin{pmatrix}S_1\\\mu_1\end{pmatrix}_t = \begin{pmatrix}S_2\\\mu_2\end{pmatrix},\label{equ:3.6b}\\
&\begin{pmatrix}Q\\\tau\end{pmatrix}_t = \begin{pmatrix}QS_1\\Q\mu_1\end{pmatrix}.\label{equ:3.6c}
\end{align}
\end{subequations}
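The kinematic identities used in this derivation are easy to verify numerically. The sketch below (our own illustration) takes the uniform planar rotation $Q(t)=\exp(tS)$ for a constant skew-symmetric $S$, so that $S_1=S$ and $S_2=0$, and checks $Q_t^{\top}Q=-S_1$ and $Q_{tt}^{\top}Q=-S_2+S_1^2$ by finite differences:

```python
import numpy as np

om = 0.7                                  # angular velocity (illustrative)
S = np.array([[0.0, -om], [om, 0.0]])     # constant skew-symmetric S => S1 = S, S2 = 0

def Q(t):
    """Planar rotation Q(t) = exp(t*S)."""
    c, s = np.cos(om*t), np.sin(om*t)
    return np.array([[c, -s], [s, c]])

t, h = 0.4, 1e-5
Qt  = (Q(t + h) - Q(t - h)) / (2*h)           # central difference for Q_t
Qtt = (Q(t + h) - 2*Q(t) + Q(t - h)) / h**2   # central difference for Q_tt

S1 = Q(t).T @ Qt                              # S1 = Q^T Q_t
assert np.allclose(S1, S, atol=1e-6)
assert np.allclose(Qt.T @ Q(t), -S1, atol=1e-6)      # Q_t^T Q = -S1
assert np.allclose(Qtt.T @ Q(t), S @ S, atol=1e-4)   # Q_tt^T Q = -S2 + S1^2 = S^2
```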
The quantity $(Q(t),\tau(t))$ describes the \begriff{position} by its spatial \begriff{shift} $\tau(t)$ and the \begriff{rotation} $Q(t)$. Moreover, $S_1(t)$ denotes the
\begriff{rotational velocities}, $\mu_1(t)$ the \begriff{translational velocities}, $S_2(t)$ the \begriff{angular acceleration} and $\mu_2(t)$ the \begriff{translational acceleration}
of the rotating wave $v$ at time $t$. Note that, in contrast to the case of traveling waves, the leading part $A\triangle-M\partial_{\xi}^2(\cdot)(S_1\xi+\mu_1)^2$
not only depends on the velocities $S_1$ and $\mu_1$, but also on the spatial variable $\xi$, which means that the leading part has unbounded (linearly growing)
coefficients. We next specify initial data for the system \eqref{equ:3.6} as follows,
\begin{equation}
\begin{aligned}
\label{equ:3.7}
\begin{split}
&\quad v(\cdot,0) = u_0,\quad v_t(\cdot,0)=v_0+u_{0,\xi}(S_1^0\xi+\mu_1^0),\\
&S_1(0)=S_1^0,\quad \mu_1(0)=\mu_1^0,\quad Q(0)=I_d,\quad \tau(0)=0.
\end{split}
\end{aligned}
\end{equation}
Note that, requiring $Q(0)=I_d$, $\tau(0)=0$, $S_1(0)=S_1^0$ and $\mu_1(0)=\mu_1^0$ for some $S_1^0\in\R^{d,d}$ with $(S_1^0)^{\top}=-S_1^0$ and $\mu_1^0\in\R^d$,
the first equation in \eqref{equ:3.7} follows from \eqref{equ:3.2} and \eqref{equ:3.1b}, while the second condition in \eqref{equ:3.7} can be deduced from
\eqref{equ:3.3}, \eqref{equ:3.1b}, \eqref{equ:3.6c} and the first condition in \eqref{equ:3.7}.
The system \eqref{equ:3.6} comprises evolution equations for the unknowns $v$, $S_1$ and $\mu_1$.
In order to specify the remaining variables $S_2$ and $\mu_2$ we impose $\mathrm{dim}\,\SE(d)=\frac{d(d+1)}{2}$ additional scalar algebraic constraints, also
known as \begriff{phase conditions}
\begin{align}
\psi(v,v_t,(S_1,\mu_1),(S_2,\mu_2))=0\in\R^{\frac{d(d+1)}{2}},\quad t\geqslant 0. \label{equ:3.8}
\end{align}
Two possible choices of such a phase condition are
\begin{align}
&\psi_{\mathrm{fix}}(v) := \begin{pmatrix}\langle v-\hat{v},D_l\hat{v}\rangle_{L^2}\\
\langle v-\hat{v},D^{(i,j)}\hat{v}\rangle_{L^2}\end{pmatrix} = 0,\;t\geqslant 0, \label{equ:3.9} \\
&\psi_{\mathrm{orth}}(v,v_t) := \begin{pmatrix}\langle v_t,D_l v\rangle_{L^2}\\
\langle v_t,D^{(i,j)} v\rangle_{L^2}\end{pmatrix} = 0,\;t\geqslant 0, \label{equ:3.10}
\end{align}
for $l=1,\ldots,d$, $i=1,\ldots,d-1$ and $j=i+1,\ldots,d$ with $D_l:=\partial_{\xi_l}$ and $D^{(i,j)}:=\xi_j \partial_{\xi_i}-\xi_i
\partial_{\xi_j}$.
Condition \eqref{equ:3.9} is obtained from the requirement that the distance
\begin{align*}
\rho(Q,\tau) := \left\|v(\cdot,t)-\hat{v}(Q^{\top}(\cdot -\tau))\right\|^2_{L^2}
\end{align*}
attains a local minimum at $(Q,\tau)=(I_d,0)$. Since $D_l, D^{(i,j)}$ are the generators of the Euclidean group action, condition \eqref{equ:3.10}
requires the time derivative of $v$ to be orthogonal to the group
orbit of $v$ at every time instant.
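On a uniform grid the fixed phase condition \eqref{equ:3.9} reduces to $\frac{d(d+1)}{2}$ discrete inner products of $v-\hat{v}$ with the generators applied to $\hat{v}$. A minimal two-dimensional sketch (our own illustration, using central differences and the quadrature $\langle u,w\rangle_{L^2}\approx \triangle x^2\sum u\,w$; the Gaussian reference profile is purely illustrative):

```python
import numpy as np

dx = 0.2
x = np.arange(-5, 5 + dx/2, dx)
X1, X2 = np.meshgrid(x, x, indexing='ij')

def psi_fix(v, vhat):
    """Discrete version of (3.9) for d = 2: inner products of v - vhat with
    D1 vhat, D2 vhat and the rotational generator D^(1,2) vhat."""
    d1 = np.gradient(vhat, dx, axis=0)      # D1 = d/dxi_1
    d2 = np.gradient(vhat, dx, axis=1)      # D2 = d/dxi_2
    d12 = X2*d1 - X1*d2                     # D^(1,2) = xi_2 D1 - xi_1 D2
    ip = lambda u, w: dx**2 * np.sum(u*w)   # simple quadrature for <.,.>_L2
    r = v - vhat
    return np.array([ip(r, d1), ip(r, d2), ip(r, d12)])

vhat = np.exp(-(X1**2 + X2**2))             # illustrative reference profile
assert np.allclose(psi_fix(vhat, vhat), 0.0)   # vanishes when v = vhat
```

For $v=\hat{v}$ all three components vanish; a profile shifted relative to $\hat{v}$ produces a nonzero value in the corresponding translational component.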
Combining the differential equations \eqref{equ:3.6}, the initial data \eqref{equ:3.7} and the phase condition \eqref{equ:3.8}, we obtain the following
\begriff{partial differential algebraic evolution equation} (PDAE)
\begin{subequations}
\label{equ:3.11}
\begin{align}
&Mv_{tt} + Bv_t = A\triangle v - Mv_{\xi\xi}\left(S_1\xi+\mu_1\right)^2
+ 2Mv_{\xi t}\left(S_1\xi+\mu_1\right)
\label{equ:3.11a}\\
&\quad\quad\quad\quad\quad\quad + Mv_{\xi}\left((S_2-S_1^2)\xi - S_1\mu_1 + \mu_2\right)+ Bv_{\xi}\left(S_1\xi+\mu_1\right) + f(v), &&\,\xi\in\R^d,\,t>0,\nonumber\\
&v(\cdot,0) = u_0,\quad v_t(\cdot,0) = v_0+u_{0,\xi}(S_{1}^0\xi+\mu_1^0), &&\,\xi\in\R^d,\,t=0, \label{equ:3.11b}\\
&0 = \psi(v,v_t,(S_1,\mu_1),(S_2,\mu_2)), &&\,t\geqslant 0, \label{equ:3.11c}\\
&\begin{pmatrix}S_1\\\mu_1\end{pmatrix}_t = \begin{pmatrix}S_2\\\mu_2\end{pmatrix},\quad \begin{pmatrix}S_1(0)\\\mu_1(0)\end{pmatrix} = \begin{pmatrix}S_1^0\\\mu_1^0\end{pmatrix}, &&\,t\geqslant 0, \label{equ:3.11d}\\
&\begin{pmatrix}Q\\\tau\end{pmatrix}_t = \begin{pmatrix}QS_1\\Q\mu_1\end{pmatrix},\quad \begin{pmatrix}Q(0)\\\tau(0)\end{pmatrix} = \begin{pmatrix}I_d\\0\end{pmatrix}, &&\,t\geqslant 0.
\label{equ:3.11e}
\end{align}
\end{subequations}
The system \eqref{equ:3.11} depends on the choice of phase condition and must be solved for $(v,S_1,\mu_1,S_2,\mu_2,Q,\tau)$ for given $(u_0,v_0,S_1^0,\mu_1^0)$.
It consists of a PDE for $v$ in \eqref{equ:3.11a}--\eqref{equ:3.11b},
two systems of ODEs for $(S_1,\mu_1)$ in \eqref{equ:3.11d} and for $(Q,\tau)$ in \eqref{equ:3.11e} and $\frac{d(d+1)}{2}$ algebraic constraints for $(S_2,\mu_2)$ in \eqref{equ:3.11c}.
The ODE \eqref{equ:3.11e} for $(Q,\tau)$ is the \begriff{reconstruction equation} (see \cite{RowleyKevrekidisMarsden2003}); it decouples from the other equations in \eqref{equ:3.11}
and can be solved in a postprocessing step. Note that in the frozen equation for first order evolution equations, the ODE for $(S_1,\mu_1)$ does not appear, see \cite[(10.26)]{Otten2014}.
The additional ODE is a new component of the PDAE and is caused by the second order time derivative.
Finally, note that $(v,S_1,\mu_1,S_2,\mu_2)=(v_{\star},S_{\star},\mu_{\star},0,0)$ satisfies
\begin{align*}
&0 = A\triangle v_{\star} - Mv_{\star,\xi\xi}\left(S_{\star}\xi+\mu_{\star}\right)^2
- Mv_{\star,\xi}S_{\star}\left(S_{\star}\xi + \mu_{\star}\right)+
Bv_{\star,\xi}\left(S_{\star}\xi+\mu_{\star}\right) + f(v_{\star}),\,\xi\in\R^d,\\
&0 = \begin{pmatrix}S_2\\\mu_2\end{pmatrix}.
\end{align*}
If, in addition, it has been arranged that $v_{\star},S_{\star},\mu_{\star}$
satisfy the phase condition $\psi(v_{\star},0,(S_{\star},\mu_{\star}),(0,0))=0$, then $(v_{\star},S_{\star},\mu_{\star},0,0)$ is a stationary solution of the system \eqref{equ:3.11a},\eqref{equ:3.11c},\eqref{equ:3.11d}. For a stable rotating wave we expect that solutions $(v,S_1,\mu_1,S_2,\mu_2)$ of \eqref{equ:3.11a}--\eqref{equ:3.11d} satisfy
\begin{align*}
v(t)\rightarrow v_{\star},\quad (S_1(t),\mu_1(t))\rightarrow(S_{\star},\mu_{\star}),\quad (S_2(t),\mu_2(t))\rightarrow (0,0),\quad\text{as}\quad t\to\infty,
\end{align*}
provided the initial data are close to their limiting values.
\begin{example}[Cubic-quintic complex Ginzburg-Landau wave equation]\label{exa:3}
Consider the cubic-quintic complex Ginzburg-Landau wave equation
\begin{equation}
\label{equ:3.40}
\varepsilon u_{tt} + \rho u_t = \alpha \triangle u + u(\delta+\beta|u|^2+\gamma|u|^4),\; x\in \R^d,\, t\geqslant 0
\end{equation}
with $u=u(x,t)\in\C$, $d\in\{2,3\}$, $\varepsilon,\rho,\alpha,\beta,\gamma,\delta\in\C$ and $\Re\alpha>0$.
For the parameter values
\begin{align}
\label{equ:3.41}
\varepsilon=10^{-4},\quad\rho=1,\quad\alpha=\frac{3}{5},\quad\gamma=-1-\frac{1}{10}i,\quad\beta=\frac{5}{2}+i,\quad\delta=-0.73,
\end{align}
equation \eqref{equ:3.40} admits a spinning soliton solution.
\begin{figure}[ht]
\centering
\subfigure[]{\includegraphics[height=4.0cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Nonfrozen_Reu.png}\label{fig:3.1a}}
\subfigure[]{\includegraphics[height=4.0cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Nonfrozen_SpaceTime_Reu.png} \label{fig:3.1b}}
\caption{Solution of cubic-quintic complex Ginzburg-Landau wave equation \eqref{equ:3.40}: Spinning soliton $u(x,t)$ at time $t=50$ (a) and its time evolution along $x_2=0$ (b)
for parameters from \eqref{equ:3.41}.}
\label{fig:3.1}
\end{figure}
Figure \ref{fig:3.1} shows a numerical simulation of the solution $u$ of \eqref{equ:3.40} on the ball $B_R(0)$ of radius $R=20$,
with homogeneous Neumann boundary conditions and with parameter
values from \eqref{equ:3.41}. The initial data $u_0$ and $v_0$ are generated in the following way. First we use the freezing method
to compute a rotating wave in the parabolic case (as in \cite{Otten2014}) for parameter values $\varepsilon=0$, $\rho=1$ and
\begin{align*}
\alpha=\frac{1}{2}+\frac{1}{2}i,\quad\gamma=-1-\frac{1}{10}i,
\quad\beta=\frac{5}{2}+i,\quad\delta=-\frac{1}{2}.
\end{align*}
Then the parameter set $(\varepsilon,\alpha,\delta)$ is gradually
changed until the values \eqref{equ:3.41} are attained.
For the space discretization we use continuous piecewise
linear finite elements with spatial stepsize $\triangle x=0.8$. For the time discretization we use the BDF method of order $2$
with absolute tolerance $\mathrm{atol}=10^{-4}$, relative tolerance $\mathrm{rtol}=10^{-3}$, temporal stepsize $\triangle t=0.1$
and final time $T=50$. Computations are performed with the help of the software COMSOL 5.2.
Let us now consider the frozen cubic-quintic complex Ginzburg-Landau wave equation resulting from \eqref{equ:3.11}
\begin{subequations}
\label{equ:3.42}
\begin{align}
&\varepsilon v_{tt} + \rho v_t = \alpha\triangle v - \varepsilon v_{\xi\xi}\left(S_1\xi+\mu_1\right)^2 + 2\varepsilon v_{\xi t}\left(S_1\xi+\mu_1\right) \label{equ:3.42a}\\
&\quad\quad\quad\quad\quad\quad + \varepsilon v_{\xi}\left((S_2-S_1^2)\xi - S_1\mu_1 + \mu_2\right)+ \rho v_{\xi}\left(S_1\xi+\mu_1\right) + f(v), &&\,\xi\in\R^d,\,t>0,\nonumber\\
&v(\cdot,0) = u_0,\quad v_t(\cdot,0) = v_0+u_{0,\xi}(S_{1}^0\xi+\mu_1^0), &&\,\xi\in\R^d,\,t=0, \label{equ:3.42b}\\
&0 = \psi_{\mathrm{fix}}(v) := \begin{pmatrix}\langle v-\hat{v},D_l\hat{v}\rangle_{L^2}\\
\langle v-\hat{v},D^{(i,j)}\hat{v}\rangle_{L^2}\end{pmatrix}, &&\,t\geqslant 0, \label{equ:3.42c}\\
&\begin{pmatrix}S_1\\\mu_1\end{pmatrix}_t = \begin{pmatrix}S_2\\\mu_2\end{pmatrix},\quad
\begin{pmatrix}S_1(0)\\\mu_1(0)\end{pmatrix} = \begin{pmatrix}S_1^0\\\mu_1^0\end{pmatrix}, &&\,t\geqslant 0, \label{equ:3.42d}\\
&\begin{pmatrix}Q\\\tau\end{pmatrix}_t = \begin{pmatrix}QS_1\\Q\mu_1\end{pmatrix},\quad
\begin{pmatrix}Q(0)\\\tau(0)\end{pmatrix} = \begin{pmatrix}I_d\\0\end{pmatrix}, &&\,t\geqslant 0. \label{equ:3.42e}
\end{align}
\end{subequations}
\begin{figure}[ht]
\centering
\subfigure[]{\includegraphics[height=4.0cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Frozen_Profile_Rev.png}\label{fig:3.2a}}
\subfigure[]{\includegraphics[height=4.0cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Frozen_Profile_SpaceTime_Rev.png} \label{fig:3.2b}}\\
\subfigure[]{\includegraphics[height=4.0cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Frozen_Velocity.png}\label{fig:3.2c}}
\subfigure[]{\includegraphics[height=4.0cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Frozen_Acceleration.png} \label{fig:3.2d}}
\caption{Solution of the frozen cubic-quintic complex Ginzburg-Landau wave equation \eqref{equ:3.42}: profile $v(x,t)$ at time $t=2000$ (a),
its time evolution along $x_2=0$ (b), velocities $\mu_1(t)$ (c), and accelerations $\mu_2(t)$ (d) for parameters from \eqref{equ:3.41}.}
\label{fig:3.2}
\end{figure}
Figure \ref{fig:3.2} shows the solution $(v,S_1,\mu_1,S_2,\mu_2,Q,\tau)$ of \eqref{equ:3.42} on the ball $B_R(0)$ with radius $R=20$,
homogeneous Neumann boundary conditions, initial data $u_0$, $v_0$ as in the nonfrozen case, and reference function $\hat{v}=u_0$.
For the computation we used the fixed phase condition $\psi_{\mathrm{fix}}(v)$ from \eqref{equ:3.9}. The spatial discretization data are
taken as in the nonfrozen case. For the time discretization we used the BDF method of order $2$ with absolute tolerance $\mathrm{atol}=10^{-3}$,
relative tolerance $\mathrm{rtol}=10^{-2}$, maximal temporal stepsize $\triangle t=0.5$, initial step $10^{-4}$, and final time $T=2000$.
Due to the choice of initial data, the profile becomes immediately stationary, the acceleration $\mu_2$ converges to zero, while
the speed $\mu_1$ and the nontrivial entry $S_{12}$ of $S_1$ approach the asymptotic values
\begin{align*}
\mu_1^{(1)}=-0.2819,\quad\mu_1^{(2)}=-0.1999,\quad S_{12}=1.3658.
\end{align*}
Note that we have a clockwise rotation if $S_{12}>0$, and a counterclockwise rotation if $S_{12}<0$. Thus,
the spinning soliton rotates clockwise. The center of rotation $x_{\star}$ and the temporal period $T^{\mathrm{2D}}$ for one rotation are given by (see \cite[Exa.~10.8]{Otten2014})
\begin{align*}
x_{\star} = \frac{1}{S_{12}}\begin{pmatrix}\mu_1^{(2)}\\-\mu_1^{(1)}\end{pmatrix}=\begin{pmatrix}-0.1464\\0.2064\end{pmatrix},\quad\quad
T^{\mathrm{2D}}=\frac{2\pi}{|S_{12}|}=4.6004.
\end{align*}
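These values can be reproduced with a few lines of Python (our own check of the arithmetic, not part of the COMSOL computation):

```python
import numpy as np

mu1 = np.array([-0.2819, -0.1999])   # asymptotic translational velocities
S12 = 1.3658                         # nontrivial entry of S_1

# center of rotation and temporal period for one rotation
x_star = np.array([mu1[1], -mu1[0]]) / S12
T2d = 2*np.pi / abs(S12)
# x_star is approximately (-0.1464, 0.2064), T2d approximately 4.6004
```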
\end{example}
\subsection{Spectra of rotating waves.}
\label{subsec:3.2}
Consider the linearized equation
\begin{equation}
\label{equ:3.13}
Mv_{tt} + Bv_t - A\triangle v + Mv_{\xi\xi}(S_{\star}\xi)^2 - 2Mv_{\xi t}S_{\star}\xi
+ Mv_{\xi}S_{\star}^2\xi
- Bv_{\xi}S_{\star}\xi
- Df(v_{\star})v = 0.
\end{equation}
Equation \eqref{equ:3.13} is obtained from
the co-rotating frame equation \eqref{equ:1.8} when linearizing at the profile $v_{\star}$. Moreover, we assume $\mu_{\star}=0$, that is, the wave
rotates about the origin. Shifting the center of rotation does not influence the stability properties, see the discussion in \cite{BeynOtten2016}.
Looking for solutions of the form $v(\xi,t)=e^{\lambda t}w(\xi)$ to \eqref{equ:3.13} yields the quadratic eigenvalue problem
\begin{equation}
\begin{aligned}
\label{equ:3.14}
\mathcal{P}(\lambda)w := \left(\lambda^2 P_2 + \lambda P_1 + P_0\right)w = 0,\,\xi\in\R^d
\end{aligned}
\end{equation}
with differential operators $P_j$ defined by
\begin{equation}
\begin{aligned}
\label{equ:3.15}
\begin{split}
P_2 =& M,\quad
P_1 = B-2M\left(\partial_{\xi}\,\cdot\right)S_{\star}\xi = B-
2M\sum_{j=1}^{d}(S_{\star}\xi)_j\partial_{\xi_j}, \\
P_0 =& -A\triangle\,\cdot + M\left(\partial_{\xi}^2\,\cdot\right)(S_{\star}\xi)^2
+ M\left(\partial_{\xi}\,\cdot\right)S_{\star}^2\xi
-B\left(\partial_{\xi}\,\cdot\right)S_{\star}\xi- Df(v_{\star})\,\cdot \\
=& -A\sum_{j=1}^{d}\partial_{\xi_j}^2 +
M\sum_{j=1}^{d}\sum_{\nu=1}^{d}(S_{\star}\xi)_j(S_{\star}\xi)_\nu\partial_{\xi_j}\partial_{\xi_\nu} +
M\sum_{j=1}^{d}(S_{\star}^2\xi)_j\partial_{\xi_j}
- B\sum_{j=1}^{d}(S_{\star}\xi)_j \partial_{\xi_j} - Df(v_{\star}).
\end{split}
\end{aligned}
\end{equation}
As in the one-dimensional case we cannot solve equation \eqref{equ:3.14} in general. Rather, our aim is to determine the
symmetry set $\sigma_{\mathrm{sym}}(\mathcal{P})$ as a subset of the point spectrum $\sigma_{\mathrm{pt}}(\mathcal{P})$, and the dispersion
set $\sigma_{\mathrm{disp}}(\mathcal{P})$ as a subset of the essential spectrum $\sigma_{\mathrm{ess}}(\mathcal{P})$. The point spectrum
is affected by the underlying group symmetries while the essential spectrum depends on the far-field behavior of the wave.
In the following we present the recipe for computing the subsets
$\sigma_{\mathrm{sym}}(\mathcal{P})\subseteq \sigma_{\mathrm{pt}}(\mathcal{P})$ and
$\sigma_{\mathrm{disp}}(\mathcal{P})\subseteq \sigma_{\mathrm{ess}}(\mathcal{P})$.
\subsubsection{Point Spectrum and symmetry set.}
\label{subsubsec:3.2.2}
Let us look for eigenfunctions $w$ of \eqref{equ:3.14} of the form
\begin{align}
\label{equ:3.26}
w(\xi) = v_{\star,\xi}(\xi)(E\xi+b)\quad\quad\text{for some $E\in\C^{d,d}$, $b\in\C^d$, $E^{\top}=-E$, $v_{\star}\in C^3(\R^d,\R^m)$.}
\end{align}
This ansatz is motivated by the fact that functions of this type
span the image of the derivative of the
group action $(Q,\tau)\rightarrow v_{\star}(Q^{\top}(\cdot - \tau))$ at
the unit element $(Q,\tau)=(I_d,0)\in \SE(d)$ (compare \eqref{equ:3.2}).
We plug \eqref{equ:3.26} into \eqref{equ:3.14} and use the equalities
\begin{align}
&Mw = Mv_{\star,\xi}(E\xi+b),\quad\quad
Bw = Bv_{\star,\xi}(E\xi+b), \nonumber\\
&2M(\partial_{\xi} w)S_{\star}\xi = 2M v_{\star,\xi\xi}(E\xi+b)S_{\star}\xi + 2Mv_{\star,\xi}ES_{\star}\xi,
\label{equ:3.27} \\
&A\triangle w = (\partial_{\xi}(A\triangle v_{\star}))(E\xi+b),
\label{equ:3.28} \\
&M(\partial_{\xi}^2 w)(S_{\star}\xi)^2 = (\partial_{\xi}(M v_{\star,\xi\xi}(S_{\star}\xi)^2))(E\xi+b) + 2Mv_{\star,\xi\xi}([E,S_{\star}]\xi-S_{\star}b)S_{\star}\xi,
\label{equ:3.29} \\
&M(\partial_{\xi} w)S_{\star}^2\xi = (\partial_{\xi}(M v_{\star,\xi}S_{\star}^2\xi))(E\xi+b) + Mv_{\star,\xi}([E,S_{\star}^2]\xi-S_{\star}^2 b),
\label{equ:3.30} \\
&B(\partial_{\xi} w)S_{\star}\xi = (\partial_{\xi}(Bv_{\star,\xi}S_{\star}\xi))(E\xi+b) + Bv_{\star,\xi}([E,S_{\star}]\xi-S_{\star}b),
\label{equ:3.31} \\
&Df(v_{\star})w = (\partial_{\xi}(f(v_{\star})))(E\xi+b),
\label{equ:3.32}
\end{align}
where $[E,S_{\star}]:=ES_{\star}-S_{\star}E$ is the Lie bracket.
This leads to the following equation:
\begin{align}
\label{equ:3.33}
0 =& \lambda^2 M v_{\star,\xi}(E\xi+b) + \lambda\Big(Bv_{\star,\xi}(E\xi+b) - 2Mv_{\star,\xi\xi}(E\xi+b)S_{\star}\xi - 2Mv_{\star,\xi}ES_{\star}\xi\Big) \nonumber\\
& +\Big(2Mv_{\star,\xi\xi}([E,S_{\star}]\xi-S_{\star}b)S_{\star}\xi + Mv_{\star,\xi}([E,S_{\star}^2]\xi-S_{\star}^2 b) -Bv_{\star,\xi}([E,S_{\star}]\xi-S_{\star}b) \\
&\;\;\quad\;-\partial_{\xi}\big(A\triangle v_{\star} - Mv_{\star,\xi\xi}(S_{\star}\xi)^2 - Mv_{\star,\xi}S_{\star}^2\xi + Bv_{\star,\xi}S_{\star}\xi + f(v_{\star})\big)(E\xi+b)\Big). \nonumber
\end{align}
Now we use the rotating wave equation \eqref{equ:1.9} in
\eqref{equ:3.33} and obtain by rearranging the remaining terms
\begin{align}
\label{equ:3.35}
0 =& Mv_{\star,\xi}\Big(\lambda^2(E\xi+b) - 2\lambda ES_{\star}\xi + [E,S_{\star}^2]\xi - S_{\star}^2 b\Big)
+ Bv_{\star,\xi}\Big(\lambda(E\xi+b)-[E,S_{\star}]\xi+S_{\star}b\Big) \nonumber \\
& - 2Mv_{\star,\xi\xi}\Big(\lambda(E\xi+b) - [E,S_{\star}]\xi + S_{\star}b\Big)S_{\star}\xi \\
=& Mv_{\star,\xi}\Big( (\lambda^2 E - 2\lambda ES_{\star} + [E,S_{\star}^2])\xi + \lambda^2 b - S_{\star}^2 b\Big)
+ Bv_{\star,\xi}\Big((\lambda E-[E,S_{\star}])\xi+\lambda b+S_{\star}b\Big) \nonumber \\
& - 2Mv_{\star,\xi\xi}\Big((\lambda E-[E,S_{\star}])\xi + \lambda b+S_{\star}b\Big)S_{\star}\xi. \nonumber
\end{align}
Comparing coefficients in \eqref{equ:3.35} yields the finite-dimensional eigenvalue problem (see \cite{BlochIserles2005},\cite{Otten2014}, \cite{BeynOtten2016b})
\begin{subequations}
\label{equ:3.36}
\begin{align}
\lambda E &= [E,S_{\star}],
\label{equ:3.36a} \\
\lambda b &= -S_{\star}b,
\label{equ:3.36b}
\end{align}
\end{subequations}
which must be solved for $(\lambda,E,b)$ and admits $\frac{d(d+1)}{2}$ solutions. In fact, if $(\lambda,E,b)$ solves \eqref{equ:3.36}, then
the last two terms in \eqref{equ:3.35} obviously vanish. The first term vanishes as well, as one sees by writing its two summands as
\begin{align*}
\lambda^2 b - S_{\star}^2 b = \lambda(\lambda b+S_{\star}b)-S_{\star}(\lambda b+S_{\star}b)
\end{align*}
and
\begin{align*}
&\lambda^2 E - 2\lambda ES_{\star} + [E,S_{\star}^2]
= \lambda(\lambda E-[E,S_{\star}])-(2\lambda ES_{\star}-\lambda[E,S_{\star}]-[E,S_{\star}^2]) \\
=& \lambda(\lambda E-[E,S_{\star}])-\left((\lambda E-[E,S_{\star}])S_{\star}+S_{\star}(\lambda E-[E,S_{\star}])+[E,S_{\star}]S_{\star}+S_{\star}[E,S_{\star}]-[E,S_{\star}^2]\right),
\end{align*}
and use the identity $[E,S_{\star}]S_{\star}+S_{\star}[E,S_{\star}]=[E,S_{\star}^2]$, which holds for arbitrary square matrices.
Therefore, it is sufficient to solve \eqref{equ:3.36}. Furthermore, if $(\lambda,E)$ is a solution of \eqref{equ:3.36a}, then $(\lambda,E,0)$ solves \eqref{equ:3.36}, and,
similarly, if $(\lambda,b)$ is a solution
of \eqref{equ:3.36b}, then $(\lambda,0,b)$ solves \eqref{equ:3.36}. Therefore, it is sufficient to solve \eqref{equ:3.36a} and \eqref{equ:3.36b} separately.
For the skew-symmetric matrix $S_{\star}$ we have $S_{\star}=U\Lambda U^{{\mathsf{H}}}$ for some unitary $U \in \C^{d,d}$ and some diagonal matrix
$\Lambda=\diag(\lambda_1,\ldots,\lambda_d)$ where
$\lambda_1,\ldots,\lambda_d \in i \R$ are the eigenvalues of $S_{\star}$.
In particular, this implies $S_{\star}^{\top}=\overline{U}\Lambda U^{\top}$.
\begin{itemize}[leftmargin=*]
\item Multiply \eqref{equ:3.36b} from the left by $U^{{\mathsf{H}}}$ and define $\tilde{b}=U^{{\mathsf{H}}}b$ to obtain
\begin{align}
\label{equ:3.37}
\lambda \tilde{b} = \lambda U^{{\mathsf{H}}}b = -U^{{\mathsf{H}}}S_{\star}b = -U^{{\mathsf{H}}}U\Lambda U^{{\mathsf{H}}}b = -\Lambda \tilde{b}.
\end{align}
Equation \eqref{equ:3.37} has solutions $(\lambda,\tilde{b})=(-\lambda_l,e_l)$, hence \eqref{equ:3.36b} has solutions $(\lambda,b)=(-\lambda_l,Ue_l)$,
and \eqref{equ:3.36} has solutions $(\lambda,E,b)=(-\lambda_l,0,Ue_l)$ for $l=1,\ldots,d$.
\item Multiply \eqref{equ:3.36a} from the left by $U^{{\mathsf{H}}}$, from
the right by $\bar{U}$, define $\tilde{E}=U^{{\mathsf{H}}}E\overline{U}$,
and use the skew-symmetry of $S_{\star}$ and $\tilde{E}$, to obtain
\begin{align}
\label{equ:3.38}
\lambda\tilde{E} = \lambda U^{{\mathsf{H}}}E\overline{U} = U^{{\mathsf{H}}}[E,S_{\star}]\overline{U} = -U^{{\mathsf{H}}}E\overline{U}\Lambda U^{\top}\overline{U} - U^{{\mathsf{H}}}U\Lambda U^{{\mathsf{H}}}E\overline{U}
= -\tilde{E}\Lambda - \Lambda \tilde{E} = \tilde{E}^{\top}\Lambda - \Lambda\tilde{E}.
\end{align}
Equation \eqref{equ:3.38} has solutions $(\lambda,\tilde{E})=(-(\lambda_i+\lambda_j),I_{ij}-I_{ji})$, hence \eqref{equ:3.36a} has solutions
$(\lambda,E)=(-(\lambda_i+\lambda_j),U(I_{ij}-I_{ji})U^{\top})$, and \eqref{equ:3.36} has solutions $(\lambda,E,b)=(-(\lambda_i+\lambda_j),U(I_{ij}-I_{ji})U^{\top},0)$
for $i=1,\ldots,d-1$, $j=i+1,\ldots,d$, where $I_{ij}$ has entry $1$ in the $i$th row and $j$th column and $0$ otherwise.
\end{itemize}
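Both solution families can be verified numerically for a random skew-symmetric matrix. The sketch below (our own illustration) diagonalizes $S_{\star}$ via the Hermitian matrix $iS_{\star}$ and checks that the constructed pairs solve \eqref{equ:3.36a} and \eqref{equ:3.36b}:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = rng.standard_normal((d, d))
S = W - W.T                          # random real skew-symmetric matrix

# 1j*S is Hermitian, so eigh yields a unitary U with S = U diag(lams) U^H
theta, U = np.linalg.eigh(1j*S)
lams = -1j*theta                     # purely imaginary eigenvalues of S

# family (lambda, b) = (-lam_l, U e_l) solving lambda*b = -S b
for l in range(d):
    b, lam = U[:, l], -lams[l]
    assert np.allclose(lam*b, -S @ b)

# family (lambda, E) = (-(lam_i+lam_j), U (I_ij - I_ji) U^T) solving lambda*E = [E, S]
for i in range(d):
    for j in range(i + 1, d):
        Eij = np.zeros((d, d)); Eij[i, j], Eij[j, i] = 1.0, -1.0
        E = U @ Eij @ U.T
        lam = -(lams[i] + lams[j])
        assert np.allclose(lam*E, E @ S - S @ E)
```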
Let us summarize the result in a proposition.
\begin{proposition}[Point spectrum of rotating waves]\label{prop:3.2}
Let $f\in C^2(\R^m,\R^m)$ and let $v_{\star}\in C^3(\R^d,\R^m)$ be a classical solution of
\eqref{equ:1.9} for some skew-symmetric matrix $S_{\star}\in \R^{d,d}$ with eigenvalues denoted
by $\lambda_1,\ldots,\lambda_d$ and unitary matrix $U\in\C^{d,d}$ diagonalizing $S_{\star}$. Then
\begin{align*}
w=v_{\star,\xi}(E\xi+b)
\end{align*}
is a classical solution of the eigenvalue problem \eqref{equ:3.14} provided that
\begin{align*}
(\lambda,E,b)=(-\lambda_l,0,Ue_l)\quad\quad\text{or}\quad\quad (\lambda,E,b)=(-(\lambda_i+\lambda_j),U(I_{ij}-I_{ji})U^{\top},0)
\end{align*}
for some $l=1,\dots,d$, $i=1,\ldots,d-1$, $j=i+1,\ldots,d$. In particular, the symmetry set
\begin{align*}
\sigma_{\mathrm{sym}}(\mathcal{P}) = \sigma(S_{\star}) \cup \{\lambda_i+\lambda_j:1\leqslant i<j\leqslant d\}
\end{align*}
belongs to the point spectrum $\sigma_{\mathrm{pt}}(\mathcal{P})$ of $\mathcal{P}$.
\end{proposition}
Altogether, Proposition \ref{prop:3.2} yields $\frac{d(d+1)}{2}$ solutions of the quadratic eigenvalue problem \eqref{equ:3.14}.
It is a remarkable feature that the eigenvalues and the eigenfunctions coincide with those for first order evolution equations,
see \cite{BeynOtten2016b}, \cite{Otten2014}. Moreover, we suggest that Proposition \ref{prop:3.2} also applies to rotating waves
that are not localized, e.g. spiral waves and scroll waves. This has been confirmed in numerical experiments.
Figure \ref{fig:3.5} shows the eigenvalues $\lambda\in\sigma_{\mathrm{sym}}(\mathcal{P})$ from Proposition \ref{prop:3.2} and their corresponding
multiplicities for different space dimensions $d=2,3,4,5$. The eigenvalues $\lambda\in\sigma(S_{\star})$ are indicated by blue circles,
the eigenvalues $\lambda\in\left\{\lambda_i+\lambda_j\mid \lambda_i,\lambda_j\in\sigma(S_{\star}),\,1\leqslant i<j\leqslant d\right\}$
by green crosses. The imaginary values to the right of the symbols denote eigenvalues and the numbers to the left their corresponding
multiplicities. As expected, there are $\frac{d(d+1)}{2}$ eigenvalues on the imaginary axis in case of space dimension $d$.
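The counting above is easy to check numerically. The following sketch (assuming NumPy; the random skew-symmetric `S` is only a stand-in for $S_{\star}$) enumerates the symmetry set $\sigma_{\mathrm{sym}}$ and verifies that it consists of $d(d+1)/2$ purely imaginary values:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
X = rng.standard_normal((d, d))
S = X - X.T  # random skew-symmetric stand-in for S_star

# eigenvalues of a real skew-symmetric matrix are purely imaginary
lam = np.linalg.eigvals(S)
assert np.max(np.abs(lam.real)) < 1e-10

# symmetry set: sigma(S) together with the pairwise sums lam_i + lam_j, i < j
sigma_sym = list(lam) + [lam[i] + lam[j] for i in range(d) for j in range(i + 1, d)]
print(len(sigma_sym))  # prints 10 = d(d+1)/2 for d = 4
```

Counted with multiplicity, the list has $d + \binom{d}{2} = d(d+1)/2$ entries, matching the multiplicities displayed in Figure \ref{fig:3.5}.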
\begin{figure}[ht]
\centering
\subfigure[$d=2$\newline{}$\mathrm{dim}\,\SE(2)=3$]{\includegraphics[page=1, height=6cm]{Images.pdf} \label{fig:Pointspectrum_d2}}
\subfigure[$d=3$\newline{}$\mathrm{dim}\,\SE(3)=6$]{\includegraphics[page=2, height=6cm]{Images.pdf} \label{fig:Pointspectrum_d3}}
\subfigure[$d=4$\newline{}$\mathrm{dim}\,\SE(4)=10$]{\includegraphics[page=3, height=6cm]{Images.pdf} \label{fig:Pointspectrum_d4}}
\subfigure[$d=5$\newline{}$\mathrm{dim}\,\SE(5)=15$]{\includegraphics[page=4, height=6cm]{Images.pdf}\label{fig:Pointspectrum_d5}}
\caption{Point spectrum of the linearization $\mathcal{P}$ on the imaginary axis $i\R$ for space dimension $d=2,3,4,5$ given by Proposition \ref{prop:3.2}.}
\label{fig:3.5}
\end{figure}
\subsubsection{Essential spectrum and dispersion set.}\label{subsubsec:3.2.1}
\begin{enumerate}[label=\bf{\arabic*.},leftmargin=*]
\item \textbf{Quasi-diagonal real form.}
Let us transform the skew-symmetric matrix $S_{\star}$ into quasi-diagonal real form. For this purpose, let $\pm i\sigma_1,\ldots,\pm i\sigma_k$ be the nonzero eigenvalues
of $S_{\star}$ so that $0$ is a semisimple eigenvalue of multiplicity $d-2k$. There is an orthogonal matrix $P\in\R^{d,d}$ such that
\begin{equation*}
S_{\star}=P\Lambda P^{\top},\quad
\Lambda=\mathrm{diag}\left(\Lambda_1,\ldots,\Lambda_k,\mathbf{0}\right),\quad
\Lambda_j=\begin{pmatrix}0 &\sigma_j\\-\sigma_j &0\end{pmatrix},\quad
\mathbf{0}\in\R^{d-2k,d-2k}.
\end{equation*}
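In practice this quasi-diagonal real form can be obtained from the real Schur decomposition: for a skew-symmetric (hence normal) matrix, the quasi-triangular Schur factor is automatically block diagonal. A sketch assuming SciPy, with a random skew-symmetric matrix in place of $S_{\star}$:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
d = 5
X = rng.standard_normal((d, d))
S = X - X.T  # skew-symmetric; odd d forces the eigenvalue 0

# real Schur form S = P Lam P^T with P orthogonal; here Lam is quasi-diagonal
# with 2x2 blocks of the form [[0, s], [-s, 0]] (up to roundoff)
Lam, P = schur(S, output='real')

assert np.allclose(P @ Lam @ P.T, S)        # S = P Lam P^T
assert np.allclose(Lam, -Lam.T, atol=1e-8)  # Lam inherits skew-symmetry
```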
The transformation $\tilde{w}(y)=w(Py),\tilde{v}_{\star}(y)=v_{\star}(Py)$ transfers \eqref{equ:3.14} with operators $P_j$ from \eqref{equ:3.15} into
\begin{equation} \label{equ:3.14a}
(\lambda^2\tilde{P}_2+ \lambda \tilde{P}_1 + \tilde{P}_0)\tilde{w} =0.
\end{equation}
With the abbreviations
\begin{equation} \label{equ:3.17a}
D_j=\partial_{y_j}, \quad D^{(i,j)}=y_jD_i - y_i D_j,\quad
K=\sum_{l=1}^k \sigma_l D^{(2l-1,2l)}
\end{equation}
the operators $\tilde{P}_j$ are given by
\begin{equation}
\label{equ:3.18}
\begin{aligned}
\tilde{P}_2 = & M,\quad
\tilde{P}_1 = B-2M\sum_{j=1}^{d}(\Lambda y)_jD_j
= B-2MK,\\
\tilde{P}_{0} = & -A\triangle
+M\sum_{j=1}^{d}\sum_{\nu=1}^{d}(\Lambda y)_j(\Lambda y)_{\nu}
D_jD_{\nu}\,+ M\sum_{j=1}^{d}(\Lambda^2 y)_j D_j\,
- B\sum_{j=1}^{d}(\Lambda y)_j D_j\,- Df(\tilde{v}_{\star})\\
= & -A\triangle +
MK^2 - BK - Df(\tilde{v}_{\star}).
\end{aligned}
\end{equation}
\item \textbf{The far-field operator.} Assume that $v_{\star}$ has an
asymptotic state $v_{\infty}\in\R^m$, i.e. $f(v_{\infty})=0$ and $v_{\star}(\xi)\to v_{\infty}$ as $|\xi|\to\infty$.
In the limit $|y| \rightarrow \infty$ the eigenvalue problem
\eqref{equ:3.14a} turns into the far-field problem
\begin{equation}
\label{equ:3.17}
\left(\lambda^2 \tilde{P}_2 + \lambda \tilde{P}_1 + \tilde{P}_{\infty}
\right)\tilde{w} = 0,\,y\in\R^d, \quad \tilde{P}_{\infty}=-A\triangle +
MK^2 - BK - Df(v_{\infty}).
\end{equation}
\item \textbf{Transformation into several planar polar coordinates.}
Since we have $k$ angular derivatives in $k$ different planes it is advisable to transform into several planar polar coordinates via
\begin{equation*}
\begin{pmatrix}y_{2l-1}\\y_{2l}\end{pmatrix} = T(r_l,\phi_l):=\begin{pmatrix}r_l\cos\phi_l\\r_l\sin\phi_l\end{pmatrix},\;\phi_l\in[-\pi,\pi),\;r_l\in(0,\infty),\;l=1,\ldots,k.
\end{equation*}
All further coordinates, i.e. $y_{2k+1},\ldots,y_d$, remain fixed. The transformation
$\hat{w}(\psi):=\tilde{w}(T_2(\psi))$ with $T_2(\psi) = (T(r_1,\phi_1),\ldots,T(r_k,\phi_k),y_{2k+1},\ldots,y_d)$
for $\psi=(r_1,\phi_1,\ldots,r_k,\phi_k,y_{2k+1},\ldots,y_d)$ in
the domain $\Omega=((0,\infty)\times[-\pi,\pi))^k\times\R^{d-2k}$ transfers \eqref{equ:3.17} into
\begin{equation}
\label{equ:3.19}
\left(\lambda^2 \hat{P}_2 + \lambda \hat{P}_1 + \hat{P}_{\infty}
\right)\hat{w} = 0,\,\psi\in\Omega
\end{equation}
with
\begin{equation*}
\begin{aligned}
\begin{split}
&\hat{P}_2 = M,\quad
\hat{P}_1 = B+2M\sum_{l=1}^{k}\sigma_l\partial_{\phi_l}, \\
&\hat{P}_{\infty} = - A\bigg[\sum_{l=1}^{k}\left(\partial_{r_l}^2+\frac{1}{r_l}\partial_{r_l}+\frac{1}{r_l^2}\partial_{\phi_l}^2\right)+
\sum_{l=2k+1}^{d}\partial_{y_l}^2\bigg] +
M\sum_{l,n=1}^{k}\sigma_l\sigma_n\partial_{\phi_l}\partial_{\phi_n}
+ B\sum_{l=1}^{k}\sigma_l\partial_{\phi_l} - Df(v_{\infty}).
\end{split}
\end{aligned}
\end{equation*}
\item \textbf{Simplified far-field operator.}
The far-field problem \eqref{equ:3.19} can be further simplified by letting
$r_l\to\infty$ for any $1\leqslant l\leqslant k$, which turns \eqref{equ:3.19} into
\begin{equation}
\label{equ:3.21}
\left(\lambda^2 \hat{P}_2 + \lambda \hat{P}_1 +
P_{\infty}^{\mathrm{sim}}\right)\hat{w} = 0,\,\psi\in\Omega
\end{equation}
with
\begin{equation}
\label{equ:3.22}
P_{\infty}^{\mathrm{sim}} = -A\left[\sum_{l=1}^{k}\partial_{r_l}^2+
\sum_{l=2k+1}^{d}\partial_{y_l}^2\right]
+M\sum_{l,n=1}^{k}\sigma_l\sigma_n\partial_{\phi_l}
\partial_{\phi_n}
+ B\sum_{l=1}^{k}\sigma_l\partial_{\phi_l} - Df(v_{\infty}).
\end{equation}
\item \textbf{Angular Fourier transform.}
Finally, we solve for eigenvalues and eigenfunctions of \eqref{equ:3.21}
by separation of variables and an angular resp.\ radial Fourier ansatz
with $\omega\in\R^k$, $\rho,y\in\R^{d-2k}$, $n\in\Z^k$, $z\in\C^m$, $|z|=1$, $r\in(0,\infty)^k$, $\phi\in[-\pi,\pi)^k$:
\begin{align*}
\hat{w}(\psi) = \exp\left(i\sum_{l=1}^{k}\omega_l r_l\right)\exp\left(i\sum_{l=1}^{k}n_l\phi_l\right)\exp\left(i\sum_{l=2k+1}^{d}\rho_l y_l\right)z
= \exp\left(i\langle\omega,r\rangle + i\langle n,\phi\rangle + i\langle\rho,y\rangle\right)z.
\end{align*}
Inserting this in \eqref{equ:3.21} leads to the $m$-dimensional quadratic eigenvalue problem
\begin{equation}
\label{equ:3.23}
\left(\lambda^2 A_2 + \lambda A_1(n) +
A_{\infty}(\omega,n,\rho)\right)z = 0
\end{equation}
with matrices $A_2\in\R^{m,m}$ and $A_1(n),
A_{\infty}(\omega,n,\rho)\in\C^{m,m}$ given by
\begin{equation}
\begin{aligned}
\label{equ:3.24}
\begin{split}
A_2 = & M,\quad
A_1(n) = B+2i\langle\sigma,n\rangle M,\\
A_{\infty}(\omega,n,\rho) = & \left(|\omega|^2+|\rho|^2\right)A - \langle\sigma,n\rangle^2 M + i\langle\sigma,n\rangle B - Df(v_{\infty}).
\end{split}
\end{aligned}
\end{equation}
The Fourier ansatz is a well-known tool for investigating essential spectra, see e.g. \cite{FiedlerScheel2003}.
\item \textbf{Dispersion relation and dispersion set.} As in Section
\ref{subsubsec:2.2.2} we consider the dispersion set consisting of all values $\lambda \in \C$
satisfying the dispersion relation
\begin{equation}
\label{equ:3.25}
\det\left(\lambda^2 A_2 + \lambda A_1(n) +
A_{\infty}(\omega,n,\rho)\right) = 0
\end{equation}
for some $\omega\in\R^k$, $\rho\in\R^{d-2k}$ and $n\in\Z^k$. Of course,
one can replace $|\omega|^2+|\rho|^2$ by any nonnegative real number.
Solving \eqref{equ:3.25} is equivalent to finding all zeros of a parameterized polynomial of degree $2m$.
Note that the limiting case $M=0$ and $B=I_m$ in \eqref{equ:3.25} leads to the dispersion relation for rotating
waves of first order evolution equations, see \cite{BeynLorenz2008} for $d=2$, and \cite[Sec. 7.4 and 9.4]{Otten2014},
\cite{BeynOtten2016b} for general $d\geqslant 2$.
\end{enumerate}
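For fixed $(\omega,n,\rho)$, the $2m$ roots of \eqref{equ:3.25} can be computed by the standard companion linearization of the quadratic eigenvalue problem. A sketch assuming NumPy; the matrices $M$, $B$, $A$ and $Df(v_{\infty})$ below are illustrative placeholders, not the ones fixed later in the example:

```python
import numpy as np

def quadratic_eigenvalues(A2, A1, A0):
    """Roots of det(lam^2 A2 + lam A1 + A0) = 0 via companion linearization
    (assumes A2 invertible)."""
    m = A2.shape[0]
    top = np.hstack([np.zeros((m, m)), np.eye(m)])
    bot = np.hstack([-np.linalg.solve(A2, A0), -np.linalg.solve(A2, A1)])
    return np.linalg.eigvals(np.vstack([top, bot]))

# illustrative data for m = 2, d = 2, k = 1 (so the rho-variable is absent)
M = np.eye(2); B = 0.5 * np.eye(2); A = np.eye(2)
Df_inf = np.array([[-1.0, 0.3], [-0.3, -1.0]])  # hypothetical Df(v_inf)
sigma, n, omega = 1.0, 1, 0.7

A2 = M
A1 = B + 2j * sigma * n * M
A0 = omega**2 * A - (sigma * n)**2 * M + 1j * sigma * n * B - Df_inf

lam = quadratic_eigenvalues(A2, A1, A0)
for l in lam:  # each root satisfies the dispersion relation
    assert abs(np.linalg.det(l**2 * A2 + l * A1 + A0)) < 1e-8
```

Sweeping $\omega$ and $n$ then traces out a discretization of the dispersion set.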
Using standard cut-off arguments as in \cite{BeynLorenz2008},\cite{Otten2014},\cite{BeynOtten2016b},
the following result can be shown for suitable function spaces (e.g. $L^2(\R^d,\R^m)$):
\begin{proposition}[Essential spectrum of rotating waves]\label{prop:3.1}
Let $f\in C^1(\R^m,\R^m)$ with $f(v_{\infty})=0$ for some $v_{\infty}\in\R^m$. Let $v_{\star}\in C^2(\R^d,\R^m)$ with skew-symmetric $S_{\star}\in \R^{d,d}$ be a classical solution of
\eqref{equ:1.9} satisfying $v_{\star}(\xi)\to v_{\infty}$ as $|\xi|\to\infty$. Then, the dispersion set
\begin{equation*}
\sigma_{\mathrm{disp}}(\mathcal{P}) = \{\lambda\in\C\mid\text{$\lambda$ satisfies \eqref{equ:3.25} for some $\omega\in\R^k$, $\rho\in\R^{d-2k}$, $n\in\Z^k$}\}
\end{equation*}
belongs to the essential spectrum $\sigma_{\mathrm{ess}}(\mathcal{P})$ of the operator
polynomial $\mathcal{P}$ from \eqref{equ:3.14}.
\end{proposition}
\begin{example}[Cubic-quintic Ginzburg-Landau wave equation]\label{exa:4}
As shown in Example \ref{exa:3}, the cubic-quintic Ginzburg-Landau wave equation \eqref{equ:3.40} with coefficients and parameters \eqref{equ:3.41}
has a spinning soliton solution $u_{\star}(x,t)=v_{\star}(e^{-tS_{\star}}(x-x_{\star}))$ with rotational velocity $\mu_1^{(3)}=1.3658$.
We next solve numerically the eigenvalue problem for the cubic-quintic Ginzburg-Landau wave equation. For this purpose we consider the real-valued version
of \eqref{equ:3.40}
\begin{equation}
\label{equ:3.44}
M U_{tt} + B U_t = A \triangle U + F(U),\; x\in \R^d,\, t\geqslant 0
\end{equation}
with
\begin{align}
\label{equ:3.45}
\begin{split}
&M = \begin{pmatrix}\varepsilon_1&-\varepsilon_2\\\varepsilon_2&\varepsilon_1\end{pmatrix},\quad B = \begin{pmatrix}\rho_1&-\rho_2\\\rho_2&\rho_1\end{pmatrix},\quad
A = \begin{pmatrix}\alpha_1&-\alpha_2\\\alpha_2&\alpha_1\end{pmatrix},\quad U = \begin{pmatrix}U_1\\U_2\end{pmatrix},\\
&F(U) = \begin{pmatrix}(U_1\delta_1-U_2\delta_2)+(U_1\beta_1-U_2\beta_2)(U_1^2+U_2^2)+(U_1\gamma_1-U_2\gamma_2)(U_1^2+U_2^2)^2\\
(U_1\delta_2+U_2\delta_1)+(U_1\beta_2+U_2\beta_1)(U_1^2+U_2^2)+(U_1\gamma_2+U_2\gamma_1)(U_1^2+U_2^2)^2\end{pmatrix},
\end{split}
\end{align}
where $u=U_1+iU_2$, $\varepsilon=\varepsilon_1+i\varepsilon_2$, $\rho=\rho_1+i\rho_2$, $\alpha=\alpha_1+i\alpha_2$, $\beta=\beta_1+i\beta_2$, $\gamma=\gamma_1+i\gamma_2$, $\delta=\delta_1+i\delta_2$
and $\varepsilon_j,\rho_j,\alpha_j,\beta_j,\gamma_j,\delta_j\in\R$.
Now, the eigenvalue problem for the cubic-quintic Ginzburg-Landau wave equation is, cf. \eqref{equ:3.14}, \eqref{equ:3.15},
\begin{align}
\label{equ:3.46}
\left(\lambda^2 M\cdot + \lambda\left[B\cdot-2M(\partial_\xi \cdot)S\xi\right] + \left[-A\triangle\cdot + M(\partial_{\xi}^2 \cdot)(S\xi)^2+M(\partial_{\xi}\cdot)S^2\xi-B(\partial_{\xi}\cdot)S\xi-DF(v_{\star})\cdot\right]\right)w= 0.
\end{align}
The approximations of both the profile $v_{\star}$ and the velocity matrix $S=S_{\star}$ in \eqref{equ:3.46} are taken from the solution of \eqref{equ:3.42} at time
$t=2000$ in Example \ref{exa:3}. By Proposition \ref{prop:3.2} the problem \eqref{equ:3.46} has eigenvalues $\lambda=0,\pm i\sigma$. These eigenvalues will
be isolated and hence belong to the point spectrum, if the differential
operator is Fredholm of index $0$ in suitable function spaces. For the parabolic
case ($M=0$) this has been established in \cite{BeynOtten2016b} and we expect
it to hold in the general case as well.
Let us next discuss the dispersion set from Proposition \ref{prop:3.1}. The cubic-quintic Ginzburg-Landau nonlinearity $F:\R^2\rightarrow\R^2$ from \eqref{equ:3.45}
satisfies
\begin{equation}
\label{equ:3.47}
DF(v_{\infty})=\begin{pmatrix}\delta_1&-\delta_2\\\delta_2&\delta_1\end{pmatrix}\quad\text{for}\quad v_{\infty}=\begin{pmatrix}0\\0\end{pmatrix}.
\end{equation}
The matrices $A_2$, $A_1(n)$, $A_{\infty}(\omega,n)$ from \eqref{equ:3.24} of the quadratic problem \eqref{equ:3.23} are given by
\begin{align*}
A_2 = M,\quad
A_1(n) = B+2i\sigma n M,\quad
A_{\infty}(\omega,n) = \omega^2 A - \sigma^2 n^2 M + i\sigma n B - DF(v_{\infty})
\end{align*}
for $M,B,A$ from \eqref{equ:3.45}, $DF(v_{\infty})$ from \eqref{equ:3.47}, $\omega\in\R$, $n\in\Z$ and $\sigma=\mu_{1}^{(3)}$.
The dispersion relation \eqref{equ:3.25} for the spinning solitons of the Ginzburg-Landau wave equation in $\R^2$, combined with Proposition \ref{prop:3.1}, shows that every $\lambda\in\C$ satisfying
\begin{equation*}
\det\left(\lambda^2 M + \lambda(B+2i\sigma n M) + (\omega^2 A - \sigma^2 n^2 M + i\sigma n B - DF(v_{\infty}))\right)=0
\end{equation*}
for some $\omega\in\R$ and $n\in\Z$, belongs to the essential spectrum $\sigma_{\mathrm{ess}}(\mathcal{P})$ of $\mathcal{P}$. We may rewrite this in complex notation and find
the dispersion set
\begin{align}
\label{equ:3.50} \sigma_{\mathrm{disp}}(\mathcal{P})= \{ \lambda \in \C:
\lambda^2 \varepsilon + \lambda(\rho+2i\sigma n \varepsilon) + (\omega^2 \alpha - \sigma^2 n^2 \varepsilon + i\sigma n \rho - \delta)=0 \; \text{for some} \; \omega\in \R,n\in \Z\}.
\end{align}
The elements of the dispersion set are
\begin{align*}
\lambda_{1,2} = -\frac{\rho}{2\varepsilon} - i\sigma n \pm \frac{1}{2\varepsilon}\sqrt{\rho^2-4\varepsilon(\omega^2\alpha-\delta)},\quad
n \in \Z, \omega \in \R.
\end{align*}
They lie on the vertical line $\mathrm{Re}\,\lambda=-\mathrm{Re}\,\frac{\rho}{2\varepsilon}$ and on infinitely many horizontal line segments given for $n \in \Z$ by
$i\sigma n + \frac{1}{2 \varepsilon}[ -\rho - \sqrt{\rho^2+4 \varepsilon \delta},\, \rho+\sqrt{\rho^2+4 \varepsilon \delta}]$,
see Figure \ref{fig:3.3} (a),(b).
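These branch formulas are easy to cross-check against a direct root computation. A sketch assuming NumPy, with hypothetical complex parameter values in place of those fixed by \eqref{equ:3.41}:

```python
import numpy as np

# hypothetical parameters (the actual values are fixed in (3.41))
eps, rho, alpha, delta = 1 + 0.1j, 0.8 + 0.05j, 1 + 0.2j, -0.5 + 0.1j
sigma, n, omega = 1.3658, 2, 0.9

# quadratic from the dispersion set (3.50)
c2 = eps
c1 = rho + 2j * sigma * n * eps
c0 = omega**2 * alpha - sigma**2 * n**2 * eps + 1j * sigma * n * rho - delta
roots = np.roots([c2, c1, c0])

# closed-form branches lam_{1,2}
disc = np.sqrt(rho**2 - 4 * eps * (omega**2 * alpha - delta) + 0j)
lam = np.array([-rho / (2 * eps) - 1j * sigma * n + s * disc / (2 * eps)
                for s in (+1, -1)])

for l in lam:  # both branches solve the quadratic
    assert abs(c2 * l**2 + c1 * l + c0) < 1e-8
assert np.isclose(roots.sum(), lam.sum())    # same root pair (Vieta)
assert np.isclose(roots.prod(), lam.prod())
```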
\begin{figure}[H]
\centering
\subfigure[]{\includegraphics[height=4.2cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Spectrum_View2.png}\label{fig:3.3a}}
\subfigure[]{\includegraphics[height=4.2cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Spectrum_View1.png}\label{fig:3.3b}}\\
\subfigure[]{\includegraphics[height=4.2cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Frozen_Spectrum_View2.png} \label{fig:3.3c}}
\subfigure[]{\includegraphics[height=4.2cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Frozen_Spectrum_View1.png}\label{fig:3.3d}}
\caption{Subsets $\sigma_{\mathrm{disp}}(\mathcal{P})$ and $\sigma_{\mathrm{sym}}(\mathcal{P})$ of the spectrum for the cubic-quintic Ginzburg-Landau wave equation for $d=2$ with parameters \eqref{equ:3.41} (a),(b) and two different views of the numerical spectrum on a ball $B_R(0)$ with radius $R=20$ (c),(d).}
\label{fig:3.3}
\end{figure}
Figure \ref{fig:3.3}(a) and (b) show two different views of the part of the spectrum of the spinning solitons which is guaranteed by Propositions
\ref{prop:3.1} and \ref{prop:3.2}. It is subdivided into the symmetry set $\sigma_{\mathrm{sym}}(\mathcal{P})$ (blue circle), which is determined by Proposition \ref{prop:3.2}
and belongs to the point spectrum $\sigma_{\mathrm{pt}}(\mathcal{P})$, and the dispersion set $\sigma_{\mathrm{disp}}(\mathcal{P})$ (red lines), which is determined by Proposition \ref{prop:3.1}
and belongs to the essential spectrum $\sigma_{\mathrm{ess}}(\mathcal{P})$. In general, there may be further essential spectrum in $\sigma_{\mathrm{ess}}(\mathcal{P})\setminus\sigma_{\mathrm{disp}}(\mathcal{P})$
and further isolated eigenvalues in $\sigma_{\mathrm{pt}}(\mathcal{P})\setminus \sigma_{\mathrm{sym}}(\mathcal{P}) $. In fact, for the spinning solitons of the
cubic-quintic Ginzburg-Landau wave equation we find $18$ extra eigenvalues with negative real parts ($8$ complex conjugate pairs and $2$ purely real eigenvalues),
cf. Figure \ref{fig:3.3}(c),(d). These figures show two different views of the numerical spectrum of the cubic-quintic Ginzburg-Landau wave equation on
the ball $B_R(0)$ with radius $R=20$ equipped with homogeneous Neumann boundary conditions. They consist of the approximations of the point spectrum
subdivided into the symmetry set (blue circle) and additional isolated eigenvalues (blue cross sign), and of the essential spectrum (red dots).
Three of these isolated eigenvalues are very close to the imaginary axis, see Figure \ref{fig:3.4}(c). Therefore, the spinning solitons seem to be
only weakly stable. Finally, the approximated eigenfunctions belonging to the eigenvalues $\lambda\approx 0$ and $\lambda\approx +i\sigma$ are shown
in Figure \ref{fig:3.4}(a) and (b). In particular, Figure \ref{fig:3.4}(a) is an approximation of the rotational term $\langle Sx,\nabla v_{\star}(x)\rangle$.
\begin{figure}[H]
\centering
\subfigure[]{\includegraphics[height=4.0cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Frozen_Eigenfunction1.png}\label{fig:3.4a}}
\subfigure[]{\includegraphics[height=4.0cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Frozen_Eigenfunction2.png}\label{fig:3.4b}}
\subfigure[]{\includegraphics[height=4.0cm] {GinzburgLandauWaveEqu_2D_SpinningSoliton_Frozen_Spectrum_View3.png}\label{fig:3.4c}}
\caption{Eigenfunctions of the cubic-quintic Ginzburg-Landau wave equation for parameters \eqref{equ:3.41} belonging to the isolated eigenvalues $\lambda_1\approx 0$ (a) and $\lambda_2\approx i\sigma$ (b) and a zoom into the spectrum from Fig.~\ref{fig:3.3}(c) in (c).}
\label{fig:3.4}
\end{figure}
\end{example}
\textbf{Acknowledgement.}
We gratefully acknowledge financial support by the Deutsche
Forschungsgemeinschaft (DFG) through CRC 701 and CRC 1173.
For decades, two dimensional melting\cite{Nelson2002} has been an
important subject of research in condensed matter physics
both theoretically and experimentally.
One of the most important theoretical frameworks was given by Halperin and
Nelson\cite{Halperin,Nelson1979} and Young\cite{Young}, who proposed
(building upon the work by Kosterlitz and Thouless\cite{Kosterlitz}) the
so-called KTHNY theory, according to which two-dimensional melting can occur
in two stages of continuous, defect-mediated transitions with an intermediate
hexatic phase characterized by quasi-long-range orientational order and
short-range translational order\cite{Strandburg-Review}.
One of the most important questions in two-dimensional melting, which has
not yet been answered satisfactorily, is how to determine the form of the
interparticle potential that is most favorable for the existence of the
hexatic phase.
Even though several experimental studies support the existence of hexatic
phases\cite{Murray87, Kusner94, Marcus96, Zahn99,Pindak81, Brock86, Cheng88,
Chou1998,Angel2005,Olafsen2005, Reis2006}, computational studies of the two-dimensional
melting of hard-core potential systems\cite{Simulation-Review,Strandburg89,Lee92, Bates2000,
Mak2006}, including hard discs or Lennard-Jones (LJ) potentials, tend to favor first-order
transition scenarios (though some conflicting results also
exist)\cite{Strandburg-Review}.
Chui et al.\cite{Chui83-prl, Chui83-prb, Saito1982a,Saito1982b} advanced the possibility
of a first-order melting transition through grain-boundary formation when the defect core
energy becomes low enough. Kleinert and Janke argued that the nature of two-dimensional melting
can change from a continuous to a first-order transition as the magnitude of the so-called
angular stiffness of the local rotation field\cite{kleinert1988, janke1988, kleinert1989}
is decreased. From this argument, they contended that the melting of LJ systems
would be first order, while it would be continuous in the case of Wigner crystals,
where the particles interact via long-range Coulomb potentials.
Recently we investigated the criterion for the existence of the hexatic phase
by tuning the form of the interparticle potential (the Morse potential),
which allows one to change the range of the dominant interparticle interaction\cite{silee2008}.
The Morse potential can be written as
\begin{eqnarray}
V_M (r) & = & \epsilon_0 \left [ e^{ - \alpha (r - \sigma )} - 1
\right ]^2 - \epsilon_0 \label{Morse_pot}
\end{eqnarray}
where $r$ is the distance between particles, $\sigma$ is the position of the
minimum of the potential, and $\epsilon_0$ is the strength
of the interaction.
After setting $\epsilon_0 =1$ and $\sigma =1$, we can vary the value of the single
parameter $\alpha$ to tune the softness and the range of the potential.
A smaller value of $\alpha$ corresponds to a softer potential
with a longer-ranged attractive part. On the other hand, as the
value of $\alpha$ increases, the potential becomes stiffer and shorter-ranged (Fig.~\ref{Morse_pot_fig}).
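This trend can be tabulated directly from \eqref{Morse_pot}; a small sketch assuming NumPy (the sample radii are arbitrary):

```python
import numpy as np

def morse(r, alpha, eps0=1.0, sigma=1.0):
    """Morse potential V_M(r) = eps0*[exp(-alpha*(r - sigma)) - 1]^2 - eps0."""
    return eps0 * (np.exp(-alpha * (r - sigma)) - 1.0) ** 2 - eps0

for alpha in (3.5, 6.0, 12.0):
    # minimum of depth -eps0 at r = sigma, independent of alpha
    assert np.isclose(morse(1.0, alpha), -1.0)

# larger alpha => shorter-ranged attraction: |V(2)| shrinks rapidly with alpha
tails = [abs(float(morse(2.0, a))) for a in (3.5, 6.0, 12.0)]
assert tails[0] > tails[1] > tails[2]
```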
In our previous work, we investigated the trend of hexatic phase formation as
the range of the potential was varied. Detailed simulation results were presented
especially in the regime of a softer potential with $\alpha = 3.0$,
where the melting exhibits a stable region of hexatic phase on the $PT$ plane.
Here, we investigate how the characteristics of melting evolve
as the value of $\alpha$ is varied from $\alpha = 3.5$ to larger values of
$\alpha = 6$ and $\alpha=12$; see Fig.~\ref{Morse_pot_fig} for the shape of the
potential for different values of $\alpha$.
We observe that an atomic system described by the Morse potential exhibits
different types of melting. A system with $\alpha=3.5$ exhibits a melting
transition compatible with the KTHNY theory, showing a thermodynamically stable
hexatic phase as well as two-stage melting. For simulations with $\alpha=6$, we
find that, although some hexatic-like features appear in $NVT$-ensemble simulations,
the system exhibits a weakly first-order transition in the $NPT$ ensemble. Systems with a short-range
potential with $\alpha=12$ show a strongly first-order transition compared to
the case of $\alpha=6$, clearly showing coexistence of solid and liquid.
These results appear to be consistent with the arguments of Kleinert\cite{kleinert1989}
in that systems with short-ranged potentials are associated with smaller angular
stiffness and a first-order melting transition.
\section{Simulation Methods and Results}
In this work, we performed $NPT$ MD simulations using the modified
Parrinello-Rahman (PR) method\cite{pr1980, Li92} combined
with a Nose-Hoover (NH) thermostat\cite{Nose84}. For convenience, we set the
particle mass $m=1$, which implies that the
time unit $t_0 \equiv \sqrt{m \sigma^2 /\epsilon_0}$ also becomes
unity when we set $\sigma = 1$ and $\epsilon_0 =1$.
The equations of motion were integrated via the
Nordsieck-Gear 5th-order predictor-corrector method with the
integration time step of $\Delta t=0.002$. This guarantees the
conservation of the total Hamiltonian without noticeable drift. In
the simulations we used two empirical parameters, the barostat mass
$M_{v}=1$ and the thermostat mass $M_{s}=1$. Test simulations
with several other values ($M_{v}=0.1, 1, 10 , M_{s}=0.1, 1, 10$) of
the parameters were also performed with almost the same results.
The number of particles employed ranges from $N=400$ up to $N=10000$.
In order to investigate the characteristics of the melting transition,
we obtained the isothermal equation of state on the plane of pressure
vs.\ density. This was done with NH-MD simulations by decoupling the PR
(isobaric) part from the NH-PR MD equations of motion, taking $M_v = \infty$,
which reduces the system to the $NVT$ condition.
The pressure was evaluated by means of the virial expression
for the range of densities corresponding to the region of the transition
from liquid to solid. For each density, $10^6 \sim 3 \times 10^6$ steps of
integration were carried out for equilibration beginning with a
configuration of triangular lattice and, after equilibration, $10^7$
steps of integration were performed for thermodynamic calculations.
In the $NVT$ ensemble the shape of the box has to be fixed.
In order to reduce the finite-size effect, we used a rhombic box (with the
smaller side angle of $60$ degrees) with periodic
boundary conditions. However, independent results of ours from a square box
showed no significant difference (from those of the rhombic box) with respect
to the quantities of our interest.
An important criterion for the existence of a hexatic phase (and hence a continuous
melting transition) is that the isothermal equation of state for pressure
vs.\ density exhibits monotonic behavior together with a region of a dip
in the slope $dP/d\rho$. On the other hand, a first-order melting transition
would be associated with the existence of a van der Waals type loop in the pressure
vs.\ density curve, with unstable and metastable regions in $NVT$-ensemble simulations.
In order to investigate the nature of the possible hexatic phases, one has to
compute the bond-orientational order parameter. The local bond-orientational order
parameter $\psi_{6}(r)$ at position $r$ is defined as
\begin{equation}
\psi_{6}(r)=\frac{1}{N_{i}}\sum_{j}e^{6i\theta_{ij}(r)}.
\end{equation}
Here, the sum over particles $j$ runs over the $N_{i}$ neighbors of the
particle $i$ (located at position $\vec{r}$), with $\theta_{ij}$
being the angle between the particles $i$ and ${j}$ with respect to
a fixed reference axis.
We regarded the particles within a cutoff radius as neighbors,
where the cutoff radius is chosen as the first minimum of the pair
correlation function of the system. This method is found to be
efficient and reliable for large-scale simulations\cite{Bagchi96}.
Then the global bond-orientational order parameter is defined as
\begin{equation}
\Psi_{6} = \Big| \frac{1}{N}\sum_{r} \psi_{6}(r) \Big|
\end{equation}
where $N$ denotes the total number of particles in the system.
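As a consistency check, $\psi_6$ has modulus one for an ideal sixfold-coordinated neighbor shell. The following sketch (assuming NumPy; the bond angles are synthetic) computes $\psi_6$ directly from the definition:

```python
import numpy as np

def psi6(angles):
    """Local bond-orientational order from the bond angles theta_ij
    of the N_i neighbors of particle i."""
    return np.mean(np.exp(6j * np.asarray(angles, dtype=float)))

theta0 = 0.37  # arbitrary lattice orientation
ideal = theta0 + np.arange(6) * np.pi / 3  # perfect triangular-lattice shell
assert np.isclose(abs(psi6(ideal)), 1.0)

rng = np.random.default_rng(2)
noisy = ideal + 0.1 * rng.standard_normal(6)  # distorted shell
assert abs(psi6(noisy)) < 1.0
```

Averaging $\psi_6$ over all particles and taking the modulus then gives $\Psi_6$ as above.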
In order to distinguish the bond-orientational order of the different thermodynamic
phases, we compute the spatial correlation function $G_{6}(r)$ of the
bond-orientational order parameter, defined as \cite{Halperin}
\begin{equation}
G_{6}(r)=\left<\psi_{6}(r)\psi_{6}^{\ast}(0)\right>.
\end{equation}
In the hexatic phase, according to KTHNY theory, the bond-orientational
correlation function is expected to exhibit an algebraic decay
i.e., $G_{6}(r) \sim r^{-\eta_{6}}$ with the decay exponent $\eta_6 \leq 1/4 $,
where $\eta_{6}=1/4$ corresponds to the limit of the power-law decay behavior
in the KTHNY theory.
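In practice $\eta_6$ is extracted from a least-squares fit of $\log G_6$ against $\log r$; a sketch on synthetic power-law data (assuming NumPy):

```python
import numpy as np

# synthetic hexatic-like data G6(r) ~ r^(-eta6) with eta6 = 0.2
r = np.linspace(1.0, 50.0, 100)
eta6_true = 0.2
G6 = r ** (-eta6_true)

# slope of the log-log fit gives -eta6
slope, intercept = np.polyfit(np.log(r), np.log(G6), 1)
eta6_fit = -slope
assert np.isclose(eta6_fit, eta6_true)
assert eta6_fit <= 0.25  # KTHNY bound for a hexatic phase
```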
In order to further understand the nature of the hexatic phase,
we obtained the histogram distribution\cite{Lee90} of the density order parameter
for different system sizes at the values of pressure and temperature where
a melting transition is expected (from other measurements) to occur.
A histogram with a single peak would imply a
continuous melting transition. On the other hand, the existence of double peaks
with increasing peak heights (as the system size increases) would imply
a first-order transition.
We also investigate the behavior of the linear susceptibility of the global
bond-orientational order parameter near the melting transition in order to check
consistency with the result from the isothermal equation of state.
Specifically, we obtain the size dependence of the susceptibility by calculating
the fluctuation of the bond-orientational order parameter for sub-blocks of the system
with linear size $L$, which is defined as\cite{weber95}
\begin{equation}
\chi_L = L^d \left ( \left < \Psi_{6}^2 \right >_L - \left < \Psi_{6} \right >^{2}_{L} \right ) ,
\end{equation}
where $d=2$ is the spatial dimension.
In the computation, the system is sub-divided into sub-blocks of linear sizes with
$L = N/ M_b $ where $M_b$ ranges from $10$ to $20$.
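This sub-block susceptibility is just a scaled variance; the sketch below (assuming NumPy, over hypothetical per-measurement values of $\Psi_6$ in a sub-block of linear size $L$) makes the definition concrete:

```python
import numpy as np

def chi_L(psi6_samples, L, d=2):
    """chi_L = L^d (<Psi6^2>_L - <Psi6>_L^2) for sub-blocks of linear size L."""
    s = np.asarray(psi6_samples, dtype=float)
    return L**d * (np.mean(s**2) - np.mean(s)**2)

# no fluctuations => zero susceptibility
assert chi_L([0.5, 0.5, 0.5], L=10) == 0.0

# fluctuating samples: chi_L = L^2 * Var(Psi6)
assert np.isclose(chi_L([0.4, 0.6], L=10), 100 * np.var([0.4, 0.6]))
```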
Also, the Binder cumulant of the global bond-orientational
order parameter for subsystem (linear) size $L$ is defined by
\begin{equation}
U_{L} = 1- \frac{\left < \Psi_{6}^{4} \right >_L }{ 3 \left < \Psi_{6}^{2} \right >^{2}_{L} },
\end{equation}
where the subscript $L$ denotes that the quantities are calculated for subsystems
of linear size $L$.
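Two limiting values make the cumulant useful: a delta-like distribution of $\Psi_6$ (deep in the ordered phase) gives $U_L \to 2/3$, while the modulus of a zero-mean complex Gaussian (a model of fully disordered bond order) gives $U_L \to 1/3$. A sketch assuming NumPy, with synthetic samples:

```python
import numpy as np

def binder(psi6_samples):
    """U_L = 1 - <Psi6^4> / (3 <Psi6^2>^2)."""
    s = np.asarray(psi6_samples, dtype=float)
    return 1.0 - np.mean(s**4) / (3.0 * np.mean(s**2) ** 2)

# delta-distributed order parameter: U = 1 - 1/3 = 2/3
assert np.isclose(binder([0.8, 0.8, 0.8]), 2.0 / 3.0)

# modulus of a zero-mean complex Gaussian (Rayleigh): <x^4> = 2 <x^2>^2, U -> 1/3
rng = np.random.default_rng(3)
z = rng.standard_normal(200000) + 1j * rng.standard_normal(200000)
assert abs(binder(np.abs(z)) - 1.0 / 3.0) < 0.05
```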
\subsection{The case of a soft and long-ranged Morse potential: $\alpha = 3.5$}
Here, we first deal with the case of a moderately soft (and longer-ranged)
potential with $\alpha = 3.5$. Figure~\ref{pre_den_3.5} shows the
isothermal equation of state (at $T=0.7$) on the plane of pressure vs.\
density, obtained from the $NVT$ (NH-MD) simulations described above. We define
the density $\rho$ as $\rho \equiv N \sigma^2 /V$, where $N$ is the total
number of particles and $V$ the total volume (area in two dimensions) of the
system. Densities were chosen from the range $\rho=1.56 \sim 1.62$, with the
density increment of $\Delta \rho = 0.005$, and the pressure was evaluated by
means of the virial expression (with $k_{B}=1$). This range of the density
corresponds to the region of the transition from liquid to solid. The number
of particles employed was $N=3600$.
The isothermal curve increases monotonically near the transition
region, satisfying the condition of mechanical stability (in contrast to the
density discontinuity of a first-order transition) that the
isothermal compressibility be positive,
$ K_T = (1/\rho) (\partial \rho / \partial P)_T > 0$.
We may identify the boundary of the stable hexatic
phase with the values of the density where an abrupt change in the
isothermal compressibility occurs. In this way, we estimate the
density of the solid-hexatic transition as $\rho_{s-h} \simeq 1.6$.
Although the change in the isothermal compressibility is less
conspicuous near the hexatic-liquid boundary, we see that, near the
density $1.58\leq \rho \leq 1.585$, there exists a crossover in the
slope of the isothermal compressibility. Below, we give an
estimate of the density of the hexatic-liquid transition by applying a
theoretical expectation from the KTHNY theory on the decay exponent of
the spatial correlation of the orientational order parameter.
The fact that the pressure within the hexatic phase is monotonically
increasing as the density increases (with the resulting isothermal
compressibility kept positive) appears to be very compelling
evidence for a stable hexatic phase in thermal equilibrium.
In order to distinguish the orientational order of the phases, we
have computed the bond-orientational correlation function $G_{6}(r)$
defined above.
Figure~\ref{r_ori_3.5} shows the bond-orientational correlation function for
the density range $1.56\leq \rho \leq 1.615$. We find that,
for the density range of $1.585 < \rho \leq 1.595$, the averaged
correlation functions exhibit algebraic decays with the decay
exponent $\eta < 1/4$ while, at lower densities, they decay
faster than a power law in the long-distance limit.
At $\rho=1.585$, the orientational correlation exhibits a decay exponent of
approximately $\eta=0.25$. Here, the crossover from power-law to exponential
decay takes place with an exponent approximately equal to $1/4$,
and this crossover density agrees almost precisely with
the density region exhibiting an abrupt change of the slope, i.e.,
of the compressibility, in the equation of state (Figure~\ref{pre_den_3.5}).
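The decay exponent $\eta$ can be extracted, for instance, by a least-squares fit in log-log coordinates; the synthetic correlation data below is an exact power law with $\eta = 0.25$, standing in for the measured $G_6(r)$ curves.

```python
import math

# Estimate the algebraic decay exponent eta from G6(r) ~ r^(-eta)
# by a least-squares straight line in log-log coordinates.
# The synthetic data is an exact power law with eta = 0.25,
# standing in for the measured correlation functions.

def fit_eta(r, g6):
    xs = [math.log(v) for v in r]
    ys = [math.log(v) for v in g6]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return -slope  # log G6 = -eta * log r + const

r = [2.0, 4.0, 8.0, 16.0, 32.0]
g6 = [v ** -0.25 for v in r]
eta = fit_eta(r, g6)
# KTHNY: the stable hexatic phase is bounded by eta = 1/4.
hexatic_side = eta <= 0.25 + 1e-9
```

In practice the fit window matters: it should exclude short distances (where lattice-scale structure dominates) and the largest $r$ (where the finite box cuts the correlations off).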
Now, the fourth order Binder cumulant for the global bond-orientational
order parameter for sub-block systems of (linear) size $L$ is shown in
Fig.~\ref{binder_3.5}. We can see that the density at the crossing point
is around $\rho \simeq 1.585$. This is compatible with the boundary
density ($1.585$) between the liquid and the hexatic phase which was shown
above in the isothermal curve for the $NVT$ ensemble, and also compatible with
the boundary density between the liquid and hexatic-like phases in terms of the
power-law decay of the bond-orientational order near $\rho =1.585$.
It may be expected theoretically that the Binder cumulants of the local orientational
order in the hexatic phase collapse onto a single line because of the critical
characteristic of the phase. However, we may not take the non-collapse of
the Binder cumulants onto a line as evidence for the non-existence of a hexatic
phase. This is because, even for the case of the XY model,
complete collapse was not found in the region where the orientational order decays
algebraically; rather, the cumulants exhibit a crossing point at the transition
temperature \cite{oliveira}.
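For reference, the fourth-order Binder cumulant used above is commonly evaluated as $U_L = 1 - \langle m^4 \rangle / (3 \langle m^2 \rangle^2)$ from per-configuration magnitudes $m = |\psi_6|$; the sample values below are invented for illustration.

```python
# Fourth-order Binder cumulant U_L = 1 - <m^4> / (3 <m^2>^2),
# evaluated from per-configuration magnitudes m = |psi6| of the
# bond-orientational order parameter (sample values are invented).

def binder_cumulant(samples):
    n = len(samples)
    m2 = sum(m ** 2 for m in samples) / n
    m4 = sum(m ** 4 for m in samples) / n
    return 1.0 - m4 / (3.0 * m2 ** 2)

# Sharply peaked |psi6| (ordered phase): U_L approaches 2/3.
ordered = binder_cumulant([0.8, 0.8, 0.8, 0.8])
# A broad distribution (disordered side) gives a smaller cumulant.
broad = binder_cumulant([0.1, 0.9])
```

Curves of $U_L$ versus density for different sub-block sizes $L$ then cross near the transition, which is how the crossing density quoted above is read off.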
Now, we turn to the linear susceptibility for the global bond-orientational order
parameters near the melting transition.
Shown in Fig.~\ref{suscep_3.5} is the sub-block susceptibility obtained from the
fluctuation of the bond-orientational order parameter for sub-blocks of the system
with linear sizes $L$ with $L = N/ M_b $ where $M_b$ ranges from $11$ to $20$.
We see that the susceptibility shows a broad peak region near the density
$ 1.580 < \rho < 1.585$ which borders the liquid-hexatic phase boundary region.
We also see that the susceptibility exhibits a broader shape (showing weaker dependence
on the density) in the liquid region compared with the other cases that will be
shown below.
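One common estimator of the sub-block susceptibility from order-parameter fluctuations is $\chi_L \propto N_L (\langle m^2\rangle - \langle m\rangle^2)$; the sketch below uses invented per-configuration magnitudes and assumes this convention, which may differ from the paper's normalization.

```python
# Susceptibility from fluctuations of the sub-block order parameter,
# chi_L = N_L * (<m^2> - <m>^2)  (one common convention).
# The per-configuration magnitudes below are invented for illustration.

def susceptibility(samples, n_particles):
    n = len(samples)
    mean = sum(samples) / n
    mean2 = sum(m * m for m in samples) / n
    return n_particles * (mean2 - mean ** 2)

# Larger fluctuations (near the transition) give a larger chi.
chi_far = susceptibility([0.80, 0.81, 0.79, 0.80], 400)
chi_near = susceptibility([0.2, 0.8, 0.3, 0.7], 400)
```

The peak of $\chi_L$ as a function of density is then the marker compared against the phase boundary in the text.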
Figure~\ref{p3.5_conf_9} shows a snapshot of the particle configuration
for density $\rho=1.585$ within the hexatic phase region but close to
the transition (to the liquid phase) at the temperature $T=0.7$, which shows free
dislocations (i.e., bound pairs of disclinations).
This shows rather clearly the fundamental role of defects leading to the
power law decay of orientational correlations.
In order to further understand the nature of the hexatic phase
we obtained the histogram distribution\cite{Lee90} of the density order parameter
for four different system sizes ($N=900, 1600, 3600, 10000$)
under constant external pressure and temperature of $T=0.7$, and $P=13.5$, where
a hexatic phase is expected to occur from our measurement of the
orientational correlations.
In Fig.~\ref{den_dist_Morse_3.5} we see that all the
histograms exhibit single peaks, which implies that there exists a unique phase
with minimum free energy. It is also observed that, as the number of
particles increases, the peak height becomes higher and, at the same time,
the width of the peak decreases.
Also, the position of the peak tends to shift to lower density
(within the hexatic regime), probably due to the development of long-range fluctuations.
This indicates that this region corresponds to a single-phase region
(unlike a solid-liquid mixture), consistent with the absence of a van der Waals loop in
the pressure vs. density curve.
All of these observations lead us to the conclusion that there exists a
thermodynamically stable hexatic phase, consistent with the KTHNY melting scenario,
for the case of $\alpha = 3.5$.
\subsection{The case of an intermediate-ranged potential: $\alpha = 6$ }
Next, we examine an intermediate-range potential with $\alpha=6$,
which corresponds approximately to the famous LJ potential\cite{tclim2003}.
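The range parameter $\alpha$ enters through the standard Morse pair potential $V(r) = \varepsilon\,[e^{-2\alpha(r-r_0)} - 2e^{-\alpha(r-r_0)}]$; the sketch below (with illustrative units $\varepsilon = r_0 = 1$, an assumption for the example only) shows how a larger $\alpha$ shortens the attractive tail.

```python
import math

# Morse pair potential
#   V(r) = eps * (exp(-2*a*(r - r0)) - 2*exp(-a*(r - r0))),
# whose effective range shrinks as the parameter a (alpha) grows.
# eps = r0 = 1 here, purely for illustration.

def morse(r, alpha, eps=1.0, r0=1.0):
    x = math.exp(-alpha * (r - r0))
    return eps * (x * x - 2.0 * x)

# Minimum of depth -eps at r = r0, for any alpha.
v_min_35 = morse(1.0, 3.5)
# A larger alpha decays faster: at r = 2*r0 the tail is much weaker.
tail_35 = morse(2.0, 3.5)
tail_12 = morse(2.0, 12.0)
```

This is the sense in which $\alpha = 3.5$, $6$, and $12$ correspond to long-, intermediate-, and short-ranged interactions in the text.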
In Fig.~\ref{pre_den_6}, the equation of state exhibits a weak van der Waals-like loop
in the pressure vs. density, which indicates a first order transition.
The unstable region ranges from $ \rho \simeq 1.04 $ to $ \rho \simeq 1.065$.
This is confirmed more rigorously by the histogram distributions of the density in
$NPT$ ensemble simulations for different system sizes.
In Fig.~\ref{den_dist_Morse6}, the histogram distributions of the density for systems
with $N=900$, $3600$, and $10000$ are shown from the $NPT$ ensemble for $T=0.57$ and $P=1.85$.
For the cases of $N=900$ and $3600$, we observe transitions between two peaks (through
the valley of finite height between the peaks) with the resulting double peaks in the
histograms. For the $N=10000$ systems,
however, we can no longer observe crossing between the coexisting phases. Instead,
we observe two different (separate) histograms that are determined by the
initial states, depending on whether the initial state is in the ordered solid
phase or in the disordered liquid phase.
Evidently, the free energy barrier increases with increasing system size, which
indicates clearly that the transition is of first order\cite{Lee90}.
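The double-peak diagnostic can be sketched as a simple check for bimodality of the binned density counts; the toy histograms below are invented, mimicking the unimodal ($\alpha=3.5$) and bimodal ($\alpha=6$) cases.

```python
# Toy bimodality check for a density histogram: count local maxima of
# the binned counts. Two peaks suggest phase coexistence (first order
# transition), a single peak a unique phase. Counts below are invented.

def count_peaks(counts):
    peaks = 0
    for i in range(1, len(counts) - 1):
        if counts[i] > counts[i - 1] and counts[i] >= counts[i + 1]:
            peaks += 1
    return peaks

single = [1, 4, 9, 4, 1]         # unimodal, hexatic-like (alpha = 3.5)
double = [1, 6, 2, 1, 5, 7, 2]   # bimodal, coexistence (alpha = 6)
```

In the full analysis the depth of the valley between the two peaks (on a log scale) measures the free energy barrier, whose growth with $N$ is the first-order signature cited above.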
Nevertheless, the system configurations in the coexistence region resemble those of the
hexatic phases (Fig.~\ref{p6_conf_11}), showing algebraically decaying orientational
order (Fig.~\ref{r_ori6}). We see that the boundary between the liquid and hexatic-like
phase in terms of the orientational order is located around $ \rho = 1.05 \sim 1.055$.
Furthermore, we also observe a hexatic-like feature in
the $NPT$ ensemble, where the system temporarily goes through a hexatic-like phase
before transitioning into the other phase. This kind of characteristic in the $NPT$ ensemble
seems to have already been reported as a `metastable hexatic phase' in the LJ system by
Chen et al.\cite{Chen95, Somer97, Somer98}.
Although they argued that a large system size ($N \simeq 40000$) is necessary to observe
this kind of feature, we could observe such metastable hexatic phases even for systems with
smaller sizes of $N=1600$. We think that this hexatic-like feature in a first order transition
can be attributed to the weakly first-order nature of the transition.
As shown below, the relative free energy barrier in this case is considerably lower than that
in the case of the shorter-ranged potential of $\alpha=12$, from which we presume that
the metastable or unstable hexatic-like phase comes from defect proliferation by
thermal fluctuations, but not from some mechanism leading to a true phase transition.
Now, the fourth order Binder cumulant for the global bond-orientational
order parameter for sub-block systems of (linear) size $L$ is shown in
Fig.~\ref{binder_6.0}. We can see that the density at the crossing point
is located near $\rho \sim 1.06$. This corresponds to a point
in the middle of the unstable part of the van der Waals curve in the
isothermal equation of state.
Next, we deal with the linear susceptibility for the bond-orientational order
parameters. Shown in Fig.~\ref{suscep_6.0} is the sub-block susceptibility obtained
from the fluctuation of the bond-orientational order parameter for sub-blocks of
the system with linear sizes $L$ with $L = N/ M_b $ where $M_b$ ranges from $10$ to $20$.
We see that the susceptibility shows a sharper peak (as compared with the case of
$\alpha =3.5$) at the density $\rho \simeq 1.045$ which is a little bit below
the density ($\rho = 1.05 \sim 1.055$) where the orientational correlation
exhibits a spatial decay exponent of $0.25$.
This might be interpreted as weak evidence that the nature of the melting transition
of this system is inconsistent with the expectation of the KTHNY theory.
\subsection{The case of a short-ranged potential: $\alpha = 12$}
Finally, we investigate the case of a short-ranged potential with $\alpha=12$.
Figure~\ref{pre_dist_12} shows the equation of state in the density region
of $0.85 \leq\rho\leq 1.075$ obtained from $NVT$ ensemble simulations at $T=0.57$.
We can see that the equation of state exhibits a van der Waals-like region in the
density with the unstable region ranging from $ \rho \simeq 0.96$ to $ \rho \simeq 1.04$
clearly indicating a first order melting transition.
The fourth order Binder cumulant for the global bond-orientational
order parameter for sub-block system size $L$ is shown in
Fig.~\ref{binder_12.0}. We can see that the density at the crossing point
is around $\rho \simeq 1.045$, which lies near the lower density limit of the
metastable solid (spinodal) as shown in the $NVT$ isothermal equation of state.
The first-order nature of the melting transition is further confirmed by the double-peak
nature of the histogram distributions of the density from $NPT$ ensemble simulations
for system sizes of $N=400, 900, 3600, 10000$ as shown in Fig.~\ref{den_dist_12}.
In this case, the free energy barrier is presumably much higher than in the case of
$\alpha=6$, and none of the systems with $\alpha =12$ exhibit any tunneling transitions
between liquid-like and solid-like states during $10^8$ MD steps of simulations with
a time step of $\Delta t=0.002$.
Therefore, double peak histogram distributions for each of the system sizes are actually
obtained by combining two separate histograms, one with ordered initial states (higher density)
and the other with disordered initial states (lower density), respectively.
Also, a typical system configuration for $\alpha =12$ is shown in Fig.~\ref{p12_conf_10}
for $\rho =1.0$ and $T=0.57$ corresponding to the coexistence region, where we can see
that the hexatic-like feature disappears, and that a liquid-phase region consisting of defect
clusters coexists with a solid region. Also, in $NPT$ ensemble simulations of the melting
process, we can no longer observe a metastable or unstable hexatic phase, but instead observe
a discontinuous, abrupt change in density. From these observations we thus conclude that the
first-order nature of the melting gets stronger as the potential range decreases.
Next, we look into the linear susceptibility for the bond-orientational order
parameters. Shown in Fig.~\ref{suscep_12} is the sub-block susceptibility obtained
from the fluctuation of the bond-orientational order parameter for sub-blocks of
the system with linear sizes $L$ with $L = N/ M_b $ where $M_b$ ranges from $12$ to $20$.
We see that the susceptibility shows a peak at the density $\rho \simeq 0.96$
which is located near the limit of the metastable liquid (spinodal) as shown
in the curve of the isothermal equation of state from $NVT$ ensemble.
\section{Summary and discussions}
In conclusion, we have reported on some details of two dimensional melting
in systems of particles interacting via Morse potential when the range of the
potential is varied.
We showed that the melting of the system with a longer-ranged potential ($\alpha=3.5$)
clearly exhibits features consistent with the KTHNY theory, including a stable
hexatic phase. As the range of the potential decreases, however, we observe a
crossover in the nature of the transition to a first-order transition. In the case of $\alpha=6$,
where the range of the potential is intermediate, the system exhibits a weakly first-order
melting transition with an unstable hexatic-like phase appearing during the melting process
in $NPT$ simulations. In the case of $\alpha=12$, where the range of the potential is
shorter, we observe a more strongly first-order melting.
It appears that the crossover from continuous to first order melting transition in
this system is related to the decrease of the so-called angular stiffness
of the rotation field\cite{kleinert1988, janke1988, kleinert1989}. It would be interesting
to carry out a detailed calculation of the angular stiffness in our model system
as the value of $\alpha$ is varied. It would also be interesting to find a possible
connection to the change from continuous to first order transition in two dimensional
$XY$ model when the shape of the $XY$ potential gets
sharpened\cite{domany1984,himber_1984a,himber_1984b}.
\section{Introduction}
Speaker Verification (SV) is the task of validating the identity of a speaker using the voice sample of the claimant. The tremendous development in SV technology in the last five decades has enabled the system to be deployed in various application areas, starting from voice-based attendance systems to authentication for bank transactions~\cite{bai2021speaker}. However, the performance of such systems suffers when multiple languages and sensors are involved during testing~\cite{khosravani2017plda}. Hence, the scalability of SV systems is limited considering such scenarios. The citizens of India use approximately $122$ major and $1599$ other languages in their day-to-day conversation. Most importantly, they are polyglot in nature. Therefore, the flexibility in language and sensors during testing may restrict the reach of SV technologies. With this motivation, the Indian Institute of Technology Guwahati Multi Variability (IITG-MV) data was collected using five different sensors from people coming from different geographical locations of India, having variations in native language, dialect, and accent~\cite{haris2012multivariability}.
In the literature, there exist few works on the development of SV in multilingual and domain-mismatch scenarios~\cite{khosravani2017plda}. The reported works contribute at the feature, model, and score levels for minimizing the impact of language and domain mismatch~\cite{khosravani2017plda}. Most of the reported work uses either an in-house dataset or publicly available data (mostly crawled from the public domain) for performing their studies. The in-house data are limited by the number of speakers, languages, and sensors. Though the publicly available data have a huge number of speakers, languages and environmental variations, the unavailability of appropriate annotations (mostly done with automatic algorithms) poses a challenge for an in-depth analysis~\cite{khosravani2017plda}. The current challenge was planned with the aim of resolving the above-mentioned issues by inviting the community to work on the development of language- and sensor-invariant speaker representations.
This work considers the conversation recordings of the IITG-MV phase-I dataset. The dataset is divided into four parts, viz. (1) Development, (2) Enrollment, (3) Public, and (4) Private test set. The development set consists of speech utterances from $50$ speakers recorded with all $5$ sensors and in $13$ languages. The enrollment set consists of utterances from the remaining $50$ speakers, spoken in the English language and through a headset microphone. The public test set consists of utterances from the $50$ enrolled speakers in both matched and mismatched sensors and languages. The private test set consists only of cross-lingual and cross-sensor utterances. Along with releasing the dataset, the challenge was offered in the form of two sub-tasks, (1) constrained and (2) unconstrained. The constrained sub-task restricts the participants to using only the provided data. On the other hand, there are no such restrictions in the unconstrained sub-task. The aim of the constrained sub-task here was to encourage the community to develop SV with limited training data. Conversely, the aim of the unconstrained sub-task was to observe the performance of SV technologies developed with a sufficient amount of training data. A baseline system implemented with the X-vector framework for both constrained and unconstrained sub-tasks was made available to the participants during the challenge (available at \href{https://github.com/jagabandhumishra/I-MSV-Baseline}{\url{https://github.com/jagabandhumishra/I-MSV-Baseline}}). The performance of the baseline on the public test data for the two sub-tasks was $9.32\%$ and $8.15\%$, respectively.
The rest of the paper is organized as follows: the challenge rules are described in section~\ref{2}. The detailed description of the data preparation is described in section~\ref{3}. Section~\ref{4} reports the procedure of baseline system development and the performance measure used. A brief description of the top five systems along with their performance are described in section~\ref{5}. Finally, the summary and future directions are reported in section~\ref{6}.
\section{Challenge Rules}
\label{2}
As mentioned in the earlier section, the challenge consisted of two sub-tasks, viz. (1) constrained SV and (2) unconstrained SV.
\begin{itemize}
\item \textbf{Constrained SV}: Participants were not allowed to use speech data other than the speech data released as a part of the constrained SV challenge for the development of the SV system.
\item \textbf{Unconstrained SV}: Participants were free to use any publicly available speech data in addition to the audio data released as a part of unconstrained SV.
\end{itemize}
The challenge was organized as a part of the $25^{th}$ edition of the O-COCOSDA-2022 conference along with the Asian-Multilingual Speaker Verification (A-MSV) track. The participants were asked for registration. Upon agreeing to the data usage licenses agreement, the download link of the development, enrollment, and public test sets were provided. Through a license agreement, the participant teams agreed that they could use the data only for research purposes. Moreover, the top five systems in both the sub-tasks would have to submit the source code of their systems and a detailed report.
The public test set released during the time of registration had ground truth information. The purpose here was to tune the system parameters using the public test data. The participants were asked to upload their score files in a specific format on the challenge portal. The corresponding performance was evaluated by a back-end script and the results were uploaded to an online leaderboard. There was no constraint on uploading and evaluating the score files on the public test set. After around one month of the public test set release, the private test set was released without ground truth information. The participant teams were asked to submit their final results on the private test set within $24$ hours from the release of the private test set. A maximum of three successful attempts were allowed for each team for evaluating their system on the private test set.
\section{Data Preparation}
\label{3}
The IITG-MV speaker recognition dataset was recorded in four phases for dealing with various speaker recognition applications, viz. speaker identification, verification, and change detection, etc.~\cite{haris2012multivariability}. Among the four phases, the phase-I dataset is considered for this study. The IITG-MV-Phase-I dataset consists of recordings from $100$ speakers in reading and conversation mode. In both modes, each speaker has given their speech data in two sessions. The duration of each session is around $5-8$ minutes. In addition, each speaker has given their data in two languages, viz. English and favorite language. Favorite language mostly meant their mother tongue/native language and varied from person to person. Furthermore, all the speech utterances were recorded through five different sensors, viz. H01, M01, M02, D01 and T01. The details of the dataset can be found at~\cite{haris2012multivariability}. The utterances belonging to the conversation mode were only considered here. The total duration of the selected utterances is approximately $100$ hours. The selected utterances are named as the I-MSV dataset. Further, the I-MSV dataset is segregated into four parts, viz. development, enrollment, public test, and private test.
\subsection{Development set}
This partition consists of recordings from $50$ speakers. The utterances from each speaker are available in two languages, with two sessions, and with five sensors. The approximate duration of the development set is $50$ hours.
\subsection{Enrollment set}
This partition consists of recordings from $50$ speakers that are disjoint from the speakers used in the development set. The utterances belonging to both the sessions with the English language and the Headset (H01) sensor are used here. The first session utterances are completely used in this set. However, the utterances from the second session are segmented into two parts. Half of them are used in enrollment and the rest have been used in the public test set (to observe the performance in matched sensor and language conditions). The approximate duration of speech available for each speaker is $8-10$ minutes.
\subsection{Public test set}
This set consists of the utterances from the second session recordings with three sensors and cross-languages along with the matched utterances. The second session utterances in the original IITG-MV-Phase-I dataset are segregated into two parts. Half of them are reserved for the preparation of the private test set. After that, each utterance is segmented into $10$, $30$, and $60$ second utterances. The segments are split at silence regions detected using Voice Activity Detection. The segmented files were made available to the participants as the public test set. The total number of utterances available in this partition is $5907$.
\subsection{Private test set}
This set consists of the utterances from the second session recordings with four sensors and cross-languages. This partition does not consist of matched sensor and language utterances. The selected utterances are segmented into $10$, $30$, and $60$ second utterances and made available to the participants as the private test set. The total number of utterances available in this partition is $9521$. The partition consists of cross-language utterances from $10$ Indian languages.
\begin{table}[!t]
\centering
\caption{Baseline results on I-MSV dataset}
\label{baseline_perf}
\begin{tabular}{
|p{0.2\columnwidth}
|p{0.3\columnwidth}
|p{0.3\columnwidth}
|}
\hline
& \multicolumn{2}{c|}{\textbf{EER} (\%)} \\
\cline{2-3}
\textbf{Model} & \textbf{Overall} & \textbf{Matched Sensor and Language} \\
\hline
I-vector & $13.72$ & $4.61$ \\
\hline
X-vector & $9.32$ & $2.40$ \\
\hline
X-vector (unconstrained) & $8.15$ & $0.82$ \\
\hline
\end{tabular}
\end{table}
\begin{table*}[!t]
\centering
\caption{Summary of top $5$ submissions to the challenge. FE:=\emph{Frontend}, LF:=\emph{Loss Function}, BE:=\emph{Backend}, C-SV:=\emph{Constrained-SV}, UC-SV:=\emph{Unconstrained-SV}.}
\label{submission_summary}
\begin{tabular}{
|p{0.05\linewidth}
|p{0.2\linewidth}
|p{0.22\linewidth}
|p{0.22\linewidth}
|p{0.08\linewidth}
|p{0.08\linewidth}
|}
\hline
& & & & \multicolumn{2}{c|}{\textbf{EER} (\%)} \\
\cline{5-6}
\textbf{Team} &
\textbf{FE} &
\textbf{LF} &
\textbf{BE} &
\textbf{C-SV} &
\textbf{UC-SV} \\
\hline
team0 & Rawnet3 & Training: triplet margin loss; Fine-tuning: AAM Loss + K-Subcenter loss + Inter-topK loss & Cosine similarity & -- & $0.26$ \\
\hline
team1 & ResNet with SE attention & Softmax + Angular Prototypical Loss & Model scoring (DNN, Random Forest and Gradient Boosting Trees) & -- & $0.36$ \\
\hline
team2 & ECAPA-TDNN + SE-ResNet blocks & Weight Transfer loss + AAM-Softmax loss + L2 loss & Cosine similarity & $2.12$ & $0.63$ \\
\hline
team3 & ECAPA-TDNN SE-ResNet blocks & AAM Loss & Cosine similarity & $2.77$ & $2.70$ \\
\hline
team4 & ECAPA-TDNN + SE-ResNet blocks & Large Margin Cosine Loss & PLDA & $2.97$ & $2.97$ \\
\hline
\end{tabular}
\end{table*}
\section{Performance Measures and Baselines}
\label{4}
This challenge employs the Equal Error Rate (EER) measure to compare the performances of the different submissions with the baseline results. This section briefly describes the method of computing the EER measure and reports the baseline results on the I-MSV dataset. Let $N_{P}$ and $N_{N}$ be the number of positive and negative test samples in the data, respectively. The samples out of a total of $N_{P}$ positive samples predicted as positive are termed True Positives (TP). On the other hand, the samples out of a total of $N_{N}$ negative samples correctly predicted as negative are termed True Negatives (TN). Incorrectly predicted positive and negative samples are termed False Positives (FP) and False Negatives (FN), respectively. The prediction of a test sample as positive or negative is based on a pre-determined threshold $\tau$ which may be varied. The total numbers of TP, TN, FP, and FN for the whole test data can be used to compute two measures, viz., the False Acceptance Rate (FAR) and the False Rejection Rate (FRR). The FAR can be defined using eq.~\ref{far}.
\begin{equation}
\label{far}
\text{FAR}=\dfrac{FP}{FP+TN}
\end{equation}
\noindent Similarly, the FRR can be defined as in eq.~\ref{frr}.
\begin{equation}
\label{frr}
\text{FRR}=\dfrac{FN}{TP+FN}
\end{equation}
\noindent When $\tau$ is varied, different values of FAR and FRR can be obtained. Among all the different $\tau$ used, a specific threshold $\tau_{equal}$ can be identified which provides equal (or almost equal) values of FAR and FRR. The EER measure is computed as the mean of FAR and FRR at $\tau_{equal}$ (eq.~\ref{eer}).
\begin{equation}
\label{eer}
\text{EER}=\dfrac{1}{2} \left(FAR+FRR\right)
\end{equation}
\noindent where, $\mid \text{FAR}-\text{FRR}\mid \to 0$.
The challenge organizers provided results on the I-MSV dataset using Kaldi-based I-vector and X-vector systems as baselines for comparison. The baseline performances are reported in Table~\ref{baseline_perf}.
\begin{figure*}[!t]
\centerline{
\includegraphics[width=0.7\linewidth]{Duration_performance}
}
\centerline{
(a)
}
\centerline{
\includegraphics[width=0.7\linewidth]{Language_performance}
}
\centerline{
(b)
}
\centerline{
\includegraphics[width=0.7\linewidth]{Sensor_performance}
}
\centerline{
(c)
}
\caption{Illustrating the effect of (a) different duration, (b) different languages, and (c) different sensors on the performance of submitted systems.}
\label{fig:duration_language_sensor_effect}
\end{figure*}
\section{Systems and Results}
\label{5}
A total of $25$ teams registered for the I-MSV 2022 challenge. Among these, $10$ teams submitted their results for the public test set evaluation. For the private test set evaluation, a total of $6$ teams submitted their results and systems. The best $5$ participating systems are summarised in the following paragraphs. Table~\ref{submission_summary} lists a brief summary of the top $5$ systems.
The submission of \emph{team0} obtained the best EER of $0.26$ on the private test set using unconstrained training data. The best system of \emph{team0} used the Rawnet3 architecture~\cite{jung2022raw} as their front-end system. They initially trained the model with a Triplet Margin loss~\cite{BMVC2016_119}. Subsequently, they fine-tuned their model with a combination of Adaptive Angular Margin (AAM) K-Subcenter loss~\cite{deng2019arcface} and Inter-TopK loss~\cite{zhao2022multi}. They performed the backend scoring using the cosine-similarity measure and used adaptive score normalization.
The second best EER of $0.36$ using unconstrained data was obtained by \emph{team1}. They used the ResNet-34 architecture proposed in~\cite{heo2020clova} with Attentive Statistics Pooling~\cite{okabe18_interspeech} for their front-end. They trained the model using a combination of vanilla Softmax loss and Angular Prototypical loss~\cite{chung20b_interspeech}. They also proposed a two-layer model scoring system composed of Fully-Connected Feed-Forward layers, Random Forests and Gradient Boosting Trees.
The EER obtained by \emph{team2} in the constrained data scenario was $2.12$. They achieved an EER of $0.63$ using unconstrained training data. They used a combination of ECAPA-TDNN~\cite{desplanques20_interspeech} and ResNet-34~\cite{heo2020clova} with Squeeze-and-Excitation (SE) attention as front-end models to obtain the best results in the constrained data scenario. However, only the ResNet-34-SE network provided the best performance in the unconstrained scenario. For the unconstrained scenario, they fine-tuned the backbone model using a combination of Weight-Transfer loss~\cite{zhang2022npu}, AAM-Softmax loss and $L_{2}$ loss. The backend scoring was performed using the cosine similarity measure.
The \emph{team3} obtained an EER of $2.77$ in the constrained scenario and an EER of $2.70$ in the unconstrained scenario. They used a similar front-end system as that of \emph{team2} and trained it using the AAM loss. They also performed the backend scoring using cosine similarity.
The EER obtained by \emph{team4} in the unconstrained scenario was $2.97$. They also employed a similar front-end architecture as that of \emph{team2} and used the Large Margin Cosine loss for training. They performed the backend scoring using Probabilistic Linear Discriminant Analysis (PLDA)~\cite{jiang12_interspeech}.
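The cosine-similarity backend used by several of these systems reduces to a normalized dot product between speaker embeddings; the three-dimensional embeddings below are toy values (real embeddings are high-dimensional vectors produced by the front-end network).

```python
import math

# Backend scoring by cosine similarity between speaker embeddings,
# as used by several of the submitted systems. The 3-dimensional
# embeddings below are toy values; real ones are high-dimensional.

def cosine_score(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

enroll = [1.0, 0.5, 0.0]
same = [2.0, 1.0, 0.1]    # near-parallel direction: high score
other = [-0.5, 1.0, 1.0]  # different direction: low score
accept = cosine_score(enroll, same) > cosine_score(enroll, other)
```

A verification decision then compares the score against a threshold tuned on development data (e.g., at the EER operating point), optionally after score normalization.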
\section{Summary and Discussion}
\label{6}
The results obtained by the submitted systems can be summarised along the following broad directions. First, use of unconstrained training data is hugely beneficial in performing SV in a low-resource scenario like the current challenge. Second, automatic feature learning and end-to-end models can learn highly discriminating features. Third, the choice of loss function for the front-end system has a huge impact on the obtained performance of similar architectures. Fourth, simple backend scoring like cosine similarity might be enough if the learnt speaker embeddings are highly discriminating. Fifth, longer utterances (Fig.~\ref{fig:duration_language_sensor_effect}(a)) are more helpful in identifying the speakers. Sixth, change in language (Fig.~\ref{fig:duration_language_sensor_effect}(b)) degrades the SV performance. However, it might also be noted that such an observation may also be the result of imbalance in the number of utterances for the different languages in the I-MSV dataset. Seventh, the change in sensor (Fig.~\ref{fig:duration_language_sensor_effect}(c)) has a huge impact on the performance of SV systems. More specifically, SV systems fare poorly when presented with telephone channel recordings. In future, better SV systems may be developed by taking into consideration the observations made in this challenge.
\section*{Acknowledgments}
The authors would like to acknowledge the Ministry of Electronics and Information Technology (MeitY), Govt. of India, for supporting us through the ``Bhashini: Speech technologies in Indian languages'' project. We are also grateful to K. T. Deepak, Rajib Sharma and team (IIIT Dharwad, Karnataka), S. R. Nirmala, S. S. Chikkamath and team (KLETech, Hubballi, Karnataka), Debadatta Pati, Madhusudan Singh and team (NIT Nagaland, Nagaland), Joyanta Basu, Soma Khan and team (CDAC Kolkata, WB), Akhilesh Kumar Dubey, Govind Menon and team (KLU Vijayawada, AP), Gayadhar Pradhan, Jyoti Prakash Singh and team (NIT Patna, Bihar), and S. R. M. Prasanna, Gayathri A. and team (IIT Dharwad, Karnataka) for their help and cooperation in successfully organizing this challenge.
\section{Introduction}
Several tasks related to foreign and second language (L2) learning can be partly or entirely automatized with the help of Natural Language Processing (NLP) tools. One such task is exercise generation, whose automation offers both self-directed learning opportunities and support for teaching professionals' practice. The pedagogical relevance and practical usefulness of such solutions, however, would need to be further improved before these systems can become widely used in language instruction. During our work, we aimed at maintaining a pedagogical angle, on the one hand, by incorporating statistical information from existing hand-written teaching materials into our selection criteria and, on the other hand, by evaluating the performance of our system with L2 teachers and learners.
Practice plays an important role in L2 learning for the development of both receptive and productive skills \cite{dekeyser2007practice}. Corpora as potential practice material are readily available in large quantities; however, their use in L2 teaching has been both supported and opposed in previous years, and \citeasnoun{o2007corpus} present an overview of this debate. Corpora offer a large amount of diverse examples at a low cost, and their use has been shown to have a positive effect on learners’ progress \cite{cobbthere,cresswell2007getting}. Moreover, corpora are evidence of real-life language use which, however, might be hard for learners to process \cite{kilgarriff2009corpora}. Non-authentic, teacher-constructed materials have also been subject to criticism. While this approach benefits from teachers' expert knowledge, these materials are “based on intuition about how we use language, rather than actual evidence of use” \cite[p.~21]{o2007corpus}. We aim at bringing together intuition and evidence about language use by employing insights from coursebooks for selecting examples from real-life corpora (e.g.~news texts, novels).
Recent years have seen a number of efforts in the NLP community to automatically generate exercise items (e.g.~\cite{arregik2011automatic,smith2010gap,sumita2005measuring}). Most of these, however, tend to neglect what criteria sentences should fulfil in order to be suitable as exercise items and, instead, build on either a predefined set of manually selected sentences, or require merely a certain linguistic pattern (e.g.~a particular word) to be present in the sentence (see Section~\ref{ssec:ex_item}). When selecting sentences from corpora, however, there are a number of additional aspects that sentences need to adhere to in order to be usable and understandable in isolation. These have been previously explored mostly in a lexicographic context \cite{kilgarriff2008gdex}, but they are also relevant for language teaching \cite{kilgarriff2009corpora}. Two fundamental questions in this respect are: (i) Can the sentence function in isolation, outside its larger textual context? (ii) Is the complexity of the linguistic content of the sentence suitable for the intended L2 learner(s)?
We will refer to the former as \emph{context independence} and to the latter as \emph{L2 complexity}.
Language learners pass through different learning stages (levels) reflecting the development and improvement of their competences. A scale of such levels is \textit{CEFR}, the Common European Framework of Reference for Languages \cite{councilofeurope2001}. The CEFR defines proficiency levels on a six-point scale: A1 (beginner), A2 (elementary), B1 (intermediate), B2 (upper intermediate), C1 (advanced) and C2 (mastery). A subset of language learners’ competences are \textit{linguistic competences}, which include, among others, lexical, grammatical and semantic competences. When assessing L2 complexity, we concentrate on linguistic competences required for reading comprehension since these can be matched to linguistic elements observable in language samples written for learners at different CEFR levels.
Both context independence and L2 complexity emerged as a main reason for discarding candidate sentences in previous evaluations \cite{arregik2011automatic,Pilan-Ildiko2013-9}, but, to our knowledge, thorough methods targeting these aspects have not been proposed to date.
Our approach, building on previous attempts at selecting sentences, contributes to previous research by offering a comprehensive set of criteria and by performing a more sophisticated selection in terms of the two fundamental aspects just mentioned, context independence and L2 complexity. We propose a hybrid system with both rule-based and machine learning driven components that encompasses a wide range of aspects. Incorporating rules makes the system customizable to users' needs and thus relevant for a wide range of application scenarios including vocabulary and grammatical exercises of different formats, as well as vocabulary examples. An evaluation with teachers and students indicates that our system identifies sentences that are, in general, of a suitable level of difficulty for learners. The algorithm is available to the general public free of charge both as a customizable sentence selection interface and as a web service. The development of automatically generated exercises using the selected sentences is also in progress.
Our target language is Swedish, a language for which the number of L2 learners has grown rapidly over recent years \cite{scb2016}. Although the current implementation is based on resources and tools for Swedish, the methods described can serve as an example for future implementations of exercise item candidate selection systems for other languages.
This paper is structured as follows. In Section~\ref{sec:background}, we provide an overview of the related literature. Then, in Section~\ref{sec:implementation}, we describe our sentence selection framework in detail together with its implementation. Finally, in Section~\ref{sec:evaluation}, we present and discuss the results of a user-based evaluation of the system.
\section{Related work}
\label{sec:background}
In this section, we provide an overview of the related literature which includes sentence selection strategies for both vocabulary examples and exercise items as well as studies on readability and CEFR level prediction.
\subsection{Sentence selection for vocabulary examples}
\label{ssec:context_indep}
GDEX, Good Dictionary Examples \cite{husak2010automatic,kilgarriff2008gdex} is an algorithm for selecting sentences from corpora for the purposes of illustrating the meaning and the usage of a lexical unit. It incorporates a number of linguistic criteria (e.g.~sentence length, vocabulary frequency, anaphoric pronouns) based on which example candidates are ranked. Some of these are related to context dependence (e.g.~incompleteness of sentences, presence of personal pronouns), but they are somewhat coarse-grained criteria without a focus on syntactic aspects.
Besides English, the algorithm has also been successfully implemented for other languages. \citeasnoun{kosem2011gdex} and \citeasnoun{tiberius2015gdex} explore GDEX configurations for Slovene and Dutch respectively, aiming at identifying the optimal parameter settings for these languages for lexicographic projects.
\citeasnoun{didakowski2012automatic} propose an example selection algorithm similar to GDEX for German. A fundamental difference of this method compared to the ranking mechanism of GDEX is having "hard criteria" which, if not met, result in sentences being excluded.
GDEX has also inspired a Swedish algorithm for sentence selection \cite{volodina2012semi} and it has been employed also for generating gap-fill exercises \cite{smith2010gap}. Furthermore, a number of machine learning approaches have been explored for these purposes in recent years \cite{geyken2015using,lemnitzer2015combining,ljubevsic2015predicting}.
Example sentence selection for illustrating lexical items has also been addressed from a language teaching perspective by \citeasnoun{segler2007investigating}, where a set of selection criteria used by teachers was modelled with logistic regression. The main dimensions examined include syntactic complexity and similarity between the original context of a word and an example sentence.
\subsection{Sentence selection for exercise item generation}
\label{ssec:ex_item}
In a language-learning scenario, corpus example sentences can be useful both as exercise items and as vocabulary examples. Sentences used in exercises are also known as \textit{seed sentences} \cite{sumita2005measuring} or \textit{carrier sentences} \cite{smith2010gap} in the Intelligent, i.e.~NLP-enhanced, Computer-Assisted Language Learning (ICALL) literature.
Previous work on exercise item generation has taken into consideration a rather limited amount of aspects when selecting seed sentences. In some cases, sentences are only required to contain a lexical item or a linguistic pattern that constitutes the target of the exercise, but context dependence and L2 complexity are not explicitly addressed \cite{sumita2005measuring,Mitkov:2006:CEG:1133917.1133920,arregik2011automatic,wojatzki-melamud-zesch:2016:BEA11}.
LanguageMuse \cite{burstein2012language}, a system supporting teachers in generating activities for learners of English, also belongs to this category. The sentences are selected from texts provided by teachers, the criteria of selection being the presence of a specific linguistic element that constitutes the target of the exercise: a lexical entity, a syntactic structure or a discourse relation.
Another alternative has been using dictionary examples as seed sentences, e.g.~from WordNet \cite{pino2009semi}. Such sentences are inherently context-independent, however, they impose some limitations on which linguistic aspects can be targeted in the exercises and they are not adjusted to finer-grained L2 learning levels. A system using GDEX for seed sentence selection is described in \citeasnoun{smith2010gap}, who underline the importance of the well-formedness of a sentence and determine a sufficient amount of context in terms of sentence length.
\citeasnoun{P16-4020} describe an ICALL system for fill-in-the-blanks preposition learning exercises, where seed sentences are checked for their lexical difficulty based on the level of the words according to a graded vocabulary list. \citeasnoun{pilan-volodina-johansson:2014:W14-18} present and compare two algorithms for candidate sentence selection for Swedish, using both rule-based and machine learning methods. Context dependence, which has not been specifically targeted in their system, emerged as a key issue underlying suboptimal candidate sentences during an empirical evaluation.
\subsection{Readability and proficiency level classification}
\label{ssec:readability}
The degree of complexity in the linguistic content of sentences and texts is one of the aspects underlying not only proficiency levels, but also readability. Readability measures typically classify texts into school grade levels or into a binary category of easy- vs. hard-to-read, but the term has also been used in the context of CEFR level classification, e.g.~\citeasnoun{xia-kochmar-briscoe:2016:BEA11}, \citeasnoun{franccois2012ai}.
In recent years a number of NLP-based readability models have been proposed not only for English \cite{collins2004language,schwarm2005reading,graesser2011coh,vajjala2012improving,collinscomputational}, but also for other languages, e.g.~Italian \cite{dellorletta-montemagni-venturi:2011:SLPAT} and Swedish \cite{heimann2013see}. The linguistic features explored so far for this task include information, among others, from part-of-speech (POS) taggers and dependency parsers. Cognitively motivated features have also been proposed, for example, in the Coh-Metrix \cite{graesser2011coh}. Although the majority of previous work focuses primarily on text-level analysis, the concept of sentence-level readability has also emerged and attracted an increasing interest in recent years \cite{Pilan-Ildiko2013-9,Vajjala.Meurers-14-eacl,dell2014assessing}.
The prediction of proficiency levels for L2 teaching materials using supervised machine learning methods has been explored for English \cite{heilman2007combining,huang2011robust,zhang2013feature,salesky-shen:2014:W14-18,xia-kochmar-briscoe:2016:BEA11}, French \cite{franccois2012ai}, Portuguese \cite{branco2014rolling}, Chinese \cite{sung2015leveling} and, without the use of NLP, for Dutch \cite{velleman2014online}.
Readability formulas for the Swedish language have a long tradition. One of the most popular, easy-to-compute formulas is LIX (\emph{Läsbarhetsindex}, `Readability index') proposed by \citeasnoun{bjornsson1968lasbarhet}. This measure combines the average number of words per sentence in the text with the percentage of long words, i.e.~tokens consisting of more than six characters. Besides traditional formulas, supervised machine learning approaches have also been tested. A Swedish document-level readability model is described by \citeasnoun{heimann2013see} and \citeasnoun{falkenjack2013features}. \citeasnoun{pilan2015readable}, on the other hand, investigate L2 complexity for Swedish both at document and sentence level.
\section{HitEx: a candidate sentence selection framework and its implementation}
\label{sec:implementation}
In this section, we present our candidate sentence selection framework, HitEx (\textit{Hitta Exempel} ‘Find Examples’) and its implementation. After an overall description, we introduce and motivate each selection criterion in Sections~\ref{ssec:trg_pattern} to \ref{ssec:lex_crit}.
\subsection{Overall description}
In Table~\ref{tab:criteria}, we show the selection criteria belonging to the proposed framework, grouped into broader categories. Each \emph{criterion} is used to scan a sentence for the presence (or the absence) of linguistic elements associated with its ``goodness'', i.e.~its suitability for the intended use. Most criteria target aspects that are negatively correlated to the goodness of a sentence. Certain selection criteria are associated with one (or more) numeric \textit{parameter(s)} that users can set, e.g.~the minimum and maximum number of tokens for the sentence length criterion.
The categories concerning the search term, well-formedness and context independence can be considered \emph{generic} criteria that are applicable for a number of different use cases, e.g.~different exercise types, vocabulary examples, while the rest of the criteria are more \emph{specific} for exercise item generation.
In general, the sources that served as basis for these criteria include previous literature (Section \ref{sec:background}), L2 curricula and the qualitative results of previous user-based evaluations \cite{volodina2012semi,pilan-volodina-johansson:2014:W14-18}.
\begin{table}
\centering
\begin{tabular}{cl|cl}
\toprule
{\bf Nr} & {\bf Criterion} & {\bf Nr} & {\bf Criterion} \\
\midrule
& \bf Search term & & \bf Additional structural criteria \\
1 & \it Absence of search term & 13 & Negative formulations \\
2 & Number of matches & 14 & \it Interrogative sentence \\
3 & \it Position of search term & 15 & \it Direct speech \\
& \bf Well-formedness & 16 & \it Answer to closed questions \\
4 & \it Dependency root & 17 & Modal verbs \\
5 & Ellipsis & 18 & Sentence length \\
6 & \it Incompleteness & & \bf Additional lexical criteria\\
7 & Non-lemmatized tokens & 19 & Difficult vocabulary \\
8 & Non-alphabetical tokens & 20 & Word frequency \\
& \bf Context independence & 21 & Out-of-vocabulary words \\
9 & \it Structural connective in isolation & 22 & Sensitive vocabulary\\
10 & Pronominal anaphora & 23 & Typicality \\
11 & Adverbial anaphora & 24 & Proper names \\
12 & \bf L2 complexity in CEFR level & 25 & Abbreviations\\
\bottomrule
\end{tabular}
\caption{HitEx: a sentence selection framework.}
\label{tab:criteria}
\end{table}
We implemented a \emph{hybrid system} which uses a combination of machine-learning methods for assessing L2 complexity and heuristic rules for all other criteria. The motivation behind using rules is, on the one hand, that certain linguistic elements are easily identifiable with such methods. On the other hand, a sufficient amount of training data encompassing the range of all possible exercise types would be extremely costly to create. Moreover, explicit rules make the sentence selection customizable to users' task-specific needs, which increases the applicability of HitEx to a diverse set of situations. The criterion of L2 complexity has been implemented using machine learning methods since its assessment comprises multiple linguistic dimensions and data was available for approaching this sub-problem in a data-driven fashion.
A few selection criteria in our framework are re-implementations of those described by \citeasnoun{volodina2012semi} and \citeasnoun{pilan-volodina-johansson:2014:W14-18}. Major additions to previous work include: (i) L2 complexity assessment on a 5-level scale, vs.~a previously available binary classification model by \citeasnoun{pilan-volodina-johansson:2014:W14-18}, (ii) typicality and (iii) the assessment of context dependence. Sensitive vocabulary filtering and the use of word frequencies from \textit{SVALex} \cite{FRANCOIS16.275}, a word list based on coursebook texts, are also novel aspects that we incorporated with the aim of making the sentence selection algorithm more pedagogically aware.
Our implementation relies on a number of different NLP resources. Our system searches for sentence candidates via \emph{Korp} \cite{borin2012korp}, an online infrastructure providing access to a variety of (mostly) Swedish corpora. The concordance web service of Korp provides a list of corpus examples containing a certain user-specified search term, e.g.~an uninflected word, \textit{lemma} or a grammatical structure. Through Korp, a large variety of text genres are available such as novels, blogs, news and easy-to-read texts. All corpora are annotated at different linguistic levels, which include lemmatization, part-of-speech (POS) tagging and dependency parsing. HitEx assesses sentences based on these annotations as well as information from a number of Swedish lexical-semantic resources. A major lexical resource used is SALDO \cite{borin2013saldo} which is based on lexical-semantic closeness between word senses organized in a tree structure.
As a first step in our sentence \emph{scoring algorithm}, for each candidate sentence $s \in S$, we apply a linguistic criterion $c \in C$ to $s$ either as a \emph{filter} $f \in F$ or as a \emph{ranker} $r \in R$, that is, $C = F \cup R$.
The application of each criterion $c_k$ to all the sentences, $c_k(S) = V_{c_k}$, yields a set of criterion \emph{values} ($v_{c_k} \in V_{c_k}$). $V_{c_k}=\{0,1\}$ when $c_k \in F$ and $V_{c_k} \subset \mathbb{R}$ when $c_k \in R$; for instance, when $c_k$ is the proper names criterion used as a ranker, $v_{{c_k}{s_i}}$ corresponds to the number of times a proper name appears in $s_i \in S$.
If $s_i$ contains an undesired linguistic element associated with $c_k \in F$, then $v_{{c_k}{s_i}}=1$, and $s_i$ is excluded from the ranking of suitable candidates. Further details about how we obtain $V_{c_k}$ are outlined in Sections~\ref{ssec:trg_pattern} to \ref{ssec:lex_crit}.
Some criteria encode binary characteristics (e.g.~interrogative sentence), therefore only $c_k \in F$ holds for these. We present these in italics in Table~\ref{tab:criteria}.
To rank non-filtered sentences, we compute a \emph{goodness} score $G_{s_i} \in \mathbb{N}$, which reflects the degree to which $s_i$ is a suitable candidate based on $R$.
When $c_k \in R$, $R=R^+ \cup R^-$, where $r^+ \in R^+$ is a \emph{positive ranker} for properties positively correlated with goodness, namely typicality and SVALex frequencies, and $r^- \in R^-$ is a \emph{negative ranker} that includes all other criteria.
Based on $V_{c_k}$, we compute an intermediate (per-criterion) goodness score ($subG_{{c_k}{s_i}}$) for each $s_i$ by sorting $S$ based on $V_{c_k}$ and assigning the ranking position of $s_i$ according to $V_{c_k}$ to $subG_{{c_k}{s_i}}$. Consequently, the number of subscores equals the number of criteria $c_k \in R$ selected.
During this sorting, for $s_i \in S$ and $s_j \in S$, $subG_{{c_k}{s_i}} \geq subG_{{c_k}{s_j}} \Leftrightarrow v_{{c_k}{s_i}} \geq v_{{c_k}{s_j}}$ holds for $r^+$, and $subG_{{c_k}{s_i}} \geq subG_{{c_k}{s_j}} \Leftrightarrow v_{{c_k}{s_i}} \leq v_{{c_k}{s_j}}$ applies for $r^-$. In other words, we rank $S$ based on an ascending order of goodness if $c_k \in R^+$ and a descending order of badness if $c_k \in R^-$. Therefore, more suitable candidates receive a higher $subG_{{c_k}{s_i}}$. For example, suppose $r_k^-$ is proper names and $s_i$ contains 2 proper names, while $s_j$ contains none; then $subG_{{c_k}{s_i}}=1$ and $subG_{{c_k}{s_j}}=2$. The score $G_{s_i}$ is then computed by summing all subscores, that is, $G_{s_i}=\sum_{c_k \in R} subG_{{c_k}{s_i}}$. Finally, candidate sentences are ordered in a decreasing order based on $G_{s_i}$. A weighting scheme similar to GDEX would be possible with the availability of data specific to the end use of the sentences from which to estimate these weights. At the current stage, all ranking criteria contribute equally to $G_{s_i}$.
Suboptimal sentences containing elements to filter can also be retained and ranked separately, if so wished, based on the number of filtering criteria $f \in F$ matched.
The final results include, for each $s_i$ \Pisymbol{psy}{206} $S$, its summed overall score ($G_{s_i}$), its final rank and \emph{detailed information} per selection criteria, as the screenshot presenting the system's graphical user interface in Figure~\ref{img:gui} in Section~\ref{sec:platform} shows. In the following subsections, we present each criterion in detail, grouped into categories.
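To make the scheme above concrete, the following is a minimal sketch of the filtering-and-ranking procedure (an illustration for exposition only, not the actual HitEx code; the sentence representation and criterion functions are assumptions):

```python
# Illustrative sketch of the scoring algorithm: filters exclude sentences,
# rankers assign rank-position subscores that are summed into a goodness
# score G. Sentences are assumed to be dicts of precomputed criterion data.

def rank_candidates(sentences, filters, rankers):
    """filters: functions s -> 1 (undesired element present) or 0.
    rankers: (criterion_fn, positive) pairs; criterion_fn returns a real
    value, positive=True if higher values mean a better sentence (r+)."""
    # Any matched filter excludes the sentence from the ranking.
    kept = [s for s in sentences if not any(f(s) for f in filters)]

    goodness = {id(s): 0 for s in kept}
    for criterion, positive in rankers:
        # Ascending order of goodness for r+, descending order of badness
        # for r-: more suitable candidates get a higher rank position.
        ordered = sorted(kept, key=criterion, reverse=not positive)
        for position, s in enumerate(ordered, start=1):
            goodness[id(s)] += position  # per-criterion subscore subG

    # Higher summed score G = better candidate.
    return sorted(kept, key=lambda s: goodness[id(s)], reverse=True)
```

With equal weights, as in the current implementation, each ranking criterion contributes its rank position directly to the sum.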
\subsection{Search term}
\label{ssec:trg_pattern}
A \emph{search term} corresponds to one (or more) linguistic element(s) that users would like the selected sentences to contain. It can be either a lexical element such as an inflected word or a lemma; or a grammatical pattern, e.g.~verbs in a certain tense followed by a noun. The presence of a search term is guaranteed through the mere use of the Korp concordance web service, which only returns sentences containing the searched expression. In some application scenarios, repeated matches of the search term may be considered suboptimal \cite[p.~157]{kosem2011gdex}; therefore, we include this aspect among our criteria. Similarly, there might be a preference for the \textit{position} of the search term in the sentence in some use cases such as dictionary examples \cite{kilgarriff2008gdex}.
\subsection{Well-formedness}
Good candidate sentences from corpora should be structurally and lexically well-formed \cite{kilgarriff2008gdex,husak2010automatic}. We incorporate two criteria targeting the former aspect: the presence of a dependency \emph{root}, and \emph{ellipsis}, i.e.~the lack of a subject or a finite verb (all verb forms except infinitive, supine and participle), inspired by \citeasnoun{volodina2012semi}. The completeness criterion checks the beginning and the end of a sentence for orthographic clues such as capital letters and punctuation, in a similar fashion to \citeasnoun{pilan:2016:BEA11}. A large amount of \emph{non-lemmatized tokens}, i.e.~tokens for which no matching dictionary form could be identified (in the SALDO lexicon in our case), are also preferably avoided \cite[p.~15]{husak2010automatic}. These are mostly cases of spelling or optical character recognition errors, foreign words, infrequent compounds, etc. A large portion of \emph{non-alphabetical tokens} could be e.g.~a sign of mark-up traces in web material, which has a negative influence on the L2 complexity and the usability of a sentence \cite[p.~15]{husak2010automatic}. Users can specify a constant as a threshold for these criteria to determine the allowed amount of non-lemmatized and non-alphabetical tokens in a sentence.
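The well-formedness heuristics can be sketched roughly as follows (an assumed re-implementation for illustration; the token format and tag names stand in for the Korp annotation and are not the actual HitEx code):

```python
# Tokens are assumed to be (word, pos, lemma) triples from a tagger and
# lemmatizer; an empty lemma marks a non-lemmatized token.

def is_complete(sentence_tokens):
    """Orthographic completeness: capital first letter, final punctuation."""
    first, last = sentence_tokens[0][0], sentence_tokens[-1][0]
    return first[0].isupper() and last in ".!?"

def has_ellipsis(sentence_tokens, finite_verb_tags={"VB.PRS", "VB.PRT"}):
    """Ellipsis check: no finite verb present (tag names are assumptions)."""
    return not any(pos in finite_verb_tags for _, pos, _ in sentence_tokens)

def token_ratios(sentence_tokens):
    """Shares of non-alphabetical and non-lemmatized tokens, to be compared
    against the user-set thresholds."""
    n = len(sentence_tokens)
    non_alpha = sum(1 for w, _, _ in sentence_tokens if not w.isalpha())
    non_lemma = sum(1 for _, _, lemma in sentence_tokens if not lemma)
    return non_alpha / n, non_lemma / n
```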
\subsection{Context independence}
\label{ssec:context_indep_crit}
Since sentences originally form part of coherent texts, a crucial aspect to take into consideration during selection is whether sentences would be meaningful also as a stand-alone unit without their original, larger context. The presence of linguistic elements responsible for connecting sentences at a syntactic or semantic level is therefore suboptimal \cite{kilgarriff2008gdex}. We incorporate a number of criteria for capturing this aspect which we described also in \citeasnoun{pilan:2016:BEA11}.
Syntactic aspects include \emph{structural connectives}, i.e.~conjunctions and subjunctions \cite{webber2003anaphora}. Two concepts connected by structural connectives may appear in separate sentences, which gives rise to context dependence. Our system considers sentences with connectives in sentence-initial position context dependent unless the sentence consists of more than one clause. Connectives that are paired conjunctions are also allowed (e.g.~\textit{antingen ... eller} `either ... or').
\emph{Anaphoric expressions} referring to previously mentioned information are aspects related to the semantic dimension. Our pronominal anaphora criterion targets mentions of the third person singular pronouns \textit{den} ‘it’ (common gender) and \textit{det} ‘it’ (neuter gender) as well as the demonstrative pronouns (e.g.~\textit{denna} ‘this’, \textit{sådan} ‘such’ etc.). The non-anaphoric use of \textit{det} (e.g.~in clefts: \textit{It is carrots that they eat.}), however, is not counted here. Such cases can be distinguished based on the output of the dependency parser: these occurrences of \textit{det} are tagged as expletive (pleonastic). Pronouns followed by a relative clause introduced by \textit{som} `which' are also considered non-anaphoric.
Under \textit{adverbial anaphora}, we count time and location adverbs that behave anaphorically (e.g.~\textit{då} ‘then’) \cite{webber2003anaphora}. Another group of adverbs relevant for anaphora are those expressing logical relations (e.g.~\textit{istället} ‘instead’), which are also referred to as \textit{discourse connectives} \cite{webber2003anaphora}. Based on
\citeasnoun{teleman1999svenska}, a list of anaphoric adverbs has been collected and sentences are checked for the occurrence of any of the listed items.
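The context-independence checks above amount to simple lookups over the token sequence. A minimal sketch (the word lists below are abbreviated illustrations, not the full lists collected from \citeasnoun{teleman1999svenska}):

```python
# Abbreviated example lists; the real system uses full inventories of
# Swedish conjunctions/subjunctions and anaphoric adverbs.
STRUCTURAL_CONNECTIVES = {"och", "men", "eller", "för", "så"}
ANAPHORIC_ADVERBS = {"då", "sedan", "istället", "därför", "ändå"}
PAIRED_FIRST = {"antingen", "varken", "både"}  # e.g. antingen ... eller

def initial_connective(tokens, n_clauses=1):
    """Flag a sentence-initial conjunction/subjunction, unless the sentence
    has more than one clause or opens a paired conjunction."""
    first = tokens[0].lower()
    if first in PAIRED_FIRST or n_clauses > 1:
        return False
    return first in STRUCTURAL_CONNECTIVES

def adverbial_anaphora(tokens):
    """Count anaphoric time/location/discourse adverbs in the sentence."""
    return sum(1 for t in tokens if t.lower() in ANAPHORIC_ADVERBS)
```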
\subsection{L2 complexity}
\label{ssec:l2_complexity_crit}
The aspect of L2 complexity has been assessed with the help of a supervised machine learning algorithm based on a number of different linguistic dimensions. We used the CEFR level classifier for sentences that we previously described in \citeasnoun{pilan2015readable}. The source of the training data was single sentences from COCTAILL \cite{volodina22you}, a corpus of coursebook texts for L2 Swedish. Such single sentences occurred either in the form of lists or so-called \emph{language examples}, sentences exemplifying a lexical or a grammatical pattern. The feature set used for assessing L2 complexity is presented in Table~\ref{table:feature_set}. This set consists of five subgroups of features: count-based, lexical, morphological, syntactic, and semantic features.
\begin{table}[h]
\begin{center}
\begin{tabular}{ll|ll}
\toprule
\bf Name & \bf Type & \bf Name & \bf Type\\
\midrule
Sentence length & \textsc{Count} & Modal V to V & \textsc{Morph}\\
Avg token length & \textsc{Count} & Particle IS & \textsc{Morph}\\
Extra-long token & \textsc{Count} & 3SG pronoun IS & \textsc{Morph}\\
Nr characters & \textsc{Count} & Punctuation IS & \textsc{Morph}\\
LIX & \textsc{Count} & Subjunction IS & \textsc{Morph}\\
Bilog TTR & \textsc{Count} & PR to N & \textsc{Morph}\\
Square root TTR & \textsc{Count} & PR to PP & \textsc{Morph}\\
\cline{1-2}
Avg \textsc{KELLY} log freq & \textsc{Lexical} & S-V IS & \textsc{Morph}\\
A1 lemma IS & \textsc{Lexical} & S-V to V & \textsc{Morph}\\
A2 lemma IS & \textsc{Lexical} & ADJ IS & \textsc{Morph}\\
B1 lemma IS & \textsc{Lexical} & ADJ variation & \textsc{Morph}\\
B2 lemma IS & \textsc{Lexical} & ADV IS & \textsc{Morph}\\
C1 lemma IS & \textsc{Lexical} & ADV variation & \textsc{Morph}\\
C2 lemma IS & \textsc{Lexical} & N IS & \textsc{Morph}\\
Difficult W IS & \textsc{Lexical} & N variation & \textsc{Morph}\\
Difficult N\&V IS & \textsc{Lexical} & V IS & \textsc{Morph}\\
OOV IS & \textsc{Lexical} & V variation & \textsc{Morph}\\
No lemma IS & \textsc{Lexical} & Function W IS & \textsc{Morph}\\
\cline{1-2}
Avg. DepArc length & \textsc{Syntactic} & Neuter N IS & \textsc{Morph} \\
DepArc Len $>$ 5 & \textsc{Syntactic} & CJ + SJ IS & \textsc{Morph}\\
Max length DepArc & \textsc{Syntactic} & Past PC to V & \textsc{Morph}\\
Right DepArc Ratio & \textsc{Syntactic} & Present PC to V & \textsc{Morph} \\
Left DepArc Ratio & \textsc{Syntactic} & Past V to V & \textsc{Morph} \\
Modifier variation & \textsc{Syntactic} & Supine V to V & \textsc{Morph}\\
Pre-modifier IS & \textsc{Syntactic} & Present V to V & \textsc{Morph}\\
Post-modifier IS & \textsc{Syntactic} & Nominal ratio & \textsc{Morph}\\
Subordinate IS & \textsc{Syntactic} & N to V & \textsc{Morph}\\
Relative clause IS & \textsc{Syntactic} & Lex T to non-lex T & \textsc{Morph} \\
PP complement IS & \textsc{Syntactic} & Lex T to Nr T & \textsc{Morph}\\
\cline{1-2}
Avg senses per token & \textsc{Semantic} & Relative structure IS & \textsc{Morph} \\
N senses per N & \textsc{Semantic} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\label{table:feature_set} The feature set for L2 complexity assessment.}
\end{table}
\textit{Count features} are based on the number of characters and tokens (\textit{T}), extra-long words being tokens longer than 13 characters. LIX, a traditional Swedish readability formula (see Section~\ref{sec:background}), is the sum of the average number of words per sentence and the percentage of tokens longer than six characters \cite{bjornsson1968lasbarhet}. A bi-logarithmic and a square root type-token ratio (TTR), both related to vocabulary richness \cite{heimann2013see}, are also computed.
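As a worked example, the count-based features can be computed as follows for a single sentence, where the "average number of words per sentence" reduces to the token count (an illustrative sketch, not the actual feature-extraction code):

```python
import math

def lix(tokens):
    """LIX for one sentence: token count + percentage of long tokens."""
    long_words = sum(1 for t in tokens if len(t) > 6)
    return len(tokens) + 100.0 * long_words / len(tokens)

def bilog_ttr(tokens):
    """Bi-logarithmic type-token ratio: log(types) / log(tokens)."""
    return math.log(len(set(tokens))) / math.log(len(tokens))

def root_ttr(tokens):
    """Square root type-token ratio: types / sqrt(tokens)."""
    return len(set(tokens)) / math.sqrt(len(tokens))
```

For the six-token sentence \textit{Detta är en ovanligt komplicerad mening}, two tokens exceed six characters, so LIX is $6 + 100 \cdot 2/6 \approx 39.3$.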
\textit{Lexical features} incorporate information from the KELLY list \cite{volodina2012introducing}, a word list with frequencies calculated from a corpus of web texts (thus completely independent of the sentences in the dataset). KELLY provides a suggested CEFR level per each listed lemma based on frequency bands.
For some features, \textit{incidence scores} (IS), i.e.~values normalized per 1,000 tokens, are computed, which reduces the influence of sentence length. Word forms or lemmas themselves are not used as features; the IS of their corresponding CEFR level is considered instead.
\textit{Difficult} tokens are those that belong to levels above the overall CEFR level of the text. Moreover, we consider the IS of tokens not present in KELLY (\textit{OOV IS}), the IS of tokens for which the lemmatizer could not identify a corresponding lemma (\textit{No lemma IS}), as well as average KELLY log frequencies.
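The incidence-score normalization and the difficult-token feature can be illustrated as follows (a sketch under stated assumptions: the small dictionary stands in for the KELLY list, and the function names are hypothetical):

```python
# Incidence score: a raw count normalized per 1,000 tokens.
def incidence_score(count, n_tokens):
    return 1000.0 * count / n_tokens

def difficult_word_is(lemmas, kelly_levels, text_level="B1",
                      scale=("A1", "A2", "B1", "B2", "C1", "C2")):
    """IS of lemmas whose listed CEFR level is above the overall text level.
    kelly_levels maps lemma -> CEFR level (stand-in for the KELLY list)."""
    limit = scale.index(text_level)
    difficult = sum(1 for l in lemmas
                    if l in kelly_levels
                    and scale.index(kelly_levels[l]) > limit)
    return incidence_score(difficult, len(lemmas))
```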
\textit{Morphological features} include both IS and variational scores, i.e.~the ratio of a category to the ratio of lexical tokens: nouns (N), verbs (V), adjectives (ADJ) and adverbs (ADV).
The IS of all lexical categories as well as the IS of punctuation, particles, sub- and conjunctions (SJ, CJ) are taken into consideration. In Swedish, a special group of verbs ending in -s are called \emph{s-verbs} (\textit{S-VB}). These can indicate either a reciprocal verb, a passive construction or a deponent verb (active in meaning but passive in form). Due to their morphological and semantic peculiarity, they are explicitly targeted in L2 grammar books \cite{formifocus}.
Nominal ratio \cite{hultman1977gymnasistsvenska} is another readability formula proposed for Swedish that corresponds to the ratio of nominal categories, i.e.~nouns, prepositions (PP) and participles to the ratio of verbal categories, namely pronouns (PR), adverbs, and verbs. Relative structures consist of relative adverbs, determiners, pronouns and possessives.
\textit{Syntactic features} are based, among others, on the length (depth) and the direction of dependency arcs (\textit{DepArc}). These aspects are related to readers' working memory load when processing sentences \cite{gibson1998linguistic}.
For similar reasons, we consider also relative clauses as well as pre- and post-modifiers, which include, for example, adjectives and prepositional phrases respectively.
\textit{Semantic features} draw on information from the SALDO lexicon. We use the average number of senses per token and the average number of noun senses per noun. Polysemous words can be demanding for readers as they need to be disambiguated for a full understanding of the sentence \cite{graesser2011coh}.
\citeasnoun{pilan2015readable}, utilizing the feature set described above, report 63.4\% accuracy using a logistic regression classifier for the identification of CEFR levels with an exact match, and 92\% accuracy for classifications within a distance of one CEFR level.
Besides the features outlined above, the lack of culture-specific knowledge can also be a factor influencing L2 complexity, as can learners' knowledge of other languages. We do not, however, address these dimensions at the current stage due to a lack of relevant data.
\subsection{Additional structural criteria}
Besides the aspects mentioned above, a number of additional structural criteria are available which proved to be relevant either based on previous evaluations \cite{volodina2012semi,Pilan-Ildiko2013-9} or evidence from coursebooks \cite{volodina22you}. One such aspect is \emph{negative wording}, which is preferably avoided in exercise items \cite{frey2005item}. All tokens with the dependency tag of negation adverbials fall under this criterion. Under the \emph{interrogative sentence} criterion, we handle direct questions ending with a question mark. To detect \emph{direct speech}, we have compiled a list of verbs denoting the act of speaking based on the Swedish FrameNet \cite{heppin2012rocky}. The list contains 107 verbs belonging to frames relevant to speaking (e.g.~\textit{viska} `whisper' from the \textit{Communication manner} frame). This is combined with POS tag patterns composed of a minor delimiter (e.g.~dash, comma) or a pairwise delimiter (e.g.~quotation marks), followed by a speaking verb (optionally combined with auxiliary verbs), followed by a pronoun or a proper name. Both questions and sentences containing direct speech tend to be less common as exercise items; incorporating these among our criteria allows users to avoid such sentences if they so wish.
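The direct-speech pattern described above can be approximated by a scan over (lemma, POS) pairs. The snippet below is only a schematic illustration: the tag names and the tiny verb set are placeholders, not the actual tagset or the FrameNet-derived list of 107 speaking verbs:

```python
SPEAKING_VERBS = {"viska", "ropa", "svara"}   # illustrative subset only
DELIMITERS = {"MINOR", "PAIRWISE"}            # comma/dash vs. quotation marks

def looks_like_direct_speech(tokens):
    """tokens: list of (lemma, pos) pairs. Matches: a delimiter, then a speaking
    verb (optionally preceded by auxiliaries), then a pronoun or proper name."""
    for i, (_, pos) in enumerate(tokens):
        if pos not in DELIMITERS:
            continue
        j = i + 1
        while j < len(tokens) and tokens[j][1] == "AUX":   # optional auxiliaries
            j += 1
        if (j + 1 < len(tokens)
                and tokens[j][0] in SPEAKING_VERBS and tokens[j][1] == "VB"
                and tokens[j + 1][1] in {"PN", "PM"}):     # pronoun / proper name
            return True
    return False
```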
\emph{Answers to polar (or close-ended) questions} are rarely employed as exercise items and they were also negatively perceived in previous evaluations \cite{volodina2012semi,Pilan-Ildiko2013-9}. This aspect is also relevant to the dependence of a sentence on a wider context. The algorithm tries to capture sentences of this type based on POS patterns: sentence-initial adverbs and interjections (e.g.~\textit{ja} `yes', \textit{nej} `no') preceded and followed by minor delimiters where the initial delimiter is optional for interjections. \emph{Modal verbs} were identified based on a small set of verbs used typically (but not exclusively) as modal verbs (e.g.~\textit{kan} `can', `know') where the dependency relation tag indicating a verb group excludes the non-auxiliary use. \emph{Sentence length}, a criterion which is also part of GDEX, is measured as the number of tokens including punctuation in our system.
\subsection{Additional lexical criteria}
\label{ssec:lex_crit}
HitEx also includes options for filtering and ranking sentences based on information from lexical resources, ensuring explicit control over this crucial aspect \cite{segler2007investigating}. Sentences containing \textit{difficult words}, i.e.~words whose CEFR level is above the target CEFR level according to the KELLY list, can be penalized or filtered out. Besides KELLY, we also integrated into our system information from the SVALex list, based on word frequencies from coursebook texts. The presence of words absent from SVALex, or of words below the average frequency threshold for the target CEFR level, thus constitutes two additional scoring criteria. Another criterion involves the presence of \textit{proper names} which, although undesirable in dictionary examples \cite{kilgarriff2008gdex}, may be familiar and easy to understand for L2 students \cite{segler2007investigating}. Both proper names and \textit{abbreviations} were counted based on the POS tagger output.
In a pedagogical setting, certain \textit{sensitive vocabulary items} and topics tend to be avoided by coursebook writers. These are also referred to as PARSNIPs, which stands for Politics, Alcohol, Religion, Sex, Narcotics, Isms\footnote{An ideology or practice, typically ending with the suffix \textit{-ism}, e.g.~\textit{anarchism}.} and Pork \cite{gray2010construction}. Some topics are perceived as taboos cross-culturally, such as swear words, while others may be more culture-bound.
We compiled a word list starting with an initial group of seed words from more generally undesirable domains (e.g.~swear words) collected from different lexical and collaborative online resources (e.g.~Wikipedia\footnote{\url{https://www.wikipedia.org}.}) complemented with a few manually added entries. Furthermore, we expanded this automatically with all child node senses from SALDO for terms which represent sensitive topics (e.g.~\textit{narkotika} `narcotics', \textit{svordom} `profanities', \textit{mörda} `murder', etc.) so that synonyms and hyperonyms would also be included. A few common English swear words that are frequently used in Swedish in an informal context (e.g.~blog texts) were also incorporated. The current list of 263 items is not exhaustive and can be expanded in the future. The implementation allows teaching professionals to make the pedagogical decision of tailoring the subset of topics to use to a specific group of learners during the sentence selection.
\textit{Typicality} can be an indication of more or less easily recognizable meaning of concepts without a larger context \cite{barsalou1982context}. We assessed the typicality of sentences with the help of a co-occurrence measure: \emph{Lexicographers' Mutual Information} (LMI) score \cite{kilgarriff2004itri}. LMI measures the probability of two words co-occurring together in a corpus and it offers the advantage of balancing out the preference of the Mutual Information score for low-frequency words \cite{bordag2008comparison}. We used a web service offered by Korp for computing LMI scores based on Swedish corpora. As a first step in computing the LMI scores, we collected nouns and verbs in the KELLY and SVALex lists (removing duplicates), which resulted in a list of 12,484~items. Then using these, we estimated LMI scores for all noun-verb combinations (nouns being subjects or objects) as well as LMI for nouns and their attributes using Korp. Counts were based on 8 corpora of different genres amounting to a total of 209,110,000 tokens. The resulting list of commonly co-occurring word pairs consisted of 99,112 entries. Only pairs with a threshold of LMI $\geq$ 50 were included. The typicality value of a candidate sentence corresponded to the sum of all LMI scores available in the compiled list for each noun and verb in the sentence.
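For reference, the LMI score is commonly defined as pointwise mutual information weighted by the observed co-occurrence count, which counters the preference of plain MI for rare words. Below is a sketch of the score and of the summation used for sentence typicality (our own naming, assuming raw corpus counts are available):

```python
import math

def lmi(pair_count, count_w1, count_w2, corpus_size):
    """Lexicographers' Mutual Information:
    f(w1,w2) * log2(f(w1,w2) * N / (f(w1) * f(w2)))."""
    if min(pair_count, count_w1, count_w2, corpus_size) <= 0:
        return 0.0
    return pair_count * math.log2(pair_count * corpus_size / (count_w1 * count_w2))

def sentence_typicality(pairs, lmi_table, threshold=50.0):
    """Sum the precomputed LMI scores (only those >= threshold) of the
    noun-verb and noun-attribute pairs occurring in a sentence."""
    return sum(score for p in pairs
               if (score := lmi_table.get(p, 0.0)) >= threshold)
```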
\subsection{Integration into an online platform}
\label{sec:platform}
To provide access to our sentence selection algorithm to others, we have integrated it into a freely available learning platform, Lärka.\footnote{\url{https://spraakbanken.gu.se/larkalabb/hitex}.} With the help of a graphical interface, shown in Figure~\ref{img:gui}, users can perform a sentence selection customized to their needs. Under the advanced options menu, users can choose which selection criteria presented in Table~\ref{tab:criteria} to activate as filters or rankers.
Moreover, the selection algorithm will serve as a seed sentence provider for automatically generated multiple-choice exercises for language learners within the same platform. The sentence selection algorithm is also available as a web service that others can easily integrate in their own systems.
\begin{figure}
\centering
\fbox{\includegraphics[width=0.90\textwidth]{gui_with_results}}
\caption{The HitEx user interface with {\normalfont fisk} `fish' as search term. }
\label{img:gui}
\end{figure}
\section{A user-based evaluation}
\label{sec:evaluation}
The main objective when developing our candidate selection algorithm was to identify seed sentences for L2 learning exercises. In the absence of an annotated dataset for this task in Swedish, we tested the performance of HitEx with the help of a user-based evaluation.
We assessed the goodness of the candidate sentences in two ways: (i) through L2 teachers confirming their suitability, (ii) by inspecting whether L2 learners' degree of success in solving exercise items constructed based on these candidates matched what is typically expected in L2 teaching. This provided us with information about the extent to which the set of criteria proposed in Section \ref{sec:implementation} was useful for identifying suitable seed sentences. The evaluation sentences and the associated results will be available as a dataset on \url{https://spraakbanken.gu.se/eng/resources}.
\subsection{Participants}
The participants consisted of 5 teachers of L2 Swedish from different institutions and 19 students from a language school targeting young adults newly arrived in Sweden. Participating students were between ages 16 and 19, with a variety of native languages including several Somali and Dari speakers. The proportion of female and male students was approximately equal. The CEFR level of the students is assessed on a regular basis, at two-month intervals, in their school. In our evaluation, as a point of reference for students' CEFR level, we referred to the levels achieved on their latest assessment test. According to this, 3 students were at A1 level, and the remaining 16 were a 50--50\% split between A2 and B1 levels.
\subsection{Material and task}
\label{sssec:material}
To create the evaluation material, we retrieved a set of sentences from Korp for CEFR levels A1--C1 using HitEx. To perform the Korp concordance search, we used lemmas from SVALex whose level corresponded to the level of the sentences we aimed at identifying. We used a lemma-based search and the parts of speech included nouns, verbs and adjectives. The sentences have been selected from 10 different corpora including novels and news texts. For each search lemma, a maximum of 300~matching Korp sentences were fed to the sentence selection algorithm, out of which only the top ranked candidate for each lemma was included in the evaluation material. Most selection criteria were used as filters, but typicality, proper names, KELLY and SVALex frequencies were used as rankers. Modal verbs were allowed in the sentences and the position of the search term was not restricted. Sentence length was set to a minimum of 6 and a maximum of 20 tokens. The threshold used for the percentage of non-alphabetic and non-lemmatized tokens was 30\%.
Teachers received 330 sentences to evaluate, evenly distributed across the 5~CEFR levels A1--C1. The sentences were divided into two subgroups based on their level, with at least two teachers rating each sentence. One set consisted of A1--B1 level sentences and the other of sentences within levels B1--C1. (B1 level sentences were evenly split between the two subsets.) There was a \emph{common subset} of 30 sentences from all 5~CEFR levels which was rated by all 5 teachers.
Besides an overall score per sentence reflecting the performance of the combination of all criteria from Table~\ref{tab:criteria}, we elicited teacher judgements targeting two criteria in particular, which were focal points during the implementation of HitEx, namely context independence and L2 complexity (see Sections~\ref{ssec:context_indep_crit} and \ref{ssec:l2_complexity_crit} respectively).
No specific exercise type needed to be considered for evaluating these aspects, but rather a more application-neutral scenario of a learner reading the sentence.
Teachers rated the three dimensions on a 4-point scale as defined in Table~\ref{table:eval_scale}. Besides these aspects, teachers were also required to suggest an alternative CEFR level if they did not agree with the one predicted by the system.
\begin{table}
\centering
\begin{tabular}{cl}
\toprule
\multicolumn{2}{l}{\it \bf The sentence...}\\
\midrule
1 & \it ... doesn't satisfy the criterion. \\
2 & \it ... satisfies the criterion to a smaller extent.\\
3 & \it ... satisfies the criterion to a larger extent.\\
4 & \it ... satisfies the criterion entirely.\\
\bottomrule
\end{tabular}
\caption{\label{table:eval_scale} Evaluation scale. }
\end{table}
To investigate further whether our selection criteria with the chosen setting produced good seed sentence candidates at the CEFR levels predicted by our L2 complexity criteria, we observed
L2 learners' performance on exercise items created out of these sentences. Exercise generation requires a number of additional steps after the selection of seed sentences, many of which are open research problems. Therefore, we opted for a semi-automatic approach to the generation of these exercises. We manually controlled the combination of sentences into exercises and the selection of a \textit{distractor}, an incorrect answer option which did not fit into any sentence, in order to reduce potential ambiguity in answer options. A subset of the sentences given to teachers were used as exercise items so that teachers' ratings and students' answers could be correlated.
The exercise type chosen was \emph{word bank}, a type of matching exercise, since this posed fewer challenges when selecting distractors compared to multiple-choice items. Word bank exercises consist of a list of words followed by a list of sentences, each containing a gap. Learners' task is to identify which word is missing from which sentence. We created worksheets consisting of word bank exercises in Google Forms\footnote{\url{https://docs.google.com/forms/}.}.
To lower the probability of answering correctly by chance, we added a distractor. Students had to provide their answers in a grid following the list of candidate words and the gapped sentences. The missing word to identify (and its position) corresponded to the search term used to retrieve the sentence from Korp.
Worksheets consisted of 9~exercises with 5~sentences each, amounting to a total of 45~sentences. (The only exception was A1 level, where students had 2~exercises less.) Students had 60~minutes to work with the exercises, including 5 minutes of instructions. Students worked individually in a computer lab, access to external resources was not allowed.
The difficulty of the exercises varied along two dimensions: in terms of their CEFR level and in terms of the similarity of the morpho-syntactic form of the candidate words included in the word bank. A worksheet for a certain level contained 5~exercises from the same level as well as 2~exercises from one level below and one level above. In 5~exercises, the word bank consisted of lexical items with the same morpho-syntactic form (e.g.~only plural neuter gender nouns), while 4~exercises had a word bank with mixed POS. The latter group of exercises was somewhat easier, since, besides lexical-semantic knowledge, students could identify the correct solution also based on grammatical clues such as inflectional endings.
\subsection{Results and discussion}
Below, we present teachers' and students' results on the evaluation material.
\subsubsection{Teachers}
To understand to what extent our set of criteria was able to select suitable seed sentences overall, as well as specifically in terms of L2 complexity and context independence, we computed average values and standard deviation (\textsc{StDev}) over L2 teachers' ratings. (8 sentences at levels A1--B1 had to be excluded due to missing values.) The results are presented in Table~\ref{table:teacher_crit_res}.
\begin{table}
\centering
\begin{tabular}{cccc}
\toprule
\bf Criterion & \bf \# of raters & \bf Average & \bf \textsc{StDev} \\
\midrule
L2 complexity & 5 & 3.18 & 0.53\\
Context independence & 5 & 3.05 & 0.56\\
Overall suitable (all criteria) & 4 & 3.23 & 0.73\\
\bottomrule
\end{tabular}
\caption{\label{table:teacher_crit_res} Average teacher-assigned rating per criteria.}
\end{table}
As for the criterion of context independence, 80\% of the sentences were found suitable (received an average score higher than 2.5), and 61\% of the sentences received score 3 or 4 from at least half of the evaluators.
Besides rating the three dimensions in Table~\ref{table:teacher_crit_res}, teachers also provided an alternative CEFR level in case they did not agree with the CEFR level suggested by the system. HitEx correctly assessed L2 complexity for 64\% of sentences based on teachers' averaged CEFR label, and in 80\% of the cases the system's CEFR level coincided with at least one teacher's decision. Besides comparing system-assigned and teacher-assigned levels, we also measured the inter-rater agreement (IAA) among the teachers. We used \emph{Krippendorff's $\alpha$}, which measures observed versus expected disagreement, since it is suitable for multiple raters. An $\alpha$ = 1 corresponds to complete agreement, while $\alpha$ = 0 is equivalent to chance agreement. The inter-rater agreement results among teachers are presented in Table~\ref{table:iaa_cefr}. The extent of agreement among teachers was considerably higher than chance agreement, but it still remained below what is commonly considered a reliability threshold in annotation tasks, namely $\alpha$ = 0.8. CEFR level assignment for sentences thus seems to be a hard task even for teaching professionals.
\begin{table}
\centering
\begin{tabular}{ccccc}
\toprule
\bf SentID & \bf \# sents & \bf \# raters & \bf CEFR & \bf IAA \\
\midrule
1-38 & 38 & 5 & A1-C1 & 0.65 \\
39-188 & 142 & 2 & A1-B1 & 0.68 \\
189-338 & 150 & 3 & B1-C1 & 0.53 \\
\midrule
Tot/Avg & 330 & 5 & A1-C1 & 0.62 \\
\bottomrule
\end{tabular}
\caption{\label{table:iaa_cefr} Inter-rater agreement for CEFR level assignment.}
\end{table}
Besides inter-rater agreement in terms of $\alpha$, we also considered the distance between the CEFR levels assigned by all teachers, compared both to each other and to HitEx (Table \ref{table:teacher_lbl_dist}). This provides information about the degree to which teachers accepted our system's assessment of L2 complexity. CEFR levels were mapped to integer values for this purpose, and \textit{averaged pairwise distances} between the levels were computed in all cases.
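Concretely, the averaged pairwise distance reduces to the mean absolute difference over all label pairs once CEFR levels are mapped to integers (a minimal sketch with our own naming):

```python
from itertools import combinations

LEVEL_TO_INT = {"A1": 1, "A2": 2, "B1": 3, "B2": 4, "C1": 5}

def avg_pairwise_distance(labels):
    """Average absolute distance between all pairs of CEFR labels for a sentence."""
    values = [LEVEL_TO_INT[lbl] for lbl in labels]
    pairs = list(combinations(values, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs) if pairs else 0.0
```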
Surprisingly, teachers agreed with each other exactly on the CEFR level of sentences in only half of the cases, which shows that exact CEFR level assignment on a 5-point scale is rather difficult even for humans. The percentage of sentences on which teachers agreed completely with HitEx (a distance of 0) was slightly (4\%) higher than the extent to which teachers agreed with each other. This may be due to the fact that teachers were confirming the system-assigned CEFR levels, but did not have information about each other's answers. Teacher-assigned CEFR levels remained within 1 level of difference when compared to each other in almost all cases, and compared to the system for 92\% of the sentences.
All in all, the automatic CEFR levels predicted by HitEx were accepted by teachers in the majority of cases within 1 level distance.
\begin{table}
\centering
\begin{tabular}{ccc}
\toprule
\bf Level & \bf Teacher - & \bf Teacher - \\
\bf Distance & \bf Teacher & \bf System I \\
\midrule
0 & 50.0 & \bf 53.9 \\
1 & \bf 49.4 & 37.9 \\
2 & 0.6 & 6.7 \\
$\geq$ 3 & 0.0 & 1.5 \\
\bottomrule
\end{tabular}
\caption{\label{table:teacher_lbl_dist} Percentage of sentences per assigned CEFR label distance.}
\end{table}
Finally, we computed the \textit{Spearman correlation coefficient} between teachers' scores of overall suitability and the two target criteria, L2 complexity and context independence, to gain insight into how strongly associated these two aspects were with seed sentence quality according to our evaluation data. The correlation over all sentences was $\rho=0.34$ for L2 complexity and $\rho=0.53$ for context independence. The maximum possible value is $\rho=1$ for a positive correlation and $\rho=-1$ for a negative one. Both criteria were thus positively associated with overall suitability: the more understandable and context-independent a sentence was, the more suitable our evaluators found it overall. Out of the two criteria, context independence showed a somewhat stronger correlation.
\subsubsection{Students}
First, based on students' responses, we computed \textit{item difficulty} for each exercise item, which corresponds to the percentage of students correctly answering an item, a higher value thus indicating an easier item \cite{crocker1986introduction}. The average item difficulty over all exercises was 0.62, corresponding to 62\% of students correctly answering items on average. Table~\ref{table:res_exe_type} shows additional average item difficulty scores divided per CEFR level, exercise type (distractors with same or different morpho-syntactic form) and POS. Values were averaged only over the exercise items that were of the same CEFR level as the answering students' level according to the system.
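Item difficulty as used here is simply the share of correct answers per item; for completeness (the naming is ours):

```python
def item_difficulty(answers):
    """Proportion of students answering an item correctly (higher = easier item).
    answers: iterable of booleans, one per student."""
    answers = list(answers)
    return sum(answers) / len(answers) if answers else 0.0
```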
\begin{table*}
\centering
\begin{tabular}{ccccccccc}
\toprule
\textbf{Ex. type} &
\multicolumn{3}{c}{\bf \textsc{Same POS+Infl}} &
\textbf{Avg} &
\multicolumn{3}{c}{\bf \textsc{Mixed POS+Infl}} &
\textbf{Avg} \\
\midrule
\textbf{CEFR} & \textbf{A1} & \textbf{A2} & \textbf{B1} & & \textbf{A1} & \textbf{A2} & \textbf{B1} & \\
\midrule
Noun & 0.67 & 0.83 & 0.73 & \bf 0.74 & 0.67 & 0.52 & 0.65 & \bf 0.61\\
Verb & 0.50 & 0.69 & 0.69 & \bf 0.63 & 0.0 & 0.62 & 0.77 & \bf 0.46 \\
Adjective & - &- & - & - & 0.58 & 0.56 & 0.62 & \bf 0.59\\
\midrule
\bf Avg & 0.59 & 0.76 & 0.71 & \bf 0.69 & 0.42 & 0.57 & 0.68 & \bf 0.55 \\
\midrule
\textbf{Overall} & \multicolumn{8}{c}{\bf 0.62} \\
\bottomrule
\end{tabular}
\caption{\label{table:res_exe_type} Average item difficulty per exercise item category, POS and CEFR level.}
\end{table*}
To be able to measure whether the item difficulty observed in our students' results matched the values one would typically expect in L2 teaching, we calculated the \textit{ideal item difficulty} (IID) score for our exercises, which takes into consideration correct answers given by chance. We used the formula proposed by \citeasnoun{thompson1985using}, presented in Eq.~(\ref{formula:iid}), where $P_C$ is the probability of a correct answer by chance.
\begin{equation}
\label{formula:iid}
IID = P_C + \frac{1-P_C}{2}
\end{equation}
Our exercises consisted of 5 gapped items and 6 answer options in the word bank. Students thus had a chance of 1/6 of filling in the first item correctly, 1/5 for the second item, etc., which corresponds to an average $P_C$ of $(0.167+0.2+0.25+0.333+0.5)/5=0.29$ for the whole exercise and, consequently, an IID score of 0.645, that is, 64.5\% of students answering correctly. The observed item difficulty averaged over all students and exercise items of our evaluation was 62\%, which is only slightly lower than the ideal item difficulty. If we break down this average by students' CEFR levels, we can notice that for A1 students the exercises were considerably more challenging than they should have been according to the ideal threshold: only 51\% of them responded correctly to A1-level exercise items. Our sample size at this level was particularly small, however, thus further evaluations with additional students would be required to confirm this tendency. A2 and B1 level students produced considerably better results: averaging over exercise types and POS, 66.5\% and 69.5\% of them, respectively, answered the items of their levels correctly. This indicates that the set of criteria proposed in Section \ref{sec:implementation} can successfully select seed sentences for L2 exercises for students at A2 and B1 levels.
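The chance probability and the IID value reported above can be reproduced directly:

```python
def avg_chance_probability(n_items, n_options):
    """Average chance of a correct guess when each used word-bank option is
    removed: 1/6, 1/5, ..., 1/2 for 5 items with 6 options."""
    return sum(1.0 / (n_options - i) for i in range(n_items)) / n_items

def ideal_item_difficulty(p_c):
    """Thompson (1985): IID = P_C + (1 - P_C) / 2."""
    return p_c + (1.0 - p_c) / 2.0

p_c = avg_chance_probability(5, 6)   # 0.29
iid = ideal_item_difficulty(p_c)     # 0.645
```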
Contrary to what one might expect, exercise items with distractors bearing different morpho-syntactic forms actually proved to be harder for students compared to items with the same POS and inflection based on our evaluation data. The latter would be inherently harder, since only lexico-semantic information can contribute to solving the exercises, without the help of grammatical clues. As the item difficulty values in Table~\ref{table:res_exe_type} show, approximately 14\% more students answered correctly exercise items with distractors with the same morpho-syntactic tags, an outcome which, however, may also depend on the inherent difficulty of the sentences presented. As mentioned in Section~\ref{sssec:material}, the worksheets also included exercises constructed with sentences belonging to one CEFR level higher and lower than students' level. This allowed us to further assess whether the CEFR levels suggested based on the L2 complexity criterion were appropriate. We display in Figure~\ref{img:stud_perf_S_vs_T_EXT} students' performance based on the CEFR level of exercise items, comparing the system-assigned and the teacher-suggested CEFR levels for the items.
\begin{figure}
\centering
\includegraphics[trim = 7mm 0mm 0mm 0mm, clip, width=0.90\textwidth]{stud_perf_S_vs_T_EXT}
\caption{Correct answers per averaged teacher and system CEFR level.}
\label{img:stud_perf_S_vs_T_EXT}
\end{figure}
As Figure~\ref{img:stud_perf_S_vs_T_EXT} shows, at A1 level students answered correctly a larger proportion of items when grouped by the CEFR level determined by teachers (63\%, vs.~48\% with the system-assigned CEFR level). The percentage of correct answers at A2 and B1 levels, however, showed more consistency with the levels assigned by our L2 complexity criterion: 64\% (A2) and 69\% (B1) correct answers based on our system's CEFR levels, vs.~60\% (A2) and 56\% (B1) with teacher-assigned levels. When considering these scores, however, it is worth noting that both teachers and the system were assessing only seed sentence difficulty, not the overall difficulty of the exercises. A few additional aspects play a role in determining the difficulty of exercise items, e.g.~the selected distractors \cite{beinborn2014predicting}; nevertheless, the observed tendencies in error rates provide useful insights into the suitability of seed sentences in terms of L2 complexity.
\section{Conclusion}
\label{sec:conclusion}
We presented a comprehensive framework and its implementation for selecting sentences useful in the L2 learning context. The framework, among others, includes the assessment of L2 complexity in sentences and their independence of the surrounding context, both of which are relevant for a wide range of application scenarios. To our knowledge, this is the first comprehensive study addressing automatic seed sentence selection for L2 learning exercises. We invested considerable effort into creating a system that would yield pedagogically more relevant results.
We conducted an empirical evaluation with L2 teachers and learners to gain insights into how successfully the proposed framework can be used for the identification of seed sentences for L2 exercises. Although the sample size was somewhat limited, the evaluation yielded very promising results. On average, the selected sentences lived up to teachers' expectations on L2 complexity, context independence and overall suitability.
The exercises constructed with the use of the selected sentences were overall somewhat hard for beginners, but they were of an appropriate difficulty level for learners at subsequent stages.
Moreover, learners' error rates at some levels correlated even slightly better with the CEFR levels predicted by our system than the averaged levels assigned by teachers.
All in all, the evaluation indicated that the described system has good potential to enhance language instruction, either by assisting teaching professionals when creating practice material or by providing self-study opportunities for learners in the form of automatically generated exercises. Although our main focus was on seed sentence selection, the proposed system can be useful also for the identification of dictionary example sentences.
Future work could include a version of the system aware of word senses, both as search terms and as entries in the word lists applied. This would also enable searching for sentences belonging to specific topics or domains. Moreover, additional information about learners' lexical knowledge could be incorporated, for example, based on learner-written essays.
Another valuable direction of further research would be the extension of the algorithm to multiple languages, for example through the use of universal POS and dependency tags.
Finally, collecting additional data on how learners perform on the exercises constructed out of the selected sentences could also provide further indication on the quality and usefulness of the proposed algorithm.
\section{Introduction}
The magnetic dynamics and transport properties in antiferromagnetic materials are essential for the performance of antiferromagnetic spintronic devices, and have been investigated intensively in the past decade~\cite{Baltz18,Hoffmann18}. Among these studies, highly efficient spin transmission through an insulating antiferromagnetic layer was demonstrated in ferromagnet-antiferromagnet-normal metal (NM) trilayer structures~\cite{WangHL2014,Lin16,Qiu16,YiWang2019}. The lack of itinerant electrons implies that the spin information should be able to transmit across the antiferromagnetic insulator (AFI) in the form of polarized magnons. Non-local measurements in different materials revealed that the spin transport distance associated with antiferromagnetic magnons can reach several or even tens of microns~\cite{Lebrun18,Yuan2018,2D_Xing19}, which is already comparable with that in high-quality yttrium iron garnet, a ferrimagnetic material famous for its ultralow magnetic damping~\cite{Cornelissen15,Cornelissen16b}. Recently, spin injection into NM from subterahertz magnons was realized in AFI-NM bilayers by spin pumping~\cite{JShi20,Vaidya20} and an optical approach~\cite{DiWu20}. These advances offer new opportunities for promising applications of AFIs.
As most of the previous experimental works concern easy-axis AFIs, such as $\alpha$-Fe$_2$O$_3$ (below the transition temperature of around 260~K in bulk)~\cite{Lebrun18}, Cr$_2$O$_3$~\cite{Yuan2018,Qiu18,JShi20}, MnF$_2$~\cite{Vaidya20}, and MnPS$_3$~\cite{2D_Xing19}, systems with a magnetic easy plane, like NiO and $\alpha$-Fe$_2$O$_3$ (above the transition temperature), are also quite interesting because of their distinctive features. For instance, the U(1) spin-rotational symmetry within the easy plane allows spin superfluidity~\cite{Halperin69,Takei14,Qaium17}. Another important advantage for applications, compared with easy-axis AFIs, is the easier manipulation of the magnetization, because the Neel vector in easy-plane AFIs remains perpendicular to, and can therefore be rotated by, an in-plane magnetic field. This allows field modulation of spin transport~\cite{Luqiao20} and spin Hall magnetoresistance~\cite{Wees20}. In contrast to the easy-axis case, where the magnon bands are two-fold degenerate, the magnetic anisotropy in easy-plane AFIs breaks the symmetry between the in-plane and out-of-plane magnetization dynamics and results in a band splitting. The lower- and higher-frequency modes correspond to the in-plane and out-of-plane motion of the Neel vector~\cite{Rezende19}, respectively. Such a splitting leads to coherent dynamics of the magnon spin polarization~\cite{Luqiao20}, and its modulation by an external magnetic field causes a Hanle-type effect~\cite{Wimmer20,Kamra20}.
Interestingly, the spatial motion of magnons in AFIs, similar to that of mobile electrons in metallic systems, can be correlated with their spin polarization, for example, in noncentrosymmetric systems via the Dzyaloshinskii–Moriya interaction (DMI), which provides the possibility to discover electron-like spin-orbit phenomena. Theoretical studies predicted a magnon spin Nernst effect~\cite{Cheng2016,Kovalev16} and a magnonic Edelstein effect~\cite{Kovalev20,Rancheng20} driven by DMI. Recently, the dipole-dipole interaction (DDI), which is usually ignored in antiferromagnets, was shown to be able to manifest itself as an effective spin-orbit coupling (SOC)~\cite{Shen20,JLiu20b} between magnon states in uniaxial easy-axis AFIs. Such a magnon SOC can also give rise to various spin-orbit phenomena, e.g., an intrinsic magnon spin Hall effect (SHE)~\cite{Shen20}, D'yakonov-Perel' (DP)-type magnon spin relaxation, and topological surface states~\cite{JLiu20}. The role of this DDI-induced mechanism in easy-plane AFIs is yet to be examined.
In this work, we calculate the magnon spectrum of easy-plane AFIs analytically, taking into account the exchange interaction, the magnetic anisotropy, the Zeeman energy due to an in-plane magnetic field and, as motivated above, the DDI. Since the DDI is much weaker than the splitting between the in-plane and out-of-plane polarized magnon modes caused by the magnetic anisotropy, its influence is negligible in the weak-field regime. Very interestingly, as the magnetic field increases, the band splitting is globally suppressed and, at a compensation magnetic field, the contributions from the magnetic anisotropy and the Zeeman term cancel each other exactly in the entire Brillouin zone, making the easy-plane AFI approximately equivalent to an easy-axis one.
Physically, this is because the external field introduces an additional magnetic anisotropy, which behaves as a hard axis along the magnetic field and together with the original natural hard axis defines a hard plane, making the normal direction equivalently an easy axis. The resulting magnetic anisotropy becomes uniaxial when the strengths in the two hard axes are equal.
As a result, the momentum-dependent SOC due to the DDI becomes dominant and the magnon spin Hall mechanism is switched on. In the meantime, the DP-type magnon spin relaxation, although relevant regardless of the strength of the magnetic field, can be strongly modified around the compensation field. Moreover, the roles of the DMI and of an additional magnetic anisotropy within the easy plane will also be addressed.
\section{Model}\label{smodel}
We start from the minimal model for an easy-plane AFI including the magnetic anisotropy and the antiferromagnetic exchange interaction between nearest neighbors. An external magnetic field is applied within the $y$-$z$ easy plane to control the orientation of the Neel vector. Without loss of generality, as illustrated in Fig.~\ref{config}, the magnetic field is set to be along the $y$-axis, which leads to a canting of the two antiferromagnetically coupled sublattice magnetizations $\bs m_1$ and $\bs m_2$. The net magnetization $\bs m=\bs m_1 +\bs m_2$ and the Neel vector $\bs n=\bs m_1 -\bs m_2$ are therefore along the $y$ and $z$ directions, respectively. The canting angle $\theta$ can be determined by minimizing the total energy described by the Hamiltonian
\ber
H&=&\sum_{i}\left[K(S_{ai}^{x})^{2}+K(S_{di}^{x})^{2}-g\mu_{B}BS_{ai}^{y}-g\mu_{B}BS_{di}^{y}\right]\nonumber\\
&&-\sum_{\langle i,j\rangle}J\boldsymbol{S}_{ai}\cdot\boldsymbol{S}_{dj},
\label{H0}
\eer
where the anisotropy coefficient $K>0$ and the inter-sublattice exchange coupling constant $J<0$.
The subscripts $a$ and $d$ label the two sublattices.
For a system with $2N$ magnetic ions, the total energy reads
\be
E\approx-Nz|J|S^{2}\cos2\theta-2Ng\mu_{B}BS\sin\theta,
\label{totalE}
\ee
and thus the canting angle is determined by
\be
\sin\theta=\frac{\omega_{\rm Z}}{2\omega_{\rm ex}}.
\ee
Here, $\omega_{\rm Z}=g\mu_B B/\hbar$ and $\omega_{\rm ex}=z|J|S/\hbar$ represent the frequency scales of the Zeeman term and the exchange interaction, respectively.
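As a quick consistency check, one can minimize the total energy of Eq.~(\ref{totalE}) numerically and compare the resulting canting angle with the analytic relation above. A minimal Python sketch (the frequency values are illustrative, not material parameters):

```python
import math

# Numerical cross-check of sin(theta) = w_Z/(2 w_ex): minimize the total
# energy of Eq. (totalE), divided by 2 N hbar S, on a fine grid of canting
# angles and compare with the analytic result. Illustrative values, hbar = 1.
w_ex = 1.0    # exchange frequency  z|J|S/hbar
w_Z = 0.3     # Zeeman frequency    g mu_B B/hbar

def energy(theta):
    # E/(2 N hbar S) = -(w_ex/2) cos(2 theta) - w_Z sin(theta)
    return -0.5 * w_ex * math.cos(2.0 * theta) - w_Z * math.sin(theta)

grid = [i * (math.pi / 2.0) / 100000 for i in range(100001)]
theta_num = min(grid, key=energy)
theta_analytic = math.asin(w_Z / (2.0 * w_ex))
assert abs(theta_num - theta_analytic) < 1e-4
```

The grid minimum agrees with $\arcsin[\omega_Z/(2\omega_{\rm ex})]$ to within the grid resolution.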
\begin{figure}[tp]
\includegraphics[width=5cm]{fig1.eps}
\caption{The equilibrium spin configuration of an easy-plane AFI in the presence of an in-plane external magnetic field.}
\label{config}
\end{figure}
The spin operators in Eq.~(\ref{H0}) can be expressed under the local equilibrium configuration via a rotation operation
\ber
\left(\begin{array}{c}
S_{ai}^{x}\\
S_{ai}^{y}\\
S_{ai}^{z}
\end{array}\right) &=& \left(\begin{array}{ccc}
1 &0 &0\\
0& \cos\theta & \sin\theta\\
0& -\sin\theta & \cos\theta
\end{array}\right)\left(\begin{array}{c}
\tilde{S}_{ai}^{x}\\
\tilde{S}_{ai}^{y}\\
\tilde{S}_{ai}^{z}
\end{array}\right),\\
\left(\begin{array}{c}
S_{di}^{x}\\
S_{di}^{y}\\
S_{di}^{z}
\end{array}\right) &=& \left(\begin{array}{ccc}
1 &0 &0\\
0& \cos\theta & -\sin\theta\\
0& \sin\theta & \cos\theta
\end{array}\right)\left(\begin{array}{c}
\tilde{S}_{di}^{x}\\
\tilde{S}_{di}^{y}\\
\tilde{S}_{di}^{z}
\end{array}\right),
\eer
which leads to
\ber
H&=&\sum_{i}\left[K(\tilde{S}_{ai}^{x})^{2}+K(\tilde{S}_{di}^{x})^{2}-g\mu_{B}B\sin\theta(\tilde{S}_{ai}^{z}-\tilde{S}_{di}^{z})\right]\nonumber\\
&&-\sum_{\langle i,j\rangle}J\left[\tilde{\boldsymbol{S}}_{ai}\cdot\tilde{\boldsymbol{S}}_{dj}-2\sin^{2}\theta(\tilde{S}_{ai}^{y}\tilde{S}_{dj}^{y}+\tilde{S}_{ai}^{z}\tilde{S}_{dj}^{z})\right.\nonumber\\
&&\hspace{1cm}\left. -\sin2\theta(\tilde{S}_{ai}^{y}\tilde{S}_{dj}^{z}-\tilde{S}_{ai}^{z}\tilde{S}_{dj}^{y})\right].\label{EqH0}
\eer
By applying the standard Holstein-Primakoff transformation~\cite{Holstein40} to the spin operators
\ber
\tilde S_{a}^{z}=S-a^{\dagger}a,& \tilde S_{a}^{+}=\sqrt{2S-a^{\dagger}a}a,\nonumber\\
\tilde S_{d}^{z}=-S+d^{\dagger}d,& \tilde S_{d}^{+}=d^{\dagger}\sqrt{2S-d^{\dagger}d},
\label{HP}
\eer
one can write out the quadratic terms in momentum space under the basis of $(a_{\boldsymbol{k}},d_{\boldsymbol{k}},a_{-\boldsymbol{k}}^{\dagger},d_{-\boldsymbol{k}}^{\dagger})^T$ as
\be
H_{\bs k,-\bs k}^0=
\left(\begin{array}{cccc}
{\cal A} & {\cal C}_{\boldsymbol{k}} & {\cal A}' & {\cal B}_{\boldsymbol{k}}-{\cal C}_{\boldsymbol{k}}\\
{\cal C}_{\boldsymbol{k}} & {\cal A} & {\cal B}_{\boldsymbol{k}}-{\cal C}_{\boldsymbol{k}} & {\cal A}'\\
{\cal A}' & {\cal B}_{\boldsymbol{k}}-{\cal C}_{\boldsymbol{k}} & {\cal A} & {\cal C}_{\boldsymbol{k}}\\
{\cal B}_{\boldsymbol{k}}-{\cal C}_{\boldsymbol{k}} & {\cal A}' & {\cal C}_{\boldsymbol{k}} & {\cal A}
\end{array}\right),\label{H00}
\ee
where ${\cal A}/\hbar=\omega_{{\rm an}}+\omega_{{\rm ex}}$, ${\cal A}'/\hbar=\omega_{{\rm an}}$, ${\cal B}_{\bs k}/\hbar=\gamma_{\boldsymbol{k}}\omega_{{\rm ex}}$, and ${\cal C}_{\boldsymbol{k}}/\hbar=\gamma_{\boldsymbol{k}}\omega_{\delta}$ with $\omega_{\rm an}={KS}/\hbar$ and $\omega_{\delta}=\omega_{\rm ex}\sin^2\theta$. The form factor $\gamma_{\boldsymbol k}=(1/z)\sum_{\bs \delta}\exp(i \bs \delta \cdot \bs k)$ averages the phase factor over all $z$ antiferromagnetically coupled neighbors and is real in a cubic lattice.
\subsection{Magnon dispersion relation}
In order to compute the dispersion relation analytically, it is convenient to define, according to the symmetry, magnon operators forming an orthogonal, linearly polarized basis
\be
\phi^\pm_{\boldsymbol{k}} = (a_{\boldsymbol{k}}\pm d_{\boldsymbol{k}})/\sqrt{2},
\ee
and rewrite Hamiltonian (\ref{H00}) under the basis of $[\phi^+_{\boldsymbol{k}},(\phi^+_{-\boldsymbol{k}})^{\dagger},\phi_{\boldsymbol{k}}^-,(\phi^-_{-\boldsymbol{k}})^{\dagger}]^T$ as
\be
\tilde H_{\bs k,-\bs k}^0=
\left(\begin{array}{cccc}
{\cal A}+{\cal C}_{\boldsymbol{k}} & {\cal B}_{\boldsymbol{k}}^{+}-{\cal C}_{\boldsymbol{k}} & 0 & 0\\
{\cal B}_{\boldsymbol{k}}^{+}-{\cal C}_{\boldsymbol{k}} & {\cal A}+{\cal C}_{\boldsymbol{k}} & 0 & 0\\
0 & 0 & {\cal A}-{\cal C}_{\boldsymbol{k}} & {\cal B}_{\boldsymbol{k}}^{-}+{\cal C}_{\boldsymbol{k}}\\
0 & 0 & {\cal B}_{\boldsymbol{k}}^{-}+{\cal C}_{\boldsymbol{k}} & {\cal A}-{\cal C}_{\boldsymbol{k}}
\end{array}\right),
\label{Hkk}
\ee
in which ${\cal B}_{\boldsymbol{k}}^{\pm}={\cal A}'\pm{\cal B}_{\bs k}$. Apparently, Hamiltonian~(\ref{Hkk}) decouples into two individual BdG blocks, both of which can be solved analytically via the Bogoliubov transformation. A straightforward calculation gives the eigenfrequencies of the two linearly polarized magnon modes
\be
\omega_{\boldsymbol{k}}^{\pm}=\sqrt{({\cal A}+{\cal B}_{\boldsymbol{k}}^{\pm})({\cal A}-{\cal B}_{\boldsymbol{k}}^{\pm}\pm2{\cal C}_{\boldsymbol{k}})}/\hbar,
\label{freqpm}
\ee
and the operators of the eigenstates
\be \psi_{\boldsymbol{k}}^{\pm} = u_{\boldsymbol{k}}^{\pm}\phi_{\boldsymbol{k}}^{\pm}+v_{\boldsymbol{k}}^{\pm}(\phi_{-\boldsymbol{k}}^{\pm})^{\dagger},\label{psipm}
\ee
where the coefficients can be expressed by
\ber
u_{\boldsymbol{k}}^{\pm} &=& \sqrt{\frac{{\cal A}\pm{\cal C}_{\boldsymbol{k}}+\hbar\omega_{\boldsymbol{k}}^{\pm}}{2\hbar\omega_{\boldsymbol{k}}^{\pm}}},\\
v_{\boldsymbol{k}}^{\pm} &=& {\rm sgn}{({\cal B}_{\boldsymbol{k}}^{\pm}\mp{\cal C}_{\boldsymbol{k}})}\sqrt{\frac{{\cal A}\pm{\cal C}_{\boldsymbol{k}}-\hbar\omega_{\boldsymbol{k}}^{\pm}}{2\hbar\omega_{\boldsymbol{k}}^{\pm}}}.
\eer
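Each BdG block above has the form $[[a,b],[b,a]]$ in the basis $[\phi_{\bs k},(\phi_{-\bs k})^\dagger]$, whose bosonic diagonalization gives the frequency $\sqrt{a^2-b^2}$. The following Python sketch checks Eq.~(\ref{freqpm}) and the bosonic normalization $u^2-v^2=1$ against this direct evaluation (all parameter values are illustrative, not material input):

```python
import math

# Sanity check of Eq. (freqpm) and the Bogoliubov coefficients for both
# BdG blocks of Eq. (Hkk). Illustrative (non-material) parameters, hbar = 1.
w_ex, w_an, gk = 1.0, 0.01, 0.7
w_Z = 0.3
w_delta = w_Z ** 2 / (4.0 * w_ex)       # w_ex sin^2(theta)

A = w_an + w_ex                         # cal A / hbar
Ap = w_an                               # cal A' / hbar
B = gk * w_ex                           # cal B_k / hbar
C = gk * w_delta                        # cal C_k / hbar

for sgn in (+1, -1):
    Bpm = Ap + sgn * B                  # cal B_k^{pm}
    a = A + sgn * C                     # diagonal of the BdG block
    b = Bpm - sgn * C                   # particle-hole off-diagonal
    w_block = math.sqrt(a * a - b * b)  # positive Bogoliubov frequency
    w_formula = math.sqrt((A + Bpm) * (A - Bpm + sgn * 2.0 * C))  # Eq. (freqpm)
    assert abs(w_block - w_formula) < 1e-12
    # Bogoliubov coefficients and the bosonic normalization u^2 - v^2 = 1
    u = math.sqrt((a + w_block) / (2.0 * w_block))
    v = math.copysign(math.sqrt((a - w_block) / (2.0 * w_block)), b)
    assert abs(u * u - v * v - 1.0) < 1e-12
```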
In the long-wavelength limit, $\bs k\simeq 0$, one has $\gamma_{\bs k}\simeq 1$ and therefore
\ber
\omega_{\boldsymbol{0}}^{+} &=& 2\sqrt{\omega_{\delta}(\omega_{{\rm ex}}+\omega_{{\rm an}})}=\omega_{Z}\sqrt{\frac{\omega_{{\rm ex}}+\omega_{{\rm an}}}{\omega_{{\rm ex}}}},\\
\omega_{\boldsymbol{0}}^{-} &=& 2\sqrt{\omega_{{\rm an}}(\omega_{{\rm ex}}-\omega_{\delta})}=\sqrt{\frac{\omega_{{\rm an}}}{\omega_{{\rm ex}}}(4\omega_{{\rm ex}}^{2}-\omega_{Z}^{2})}.
\eer
Notice that $\omega_{\boldsymbol{0}}^{+}$ is proportional to the external field and therefore vanishes at zero field, whereas $\omega_{\boldsymbol{0}}^{-}$ remains finite and is relatively insensitive to the magnetic field. As a consequence, the two become equal at
\be
\omega_{\delta }=\frac{\omega_{{\rm ex}}\omega_{{\rm an}}}{\omega_{{\rm ex}}+2\omega_{{\rm an}}},
\ee
corresponding to the compensation Zeeman field
\be
\omega_{Zc}=2\omega_{{\rm ex}}\sqrt{\frac{\omega_{{\rm an}}}{\omega_{{\rm ex}}+2\omega_{{\rm an}}}}.
\ee
In hematite, the compensation field is around $8$~T~\cite{Wimmer20}.
The canting angle at this compensation field is
\be
\sin\theta_c=\frac{\omega_{Zc}}{2\omega_{{\rm ex}}}=\sqrt{\frac{\omega_{{\rm an}}/\omega_{\rm ex}}{1+2\omega_{{\rm an}}/\omega_{\rm ex}}}.
\ee
Since the magnetic anisotropy in typical antiferromagnets is much smaller than the exchange energy, i.e., $\omega_{{\rm an}}/\omega_{\rm ex}\ll 1$, the canting angle $\theta_c$ at the compensation field is small, so the spin configuration remains approximately collinear antiferromagnetic.
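A short numerical check that the quoted $\omega_{Zc}$ indeed equalizes the two $\bs k=0$ gaps, and that they split again away from it; the anisotropy-to-exchange ratio below is illustrative:

```python
import math

# Check that w_Zc equalizes the k = 0 gaps w0+ and w0- of the two magnon
# branches. Illustrative ratio w_an/w_ex = 0.01, hbar = 1.
w_ex, w_an = 1.0, 0.01
w_Zc = 2.0 * w_ex * math.sqrt(w_an / (w_ex + 2.0 * w_an))

def gaps(w_Z):
    w_delta = w_Z ** 2 / (4.0 * w_ex)   # w_ex sin^2(theta)
    w0p = 2.0 * math.sqrt(w_delta * (w_ex + w_an))
    w0m = 2.0 * math.sqrt(w_an * (w_ex - w_delta))
    return w0p, w0m

g1, g2 = gaps(w_Zc)
assert abs(g1 - g2) < 1e-12             # gaps coincide at the compensation field
w0p, w0m = gaps(0.5 * w_Zc)
assert w0p < w0m                        # and split again below it
```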
Figure~\ref{dispersion} shows the dispersion relations for three typical strengths of the external magnetic field. The gapless, linearly dispersive branch ($\omega_{\bs k}^{+}$) in the absence of a magnetic field corresponds to the Neel vector oscillating within the easy plane together with a small net magnetization oscillating out of the plane. In contrast, the gapped mode ($\omega_{\bs k}^-$) displays a large out-of-plane oscillation of the Neel vector along with a small in-plane magnetization oscillation. Since the $\omega_{\bs k}^{+}$ mode is more sensitive to the magnetic field than the $\omega_{\bs k}^{-}$ mode, as discussed above, the two frequencies at $\bs k=0$ become equal at $\omega_Z=\omega_{Zc}$. Very importantly, according to the middle panel of Fig.~\ref{dispersion} and Eq.~(\ref{freqpm}), the two branches then become degenerate for any $\bs k$. The dispersion relation reads
\be
\omega_{\boldsymbol{k}}^{\pm}=\sqrt{\frac{\omega_{{\rm ex}}}{\omega_{{\rm ex}}+2\omega_{{\rm an}}}}\sqrt{(2\omega_{{\rm an}}+\omega_{{\rm ex}})^{2}-(\gamma_{\boldsymbol{k}}\omega_{{\rm ex}})^{2}}.
\ee
Under this condition, an arbitrary combination of the two linearly polarized modes remains an eigenmode of Hamiltonian~(\ref{Hkk}), which allows a transformation from the linearly polarized modes to circularly polarized ones. This is very similar to the situation in easy-axis AFIs. In other words, the compensation magnetic field drives the easy-plane AFI into a configuration equivalent to that of an easy-axis AFI. This issue will be discussed further in Sec.~\ref{Rspin}. Moreover, this effect is robust against an in-plane anisotropy, as will be shown later in the paper.
For a field stronger than $\omega_{Zc}$, the $\omega_{\bs k}^{+}$ branch is lifted above the $\omega_{\bs k}^{-}$ one.
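The claimed all-$\bs k$ degeneracy can be verified directly from Eq.~(\ref{freqpm}): inserting the compensation value of $\omega_\delta$ and scanning the form factor $\gamma_{\bs k}$ over $[-1,1]$ reproduces the single closed-form dispersion. A Python sketch with illustrative parameters ($\hbar=1$):

```python
import math

# At the compensation field, Eq. (freqpm) should give w+_k = w-_k for every
# value of the form factor gamma_k, matching the closed-form dispersion.
# Illustrative ratio w_an/w_ex = 0.01 as in Fig. 2, hbar = 1.
w_ex, w_an = 1.0, 0.01
w_delta = w_ex * w_an / (w_ex + 2.0 * w_an)   # compensation condition

A, Ap = w_an + w_ex, w_an
for i in range(201):
    gk = -1.0 + 0.01 * i                      # scan the form factor
    B, C = gk * w_ex, gk * w_delta
    wp = math.sqrt((A + Ap + B) * (A - Ap - B + 2.0 * C))   # w+_k
    wm = math.sqrt((A + Ap - B) * (A - Ap + B - 2.0 * C))   # w-_k
    # closed-form degenerate dispersion quoted in the text
    wdeg = math.sqrt(w_ex / (w_ex + 2.0 * w_an)) * math.sqrt(
        (2.0 * w_an + w_ex) ** 2 - (gk * w_ex) ** 2)
    assert abs(wp - wm) < 1e-9 and abs(wp - wdeg) < 1e-9
```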
\begin{figure}[tp]
\includegraphics[width=8.5cm]{fig2.eps}
\caption{Dispersion relations of the two magnon modes (the green curve for $\omega^-_{\bs k}$ and the purple one for $\omega^+_{\bs k}$) for three typical magnetic field strengths. In the calculation, we adopt $\omega_{\rm an}/\omega_{\rm ex}=0.01$ and the form factor $\gamma_{\boldsymbol k}=\cos(k_xa/2)\cos(k_ya/2)\cos(k_za/2)$ is evaluated along the $(k_x,0,0)$ momentum line. The insets illustrate the magnetization dynamics of the two sublattices, with the white curves representing the trajectory of each magnetic moment.}
\label{dispersion}
\end{figure}
\subsection{DDI-induced SOC}
The long-range dipole-dipole interaction couples every pair of spins and reads
\be
H^d=\frac{\mu_0 (g\mu_B)^2}{2}\sum_{l\ne l'} \frac{{|{\boldsymbol R}_{ll'}|}^2 \boldsymbol S_l\cdot\boldsymbol S_{l'}-3({{\boldsymbol R}_{ll'}}\cdot \boldsymbol S_l)({{\boldsymbol R}_{ll'}}\cdot \boldsymbol S_{l'})}{|{\boldsymbol R}_{ll'}|^5},\label{dd}
\ee
where $g$ is the $g$ factor, $\mu_B$ the Bohr magneton, and $\mu_0$ the vacuum permeability. As mentioned above, the ground-state spin configuration remains approximately collinear in the regime of interest, owing to the small canting angle. Therefore,
for the sake of simplicity, we make the approximation ${\bs S} \simeq \tilde {\bs S} $ in Eq.~(\ref{dd}) and apply the Holstein-Primakoff transformation, which results,
under the basis of $(a_{\boldsymbol{k}},d_{\boldsymbol{k}},a_{-\boldsymbol{k}}^{\dagger},d_{-\boldsymbol{k}}^{\dagger})^T$, in~\cite{Shen20}
\be
H_{\boldsymbol k,-\bs k}^d=
\left(\begin{array}{cccc}
A_{\boldsymbol{k}} & \gamma_{\boldsymbol{k}}B_{\boldsymbol{k}}^{\ast} & B_{\boldsymbol{k}}^{\ast} & \gamma_{\boldsymbol{k}}A_{\boldsymbol{k}}\\
\gamma_{\boldsymbol{k}}B_{\boldsymbol{k}} & A_{\boldsymbol{k}} & \gamma_{\boldsymbol{k}}A_{\boldsymbol{k}} & B_{\boldsymbol{k}}\\
B_{\boldsymbol{k}} & \gamma_{\boldsymbol{k}}A_{\boldsymbol{k}} & A_{\boldsymbol{k}} & \gamma_{\boldsymbol{k}}B_{\boldsymbol{k}}\\
\gamma_{\boldsymbol{k}}A_{\boldsymbol{k}} & B_{\boldsymbol{k}}^{\ast} & \gamma_{\boldsymbol{k}}B_{\boldsymbol{k}}^{\ast} & A_{\boldsymbol{k}}
\end{array}\right),
\label{Hd}
\ee
in which
\ber
A_{\boldsymbol{k}}&=&-2S\mu_{0}\mu_{B}^{2}\sum_{R_{ll'}\ne0} \frac{R_{ll'}^{2}-3(R_{ll'}^{z})^{2}}{R_{ll'}^{5}} e^{-i\mathbf{k}\cdot R_{ll'}},\\
B_{\boldsymbol{k}}&=&-6S\mu_{0}\mu_{B}^{2}\sum_{R_{ll'}\ne0} \frac{1}{R_{ll'}^{5}}(R_{ll'}^{+})^{2} e^{i\mathbf{k}\cdot R_{ll'}}.
\eer
After computing the summation in continuum limit~\cite{Akhiezer68,Akashdeep17B,Shen20}, we obtain
\be
B_{\boldsymbol{k}}=A_{\boldsymbol{k}}e^{2i\phi_{\boldsymbol k}}= \frac{1}{2}\hbar\omega_m\sin^2 \theta_{\boldsymbol k}e^{2i\phi_{\boldsymbol k}},\label{Bk}
\ee
with $\omega_m=\gamma\mu_0M_s$, where $\gamma$ is the gyromagnetic ratio and $M_s$ represents the magnetization of a single sublattice.
By projecting Eq.~(\ref{Hd}) to the magnon particle space $(\psi^+_{\bs k}, \psi^-_{\bs k})$, we obtain
\be
H_{\bs k,-\bs k}^d=\left(\begin{array}{cc}
\Delta_{\boldsymbol{k}}^{++} & i\Delta_{\boldsymbol{k}}^{+-}\\
-i \Delta_{\boldsymbol{k}}^{+-} &-\Delta_{\boldsymbol{k}}^{--}
\end{array}\right),\label{H0d}
\ee
which shows a coupling between the two linear polarized magnon modes. The coupling parameters are
\ber
\Delta_{\boldsymbol{k}}^{++}&=&\Re B_{\boldsymbol{k}}(\gamma_{\boldsymbol{k}}u_{\boldsymbol{k}}^{+}u_{\boldsymbol{k}}^{+}-2v_{\boldsymbol{k}}^{+}u_{\boldsymbol{k}}^{+}+\gamma_{\boldsymbol{k}}v_{\boldsymbol{k}}^{+}v_{\boldsymbol{k}}^{+}),\label{socp}\\
\Delta_{\boldsymbol{k}}^{--}&=&\Re B_{\boldsymbol{k}}(\gamma_{\boldsymbol{k}}u_{\boldsymbol{k}}^{-}u_{\boldsymbol{k}}^{-}+2v_{\boldsymbol{k}}^{-}u_{\boldsymbol{k}}^{-}+\gamma_{\boldsymbol{k}}v_{\boldsymbol{k}}^{-}v_{\boldsymbol{k}}^{-}),\\
\Delta_{\boldsymbol{k}}^{+-}&=&\Im B_{\boldsymbol{k}}(\gamma_{\boldsymbol{k}}u_{\boldsymbol{k}}^{+}u_{\boldsymbol{k}}^{-}+u_{\boldsymbol{k}}^{+}v_{\boldsymbol{k}}^{-}-v_{\boldsymbol{k}}^{+}u_{\boldsymbol{k}}^{-}-\gamma_{\boldsymbol{k}}v_{\boldsymbol{k}}^{+}v_{\boldsymbol{k}}^{-}).\label{soc}
\eer
The total effective non-interacting Hamiltonian $H_{\bs k}$ under the basis of $(\psi^+_{\bs k}, \psi^-_{\bs k})$ thus becomes
\be
H_{\bs k}=\left(\begin{array}{cc}
\bar\varepsilon_{\bs k}+\delta\varepsilon_{\bs k} & i\Delta_{\boldsymbol{k}}^{+-}\\
-i\Delta_{\boldsymbol{k}}^{+-} &\bar\varepsilon_{\bs k}-\delta\varepsilon_{\boldsymbol{k}}
\end{array}\right),\label{H0t}
\ee
with $\bar\varepsilon_{\bs k}=(\hbar\omega_{\bs k}^++\hbar\omega_{\bs k}^-+\Delta_{\bs k}^{++}-\Delta_{\bs k}^{--})/2$ and $\delta\varepsilon_{\bs k}=(\hbar\omega_{\bs k}^+-\hbar\omega_{\bs k}^-+\Delta_{\bs k}^{++}+\Delta_{\bs k}^{--})/2$. One sees that the magnetic anisotropy contributes to the spin-orbit field via the band splitting $|\hbar\omega_{\bs k}^+-\hbar\omega_{\bs k}^-|$ in $\delta{\varepsilon_{\bs k}}$. Since this band splitting lies in the subterahertz range, much larger than the dipolar interaction $|B_{\bs k}|$ on the order of gigahertz, the magnon SOC is dominated by the magnetic anisotropy, except around the compensation magnetic field where $|\hbar\omega_{\bs k}^+-\hbar\omega_{\bs k}^-|\simeq 0$.
\subsection{Spin polarized representation}\label{Rspin}
As discussed above, the two magnon eigenstates given by Eq.~(\ref{psipm}) are both linearly polarized, meaning that they carry no net spin. A unitary transformation to a circularly polarized basis can be achieved by
\be
\left(\begin{array}{c}
\alpha_{\boldsymbol{k}}\\
\beta_{\boldsymbol{k}}
\end{array}\right)
={\bf A}\left(\begin{array}{c}
\psi_{\boldsymbol{k}}^{+}\\
\psi_{\boldsymbol{k}}^{-}
\end{array}\right),
\ee
where the transformation matrix is defined as
\be
{\bf A}=\left(\begin{array}{cc}
\cos\chi & \sin\chi\\
-\sin\chi & \cos\chi
\end{array}\right).\label{transform}
\ee
One can verify that, with the parameter $\chi$ given by
\be
\sin2\chi=(u_{\boldsymbol{k}}^{+}u_{\boldsymbol{k}}^{-}+v_{\boldsymbol{k}}^{+}v_{\boldsymbol{k}}^{-})^{-1},
\ee
the two modes $\alpha$ and $\beta$ each carry one unit of spin, with opposite signs.
In particular, at the compensation magnetic field, we have $\omega_{\bs k}^+=\omega_{\bs k}^-=\omega_{\bs k}$ and $\omega_\delta\ll\omega_{\rm ex}$, which lead to
\ber
u_{\bs k}^+&\simeq& u_{\bs k}^-\simeq u_{\bs k},\\
v_{\bs k}^+&\simeq& -v_{\bs k}^-\simeq v_{\bs k},
\eer
and therefore $\sin2\chi\simeq (u_{\bs k}^2-v_{\bs k}^2)^{-1}=1$.
Under this condition, the transform matrix reduces to
\be
{\bf A}\simeq\left(\begin{array}{cc}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\
-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{array}\right),
\ee
and the SOC coefficients become
\ber
\Delta_{\boldsymbol{k}}^{\pm\pm}&\simeq&\Re B_{\boldsymbol{k}}(\gamma_{\boldsymbol{k}}u^2_{\boldsymbol{k}}-2v_{\boldsymbol{k}}u_{\boldsymbol{k}}+\gamma_{\boldsymbol{k}}v^2_{\boldsymbol{k}}),\\
\Delta_{\boldsymbol{k}}^{+-}&\simeq&\Im B_{\boldsymbol{k}}(\gamma_{\boldsymbol{k}}u^2_{\boldsymbol{k}}-2v_{\boldsymbol{k}}u_{\boldsymbol{k}}+\gamma_{\boldsymbol{k}}v^2_{\boldsymbol{k}}).
\eer
In this spin polarized representation $(\alpha,\beta)$, the effective Hamiltonian reads
\ber
\tilde H_{\bs k}&=&\left(\begin{array}{cc}
\bar\varepsilon_{\bs k} & -\delta\varepsilon_{\boldsymbol{k}}+i\Delta_{\boldsymbol{k}}^{+-}\\
-\delta\varepsilon_{\boldsymbol{k}}-i\Delta_{\boldsymbol{k}}^{+-} & \bar\varepsilon_{\bs k}
\end{array}\right),\nonumber\\
&=&\bar\varepsilon_{\bs k}+\bs h_{\bs k}\cdot\bs \sigma,
\label{tildeH}
\eer
with the effective spin-orbit field $\bs h_{\bs k}=(-\delta \varepsilon_{\bs k},-\Delta_{\bs k}^{+-},0)$.
To first order in $\omega_{\rm an}/\omega_{\bs k}$, we obtain
\ber
\tilde H_{\bs k}&=&\left(\begin{array}{cc}
\hbar\omega_{\bs k} & -\eta_{\bs k} B_{\bs k}^\ast \\
-\eta_{\bs k} B_{\bs k} & \hbar\omega_{\bs k}
\end{array}\right),\label{approx_H}
\eer
with $\eta_{\bs k}=(\gamma_{\boldsymbol{k}}u^2_{\boldsymbol{k}}-2v_{\boldsymbol{k}}u_{\boldsymbol{k}}+\gamma_{\boldsymbol{k}}v^2_{\boldsymbol{k}})\approx 2 \gamma_{\bs k}\omega_{\rm an}/\omega_{\bs k}$. Eq.~(\ref{approx_H}) has the same form as in the easy-axis case~\cite{Shen20} because, as explained above, the magnetic anisotropy and the magnetic field together define an effective hard plane (the $x$-$y$ plane) and an easy axis (the $z$ axis) normal to it.
\section{Magnon spin transport}
The spin dynamics of magnons can be described by the semiclassical kinetic equation
\be
\partial_{t}\rho_{\boldsymbol{k}}+i[H_{\boldsymbol{k}},\rho_{\boldsymbol{k}}]
+\frac{1}{2}\{\nabla_{\bs k} H_{\bs k},\nabla \rho_{\bs k}\}=I_{\boldsymbol{k}},\label{kineticE}
\ee
where $\rho_{\bs k}$ is a $2\times 2$ magnon density matrix and the collision integral $I_{\bs k}$ should include all relevant scattering processes, both elastic and inelastic~\cite{Akhiezer68,SSZhang12,Cornelissen16,Liu19,Streib19,Shen2019c,Kamra20,Troncoso20}. Owing to the large splitting between the two spin bands, the density matrix and Eq.~(\ref{kineticE}) should be written in the representation of $(\psi_{\bs k}^+,\psi_{\bs k}^-)$, especially for an accurate computation of the collision integrals.
The second and third terms on the left-hand side of Eq.~(\ref{kineticE}) describe, respectively, the coherent (quasi)spin precession due to the band splitting and the diffusion owing to a spatially inhomogeneous distribution~\cite{Tokatly16,Kamra20}.
It is important to recall that the density matrix $\rho_{\bs k}(t)$ defined under $(\psi_{\bs k}^+,\psi_{\bs k}^-)$ does not, however, directly encode the spin information. In order to extract the spin polarization, one has to project $\rho_{\bs k}(t)$ into the spin polarized representation via
\be
\tilde\rho_{\bs k}(t)={\bf A}\rho_{\bs k}(t){\bf A}^{\rm T}.
\ee
The magnon spin density can then be read off easily as
\be
s_{\bs k(k)}^i=(1/2){\rm Tr}[\tilde\rho_{\bs k(k)}\sigma^i].
\ee
Around the compensation field, the two magnon branches are nearly degenerate; in that case, it is more convenient to write and solve the kinetic equation directly in the $(\alpha_{\bs k},\beta_{\bs k})$ representation~\cite{Shen20}
\be
\partial_{t}\tilde\rho_{\boldsymbol{k}}+i[\tilde H_{\boldsymbol{k}},\tilde\rho_{\boldsymbol{k}}]
+\frac{1}{2}\{\nabla_{\bs k} \tilde H_{\bs k},\nabla \tilde\rho_{\bs k}\}=\tilde I_{\boldsymbol{k}}.\label{kineticEb}
\ee
Strictly speaking, different scattering processes contribute to the dynamics in different ways, depending on whether they conserve particle number, spin polarization, momentum, and so on~\cite{Shen2019c,Troncoso20}. As a simplified treatment, one may apply the relaxation-time approximation
\be
\tilde I_{\bs k}=-\frac{1}{\tau}(\tilde\rho_{\bs k}-\tilde\rho_{k}^0),
\ee
where $\tilde \rho_{k}^0$ and $\tau$ represent the quasi-equilibrium density matrix and the relaxation time of a specific scattering mechanism~\cite{SSZhang12,Flebus16,Cornelissen16,Shen2019c,Troncoso20}.
\subsection{Magnon spin relaxation}
After some calculations based on a perturbation-expansion technique~\cite{Shen:prb2014}, we obtain the drift-diffusion equation~\cite{Shen20,Kamra20}
\be
\partial_t S^i=D \nabla^2 S^i+\epsilon_{ijk}\langle h^j_{\bs k}\rangle S^k-\frac{1}{\tau_s^i}S^i,\label{DDe}
\ee
in which $S^i=\sum_{\bs k}s^i_{\bs k}$ stands for the local spin density and $D=\tau \langle v_{\bs k}^2 \rangle/3$ is the diffusion constant. The first term on the right-hand side corresponds to spin diffusion due to the spatial inhomogeneity of the magnon spin density, and the second term describes spin precession around the net effective spin-orbit field $\langle \bs h_{\bs k}\rangle$. Here, $\langle\cdot\rangle$ represents an average over all thermally occupied magnon states weighted by the Bose distribution. The magnetic-field dependence of $\langle \bs h_{\bs k}\rangle$ results in a Hanle-type feature, which has been discussed explicitly in Refs.~\cite{Wimmer20,Kamra20}.
The last term in Eq.~(\ref{DDe}) is the spin relaxation term, which can be caused by various spin-nonconserving scattering processes~\cite{Shen20}. Due to the presence of the spin-orbit field, spin-conserving scatterings can also contribute to the spin relaxation via the DP-type mechanism~\cite{DPsrt71}. The spin relaxation time from this mechanism reads
\be
(\tau_{s,\rm DP}^{i})^{-1}=\sum_{j\ne i}\tau[\langle(h_{\bs k}^{j})^2\rangle-\langle h_{\bs k}^{j}\rangle^2].\label{DPt}
\ee
In easy-axis AFIs, the magnon spin-orbit field $\bs h_{\bs k}$ stems solely from the dipole-dipole interaction~\cite{Shen20}. In the present case, the magnetic anisotropy provides an additional contribution. Although this SOC piece is collinear (with only an $h_{\bs k}^x$ component), its magnitude varies with frequency, resulting in a difference between $\langle(h_{\bs k}^{j})^2\rangle$ and $\langle h_{\bs k}^{j}\rangle^2$. Accordingly, the relaxation time $\tau$ should involve inelastic scatterings, such as magnon-magnon and magnon-phonon scattering. As the SOC field due to the magnetic anisotropy depends on the strength of the external field, the spin relaxation rate given by Eq.~(\ref{DPt}) also varies with the magnetic field and reaches a minimum at the compensation field. Assuming the diffusion constant $D$ is insensitive to the magnetic field, the magnon spin diffusion length $\lambda_s=\sqrt{D\tau_s}$ will then also vary sharply around the compensation point, as qualitatively shown in Fig.~\ref{ls}.
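As an illustration of Eq.~(\ref{DPt}), the following Python sketch evaluates the Bose-weighted variance of a collinear spin-orbit field over a thermal magnon cloud; the dispersion and the field profile are toy models chosen only to illustrate the procedure, not material input:

```python
import math

# Sketch of the DP relaxation rate, Eq. (DPt): the rate is the relaxation
# time times the Bose-weighted variance of the collinear field component
# h^x_k over the thermal magnon distribution. Toy model, arbitrary units.
def bose(e, kBT):
    return 1.0 / math.expm1(e / kBT)

tau, kBT = 1.0, 0.5
ks = [0.001 + 0.005 * i for i in range(400)]
eps = [math.sqrt(0.01 + k * k) for k in ks]     # toy gapped dispersion
hx = [0.02 / e for e in eps]                    # h^x_k varies with frequency

# Bose occupation times a 3D density-of-states factor ~ k^2
w = [bose(e, kBT) * k * k for k, e in zip(ks, eps)]
norm = sum(w)
mean_h = sum(wi * h for wi, h in zip(w, hx)) / norm
mean_h2 = sum(wi * h * h for wi, h in zip(w, hx)) / norm
rate = tau * (mean_h2 - mean_h ** 2)            # 1/tau_s of Eq. (DPt)
assert rate > 0.0   # nonzero because h^x_k is not uniform over the cloud
```

A momentum-independent field would give zero DP rate, since then $\langle(h^x_{\bs k})^2\rangle=\langle h^x_{\bs k}\rangle^2$.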
\begin{figure}[tp]
\includegraphics[width=7cm]{fig3.eps}
\caption{Magnon spin diffusion length as a function of the external field at temperature $k_BT=\omega_Z$.}
\label{ls}
\end{figure}
\subsection{Magnon (inverse) spin Hall effect}
To examine the magnon (inverse) spin Hall effect in the presence of the band splitting due to the magnetic anisotropy, we next calculate the Berry curvature relevant for the spin Hall effect~\cite{sinova04}
\ber
\Omega^{z,\pm}_{x,y}(\bs k)&=&-\frac{2\Im\langle\psi_{\boldsymbol{k}}^{\pm}|\hat v_{x}|\psi_{\boldsymbol{k}}^{\mp}\rangle\langle\psi_{\boldsymbol{k}}^{\mp}|\hat v_{y}^{z}|\psi_{\boldsymbol{k}}^{\pm}\rangle}{(\varepsilon^{\mp}_{\bs k}-\varepsilon^{\pm}_{\bs k})^{2}}, \label{berry}
\eer
where $\varepsilon^{\pm}_{\bs k}$ and $|\psi_{\bf k}^\pm\rangle$ are eigenenergies and wave functions.
For a general Hamiltonian of the form
\be
H_{\bs k}=\varepsilon_{\bs k} +\Delta^x_{\bs k}\sigma_x+\Delta^y_{\bs k} \sigma_y,
\ee
one has
\ber
\varepsilon_{\bs k}^\pm&=&\varepsilon_{\bs k}\pm\sqrt{(\Delta^x_{\bs k})^2+(\Delta^y_{\bs k})^2} ,\\
|\psi_{\bf k}^\pm\rangle&=&\frac{1}{\sqrt{2}}\left(\begin{array}{c}
1\\
\pm e^{i\varphi_{\Delta_{\boldsymbol{k}}}}
\end{array}\right).\label{psi}
\eer
The (spin) velocity operators are
\ber
\hat v_{x}&=&v_x^0 +v_x^x\sigma_x+v_x^y\sigma_y,\\
\hat v^z_{y}&=&v_y^0\sigma_{z}.
\eer
where ${v}_{x/y}^0=\partial_{k_{x/y}}\varepsilon_{\bs k}$ and $v_x^{x/y}=\partial_{k_x}\Delta^{x/y}_{\bs k}$. The matrix elements in Eq.~(\ref{berry}) can then be calculated as
\ber
\langle\psi_{\boldsymbol{k}}^{\pm}|{\hat v_{x}}|\psi_{\boldsymbol{k}}^{\mp}\rangle
&=&\mp i(v_{x}^{x}\sin\varphi_{\Delta_{\boldsymbol{k}}}-v_{x}^{y}\cos\varphi_{\Delta_{\boldsymbol{k}}}),\\
\langle\psi_{\boldsymbol{k}}^{\mp}|\hat v_{y}^{z}|\psi_{\boldsymbol{k}}^{\pm}\rangle&=&v_y^0.
\eer
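These matrix elements can be checked numerically for arbitrary parameter values; a small Python sketch with test numbers (not tied to any material):

```python
import cmath
import math

# Check the matrix elements used in the Berry-curvature calculation:
# for |psi_pm> = (1, pm e^{i phi})/sqrt(2),
#   <psi_+| v_x |psi_-> = -i (v_x^x sin(phi) - v_x^y cos(phi)),
#   <psi_-| v_y^z |psi_+> = v_y^0.
phi = 0.73
vx0, vxx, vxy, vy0 = 0.4, 1.3, -0.8, 0.9     # arbitrary test numbers

root2 = math.sqrt(2.0)
psi_p = [1.0 / root2, cmath.exp(1j * phi) / root2]
psi_m = [1.0 / root2, -cmath.exp(1j * phi) / root2]

def bra_op_ket(bra, op, ket):
    # <bra| op |ket> for a 2x2 operator given as nested lists
    v0 = op[0][0] * ket[0] + op[0][1] * ket[1]
    v1 = op[1][0] * ket[0] + op[1][1] * ket[1]
    return bra[0].conjugate() * v0 + bra[1].conjugate() * v1

ident = [[1.0, 0.0], [0.0, 1.0]]
sx = [[0.0, 1.0], [1.0, 0.0]]
sy = [[0.0, -1j], [1j, 0.0]]
sz = [[1.0, 0.0], [0.0, -1.0]]

# v_x = v_x^0 + v_x^x sigma_x + v_x^y sigma_y ;  v_y^z = v_y^0 sigma_z
vx = [[vx0 * ident[i][j] + vxx * sx[i][j] + vxy * sy[i][j] for j in (0, 1)]
      for i in (0, 1)]
vyz = [[vy0 * sz[i][j] for j in (0, 1)] for i in (0, 1)]

lhs = bra_op_ket(psi_p, vx, psi_m)
rhs = -1j * (vxx * math.sin(phi) - vxy * math.cos(phi))
assert abs(lhs - rhs) < 1e-12
assert abs(bra_op_ket(psi_m, vyz, psi_p) - vy0) < 1e-12
```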
By substituting these matrix elements into Eq.~(\ref{berry}), we obtain
\be
\Omega^{z,\pm}_{x,y}(\bs k)=\pm\frac{2v_{y}^0(v_{x}^{x}\sin\varphi_{\Delta_{\boldsymbol{k}}}-v_{x}^{y}\cos\varphi_{\Delta_{\boldsymbol{k}}})}{(\Delta^x_{\bs k})^2+(\Delta^y_{\bs k})^2}.
\ee
Specifically, for the present case, we have
\ber
\varepsilon_{\bs k}&=&(\hbar\omega_{\bs k}^++\hbar\omega_{\bs k}^-+\Delta_{\bs k}^{++}-\Delta_{\bs k}^{--})/2,\\
\Delta_{\bs k}^x&=&-\delta\varepsilon_{\bs k}=-(\hbar\omega_{\bs k}^+-\hbar\omega_{\bs k}^-+\Delta_{\bs k}^{++}+\Delta_{\bs k}^{--})/2,\\
\Delta_{\bs k}^y&=&-\Delta_{\bs k}^{+-}.
\eer
Around the compensation field, the approximate low-energy dispersion $\varepsilon_{\bs k}\approx \sqrt{\varepsilon_0^2+c_s^2 k^2}$ gives $v_y^0=({c_{s}^{2}k}/{\varepsilon_{\bs k}})\sin\theta_{\bs k}\sin\phi_{\boldsymbol{k}}$, where $c_s=\omega_{\rm ex}a/2$. According to Eqs.~(\ref{tildeH}) and (\ref{approx_H}), we have
\ber
\Delta_{\bs k}^x&\simeq&-\xi_{\bs k}-\zeta_{\bs k}\sin^2 \theta_{\boldsymbol k}\cos(2\phi_{\bs k}),\\
\Delta_{\bs k}^y&\simeq&-\zeta_{\bs k}\sin^2 \theta_{\boldsymbol k}\sin(2\phi_{\bs k}),
\eer
where $\xi_{\bs k}=\hbar(\omega_{\bs k}^+-\omega_{\bs k}^-)/2$ and $\zeta_{\bs k}={\gamma_{\bs k}\hbar\omega_m\omega_{\rm an}}/{\omega_{\bs k}}$ are the SOC contributions due to the magnetic anisotropy and the dipole-dipole interaction, respectively.
Notice that
\be
\tan\varphi_{\Delta_{\boldsymbol{k}}}=\frac{\Delta_{\boldsymbol{k}}^{y}}{\Delta_{\boldsymbol{k}}^{x}}=\frac{2\zeta_{\boldsymbol{k}}k_{x}k_{y}}{k^{2}\xi_{\boldsymbol{k}}+\zeta_{\boldsymbol{k}}(k_{x}^{2}-k_{y}^{2})}.\label{tanphi}
\ee
Very close to the compensation point, the SOC is dominated by the dipolar interaction, i.e., $\zeta_{\bs k}\gg \xi_{\bs k}$. Thus, from Eq.~(\ref{tanphi}), we have $\varphi_{\Delta_{\bs k}}=2\phi_{\bs k}$. The Berry curvature then reads
\be
\Omega_{x,y}^{z,\pm}(\boldsymbol{k})=\mp\frac{c_s^2}{\varepsilon_{\bs k}\zeta_{\bs k}}(1+k\frac{\xi_{\boldsymbol{k}}^{\prime}}{\zeta_{\boldsymbol{k}}}\cos^{2}\phi_{\boldsymbol{k}})\frac{\sin^{2}\phi_{\boldsymbol{k}}}{\sin^2\theta_{\bs k}},
\ee
where the prime denotes a derivative with respect to $k$. This expression is globally negative and positive for the upper and lower magnon bands, respectively, which indicates the occurrence of the spin Hall effect.
In the opposite limit, where the magnetic anisotropy dominates the SOC, i.e., $\xi_{\bs k}\gg \zeta_{\bs k}$, we have
\be
\tan\varphi_{\Delta_{\boldsymbol{k}}}=\frac{\Delta_{\boldsymbol{k}}^{y}}{\Delta_{\boldsymbol{k}}^{x}}\simeq \frac{\zeta_{\boldsymbol{k}}}{\xi_{\boldsymbol{k}}}\sin^{2}\theta_{\boldsymbol{k}}\sin2\phi_{\boldsymbol{k}}\ll1,
\ee
and therefore
\ber
\sin\varphi_{\Delta_{\boldsymbol{k}}}&\simeq& ({\zeta_{\boldsymbol{k}}}/{\xi_{\boldsymbol{k}}})\sin^{2}\theta_{\boldsymbol{k}}\sin2\phi_{\boldsymbol{k}},\\
\cos\varphi_{\Delta_{\boldsymbol{k}}}&\simeq& 1,
\eer
which lead to
\ber
\Omega_{x,y}^{z,\pm}(\boldsymbol{k})&\simeq&\pm\frac{c_s^2}{\varepsilon_{\bs k}\xi_{\bs k}}\left[\frac{\zeta_{\boldsymbol{k}}}{\xi_{\bs k}}+(\frac{k\zeta_{\boldsymbol{k}}^{\prime}}{\xi_{\bs k}}-2\frac{\zeta_{\boldsymbol{k}}}{\xi_{\bs k}})\sin^{2}\theta_{\boldsymbol{k}}\cos^{2}\phi_{\boldsymbol{k}}\right]\nonumber\\
&&\times \sin^{2}\theta_{\boldsymbol{k}}\sin^{2}\phi_{\boldsymbol{k}}.
\eer
One sees that the Berry curvature is reduced by the factor ${\zeta_{\boldsymbol{k}}}/{\xi_{\bs k}}$, implying a suppression of the spin Hall effect by the magnetic anisotropy. This is a consequence of the collinear nature of the SOC due to the magnetic anisotropy.
\section{Influence of DMI and in-plane anisotropy}
In some easy-plane antiferromagnets, such as $\alpha$-Fe$_2$O$_3$, there is a zero-field magnetization induced by the DMI. To examine its consequences, we describe the DMI by
\be
H^{\rm DM}=D\sum_{{\langle i,j\rangle}^\prime}\hat{x}\cdot\boldsymbol{S}_{ai}\times\boldsymbol{S}_{dj},
\ee
where only the DMI-active bonds are counted in the summation. The DMI then contributes an additional energy
\be
E^{\rm DM}=-Nz'DS^{2}\sin2\theta. \label{EDMI}
\ee
where $z'$ stands for the number of neighboring ions connected by DMI. The condition of the equilibrium canting angle then can be derived by including Eq.(\ref{EDMI}) into Eq.~(\ref{totalE}) as
\be
\omega_{Z}\cos\theta+\omega_{{\rm DM}}\cos2\theta=\omega_{{\rm ex}}\sin2\theta,
\ee
with $\omega_{\rm DM}=z'DS/\hbar$. After some calculations following the techniques introduced in Sec.~\ref{smodel}, we find its contribution to magnon Hamiltonian can be included by the substitutions
\ber
{\cal A} &\to& {\cal A}+\hbar\omega_{{\rm DM}}\tan\theta,\\
{\cal C}_{\boldsymbol{k}} &\to&\hbar\gamma_{\boldsymbol{k}}\omega_{{\rm ex}}\sin^{2}\theta\left(1-\frac{\gamma_{\boldsymbol{k}}^{\prime}\omega_{{\rm DM}}}{\gamma_{\boldsymbol{k}}\omega_{{\rm ex}}}\cot\theta\right).
\eer
In reality, only part of the exchange bonds are involved in the DMI, so that in general $\gamma_{\bs k}\ne\gamma_{\bs k}^\prime$. This makes ${\cal C}_{\boldsymbol{k}}/{\gamma_{\bs k}}$ no longer a constant. As a result, $\omega_{\bs k}^+=\omega_{\bs k}^-$ cannot be satisfied over the entire Brillouin zone at any magnetic field. Namely, no compensation field exists, and the DMI provides an effective spin-orbit field at any external magnetic field, through which it can affect the magnon spin relaxation and the spin Hall effect.
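Since the DMI-modified equilibrium condition above is transcendental, the canting angle must in general be found numerically; a minimal bisection sketch with illustrative frequencies, which also recovers $\sin\theta=\omega_Z/(2\omega_{\rm ex})$ in the DMI-free limit:

```python
import math

# Solve the DMI-modified equilibrium condition
#     w_Z cos(theta) + w_DM cos(2 theta) = w_ex sin(2 theta)
# by bisection on [0, pi/2]. Illustrative frequencies, hbar = 1.
w_ex, w_Z = 1.0, 0.3

def f(theta, w_DM):
    return (w_Z * math.cos(theta) + w_DM * math.cos(2.0 * theta)
            - w_ex * math.sin(2.0 * theta))

def solve(w_DM):
    lo, hi = 0.0, math.pi / 2.0
    for _ in range(80):                 # bisection on the bracketing interval
        mid = 0.5 * (lo + hi)
        if f(lo, w_DM) * f(mid, w_DM) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

theta_0 = solve(0.0)                    # no DMI
theta_dmi = solve(0.05)                 # finite DMI
assert abs(math.sin(theta_0) - w_Z / (2.0 * w_ex)) < 1e-9
assert theta_dmi > theta_0              # D > 0 enhances the canting
```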
In biaxial antiferromagnets such as the intensively studied NiO~\cite{WangHL2014,Lin16,YiWang2019}, there is an easy axis within the easy plane. This effect can be included by an in-plane magnetic anisotropy term~\cite{NiO72,Rezende19}
\be
H^{\rm in}=\sum_i K'(S_{ai}^{z})^{2}+K'(S_{di}^{z})^{2},
\ee
with anisotropy parameter $K'<0$. For simplicity, here we assume the in-plane easy axis is along the $z$-direction, i.e., perpendicular to the applied field. This term gives an energy
\be
E^{\rm in}=2NK'S^{2}\cos^{2}\theta.
\ee
Taking this term into account, we find the condition for the equilibrium canting angle (without DMI)
\be
\sin\theta=\frac{\omega_{Z}}{2(\omega_{{\rm ex}}-\omega_{{\rm an}}^{\prime})},
\ee
with $\omega_{{\rm an}}^{\prime}=|K'|S/\hbar$, and the corrections to the magnon Hamiltonian can be included in Hamiltonian~(\ref{H00}) via the replacements
\ber
{\cal A}&\to&{\cal A}+\hbar\omega_{{\rm an}}^{\prime}(2-3\sin^{2}\theta),\\
{\cal A}'&\to&{\cal A}'+\hbar\omega_{{\rm an}}^{\prime}\sin^{2}\theta.
\eer
Since the corrections are momentum independent, the compensation feature survives.
\section{Summary}
In summary, we have studied the magnon spin transport in easy-plane antiferromagnetic insulators under an in-plane magnetic field. From the analysis of the influence of the magnetic field, we find that the two magnon branches become degenerate at a compensation magnetic field, making the easy-plane antiferromagnet equivalent to a uniaxial easy-axis antiferromagnet. At this compensation condition, the magnon spin-orbit coupling due to the dipolar interaction results in the magnon (inverse) spin Hall effect and D'yakonov--Perel'-type spin relaxation. The compensation feature is found to survive in biaxial easy-plane systems but is removed by the Dzyaloshinskii--Moriya interaction. Far away from the compensation magnetic field, the magnon spin-orbit coupling is dominated by the magnetic anisotropy, and the magnon (inverse) spin Hall effect is suppressed. These results are expected to be applicable to synthetic antiferromagnets, in which the larger magnetic moments of the artificial spin elements enhance the predicted dipolar-induced spin-orbit effects.
\begin{acknowledgments}
This work is supported by the National Natural Science Foundation of China (Grant No. 11974047).
\end{acknowledgments}
{\bf DATA AVAILABILITY}\\
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction}
\label{sec:intro}
Supermassive black holes (SMBHs) are harbored at the nuclei of almost all massive galaxies
in the present-day universe \citep{Kormendy2013ARA&A..51..511K}.
In the bottom-up hierarchical structure formation of the $\Lambda$ cold dark matter (CDM) cosmologies,
galaxies were assembled out of smaller mass via halo and galaxy mergers.
As a natural outcome of frequent galaxy mergers, incoming massive BHs would sink toward the centers,
form binary SMBHs at the galactic nuclei, and coalesce with gravitational wave (GW) emission,
if the BHs were to decay their orbit via dynamical processes within a Hubble time
\citep{Begelman1980Natur.287..307B,Yu2002MNRAS.331..935Y,Merritt2013CQGra..30x4005M,
Khan2016PhRvD..93d4007K,Kelley2017MNRAS.464.3131K}.
Low-frequency GW detectors (LISA, Tianqin, Taiji) and experiments (PTA) will enable us to probe
the cosmological evolution of SMBHs in the current framework of cosmology
\citep{Sesana2008MNRAS.390..192S,Bonetti2018MNRAS.477.3910B,Bonetti2018MNRAS.477.2599B,Bonetti2019MNRAS.486.4044B,Inayoshi2018ApJ...863L..36I,Luo2016CQGra..33c5010L}.
Giant elliptical galaxies, the most massive objects in the local universe, have experienced
a large number of merger events, predominantly minor and dry (i.e., gas-poor) mergers at lower redshifts ($z<2$),
where their star formation activities ceased \citep{Thomas2005ApJ...621..673T}.
In gas-poor environments, multi-body BH interactions
would be one plausible way
to make BHs coalesce within a short timescale.
Because of the nature of multi-body interactions, less massive objects are likely to be ejected from the core,
leaving behind more massive binaries \citep{Bonetti2018MNRAS.477.3910B, Ryu2018MNRAS.473.3410R}.
Those ejected BHs with high velocities comparable to the escape speed from the galactic cores plunge
into diffuse hot gas in the galactic outskirts and orbit as wandering BHs
\citep{Zivancev&Ostriker2020arXiv200406083Z}.
Similarly, the anisotropic emission of GWs (or `gravitational recoil') during the final coalescence of
two SMBHs would make the merger remnant offset from the centers of the host galaxies
\citep{Bekenstein1973ApJ...183..657B,Campanelli2005CQGra..22S.387C,Campanelli2007ApJ...659L...5C,Campanelli2007PhRvL..98w1102C,Lousto2012PhRvD..85h4015L,Fragione2020arXiv200601867F}.
Wandering BHs can also be populated by minor galaxy mergers with significantly low mass ratios ($q\ll 0.01$)
and could fail to reach the galactic center within a Hubble time owing to slow dynamical friction
\citep{Schneider2002ApJ...571...30S,Bellovary2010ApJ...721L.148B, Tremmel2018MNRAS.475.4967T}.
However, those BHs are significantly less massive compared to the population ejected via dynamical processes
from the galactic centers.
Ejected BHs, depending on the velocity, are bound within the galactic halo potential and orbit in diffuse gas
at velocities of $v_\infty \simeq \sigma_\star$, where $\sigma_\star$ is the stellar velocity dispersion.
When a BH with a mass of $M_\bullet$ is moving through a fixed medium (or, equivalently, stays fixed in a moving medium), mass accretion onto the BH begins within a characteristic radius, inside which the gravitational binding energy exceeds the sum of the kinetic and thermal energy of the gas.
The so-called Bondi-Hoyle-Littleton (BHL) radius is given by
\begin{equation}
R_{\rm BHL}\equiv\frac{GM_{\bullet}}{c_\infty^2+v_\infty^2},
\label{eq:BHLradius}
\end{equation}
\citep{Bondi1952spherically}, where $G$ is the gravitational constant and $c_\infty$ is the sound speed of gas incoming from infinity.
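To get a feel for the scales involved, Eq.~(\ref{eq:BHLradius}) can be evaluated for illustrative parameters. The sketch below is ours; the mean molecular weight $\mu=0.6$ and the transonic velocity are assumptions, with $\gamma=1.6$ as adopted in the simulations described later.

```python
import math

# Back-of-the-envelope scale of the BHL radius (cgs units).
# Plasma values are illustrative for hot gas in galactic outskirts;
# mu = 0.6 is an assumed mean molecular weight.
G = 6.674e-8                   # gravitational constant [cm^3 g^-1 s^-2]
kB, mp = 1.381e-16, 1.673e-24  # Boltzmann constant, proton mass
Msun = 1.989e33                # solar mass [g]
pc = 3.086e18                  # parsec [cm]

M_bh = 1e7 * Msun
T, mu, gamma = 1e7, 0.6, 1.6   # temperature [K]; gamma as in the simulations
c_inf = math.sqrt(gamma * kB * T / (mu * mp))  # adiabatic sound speed
v_inf = c_inf                  # a transonic wanderer, Mach number ~ 1

R_BHL = G * M_bh / (c_inf**2 + v_inf**2)       # ~ 0.1 pc for these numbers
```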
In the classical picture, the incoming laminar flow develops a bow shock in front of the BH
and accretes to the hole from the backward direction.
However, 3D numerical simulations find that the symmetric accretion behavior is broken by
the instability at the shock front, leading to highly turbulent flows (see a review of earlier studies
in \citealt{Edgar2004NewAR..48..843E}).
In the presence of a density gradient in the inflowing gas, non-zero angular momentum is carried with accreting
turbulent matter and a disk-like structure forms around the BH \citep{Xu2019MNRAS.488.5162X}.
Generally, the outskirts of massive galaxies are filled with hot and diffuse plasma with
a density of $n_{\rm e}\simeq 0.1~{\rm cm}^{-3}$ and temperature of $T\simeq 10^7~{\rm K}$ \citep[e.g.,][]{Russell2013MNRAS.432..530R}.
Since wandering BHs are likely fed with the plasma at significantly low rates, the accretion matter does not cool
via emitting radiation, but forms a geometrically thick and hot disk.
The solution of radiatively inefficient accretion flows (RIAFs) has been found by \citet{Ichimaru1977ApJ...214..840I} and
studied in the subsequent works by \citet{Narayan1994ADAF_ApJ...428L..13N,Narayan1995ADAF_ApJ...444..231N}.
There are several different solutions of RIAFs, depending on what physical processes transport energy
and angular momentum: the advection-dominated accretion flow (ADAF;
\citealt{Narayan1994ADAF_ApJ...428L..13N,Narayan1995ADAF_ApJ...444..231N}),
the convection-dominated accretion flow (CDAF; \citealt{Narayan2000CDAF_ApJ...539..798N,
Quataert&Gruzinov2000ApJ...539..809Q}), and the adiabatic inflow-outflow solution (ADIOS;
\citealt{Blandford&Begelman1999MNRAS.303L...1B, Blandford&Begelman2004MNRAS.349...68B}).
In addition, numerical simulations suggest that the properties of the accretion flow are affected by
the choice of the initial conditions and boundary conditions \citep[e.g.,][]{Inayoshi2018low}.
When the gas is weakly bound to the central BH and turbulent, with a wide range of specific angular momentum
as expected for mass accretion onto a wandering BH, the overall properties of accretion differ from
those of the known solutions.
Detecting a population of wandering BHs in the outskirts of massive galaxies is a missing link in the above scenario.
Since the electromagnetic emission from such a BH population is expected to be weak, it is difficult to identify the presence of
accreting BHs \citep[e.g.,][]{Ho2008nuclear}.
For low-luminosity active galactic nuclei (AGNs) with radiative luminosities significantly lower than
the Eddington value of $L_{\rm Edd}$,
the commonly used diagnostics with optical lines are not useful \citep{Schulze2010A&A...516A..87S}.
Previous studies have focused on X-rays from low-luminosity accreting BHs
\citep[e.g.,][]{Fujita2008ApJ...685L..59F, Fujita2009ApJ...691.1050F, Zivancev&Ostriker2020arXiv200406083Z}.
However, a long exposure time (hours) is generally required to search for and detect such dim X-ray sources
even at modest distances.
Observationally, the nuclear emission from low-luminosity AGNs is produced by synchrotron radiation
that has a peak energy between the radio and far-infrared bands \citep[e.g.,][]{Ho1999ApJ...516..672H, Ho2008nuclear}.
The level of radio-loudness scales inversely with the AGN activities, namely, the Eddington ratio
\citep{Ho2002ApJ...564..120H, Sikora2007ApJ...658..815S}.
That spectral feature is also seen in the nearest SMBH, Sagittarius A$^\star$, whose activity is known to
be very quiescent at present ($L_{\rm bol}/L_{\rm Edd} \sim 10^{-8}$; \citealt{Narayan1998ApJ...492..554N}).
The radio emission is considered to be produced from the accretion flow on the nuclear BH
and/or by relativistic jets \citep{Narayan1995Natur.374..623N, Mahadevan1997ApJ...477..585M,
Falcke2000A&A...362..113F, Yuan2004ApJ...606..894Y, Yuan2014ARA&A..52..529Y}.
Recent magnetohydrodynamical simulations that treat electron thermodynamics and
frequency-dependent radiation transport suggest that synchrotron radiation dominates the spectra of
accretion flows at rates of $\dot{M}_\bullet \ll 10^{-5}~\dot{M}_{\rm Edd}$
(\citealt{Ryan2017ApJ...844L..24R}; see also \citealt{Moscibrodzka2011ApJ...735....9M}),
where the Eddington accretion rate is defined as $\dot{M}_{\rm Edd}\equiv 10~L_{\rm Edd}/c^2$.
Motivated by this background, in this paper we investigate the dynamics of low-density
accretion flows onto a moving BH and estimate the BH feeding rate,
performing 3D hydrodynamical simulations.
We apply the simulation results to BHs wandering at the outskirts of massive galaxies filled by hot and diffuse plasma. With a semi-analytical two-temperature disk model describing RIAFs onto BHs, we estimate that the radiation spectra have a peak in the millimeter band, where the Atacama Large Millimeter/submillimeter Array (ALMA) has the highest sensitivity and spatial resolution.
Millimeter observations with the ALMA and future facilities such as the next generation VLA (ngVLA)
\footnote{https://ngvla.nrao.edu/}
will enable us to hunt for a population of wandering BHs.
The rest of this paper is organized as follows.
In \sect\ref{sec:method}, we describe the methodology of our numerical simulations.
In \sect\ref{sec:result}, we show our simulation results and explain their physical properties.
In \sect\ref{sec:discussion}, we present the radiation spectra of wandering BHs that accrete
gas at the outskirts of different types of galaxies and discuss their detectability.
We summarize our conclusions in \sect\ref{sec:sum}.
\vspace{5mm}
\section{Methodology}\label{sec:method}
We solve the 3D hydrodynamical equations using the open source code PLUTO \citep{Mignone2007PLUTO}.
The basic equations are the equation of continuity,
\begin{equation}
\frac{{\rm d}\rho}{{\rm d}t}+\rho\nabla\cdot\boldsymbol{v}=0,
\end{equation}
and the equation of motion,
\begin{equation}
\rho\frac{{\rm d}\boldsymbol{v}}{{\rm d}t}=-\nabla p -\rho\nabla\Phi,
\end{equation}
where $\rho$ is the density, $\boldsymbol{v}$ is the velocity, $p$ is the gas pressure, and
the gravitational potential is set to $\Phi=-GM_{\bullet}/r$, with $r$ the distance from the central BH.
The time derivative is the Lagrangian derivative, given by ${{\rm d}}/{{\rm d}t}=\partial/\partial t + \boldsymbol{v}\cdot\nabla$.
We solve the energy equation
\begin{equation}
\rho\frac{{\rm d}e}{{\rm d}t}= - p \nabla \cdot \boldsymbol{v},
\end{equation}
where $e$ is the internal energy per mass.
The equation of state of the ideal gas is assumed as $p = (\gamma - 1)\rho e$,
where the adiabatic index is set to $\gamma = 1.6$.
We introduce basic dimensionless physical quantities that characterize accretion systems of a BH with a mass of $M_\bullet$
moving at a velocity of $v_\infty$.
If radiative and mechanical feedback associated with BH feeding are negligible, mass accretion begins from the BHL radius
(see Eq. \ref{eq:BHLradius}) and the standard expression of the accretion rate is given by
\begin{equation}
\dot{M}_{\rm BHL}
= \frac{4\pi G^2M^2_{\bullet}\rho_\infty}{c_\infty^3(1+\mathcal{M}^2)^{3/2}},
\end{equation}
where $\mathcal{M}(=v_\infty/c_\infty)$ is the Mach number and $\rho_\infty$ is the density of the ambient gas
\citep{Shima1985MNRAS.217..367S,Ruffert1994ApJ...427..351R}.
The accretion rate normalized by the Eddington rate is given by
\begin{align}
\dot{m}_{\rm BHL}
& \simeq 1.5 \times10^{-6}~(1+\mathcal{M}^2)^{-3/2} \nonumber \\
& \times
\left(\frac{M_\bullet}{10^{7}~{M_\odot}}\right)
\left(\frac{\rho_\infty}{10^{-25}~{\rm g~cm}^{-3}}\right)
\left(\frac{T}{10^7~{\rm K}}\right)^{-3/2}.
\end{align}
Throughout this paper, we focus on accretion flows at a low rate of $\dot{m}_{\rm BHL}\ll 10^{-4}$,
where the gas adiabaticity holds without radiative cooling, and ensure that our numerical results are scale-free.
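As a cross-check (ours, not part of the simulation pipeline), the coefficient in the normalized rate above can be reproduced directly from the definitions of $\dot{M}_{\rm BHL}$ and $\dot{M}_{\rm Edd}$; the mean molecular weight $\mu=0.6$ is an assumption.

```python
import math

# Reproducing the coefficient of the Eddington-normalized BHL rate
# (cgs constants; assumed mu = 0.6, gamma = 1.6 as in the simulations).
G, c, sigma_T = 6.674e-8, 2.998e10, 6.652e-25
kB, mp, Msun = 1.381e-16, 1.673e-24, 1.989e33

M_bh = 1e7 * Msun              # fiducial scalings quoted in the text
rho_inf = 1e-25                # ambient density [g cm^-3]
T, mu, gamma = 1e7, 0.6, 1.6
c_inf = math.sqrt(gamma * kB * T / (mu * mp))

L_Edd = 4 * math.pi * G * M_bh * mp * c / sigma_T   # Eddington luminosity
Mdot_Edd = 10 * L_Edd / c**2                        # Eddington accretion rate
Mdot_BHL = 4 * math.pi * G**2 * M_bh**2 * rho_inf / c_inf**3  # Mach -> 0 limit
mdot = Mdot_BHL / Mdot_Edd     # ~ 1.5e-6, matching the quoted coefficient
```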
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{fig_rho_initial.pdf}
\caption{
Initial density distribution on the $x-y$ plane ($\theta=\pi/2$) given by Eq. (\ref{eq:initial_rho}) with
$\mathcal{M}=2$ and $\lambda=5R_{\rm BHL}$.
The flow has a uniform velocity field of $\boldsymbol{v} =-v_\infty \boldsymbol{\hat{x}}$.
}
\label{fig:ini_rho}
\vspace{5mm}
\end{figure}
To compute the basic equations, we employ spherical coordinates (the position of the BH is the coordinate origin)
in a three-dimensional computational domain of $r_{\rm in} \le r \le r_{\rm out}$, $\epsilon\leq \theta \leq \pi - \epsilon$, and $0\le\phi\le2\pi$,
where $r_{\rm in }=0.08~R_{\rm BHL}, r_{\rm out }=16~R_{\rm BHL}$ in our fiducial cases and
$\epsilon$ is set to 0.001 to avoid numerical singularity at the poles.
We set up logarithmically spaced grids in the radial direction and uniformly spaced grids in the $\theta$- and $\phi$-directions.
The number of grid points of our standard resolution is set to $N_r\times N_\theta \times N_\phi = 128\times 128\times 128$.
We also run simulations with a lower resolution ($N_r\times N_\theta \times N_\phi = 64\times 64\times 64$)
and with larger values of $r_{\rm in}$, in order to check the convergence of the simulation results.
As initial conditions, we set a uniform velocity field of $\boldsymbol{v} =-v_\infty \boldsymbol{\hat{x}}$,
where $\boldsymbol{\hat{x}}$ is the normal vector along the $x$-axis ($\theta=\pi/2,\phi=0$).
The density distribution is given by
\begin{equation}
\label{eq:initial_rho}
\frac{\rho}{\rho_\infty}=
1+A\exp\left[-\frac{R^2_{\rm BHL}}{(x-x_0)^2}\right]\sin\left(\frac{2\pi y}{\lambda}\right)\cos\left(\frac{2\pi z}{\lambda_0}\right),
\end{equation}
where the amplitude of fluctuation is set to $A=0.99$ at $x>2R_{\rm BHL}$, $|y|<\lambda$,
and $|z|<\lambda/4$, and $A=0$ elsewhere.
The characteristic wavelengths along the $y$- and $z$-directions, which are perpendicular to the $x$-axis, are denoted by $\lambda$ and $\lambda_0$ (fixed to $5~R_{\rm BHL}$), and we set $x_0=2~R_{\rm BHL}$.
We also impose a pressure equilibrium within the density bumps ($p_\infty=\rho_\infty c_\infty ^2/\gamma$)
to prevent the bumpy structure from being smeared out before entering within the BH gravitational
sphere of influence ($r<R_{\rm BHL}$).
In our simulations, the Mach number $\mathcal{M}$ and wavelength $\lambda$ are free parameters,
which characterize the amount of angular momentum supplied to the vicinity of the BH.
As an example, Figure~\ref{fig:ini_rho} shows the initial density distribution for the case with
$\mathcal{M}=2$ and $\lambda=5R_{\rm BHL}$.
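The initial condition of Eq.~(\ref{eq:initial_rho}) is straightforward to reproduce; the sketch below (ours) evaluates it on an arbitrary Cartesian grid in units of $R_{\rm BHL}$ and only illustrates the masking of the fluctuation amplitude.

```python
import numpy as np

# Sketch (ours) of the initial density field on a Cartesian grid;
# lengths are in units of R_BHL and the grid size is arbitrary.
R_BHL, lam, lam0, x0, A0 = 1.0, 5.0, 5.0, 2.0, 0.99   # run A2 values

x, y, z = np.meshgrid(np.linspace(-16.0, 16.0, 64),
                      np.linspace(-16.0, 16.0, 64),
                      np.linspace(-16.0, 16.0, 64), indexing="ij")

# The fluctuation amplitude is nonzero only inside the bump region.
A = np.where((x > 2.0 * R_BHL) & (np.abs(y) < lam) & (np.abs(z) < lam / 4.0),
             A0, 0.0)
rho = 1.0 + A * (np.exp(-R_BHL**2 / (x - x0)**2)
                 * np.sin(2.0 * np.pi * y / lam)
                 * np.cos(2.0 * np.pi * z / lam0))
```

Outside the bump region the flow stays uniform ($\rho=\rho_\infty$), while inside it the density varies smoothly between $(1\pm A)\rho_\infty$.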
\begin{table}[t]
\caption{\label{tab:para}
Parameters of the models.}
\begin{ruledtabular}
\begin{tabular} {lcccc}
Name &$\mathcal{M}$ &$\lambda(R_{\rm BHL}) $ & $r_{\rm in}(R_{\rm BHL})$ & $N_r \times N_\theta \times N_\phi$ \\
\colrule
A2 &$2$ & $5$ & 0.08 & $128^3$ \\
B2 &$2$ & $2.5$ & 0.08 & $128^3$ \\
A1 &$1$ & $5$ & 0.08 & $128^3$ \\
B1 &$1$ & $2.5$ & 0.08 & $128^3$ \\
A0 &$0.5$ & $5$ & 0.08 & $128^3$ \\
B0 &$0.5$ & $2.5$ & 0.08 & $128^3$ \\
A2r2 &$2$ & $5$ & 0.16 & $128^3$ \\
A2r4 &$2$ & $5$ & 0.32 & $128^3$ \\
A2low &$2$ & $5$ & 0.08 & $64^3$ \\
\end{tabular}
\tablecomments{Column (1): simulation name. Column (2): Mach number. Column (3): wavelength of fluctuation normalized by $R_{\rm BHL}$.
Column (4): position of the radial inner boundary normalized by $R_{\rm BHL}$. Column (5): cell numbers.
}
\end{ruledtabular}
\vspace{5mm}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{fig_snapshot.pdf}
\caption{
Snapshots of accretion flow for the fiducial simulation A2 at three different elapsed times of $t=2$, $5$, and $35~t_{\rm BHL}$, sliced at $z=0$.
From the left to the right, we show the gas density $\rho/\rho_\infty$, temperature $T/T_\infty$, Mach number $\mathcal{M}$,
radial velocity $v_r$ normalized by free-fall velocity $v_{\rm ff}=\sqrt{2GM_\bullet/r}$, and azimuthal velocity $v_\phi$ normalized by
the Keplerian velocity $v_{\rm Kep}=\sqrt{GM_\bullet/r}$.
The accretion flow is symmetric with respect to $y=0$ in the early stage ($t=2~t_{\rm BHL}$), forms a spiral structure owing to
angular momentum supply in the middle stage ($t=5~t_{\rm BHL}$), and becomes highly turbulent in the late stage ($t=35~t_{\rm BHL}$).
%
}
\label{fig:snapshot}
\vspace{5mm}
\end{figure*}
The outer boundary ($r=r_{\rm out}$) is divided into the upstream side ($0\leq \phi<\pi/2$ and $3\pi/2 \leq \phi \leq 2\pi$) and
downstream side ($\pi/2\leq\phi<3\pi/2$).
At the upstream side, we inject gas inflow at a velocity of $v_\infty$ with density set by Eq. (\ref{eq:initial_rho}).
We adopt the outflow boundary condition at the outermost grid (downstream side) and innermost grid \citep{1992ApJS...80..753S},
where zero gradients crossing the boundary are imposed on physical quantities in order to avoid spurious reflection of wave energy at the boundary.
At the inner boundary, $v_r \leq 0$ is imposed (i.e., inflowing gas from ghost cells is prohibited).
We also set a continuous condition on the poles ($\theta=0$ and $\pi$) to avoid the unphysical singularity.
In the continuous condition, the values in the ghost cells are copied from the grids on the other side of the pole
and the signs of $v_\theta$ and $v_\phi$ are flipped \citep{Athena2019ascl.soft12005S}.
We test that for $M_\bullet =0$ (i.e., no gravitational force) the density perturbations are advected, keeping the bumpy structure
from the upstream ($x>0$) to the downstream ($x<0$) side without numerical diffusion and reflection due to numerical artifacts.
In Table~\ref{tab:para}, we summarize the simulation parameters we investigate in this paper.
We study the dynamics of mildly sub/supersonic gas flows ($0.5 \leq \mathcal{M} \leq 2$) because those are relevant
to the case of BHs wandering in the outskirts of galaxies accreting hot plasma (see discussion in \S\ref{sec:discussion}).
The characteristic scale of density fluctuation $\lambda$ is set to $\sim O(R_{\rm BHL})$, in order to study the effect of
disk formation caused by advection of angular momentum within $R_{\rm BHL}$ (in the limit of $\lambda \gg R_{\rm BHL}$,
the flow pattern approaches the classical BHL accretion).
To see the impact of our choice of $\lambda$, we consider two cases with $\lambda = 5R_{\rm BHL}$ (run A)
and $2.5R_{\rm BHL}$ (run B).
We also check the dependence on $r_{\rm in}$ for simulation A2.
All the simulations last until $t=40t_{\rm BHL}$, where $t_{\rm BHL} \equiv R_{\rm BHL} / (c_\infty ^2 + v_\infty ^2)^{1/2}$
is the characteristic dynamical timescale.
\section{Results} \label{sec:result}
\subsection{Overview of the simulations}
\label{sec:overview}
First, we discuss our fiducial case of A2, where a massive BH moves at a constant velocity with a Mach number $\mathcal{M}=2$
into hot plasma that has a density fluctuation with a characteristic wavelength $\lambda=5~R_{\rm BHL}$.
In Figure~\ref{fig:snapshot}, we show the two-dimensional snapshots of the accretion flow at the plane of $z=0$
(i.e., perpendicular to the net angular momentum vector)
at three different elapsed times of $t/t_{\rm BHL}=2$, $5$, and $35$.
In the early stage ($t=2~t_{\rm BHL}$), the supersonic gas flow is attracted by the gravitational force of the BH and forms a bow shock
with a symmetric structure in front of the BH.
As the density fluctuations reach within the BH influence radius ($t=5~t_{\rm BHL}$; middle panels),
two streams both from $y>0$ and $y<0$ collide at $y=0$ and dissipate the linear momentum parallel to the $y$-axis.
Because of the density asymmetry, however, non-zero angular momentum is left behind the colliding flows, and thus
the denser flow from $0<y/R_{\rm BHL}<0.7$ accretes onto the BH, forming spiral arms and shocks.
In the late stage after several dynamical timescales (represented at $t=35~t_{\rm BHL}$; bottom panels),
the laminar flow with a spiral structure turns chaotic and turbulent.
Since the gas is adiabatically compressed owing to the lack of radiative cooling, thermal pressure is not negligible.
Therefore, the turbulent flow becomes subsonic ($\mathcal{M}_{\rm tur} \approx 0.5$; third column) and the rotational velocity
is sub-Keplerian ($v_\phi /v_{\rm Kep}\approx 0.3$; fifth column).
In this turbulent stage, the BH is fed not only through the disk but also by free-falling gas with substantially small angular momentum.
\begin{figure*}[t]
\centering
\includegraphics[width=0.7\linewidth]{fig_accretion_rate.pdf}
\caption{
Time evolution of mass accretion rate $\dot{M}/\dot{M}_{\rm BHL}$ (top panel) and mean specific angular momentum
to the $z$-direction $\bar{j}_z/ j_{\rm Kep}$ of accreted materials (bottom panel) for six simulations with different values of
$\mathcal{M}$ and $\lambda$.
Our fiducial model is shown by thick blue curves.
The accretion rates rise to $\sim 0.5~\dot{M}_{\rm BHL}$ at the beginning and drop down to $0.1-0.2~\dot{M}_{\rm BHL}$
at $t>5-10~t_{\rm BHL}$, when density fluctuations enter within $R_{\rm BHL}$ and begin to form a rotating disk
($\bar{j}_z \simeq 0.4~j_{\rm Kep}$).
For all the cases, the accretion flows are in quasi-steady states.
}
\label{fig:acc_rate}
\vspace{5mm}
\end{figure*}
Figure~\ref{fig:acc_rate} shows the time evolution of gas accretion rate $\dot{M}/\dot{M}_{\rm BHL}$ through
the sink cell at $r=r_{\rm in}$ (top panel) and mean specific angular momentum to the $z$-direction
$\bar{j}_z/ j_{\rm Kep}$ of the accreted mass (bottom panel).
Blue solid curves correspond to our fiducial case.
The accretion rate rises up to the BHL rate by $t=2~t_{\rm BHL}$ and drops to $\sim 0.1~\dot{M}_{\rm BHL}$
when density bumps enter within the BH influence radius and supply angular momentum of $\gtrsim 0.8~j_{\rm Kep}$ into the accreting matter.
At $t>10~t_{\rm BHL}$, mass accretion approaches a quasi-steady state at a mean rate of $\langle \dot{M} \rangle \approx 0.15~\dot{M}_{\rm BHL}$,
though the angular momentum of the accreting matter has a large fluctuation with a mean value of $\langle \bar{j}_z \rangle /j_{\rm Kep}\approx 0.25$.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{fig_accretion_dist.pdf}
\caption{
Frequency distribution of mass accretion rates $\dot{M}/\dot{M}_{\rm BHL}$ (left panel) and mean specific angular momentum
$\bar{j}_z/j_{\rm Kep}$ of accreted materials (right panel) during the time interval of $10\leq t/t_{\rm BHL} \leq 40$.
Each curve corresponds to the case with $\mathcal{M}$ and $\lambda$ denoted in the left panel.
With higher $\mathcal{M}$, the peak value of mass accretion rate decreases and its distribution becomes wider (i.e., the flow becomes
more unstable and turbulent).
With smaller $\lambda$, the accretion rate and absolute values of angular momentum do not change significantly, but the sign of angular
momentum flips due to colliding flows with different specific angular momentum (see Figure \ref{fig:snap_AB1}).
}
\label{fig:acc_dist}
\vspace{5mm}
\end{figure*}
Figure~\ref{fig:acc_rate} also shows the dependence of the accretion flow and its angular momentum on Mach number ($\mathcal{M}=0.5$, $1.0$, and $2.0$)
and wavelength of the density fluctuation ($\lambda/R_{\rm BHL}=2.5$ and $5.0$), respectively.
For all the cases, the overall behavior of the accretion flow is qualitatively similar to that in our fiducial case:
the accretion rate initially increases to $\sim \dot{M}_{\rm BHL}$ and decreases to a quasi-steady value after the density bumps carry
angular momentum within $R_{\rm BHL}$.
In Figure~\ref{fig:acc_dist}, we also calculate the frequency distribution of $\log_{10}(\dot{M}/\dot{M}_{\rm BHL})$ and $\bar{j}_z/j_{\rm Kep}$
during the quasi-steady state.
With a higher Mach number, the average accretion rate in the quasi-steady state tends to be lower:
$\langle \dot{M} \rangle /\dot{M}_{\rm BHL} \simeq 0.25$, $0.20$, and $0.15$ in the simulations of A0, A1, and A2, respectively.
The angular momentum of accreting matter weakly depends on the Mach number, and the peak value is kept at
$\langle \bar{j}_z \rangle \approx 0.3~j_{\rm Kep}$.
Besides, as shown in Figure~\ref{fig:acc_dist} (solid curves), the width of the distributions becomes wider as the Mach number increases.
This indicates that the accretion flow becomes more unstable and turbulent for higher values of $\mathcal{M}$.
Note that since $\dot{M}_{\rm BHL}\propto (1+\mathcal{M}^2)^{-3/2}$, combined with the lower quasi-steady fraction $\langle \dot{M} \rangle/\dot{M}_{\rm BHL}$, the accretion rate is reduced by a factor of $\simeq 13.3$ from
the A0 run ($\mathcal{M}=0.5$) to the A2 run ($\mathcal{M}=2$).
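The origin of this factor can be sketched as follows (our interpretation, assuming a fixed $c_\infty$ and the quasi-steady fractions quoted above):

```python
# A sketch of where the factor ~13.3 comes from: the BHL normalization at
# fixed c_inf, combined with the quasi-steady fractions quoted in the text.
mach_A0, mach_A2 = 0.5, 2.0
frac_A0, frac_A2 = 0.25, 0.15      # <Mdot>/Mdot_BHL for runs A0 and A2

bhl_A0 = (1.0 + mach_A0**2) ** -1.5
bhl_A2 = (1.0 + mach_A2**2) ** -1.5
ratio = (frac_A0 * bhl_A0) / (frac_A2 * bhl_A2)   # = 40/3 ~ 13.3
```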
\begin{figure*}[t]
\centering
\includegraphics[width=0.7\linewidth]{fig_snapshot_AB1.pdf}
\caption{
Snapshots of the density distribution and velocity vectors of accretion flow for simulations A1 (left) and B1 (right)
at $t=20~ t_{\rm BHL}$, sliced at $z=0$.
With the larger $\lambda$ (left), the angular momentum of the flow within $\sim R_{\rm BHL}$ (white circle) is dominated by
the incoming stream from $0\leq y \leq \lambda/2$, leading to $j_{\rm z}>0$.
With the smaller $\lambda$ (right), the net angular momentum is determined by complex collisions of flows with different
angular momentum, inducing the flip of the rotating direction.
}
\label{fig:snap_AB1}
\vspace{5mm}
\end{figure*}
With a shorter wavelength of density fluctuation, the flow pattern becomes more complex,
although the absolute values of accretion rates and angular momentum do not change significantly (dashed curves in Figures \ref{fig:acc_rate} and \ref{fig:acc_dist}).
In Figure~\ref{fig:snap_AB1}, we show the distribution of the gas density and velocity vector at an elapsed time of $t=20~t_{\rm BHL}$ for the A1 (left) and B1 (right) runs, respectively.
When the half wavelength is sufficiently larger than $R_{\rm BHL}$ as shown in the left panel, the incoming stream from $0\leq y \leq \lambda/2$ supplies mass and
angular momentum with $j_{\rm z}>0$ (i.e., the counterclockwise direction) within $R_{\rm BHL}$.
On the other hand, in the right panel, the incoming stream from $-\lambda\leq y \leq -\lambda/2$ carries a larger amount of angular momentum and flips
the direction of angular momentum (see also the bottom panel of Figure~\ref{fig:acc_rate} at $t=20~t_{\rm BHL}$).
As a result of the flow collisions around $\sim R_{\rm BHL}$, the accretion flow turns highly turbulent, and thus the angular momentum distribution becomes wider.
\subsection{The properties of the accretion flows}\label{sec:disk_property}
Next, we describe the properties of the accretion flow onto a moving BH, considering the time-averaged profiles of physical quantities.
In the following, we show time-averaged values over $10\leq t/t_{\rm BHL} \leq 40$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig_massflux_Ma.pdf}
\caption{
Time-averaged radial profiles of the mass inflow (dashed), outflow (dotted), and net (solid) accretion rate
for the A0 (red), A1 (green), and A2 (blue) simulations.
Within the BH influence radius ($r<R_{\rm BHL}$), the inflow rate follows $\dot{M}_{\rm in} \propto r^{1/2}$
and the net accretion rate becomes a constant value.
}
\label{fig:massflux_Ma}
\vspace{5mm}
\end{figure}
Figure~\ref{fig:massflux_Ma} shows the radial structure of the angle-integrated mass inflow (dashed) and outflow (dotted) rates for
the A0, A1, and A2 simulations.
These rates are defined as
\begin{equation}
\dot{M}_{\rm in}= -\int_0^{2\pi}\int_0^{\pi} \langle \rho \cdot \min(v_r,0) \rangle r^2 \sin \theta d\theta d\phi,
\end{equation}
\begin{equation}
\dot{M}_{\rm out}= \int_0^{2\pi}\int_0^{\pi} \langle \rho \cdot \max(v_r,0) \rangle r^2 \sin \theta d\theta d\phi,
\end{equation}
where $\langle \cdot\cdot\cdot \rangle$ means the time-averaged value.
We also define the net accretion rate by $\dot{M} \equiv \dot{M}_{\rm in}- \dot{M}_{\rm out}$ (solid).
Note that both the inflow and outflow rates are proportional to the area ($\propto r^2$) at larger radii where
a uniform medium moves with a constant velocity without being affected by the gravitational force of the BH\footnote{
The time-averaged values of $\dot{M}$ at larger radii do not converge to zero because the flows at $|x|\gg R_{\rm BHL}$ are not fully symmetric
within $t<40~t_{\rm BHL}$.}.
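The discretization of these integrals on a spherical shell can be sketched as follows; this minimal version (ours) uses a uniform angular grid, and the test fields are placeholders rather than the PLUTO data.

```python
import numpy as np

# Minimal discretization of the angle-integrated inflow/outflow rates
# on a single spherical shell (uniform theta/phi grid; placeholder fields).
def shell_rates(rho, v_r, r, theta, phi):
    """rho, v_r: 2D arrays sampled on the (theta, phi) grid at radius r."""
    dtheta = theta[1] - theta[0]
    dphi = phi[1] - phi[0]
    dA = r**2 * np.sin(theta)[:, None] * dtheta * dphi   # surface element
    mdot_in = -np.sum(rho * np.minimum(v_r, 0.0) * dA)   # inflow: v_r < 0
    mdot_out = np.sum(rho * np.maximum(v_r, 0.0) * dA)   # outflow: v_r > 0
    return mdot_in, mdot_out

# Sanity check: uniform radial inflow gives Mdot_in = 4*pi*r^2*rho*|v_r|
# and zero outflow.
n = 128
theta = (np.arange(n) + 0.5) * np.pi / n
phi = (np.arange(n) + 0.5) * 2.0 * np.pi / n
mdot_in, mdot_out = shell_rates(np.ones((n, n)), -np.ones((n, n)),
                                r=2.0, theta=theta, phi=phi)
```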
Within the BH influence radius ($r<R_{\rm BHL}$), the mass inflow rate starts to deviate from $\dot{M}_{\rm in}\propto r^2$ and approaches
$\dot{M}_{\rm in}\propto r^{1/2}$, while the outflow rate decreases toward the center.
As a result, the net accretion rate is nearly constant, and the accretion system is in a quasi-steady state.
The radial dependence of the mass inflow rate $\dot{M}_{\rm in}\propto r^{1/2}$ is consistent with the result of simulations
where mass accretion with a broad range of angular momentum occurs
\citep{Ressler2018MNRAS.478.3544R, Xu2019MNRAS.488.5162X}.
We note that this accretion solution is different from those of self-similar RIAF solutions for a static BH (see also discussion below):
$\dot{M}_{\rm in}\propto r^{0}$ (ADAF; \citealt{Narayan1995ApJ...452..710N}) and $\dot{M}_{\rm in}\propto r$ (CDAF; \citealt{Quataert&Gruzinov2000ApJ...539..809Q}, \citealt{Inayoshi2018low}).
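A quick scaling check (ours) confirms that $\dot{M}_{\rm in}\propto r^{1/2}$ is exactly what follows from a density profile $\rho\propto r^{-1}$ together with an inflow speed that scales with the Keplerian velocity (cf. Figure~\ref{fig:disk_pro}).

```python
import numpy as np

# Scaling check (ours): with rho ~ r^-1 and inflow speed ~ v_Kep ~ r^-1/2,
# the inflow rate Mdot_in ~ 4*pi*r^2*rho*v scales as r^1/2.
r = np.logspace(-1.0, 0.0, 50)        # radii in units of R_BHL
rho = r ** -1.0                       # measured density profile
v_in = 0.4 * r ** -0.5                # a fixed fraction of v_Kep (assumed)
mdot_in = 4.0 * np.pi * r**2 * rho * v_in

slope = np.polyfit(np.log(r), np.log(mdot_in), 1)[0]   # -> 0.5
```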
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{fig_disk_profile.pdf}
\caption{
Time and angle-averaged radial structures of the density (top), azimuthal velocity (middle), and temperature (bottom)
for different values of $\mathcal{M}$ and $\lambda$.
The density and temperature profiles within $R_{\rm BHL}$ are well approximated by a power-law distribution of $\rho \propto r^{-1}$ and
$T \propto r^{-1}$, respectively.
}
\label{fig:disk_pro}
\vspace{5mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.94\linewidth]{fig_disk_profile_theta.pdf}
\caption{
Same as Figure~\ref{fig:disk_pro}, but for the angular profiles at $r=0.2~R_{\rm BHL}$.
}
\label{fig:disk_pro_theta}
\vspace{5mm}
\end{figure}
In Figure~\ref{fig:disk_pro}, we present the angle-averaged radial profiles of the density, rotational velocity, and temperature for the six models.
For all the cases, the density and temperature begin to increase toward the center within the BH influence radius ($r\lesssim 2~R_{\rm BHL}$),
where the accretion flow forms a sub-Keplerian rotating disk with a mean velocity $|v_\phi|\approx (0.2-0.4)~ v_{\rm Kep}$.
Since the flow is not fully supported by the centrifugal force, the time-averaged inflow velocity is comparable to $\sim (0.3-0.5)v_{\rm Kep}$.
As the inflow rate in the quasi-steady state is approximated as $\dot{M}_{\rm in} \propto r^{1/2}$, the density follows $\rho \propto r^{-1}$ (see the top panel of Figure \ref{fig:disk_pro}).
Since radiative cooling is neglected in our simulations, the accretion flow is adiabatically compressed by the gravity of the BH and
the temperature increases to the center following $T\propto r^{-1}$, as expected from energy conservation.
Note that this treatment is valid only when the BH is embedded in a low-density diffuse plasma so that the radiative cooling time is longer
than the dynamical timescale at $r\simeq R_{\rm BHL}$ (and the orbital timescale for wandering BHs at the outskirts of galaxies; see \S\ref{sec:discussion}).
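The quoted scalings can be cross-checked with simple exponent bookkeeping. The sketch below is illustrative, not simulation output; it assumes (as measured above) that the inflow speed stays a fixed fraction of the Keplerian velocity:

```python
# Exponent bookkeeping for the quasi-steady inflow:
# with rho ∝ r^{-1} and |v_in| ∝ v_Kep ∝ r^{-1/2},
# the inflow rate Mdot_in ~ 4 pi r^2 rho |v_in| scales as r^{1/2}.
p_rho = -1.0    # density exponent measured within R_BHL
p_v = -0.5      # inflow-velocity exponent (fixed fraction of v_Kep)
p_mdot = p_rho + p_v + 2.0
print(p_mdot)   # 0.5, consistent with Mdot_in ∝ r^{1/2}
```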
In Figure~\ref{fig:disk_pro_theta}, we show the time-averaged angular profiles at $r=0.2~R_{\rm BHL}$ for the same physical quantities shown in Figure~\ref{fig:disk_pro}.
Although the density and rotational velocity increase around the equatorial plane, the accretion flow is no longer a geometrically thin disk structure.
The power-law density profile ($\rho\sim r^{-1}$) is qualitatively different from those of known RIAFs:
$\rho \sim r^{-3/2}$ for ADAF solutions \citep{Narayan1995ApJ...452..710N} and $\rho\sim r^{-1/2}$ for CDAF solutions
\citep{Quataert&Gruzinov2000ApJ...539..809Q, Inayoshi2018low}.
The overall properties of the accretion flow are similar to those discussed by
\citet{Ressler2018MNRAS.478.3544R} and \citet{Xu2019MNRAS.488.5162X},
where the angular momentum of accretion flows is widely distributed.
Figure~\ref{fig:disk_pro} also shows the dependence of the physical quantities on the Mach number and wavelength of density fluctuation.
While the density and temperature hardly depend on the choice of $\lambda$, the density decreases and temperature increases with higher values of $\mathcal{M}$.
The density reduction simply reflects the dependence of $\dot{M}_{\rm in}$ on the Mach number due to the input of different angular momentum within
the BH influence radius, as shown in Figures \ref{fig:acc_rate}, \ref{fig:acc_dist}, and \ref{fig:massflux_Ma}.
We note that the apparent $\mathcal{M}$ dependence of the temperature is not physical, but is an artifact of normalizing the radius by the BHL radius.
For adiabatic gas, the temperature is set by the virial temperature, which is independent of $\mathcal{M}$ at a fixed physical radius:
$T\propto GM/r \propto (1+\mathcal{M}^2)(r/R_{\rm BHL})^{-1}$.
The amplitude of the rotational velocity is a fraction of the Keplerian velocity within $r \lesssim 2~R_{\rm BHL}$,
though the rotation direction is more time-dependent for shorter wavelengths, as shown in Figure~\ref{fig:acc_dist}.
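The normalization artifact can be made explicit with a short sketch; the only inputs are the virial scaling $T\propto GM/r$ and $R_{\rm BHL}=GM/[(1+\mathcal{M}^2)c_\infty^2]$:

```python
# Apparent Mach dependence of T at fixed r/R_BHL (illustrative sketch):
# T/T_inf ~ GM/(r c_inf^2) = (1 + M^2) * (r/R_BHL)^{-1}.
def T_over_Tinf(mach, x):
    """Virial temperature in units of T_inf at normalized radius x = r/R_BHL."""
    return (1.0 + mach**2) / x

# At the same *physical* radius the temperature is Mach-independent, but at
# fixed x = r/R_BHL it appears hotter for larger Mach numbers:
ratio = T_over_Tinf(2.0, 0.2) / T_over_Tinf(0.5, 0.2)
print(ratio)  # 4.0
```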
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig_viscosity.pdf}
\caption{
Radial profiles of the $r-\phi$ component of the Reynolds stress $\tau_{r\phi}$ normalized by $\rho v_{\rm Kep}^2$
for the cases with different Mach numbers.
The solid and dashed curves show positive and negative values, respectively.
Since the Reynolds stress is approximated as $\tau_{r\phi}\propto Ar^{-2}$, where $A$ is positive,
turbulent motions transport angular momentum outward within the BH influence radius.
}
\label{fig:vis}
\vspace{5mm}
\end{figure}
We note that our simulations do not treat an explicit viscosity.
As discussed in previous studies
\citep{Igumenshchev2000ApJS..130..463I, Igumenshchev2000ApJ...537L..27I, Narayan2000CDAF_ApJ...539..798N,
Quataert&Gruzinov2000ApJ...539..809Q, Igumenshchev2003ApJ...592.1042I},
the angular momentum of the accretion flow can be transported by turbulence excited by colliding flows.
To analyze the effect, we calculate the $r-\phi$ component of the mass-weighted Reynolds stress,
\begin{equation}
\tau_{r\phi} = \frac{1}{4\pi} { \int\limits_0^{2\pi} \int\limits_0^{\pi} \langle \rho v'_r v'_\phi \rangle \sin{\theta}{\rm d}\theta {\rm d}\phi},
\end{equation}
where $v'_i\equiv v_i - \langle v_i \rangle$.
In Figure~\ref{fig:vis}, we show the radial profile of the Reynolds stress normalized by $\rho v_{\rm Kep}^2$ for the three cases.
The Reynolds stress increases with the Mach number because the flow is more turbulent, and for $\mathcal{M}\gtrsim 1$ it is approximated by
$\tau_{r\phi}\simeq Ar^{-2}$, where $A$ is positive.
This positive value of $\tau_{r\phi}$ indicates that the turbulent motions transport angular momentum outward.
By analogy with the standard $\alpha$-viscosity model
\citep{Shakura&Sunyaev1973A&A....24..337S},
we define the effective viscous parameter by
\begin{equation}
\hat{\alpha}(r) \equiv \frac{ \int\limits_0^{2\pi} \int\limits_0^{\pi} \langle \rho v'_r v'_\phi \rangle \sin{\theta}{\rm d}\theta {\rm d}\phi}
{\int\limits_0^{2\pi} \int\limits_0^{\pi} \langle \rho c_s^2 \Omega /\Omega_{\rm Kep} \rangle \sin{\theta} {\rm d}\theta {\rm d}\phi},
\end{equation}
to quantify the strength of turbulent viscosity.
In our simulations, we obtain $\hat{\alpha} (r)\simeq 0.2$ within $r\simeq R_{ \rm BHL}$.
Therefore, turbulence transports angular momentum effectively even without MHD effects.
Recently, \citet{Ressler2020MNRAS.492.3272R} found that MHD and pure-HD simulations show similar properties of wind-fed accretion flows onto a BH in a nuclear region. In their situation, similarly to our simulations, mass accretion is enabled by the wide distribution of angular momentum provided by stellar winds, even without efficient angular momentum transport driven by the MRI.
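As an illustration of the shell-averaged diagnostics used here, the Reynolds stress and effective $\hat{\alpha}$ can be computed from $(\theta,\phi)$ arrays of the flow variables. The sketch below uses mock, partially correlated fluctuations; all numerical values are placeholders, not our simulation data:

```python
import numpy as np

# Mock shell data with correlated v_r / v_phi fluctuations (placeholders).
rng = np.random.default_rng(0)
nth, nph = 64, 128
theta = np.linspace(0.0, np.pi, nth)[:, None]
w = np.sin(theta) * np.ones((nth, nph))            # solid-angle weight
base = rng.standard_normal((nth, nph))
rho = 1.0 + 0.1 * rng.standard_normal((nth, nph))
v_r = -0.4 + 0.2 * base                            # correlated component
v_phi = 0.3 + 0.1 * base + 0.1 * rng.standard_normal((nth, nph))
cs2 = 1.0                                          # sound speed squared
om_ratio = 0.3                                     # Omega / Omega_Kep

avg = lambda q: np.sum(q * w) / np.sum(w)          # shell average
vr_f = v_r - avg(v_r)                              # fluctuation v' = v - <v>
vphi_f = v_phi - avg(v_phi)
tau_rphi = avg(rho * vr_f * vphi_f)                # Reynolds stress
alpha_hat = tau_rphi / avg(rho * cs2 * om_ratio)   # effective viscosity
# tau_rphi > 0: correlated fluctuations transport angular momentum outward
```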
\subsection{Dependence on $r_{\rm in}$}
\label{sec:r_in}
Because of limitations in computing time, we do not extend our computational domain down to the BH event horizon scale ($r \sim r_{\rm Sch}$).
Instead, we conduct two additional simulations with different locations of the innermost grid, at $r_{\rm in}/R_{\rm BHL} = 0.16$ and $0.32$.
Figure~\ref{fig:massflux_rin} shows the radial profiles of time-averaged and angle-integrated mass inflow rate (dashed), outflow rate (dotted),
and net accretion rate $\dot{M}$ for each value of $r_{\rm in}$.
Within the BH influence radius, the inflow rate dominates the outflow rate, and the net rate becomes constant for all the cases.
The normalization of the net accretion rate nicely scales with $\dot{M}_{\rm in}(r=r_{\rm in})\propto r_{\rm in}^{1/2}$.
In Appendix~\ref{appendix:toymodel}, we describe the physical reason why the inflow rate depends on $r^{1/2}$ with an analytical model.
In order to check whether radiative cooling matters, we compare the heating timescale to the cooling timescale.
Since $\rho\simeq \rho_\infty(r/R_{\rm BHL})^{-1}$ and $T\simeq T_\infty (r/R_{\rm BHL})^{-1}$, the timescale for free-free emission at the rate of
$Q_{\rm br}^{-}\propto \rho^2T^{1/2} \propto r^{-5/2}$ is estimated as $t_{\rm cool}\propto \rho T/Q_{\rm br}^{-} \propto r^{1/2}$.
Since the main heating source in a RIAF is viscous dissipation, the heating timescale is given by
$t_{\rm vis}\simeq (\gamma(\gamma-1)\hat{\alpha}\Omega)^{-1} \propto r^{3/2}$, where
$\hat{\alpha}\simeq 0.2$ and $\Omega \simeq 0.3~\Omega_{\rm Kep}$ for our case.
Thus, the ratio of the two timescales is estimated as
\begin{equation}
\begin{split}
\frac{t_{\rm vis}}{t_{\rm cool}}
\simeq &~0.14~ \left(\frac{\hat{\alpha}}{0.2}\right)^{-1} \left(\frac{r}{R_{\rm BHL}}\right)
\left(\frac{\dot{m}_{\rm B}}{10^{-3}}\right)\\
&\times \left(\frac{T_\infty}{10^7~{\rm K}}\right)^{-\frac{1}{2}} \left({1+\mathcal{M}^2}\right)^{-2}.\\
\end{split}
\end{equation}
Since the heating timescale is shorter than the cooling timescale everywhere within $r\simeq R_{\rm BHL}$,
radiative cooling does not play an important role in the accretion flow as long as $\dot{m}_{\rm B}\lesssim7\times10^{-3}$.
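The ratio can be evaluated directly; the small helper below restates the equation above with the fiducial values as defaults, and confirms that cooling only becomes competitive for $\dot{m}_{\rm B}\gtrsim 7\times10^{-3}$:

```python
def tvis_over_tcool(alpha=0.2, x=1.0, mdot_B=1e-3, T_inf=1e7, mach=0.0):
    """Heating-to-cooling timescale ratio; x = r / R_BHL."""
    return (0.14 * (alpha / 0.2)**-1 * x * (mdot_B / 1e-3)
            * (T_inf / 1e7)**-0.5 * (1.0 + mach**2)**-2)

print(tvis_over_tcool())             # 0.14 at r = R_BHL for fiducial values
print(tvis_over_tcool(mdot_B=7e-3))  # ~0.98: cooling marginal at mdot_B ~ 7e-3
```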
The $r_{\rm in}$ dependence of the net accretion rate affects the actual BH feeding rate and radiative output from the nuclear disk at $r\ll r_{\rm in}$.
Numerical simulations of RIAFs find that the positive gradient of the inflow rate (i.e., $s\equiv d\ln \dot{M}/d\ln r>0$) ceases
and the net accretion rate becomes constant within a transition radius of $r_{\rm tr} \approx (30-100)r_{\rm Sch}$
\citep[e.g.,][]{Abramowicz2002ApJ...565.1101A,Narayan2012MNRAS.426.3241N,
Yuan2012ApJ...761..129Y, Sadowski2015MNRAS.447...49S}.
Assuming $s=1/2$, the reduction factor of the net accretion rate is estimated as $\sim (r_{\rm in}/100~r_{\rm Sch})^s \simeq 5$
for a RIAF onto a moving BH with $c_\infty =500~{\rm km~s}^{-1}$ and $\mathcal{M}=2$.
In Appendix~\ref{appendix:spectra}, we discuss how radiation spectra are modified by this effect.
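The $\simeq 5$ suppression factor can be reproduced with a short estimate. Note that the value $r_{\rm in}/R_{\rm BHL}=0.08$ below is our assumed fiducial innermost radius (the convergence runs above used $0.16$ and $0.32$):

```python
import math

c = 2.998e5                 # speed of light [km/s]
c_inf, mach, s = 500.0, 2.0, 0.5
r_in_over_RBHL = 0.08       # assumed fiducial innermost radius

# R_BHL / r_Sch = c^2 / [2 (1 + M^2) c_inf^2]
R_BHL_over_rSch = c**2 / (2.0 * (1.0 + mach**2) * c_inf**2)
r_in_over_rSch = r_in_over_RBHL * R_BHL_over_rSch
factor = (r_in_over_rSch / 100.0)**s   # suppression relative to r_tr = 100 r_Sch
print(round(factor, 1))                # ~5, matching the estimate above
```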
We note that the similarity between MHD and HD simulations seen at larger scales would not hold all the way down to the event horizon scales. In the inner region ($r\ll r_{\rm in}$), since the adiabatic index of gas changes from $5/3$ to $4/3$ because of relativistic effects and cooling processes (synchrotron and/or thermal conduction), magnetic field would be dynamically more important as seen in MHD simulations with general relativistic effects. However, the estimation of the transition scale is beyond our scope in this paper.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig_massflux_rin.pdf}
\caption{
Time-averaged radial structures of the mass inflow (dashed), outflow (dotted), and net (solid) accretion rate for simulations
with different sizes of the innermost grid $r_{\rm in}$.
The net accretion rate follows $\propto r_{\rm in}^{1/2}$.
}
\label{fig:massflux_rin}
\vspace{5mm}
\end{figure}
\vspace{5mm}
\section{Radiation spectra of wandering BHs }\label{sec:discussion}
In this section, we calculate the radiation spectral energy distribution (SED) of accretion flows onto a moving BH
and discuss the detectability of wandering (SM)BHs in different types of galaxies.
The electromagnetic emission and feeding mechanism of a moving BH have both been studied. Most previous studies have focused on X-ray emission from low-density accretion flows
\citep[e.g.,][]{Agol2002MNRAS.334..553A, Tsuna2018MNRAS.477..791T, Manshanden2019JCAP...06..026M, Zivancev&Ostriker2020arXiv200406083Z}, by
analogy with low-luminosity AGNs \citep{Ho2008nuclear, Ho2009radiatively}.
However, the radiation spectrum is expected to peak at $\sim 100$ GHz, where radio interferometers
such as ALMA and VLA have the highest sensitivity and spatial resolution
\citep{VLA1980ApJS...44..151T, ALMA2015ApJ...808L...3A}.
Most radiation is generated at the innermost region of the accretion flow near the BH event horizon.
However, because of the limitation of our numerical simulations, we do not address the properties of accreting gas
within $r_{\rm in}\, (\gg r_{\rm Sch})$, as discussed in \S\ref{sec:r_in}.
Instead, we here calculate the radial distribution of physical quantities adopting a semi-analytical two-temperature disk model, using
our simulation data as boundary conditions \citep{Manmoto1997ApJ...489..791M,Yuan2000ApJ...537..236Y}.
Using the profiles, we can quantify the radiation spectrum of a RIAF onto a wandering BH embedded in a hot, diffuse plasma.
Although the model includes several free parameters (e.g., the strength of viscosity and the fraction $\delta$ of turbulent dissipation
that heats the electrons directly) to characterize the disk properties, we choose these parameters so that the relation between the radiative efficiency and the BH accretion rate is consistent with the efficiency model of \citet{Inayoshi2019transition}.
The model is based on
the results of MHD simulations that include general relativistic effects and frequency-dependent radiation transport by
\citet{Ryan2017ApJ...844L..24R} and a semi-analytical model by \citet{Xie2012MNRAS.427.1580X}.
The details of the model are given in Appendix \ref{appendix:spectra}.
In the following, we consider the radiation spectra from wandering BHs that accrete gas at the outskirts of elliptical galaxies,
the Milky Way, and satellite dwarf galaxies, and we discuss their detectability by ALMA, VLA, and future facilities such as ngVLA.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig_spectra.pdf}
\caption{
Radiation spectra from a wandering BH that accretes hot and diffuse plasma in the outskirts of the massive elliptical galaxy M87.
Each curve corresponds to a case with different BH mass ($10^7 \leq M_\bullet/{M_\odot} \leq 10^8$) and Mach number
($0.5\leq \mathcal{M} \leq 2.0$).
The black solid curve shows the ALMA sensitivity to continuum emission, which we compute with the sensitivity calculator provided by the observatory (\url{https://almascience.nrao.edu/proposing/sensitivity-calculator}).
The sensitivity at each frequency is calculated adopting the rms noise level achieved by single-point
1 hour on-source integration (assuming precipitable water vapor of $0.472$ mm).
The black dashed curve indicates the VLA sensitivity to continuum emission for 1 hour on-source integration,
which we compute with the sensitivity calculator provided by the observatory (\url{https://obs.vla.nrao.edu/ect/}).
The black dotted curve is the ngVLA continuum sensitivity demonstrated by the performance estimates on their website
(\url{https://ngvla.nrao.edu/page/performance}).
We set the distance to M87 galaxy ($D=16.68$ Mpc) and adopt the Chandra observational data from
\cite{Russell2015MNRAS.451..588R} to model the properties of gas surrounding a wandering BH.
Wandering BHs with masses of $M_\bullet \gtrsim 3\times 10^7~\rm {M_\odot}$, if any, could be detectable with ALMA and VLA.
}
\label{fig:spectra}
\vspace{5mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig_detection.pdf}
\caption{
Radio luminosity at $100~{\rm GHz}$ for different BH masses and Mach numbers.
The horizontal lines are the threshold for detections with the ALMA and ngVLA, assuming the distance to M87.
}
\label{fig:detection}
\vspace{5mm}
\end{figure}
\subsection{Elliptical galaxies}\label{sec:elliptical}
In the framework of hierarchical structure formation in the $\Lambda$CDM model, lower-mass galaxies form first, and
they subsequently merge to build larger objects.
In this paradigm, massive elliptical galaxies in the local universe are expected to experience a large number of galaxy mergers in a Hubble time.
As a natural result of multiple dry mergers at low redshifts (gas-rich mergers at high redshifts), binary SMBHs form at the galactic core,
and some of them merge into a single SMBH through multi-body BH interactions that likely eject the smallest BHs from the core \citep{ Ryu2018MNRAS.473.3410R, Zivancev&Ostriker2020arXiv200406083Z}.
Therefore, some ejected BHs, depending on the kick velocity, are still bound within the galactic halo and orbit at velocities of $v_\infty \sim \sigma_\star$,
where $\sigma_\star$ is the stellar velocity dispersion.
When the orbiting BHs are fed with the diffuse gas of the surrounding host, they emit nonthermal radiation, as discussed below.
\begin{table*}[t]
\caption{\label{tab:detection}
Luminosity of wandering BHs.}
\begin{ruledtabular}
\begin{tabular}{lccccccc}
Name & $D~{\rm (Mpc)}$ & $\log{(M_\bullet/{M_\odot})}$ & $n_{\rm e}~{\rm (cm^{-3})}$ & $T~{\rm (keV)}$ & $\log{\dot{m}}$ & $\log{(L_{\rm bol}/{\rm erg~s^{-1}})}$ & $\log{(F_{\nu_{\rm p}}/{\rm \mu Jy})}$\\
\colrule
M87 & 16.68 & 9.789$\pm$0.027 & 0.114$\pm$0.016 & 1.650$\pm$0.050 & $-$5.918$\pm$0.104 & 38.268$\pm$0.220 & 2.685$\pm$0.115 \\
NGC 507 & 70.80 & 9.210$\pm$0.160 & 0.029$\pm$0.009 & 0.965$\pm$0.015 & $-$6.742$\pm$0.288 & 36.132$\pm$0.695 & $-$0.344$\pm$0.618 \\
NGC 1316 & 20.95 & 8.230$\pm$0.080 & 0.033$\pm$0.007 & 0.620$\pm$0.010 & $-$7.385$\pm$0.181 & 33.923$\pm$0.437 & $-$1.551$\pm$0.404 \\
NGC 4374 & 18.51 & 8.970$\pm$0.050 & 0.022$\pm$0.005 & 0.595$\pm$0.025 & $-$6.778$\pm$0.157 & 35.703$\pm$0.486 & 0.422$\pm$0.328 \\
NGC 4472 & 16.72 & 9.400$\pm$0.100 & 0.029$\pm$0.010 & 0.785$\pm$0.005 & $-$6.418$\pm$0.233 & 36.944$\pm$0.543 & 1.620$\pm$0.393 \\
NGC 4552 & 15.30 & 8.920$\pm$0.110 & 0.018$\pm$0.004 & 0.455$\pm$0.035 & $-$6.738$\pm$0.257 & 35.863$\pm$0.592 & 0.621$\pm$0.469 \\
NGC 4636 & 14.70 & 8.490$\pm$0.080 & 0.028$\pm$0.011 & 0.485$\pm$0.015 & $-$7.029$\pm$0.244 & 34.879$\pm$0.545 & $-$0.336$\pm$0.443 \\
NGC 5044 & 31.20 & 8.710$\pm$0.170 & 0.050$\pm$0.009 & 0.645$\pm$0.015 & $-$6.748$\pm$0.254 & 35.644$\pm$0.658 & $-$0.294$\pm$0.534 \\
NGC 5813 & 32.20 & 8.810$\pm$0.110 & 0.042$\pm$0.009 & 0.585$\pm$0.015 & $-$6.661$\pm$0.208 & 35.856$\pm$0.563 & $-$0.085$\pm$0.396 \\
NGC 5846 & 24.90 & 8.820$\pm$0.110 & 0.042$\pm$0.009 & 0.625$\pm$0.015 & $-$6.689$\pm$0.210 & 35.859$\pm$0.515 & 0.128$\pm$0.396 \\
\end{tabular}
\tablecomments{Column (1): galaxy name. Column (2): distance. Column (3): mass of the central BH. Columns (4) and (5): electron density and temperature at $\sim2-3~{\rm kpc}$.
Column (6): accretion rate normalized by the Eddington rate ($\dot{m}\equiv{\dot{M}}/{\dot{M}_{\rm Edd}}$) estimated with the scaling relation from simulation A1,
assuming that the wandering BH mass is $1\%$ of the central BH mass and $\mathcal{M}=0.5$.
Column (7): bolometric luminosity.
Column (8): radiation flux density at $\nu_{\rm p} = 100~{\rm GHz}$.
The distance and central BH mass are taken from \citet[][and references therein]{Inayoshi2020ApJ...894..141I}.
The electron density and temperature are taken from \citet{Russell2015MNRAS.451..588R} for M87 galaxy and from \citet{Russell2013MNRAS.432..530R} for the others.
}
\end{ruledtabular}
\vspace{5mm}
\end{table*}
As an example of a massive elliptical galaxy, we consider M87.
To model the properties of gas surrounding a wandering BH, we adopt the {\it Chandra} observational
data from \citet{Russell2015MNRAS.451..588R}:
the electron density $n_e\approx 0.11 ~ {\rm cm^{-3}}$ ($\rho\approx1.84\times10^{-25}~{\rm g ~ cm^{-3}}$) and
temperature $T\approx 1.9\times10^7 ~{\rm K}$ ($c_{\rm s}\approx6.5\times10^2~{\rm km~s^{-1}}$) for
gas at a distance of $r \approx 2-3 ~{\rm kpc}$ from the center.
Since the mass of the central SMBH is as high as $\simeq 6\times 10^9~{M_\odot}$
\citep{Gebhardt_M87BHmass2011ApJ...729..119G, M87ETH2019ApJ...875L...1E}, the masses of the wandering BHs would be in
the range $M_\bullet \approx 10^{7-8}~{M_\odot}$, which corresponds to BH mass ratios of $q\approx 10^{-3}-10^{-2}$. These
mass ratios are reasonable for massive ellipticals that have frequently experienced minor dry mergers (see Figure~1 in \citealt{Ryu2018MNRAS.473.3410R}).
The orbital velocity of the moving BH is estimated as $v_\infty \sim \sigma_\star \simeq 321~{\rm km}~{\rm s}^{-1}$
(the stellar velocity dispersion is taken from \citealt{Babyk2018ApJ...857...32B}), corresponding to $\mathcal{M}\simeq 0.5$.
Since this estimation is somewhat uncertain and the result is sensitive to the choice of $\mathcal{M}$, as shown below, we treat the Mach number as a free parameter
in the range of $0.5 \leq \mathcal{M} \leq 2$.
For reference, for a BH with $M_\bullet=3\times10^7~{M_\odot}$ moving at a velocity of $\mathcal{M}=1$,
the BH feeding rate is approximated as $\sim 0.2~\dot{M}_{\rm BHL}=2.2\times10^{-7}~\dot{M}_{\rm Edd}$ for the A1 run.
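As a sanity check, this number can be reproduced from the gas properties quoted above. The BHL-rate convention and the radiative efficiency of $0.1$ entering $\dot{M}_{\rm Edd}=L_{\rm Edd}/(0.1c^2)$ are assumptions of this sketch:

```python
import math

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33     # cgs units
M_bh = 3e7 * Msun
rho, c_inf, mach = 1.84e-25, 6.5e7, 1.0       # M87 outskirts (see above)

# Mdot_BHL = 4 pi G^2 M^2 rho / [(1 + M^2)^{3/2} c_inf^3]  (assumed convention)
mdot_bhl = 4.0 * math.pi * G**2 * M_bh**2 * rho / ((1.0 + mach**2)**1.5 * c_inf**3)
L_edd = 1.26e38 * (M_bh / Msun)               # Eddington luminosity [erg/s]
mdot_edd = L_edd / (0.1 * c**2)               # assumes 10% radiative efficiency
ratio = 0.2 * mdot_bhl / mdot_edd
print(f"{ratio:.1e}")                         # ~2.2e-07, as quoted above
```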
Figure~\ref{fig:spectra} presents the radiation spectra with different BH masses of $M_\bullet=10^7$, $3\times10^7$, and $10^8~{M_\odot}$
and Mach numbers of $\mathcal{M}=0.5$, $1.0$, and $2.0$.
We also overlay the sensitivity curve of ALMA, assuming a distance of 16.68 Mpc for M87 \citep{Blakeslee2009ApJ...694..556B}.
For all the cases, the radiation spectra have peaks in the millimeter band at $\nu_{\rm p} \simeq 100$ GHz, where the ALMA sensitivity is the highest.
The peak luminosity increases and exceeds the ALMA sensitivity with higher BH masses and lower Mach numbers.
In Figure~\ref{fig:detection}, we show the $100$ GHz continuum luminosity as a function of BH mass $M_\bullet$ for different Mach numbers.
The two horizontal lines correspond to the detection limits for ALMA (solid) and ngVLA (dashed), respectively.
This shows that wandering BHs with $M_\bullet \gtrsim 2\times 10^7~{M_\odot}$ and $\mathcal{M}\lesssim 1$ could be detectable with ALMA.
The detectable BH mass is reduced by a factor of $2-3$ with ngVLA, whose sensitivity is one order of magnitude higher than that of ALMA.
Note that if those BHs are wandering at larger distances of $\sim5-10\,{\rm kpc}$ from the galactic center, where the plasma density is lower,
their luminosities decrease and thus the detection threshold for the BH mass increases by a factor of $\sim 2-4$.
We apply this argument to other nearby massive elliptical galaxies, assuming the existence of wandering BHs
at their galaxy outskirts.
Taking the observational data from \citet{Russell2013MNRAS.432..530R,Russell2015MNRAS.451..588R}
and \citet{Inayoshi2020ApJ...894..141I}, we estimate the properties of gas surrounding
those BHs and quantify their predicted bolometric luminosities and 100 GHz flux densities.
The errors of density and temperature are given by the maximum and minimum values
at distances of $r \simeq 2-3~{\rm kpc}$ from the centers.
We assume the mass of the wandering BH to be 1\% of the central SMBH mass, and we choose $\mathcal{M}=0.5$ (note that $\sigma_\star/c_{\rm s}\simeq0.5-0.8$ for most cases in our sample).
As shown in Table~\ref{tab:detection}, the bolometric luminosities produced by wandering BHs are on the order of
$\simeq 10^{35}-10^{36}~{\rm erg~s^{-1}}$ and the flux densities at $100~{\rm GHz}$ are
$10^{-1}-10^{3}~{\rm \mu Jy}$.
We note that the ALMA sensitivity at 100 GHz is $\sim 9~{\rm \mu Jy}$ for 1 hour on-source integration.
Therefore, BHs, if any, wandering at the galactic outskirts could be detectable in M87 and NGC 4472.
With the capability of ALMA, only a few nearby ($\lesssim 20$ Mpc) ellipticals are
interesting targets for hunting wandering BHs.
Finally, we generalize this argument for early-type, gas-poor galaxies of several morphological types
and give an estimate of the millimeter luminosity from wandering BHs as a function of the stellar velocity
dispersion $\sigma_\star$.
To characterize the gas density and temperature of the ambient environment of the wandering BH,
we approximate the density profile with an isothermal $\beta$-model
\begin{equation}
\rho=\frac{\rho_0}{(1+r^2/r_{\rm c}^2)^{1.5}},
\end{equation}
where $r_{\rm c}$ is the core radius, and the core density and gas temperature are estimated with
Eqs. (22) and (23) in \citet{Zivancev&Ostriker2020arXiv200406083Z}
(scaling relations fitted with data from \citealt{Babyk2018ApJ...857...32B}) as
\begin{subequations}
\begin{equation}
\log \left(\frac{\rho_0}{{\rm g~cm^{-3}}}\right)= 0.6 ~\log \left(\frac{\sigma_\star}{{\rm 100 ~ km~s^{-1}}}\right) - 23.8,
\end{equation}
\begin{equation}
\log \left(\frac{T}{{\rm keV}} \right)= 2.50~\log \left( \frac{\sigma_\star}{{\rm 100 ~ km~s^{-1}}} \right) - 1.06.
\end{equation}
\end{subequations}
We estimate the mass of the central SMBH using the $M_\bullet-\sigma_\star$ relation
\citep{Kormendy2013ARA&A..51..511K} and set the mass of the wandering BH to $1\%$ of the nuclear SMBH.
As a reference, the orbital distance of the wandering BH from the galactic center is set to $r=3~r_{\rm c}$, and
its velocity relative to the surrounding hot gas is set to $\mathcal{M}=0.5$.
We estimate the relation between the luminosity at $\nu_{\rm p}=100~{\rm GHz}$ and the central velocity dispersion as
\begin{equation}
\log \left( \frac{\nu_{\rm p} L_{\nu_{\rm p}}}{{\rm erg~s^{-1}}} \right) = {7.6~\log\left(\frac{\sigma_\star}{100~{\rm km~s^{-1}}}\right)+32.1}.
\end{equation}
For distances comparable to that of M87, galaxies with $\sigma_\star\gtrsim320 ~{\rm km~s^{-1}}$ yield $\nu_{\rm p}L_{\nu_{\rm p}}\gtrsim10^{36}~{\rm erg~s^{-1}}$, which can be detected by ALMA.
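The quoted threshold follows from evaluating the fitted relations above at an M87-like dispersion; the sketch below only restates those formulas (the $10^{36}~{\rm erg~s^{-1}}$ detectability threshold assumes an M87-like distance):

```python
import math

def log_rho0(sigma_kms):      # log10 core density [g cm^-3]
    return 0.6 * math.log10(sigma_kms / 100.0) - 23.8

def log_T_keV(sigma_kms):     # log10 gas temperature [keV]
    return 2.50 * math.log10(sigma_kms / 100.0) - 1.06

def log_nuL(sigma_kms):       # log10 of nu * L_nu at 100 GHz [erg/s]
    return 7.6 * math.log10(sigma_kms / 100.0) + 32.1

sigma = 320.0  # km/s, an M87-like stellar velocity dispersion
# Ambient gas: rho_0 ~ 10^-23.5 g/cm^3, T ~ 1.6 keV (close to the M87 values);
# luminosity: nu * L_nu ~ 10^35.9 erg/s, i.e. ~1e36 erg/s as quoted above.
print(round(log_rho0(sigma), 2), round(log_T_keV(sigma), 2), round(log_nuL(sigma), 1))
```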
\subsection{Milky Way}
The existence of intermediate-mass BHs (IMBHs; see a recent review by \citealt{Greene_ARAA_2019}) with $M_\bullet \gtrsim 10^4~{M_\odot}$ in our Galaxy has been argued based on
observations of high-velocity compact clouds \citep{Oka2017NatAs...1..709O, Tsuboi2017ApJ...850L...5T, Ravi2018MNRAS.478L..72R}
and theoretical/numerical studies \citep{Volonteri2005MNRAS.358..913V, Bellovary2010ApJ...721L.148B, Tremmel2018MNRAS.475.4967T}.
\citet{Tremmel2018ApJ...857L..22T} predict that Milky Way-size halos would host $\sim 10$ IMBHs within their virial radii, and that they
would be wandering within their host galaxies for several gigayears.
We apply the same exercise as in \S\ref{sec:elliptical} for wandering BHs with kpc-scale orbits within the Milky Way.
To model the properties of the hot gas surrounding the Milky Way halo, we adopt the results of the Suzaku X-ray observations
\citep{Nakashima2018ApJ...862...34N}, which estimate a plasma temperature
of $T\simeq 3\times 10^6{\rm K}$ and an emission measure of $\rm EM \simeq (0.6-16.4)\times 10^{-3}\,{\rm cm}^{-6}~{\rm pc}$.
Based on these results, we adopt $n_{\rm e}= 0.01~{\rm cm}^{-3}$ as the gas density around wandering BHs\footnote{The electron number density is inferred as $n_{\rm e}\approx 4\times 10^{-3}~{\rm cm}^{-3}$ in \cite{Nakashima2018ApJ...862...34N},
assuming spherical and disk-like distributions of gas.
The value we adopt is higher than the median by a factor of 2.5, but is within the spatial fluctuation of the emission measure.}.
In Figure~\ref{fig:spectra_MW}, we show the radiation spectra of wandering BHs with $M_\bullet \approx 3\times 10^4 - 3\times 10^5\,{M_\odot}$
located at $\sim 10~{\rm kpc}$ from the Earth.
The spectra in the millimeter band extend to lower frequencies, where (ng)VLA has the highest sensitivity.
We could detect IMBHs down to $M_\bullet \gtrsim 10^5~{M_\odot}$ for $\mathcal{M}\lesssim1$.
There is additional indirect evidence of the existence of hot gas in the Milky Way halo at distances larger than $\sim 50~{\rm kpc}$,
based on observations of the Local Group dwarf galaxies with gas removed by ram pressure stripping \citep{Grcevich2009ApJ...696..385G} and absorption lines of high-velocity clouds associated with the Magellanic Stream that is close to pressure equilibrium with a hot plasma
\citep{Fox2005ApJ...630..332F}.
Those observations suggest a lower density for the hot gas halo ($n\approx 10^{-4}~{\rm cm^{-3}}$), which, if true, would imply that wandering BHs in the Milky Way halo are too dim to be detected.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig_spectra_MW.pdf}
\caption{
Similar to Figure~\ref{fig:spectra}, but for accretion flows of wandering BHs in the outskirts of the Milky Way.
}
\label{fig:spectra_MW}
\vspace{5mm}
\end{figure}
\subsection{Dwarf galaxies}
Observations have identified IMBHs in low-mass dwarf galaxies with confidence down to $M_\bullet \approx 10^5-10^6~{M_\odot}$, and more tentatively for
$M_\bullet \lesssim 10^5~{M_\odot}$ (\citealt{Greene_ARAA_2019}; see also \citealt{2017IJMPD..2630021M}, \citealt{Mezcua2020ApJ...898L..30M}).
Cosmological simulations studying the occupation fraction of IMBHs in dwarf galaxies find that a significant fraction of them are not centrally located but wander
within a few kpc of the galaxy centers \citep{2019MNRAS.482.2913B}.
This is expected for dwarf galaxies because of their shallow gravitational potential wells and the longer dynamical friction timescale for the wandering BHs.
Multi-body BH interactions and GW recoils due to BH mergers further contribute
to the off-nuclear population of IMBHs \citep{Lousto2012PhRvD..85h4015L, Bonetti2019MNRAS.486.4044B}.
Recent radio observations of dwarf galaxies with the VLA by \cite{Reines2020ApJ...888...36R} reported a sample of wandering IMBH candidates
that are significantly offset from the optical centers of the host galaxies. Based on an empirical scaling relation between BH mass and total stellar mass,
these authors argue that the candidate wandering BHs might have masses in the range $M_\bullet\approx 10^{4}-10^{6}~{M_\odot}$.
With radio luminosities of $\sim 10^{20}-10^{22}~{\rm W~Hz^{-1}}$ at $\nu_0 =9~{\rm GHz}$, the sources are radiating at $\nu_0 L_{\nu_0} \gtrsim 10^{-6}~L_{\rm Edd}$.
Based on the radiative efficiency model for RIAFs, this level of (bolometric) luminosity can be produced only when BHs accrete at relatively high
accretion rates of $\dot{m}\approx 10^{-4}$. However, at such a high accretion rate, radio synchrotron photons are heated to X-rays via inverse Compton scattering \citep{Ryan2017ApJ...844L..24R}.
The brightness of the radio emission could be explained by synchrotron radiation from nonthermal electrons accelerated in a relativistic jet instead of arising from a disk. Since the majority of the Reines et al. candidate wandering BHs are point-like sources at a resolution of $\sim 0\farcs25$ (which corresponds to a physical scale of $\sim 85$ pc at the median distance of the sources), the jet age can be constrained to $\lesssim 10^3$ yr for an assumed jet propagation speed of $\sim 0.3c$ \citep[e.g.,][]{2006ApJ...648..148N,2008A&A...487..885O}. The hypothesis of young jets seems consistent with their steep spectral indices ($F_\nu \propto \nu^{-0.79}$), analogous to compact steep-spectrum sources \citep{1998PASP..110..493O}, although the spectral indices were estimated over a narrow frequency range ($9-10.65$ GHz).
\section{Summary} \label{sec:sum}
We perform 3D hydrodynamical simulations to investigate the dynamics of radiatively inefficient gas accretion flows onto massive BHs orbiting around the outskirts of their host galaxies in the presence of a hot and diffuse plasma. A population of wandering BHs can arise from ejection from the galactic nuclei through multi-body BH interactions and GW recoils associated with galaxy mergers and BH coalescences. We find that when a wandering BH is fed with hot, diffuse plasma with density fluctuations, the accretion flow
forms a geometrically thick and hot disk.
Owing to a wide distribution of inflowing angular momentum,
the mass accretion rate is limited to $\sim 10\%-20\%$ of the canonical Bondi-Hoyle-Littleton rate and decreases with decreasing innermost radius, following a power law of $\dot{M}\propto r_{\rm in}^{1/2}$.
Using the simulation results, we further calculate the radiation spectra of the radiatively inefficient accretion flows, which peak in the millimeter band ($\sim 100$ GHz). We show that the predicted signal may be detectable with ALMA for a hypothetical wandering BH with $M_\bullet \simeq 2\times10^7~{M_\odot}$ orbiting a massive ($\sigma_\star\gtrsim 300~{\rm km/s}$) nearby elliptical galaxy such as M87, or $M_\bullet \simeq 10^5~{M_\odot}$ moving through the halo of the Milky Way. The sensitivity will improve with future facilities such as ngVLA.
Our radiation spectral model, combined with numerical simulations, can be applied to provide physical interpretations of candidate off-nuclear BHs detected in nearby dwarf galaxies, which may constrain BH seed formation scenarios.
\\
\acknowledgments
We thank Feng Yuan, Kengo Tomida, and Kohei Ichikawa for constructive discussions.
This work is partially supported by the National Science Foundation of China (11721303, 11991052, 11950410493) and
the National Key R\&D Program of China (2016YFA0400702).
Numerical computations were carried out with the High-performance Computing Platform of Peking University and
Cray XC50 at the Center for Computational Astrophysics of the National Astronomical Observatory of Japan.
\vspace{5mm}
\section{Introduction}
For finite, nonempty subsets $A$ and $B$ of an abelian group $G$, we define their sumset to be $$A+B=\{a+b:\;a\in A,\,b\in B\}.$$ All intervals will be discrete, so $[x,y]=\{z\in\Z:\; x\leq z\leq y\}$ for real numbers $x,\,y\in \R$. More generally, for $d\in G$ and $x,\,y\in \Z$, we let $$[x,y]_d=\{xd,(x+1)d,\ldots,yd\}$$ denote the corresponding interval with difference $d$.
For a nonempty subset $X\subseteq \Z$, we let $\gcd(X)$ denote the greatest common divisor of all elements of $X$, and use the abbreviation $\gcd^*(X):=\gcd(X-X)$ to denote the affine (translation invariant) greatest common divisor of the set $X$, which is equal to $\gcd(-x+X)$ for any $x\in X$. Note $\gcd^*(X)=\gcd(X)$ when $0\in X$.
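For illustration, the affine gcd can be computed directly from the definition; the helper below is a small sketch, not part of the original text:

```python
from math import gcd
from functools import reduce

def gcd_star(X):
    """gcd*(X) = gcd(X - X), the translation-invariant gcd of a finite set X."""
    x0 = next(iter(X))
    return reduce(gcd, (x - x0 for x in X), 0)

print(gcd_star({5, 11, 17}))   # 6, since {5,11,17} is a translate of {0,6,12}
print(gcd_star({0, 4, 10}))    # 2, and here gcd*(X) = gcd(X) because 0 is in X
```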
The study of the structure of $A$ and $B$ assuming $|A+B|$ is small in comparison to the cardinalities $|A|$ and $|B|$ is an important topic in Inverse Additive Number Theory. For instance, if $A=B\subseteq \Z$ with $|A+A|\leq C|A|$, where $C$ is a fixed constant, then Freiman's Theorem asserts that there is a multi-dimensional progression $P_A\subseteq \Z$ with $A\subseteq P_A$ and $|P_A|\leq f(C)|A|$, where $f(C)$ is a constant that depends only on $C$. The reader is directed to the text \cite{Tao-vu-book} for a fuller discussion of this result, its generalizations, and its implications and importance.
In this paper, we are interested in the special case of Freiman's Theorem when $|A+B|$ is very small, with $C<3$. The following is the (Freiman) $3k-4$ Theorem, proved in the case $A=B$ by Freiman \cite{freiman-3k4} \cite{freiman-book}, extended (in various forms) to general summands $A\neq B$ by Freiman \cite{freiman-3k4-distinct}, by Lev and Smeliansky \cite{lev-smel-3k4}, and by Stanchescu \cite{Stanchesuc-3k4}, with the additional conclusion regarding a long length arithmetic progression added later by Freiman \cite{freiman-3k4-longAP} (in the special case $A=B$) and by Bardaji and Grynkiewicz \cite{itziar-3k4} (for general $A\neq B$). The formulation given below is an equivalent simplification of that given in the text \cite[Theorem 7.1(i)]{Grynk-book}.
\begin{theirtheorem}[$3k-4$ Theorem]
Let $A,\,B\subseteq \Z$ be finite, nonempty subsets with
$$|A+B|=|A|+|B|+r\leq |A|+|B|+\min\{|A|,\,|B|\}-3-\delta,$$ where $\delta=1$ if $A$ and $B$ are translates of each other, and otherwise $\delta=0$.
Then there exist arithmetic progressions $P_A, P_B, P_C\subseteq \Z$, each with common difference $d=\gcd^*(A+B)$, such that $A\subseteq P_A$, $B\subseteq P_B$, and $P_C\subseteq A+B$ with
$$|P_A|\leq |A|+r+1,\quad |P_B|\leq |B|+r+1\quad \und\quad |P_C|\geq |A|+|B|-1.$$
\end{theirtheorem}
The bounds $|P_A|\leq |A|+r+1$, $|P_B|\leq |B|+r+1$ and $|P_C|\geq |A|+|B|-1$ are best possible, as seen by the example $A=[0,r]_2\cup [2r+2,|A|+r]$ and $B=[0,r]_2\cup [2r+2,|B|+r]$, which has $A+B=[0,r]_2\cup [2r+2,|A|+|B|+2r]$ for $-1\leq r\leq \min\{|A|,\,|B|\}-3$, showing that all three bounds can hold with equality simultaneously. The bound $|A+A|\leq 3|A|-4$ is tight, as seen by the example $A=[0,|A|-2]\cup \{N\}$ for $N$ large, which shows $|P_A|$ cannot be bounded when $|A+A|\geq 3|A|-3$. Likewise, when $A$ and $B$ are not translates of each other, then the bound $|A+B|\leq |A|+|B|+\min\{|A|,\,|B|\}-3$ is also tight, as seen by the example $B=[0,|B|-1]$ and $A=[0,|A|-2]\cup \{N\}$ for $N$ large and $|A|\geq |B|$.
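These extremal configurations are straightforward to verify by direct computation; the following Python sketch (with our own helper names) checks the first example for the sample parameters $|A|=8$, $|B|=6$ and $r=2$.

```python
def interval(x, y, d=1):
    """The discrete interval [x, y]_d = {xd, (x+1)d, ..., yd}."""
    return set(range(x * d, y * d + 1, d))

def sumset(A, B):
    return {a + b for a in A for b in B}

size_a, size_b, r = 8, 6, 2          # requires -1 <= r <= min(|A|, |B|) - 3
A = interval(0, r, 2) | interval(2 * r + 2, size_a + r)
B = interval(0, r, 2) | interval(2 * r + 2, size_b + r)
assert len(A) == size_a and len(B) == size_b
S = sumset(A, B)
assert S == interval(0, r, 2) | interval(2 * r + 2, size_a + size_b + 2 * r)
assert len(S) == size_a + size_b + r  # |A+B| = |A| + |B| + r, as claimed
```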
When $|B|$ is significantly smaller than $|A|$, the hypothesis $|A+B|\leq |A|+2|B|-3$ is rather strong, making effective use of the $3k-4$ Theorem more restricted.
There has only been limited success in obtaining conclusions similar to the $3k-4$ Theorem above the threshold $|A|+|B|+\min\{|A|,\,|B|\}-3-\delta$. See for instance \cite{huicochea-3k4}, where a weaker bound on $|P_B|$ is obtained under an alternative hypothesis (discussed in the concluding remarks) to our hypothesis \eqref{hyp-bounds}. For versions involving more than two summands, see \cite{huicochea-levconj} \cite{lev-mult-3k4-adem} \cite{lev-mult-3k4}. Some related results may also be found in \cite{chen-3k4} \cite{jin} \cite{lev-long} \cite{Ruzsa-3k4}.
As the previous examples show, if one wishes to consider sumsets with cardinality above the threshold $|A|+|B|+\min\{|A|,\,|B|\}-3-\delta$, then $A$ and $B$ cannot \emph{both} be contained in short arithmetic progressions.
The goal of this paper is to show that, nonetheless, at least \emph{one} of the sets $A$ and $B$ can, indeed, be contained in a short arithmetic progression under a much weaker hypothesis than that of the $3k-4$ Theorem. Specifically, our main result is the following theorem, whose bounds are optimal in the sense described afterwards.
\begin{theorem}\label{thm-3k-4-minimprove}
Let $A,\,B\subseteq \Z$ be finite, nonempty subsets with $|B|\geq 3$ and let $s\geq 1$ be the unique integer with
\be\label{hyp-bounds}(s-1)s\left(\frac{|B|}{2}-1\right)+s-1<|A|\leq s(s+1)\left(\frac{|B|}{2}-1\right)+s.\ee
Suppose \be\label{hyp1a} |A+B|=|A|+|B|+r< (\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1).\ee
Then there exists an arithmetic progression $P_B\subseteq \Z$ such that $B\subseteq P_B$ and $|P_B|\leq |B|+r+1$.
\end{theorem}
The hypothesis \eqref{hyp-bounds} depends on the relative size of $|A|$ and $|B|$. This dependence is necessary, and essentially best possible, as seen by the example $B=[0,\frac{|B|}{2}-1]\cup (N+[0,\frac{|B|}{2}-1])$ and $A=[0,\frac{|A|}{s}-1]\cup (N+[0,\frac{|A|}{s}-1])\cup (2N+[0,\frac{|A|}{s}-1])\cup \ldots\cup ((s-1)N+[0,\frac{|A|}{s}-1])$ for $|B|$ even with $s\mid |A|$ and $N$ large. It is then a minimization problem (carried out in Lemma \ref{lemma-2D-redcalc}) that the optimal choice of $s$ depends on the relative size of $|A|$ and $|B|$ as described in \eqref{hyp-bounds}. The bound $|P_B|\leq |B|+r+1$ is also best possible, as seen by the example $B=[0,|B|-2]\cup \{|B|+r\}$ and $A=[0,|A|-1]$.
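The sharpness of the bound $|P_B|\leq |B|+r+1$ can likewise be probed numerically; the sketch below uses the sample values $|A|=10$, $|B|=5$, $r=3$ (so that $s=2$ in \eqref{hyp-bounds}, and \eqref{hyp1a} holds since $|A+B|=18<19.5$).

```python
from math import gcd
from functools import reduce

def sumset(A, B):
    return {a + b for a in A for b in B}

size_a, size_b, r = 10, 5, 3
A = set(range(size_a))                      # A = [0, |A|-1]
B = set(range(size_b - 1)) | {size_b + r}   # B = [0, |B|-2] + {|B|+r}
S = sumset(A, B)
assert len(S) == size_a + size_b + r        # |A+B| = |A| + |B| + r = 18
# the shortest arithmetic progression containing B has exactly |B|+r+1 terms:
d = reduce(gcd, (b - min(B) for b in B))
assert (max(B) - min(B)) // d + 1 == size_b + r + 1
```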
As a weaker consequence of Theorem \ref{thm-3k-4-minimprove}, we derive the following corollary, which eliminates the parameter $s$.
\begin{corollary}\label{cor-3k-4-minimprove}
Let $A,\,B\subseteq \Z$ be finite, nonempty subsets.
Suppose \be\nn |A+B|=|A|+|B|+r<|A|+\frac{|B|}{2}-1+2\sqrt{|A|(\frac{|B|}{2}-1)}.\ee
Then there exists an arithmetic progression $P_B\subseteq \Z$ such that $B\subseteq P_B$ and $|P_B|\leq |B|+r+1$.
\end{corollary}
\section{Preliminaries}
For an abelian group $G$ and nonempty subset $X\subseteq G$, we let $$\mathsf H(X)=\{g\in G:\; g+X=X\}\leq G$$ denote the stabilizer of $X$, which is the largest subgroup $H$ such that $X$ is a union of $H$-cosets. The set $X$ is called \emph{aperiodic} if $\mathsf H(X)$ is trivial, and \emph{periodic} if $\mathsf H$ is nontrivial. More specifically, we say $X$ is $H$-periodic if $H\leq \mathsf H(X)$, equivalently, if $X$ is a union of $H$-cosets. For a subgroup $H\leq G$, we let $$\phi_H:G\rightarrow G/H$$ denote the natural homomorphism. We let $\la X\ra$ denote the subgroup generated by $X$, and let $\la X\ra_*=\la X-X\ra$ denote the affine (translation invariant) subgroup generated by $X$, which is the minimal subgroup $H$ such that $X$ is contained in an $H$-coset. Note $\la X\ra_*=\la -x+X\ra$ for any $x\in X$. In particular, $\la X\ra_*=\la X\ra$ when $0\in X$. If $k\in\Z$, then $k\cdot A=\{kx:\;x\in A\}$ denotes the $k$-dilate of $A$.
Kneser's Theorem \cite[Theorem 6.1]{Grynk-book} \cite[Theorem 5.5]{Tao-vu-book} is a core result in inverse additive theory.
\begin{theirtheorem}[Kneser's Theorem]\label{thm-kt}
Let $G$ be an abelian group, let $A,\,B\subseteq G$ be finite, nonempty subsets, and let $H=\mathsf H(A+B)$. Then $$|A+B|\geq |A+H|+|B+H|-|H|=|A|+|B|-|H|+\rho,$$ where $\rho=|(A+H)\setminus A|+|(B+H)\setminus B|\geq 0$.
\end{theirtheorem}
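The stabilizer and Kneser's bound are easy to probe computationally; the Python sketch below (helper names ours) works in $\Z/12\Z$ with a periodic sumset, where the bound holds with equality.

```python
def stabilizer(X, n):
    """H(X) = {g in Z/nZ : g + X = X} for a subset X of Z/nZ."""
    return {g for g in range(n) if {(g + x) % n for x in X} == X}

def sumset_mod(A, B, n):
    return {(a + b) % n for a in A for b in B}

n = 12
A, B = {0, 1, 6, 7}, {0, 3, 6}
S = sumset_mod(A, B, n)
H = stabilizer(S, n)
assert H == {0, 3, 6, 9}                           # A+B is periodic
AH, BH = sumset_mod(A, H, n), sumset_mod(B, H, n)  # A+H and B+H
assert len(S) >= len(AH) + len(BH) - len(H)        # Kneser: 8 >= 8 + 4 - 4
# the refined form |A+B| >= |A| + |B| - |H| + rho also holds with equality here:
rho = (len(AH) - len(A)) + (len(BH) - len(B))
assert len(S) == len(A) + len(B) - len(H) + rho
```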
A very special case of Kneser's Theorem is the following basic bound for integer sumsets.
\begin{theirtheorem}\label{CDT-for-Z}
Let $A,\,B\subseteq \Z$ be finite, nonempty subsets. Then $|A+B|\geq |A|+|B|-1$.
\end{theirtheorem}
If $|A+B|\leq |A|+|B|-1$, then $|\phi_H(A)+\phi_H(B)|=|\phi_H(A)|+|\phi_H(B)|-1$ follows from Kneser's Theorem, where $H=\mathsf H(A+B)$, reducing the description of sumsets with $|A+B|\leq |A|+|B|-1$ to the case when $A+B$ is aperiodic with $|A+B|=|A|+|B|-1$. The complete description is then addressed by the Kemperman Structure Theorem. We summarize the relevant details here, which may be found in
\cite[Chapter 9]{Grynk-book} and are summarized in more general form in \cite{ittI}.
Let $A,\,B\subseteq G$ and $H\leq G$. A nonempty subset of the form $(\alpha+H)\cap A$ is called an \emph{$H$-coset slice} of $A$. If $A_\emptyset\subseteq A$ is a nonempty subset of an $H$-coset and $A\setminus A_\emptyset$ is $H$-periodic, then $A_\emptyset$ is an $H$-coset slice and we say that $A_\emptyset$ \emph{induces an $H$-quasi-periodic decomposition} of $A$, namely,
$A=(A\setminus A_\emptyset)\cup A_\emptyset$.
If, in addition, $B_\emptyset
\subseteq B$ induces an $H$-quasi-periodic decomposition, and $\phi_H(A_\emptyset)+\phi_H(B_\emptyset)$ is a unique expression element in $\phi_H(A)+\phi_H(B)$, then $A_\emptyset+B_\emptyset\subseteq A+B$ also induces an $H$-quasi-periodic decomposition.
Let $X,\,Y\subseteq G$ be finite and nonempty subsets with $K=\la X+Y\ra_*$. We say that the pair $(X,Y)$ is \emph{elementary of type} (I), (II), (III) or (IV) if there are $z_A,\,z_B\in G$ such that $X=z_A+A$ and $Y=z_B+B$ for a pair of subsets $A,\,B\subseteq K$ satisfying the corresponding requirement below:
\begin{itemize}
\item[(I)] $|A|=1$ or $|B|=1$.
\item[(II)] $A$ and $B$ are arithmetic progressions of common difference $d\in K$ with $|A|,\,|B|\geq 2$ and $\ord(d)\geq |A|+|B|-1\geq 3$.
\item[(III)] $|A|+|B|=|K|+1$ and there is precisely one unique expression element in the sumset $A+B$; in particular, $A+B=K$, \, $|A|,\,|B|\geq 3$, and $|K|\geq 5$.
\item[(IV)] $B=-(K\setminus A)$ and the sumset $A+B$ is aperiodic and contains no unique expression elements; in particular, $A+B=A-(K\setminus A)=K\setminus \{0\}$, \ $|A|,\,|B|\geq 3$, and $|K|\geq 7$.
\end{itemize}
We will need the following result regarding type (III) elementary pairs.
\begin{lemma}
\label{thm-typeIII} Let $G$ be an abelian group and let $A,\,B\subseteq G$ be finite, nonempty subsets. Suppose $(A,B)$ is a type (III) elementary pair with $a_0+b_0$ the unique expression element in $A+B$, where $a_0\in A$ and $b_0\in B$. Then $$(A\setminus \{a_0\})+(B\setminus \{b_0\})=(A+B)\setminus \{a_0+b_0\}.$$
\end{lemma}
\begin{proof}
Without loss of generality, we may assume that
$a_0=b_0=0$ and $G=\la A+B\ra_*$. Let $A'=A\setminus \{0\} $ and
$B'=B\setminus \{ 0\}$. Suppose by contradiction $\{0,g\}\subseteq G\setminus (A'+B')$ with $g\neq 0$. Since $g\in G= A+B$ and $g\notin A'+B'$, it follows that every expression $g=x+y\in A+B$, with $x\in A$ and $y\in B$, must have $x=0$ or $y=0$. As a result, since there are at least \emph{two} such expressions (as $0\in A+B$ is the only unique expression element for the type (III) pair), it follows that there are exactly two, namely one of the form $g=0+y$ with $y\in B$, and the other of the form $g=x+0$ with $x\in A$, whence \be\label{ghostly}g\in A\cap
B.\ee Since $0,\,g\notin A'+B'$, we have $(\{0,g\}-A')\cap B'=\emptyset$, and since $(A,B)$ has type (III), we have $|A'|+|B'|=|A|+|B|-2=|G|-1$. As a result, $|\{0,g\}-A'|\leq |G|-|B'|= |A'|+1$, which is easily seen to only be possible if $A'=A'_1\cup P_1$, where $A'_1$ is $K$-periodic (or empty), $P_1$ is an arithmetic progression with difference $g$, and $K=\langle g\rangle$; moreover, since $g\in A'$ but $0\notin A'$ (see \eqref{ghostly}), we conclude that the first term in $P_1$ must in fact be $g$. Likewise $B'=B'_1 \cup P_2$ with
$B'_1$ $K$-periodic (or empty) and $P_2$ an arithmetic progression with difference $g$ whose first term is $g$. Thus $0\in P_1+K$ and $0\in P_2+K$. Hence, since $0+0$ is a unique expression element in $A+B$, it follows, in view of $A'=A'_1\cup P_1$ and $B'=B'_1 \cup P_2$, that $0$ is a unique expression element in $\phi_K(A)+\phi_K(B)$. Consequently, any unique expression element from $(P_1\cup \{0\})+(P_2\cup\{0\})$ is also a unique expression element in $A+B$.
Since $g$ is the first term in both $P_1$ and $P_2$, it follows that
$P_1\cup \{0\}$ and $P_2\cup\{0\}$ are both arithmetic progressions with difference $g$. Thus, since $(P_1\cup \{0\})+(P_2\cup\{0\})$ contains a unique expression element, namely $0+0$, it follows that $(P_1\cup \{0\})+(P_2\cup\{0\})$ must contain another unique expression element as well, namely $g_1+g_2$, where $g_1\in P_1$ is the last term of the progression $P_1$ and $g_2\in P_2$ is the last term of the progression $P_2$, contradicting (in view of the previous paragraph) that $0+0$ is the only unique expression element in $A+B$.
\end{proof}
The following is the `dual' formulation of the Kemperman Structure Theorem \cite[Theorem 9.2]{Grynk-book}, introduced by Lev \cite{lev-kemp}.
\begin{theirtheorem}[KST-Dual Form]\label{KST-}
Let $G$ be a nontrivial abelian group and let $A,\,B\subseteq G$ be finite, nonempty subsets. A necessary and sufficient condition for $$|A+B|=|A|+|B|-1,$$ with $A+B$ containing a unique expression element when $A+B$ is periodic, is that either $(A,B)$ is elementary of type (IV) or else there exists a finite, proper subgroup $H<G$ and nonempty subsets $A_\emptyset \subseteq A$ and $B_\emptyset\subseteq B$ inducing $H$-quasi-periodic decompositions such that
\begin{itemize}
\item[(i)] $(\phi_H(A),\phi_H(B))$ is elementary of some type (I)--(III),
\item[(ii)] $\phi_H(A_\emptyset)+\phi_H(B_\emptyset)$ is a unique expression element in $\phi_H(A)+\phi_H(B)$,
\item[(iii)] $|A_\emptyset+B_\emptyset|=|A_\emptyset|+|B_\emptyset|-1$, and
\item[(iv)] either $A_\emptyset+B_\emptyset$ is aperiodic or contains a unique expression element.
\end{itemize}
\end{theirtheorem}
If $G$ and $G'$ are abelian groups and $A,\,B\subseteq G$ are finite, nonempty subsets, then a Freiman homomorphism is a map $\psi:A+B\rightarrow G'$, defined by coordinate maps $\psi_A:A\rightarrow G'$ and $\psi_B:B\rightarrow G'$, such that the rule $\psi(x+y)=\psi_A(x)+\psi_B(y)$, for $x\in A$ and $y\in B$, gives a well-defined map. The sumset $\psi_A(A)+\psi_B(B)$ is then the homomorphic image of $A+B$ under $\psi$. If $\psi$ is injective on $A+B$, then $\psi$ is a Freiman isomorphism, in which case the sumsets $A+B$ and $\psi_A(A)+\psi_B(B)$ are isomorphic, denoted $A+B\cong \psi_A(A)+\psi_B(B)$. See \cite[Chapter 20]{Grynk-book}. Equivalently, if there are coordinate maps $\psi_A:A\rightarrow G'$ and $\psi_B:B\rightarrow G'$ such that $\psi_A(x)+\psi_B(y)=\psi_A(x')+\psi_B(y')$ if and only if $x+y=x'+y'$, for any $x,\,x'\in A$ and $y,\,y'\in B$, then $A+B\cong \psi_A(A)+\psi_B(B)$. Isomorphic sumsets exhibit identical additive behavior, irrespective of the ambient groups in which they live.
The proof of Theorem \ref{thm-3k-4-minimprove} will involve the use of modular reduction, introduced by Lev and Smeliansky \cite{lev-smel-3k4}, in the more general form developed in \cite[Chapter 7]{Grynk-book}. We summarize the needed details from \cite[Chapter 7]{Grynk-book}.
Suppose $A,\,B\subseteq \Z$ are finite nonempty subsets and $n\geq 2$ is an integer. Let $\phi_n:\Z\rightarrow \Z/n\Z$ denote the natural homomorphism.
For each $i\geq 0$, let $A_i\subseteq \Z/n\Z$ be the subset consisting of all $x\in\Z/n\Z$ for which there are at least $i+1$ elements of $A$ congruent to $x$ modulo $n$. Thus $\phi_n(A)=A_0\supseteq A_1\supseteq A_2\supseteq \ldots$ and $\Summ{i\geq 0}|A_i|=|A|$. Likewise define $B_j$ for each $j\geq 0$, so $\phi_n(B)=B_0\supseteq B_1\supseteq B_2\supseteq \ldots$ and $\Summ{j\geq 0}|B_j|=|B|$. Set $$\wtilde A=\bigcup_{i\geq 0}(A_i\times \{i\})\quad\und\quad \wtilde B=\bigcup_{j\geq 0}(B_j\times\{j\}).$$ Thus $\wtilde A,\,\wtilde B\subseteq \Z/n\Z\times \Z$ with $|\wtilde A|=|A|$ and $|\wtilde B|=|B|$.
Then $\wtilde A+\wtilde B=\bigcup_{k\geq 0}(C_k\times \{k\})$, where $$C_k=\bigcup_{i+j=k}(A_i+B_j)$$ for $k\geq 0$. Thus $\phi_n(A+B)=C_0\supseteq C_1\supseteq C_2\supseteq\ldots$.
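The passage from $A$ to $\wtilde A$ is pure bookkeeping: for each residue class modulo $n$, one records how many elements of $A$ it contains. A minimal sketch (names ours):

```python
from collections import Counter

def layers(A, n):
    """The chain A_0, A_1, ...: A_i holds the residues mod n hit by
    at least i+1 elements of A."""
    counts = Counter(a % n for a in A)
    t = max(counts.values())
    return [{x for x, c in counts.items() if c > i} for i in range(t)]

A, n = [0, 1, 5, 6, 11], 5
As = layers(A, n)                                   # residues 0,1,0,1,1 mod 5
assert As == [{0, 1}, {0, 1}, {1}]
assert sum(len(Ai) for Ai in As) == len(A)          # sum |A_i| = |A|
tilde_A = {(x, i) for i, Ai in enumerate(As) for x in Ai}
assert len(tilde_A) == len(A)                       # |tilde A| = |A|
```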
Let $G=\Z/n\Z$ and let $H\leq G$ be a subgroup. Consider an arbitrary $z\in G/H$, say corresponding to the coset $z'+H$. Let $k_z\geq 0$ be the maximal integer such that $z'+H\subseteq C_{k_z}$, or else set $k_z=-1$ if $z'+H\nsubseteq C_k$ for all $k\geq 0$. Set \begin{multline*}\delta_z=\max\Big(\{0\}\cup \big\{|(x+H)\cap A_i|+|(y+H)\cap B_j|-1-|H|-|(z+H)\cap C_{k_z+1}|:\\
i+j=k_z,\; \phi_H(x)+\phi_H(y)=z\big\}\Big)\geq 0.\end{multline*}
Then \cite[Corollary 7.1]{Grynk-book} shows that $\wtilde A+\wtilde B$ can be used to estimate the size of $|A+B|$ as follows.
\begin{theirtheorem}
\label{modular-red-cor}
Let $A,\,B\subseteq \Z$ be finite, nonempty sets, let $n\geq 2$ be an integer, and let all notation be as above. Then
$$|A+B|\geq |\wtilde A+\wtilde B|+\Summ{z\in G/H}\delta_z.$$
\end{theirtheorem}
We will use the above machinery in the case when $\min B=0$ and $n=\max B$. In such case,
$A_t\subseteq \ldots\subseteq A_0=\phi_n(A)\subseteq \Z/n\Z$, where $t\geq 0$ is the maximal index such that $A_t\neq \emptyset$,
$\{0\}=B_1\subseteq B_0=\phi_n(B)\subseteq \Z/n\Z$ and $C_{t+1}\subseteq \ldots\subseteq C_0=\phi_n(A+B)\subseteq \Z/n\Z$, $$|B_0|=|B|-1,\quad\und\quad \Sum{i=0}{t}|A_i|=|A|.$$
Now $\wtilde A+\wtilde B=\bigcup_{i=0}^{t+1}(C_i\times \{i\})$ with
$C_0=A_0+B_0$, \ $C_{t+1}=A_t+B_1=A_t$ and $$C_i=(A_i+B_0)\cup (A_{i-1}+B_1)=(A_i+B_0)\cup A_{i-1}\quad\mbox{ for $i\in [1,t]$}.$$
If $H\leq G=\Z/n\Z$ is a subgroup, and $z\in (G/H)\setminus \phi_H(A_0)$, then set
$$\delta'_z=\max\Big(\{0\}\cup \big\{|(x+H)\cap A_0|+|(y+H)\cap B_0|-1-|H|:\;
\phi_H(x)+\phi_H(y)=z\big\}\Big)\geq 0.$$
As a special case of Theorem \ref{modular-red-cor}, we obtain the following corollary.
\begin{corollary}\label{cor-modred}
Let $A,\,B\subseteq \Z$ be finite, nonempty sets with $0=\min B$ and $n=\max B\geq 2$, and let all notation be as above. Then
$$|A+B|\geq |A_0+B_0|+|A|+\underset{z\notin \phi_H(A_0)}{\Summ{z\in G/H}}\delta'_z.$$
\end{corollary}
\begin{proof} For $z\in G/H$, let $c_z=|(z'+H)\cap C_{1}|$, where $z$ corresponds to the coset $z'+H$. Recall that $B_1=\{0\}$. Then,
by Theorem \ref{modular-red-cor}, we have \begin{align}\nn |A+B|&\geq |\wtilde A+\wtilde B|+\underset{z\notin \phi_H(A_0)}{\Summ{z\in G/H}}\delta_z\geq |A_0+B_0|+\Sum{i=0}{t}|A_i+B_1|+\underset{z\notin \phi_H(A_0)}{\Summ{z\in G/H}}c_z+\underset{z\notin \phi_H(A_0)}{\Summ{z\in G/H}}\delta_z\\\label{teel}
&=|A_0+B_0|+\Sum{i=0}{t}|A_i|+\underset{z\notin \phi_H(A_0)}{\Summ{z\in G/H}}(c_z+\delta_z)=|A_0+B_0|+|A|+\underset{z\notin \phi_H(A_0)}{\Summ{z\in G/H}}(c_z+\delta_z).
\end{align}
Consider an arbitrary $z\in G/H$ with $z\notin \phi_H(A_0)$. If $k_z\geq 1$, then $c_{z}=|H|>\delta'_z$, with the inequality holding trivially by definition of $\delta'_z$, and the equality following from the definitions of $k_z$ and $c_z$. Otherwise, it follows from the definitions involved that $c_z+\delta_z\geq \delta'_z$. Regardless, we find $\underset{z\notin \phi_H(A_0)}{\Summ{z\in G/H}}(c_z+\delta_z)\geq \underset{z\notin \phi_H(A_0)}{\Summ{z\in G/H}}\delta'_z$, which combined with \eqref{teel} yields the desired lower bound.
\end{proof}
The idea of using compression to estimate sumsets in higher dimensional spaces is a classical technique. See \cite[Section 7.3]{Grynk-book}. We outline briefly what we will need. Let $A,\,B\subseteq \R^2$ be finite, nonempty subsets. Let $x,\,y\in \R^2$ be a basis for $\R^2$. We can decompose $A=\bigcup_{\alpha\in I}A_\alpha$, where each $A_\alpha=(\alpha+\R x)\cap A\neq \emptyset$. Then $|I|$ equals the number of lines parallel to the line $\R x$ that intersect $A$. We can likewise decompose $B=\bigcup_{\beta\in J}B_\beta$. The linear compression (with respect to $x$) of $A$ is the set $\mathsf C_{x,y}(A)$ obtained by taking $A$ and replacing the elements from each $A_\alpha$ by the arithmetic progression with difference $x$ and length $|A_\alpha|$ contained in $\alpha+\R x$ whose first term lies on the line $\R y$. We likewise define $\mathsf C_{x,y}(B)$. A simple argument (see \cite[eq. (7.18)]{Grynk-book}) shows
$$|A+B|\geq |\mathsf C_{x,y}(A)+\mathsf C_{x,y}(B)|.$$
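Taking the standard basis of $\Z^2$, with $\R x$ the horizontal axis, compression and the displayed inequality can be illustrated as follows (a sketch with hypothetical sets):

```python
def compress(A):
    """Compress A (a subset of Z^2) horizontally: each row of A is replaced
    by an initial segment {0, ..., k-1} x {row} of the same cardinality k."""
    rows = {}
    for (x, y) in A:
        rows[y] = rows.get(y, 0) + 1
    return {(x, y) for y, k in rows.items() for x in range(k)}

def sumset2(A, B):
    return {(a0 + b0, a1 + b1) for (a0, a1) in A for (b0, b1) in B}

A = {(0, 0), (3, 0), (7, 1)}
B = {(0, 0), (2, 0), (5, 1), (6, 1)}
assert len(compress(A)) == len(A) and len(compress(B)) == len(B)
assert len(sumset2(A, B)) >= len(sumset2(compress(A), compress(B)))  # 11 >= 8
```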
Finally, we will need the following discrete analog of the Brunn-Minkowski Theorem for two-dimensional sumsets \cite[Theorem 1.3]{oriol2D} \cite[Theorem 7.3]{Grynk-book}.
\begin{theirtheorem}
\label{2D-brunn-Mink} Let $A,\,B\subseteq \R^2$ be finite, nonempty subsets, let $\ell\subseteq \R^2$ be a line, let $m$ be the number of lines parallel to $\ell$ that intersect $A$, and let $n$ be the number of lines parallel to $\ell$ that intersect $B$. Then
$$|A+B|\geq \Big(\frac{|A|}{m}+\frac{|B|}{n}-1\Big)(m+n-1).$$
\end{theirtheorem}
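For rectangular grids the bound of Theorem \ref{2D-brunn-Mink} is attained with equality; a quick numerical check, taking $\ell$ to be the horizontal axis:

```python
def sumset2(A, B):
    return {(a0 + b0, a1 + b1) for (a0, a1) in A for (b0, b1) in B}

A = {(x, y) for x in range(4) for y in range(2)}   # 4x2 grid: |A| = 8
B = {(x, y) for x in range(3) for y in range(3)}   # 3x3 grid: |B| = 9
m = len({y for (_, y) in A})                       # horizontal lines meeting A
n = len({y for (_, y) in B})                       # horizontal lines meeting B
bound = (len(A) / m + len(B) / n - 1) * (m + n - 1)
assert len(sumset2(A, B)) >= bound                 # 24 >= 24: equality here
```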
\section{The Proof}
We begin with a lemma showing that if the images modulo $N$ of a pair of sets
$A,\,B\subseteq \Z$ are short arithmetic progressions with a common difference, then the sumset $A+B$ is Freiman isomorphic to a two-dimensional sumset in $\Z^2$.
\begin{lemma}\label{Lemma-ap-mod-reduction}
Let $A,\,B\subseteq \Z$ be finite, nonempty subsets, let $N\geq 1$ be an integer, and let $\varphi:\Z\rightarrow \Z/N\Z$ be the natural homomorphism. Suppose $\varphi(A)$ and $\varphi(B)$ are arithmetic progressions with common difference $d\in [1,N-1]$ modulo $N$ such that $|\varphi(A)|+|\varphi(B)|-1\leq \ord(\varphi(d))$.
Then there is a Freiman isomorphism $$A+B\cong \bigcup_{i=0}^{m-1}(X_i\times\{i\})+\bigcup_{j=0}^{n-1}(Y_j\times \{j\})\subseteq \Z^2,$$ where $A=A_0\cup \ldots\cup A_{m-1}$ and $B=B_0\cup\ldots \cup B_{n-1}$ are the partitions of $A$ and $B$ into distinct residue classes modulo $N$ indexed so that $\varphi(A_i)-\varphi(A_{i-1})=\varphi(B_j)-\varphi(B_{j-1})=\varphi(d)$ for all $i\in [1,m-1]$ and $j\in [1,n-1]$, with $\alpha_0\in A_0$, \ $\beta_0\in B_0$, \ $\alpha_i=\alpha_0+id$, \ $\beta_j=\beta_0+jd$, \ $X_i=\frac{1}{N}\cdot (A_i-\alpha_i)\subseteq \Z$ and $Y_j=\frac{1}{N}\cdot (B_j-\beta_j)\subseteq \Z$, for $i\in [0,m-1]$ and $j\in [0,n-1]$.
\end{lemma}
\begin{proof}
Let $d\in [1,N-1]\subseteq \Z$ be the common difference modulo $N$ for the arithmetic progressions $\varphi(A)$ and $\varphi(B)$, and let $\alpha_0\in A_0$ and $\beta_0\in B_0$. Set \be\label{def-alphbeta}\alpha_i=\alpha_0+id \quad \und\quad \beta_j= \beta_0+jd,\quad\mbox{ for $i\in [0,m-1]$ and $j\in [0,n-1]$}.\ee
Then each $\alpha_i$ is a representative modulo $N$ for the residue classes $A_i$, and each $\beta_j$
is a representative modulo $N$ for the residue classes $B_j$, for $i\in [0,m-1]$ and $j\in [0,n-1]$.
Note $$m+n-1=|\varphi(A)|+|\varphi(B)|-1\leq \ord(\varphi(d))$$ by hypothesis. As a result, \be\label{cond-iso}\alpha_i+\beta_j\equiv \alpha_{i'}+\beta_{j'}\mod N\quad\mbox{ if and only if }\quad i+j=i'+j'.\ee
For $i\in [0,m-1]$ and $j\in [0,n-1]$, set $X_i=\frac{1}{N}\cdot (A_i-\alpha_i)\subseteq \Z$ and $Y_j=\frac{1}{N}\cdot (B_j-\beta_j)\subseteq \Z$. Thus $A_i=\alpha_i+N\cdot X_i$ and $B_j=\beta_j+N\cdot Y_j$ for $i\in [0,m-1]$ and $j\in [0,n-1]$. Define the maps $\varphi_A:A\rightarrow \Z^2$ and $\varphi_B:B\rightarrow \Z^2$ by $$\varphi_A(\alpha_i+Nx)=(x,i)\quad\und\quad \varphi_B(\beta_j+Ny)=(y,j),$$ where $x\in X_i$ and $y\in Y_j$. Then $\varphi_A$ and $\varphi_B$ are clearly injective on $A$ and $B$, respectively.
Suppose $(\alpha_i+Nx)+(\beta_j+Ny)=(\alpha_{i'}+Nx')+(\beta_{j'}+Ny')$. Reducing modulo $N$ and applying \eqref{cond-iso}, it follows that $i+j=i'+j'$, in turn implying $\alpha_i+\beta_j=\alpha_{i'}+\beta_{j'}$ per the definitions in \eqref{def-alphbeta}. But now $(\alpha_i+Nx)+(\beta_j+Ny)=(\alpha_{i'}+Nx')+(\beta_{j'}+Ny')$ implies $N(x+y)=N(x'+y')$, and thus $x+y=x'+y'$ as $N\neq 0$. It follows that $$\varphi_A(\alpha_i+Nx)+\varphi_B(\beta_j+Ny)=(x+y,i+j)=(x'+y',i'+j')=
\varphi_A(\alpha_{i'}+Nx')+\varphi_B(\beta_{j'}+Ny').$$ Conversely, if $\varphi_A(\alpha_i+Nx)+\varphi_B(\beta_j+Ny)=
\varphi_A(\alpha_{i'}+Nx')+\varphi_B(\beta_{j'}+Ny')$, then $(x+y,i+j)=(x'+y',i'+j')$ follows, implying $x+y=x'+y'$ and $i+j=i'+j'$. Hence \eqref{def-alphbeta} ensures $\alpha_i+\beta_j=\alpha_{i'}+\beta_{j'}$, and now $$(\alpha_i+Nx)+(\beta_j+Ny)=\alpha_{i}+\beta_{j}+N(x+y)=\alpha_{i'}+\beta_{j'}+N(x'+y')=
(\alpha_{i'}+Nx')+(\beta_{j'}+Ny').$$ This shows that $A+B$ is Freiman isomorphic to the sumset $\varphi_A(A)+\varphi_B(B)= \bigcup_{i=0}^{m-1}(X_i\times\{i\})+\bigcup_{j=0}^{n-1}(Y_j\times \{j\})\subseteq \Z^2$, completing the proof.
\end{proof}
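The isomorphism of Lemma \ref{Lemma-ap-mod-reduction} can be tested on concrete data; the sketch below (with hypothetical sets for $N=10$ and $d=3$) confirms that the map preserves the cardinality of the sumset.

```python
def iso_image(S, reps, N):
    """Send each s in S to (x, i), where s = reps[i] (mod N)
    and x = (s - reps[i]) / N."""
    out = set()
    for s in S:
        i = next(i for i, r in enumerate(reps) if (s - r) % N == 0)
        out.add(((s - reps[i]) // N, i))
    return out

N, d = 10, 3
A, B = {1, 11, 4}, {2, 12, 22, 5}  # residues {1,4} and {2,5}: APs mod 10, difference 3
reps_a = [1, 4]                    # alpha_i = alpha_0 + i*d
reps_b = [2, 5]                    # beta_j  = beta_0 + j*d
A2, B2 = iso_image(A, reps_a, N), iso_image(B, reps_b, N)
image = {(x + y, i + j) for (x, i) in A2 for (y, j) in B2}
# the Freiman isomorphism preserves the sumset cardinality (both equal 8):
assert len({a + b for a in A for b in B}) == len(image)
```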
\begin{lemma}\label{lemma-2D-redcalc}
Let $x\geq 1$ and $y\geq 3$ be integers and let $s\geq 1$ be the integer with $$(s-1)s(\frac{y}{2}-1)+s-1< x\leq s(s+1)(\frac{y}{2}-1)+s.$$ Then $$\min\left\{\left\lceil(\frac{x}{m}+\frac{y}{n}-1)(m+n-1)\right\rceil:\; m,n\in \Z,\,x\geq m\geq 1,\,\frac{y}{3}+1\geq n\geq 2\right\}=\left\lceil(\frac{x}{s}+\frac{y}{2}-1)(s+1)\right\rceil.$$
\end{lemma}
\begin{proof}
Assuming the lemma fails, we obtain
\be\label{failhyp} (\frac{x}{m}+\frac{y}{n}-1)(m+n-1)-(\frac{x}{s}+\frac{y}{2}-1)(s+1)+\frac{1}{s}\leq 0\ee for some integers $m\geq 1$ and $n\geq 2$ with $y\geq 3n-3$ and $x\geq m$ (note $(\frac{x}{s}+\frac{y}{2}-1)(s+1)$ can be expressed as a rational fraction with denominator $s$ regardless of the parity of $s$). Multiplying \eqref{failhyp} by $2smn$ yields
\be\label{failhyper} 2n(s(n-1)-m)x+sm(2m-2-(s-1)n)y-2smn(m+n-s-2)+2mn\leq 0\ee
\subsection*{Case 1:} $n=2$.
\begin{proof}
In this case, \eqref{failhyper} yields $2(s-m)x\leq sm(s-m)y-2sm(s-m)-2m$, implying $m\neq s$. If $m\leq s-1$, then we obtain $x\leq sm(y/2-1)-\frac{m}{s-m}$. Considering this upper bound as a function of $m$, we find that its discrete derivative (its value at $m+1$ minus its value at $m$) equals $s(\frac{y}{2}-1-\frac{1}{(s-m)(s-m-1)})\geq 0$ (for $m\leq s-2$), meaning it is maximized when $m$ achieves the upper bound $m=s-1$, yielding $x\leq s(s-1)(y/2-1)-s+1$, contrary to hypothesis. On the other hand, if $m\geq s+1$, then we obtain $x\geq sm(y/2-1)+\frac{m}{m-s}$. Considering this lower bound as a function of $m$, we find that its discrete derivative (its value at $m+1$ minus its value at $m$) equals $s(\frac{y}{2}-1-\frac{1}{(m-s)(m+1-s)})\geq 0$ (for $m\geq s+1$), meaning it is minimized when $m$ achieves the lower bound $m=s+1$, yielding $x\geq s(s+1)(y/2-1)+s+1$, contrary to hypothesis, completing the case.
\end{proof}
In view of Case 1, we now assume $n\geq 3$.
\subsection*{Case 2:} $s(n-1)\geq m$.
\begin{proof}
In this case, the coefficient of $x$ in \eqref{failhyper} is non-negative.
\medskip
Suppose first that $s=1$, in which case the coefficient of $y$ in \eqref{failhyper} is also non-negative. Thus using the estimates $x\geq m$ and $y\geq 3n-3$ in \eqref{failhyper}, followed by the estimate $n\geq 3$ (in view of Case 1), yields the contradiction (dividing all terms by $2m$)
$$0\geq nm-3m+3\geq 3.$$ So we now assume $s\geq 2$.
\medskip
As the coefficient of $x$ in \eqref{failhyper} is non-negative, applying the hypothesis $x\geq s(s-1)(y/2-1)+s$ yields
\be\label{hum}\Big(s(s-1)n^2-(s-1)(s+2m)n+2m^2-2m\Big)y-2(s^2-2s+m)n^2-2(m^2-2sm-\frac{m}{s}
-s^2+2s)n\leq 0.\ee
We next need to show that the coefficient of $y$ in \eqref{hum} is non-negative. To this end, assume by contradiction that \be\label{ycoeff}s(s-1)n^2-(s-1)(s+2m)n+2m^2-2m<0.\ee Since $m$ and $s$ are positive integers, \eqref{ycoeff} fails for $s=1$, allowing us to assume $s\geq 2$.
Thus \eqref{ycoeff} is quadratic in $n$ with positive lead coefficient. The expression in \eqref{ycoeff} has non-negative derivative for $n\geq \frac{s+2m}{2s}$.
Consequently, since our case hypothesis gives $n\geq\frac{m}{s}+1>\frac{s+2m}{2s}$, we conclude that the derivative with respect to $n$ in \eqref{ycoeff} is non-negative.
In particular, \eqref{ycoeff} must hold with $n=\frac{m+s}{s}$, yielding $$(s+1)m(m-s)<0.$$
Thus $m\leq s-1$.
Since the derivative with respect to $n$ in \eqref{ycoeff} is non-negative for $n\geq \frac{s+2m}{2s}$ and
$n\geq 2>\frac{s+2m}{2s}$ (as $m\leq s-1$), it follows that \eqref{ycoeff} must also hold for $n=2$, yielding
$$2(m-s)(m-s+1)<0,$$ which contradicts that $m\leq s-1$. So we conclude that \eqref{ycoeff} fails, meaning the coefficient of $y$ in \eqref{failhyper} is non-negative.
As a result, applying the hypothesis $y\geq 3n-3$ in \eqref{hum} yields \be\label{humdull}(4n-6)m^2-(n^2(6s-4)-(10s-12+\frac{2}{s})n-6)m+sn(n-1)(3(s-1)n-5s+7)\leq 0.\ee
The above expression is quadratic in $m$ with positive lead coefficient $4n-6>0$ (as $n\geq 2$) and discriminant equal to $4$ times the quantity
\be\label{discr}-n(n-2)(n-3)(3n-5)s^2-2n(n-2)(n-3)s+(4n^4-30n^3+58n^2-36n+9)+\frac{n^2+s(4n^3
-12n^2+6n)}{s^2}\ee
Since $n\geq 3$ is an integer, the derivative with respect to $s$ of \eqref{discr} is negative, meaning \eqref{discr} is maximized for $s=2$, in which case it equals
$-8n^4+48n^3-\frac{399}{4}n^2+63n+9$, which is negative for $n\geq 2$ (it has two non-real roots, and its largest real root is less than $2$). Thus the discriminant of \eqref{humdull} is negative for $s\geq 2$, contradicting that \eqref{humdull} is non-positive, which completes Case 2.
\end{proof}
\subsection*{Case 3:} $s(n-1)<m$.
\begin{proof}
In this case, the coefficient of $x$ in \eqref{failhyper} is negative, so we can apply the estimate $x\leq s(s+1)(y/2-1)+s$ to yield
\be\label{humII}\Big(s(s+1)n^2-s(s+2m+1)n+2m^2-2m\Big)y-2(s^2+m)n^2+2
(s^2+2sm-m^2+2m+\frac{m}{s})n\leq 0.\ee
We next need to show that the coefficient of $y$ in \eqref{humII} is non-negative. To this end, assume by contradiction that \be\label{ycoeffII}2m^2-(2sn+2)m+s(s+1)n(n-1)=s(s+1)n^2-s(s+2m+1)n+2m^2-2m<0.\ee
Considering \eqref{ycoeffII} as a function of $m$, we find that it has positive derivative when $m\geq \frac{sn+1}{2}$. Thus, since $m>s(n-1)\geq \frac{sn+1}{2}$ by case hypothesis (in view of $n\geq 3$), we see that \eqref{ycoeffII} is minimized when $m=s(n-1)$, yielding $$(n-1)(n-2)s(s+1)<0,$$ which fails in view of $s\geq 1$ and $n\geq 2$. So we instead conclude that the coefficient of $y$ in \eqref{humII} is non-negative.
As a result, applying the hypothesis $y\geq 3n-3$ in \eqref{humII} yields \be\label{humduller}(4n-6)m^2-(6sn^2+2n^2-10sn+2n-6-\frac{2n}{s})m+sn(3sn^2+3n^2-8sn-6n+5s+3)\leq 0.\ee
The above expression is quadratic in $m$ with positive lead coefficient $4n-6>0$ (as $n\geq 2$) and discriminant equal to $4$ times the quantity
\begin{align}\nn-n(n-2)(n-3)(3n-5)s^2-2n(n-2)(n-3)(3n-4)s+(n^4-4n^3+5n^2-6n+9)\\+
\frac{n^2+s(6n-2n^2-2n^3)}{s^2}\nn\\ <-n(n-2)(n-3)(3n-5)s^2-2n(n-2)(n-3)(3n-4)s+(n^4-4n^3+5n^2-6n+9)\label{aling}\end{align}
Since $n\geq 3$ is an integer, the derivative with respect to $s$ of \eqref{aling} is non-positive, meaning \eqref{aling} is maximized for $s=1$, in which case it equals
$-8n^4+54n^3-114n^2+72n+9$, which is negative for $n\geq 4$ (it has two non-real roots, and its largest real root is less than $4$). Thus the discriminant of \eqref{humduller} is negative for $n\geq 4$, contradicting that \eqref{humduller} is non-positive. It remains only to consider the case when $n=3$.
For $n=3$, \eqref{humduller} becomes (dividing all terms by $6$)
\be\label{humdullest}m^2-(4s+3-\frac{1}{s})m+s(4s+6)\leq 0.
\ee
By case hypothesis, $m\geq (n-1)s+1=2s+1$, while \eqref{humdullest} is minimized for $m=2s+\frac32-\frac{1}{2s}$. Thus, since $m$ is an integer, we see \eqref{humdullest} is minimized when $m=2s+1$, in which case \eqref{humdullest} yields the contradiction $1/s\leq 0$, completing the final case and thereby the proof of the lemma.
\end{proof}
\end{proof}
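Since the proof of Lemma \ref{lemma-2D-redcalc} proceeds by case analysis, a brute-force check over small parameters is a useful sanity test; the sketch below verifies the claimed minimum exactly, using rational arithmetic to avoid rounding error.

```python
from fractions import Fraction
from math import ceil

def s_of(x, y):
    """The unique s with (s-1)s(y/2-1) + s - 1 < x <= s(s+1)(y/2-1) + s."""
    s = 1
    while x > s * (s + 1) * (Fraction(y, 2) - 1) + s:
        s += 1
    return s

def value(x, y, m, n):
    return ceil((Fraction(x, m) + Fraction(y, n) - 1) * (m + n - 1))

for y in range(3, 12):
    max_n = (y + 3) // 3                 # the constraint n <= y/3 + 1
    for x in range(1, 40):
        s = s_of(x, y)
        brute = min(value(x, y, m, n)
                    for m in range(1, x + 1) for n in range(2, max_n + 1))
        assert brute == value(x, y, s, 2)   # minimum attained at (m, n) = (s, 2)
```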
The following proposition gives a rough estimate for the resulting bound from Lemma \ref{lemma-2D-redcalc}.
\begin{proposition}
\label{prop-rough-estimate} For real numbers $x,y,s> 0$ with $y>2$, we have
$$(\frac{x}{s}+\frac{y}{2}-1)(s+1)\geq x+\frac{y}{2}-1+2\sqrt{x(\frac{y}{2}-1)}.$$
\end{proposition}
\begin{proof}
We have $(\frac{x}{s}+\frac{y}{2}-1)(s+1)=x+\frac{y}{2}-1+\frac{x}{s}+s(\frac{y}{2}-1)$. Thus, if the proposition fails, then $0<\frac{2x}{s}+(y-2)s<\sqrt{8x(y-2)}$. Multiplying by $s$ and squaring both sides, we obtain $4x^2+(y-2)^2s^4+4s^2x(y-2)<8s^2x(y-2)$, implying $$0>4x^2+(y-2)^2s^4-4s^2x(y-2)=(2x-(y-2)s^2)^2,$$
which is not possible.
\end{proof}
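Proposition \ref{prop-rough-estimate} is the AM--GM inequality $\frac{x}{s}+s(\frac{y}{2}-1)\geq 2\sqrt{x(\frac{y}{2}-1)}$ in disguise, with equality at $s=\sqrt{x/(\frac{y}{2}-1)}$; a randomized numerical sanity check:

```python
import math
import random

random.seed(1)
for _ in range(10_000):
    x = random.uniform(0.01, 100.0)
    y = random.uniform(2.01, 100.0)
    s = random.uniform(0.01, 50.0)
    lhs = (x / s + y / 2 - 1) * (s + 1)
    rhs = x + y / 2 - 1 + 2 * math.sqrt(x * (y / 2 - 1))
    assert lhs >= rhs - 1e-9

# equality at the optimal s (here x = 9, y = 6, so s = sqrt(9/2)):
x, y = 9.0, 6.0
s = math.sqrt(x / (y / 2 - 1))
lhs = (x / s + y / 2 - 1) * (s + 1)
rhs = x + y / 2 - 1 + 2 * math.sqrt(x * (y / 2 - 1))
assert abs(lhs - rhs) < 1e-9
```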
We now proceed with the proof of our main result.
\begin{proof}[Proof of Theorem \ref{thm-3k-4-minimprove}]
We may w.l.o.g. assume $0=\min A=\min B$ and $\gcd(A+B)=1$. In view of \eqref{hyp1a}, we have
\be\nn |A+B|<|A|+\frac{|A|}{s}+\frac{s+1}{2}|B|-s-1.\ee
\medskip
Let us begin by showing it suffices to prove the theorem in the case $\gcd^*(B)=1$, that is, when $B-B$ generates $\la A+B\ra_*=\Z$. To this end, assume we know the theorem is true when $\gcd^*(B)=1$, and suppose instead that $\gcd^*(B)=d\geq 2$.
We can partition $A=A_1\cup A_2\cup\ldots\cup A_t$ with each $A_i$ a maximal nonempty subset of elements congruent to each other modulo $d$. For $i\in [1,t]$, let $s_i\geq 1$ be the integer with $$(s_i-1)s_i(|B|/2-1)+s_i-1<|A_i|\leq s_i(s_i+1)(|B|/2-1)+s_i.$$ Note that $\gcd^*(A_i+B)=d=\gcd^*(B)$ for every $i\in [1,t]$. Thus, if $|A_i+B|<(\frac{|A_i|}{s_i}+\frac{|B|}{2}-1)(s_i+1)$ for some $i\in [1,t]$, then we could apply the case $\gcd^*(B)=1$ to the sumset $A_i+B$ (since $B-B$ generates $d\Z=\la A_i+B\ra_*$) thereby obtaining the desired conclusion for $B$. Therefore, we can instead assume this fails, meaning
\be|A_i+B|\geq(\frac{|A_i|}{s_i}+\frac{|B|}{2}-1)(s_i+1)=|A_i|+\frac{|A_i|}{s_i}+\frac{s_i+1}{2}|B|
-s_i-1\quad\mbox{ for every $i\in [1,t]$}.\label{weetee}\ee
Since the sets $A_i$ are distinct modulo $d$ with $B\subseteq d\Z$, it follows that the sets $A_i+B$ are disjoint for $i\in [1,t]$. Thus
\be\label{weevee}|A+B|\geq \Sum{i=1}{t}|A_i+B|\geq \Sum{i=1}{t}\left(|A_i|+\frac{|A_i|}{s_i}+\frac{s_i+1}{2}|B|-s_i-1\right),\ee with the latter inequality in view of \eqref{weetee}.
Let $m=s_1+\ldots+s_t$. Note $|A_1|+\ldots+|A_t|=|A|$ and $1\leq s_i\leq |A_i|$ for all $i\in [1,t]$ (in view of the definition of $s_i$). Thus $1\leq t\leq m\leq |A|$. A simple inductive argument on $t$ (with base case $t=2$) shows that $\Sum{i=1}{t}\frac{x_i}{y_i}\geq \left(\Sum{i=1}{t}x_i\right)/\left(\Sum{i=1}{t}y_i\right)$ holds for any positive real numbers $x_1,y_1,\ldots,x_t,y_t>0$. In particular, $\Sum{i=1}{t}\frac{|A_i|}{s_i}\geq \left(\Sum{i=1}{t}|A_i|\right)/\left(\Sum{i=1}{t}s_i\right)=\frac{|A|}{m}$.
Applying this estimate in \eqref{weevee}, along with the identities $|A_1|+\ldots+|A_t|=|A|$ and $m=s_1+\ldots+s_t$, yields \begin{align}|A+B|\geq \nn |A|+\frac{|A|}{m}+\frac{m}{2}|B|-m+t(|B|/2-1)&\geq |A|+\frac{|A|}{m}+\frac{m}{2}|B|-m+|B|/2-1\\&=(\frac{|A|}{m}+\frac{|B|}{2}-1)(m+1)\label{kaydey}.
\end{align}
Since $1\leq m\leq |A|$, \ $|B|\geq 3$ and $2\leq \frac{|B|}{3}+1$, Lemma \ref{lemma-2D-redcalc} (applied with $x=|A|$, $y=|B|$, and $n=2$) implies $\lceil(\frac{|A|}{m}+\frac{|B|}{2}-1)(m+1)\rceil\geq (\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1)$. As a result, since $|A+B|$ is an integer, we see that \eqref{kaydey} yields the lower bound $|A+B|\geq (\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1)$, contrary to hypothesis. So it remains to prove the theorem when $\gcd^*(B)=1$, which we now assume.
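The fraction-sum estimate used above, $\Sum{i=1}{t}\frac{x_i}{y_i}\geq \left(\Sum{i=1}{t}x_i\right)/\left(\Sum{i=1}{t}y_i\right)$, is easy to confirm numerically; the following Python sketch (purely illustrative, with names of our own choosing) tests it on random positive inputs.

```python
import random

def fraction_sum_holds(xs, ys, tol=1e-9):
    # checks sum(x_i / y_i) >= (sum x_i) / (sum y_i) for positive reals
    return sum(x / y for x, y in zip(xs, ys)) >= sum(xs) / sum(ys) - tol

random.seed(1)
for _ in range(10_000):
    t = random.randint(1, 8)
    xs = [random.uniform(0.01, 10.0) for _ in range(t)]
    ys = [random.uniform(0.01, 10.0) for _ in range(t)]
    assert fraction_sum_holds(xs, ys)
```

The estimate is immediate term by term, since $x_i/y_i\geq x_i/(y_1+\ldots+y_t)$ for each $i$.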
\medskip
We proceed by induction on $|A|$. Note, if $|A|=1$, then $s=1$ and the bound $|A+B|\geq |B|=(\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1)$ holds trivially. This completes the base of the induction and allows us to assume $|A|\geq 2$.
\medskip
Suppose $\gcd^*(A)=d>1$. Then $A$ is contained in a $d\Z$-coset. In view of $\gcd^*(B)=1$ and $d\geq 2$, it follows that there are $t\geq 2$ $d\Z$-coset representatives $\beta_1,\ldots,\beta_t\in \Z$ such that each slice $B_{\beta_i}=(\beta_i+d\Z)\cap B$ is nonempty for $i\in [1,t]$. Applying Theorem \ref{CDT-for-Z} to $A+B_{\beta_i}$ for each $i\in [1,t]$ yields $|A+B|\geq \Sum{i=1}{t}(|A|+|B_{\beta_i}|-1)=t(|A|-1)+|B|\geq 2|A|+|B|-2\geq(\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1)$, with the final inequality in view of Lemma \ref{lemma-2D-redcalc} (applied with $x=|A|$, $y=|B|$, $m=1$ and $n=2$), contrary to hypothesis. So we instead conclude that $${\gcd}^*(A)={\gcd}^*(B)=1.$$
\medskip
By translation, we may assume $B\subseteq [0,n]$ and $A\subseteq [0,m]$ with $0,\,n\in B$ and $0,\,m\in A$. Define $P_B:=[0,n]$. Let $\phi_n:\Z\rightarrow \Z/n\Z$ be the reduction modulo $n$ homomorphism and set $G=\Z/n\Z$.
We aim to use modular reduction as described above Corollary \ref{cor-modred}. To that end, let $\wtilde A$ and $\wtilde B$, as well as all associated notation, be defined as above Corollary \ref{cor-modred} using the modulus $n=\max B-\min B$.
In particular, $A_t\subseteq \ldots\subseteq A_0=\phi_n(A)\subseteq \Z/n\Z$, where $t\geq 0$ is the maximal index such that $A_t\neq \emptyset$,
$B_1=\{0\}$, $B_0=\phi_n(B)\subseteq \Z/n\Z$, $|B_0|=|B|-1$, \ $\Sum{i=0}{t}|A_i|=|A|$, and $$n=|P_B|-1.$$
\subsection*{Case 1:} $A_0+B_0=\Z/n\Z$.
\begin{proof}
In this case, Corollary \ref{cor-modred} implies that $|A|+|B|+r=|A+B|\geq |A_0+B_0|+|A|=n+|A|$, implying $|P_B|=n+1\leq |B|+1+r$, as desired.
\end{proof}
\subsection*{Case 2:} $|A_0+B_0|<\min\{n,\,|A_0|+|B_0|-1\}$.
\begin{proof}
Let $H=\mathsf H(A_0+B_0)\leq G$. In view of the case hypothesis, Kneser's Theorem (Theorem \ref{thm-kt}) implies that $H$ is a \emph{proper}, nontrivial subgroup of $G=\Z/n\Z$ with $|A_0+B_0|\geq |H+A_0|+|H+B_0|-|H|$ and \be\label{kst-hyp}|\phi_H(A_0)+\phi_H(B_0)|=|\phi_H(A_0)|+|\phi_H(B_0)|-1<|G/H|.\ee Note $\phi_H(A_0)+\phi_H(B_0)$ is aperiodic as $H=\mathsf H(A_0+B_0)$ is the maximal period of $A_0+B_0$, and \be\label{kt-holes}|(H+A_0)\setminus A_0|+|(H+B_0)\setminus B_0|\leq |H|-2,\ee else $|A_0+B_0|\geq |A_0|+|B_0|-1$ (in view of the bound from Kneser's Theorem), contrary to case hypothesis.
In view of \eqref{kst-hyp} and $G/H$ being nontrivial (as $H<G$ is proper), we can apply the Kemperman Structure Theorem (Theorem \ref{KST-}) to $\phi_H(A_0)+\phi_H(B_0)$. Then there exists a proper subgroup $L<G$ with $H\leq L$ such that $(\phi_L(A_0),\phi_L(B_0))$ is an elementary pair of some type (I)--(IV). Indeed, if type (IV) occurs, then $L=H$. Moreover, for types (I)--(III), there exist nonempty $L$-coset slices $A_\emptyset\subseteq A_0$ and $B_\emptyset\subseteq B_0$ inducing $L$-quasi-periodic decompositions in $H+A_0$ and $H+B_0$, so $H+(A_0\setminus A_\emptyset)$ and $H+(B_0\setminus B_\emptyset)$ are both $L$-periodic, $\phi_H(A_\emptyset)+\phi_H(B_\emptyset)\in \phi_H(A_0)+\phi_H(B_0)$ is a unique expression element, and $$|A_\emptyset+B_\emptyset|=|H+A_\emptyset|+|H+B_\emptyset|-|H|.$$
\subsection*{Subcase 2.1:} $(\phi_L(A_0),\phi_L(B_0))$ has type (I).
In this case, either $|\phi_L(A_0)|=1$ or $|\phi_L(B_0)|=1$, either of which contradicts $\gcd^*(A)=\gcd^*(B)=1$ in view of $L<G=\Z/n\Z$ being a \emph{proper} subgroup.
\subsection*{Subcase 2.2:} $(\phi_L(A_0),\phi_L(B_0))$ has type (IV).
In this case, $H=L$, $|\phi_H(A_0)|,\,|\phi_H(B_0)|\geq 3$, every element in $\phi_H(A_0)+\phi_H(B_0)$ has at least $2$ representations, and $$|A_0+B_0|=|G|-|H|.$$ Since $|\phi_H(A_0)+\phi_H(B_0)|=|\phi_H(A_0)|+|\phi_H(B_0)|-1\geq |\phi_H(A_0)|+2$, it follows that there are two distinct $H$-cosets $\gamma_1+H$ and $\gamma_2+H$ which intersect $A_0+B_0$ but not $A_0$. For each $\gamma_i$, we can find $\alpha_i\in A_0$ and $\beta_i\in B_0$ such that $\gamma_i+H=\alpha_i+\beta_i+H$, and we choose the pair $(\alpha_i,\beta_i)$ to maximize $|A_0\cap (\alpha_i+H)|+|B_0\cap (\beta_i+H)|$. Since every element in $\phi_H(A_0)+\phi_H(B_0)$ has at least $2$ representations, it follows from the pigeonhole principle and \eqref{kt-holes} that $$|A_0\cap (\alpha_i+H)|+|B_0\cap (\beta_i+H)|\geq 2|H|-\frac12(|H|-2)=\frac32|H|+1\quad \mbox{ for $i=1,2$}.$$
Since each $\gamma_i+H$ does not intersect $A_0=A_0+B_1$, it follows from Corollary \ref{cor-modred} that \begin{align*}|A|+|B|+r=|A+B|&\geq |A_0+B_0|+|A|+2(\frac32|H|+1-1-|H|)\\ &=|A_0+B_0|+|A|+|H|=|G|+|A|=n+|A|,\end{align*} implying $|P_B|=n+1\leq |B|+r+1$, as desired.
\subsection*{Subcase 2.3:} $(\phi_L(A_0),\phi_L(B_0))$ has type (III).
In this case, $|\phi_L(A_0)|,\,|\phi_L(B_0)|\geq 3$ and $$|A_0+B_0|=|(A_0+B_0)\setminus (A_\emptyset+B_\emptyset)|+|A_\emptyset+B_\emptyset|=(|G|-|L|)+
(|H+A_\emptyset|+|H+B_\emptyset|-|H|).$$ Moreover, by Lemma \ref{thm-typeIII}, we have \ \be\label{steefel}\phi_L(A_0\setminus A_\emptyset)+\phi_L(B_0\setminus B_\emptyset)=\phi_L(A_0+B_0)\setminus \phi_L(A_\emptyset+B_\emptyset).\ee Since $|\phi_L(A_0)+\phi_L(B_0)|=|\phi_L(A_0)|+|\phi_L(B_0)|-1\geq |\phi_L(A_0)|+2$, it follows that there is some $L$-coset $\gamma+L$ that intersects $A_0+B_0$ but not $A_0$ and which is distinct from the $L$-coset $A_\emptyset+B_\emptyset+L$. Then \eqref{steefel} ensures there are $\alpha\in A_0\setminus A_\emptyset$ and $\beta\in B_0\setminus B_\emptyset$ with $\alpha+\beta+L=\gamma+L$. As a result, since $H+(A_0\setminus A_\emptyset)$ and $H+(B_0\setminus B_\emptyset)$ are both $L$-periodic, it follows that $$|A_0\cap (\alpha+L)|+|B_0\cap (\beta+L)|\geq 2|L|-(|(H+A_0)\setminus A_0|+|(H+B_0)\setminus B_0|)\geq 2|L|-|H|+2,$$ with the final inequality in view of \eqref{kt-holes}.
Since $\gamma+L$ does not intersect $A_0$, it follows from Corollary \ref{cor-modred} that \begin{align*}|A|+|B|+r=|A+B|&\geq |A_0+B_0|+|A|+(2|L|-|H|+2-|L|-1)\\ &=|A_0+B_0|+|A|+|L|-|H|+1\\&=(|G|-|L|+|H+A_\emptyset|+|H+B_\emptyset|-|H|)+|A|+|L|-|H|+1\\&\geq |G|+|A|+1=n+1+|A|,\end{align*} implying $|P_B|=n+1< |B|+r+1$, as desired.
\subsection*{Subcase 2.4:} $(\phi_L(A_0),\phi_L(B_0))$ has type (II).
In this case, Lemma \ref{Lemma-ap-mod-reduction} implies that $A+B$ is Freiman isomorphic to a sumset $A'+B'\subseteq \Z^2$ with $B'$ contained in exactly $n'=|\phi_L(B_0)|\geq 2$ lines parallel to the horizontal axis, and $A'$ contained in exactly $m'=|\phi_L(A_0)|\geq 2$ lines parallel to the horizontal axis. Let $x=(1,0)$ and $y=(0,1)$. Compressing along the horizontal axis results in a sumset $A''+B''\subseteq \Z^2$, where $A''=\mathsf C_{x,y}(A')$ and $B''=\mathsf C_{x,y}(B')$. Then $|A+B|=|A'+B'|\geq |A''+B''|$, $|A''|=|A'|=|A|$ and $|B''|=|B'|=|B|$.
Since $H+(A_0\setminus A_\emptyset)$ and $H+(B_0\setminus B_\emptyset)$ are both $L$-periodic with $A_\emptyset\subseteq A_0$ and $B_\emptyset\subseteq B_0$ each $L$-coset slices, it follows from \eqref{kt-holes} that \begin{align*}|(L+B_0)\setminus B_0|&=|(L+B_0)\setminus (H+B_0)|+|(H+B_0)\setminus B_0|\\&= (|L|-|H+B_\emptyset|)+|(H+B_0)\setminus B_0|\leq |L|-|H|+|H|-2=|L|-2.\end{align*} Thus $$|B|=|B_0|+1\geq n'|L|-|L|+3.$$ As a result, if $|L|\geq 3$, then $|B''|=|B|\geq 3n'$, in which case Theorem \ref{2D-brunn-Mink} (applied with $\ell=\R x$) and Lemma \ref{lemma-2D-redcalc} (applied with $m=m'$, $n=n'$, $x=|A|=|A''|$ and $y=|B|=|B''|$) imply $|A+B|\geq |A''+B''|\geq (\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1)$, contrary to hypothesis. Likewise, if $|L|=2$ and $n'=2$, then $|B''|=|B|\geq 2|L|-|L|+3=5\geq 3n'-3$, whence Theorem \ref{2D-brunn-Mink} (applied with $\ell=\R x$) and Lemma \ref{lemma-2D-redcalc} (applied with $m=m'$, $n=2$, $x=|A|=|A''|$ and $y=|B|=|B''|$) again yield the contradiction $|A+B|\geq |A''+B''|\geq (\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1)$.
We are left to consider the case when $|L|=2$ and $n'\geq 3$, in which case $|B''|=|B|\geq n'|L|-|L|+3=2n'+1\geq 7$. Each horizontal line that intersects $B''$ contains at most $|L|+1\leq 3$ elements (as $B=B_0\cup B_1$ with $|B_1|=1$ and the elements of $B_0$ distinct modulo $n$), ensuring via the definition of compression that $B''$ is contained in $n''\leq 3$ vertical lines. Note $|B|\geq n'|L|-|L|+3=2n'+1>n'$ ensures some horizontal line has at least two elements, whence $n''\geq 2$. Thus Theorem \ref{2D-brunn-Mink} (applied with $\ell=\R y$) and Lemma \ref{lemma-2D-redcalc} (applied with $n=n''\in [2,3]$, $x=|A|=|A''|$ and $y=|B|=|B''|$, noting that $|B''|=|B|\geq 7$ ensures $3n''-3\leq 6<7\leq |B|$) again yield the contradiction $|A+B|\geq |A''+B''|\geq (\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1)$, completing Case 2.
\end{proof}
\subsection*{Case 3:} $|A_0+B_0|\geq |A_0|+|B_0|-1$.
\begin{proof}
Decompose $A=\bigcup_{i=1}^{|A_0|}X_i$, \ $B=\bigcup_{j=1}^{|B_0|}Y_j$ and $A+B=\bigcup_{i=1}^{|A_0|}\bigcup_{j=1}^{|B_0|}(X_i+Y_j)=\bigcup_{k=1}^{|A_0+B_0|}Z_k$ modulo $n$, where the $X_i\subseteq A$ are the maximal nonempty subsets of elements congruent modulo $n$, and likewise for the $Y_j\subseteq B$ and $Z_k\subseteq A+B$. For $i\in [1,|A_0|]$, let $X'_i$ be obtained from $X_i$ by removing the smallest element from $X_i$. Set $A'=\bigcup_{i=1}^{|A_0|}X'_i$ and decompose $A'+B=\bigcup_{k=1}^{|A_0+B_0|}Z'_k$ with the $Z'_k\subseteq Z_k$ (possibly empty). Each $X'_i+Y_j\subseteq X_i+Y_j$ is missing the smallest element of $X_i+Y_j$, as this was a unique expression element in $X_i+Y_j$. As a result, since each $Z_k$ is a union of sets of the form $X_i+Y_j$, it follows that each $Z'_k\subseteq Z_k$ is missing the smallest element of $Z_k$. In consequence,
\be \label{tangent2}|A'|=|A|-|A_0|\quad\und\quad |A'+B|\leq |A+B|-|A_0+B_0|\leq |A+B|-|A_0|-|B|+2,\ee
with the final inequality above in view of $|B_0|=|B|-1$ and the case hypothesis.
\medskip
If $|A|=|A_0|$, then Theorem \ref{modular-red-cor} and the case hypothesis imply that $|A+B|\geq |\wtilde A+\wtilde B|=|A_0+B_0|+|A_0+B_1|=|A_0+B_0|+|A_0|\geq 2|A_0|+|B_0|-1=2|A|+|B|-2$, while $2|A|+|B|-2\geq(\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1)$ follows by Lemma \ref{lemma-2D-redcalc} (applied with $x=|A|$, $y=|B|$, $m=1$ and $n=2$), yielding $|A+B|\geq (\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1)$, which is contrary to hypothesis. Therefore we instead conclude that $|A_0|<|A|$, ensuring that $A'$ is nonempty.
\medskip
Let $s'\geq 1$ be the integer such that \be\label{wiggleworm}(s'-1)s'\left(\frac{|B|}{2}-1\right)+s'-1<|A'|\leq s'(s'+1)\left(\frac{|B|}{2}-1\right)+s'.\ee
Note, since $|A'|<|A|$, that $s'\leq s$.
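The bracket \eqref{wiggleworm} pins down $s'$ uniquely, since its upper boundary is strictly increasing in the integer parameter; the short Python sketch below (names ours, purely illustrative) computes the parameter from the cardinalities and verifies both defining bounds.

```python
def bracket_param(a, b):
    """Unique integer s >= 1 with (s-1)*s*(b/2-1) + s - 1 < a <= s*(s+1)*(b/2-1) + s.

    Here a, b play the roles of |A'| and |B|; we assume b >= 3, so b/2 - 1 > 0
    and the upper-bound expression is strictly increasing in s."""
    half = b / 2 - 1
    s = 1
    while a > s * (s + 1) * half + s:
        s += 1
    return s

# verify both defining bounds, plus monotonicity of the parameter in a
for b in range(3, 10):
    prev = 0
    for a in range(1, 200):
        s = bracket_param(a, b)
        assert (s - 1) * s * (b / 2 - 1) + s - 1 < a <= s * (s + 1) * (b / 2 - 1) + s
        assert s >= prev  # consistent with "s' <= s since |A'| < |A|"
        prev = s
```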
If $|A'+B|<(\frac{|A'|}{s'}+\frac{|B|}{2}-1)(s'+1)$, then applying the induction hypothesis to $A'+B$ yields the desired conclusion for $B$. Therefore we can assume
$$|A'+B|\geq (\frac{|A'|}{s'}+\frac{|B|}{2}-1)(s'+1).$$ Combined with \eqref{tangent2}, we find
\begin{align} |A+B|&\geq (\frac{|A|-|A_0|}{s'}+\frac{|B|}{2}-1)(s'+1)+|A_0|+|B|-2\nn\\
&=|A|+\frac{|A|}{s'}+\frac{s'+3}{2}|B|-s'-3-\frac{|A_0|}{s'}.\label{gocu}\end{align}
Now Corollary \ref{cor-modred} and the case hypothesis imply $|A+B|\geq|A_0+B_0|+|A|\geq |A_0|+|B_0|-1+|A|=|A|+|B|-2+|A_0|$. Combined with the hypothesis $|A+B|<(\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1)$, we conclude that \be\label{A0-small} |A_0|<\frac{|A|}{s}+(s-1)(\frac{|B|}{2}-1).\ee
\medskip
\subsection*{Subcase 3.1:} $1\leq s'\leq s-2$.
In this case, $s\geq 3$ and \eqref{wiggleworm} gives $|A|-|A_0|=|A'|\leq (s-2)(s-1)(|B|/2-1)+s-2$, which combined with \eqref{A0-small} yields $\frac{s-1}{s}|A|-(s-1)(\frac{|B|}{2}-1)<(s-2)(s-1)(\frac{|B|}{2}-1)+s-2$, in turn implying $$|A|<s(s-1)(\frac{|B|}{2}-1)+\frac{s(s-2)}{s-1}<s(s-1)(\frac{|B|}{2}-1)+s.$$ However, this contradicts the hypothesis $|A|\geq (s-1)s(\frac{|B|}{2}-1)+s$.
\subsection*{Subcase 3.2:} $s'=s$.
In this case, the bounds defining $s$ and $s'$ ensure $$|A_0|=|A|-|A'|\leq \Big(s(s+1)(|B|/2-1)+s\Big)-\Big(s(s-1)(|B|/2-1)+s\Big)=s(|B|-2).$$ Thus \eqref{gocu} implies
\begin{align*}|A+B|&\geq |A|+\frac{|A|}{s}+\frac{s+1}{2}|B|-s-1+|B|-2-\frac{|A_0|}{s}
\\ &\geq |A|+\frac{|A|}{s}+\frac{s+1}{2}|B|-s-1=(\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1),
\end{align*} which is contrary to hypothesis.
\subsection*{Subcase 3.3:} $1\leq s'=s-1$.
In this case, $s\geq 2$, while \eqref{gocu} and \eqref{A0-small} yield
$$|A+B|>|A|+\frac{|A|}{s-1}+\frac{s+2}{2}|B|-s-2-\frac{|A|}{s(s-1)}-(\frac{|B|}{2}-1).$$
Combined with the hypothesis $|A+B|<(\frac{|A|}{s}+\frac{|B|}{2}-1)(s+1)=|A|+\frac{|A|}{s}+\frac{s+1}{2}|B|-s-1$, we conclude that
$$\frac{|A|}{s}=\frac{|A|}{s-1}-\frac{|A|}{s(s-1)}<\frac{|A|}{s},$$
which is not possible.
\end{proof}
As the above cases exhaust all possibilities, the proof is complete.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor-3k-4-minimprove}] For $|B|\leq 2$, we have $B=P_B$ being itself an arithmetic progression, with $|P_B|=|B|\leq |B|+r+1$ in view of Theorem \ref{CDT-for-Z}. For $|B|\geq 3$,
the result is an immediate consequence of Theorem \ref{thm-3k-4-minimprove} and Proposition \ref{prop-rough-estimate} (applied with $x=|A|$,\ $y=|B|$ and $s$ as defined in the statement of Theorem \ref{thm-3k-4-minimprove}).
\end{proof}
\section{Concluding Remarks}
As mentioned in the introduction, the bound $|P_B|\leq |B|+r+1$ is tight in Theorem \ref{thm-3k-4-minimprove}. However, the examples showing this bound to be tight (including variations of that given in the introduction) require \emph{both} $A$ and $B$ to be contained in short arithmetic progressions. Thus a strengthening of Theorem \ref{thm-3k-4-minimprove}, where the bound on $|P_B|$ is improved when $A$ is not contained in a short arithmetic progression, is expected. Indeed, it might be hoped that $|P_A|$ could be reasonably bounded so long as there is no partition $A=A_0\cup A_1$ of $A$ into nonempty subsets with $A_0+B$ and $A_1+B$ disjoint.
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand\blfootnote[1]{%
\begingroup
\renewcommand\thefootnote{\orange{*)}}\footnote{#1}%
\addtocounter{footnote}{-1}%
\endgroup
}
\newcommand{{\mathbb Z}}{{\mathbb Z}}
\newcommand{{\mathbb Q}}{{\mathbb Q}}
\begin{document}
\begin{titlepage}
\begin{center}
\begin{flushright}
ARC-17-6
\end{flushright}
\vskip20pt
{\Large \bf Crepant Resolutions of $\mathbb{C}^3/\mathbb{Z}_4$ and the
Generalized Kronheimer \\[8pt] Construction
(in view of the Gauge/Gravity Correspondence)}\\[6mm]
{{\sc Ugo Bruzzo${}^{\; a,g}$, Anna Fino$^{\; b,g}$,
Pietro Fr\'e${}^{\; c,f,g}$, \\
Pietro Antonio Grassi$^{\;d,f,g}$ and Dimitri Markushevich${}^{\;e,g}$}
\\[5pt] \small \sl
${}^a$ SISSA (Scuola Internazionale Superiore di Studi Avanzati), \\
via Bonomea 265, 34136 Trieste, Italy; \\ INFN (Istituto Nazionale di Fisica Nucleare), Sezione di
Trieste; \\ IGAP (Institute for Geometry and Physics), Trieste \\
\emph{e-mail:} { \tt [email protected]}\\[4pt]
${}^b$ Dipartimento di Matematica G. Peano,
Universit\`a di Torino, \\ Via Carlo Alberto 10, 10123 Torino,
Italy \\
\emph{e-mail:} \quad {\small {\tt [email protected]}}\\[4pt]
${}^c$ Dipartimento di Fisica, Universit\`a
di Torino, via P. Giuria 1, 10125 Torino, Italy\\
\emph{e-mail:} {\tt [email protected]}\\[4pt]
$^{d}$ Dipartimento di Scienze e Innovazione Tecnologica, Universit\`a del Piemonte Orientale, \\
viale T. Michel 11, 15121 Alessandria, Italy \\
\emph{e-mail:} {\tt [email protected]}\\[4pt]
${}^e$ Department de Math\'ematique, Universit\'e de Lille, B\^atiment M2, \\ Cit\'e Scientifique, 59655 Villeneuve-d'Ascq, France \\
\emph{e-mail:} {\tt [email protected]}\\[4pt]
$^{f}$ INFN (Istituto Nazionale di Fisica Nucleare), Sezione di Torino \\[4pt]
$^g$ Arnold-Regge Center for Algebra, Geometry and Theoretical Physics, \\ via P. Giuria 1, 10125 Torino,
Italy}
\bigskip
February 18, 2019; revised \today
\bigskip
\begin{abstract}
As a continuation of a general program started in two previous
publications, in the present paper we study the K\"ahler quotient
resolution of the orbifold $\mathbb{C}^3/\mathbb{Z}_4$, comparing
with the results of a toric description of the same.
In this way we
determine the algebraic structure of the exceptional divisor, whose
compact component is the second Hirzebruch surface $\mathbb F_2$. We determine the
explicit K\"ahler geometry of the smooth resolved manifold $Y$, which is the total space
of the canonical bundle of $\mathbb F_2$. We study in detail the chamber structure
of the space of stability parameters (corresponding in gauge theory to the Fayet-Iliopoulos
parameters) that are involved in the construction of the desingularizations either by generalized
Kronheimer quotient, or as algebro-geometric quotients. The walls of the chambers correspond
to two degenerations; one is a partial desingularization of the quotient, which is the total space
of the canonical bundle of the weighted projective space $\mathbb P[1,1,2]$, while the other
is the product of the ALE space $A_1$ by a line, and is related to the full resolution in a subtler way.
These geometrical results will be used to look for exact supergravity brane
solutions and dual superconformal gauge theories.
\end{abstract}
\end{center}
\end{titlepage}
\setcounter{tocdepth}{2}
\tableofcontents \noindent {}
\bigskip
\section{Introduction}
\label{introito} The present paper is the next step of
an investigation program initiated in
\cite{pappo1,Bruzzo:2017fwj}, whose final aim is to fully
elucidate the relation among the physical building blocks of
$D=3/D=4$ supersymmetric gauge theories, the mathematical
constructions of the generalized McKay correspondence, and the
generalized Kronheimer-like resolution of $\mathbb{C}^n/\Gamma$
singularities.
From the physics side, the context of these investigations is, in a
broad sense, the \textit{gauge/gravity correspondence}
\cite{Maldacena:1997re,Kallosh:1998ph,Ferrara:1998jm,Ferrara:1998ej,sergiotorino,witkleb,Fabbri:1999hw,Fabbri:1999ay,
Aharony:2008ug,Gaiotto:2007qi,Gaiotto:2009tk}. In particular there
are two main paradigms:
\begin{description}
\item[a)] The case of M2-branes solutions of $D=11$ supergravity
where the eight-dimensional space transverse to the brane world
volume is taken to be, to begin with, of the form $\mathbb{C}\times
\frac{\mathbb{C}^3}{\Gamma}$, having denoted by $\Gamma \subset
\mathrm{SU(3)}$ a finite subgroup.
\item[b)] The case of D3-brane solutions of type IIB supergravity
where the 6-dimensional transverse space to the brane is taken to
be, to begin with, just the singular orbifold $\frac{\mathbb{C}^3}{\Gamma}$
mentioned in the previous lines.
\end{description}
In both cases the idea is the following. Provided one also takes into
account the \textit{twisted states}, String Theory can
comfortably live on singular orbifolds; Supergravity, on the contrary,
requires smooth manifolds. Hence we are
interested in the crepant\footnote{We recall that a morphism of varieties
$X \to Y$, which in particular can be a resolution of singularities, is crepant when the canonical sheaf of $X$ is the pullback of
the canonical sheaf of $Y$.} resolution of the orbifold
singularity:
\begin{equation}\label{resolvo}
Y \longrightarrow \mathbb{C}^n/\Gamma
\end{equation}
and we look for exact solutions of $D=11$ or $D=10$ type IIB
supergravities, respectively of the M2-brane and of the D3-brane
type, where the transverse space is, respectively, $\mathbb{C}\times
Y$ or $Y$. At the same time we are interested in the construction
of the $D=3$, respectively, $D=4$ matter coupled supersymmetric
gauge theories that become superconformal at a suitable infrared
fixed point and are, supposedly, the dual partners of such brane
solutions.
\paragraph{Matter content of the gauge theory.}
As it was strongly emphasized in \cite{pappo1,Bruzzo:2017fwj}, the
mathematical lore on the generalization of the McKay correspondence
\cite{mckay} and of the Kronheimer construction of ALE manifolds
\cite{kro1,kro2} enter at this point in an essential way. Indeed
the basic starting point of such constructions, \textit{i.e.} the
space\footnote{For the definition of the space \eqref{Sgamma} see
\cite{Bruzzo:2017fwj}. For its realization in the specific model
studied in the present paper see eq.~\eqref{carnevalediPaulo}.}
\begin{equation}\label{Sgamma}
\mathcal{S}_\Gamma \, \equiv \,
\mathrm{Hom}_\Gamma\left( R,\mathcal{Q}\otimes R\right),
\end{equation}
where $\mathcal{Q}$ is the given embedding of $\Gamma$ into $\mathrm{SU(n)}$ and $R$ is the regular representation of $\Gamma$.
The structure of the representation ring of $\Gamma$ is described by the McKay quiver matrix,\footnote{See both \cite{Bruzzo:2017fwj} for the general
definition and below, (eq.~\eqref{carteodollo}) for the present
$\mathbb{Z}_4$ case.} and the latter determines
the gauge group
$\mathcal{F}_\Gamma$ and the whole spectrum of Wess-Zumino hypermultiplets of the associated gauge theory.\footnote{ Singularities
$\mathbb{C}^n/\Gamma$ can have a crepant resolution and fall in the
class analyzed in \cite{Bruzzo:2017fwj} only if $\Gamma$ acts
through its representation $Q$ as a subgroup of $\mathrm{SU}(n)$. It
is important to recall that the $\mathbb{Z}_k$ quotient of
$\mathbb{C}^4$ utilized in the ABJM construction
\cite{Aharony:2008ug} does not satisfy this condition and indeed admits
only discrepant resolutions.}
\paragraph{The McKay quivers.}
The quiver matrix admits a representation by means of a quiver
diagram, and over the last twenty years quiver gauge theories have been
the subject of a vast physics literature (see for
instance \cite{Bianchi:2000de,Bianchi:2009bg,Bianchi:2007wy} and all
references therein). Quiver diagrams interpreted as a recipe to
construct a supersymmetric gauge theory are more general than McKay
quivers and, to the best of our knowledge, the inverse problem of defining
necessary and sufficient conditions which, in the vast class of
quivers, select those that are of McKay type is unsolved. Each McKay
quiver is associated with a discrete group $\Gamma\subset
\mathrm{SU(n)}$ and its nodes are in one-to-one correspondence with
the irreducible representations of $\Gamma$. Its lines codify the
data to construct the above mentioned space
$\mathrm{Hom}_\Gamma\left( R,\mathcal{Q}\otimes R\right)$ which
hosts the matter multiplets of the corresponding gauge theory. In a
general quiver the lines provide the same information about matter
multiplets, yet there is no a priori guarantee that these latter
fill a space $\mathrm{Hom}_\Gamma\left( R,\mathcal{Q}\otimes
R\right)$ for any suitable $\Gamma$. Said differently, any quiver
diagram provides information for the construction of a K\"ahler or
HyperK\"ahler quotient. McKay quivers provide information how to
resolve a $\mathbb{C}^n/\Gamma$ by means of a K\"ahler or
HyperK\"ahler quotient defined according with a generalized
Kronheimer construction; yet not all (Hyper)K\"ahler quotients are
devised to resolve quotient singularities. A notable counterexample
is provided by the case where the (Hyper)K\"ahler quotient provides
the resolution of a conifold singularity. The relation between gauge
theories of the McKay type and of conifold type is addressed in a
forthcoming publication \cite{conmasbia}.
\paragraph{The first example: ALE manifolds.}
Historically the first case that was fully developed both
mathematically and physically is that of the Kleinian singularities
$\mathbb{C}^2/\Gamma$, where the finite groups $\Gamma \subset
\mathrm{SU(2)}$ admit the celebrated ADE classification (for a
comprehensive recent review see chapter 8 of \cite{advancio}).
Relying on the properties of the relevant McKay quiver matrices
which, due to the ADE classification, are identified with the
extended Cartan matrices of the ADE Lie algebras, and on the
recently introduced hyperk\"ahler quotient constructions \cite{HKLR},
Kronheimer \cite{kro1,kro2} succeeded in providing an algorithmic
construction of all four-dimensional ALE gravitational instantons,
the A subclass of which had been previously exhibited by Gibbons and
Hawking as multicenter metrics \cite{Gibbons:1979zt,Gibbons:1979xm},
generalizing the first and simplest example of the Eguchi-Hanson
one--center metric \cite{eguccio}. ALE-manifolds and more general
gravitational instantons were extensively studied and utilized in
various capacities in string theory and in supergravity
\cite{Billo:1992zv,Billo:1992uw,Billo:1992ei,Bianchi:1994gi,Bianchi:1995ad,Bianchi:1996zj}.
The first paper where the Kronheimer construction was applied to
2D-conformal field theories in stringy setups dates back to 1994
\cite{mango}, while the first example of an exact D3-brane solution
of type IIB supergravity with $\mathrm{ALE}\times \mathbb{C}$
transverse space was produced in 2000 \cite{Bertolini:2002pr}.
The ALE resolutions of Kleinian singularities are hyperk\"ahler
manifolds\ and the Kronheimer construction is based on the
hyperk\"ahler quotient. All this goes hand in hand with
$\mathcal{N}=2$ supersymmetry in $D=4$ and $\mathcal{N}=4$ (broken
to $\mathcal{N}=3$ by Chern-Simons couplings
\cite{ringoni,Fre1999xp}) in $D=3$.
\paragraph{The generalization to three dimensions.}
In the second half of the nineties of the last century, a few
mathematicians from the algebraic geometry community addressed the
generalization of the Kronheimer construction to the case of
$\mathbb{C}^3/\Gamma$ singularities
\cite{crawthesis,itoriddo,roanno,marcovaldo,SardoInfirri:1994is,SardoInfirri:1996ga,SardoInfirri:1996gb}.
In this case the resolved variety is simply a K\"ahler manifold and
the Kronheimer construction reduces to a K\"ahler quotient. All this
goes hand in hand with $\mathcal{N}=1$ supersymmetry in $D=4$ or
$\mathcal{N}=2$ in $D=3$. As we discussed at length in
\cite{Bruzzo:2017fwj} the additional necessary item in the
generalized Kronheimer construction, besides the real moment maps,
is a universal holomorphic equation which substitutes the
holomorphic part of the tri-holomorphic moment maps and amounts to a
universal cubic superpotential. Instead the generalized McKay quiver
diagrams have the same group theoretical definition as in the
$\mathbb{C}^2$ case although they no longer represent extended
Cartan matrices. The choice of the appropriate gauge group
$\mathcal{F}_\Gamma$ follows from the quiver in exactly the same way
as in the $n=2$ case. Henceforth the finite group $\Gamma$ and its
homomorphism $\mathcal{Q}$ into $\mathrm{SU(3)}$ provide, once more
as in the original Kronheimer case, all the data to construct the
corresponding unique supersymmetric gauge theory on the brane
world-volume.
\paragraph{The Ito--Reid theorem and the
tautological bundles.} The two most important results of the
mentioned mathematical activity are:\footnote{In view of the
previous discussion about McKay quivers as a subclass of the set of all
possible quivers, the below listed mathematical results apply only
to case of supersymmetric gauge theories derived from a McKay quiver
and hence associated with a $\mathbb{C}^3/\Gamma$ singularity. They
do not apply to general quiver gauge theories.}
\begin{description}
\item[1)] the Ito-Reid theorem
\cite{itoriddo} that relates the generators of the cohomology groups
$H^{(q,q)}(Y)$ of the resolved variety to the conjugacy classes of
$\Gamma$ that in the $\mathcal{Q}$ homomorphic image have
\textit{age} $q$.
\item[2)] the definition \cite{degeratu} of the
tautological vector bundles $E_i \longrightarrow Y$ associated with
each of the irreducible nontrivial {representations} $\mathcal{D}_i$ of
$\Gamma$, with
$$ \text{rank} \, E_i \, = \, n_i \, \equiv \, \text{dim} \, \mathcal{D}_i $$
\end{description}
As we emphasized in \cite{Bruzzo:2017fwj}, the first Chern forms
$\omega_i^{(1,1)}$ of the tautological bundles $E_i$ usually provide a
redundant set of generators for the $H^{(1,1)}(Y)$
group, whose dimension is fixed by the Ito-Reid theorem. One would
like to have a constructive approach to single out both the
components of the exceptional divisor $D_E$ and a good
basis of homological compact two-cycles $\mathcal{C}_I$
($I=1,\dots,\ell$) so as to be able to calculate the periods of the
$\omega_i^{(1,1)}$ on them:
\begin{equation}\label{seniorito}
\Pi^i_I \, \equiv \, \int_{\mathcal{C}_I} \omega_i^{(1,1)} \quad
; \quad i=1,\dots, r\, = \, |\Gamma|-1 \quad ; \quad I\, = \,
1,\dots , \ell \, = \, \# \, \mbox{ of senior c.c.}
\end{equation}
The rationale of the above counting is the following. Because of
Poincar\'e duality between $H^{(2,2)}(Y)$ and $H^{(1,1)}_c(Y)$ with
compact support, the number of senior classes is equal to the number
of $\omega_i^{(1,1)}$ with compact support.
\paragraph{Relevance of the cocycles and of the cycles for the
gauge/gravity dual pairs.} The above geometrical information is of
vital importance for the physical interpretation of the orbifold
resolution within the correspondence between supergravity brane
solutions and gauge theories on the world volume. The levels $\zeta$
of the moment maps are, in the gauge theory, the Fayet-Iliopoulos
parameters. On the supergravity side these latter emerge as fluxes
of $p$-forms partially or fully wrapped on homology cycles of the
space transverse to the brane\footnote{See in particular
\cite{Bertolini:2002pr} for an exemplification of this mechanism in
the case of a D3-brane with transverse space $\mathbb{C}\times
\mathrm{ALE}$.}.
\par
More specifically, given the blow--down morphism
\begin{equation}\label{bolligiu}
Y \, \longrightarrow \, \mathbb{C}^3/\Gamma
\end{equation}
in order to construct a \textit{bona--fide} M2--brane or D3--brane
solution of the relevant supergravity that is dual to the considered
world--volume gauge--theory we need a \textit{Ricci flat} K\"ahler
metric on the resolved manifold $Y$. This latter, which is clearly
identified, both topologically and algebraically, by the Kronheimer
construction, is not endowed by the corresponding K\"ahler quotient
with a Ricci-flat metric. The derivation of a Ricci-flat K\"ahler metric on
$Y$ is a difficult mathematical problem discussed in a forthcoming
publication \cite{conmasbia}, yet one thing is clear a priori. The
parameters of such a Ricci-flat K\"ahler metric must be in
correspondence with the Fayet-Iliopoulos parameters appearing in the
Kronheimer construction and should parameterize the volumes of the
homology cycles of $Y$. When one or more of these cycles shrink to
zero a singularity develops. When all cycles shrink to zero we
come back to singular orbifold $\mathbb{C}^3/\Gamma$ both on the
supergravity and on the gauge theory side.
\paragraph{What we do in this paper.} In this article we study a
specific nontrivial example of orbifold, whose analysis was only
sketched in \cite{Bruzzo:2017fwj}. It corresponds to a specific
embedding:
$$ \mathcal{Q} \quad : \quad \mathbb{Z}_4 \, \longrightarrow \, \mathrm{SU(3)}$$
In the case of cyclic groups the McKay correspondence
and the construction of the desingularizations of the singular quotient can be realized using
toric geometry \cite{itoriddo}. It is then fairly straightforward to identify the possible resolutions
and study in full detail their geometry. In particular, one identifies the toric divisors
and curves, including the compact divisors (a copy of the second Hirzebruch surface
$\mathbb F_2$ for the full resolution, and the weighted projective space $\mathbb P[1,1,2]$
for one of the partial desingularizations).
Utilizing the equations of the divisors provided by toric geometry,
we are able, by restricting the moment map equations to these
special loci, to compute the periods of the $(1,1)$-forms
$\omega^{(1,1)}_{1,2,3}$ on the basis of compact cycles in the
general case.
The construction of the desingularizations, be it made as a generalized Kronheimer
quotient \cite{degeratu,Bruzzo:2017fwj}, or as an algebro-geometric GIT quotient \cite{itoriddo,CrawIshii},
depends on a set of parameters, living in a linear space, called the {\em stability parameter space.}
This is partitioned in chambers, with the property that the quotient does not change when one moves
within the interior of a chamber, while nontrivial topological changes may occur while crossing a wall
between two chambers. In Sections \ref{camerataccademica} to \ref{summatheologica} we study in great
detail this chamber structure, by means of explicit calculations of the periods of the Chern classes
of the tautological bundles on the cycles that generate the 2-homology of the resolutions.
This geometrical study provides the basis for the construction of an
explicit dual pair either in M-theory or in type IIB theory. This we
plan to do in a new publication \cite{conmasbia}. Further comments
on the next steps in our program are left for the conclusions.
\section{The $\mathbb{C}^3/\mathbb{Z}_4$ model, its McKay quiver and
the associated Kronheimer construction} \label{maccaius} The action
of the group $\mathbb{Z}_4$ on $\mathbb{C}^3$ is defined by
introducing the three-dimensional unitary representation
$\mathcal{Q}(A)$ of its abstract generator $A$ that satisfies the
defining relation $A^4 \, = \, \mathbf{e}$. We set:
\begin{equation}\label{generatoreAZ4}
\mathcal{Q}(A) \, = \, \left(
\begin{array}{ccc}
i & 0 & 0 \\
0 & i & 0 \\
0 & 0 & -1 \\
\end{array}
\right) \quad ; \quad \mathcal{Q}(A)^4 \, = \, \left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
\end{equation}
Since $\mathbb{Z}_4$ is abelian and cyclic, each of its four
elements corresponds to an entire conjugacy class of which we can
easily calculate the age-vector and the age according to the
conventions established in \cite{Bruzzo:2017fwj}. We obtain:
\begin{equation}\label{panefresco}
\begin{array}{|c||c|c|c|c|}
\hline
\text{Conj.
Class}&\text{Matrix}&\text{age-vector}&\text{age}&\text{name}\\
\hline
\mathrm{Id} & \left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right) & \ft 14 \, (0,0,0) & 0 & \text{null class} \\
\hline
\mathcal{Q}(A) & \left(
\begin{array}{ccc}
i & 0 & 0 \\
0 & i & 0 \\
0 & 0 & -1 \\
\end{array}
\right) & \ft 14 \, (1,1,2) & 1 & \text{junior class} \\
\hline \mathcal{Q}(A)^2 & \left(
\begin{array}{ccc}
-1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right) & \ft 14 \, (2,2,0) & 1 & \text{junior class} \\
\hline
\mathcal{Q}(A)^3 & \left(
\begin{array}{ccc}
-i & 0 & 0 \\
0 & -i & 0 \\
0 & 0 & -1 \\
\end{array}
\right) & \ft 14 \, (3,3,2) & 2 & \text{senior class}\\
\hline
\end{array}
\end{equation}
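The table can be cross-checked mechanically: the age-vector of a conjugacy class is read off from the eigenvalue phases of the corresponding power of $\mathcal{Q}(A)$. The following Python sketch (our illustration, assuming \texttt{numpy} is available; not part of the original computation) reproduces the ages $0,1,1,2$:

```python
import numpy as np

Q = np.diag([1j, 1j, -1])   # the generator Q(A) of Z_4 inside SU(3)

def age_vector(M, n=4):
    """Eigenvalues of M are exp(2*pi*i*a_j/n); return the integers a_j."""
    phases = np.angle(np.linalg.eigvals(M)) / (2 * np.pi)
    return sorted(int(round(n * p)) % n for p in phases)

ages = []
for k in range(4):
    av = age_vector(np.linalg.matrix_power(Q, k))  # sorted; table lists entries unsorted
    ages.append(sum(av) // 4)
print(ages)   # one null, two junior and one senior class
```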
Therefore, according to the theorem of Ito and Reid
\cite{itoriddo}, as reviewed in \cite{Bruzzo:2017fwj}, in the
crepant resolution:
\begin{equation}\label{pirollo}
Y \, \equiv \, \mathcal{M}_\zeta \, \longrightarrow \, \frac{\mathbb{C}^3}{\mathbb{Z}_4}
\end{equation}
the Hodge numbers of the smooth resolved variety are as follows:
\begin{equation}\label{ganimellus}
h^{(0,0)}\left(\mathcal{M}_\zeta\right) \, = \,1 \quad ; \quad h^{(1,1)}\left(\mathcal{M}_\zeta\right) \, =
\,2 \quad ; \quad h^{(2,2)}\left(\mathcal{M}_\zeta\right) \, =
\,1
\end{equation}
Furthermore the existence of a senior class implies that one of the
two generators of $H^{(1,1)}\left(\mathcal{M}_\zeta\right)$
{can be chosen to have} compact support while the other {will have} non-compact support. As we
later discuss studying the resolution with the help of toric
geometry, this distinction goes hand in hand with the structure of
the exceptional divisor that has two components, one compact and one
non-compact.
The character table of the $\mathbb{Z}_4$ group is easily
calculated; it comprises four one-dimensional representations that we
respectively name $\mathcal{D}_i$, $(i\, = \, 0,1,2,3)$. The table is
given below.
\begin{equation}\label{caratteruccio}
\begin{array}{c||cccc}
\begin{array}{ccc}
\text{Irrep}& \setminus & \text{C.C.} \\
\end{array} & \mathbf{e} &A & A^2 & A^3\\
\hline \hline
{\mathcal{D}_0} &1 & 1 & 1 & 1 \\
{\mathcal{D}_1}& 1 & i & -1 & -i \\
{\mathcal{D}_2} & 1 & -1 & 1 & -1 \\
{\mathcal{D}_3} & 1 & -i & -1 & i \\
\end{array}
\end{equation}
\subsection{The McKay quiver diagram and its representation}
The information encoded in
eqs. \eqref{panefresco}, \eqref{caratteruccio} is sufficient to
calculate the McKay quiver matrix defined by:
\begin{equation}\label{carteodollo}
\mathcal{Q} \otimes \mathcal{D}_I\, = \, \bigoplus_{J=0}^3 \,\mathcal{A}_{IJ} \,\mathcal{D}_J
\end{equation}
where the $\mathcal{D}_I$ denote the four irreducible
representations of the group
{defined in eq.} \eqref{caratteruccio}, while $\mathcal{Q}$ is the
representation \eqref{panefresco} describing the embedding
$\mathbb{Z}_4 \hookrightarrow \mathrm{SU(3)}$. Explicitly we obtain
\begin{equation}\label{quiverroz4}
\mathcal{A}_{IJ} \, = \, \left(
\begin{array}{cccc}
0 & 2 & 1 & 0 \\
0 & 0 & 2 & 1 \\
1 & 0 & 0 & 2 \\
2 & 1 & 0 & 0 \\
\end{array}
\right)
\end{equation}
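By character orthogonality the quiver matrix can equivalently be computed as $\mathcal{A}_{IJ}=\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}\chi_{\mathcal{Q}}(\gamma)\,\chi_I(\gamma)\,\overline{\chi_J(\gamma)}$. A short numerical sketch of this evaluation (ours, for illustration):

```python
import numpy as np

n = 4
w = 1j   # primitive 4th root of unity
# characters of the four irreps D_I at A^k: chi_I(A^k) = i^(I*k)
chi = np.array([[w**(I*k) for k in range(n)] for I in range(n)])
# character of the embedding representation Q, with eigenvalues (i, i, -1)
chi_Q = np.array([2*w**k + (-1)**k for k in range(n)])

A = np.zeros((n, n), dtype=int)
for I in range(n):
    for J in range(n):
        # A_IJ = <chi_Q * chi_I, chi_J>  (orthogonality of characters)
        A[I, J] = round(np.real(np.sum(chi_Q * chi[I] * np.conj(chi[J])) / n))
print(A)
```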
A graphical representation of the quiver matrix \eqref{quiverroz4}
is provided in fig.~\ref{mckayquivz4}.
\begin{figure}
\centering
\includegraphics[height=7cm]{quivz4dia.png}
\caption{ \label{mckayquivz4} The McKay quiver of the $\mathbb{Z}_4$
group embedded into $\mathrm{SU(3)}$ according {to}
eq.~\eqref{panefresco}.}
\end{figure}
Every node of the diagram corresponds to an irreducible
representation $\mathcal{D}_I$ and, in the Kronheimer construction, to a gauge
group factor $\mathrm{U}\left(\dim \mathcal{D}_I \right )$.
As one sees, three lines enter and three lines exit every node.
Both the incoming and the outgoing lines split into a double line
to (or from) one node and a single line to (or from) another node.
This information is sufficient to derive the number of Wess-Zumino
multiplets in the corresponding supersymmetric gauge theory and
assign their representations with respect to the three gauge groups
(actually four minus one for the barycentric motion) as we will more
extensively discuss in the forthcoming paper \cite{conmasbia}.
\par
From the mathematical viewpoint each line of the diagram corresponds
to an independent parameter appearing in the explicit construction
of the space $\mathcal{S}_{\mathbb{Z}_4}$. This latter is
constructed as follows. Let $R$ denote the regular representation of
$\Gamma$. We consider the space of triplets of $4\times 4$ complex
matrices:
\begin{eqnarray}
p \in \mathcal{P}_{\mathbb{Z}_4} \, \equiv \, \mbox{Hom}\left(R,\mathcal{Q}\otimes R\right) \, \Rightarrow\,
p\,=\, \left(\begin{array}{c}
A \\
B \\
C
\end{array}
\right) \label{homqg}
\end{eqnarray}
The action of the discrete group $\mathbb{Z}_4$ on the space
$\mathcal{P}_\Gamma$ is defined in full analogy with the Kronheimer
case by:
\begin{equation}\label{gammazione}
\forall \gamma \in \mathbb{Z}_4: \quad \gamma\cdot p \,\equiv\, \mathcal{Q}(\gamma)\,\left(\begin{array}{c}
R(\gamma)\, A \, R(\gamma^{-1})\\
R(\gamma)\, B \, R(\gamma^{-1}) \\
R(\gamma)\, C \, R(\gamma^{-1})
\end{array}
\right)
\end{equation}
where $R(\gamma)$ denotes the $4 \times 4$ matrix image of $\gamma$
in the regular representation.
The subspace $\mathcal{S}_{\mathbb{Z}_4}$ is obtained by setting:
\begin{equation}
\mathcal{S}_{\mathbb{Z}_4} \, \equiv \,
\mbox{Hom}\left(R,\mathcal{Q}\otimes R\right)^{\mathbb{Z}_4}\, = \,
\left\{p\in\mathcal{P}_{\mathbb{Z}_4} / \forall \gamma\in
\mathbb{Z}_4 , \gamma\cdot p = p\right\}\,\,
\label{carnevalediPaulo}
\end{equation}
As we know from the general theory exposed in \cite{Bruzzo:2017fwj}
the space $\mathcal{S}_{\mathbb{Z}_4}$ must have complex dimension
$3\times |\mathbb{Z}_4| \, = \, 12$ which is indeed the number of
lines in the quiver diagram of fig. \ref{mckayquivz4}. In the basis
where the regular representation has been diagonalized with the help
of the character Table \eqref{caratteruccio} the general form of the
triplet of matrices composing
$\mbox{Hom}_{\mathbb{Z}_4}\left(R,\mathcal{Q}\otimes R\right)$ and
therefore providing the representation of the quiver diagram of fig.
\ref{mckayquivz4} is the following one:
\begin{equation}\label{pastrugno}
\begin{array}{ccccccc}
A & = & \left(
\begin{array}{cccc}
0 & 0 & 0 & \Phi^{(1)}_{0,3} \\
\Phi^{(1)}_{1,0} & 0 & 0 & 0 \\
0 & \Phi^{(1)}_{2,1} & 0 & 0 \\
0 & 0 & \Phi^{(1)}_{3,2} & 0 \\
\end{array}
\right) & ; & B & = & \left(\begin{array}{cccc}
0 & 0 & 0 & \Phi^{(2)}_{0,3} \\
\Phi^{(2)}_{1,0} & 0 & 0 & 0 \\
0 & \Phi^{(2)}_{2,1} & 0 & 0 \\
0 & 0 & \Phi^{(2)}_{3,2} & 0 \\
\end{array}
\right) \\
\null&\null& \null & \null & \null&\null& \null \\
C & = & \left(
\begin{array}{cccc}
0 & 0 & \Phi^{(3)}_{0,2} & 0 \\
0 & 0 & 0 & \Phi^{(3)}_{1,3} \\
\Phi^{(3)}_{2,0} & 0 & 0 & 0 \\
0 & \Phi^{(3)}_{3,1} & 0 & 0 \\
\end{array}
\right) & \null & \null & \null & \null
\end{array}
\end{equation}
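The counting $\dim_{\mathbb{C}}\mathcal{S}_{\mathbb{Z}_4}=3\,|\mathbb{Z}_4|=12$ can be verified independently of any choice of basis by computing the fixed-point subspace of the generator action \eqref{gammazione}, with $R(A)$ realized as the cyclic permutation matrix of the regular representation. A numerical sketch (ours):

```python
import numpy as np

# generator of Z_4 in the Q representation and in the regular representation
Q = np.diag([1j, 1j, -1])
R = np.roll(np.eye(4), 1, axis=0)        # cyclic permutation matrix, R^4 = 1

# action of the generator on p = (A, B, C), vectorized:
# (gamma.p)_m = sum_n Q_mn R p_n R^{-1};  vec(R X R^{-1}) = (R^{-T} kron R) vec(X)
Rinv = np.linalg.inv(R)
T = np.kron(Q, np.kron(Rinv.T, R))       # 48 x 48 operator on Hom(R, Q x R)

# S_{Z4} is the fixed-point subspace of T; its dimension should be 3|Gamma| = 12
sv = np.linalg.svd(T - np.eye(48), compute_uv=False)
dim = np.sum(sv < 1e-10)                  # T is unitary, so the gap is clean
print(dim)
```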
The twelve complex parameters $\Phi^{(J)}_{p,q}$ with $J=1,2,3$,
$p,q=0,1,2,3$, promoted to be functions of the space-time
coordinates $\xi^\mu$:
$$\Phi^{(J)}_{p,q}(\xi)$$
are the complex scalar fields filling the flat K\"ahler manifold of
the Wess-Zumino multiplets in the microscopic lagrangian of the
corresponding gauge theory.
\subsection{The locus $\mathbb{V}_6 \subset \mathcal{S}_{\mathbb{Z}_4}$}
As it was explained in the context of the general framework in
\cite{Bruzzo:2017fwj}, the $3|\Gamma|$-dimensional flat K\"ahler
manifold $\mathcal{S}_\Gamma$ always contains a subvariety
$\mathbb{V}_{|\Gamma|+2} \subset \mathcal{S}_\Gamma$ of dimension
$|\Gamma|+2$ which is singled out by the following set of quadratic
equations:
\begin{equation}\label{ceramicus}
\left[A\, , \, B\right] \, = \, \left[B\, , \, C\right]\, = \, \left[C\, , \,
A\right]\, = \, 0
\end{equation}
From the physical point of view, the holomorphic equations
\eqref{ceramicus} arise as the vanishing of the superpotential
derivatives, $\partial_i\mathcal{W}(\Phi) \, = \, 0$, in the search
for the scalar potential extrema, namely for the classical vacua of
the gauge theory.
$\mathbb{V}_{|\Gamma|+2}$ is the one we start from in order to
calculate the K\"ahler quotient $\mathcal{M}_\zeta$ which
provides the crepant resolution of the singularity. At the end of
the day $\mathcal{M}_\zeta$ is just the manifold of
classical vacua of the gauge theory.
As discussed in \cite{Bruzzo:2017fwj}, the vanishing locus
\eqref{ceramicus} {consists of several irreducible components} of different
dimensions, {and $\mathbb{V}_{|\Gamma|+2}=\mathbb{V}_6$ is the only component of dimension 6. It can be
represented in the form $\mathcal{G}_\Gamma\cdot L_\Gamma$, where
$\mathcal{G}_\Gamma$ is the holomorphic quiver group,
defined in the next Section, and $L_\Gamma$ is the three dimensional locus
that we will shortly characterize. An open part of this principal component $\mathbb{V}_6$ can be given by the following explicit equations:}
\begin{eqnarray}\label{bagnomarietta}
&& \Phi^{(2)}_{1,0}\, = \, \frac{\Phi^{(1)}_{1,0} \Phi^{(2)}_{0,3}}{\Phi^{(1)}_{0,3}},\quad \Phi^{(2)}_{2,1}
\, = \, \frac{\Phi^{(1)}_{2,1}
\Phi^{(2)}_{0,3}}{\Phi^{(1)}_{0,3}},\quad \Phi^{(2)}_{3,2}
\, = \, \frac{\Phi^{(1)}_{3,2} \Phi^{(2)}_{0,3}}{\Phi^{(1)}_{0,3}},\nonumber\\
&& \Phi^{(3)}_{1,3}\, = \,
\frac{\Phi^{(1)}_{1,0} \Phi^{(3)}_{0,2}}{\Phi^{(1)}_{3,2}},\quad
\Phi^{(3)}_{2,0}\, = \, \frac{\Phi^{(1)}_{1,0} \Phi^{(1)}_{2,1}
\Phi^{(3)}_{0,2}}{\Phi^{(1)}_{0,3} \Phi^{(1)}_{3,2}},\quad \Phi^{(3)}_{3,1}\, = \,
\frac{\Phi^{(1)}_{2,1} \Phi^{(3)}_{0,2}}{\Phi^{(1)}_{0,3}}\end{eqnarray}
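One can verify directly that the substitutions \eqref{bagnomarietta} enforce the vanishing of all three commutators \eqref{ceramicus}, for arbitrary values of the six independent parameters. A numerical sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(1)
u = lambda: rng.uniform(0.5, 1.5) + 1j * rng.uniform(0.5, 1.5)

# six independent parameters on the principal component V_6
f1 = {(0, 3): u(), (1, 0): u(), (2, 1): u(), (3, 2): u()}
p203, p302 = u(), u()

# dependent parameters, as dictated by eq. (bagnomarietta)
f2 = {k: f1[k] * p203 / f1[(0, 3)] for k in [(1, 0), (2, 1), (3, 2)]}
f2[(0, 3)] = p203
f3 = {(0, 2): p302,
      (1, 3): f1[(1, 0)] * p302 / f1[(3, 2)],
      (2, 0): f1[(1, 0)] * f1[(2, 1)] * p302 / (f1[(0, 3)] * f1[(3, 2)]),
      (3, 1): f1[(2, 1)] * p302 / f1[(0, 3)]}

def mat(d):
    """Place the entries in the sparsity pattern of eq. (pastrugno)."""
    M = np.zeros((4, 4), complex)
    for (p, q), v in d.items():
        M[p, q] = v
    return M

A, B, C = mat(f1), mat(f2), mat(f3)
residues = [np.max(np.abs(M @ N - N @ M)) for M, N in [(A, B), (B, C), (C, A)]]
print(residues)   # all three commutators vanish up to rounding
```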
\subsection{The holomorphic quiver group $\mathcal{G}_{\mathbb{Z}_4}$ and the gauge group $\mathcal{F}_{\mathbb{Z}_4}$ }
Following the general scheme outlined in \cite{Bruzzo:2017fwj}, we
see that the locus $\mathcal{S}_{\mathbb{Z}_4}$ is mapped into
itself by the action of the \textit{complex quiver group}:
\begin{equation}\label{cromostatico}
\mathcal{G}_{\mathbb{Z}_4} \, = \, \mathbb{C}^\star \times \mathbb{C}^\star
\times \mathbb{C}^\star \, \simeq \,\pmb{\Lambda} \, \equiv \, \left(
\begin{array}{c|c|c|c}
\mathbb{C}^\star & 0 & 0 & 0 \\
\hline
0 & \mathbb{C}^\star & 0 & 0 \\
\hline
0 & 0 & \mathbb{C}^\star & 0 \\
\hline
0 & 0 & 0 & \mathbb{C}^\star\\
\end{array}
\right) \quad ; \quad
\mbox{det}\, \pmb{\Lambda}\, =
\, 1
\end{equation}
The gauge group of the final gauge theory is the maximal compact
subgroup of $\mathcal{G}_{\mathbb{Z}_4}$, namely:
\begin{equation}\label{cromodinamico}
\mathcal{F}_{\mathbb{Z}_4} \, = \, \mathrm{U(1)} \times \mathrm{U(1)}
\times \mathrm{U(1)} \, \simeq \,\pmb{\Xi} \, \equiv \, \left(
\begin{array}{c|c|c|c}
\mathrm{U(1)} & 0 & 0 & 0 \\
\hline
0 & \mathrm{U(1)} & 0 & 0 \\
\hline
0 & 0 & \mathrm{U(1)} & 0 \\
\hline
0 & 0 & 0 & \mathrm{U(1)}\\
\end{array}
\right) \quad ; \quad
\mbox{det}\, \pmb{\Xi}\, =
\, 1
\end{equation}
The explicit form of the matrices $\pmb{\Lambda}$ and $\pmb{\Xi}$ is
that appropriate to the basis where the regular representation is
diagonalized. In that basis the charge assignments (representation
assignments) of the scalar fields are read off from the
transformation rule:
\begin{equation}\label{caricopendente}
A(\Phi^\prime) \, = \, \pmb{\Xi}^{-1} \, A(\Phi)\, \pmb{\Xi},
\quad B(\Phi^\prime) \, = \, \pmb{\Xi}^{-1} \, B(\Phi) \,\pmb{\Xi},
\quad C(\Phi^\prime) \, = \, \pmb{\Xi}^{-1} \, C(\Phi) \,\pmb{\Xi}
\end{equation}
Then we consider the locus $L_{\mathbb{Z}_4}$ made of those triplets
of matrices $A,B,C$ that belong to $\mathcal{S}_\Gamma$ and are
diagonal in the natural basis of the regular representation. In the
diagonal basis of the regular representation the same matrices
$A,B,C$ have the following form:
\begin{equation}\label{baldovinus}
\begin{array}{ccccccc}
A_0 & = & \left(
\begin{array}{cccc}
0 & 0 & 0 & Z^1 \\
Z^1 & 0 & 0 & 0 \\
0 & Z^1 & 0 & 0 \\
0 & 0 & Z^1 & 0 \\
\end{array}
\right) & ; & B_0 & = & \left(\begin{array}{cccc}
0 & 0 & 0 & Z^2 \\
Z^2 & 0 & 0 & 0 \\
0 & Z^2 & 0 & 0 \\
0 & 0 & Z^2 & 0 \\
\end{array}
\right) \\
\null&\null& \null & \null & \null&\null& \null \\
C_0 & = & \left(
\begin{array}{cccc}
0 & 0 & Z^3 & 0 \\
0 & 0 & 0 & Z^3\\
Z^3 & 0 & 0 & 0 \\
0 & Z^3 & 0 & 0 \\
\end{array}
\right) & \null & \null & \null & \null
\end{array}
\end{equation}
The fields $Z^{1,2,3}$ provide a set of three coordinates spanning
the three-dimensional locus $L_{\mathbb{Z}_4}$. The complete locus
$\mathbb{V}_6$ coincides with the orbit of $L_{\mathbb{Z}_4}$ under
the free action of $\mathcal{G}_{\mathbb{Z}_4}$:
\begin{equation}\label{gestaltus}
\mathbb{V}_6 \, = \,
{\mathcal{G}_{\mathbb{Z}_4}\cdot L_{\mathbb{Z}_4}}
\, = \, \left(\begin{array}{c}
\pmb{\Lambda}^{-1} \, A_0 \,\,\pmb{\Lambda} \\
\pmb{\Lambda}^{-1} \, B_0 \,\,\pmb{\Lambda} \\
\pmb{\Lambda}^{-1} \, C_0 \,\,\pmb{\Lambda}
\end{array}
\right)
\end{equation}
\subsection{The moment map equations}
Implementing once more the general procedure outlined in
\cite{Bruzzo:2017fwj} we arrive at the moment map equations and at
the final crepant resolution of the singularity in the following
way.
We refer the reader to \cite{Bruzzo:2017fwj} for the general
definition of the moment map
$$\mu \quad : \quad \mathcal{S}_\Gamma \, \longrightarrow \, \mathbb{F}_\Gamma^\star$$
where $\mathbb{F}_\Gamma^\star$ denotes the dual (as vector spaces)
of the Lie algebra $\mathbb{F}_\Gamma$ of the gauge group. We recall
that the preimage of the zero level of the moment map is the $\mathcal{F}_\Gamma$
orbit of the locus $L_\Gamma$:
\begin{equation}\label{quadriglia}
\mu^{-1}\left(0\right) \, = \,
{\mathcal{F}_\Gamma\cdot L_\Gamma.}
\end{equation}
Note that $L_\Gamma$ is actually $\mathbb C^3$,
so that the {image of $L_\Gamma$ in} the K\"ahler quotient of level zero
coincides with the original singular variety
$\mathbb{C}^3/\Gamma$ (cf.~Lemma 3.1 in \cite{kro1}).
Next, as in \cite{Bruzzo:2017fwj}, we consider the following
decomposition of the Lie quiver group algebra:
\begin{eqnarray}
\mathbb{G}_{\mathbb{Z}_4} &=& \mathbb{F}_{\mathbb{Z}_4} \oplus
\mathbb{K}_{\mathbb{Z}_4}\\
\left[\mathbb{F}_{\mathbb{Z}_4} \, , \,
\mathbb{F}_{\mathbb{Z}_4}\right] &\subset & \mathbb{F}_{\mathbb{Z}_4}
\quad ; \quad
\left[\mathbb{F}_{\mathbb{Z}_4} \, , \,
\mathbb{K}_{\mathbb{Z}_4}\right] \,\subset \,
\mathbb{K}_{\mathbb{Z}_4} \quad ; \quad
\left[\mathbb{K}_{\mathbb{Z}_4} \, , \,
\mathbb{K}_{\mathbb{Z}_4}\right] \,\subset \,
\mathbb{F}_{\mathbb{Z}_4} \label{salameiolecco}
\end{eqnarray}
where $\mathbb{F}_{\mathbb{Z}_4}$ is the maximal compact subalgebra.
A special feature of all the quiver groups and Lie algebras is that
$\mathbb{F}_\Gamma$ and $\mathbb{K}_\Gamma$ have the same real
dimension $|\Gamma|-1$ and one can choose a basis of Hermitian
generators $T^I$ such that:
\begin{equation}\label{sacherdivuli}
\begin{array}{ccccccc}
\forall \pmb{\Phi} \in \mathbb{F}_\Gamma & : & \pmb{\Phi} & = &
{\rm i} \times \sum_{I=1}^{|\Gamma|-1} c_I T^I & ; &
c_I \in \mathbb{R} \\
\forall \pmb{K} \in \mathbb{K}_\Gamma & : & \pmb{K} & = &
\sum_{I=1}^{|\Gamma|-1} b_I T^I & ; &
b_I \in \mathbb{R} \\
\end{array}
\end{equation}
Correspondingly a generic element $g\in \mathcal{G}_{\mathbb{Z}_4}$
can be split as follows:
\begin{equation}\label{consolatio}
\forall g \in \mathcal G_{\mathbb{Z}_4} \quad : \quad g=\mathcal{U} \,
\mathcal{H} \quad ; \quad \mathcal{U} \in
\mathcal{F}_{\mathbb{Z}_4} \quad ; \quad \mathcal{H} \in
\exp\left[ \mathbb{K}_{\mathbb{Z}_4}\right]
\end{equation}
Using the above property we arrive at the following parametrization
of the space $\mathbb{V}_6$
\begin{equation}\label{krumiro}
\mathbb{V}_6 \, = \,
{\mathcal{F}_{\mathbb{Z}_4}\cdot}
\left(\exp\left[
\mathbb{K}_{\mathbb{Z}_4}\right]\cdot L_{\mathbb{Z}_4}\right)
\end{equation}
where, by definition, we have set:
\begin{eqnarray}
p\in \exp\left[
\mathbb{K}_{\mathbb{Z}_4}\right]\cdot L_{\mathbb{Z}_4} &\Rightarrow &
p=\left\{\exp\left[-\pmb{K}\right]\, A_0
\exp\left[\pmb{K}\right], \, \exp\left[-\pmb{K}\right]\, B_0\,
\exp\left[\pmb{K}\right],\, \exp\left[-\pmb{K}\right]\, C_0
\exp\left[\pmb{K}\right]\right\} \nonumber\\
\left\{ A_0, \, B_0,\, C_0\right\} &\in & L_{\mathbb{Z}_4} \nonumber\\
{ \pmb{K}} & { \in}& { \mathbb{K}_{\mathbb{Z}_4}} \label{centodiquestigiorni}
\end{eqnarray}
In our case the three generators $T^I$ of the real subspace
{$\mathbb{K}_{\mathbb{Z}_4}$} have been chosen as follows:
\begin{equation}\label{TIgener}
\begin{array}{ccccccc}
T^1 & = & \left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{array}
\right) & ; & T^2 & = & \left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 \\
\end{array}
\right) \\
\null & \null & \null & \null & \null & \null & \null \\
T^3 & = & \left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1 \\
\end{array}
\right) & \null & \null & \null & \null
\end{array}
\end{equation}
So that the relevant real group element takes the following form:
\begin{equation}\label{sirenus}
\exp[\pmb{K}] \, = \, \left(
\begin{array}{cccc}
\mathfrak{H}_1 & 0 & 0 & 0 \\
0 & \frac{\mathfrak{H}_2}{\mathfrak{H}_1}
& 0 & 0 \\
0 & 0 &
\frac{\mathfrak{H}_3}{\mathfrak{H}_2} &
0 \\
0 & 0 & 0 & \frac{1}{\mathfrak{H}_3} \\
\end{array}
\right)
\end{equation}
where the $\mathfrak{H}_I$ are, by definition, real. Relying on
this, in the K\"ahler quotient we can invert the order of the
operations: first we quotient by the action of the compact gauge
group $\mathcal{F}_{\mathbb{Z}_4}$, and then we implement the moment
map constraints. We have:
\begin{equation}\label{cascapistola}
\mathbb{V}_6/\!\!/_{\mathcal{F}_{\mathbb{Z}_4}}\, =
\, {\left(\exp\left [\mathbb{K}_{\mathbb{Z}_4}\right]\cdot L_{\mathbb{Z}_4}\right)/\mathbb{Z}_4,}
\end{equation}
{where $\mathbb{Z}_4$ acts on $L_{\mathbb{Z}_4}$ via the action induced by that of the stabiliser of $L_{\mathbb{Z}_4}$ in $\mathcal{F}_{\mathbb{Z}_4}$.}
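Note that the matrix \eqref{sirenus} is precisely $\exp\left[\sum_I b_I T^I\right]$ with the identification $\mathfrak{H}_I = e^{b_I}$, which makes the reality and positivity of the $\mathfrak{H}_I$ manifest. A short numerical confirmation (our sketch, with sample coefficients $b_I$):

```python
import numpy as np

T = [np.diag([1., -1, 0, 0]), np.diag([0, 1., -1, 0]), np.diag([0, 0, 1., -1])]
b = np.array([0.3, -0.7, 1.1])                 # sample real coefficients b_I
K = sum(bI * TI for bI, TI in zip(b, T))       # K = sum_I b_I T^I (diagonal)
expK = np.diag(np.exp(np.diag(K)))             # exponential of a diagonal matrix

H = np.exp(b)                                  # identification H_I = e^{b_I}
target = np.diag([H[0], H[1] / H[0], H[2] / H[1], 1 / H[2]])   # eq. (sirenus)
print(np.allclose(expK, target))               # True
```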
The explicit form of {the triple} of matrices mentioned in
eq.~\eqref{centodiquestigiorni} is easily calculated on the basis of
eqs.~\eqref{baldovinus} and \eqref{sirenus}
\begin{equation}\label{caglioccio}
p\, = \,\left(\begin{array}{c}
A \\
B \\
C
\end{array}
\right) \, \equiv \, \left(\begin{array}{c}
\exp[-\pmb{K}] \, A_0 \, \exp[\pmb{K}]\\
\exp[-\pmb{K}] \, B_0 \, \exp[\pmb{K}]\\
\exp[-\pmb{K}] \, C_0 \, \exp[\pmb{K}]
\end{array}
\right)
\end{equation}
The moment map {is given by:}
\begin{eqnarray}\label{momentidimappa}
\mu\left(p\right)& = & \left\{ \mathfrak{P}_1,\mathfrak{P}_2,\mathfrak{P}_3 \right\} \nonumber\\
\mathfrak{P}_I & = &\mathrm{Tr}\left[ T_I \, \left(\left[A\,
, \, A^\dagger\right]\, +\, \left[B\,
, \, B^\dagger\right]\, +\, \left[C\,
, \, C^\dagger\right]\right)\right]
\end{eqnarray}
Imposing the moment map constraint we find:
\begin{equation}\label{carampana}
\mu^{-1}\left( \zeta\right)/\!\!/_{\mathcal{F}_{\mathbb{Z}_4}}\, =
\, \left\{ p\, \in \exp\left [\mathbb{K}_{\mathbb{Z}_4}\right]\cdot
L_{\mathbb{Z}_4}\,
\parallel \, \mathfrak{P}_I (p) \,= \, \zeta_I \right\} {/\mathbb{Z}_4.}
\end{equation}
Eq.\,\eqref{carampana} provides an explicit algorithm to calculate
the K\"ahler potential of the final resolved manifold if we are able
to solve the constraints for $\mathfrak{H}_I$ in terms of the triple
of complex coordinates $Z^i$ ($i = 1,2,3$). Indeed we recall that the
K\"ahler potential $\mathcal{K}_{\mathcal{M}_\zeta}$ of the resolved variety
$\mathcal M_\zeta = \mathcal N_\zeta/\mathcal{F}_{\mathbb{Z}_4}$, where $ \mathcal N_\zeta = \mu^{-1}(\zeta)\subset \mathcal S_\Gamma$,
is given by the
celebrated formula (see \cite[eq.~(3.58)]{HKLR} and also \cite{Billo:1993rd})
\begin{equation}\label{celeberro}
\mathcal{K}_{\mathcal{M}_\zeta}\, = \mathcal{K}_{\mathcal{N}_\zeta} \,
+ \zeta_I \, \mathfrak{C}^{IJ} \,
\log
{ \mathfrak{H}_J^{2\alpha_{\zeta_J}}} . \end{equation}
Here $ \mathcal{K}_{\mathcal{N}_\zeta}$ is the restriction to $\mathcal{N}_\zeta$ of the K\"ahler potential of the flat K\"ahler metric on $\mathcal S_\Gamma$,
which is $\mathcal{F}_{\mathbb{Z}_4}$-invariant and therefore can be regarded as a function on $\mathcal{M}_\zeta$.
The positive rational constants $\alpha_\zeta$ are to be chosen so that the functions $ { \mathfrak{H}_J^{2\alpha_{\zeta_J}}}$
are hermitian fiber metrics on the three
tautological bundles; these constants are completely determined by the geometry, but it will be easier to fix them later on,
using the fact that the Chern characters of the tautological line bundles are a basis of the cohomology ring of $\mathcal{M}_\zeta$ \cite{CrawIshii}.
Moreover,
\begin{equation}\label{romualdo}
\mathfrak{C}^{IJ} \, = \,
\mbox{Tr}\left(T^I \, T^J\right ) \, = \, \left(
\begin{array}{ccc}
2 & -1 & 0 \\
-1 & 2 & -1 \\
0 & -1 & 2 \\
\end{array}
\right)
\end{equation}
is the matrix of scalar products of the gauge group generators.
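Evaluating the traces with the generators \eqref{TIgener} indeed reproduces the $A_3$ Cartan matrix; a minimal check (ours):

```python
import numpy as np

# the three generators T^I of eq. (TIgener)
T = [np.diag([1, -1, 0, 0]), np.diag([0, 1, -1, 0]), np.diag([0, 0, 1, -1])]
C = np.array([[np.trace(Ti @ Tj) for Tj in T] for Ti in T])
print(C)   # the A_3 Cartan matrix
```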
The final outcome of this calculation was already presented in
Section 9 of \cite{Bruzzo:2017fwj}. As it was done there, it is
convenient to consider the following linear combinations:
\begin{equation}\label{innominata}
\left(
\begin{array}{ccc}
1 & 0 & -1 \\
1 & -1 & 1 \\
0 & 1 & 0 \\
\end{array}
\right)\, \left(\begin{array}{c}
\mathfrak{P}_1 -\zeta_1\\
\mathfrak{P}_2 -\zeta_2\\
\mathfrak{P}_3-\zeta_3
\end{array}
\right) \, = \, 0
\end{equation}
In this way we obtain:
\begin{eqnarray}
\left(
\begin{array}{c}
-\frac{\left(X_1^2-X_3^2\right) \left(X_1 X_3
\left(\Delta _1^2+\Delta _2^2\right)+\left(1+X_2^2\right)
\Delta _3^2\right)}{X_1 X_2 X_3} \\
\frac{\left(X_2+X_2^3-X_1 X_3 \left(X_1^2+X_3^2\right)\right)
\left(\Delta _1^2+\Delta _2^2\right)}{X_1 X_2 X_3} \\
-\frac{\left(-1+X_2^2\right) \left(X_2 \left(\Delta _1^2+\Delta _2^2\right)
+\left(X_1^2+X_3^2\right)
\Delta _3^2\right)}{X_1 X_2 X_3} \\
\end{array}
\right) & = & \left(
\begin{array}{c}
\zeta _1-\zeta _3 \\
\zeta _1-\zeta _2+\zeta _3 \\
\zeta _2 \\
\end{array}
\right) \label{sakerdivoli}
\end{eqnarray}
where $\Delta_i \, = \, |Z^i|$ are the moduli of the three
complex coordinates $Z^i$, and $X_J= \mathfrak{H}_J^2$ for $J=1,2,3$.
Applying the general framework developed in \cite{Bruzzo:2017fwj} we
have
\begin{equation}
\mathcal{H}\,\equiv \,\left(
\begin{array}{|c|c|c|}
\hline
\mathfrak{H}_1 & 0 & 0 \\
\hline
0 & \mathfrak{H}_2 & 0 \\
\hline
0 & 0 & \mathfrak{H}_{3} \\
\hline
\end{array}
\right)\label{tautobundmetro}
\end{equation}
and the positive definite hermitian matrix $\mathcal{H}^{2\alpha_{\zeta_J}}$ is the
fiber metric on the direct sum:
\begin{equation}\label{direttosummo}
\mathcal{R}\,=\,\bigoplus_{I=1}^{r} \, \mathcal{R}_I
\end{equation}
of the $r=3$ tautological bundles that, by construction, are
holomorphic vector bundles with rank equal to the dimensions of the
three nontrivial irreducible representations of $\Gamma$, which in
this case are all equal to one:
\begin{equation}\label{tautibundiEach}
\mathcal{R}_I \, \stackrel{\pi}{\longrightarrow}\,
\mathcal{M}_\zeta\quad\quad ;
\quad \quad\forall p \in \mathcal{M}_\zeta\quad :\quad
\pi^{-1}(p) \approx \mathbb{C}^{n_I}
\end{equation}
The compatible connection\footnote{Following standard mathematical
nomenclature, we call a connection on a holomorphic vector bundle
compatible if its $(0,1)$ part is the Cauchy-Riemann operator of
the bundle.} on the holomorphic vector bundle $\mathcal{R}=\bigoplus_I\mathcal R_I$ is given
by $\vartheta = \bigoplus_I\vartheta_I$, where
\begin{eqnarray}\label{comancio}
\vartheta_I = \alpha_{\zeta_I} \,\partial \log {X}_I = \mathcal{H}^{-2{\alpha_{\zeta_I}}} \,\partial\mathcal{H}^{2\alpha_{\zeta_I}}\end{eqnarray}
which is a 1-form with values in ${\mathbb C}$, the Lie algebra of the
structural group ${\mathbb C}^\ast $ of the $I$-th tautological vector
bundle. The natural connection of the $\mathcal{F}_{\mathbb{Z}_4}$
principal bundle, mentioned in eq.\,\eqref{cromodinamico} is just,
according
{to the} universal scheme, the imaginary part of the
connection $\vartheta$.
In order to solve the system of equations \eqref{sakerdivoli} it is
convenient to change variables and write:
\begin{eqnarray}\label{lobus}
&& \Sigma \,= \, \Delta_1^2 + \Delta_2^2 \quad ; \quad U \, = \,
\Delta_3^2
\end{eqnarray}
In this way we obtain:
\begin{equation}
\left\{ \begin{array}{lcl}
-U X_2^2 X_1^2-U X_1^2+U X_2^2 X_3^2+U X_3^2-\zeta _1 X_2 X_3 X_1+\zeta _3 X_2 X_3 X_1-\Sigma X_3 X_1^3
+\Sigma X_3^3 X_1 & = &0 \\
-\zeta _1 X_2 X_3 X_1+\zeta _2 X_2 X_3 X_1-\zeta _3 X_2 X_3 X_1-\Sigma X_3 X_1^3-\Sigma X_3^3 X_1+\Sigma X_2^3
+\Sigma X_2 & = &0\\
-U X_1^2 X_2^2-U X_3^2 X_2^2+U X_1^2+U X_3^2-\zeta _2 X_1 X_3 X_2-\Sigma X_2^3+\Sigma X_2& = &0 \\
\end{array}\right. \label{sistemico}
\end{equation}
This is the fundamental algebraic system encoding all information
about the singularity resolution.
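As a quick illustration, the system can be probed numerically: for sample positive values of $U$, $\Sigma$ and generic $\zeta$, a naive Newton iteration (ours, not the method used in the text) finds a positive root, which moreover satisfies the relation \eqref{caragamba} discussed below:

```python
import numpy as np

def F(X, U, S, z):
    """The three moment map equations of eq. (sistemico)."""
    X1, X2, X3 = X
    z1, z2, z3 = z
    P = X1 * X2 * X3
    return np.array([
        -U*X2**2*X1**2 - U*X1**2 + U*X2**2*X3**2 + U*X3**2
            + (z3 - z1)*P - S*X3*X1**3 + S*X3**3*X1,
        (z2 - z1 - z3)*P - S*X3*X1**3 - S*X3**3*X1 + S*X2**3 + S*X2,
        -U*X1**2*X2**2 - U*X3**2*X2**2 + U*X1**2 + U*X3**2
            - z2*P - S*X2**3 + S*X2])

def newton(U, S, z, X0=(1.0, 1.0, 1.0)):
    X, h = np.array(X0), 1e-7
    for _ in range(100):
        J = np.empty((3, 3))                  # finite-difference Jacobian
        for j in range(3):
            e = np.zeros(3); e[j] = h
            J[:, j] = (F(X + e, U, S, z) - F(X - e, U, S, z)) / (2 * h)
        step = np.linalg.solve(J, -F(X, U, S, z))
        X = X + step
        if np.max(np.abs(step)) < 1e-13:
            break
    return X

U, S, z = 1.0, 1.0, (0.2, 0.3, 0.1)
X1, X2, X3 = newton(U, S, z)
print(X1, X2, X3, np.max(np.abs(F((X1, X2, X3), U, S, z))))
```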
\section{Properties of the moment map algebraic system and chamber
structure}\label{GenMap} The solvability of the system
\eqref{sistemico}, viewed as a set of algebraic equations of higher
order, has some very peculiar properties that encode the topology
and analytic structure of the resolved manifold $Y$ and of its
possible degenerations. The most relevant property of
\eqref{sistemico} is that for generic values $U>0$, $\Sigma>0$ it
always has one and only one root where all the $X_i$ are real and
positive. This is of course expected from the general theory, since
we know that there is a well-defined quotient, and a K\"ahler metric
on the quotient, for every generic choice of the {\em level
parameters} $\zeta$ (these are called {\em Fayet-Iliopoulos
parameters} in gauge theory, while in algebraic geometry they
correspond to the so-called {\em stability parameters}); it has also
been verified numerically on arbitrarily large collections of random
points $\zeta \in \mathbb{R}^3$ and of points $\{U,\Sigma\} \in
\mathbb{R}^2_+$.
\paragraph{The special surface $\mathcal{S}_2$.} Although it is not
a wall, there is in $\zeta$ space a planar surface defined by the
following conditions
\begin{equation}\label{essedue}
\mathcal{S}_2 \, \equiv \, \left\{\zeta_1\, =\, \zeta_3\, =\, a
, \quad\zeta_2\, =\, b \neq 2a \right\}
\end{equation}
where the algebraic system \eqref{sistemico} acquires a more
manageable form without losing generality. The strong
simplification is encoded in the following condition:
\begin{equation}\label{simplicius}
X_1 \, = \, X_3
\end{equation}
Thanks to eq.\eqref{simplicius}, on the plane $\mathcal{S}_2$ the
moment map system reduces to a system of two rather than three
equations. To understand eq.\eqref{simplicius} we need to recall
some results already obtained in \cite{Bruzzo:2017fwj}. Considering
the system \eqref{sistemico}, it was there observed that one of
the three functions $X_i$ can always be algebraically solved
in terms of the other two. Indeed we can write:
\begin{eqnarray}\label{caragamba}
X_3 &=&
X_1 \sqrt{\frac{\zeta_2+\zeta_3 \left(X_2^2-1\right)}{\zeta_2+\zeta_1 \left(X_2^2-1\right)}}
\end{eqnarray}
This relation is the algebraic counterpart, in the moment map
equations, of the topological result that the homology and
cohomology of the resolved variety $Y$ have dimension $2$. If,
inspired by eq.\eqref{caragamba}, we consider the case where all
parameters $\zeta_i$ are different from zero but two of them, namely
$\zeta_1$ and $\zeta_3$, are equal to each other, i.e.\ we localize
our calculations on the surface $\mathcal{S}_2$, then
eq.\eqref{simplicius} follows, yielding a reduced system of moment
map equations:
\begin{equation}\label{grumildus}
\left(
\begin{array}{c}
-2 a X_2 X_1^2+b X_2 X_1^2-2 \Sigma X_1^4+\Sigma
X_2^3+\Sigma X_2 \\
-b X_1^2 X_2-2 U X_1^2 X_2^2+2 U X_1^2-\Sigma X_2^3+\Sigma
X_2 \\
\end{array}
\right) \, = \, \left(
\begin{array}{c}
0 \\
0 \\
\end{array}
\right)
\end{equation}
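The reduction can be checked directly: substituting $X_3=X_1$ (which follows from \eqref{caragamba} when $\zeta_1=\zeta_3$) together with $\zeta_1=\zeta_3=a$, $\zeta_2=b$ into \eqref{sistemico} makes the first equation vanish identically and turns the remaining two into \eqref{grumildus}. A numerical sketch of this identity (ours):

```python
import random

def F(X1, X2, X3, U, S, z1, z2, z3):
    """The three equations of eq. (sistemico)."""
    P = X1 * X2 * X3
    f1 = (-U*X2**2*X1**2 - U*X1**2 + U*X2**2*X3**2 + U*X3**2
          + (z3 - z1)*P - S*X3*X1**3 + S*X3**3*X1)
    f2 = ((z2 - z1 - z3)*P - S*X3*X1**3 - S*X3**3*X1 + S*X2**3 + S*X2)
    f3 = (-U*X1**2*X2**2 - U*X3**2*X2**2 + U*X1**2 + U*X3**2
          - z2*P - S*X2**3 + S*X2)
    return f1, f2, f3

def G(X1, X2, U, S, a, b):
    """The reduced system of eq. (grumildus)."""
    g1 = -2*a*X2*X1**2 + b*X2*X1**2 - 2*S*X1**4 + S*X2**3 + S*X2
    g2 = -b*X1**2*X2 - 2*U*X1**2*X2**2 + 2*U*X1**2 - S*X2**3 + S*X2
    return g1, g2

random.seed(0)
ok = True
for _ in range(100):
    X1, X2, U, S, a, b = (random.uniform(0.1, 2.0) for _ in range(6))
    f1, f2, f3 = F(X1, X2, X1, U, S, a, b, a)   # X3 = X1, zeta1 = zeta3 = a
    g1, g2 = G(X1, X2, U, S, a, b)
    ok &= abs(f1) < 1e-9 and abs(f2 - g1) < 1e-9 and abs(f3 - g2) < 1e-9
print(ok)
```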
\subsection{Generalities on the chamber structure}
\label{classiwall} In general, the space of parameters
$\zeta$ (which are closely related to the
{\rm stability parameters} of the GIT quotient construction)\footnote{Geometric
invariant theory, usually shortened into GIT, is the standard way of taking quotients
in algebraic geometry \cite{mumford-GIT,newstead-GIT}. The relation between
the GIT approach and the K\"ahler quotient \`a la Kronheimer in the problem at hand
is explored in \cite{degeratu}.}
has a chamber structure. Let $C$ be a
chamber in that space, and $\mathcal{W}$ a wall of $C$; denote by
$\mathcal M_C$ the resolution corresponding to a generic $\zeta$ in
$C$ (they are all isomorphic), and by $\mathcal M_{\mathcal{W}}$ the resolution
corresponding to a generic $\zeta$ in ${\mathcal{W}}$. There is a well-defined
morphism $\gamma\,_{\mathcal{W}}\colon \mathcal M_C \to \mathcal M_{\mathcal{W}}$ (actually
one should take the normalization of the second space, but we skip
such details). In general, the morphism $\gamma\,_{\mathcal{W}}$ contracts
curves or divisors; in \cite{CrawIshii} the walls are classified
according to the nature of the contractions performed by
$\gamma\,_{\mathcal{W}}$. One says that ${\mathcal{W}}$ is of
\begin{enumerate}\item type 0 if $\gamma\,_{\mathcal{W}}$ is an isomorphism;
\item type I if $\gamma\,_{\mathcal{W}}$ contracts a curve to a point;
\item type III if $\gamma\,_{\mathcal{W}}$ contracts a divisor to a curve.
\end{enumerate}
Walls of type II, that should contract a divisor to a point, do not
actually exist, as shown in \cite{CrawIshii}.
The chamber structure pertaining to our example is analyzed and
reconstructed in detail in Section \ref{camerataccademica}. A guide
to the localization of walls and chambers is provided by the
existence of some lines in $\zeta$ space where the system
\eqref{sistemico} becomes solvable by radicals or reduces to a
single algebraic equation. These lines either turn out to be edges
of the convex chambers, occurring at intersections of walls, or
just belong to walls. We begin by analyzing such solvable
lines.
\subsection{The solvable lines located in $\zeta$ space} In
the $\zeta$ moduli space there are a few subcases where the
solution of the algebraic system \eqref{sistemico} can be reduced to
finding the roots of a single algebraic equation whose order is
equal to or less than $6$. As anticipated, these solvable cases are
located on walls of the chambers and in most cases occur at the
intersection of two walls.
\begin{description}
\item[A)] \textbf{Case Cardano I, $\zeta_1=0, \, \zeta_2=\zeta_3 =s$}. With this choice the general
solution of the system \eqref{sistemico} is provided by setting the
ansatz displayed below and by solving the quartic equation for $X$
contained in the next line:
\begin{eqnarray}
&& X_1\, = \, 1, \quad X_2\, = \,X, \quad X_3\, = \, X \nonumber \\
&&s X^2-U X^4+U-\Sigma X^3+\Sigma X \, = \, 0 \label{equatura}
\end{eqnarray}
Obviously we need to choose a branch of the solution such that $X$
is real and positive. As we discuss later on, this is always
possible for all values of $U$ and $\Sigma$ and the required branch
is unique.
To this effect a simple but crucial observation is the
following. The arithmetic square root $\sqrt{|s|}$ of the level
parameter $s$ can be used as a length scale of the considered space by
rescaling the coordinates as follows: $Z^i \, \to \, \sqrt{|s|}
\,\tilde{Z}^i$ so that equation \eqref{equatura} can be rewritten as
follows
\begin{equation}\label{baldop}
-\tilde{U} X^4+\tilde{U}-\tilde{\Sigma} X^3+ \mathfrak{s}
\,X^2+\tilde{\Sigma } X \, = \,0
\end{equation}
where $\mathfrak{s}$ denotes the sign of the moment map level. This
implies that we have only three cases to be studied, namely:
\begin{equation}\label{xenofonte}
\mathfrak{s} \, = \, \left \{ \begin{array}{c}
1 \\
0\\
-1
\end{array}\right.
\end{equation}
The second case corresponds to the original singular orbifold while
the first and the third yield one instance of what we name the
Cardano manifold. We will see that it corresponds to one of the
possible degenerations of the full resolution $Y$. In the following
we disregard the tildes and simply write:
\begin{equation}\label{pirettus}
-{U} X^4+U-{\Sigma} X^3 \pm
\,X^2+{\Sigma } X \, = \,0
\end{equation}
\item[B)] \textbf{Case Cardano II $\zeta_3=0, \, \zeta_1=\zeta_2 =s$}. With this choice the general
solution of the system \eqref{sistemico} is provided by setting the
ansatz displayed below and by solving the quartic equation for
$X$ contained in the next line:
\begin{eqnarray}
&& X_1\, = \, X,\quad X_2\, = \,X, \quad X_3\, = \, 1
\nonumber\\
&&-s X^2-U X^4+U-\Sigma X^3+\Sigma X\, = \, 0
\label{konigsberg}
\end{eqnarray}
As one sees, eq.~\eqref{konigsberg} can be reduced to the form
\eqref{pirettus} by means of a rescaling similar to that utilized
in the previous case. All previous conclusions apply to this case
upon the exchange of $X_1$ and $X_3$.
\item[C)] \textbf{Case Eguchi-Hanson $\zeta_2=0, \, \zeta_1=\zeta_3 =s $}.
With this choice the general
solution of the system \eqref{sistemico} is provided by setting the
ansatz displayed below and by solving the quartic equation for
$X$ contained in the next line:
\begin{eqnarray}
&& X_1\, = \, X,\quad X_2\, = \,1, \quad X_3\, = \, X
\nonumber\\
&&-2 s X^2+2 \Sigma -2 \Sigma X^4\, = \, 0 \label{ridiculite}
\end{eqnarray}
The unique real positive branch of the solution to
eq.~\eqref{ridiculite} is given below:
\begin{equation}\label{segretusquid}
X\, = \, \sqrt{\frac{\sqrt{s^2+4 \Sigma ^2}-s}{2\,\Sigma}}
\end{equation}
We will see in a later Section that eq.\eqref{segretusquid} leads to
a complex three-dimensional manifold that is the direct product
$\mathrm{EH} \times \mathbb{C}$, having denoted by $\mathrm{EH}$ the
Eguchi-Hanson hyperk\"ahler manifold.
\item[D)] \textbf{Case Kamp\'{e} $\zeta_2=2s,
\, \zeta_1=\zeta_3 =s$}. With this choice the general solution of
the system \eqref{sistemico} is provided by setting the ansatz
displayed below and by solving the sextic equation for $Z$ contained
in the next line:
\begin{eqnarray}\label{raschiotto}
&& X_1\, = \, \frac{\sqrt[4]{Z^3+Z}}{\sqrt[4]{2}},
\quad X_2\, = \,Z, \quad X_3\, = \, \frac{\sqrt[4]{Z^3+Z}}{\sqrt[4]{2}}
\nonumber\\
&& 2 \left(Z^2+1\right) \left(s Z-U Z^2+U\right)^2-\Sigma ^2 Z
\left(Z^2-1\right)^2\, = \,0
\end{eqnarray}
As in other cases the root of the sextic equation must be chosen
real and positive. Furthermore the absolute value of the parameter
$s$ can be disposed of by means of a rescaling.
\end{description}
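The claims made in cases A) and C) above, namely that the quartic \eqref{pirettus} admits a real positive branch for all $U,\Sigma>0$ and either sign, and that eq.\eqref{segretusquid} solves eq.\eqref{ridiculite}, can be cross-checked numerically. The following is a minimal illustrative sketch (not part of the derivation; the sample values of $U,\Sigma,s$ are arbitrary), using numpy's companion-matrix root finder:

```python
import numpy as np

def positive_real_roots(coeffs, tol=1e-9):
    """Positive real roots of a polynomial given by descending coefficients."""
    r = np.roots(coeffs)
    r = r[np.abs(r.imag) < tol].real
    return r[r > tol]

def cardano_quartic_roots(U, Sigma, sign):
    # descending coefficients of -U X^4 - Sigma X^3 + sign*X^2 + Sigma X + U,
    # i.e. the quartic of case A) with the level sign chosen as +1 or -1
    return positive_real_roots([-U, -Sigma, sign, Sigma, U])

def eh_branch(s, Sigma):
    # the closed-form Eguchi-Hanson branch of case C)
    return np.sqrt((np.sqrt(s**2 + 4*Sigma**2) - s) / (2*Sigma))

# uniqueness of the positive real branch of the quartic, for either sign
for U in (0.1, 1.0, 5.0):
    for Sigma in (0.1, 1.0, 5.0):
        for sign in (+1, -1):
            assert len(cardano_quartic_roots(U, Sigma, sign)) == 1

# the closed-form branch solves -2 s X^2 + 2 Sigma - 2 Sigma X^4 = 0
for s in (-2.0, 0.5, 3.0):
    for Sigma in (0.3, 1.0, 4.0):
        X = eh_branch(s, Sigma)
        assert abs(-2*s*X**2 + 2*Sigma - 2*Sigma*X**4) < 1e-9
```

The uniqueness seen in the scan is consistent with Descartes' rule of signs: for $U,\Sigma>0$ the coefficient sequence of the quartic has exactly one sign change, whatever the sign of the $X^2$ term.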
\subsection{The K\"ahler potential of the quotient manifolds}
\label{kallusquidam} Before discussing the chamber structure guided
by the discovery of the above-mentioned solvable edges $A,B,C,D$ it
is useful to complete the determination of the K\"ahler manifolds
singled out by such edges. This requires considering the explicit
form of the K\"ahler potential for the quotient manifolds. Following
the general rules of the K\"ahler quotient resolution \`a la
{Kronheimer}, as developed in \cite{Bruzzo:2017fwj}, the restriction
of the K\"ahler potential of the linear space $\mathcal{S}_\Gamma =
{\mbox{Hom}_\Gamma\left(R,\mathcal{Q}\otimes R\right)}$ to the
algebraic locus $\mathcal{D}(L_\Gamma )$ and then to the level
surface $\mathcal{N}_\zeta$ is, for the case under
consideration, the following one:
\begin{equation}\label{KelloN}
\mathcal{K}_0 \mid_{\mathcal{N}_\zeta} \, = \,\frac{U \left(X_2^2+1\right)
\left(X_1^2+X_3^2\right)+\Sigma
\left(X_2^3+X_2+X_1\, X_3
\left(X_1^2+X_3^2\right)\right)}{X_1
X_2 X_3}
\end{equation}
The complete K\"ahler potential of the resolved variety is given by:
\begin{equation}\label{caramboletta}
\mathcal{K}_{\mathcal{M}_\zeta} \, = \, \mathcal{K}_0 \mid_{\mathcal{N}_\zeta}
\, + \alpha_\zeta \, \zeta_I \, \mathfrak{C}^{IJ} \,
\log\left[X_J\right]
\end{equation}
The main point we need to stress is that, depending on the
choices of the moduli $\zeta_I$ (up to rescalings) we can obtain
substantially different manifolds, both topologically and
metrically.
The generic case which captures the entire algebraic structure of
the resolved variety, to be discussed in later sections by means of
toric geometry, is provided by
\begin{equation}\label{comancho}
\zeta_1 \neq 0 \quad, \quad \zeta_2 \neq 0 \quad, \quad \zeta_3 \neq 0
\end{equation}
We name the corresponding K\"ahler manifold $Y$. The explicit
calculation of the K\"ahler geometry of the manifold $Y$ is
discussed in the later Section \ref{Ysezia}, relying on the
particular case $\zeta_1=\zeta_3=\ft 12, \,\, \zeta_2=2$.
For the solvable edges of \textit{moduli space} which we have
classified in the previous Section we have instead the following
results:
\begin{description}
\item[A)] \textbf{Cardano case $\mathcal{M}_{0,1,1}$}. We name
\textit{Cardano manifold} the one emerging from the choice
$\zeta_1=0,\, \zeta_2=1, \, \zeta_3 \, = \, 1$ where the
solution of the moment map equations is reduced to the solution
of the quartic algebraic equation \eqref{pirettus}. Choosing the
sign plus in that equation and performing the substitution $X_1 =
1,\, X_2= X, \, X_3=X$ the K\"ahler potential of the
Cardano manifold $\mathcal{M}_{0,1,1}$ takes the form:
\begin{equation}\label{KpotCardan}
\mathcal{K}_{\mathcal{M}_{0,1,1}} \, = \, 2\, {\alpha_\zeta}\, \log{X} \, + \,
\frac{\left(X^2+1\right) \left(U \left(X^2+1\right)+2 \Sigma X\right)}{X^2}
\end{equation}
where, depending on the $\Sigma,U$ region, $X$ is the positive real
root of the quartic equation
\begin{equation}\label{baldoppo}
-U X^4+U-\Sigma X^3+ X^2+\Sigma X \, = \,0
\end{equation}
We already argued that this exists and is unique in all regions of
the $\Sigma,U$ plane.
\item[B)] \textbf{Cardano case $\mathcal{M}_{1,1,0}$}. This turns
out to be an identical copy of the previous Cardano manifold. It
emerges from the choice $\zeta_1=1,\, \zeta_2=1, \, \zeta_3 \, = \,
0$ for which the solution of the moment map equations are also
reduced to the solution of the quartic algebraic equation
\eqref{pirettus}. Performing the substitution $X_1 = X,\, X_2= X,
\, X_3=1$ the K\"ahler potential of the Cardano manifold
$\mathcal{M}_{1,1,0}$ takes the form:
\begin{equation}\label{croccus}
\mathcal{K}_{\mathcal{M}_{1,1,0}} \, = \, 2 \, {\alpha_\zeta} \,\log
(X)+\frac{\left(X^2+1\right) \left(U X^2+U+2 \Sigma X\right)}{X^2}
\end{equation}
which is identical to eq.~\eqref{KpotCardan} and $X$ is once again
the positive real root of the quartic equation \eqref{baldoppo}.
Let us name $B_i(\Sigma,U)$ the four roots of eq.\eqref{baldoppo}
enumerated in the order chosen by MATHEMATICA to implement Cardano's
formula. For all the points $\Sigma>0,U>0$ the fourth branch
$B_4(\Sigma,U)$ is the unique real positive one. This property is
visualized in fig. \ref{brancettus}.
\begin{figure}
\begin{center}
\includegraphics[height=8cm]{cardusX.png}
\caption{\label{brancettus}{\small Plot of the 4th branch of the
solution to the quartic equation \eqref{baldop}. This branch is the
unique real positive one. The surface is plotted with respect to the
new variables $\varpi$ and $\mho$ defined in eq. \eqref{mhovarpi}} }
\end{center}
\end{figure}
Hence let us consider the K\"ahler potential of
eq.~\eqref{KpotCardan} where $X\,=\,\mathfrak{X}(\varpi,\mho)$ is
the positive real root of the quartic equation \eqref{baldoppo}.
We write the positive real solution to the quartic equation
\eqref{baldoppo} in terms of two new variables defined below:
\begin{equation}\label{mhovarpi}
U \, = \, \sqrt{\mho} \quad ; \quad \Sigma\, = \, \sqrt{\varpi}
\, \sqrt[4]{\mho}
\end{equation}
As the reader will appreciate later on, these variables are
specially prepared to perform the limit to the compact exceptional
divisor and are justified by the toric analysis of Section
\ref{toricresolution}.
The explicit form of $\mathfrak{X}(\varpi,\mho)$ (plotted in
fig.\ref{brancettus}) is the following one:
\begin{eqnarray}
\mathfrak{X}&=&\frac{1}{12 \mho^{1/4}}\, \left(\sqrt{6}
\sqrt{-\frac{3 \sqrt{3} \mathcal{C}}{\sqrt{\frac{\frac{4 \sqrt[3]{2}
\mathcal{A}}{\sqrt[3]{\mathcal{D}}}+2\
2^{2/3} \sqrt[3]{\mathcal{D}}+3 \varpi -8}{\varpi }}}-\frac{2 \sqrt[3]{2}
\mathcal{A}}{\sqrt[3]{\mathcal{D}}}-2^{2/3} \sqrt[3]{\mathcal{D}}+3 \varpi -8}\right.\nonumber\\
&&\left.+\sqrt{3} \sqrt{\frac{4 \sqrt[3]{2}
\mathcal{A}}{\sqrt[3]{\mathcal{D}}}+2\ 2^{2/3} \sqrt[3]{\mathcal{D}}+3 \varpi -8}-3 \sqrt{\varpi
}\right)
\label{caponatasiciliana}
\end{eqnarray}
The symbols $\mathcal{A},\mathcal{B},\mathcal{C},\mathcal{D}$
utilized in \eqref{caponatasiciliana} are just shorthands for
certain combinations of the parameters $(\varpi,\mho)$ which are
displayed below:
\begin{eqnarray}
\mathcal{A} &=& 3 \varpi \sqrt{\mho }-12 \mho +1\nonumber \\
\mathcal{B} &=& 9 \varpi \sqrt{\mho }+72 \mho +2 \nonumber\\
\mathcal{C} &=& \varpi -8 \sqrt{\mho }-4 \nonumber\\
\mathcal{D}&=& \sqrt{\mathcal{B}^2-4
\mathcal{A}^3}+\mathcal{B}\label{princiapisellus}
\end{eqnarray}
\item[C)] \textbf{Eguchi-Hanson case $\mathcal{M}_{s,0,s}$}.
When we choose $\zeta_1=s,\, \zeta_2=0, \, \zeta_3 \, = \, s$ the
moment map system reduces to eq.~\eqref{ridiculite}. Performing
the substitution $X_1 = X,\, X_2= 1, \, X_3=X$ and using
eq.~\eqref{segretusquid} the K\"ahler potential of the manifold
$\mathcal{M}_{s,0,s}$ takes the form:
\begin{equation}\label{canebirbo}
\mathcal{K}_{\mathcal{M}_{s,0,s}}\, =\underbrace{4\, \alpha_\zeta\, s \, \log [X] \,+2 \Sigma X^2+\frac{2 \Sigma
}{X^2}}_{\mathcal{K}_2}\, +\,4 U
\end{equation}
What we immediately observe from eq.~\eqref{canebirbo} is that the
K\"ahler potential is of the form:
\begin{equation}\label{cancellino}
\mathcal{K}_{\mathcal{M}_{s,0,s}}\, =
\,\underbrace{\mathcal{K}_2\left(Z_1,Z_2,\bar{Z}_1,\bar{Z}_2\right)}_{\text{K\"ahler potential of a two-fold}}\, + \,
4\times|Z_3|^2
\end{equation}
Hence the manifold $\mathcal{M}_{s,0,s}$ is a direct product:
\begin{equation}\label{sapientulo}
\mathcal{M}_{s,0,s} \, = \, \mathcal{M}_2 \, \times \, \mathbb{C}
\end{equation}
It is not difficult to realize that the manifold $\mathcal{M}_2$ is
just the Eguchi-Hanson space $EH \, \equiv \, ALE_{\mathbb{Z}_2}$.
To this effect it suffices to set $s= \frac{\ell}{2}$ and rescale
the coordinates $Z^{1,2} \, \to \, \frac{\check{Z}^{1,2}}{2}$. This
implies $\Sigma = \ft 14 |\check{\mathbf{Z}}|^2$ and the K\"ahler
potential $\mathcal{K}_2\left(Z_1,Z_2,\bar{Z}_1,\bar{Z}_2\right)$
turns out to be
\begin{equation}\label{certamino}
\mathcal{K}_2 \, = \, \sqrt{\ell^2 +|\check{\mathbf{Z}}|^4} \, +
\, {\alpha_\zeta} \,\ell \, \log
\left[\frac{-\ell + \sqrt{\ell^2 +|\check{\mathbf{Z}}|^4}}{|\check{\mathbf{Z}}|^2}\right]
\,
\end{equation}
which is essentially equivalent to the form of the Eguchi-Hanson
K\"ahler potential given in eq. (7.22) of \cite{Bruzzo:2017fwj}.
This might already be conclusive, yet for later purposes it is
convenient to consider the further development of the result
\eqref{certamino} since the known and fully computable case of the
Eguchi-Hanson space allows us to calibrate the general formula for
the K\"ahler potential \eqref{celeberro} fixing the value of the so
far unknown parameter $\alpha_\zeta$ that, in this case,
turns out to be $\alpha_\zeta \, = \, 1$. To this effect,
recalling that the Eguchi-Hanson space is the
total space of the line bundle $\mathcal{O}_{\mathbb{P}^1}(-2)$, we
perform the change of variables:
\begin{equation}\label{ciucius}
\check{Z}_1 \, = \, u \, \sqrt{v} \quad ; \quad \check{Z}_2 \, = \,\sqrt{v}
\end{equation}
where $u$ is the complex coordinate of the compact base
$\mathbb{P}^1$, while $v$ is the complex coordinate spanning the
non-compact fiber. Upon such a change the K\"ahler potential
\eqref{certamino} becomes:
\begin{equation}\label{cabacchio}
\mathcal{K}_2 \, = \, \sqrt{|v|^2 (|u|^2 +1)^2+\ell ^2}\,+\, {\alpha_\zeta} \,\ell \log \left(\frac{\sqrt{|v|^2
(|u|^2 +1)^2+\ell ^2}-\ell }{(|u|^2 +1) \sqrt{|v|^2
}}\right)
\end{equation}
A further important piece of information can be extracted from the
present case. Setting $v=0$ we perform the reduction to the exceptional
divisor of this partial resolution which is just the base manifold
$\mathbb{P}^1$ of Eguchi-Hanson space. The reduction of the K\"ahler
2-form to this divisor is very simple and it is the following one:
\begin{equation}\label{lampsuco}
\mathbb{K} \mid_{\mathbb{P}^1} \, = \, \frac{\rho \sqrt{\ell ^2}
d\rho\wedge d\theta}{\pi \left(\rho
^2+1\right)^2}
\end{equation}
where we have set $u=\rho \, \exp[i\,\theta]$. It follows that the
period integral of the K\"ahler 2-form on the unique homology cycle
$C_1$ of the partial resolution $\mathrm{EH}\times
\mathbb{C}$, which is the above-mentioned $\mathbb{P}^1$, is:
\begin{equation}\label{califragilisti}
\int_{C_1} \,\mathbb{K} \, = \, 2\pi \, \int_0^\infty
\frac{\rho \sqrt{\ell ^2}
d\rho}{\pi \left(\rho
^2+1\right)^2} \, = \, \sqrt{\ell ^2}
\end{equation}
Equation \eqref{califragilisti} sends us two important messages:
\begin{itemize}
\item Whether the level parameter $s=\ell/2$ is positive or
negative does not matter.
\item The absolute value $|s|$ encodes the size of the homology
cycle in the exceptional divisor. When it vanishes the homology
cycle shrinks to a point and we have a further degeneration.
\end{itemize}
\item[D)] \textbf{Sextic case or the \textit{Kamp\'{e} manifold}\footnote{Since
it is generally stated that the roots of a general sextic equation
can be written in terms of Kamp\'{e} de {F\'{e}riet} functions,
although explicit generic formulae are difficult to find, we
have decided to call $\mathcal{M}_{s,2s,s}$ the \textit{Kamp\'{e}
manifold} with the same logic that led us to name
$\mathcal{M}_{0,s,s}$ the Cardano manifold.}
$\mathcal{M}_{s,2s,s}$}.
When we choose $\zeta_1=s,\, \zeta_2=2s, \, \zeta_3 \, = \, s$ the
moment map system reduces to eqs. \eqref{raschiotto}. With the same
ansatz used there,
the K\"ahler potential of the $\mathcal{M}_{s,2s,s}$-manifold turns
out to be the following one:
\begin{equation}\label{rodriguez}
\mathcal{K}_{\mathcal{M}_{s,2s,s}} \, = \, 2\, s\,
\log (Z)\, +\, \frac{\sqrt{2}
\sqrt{Z^3+Z} \left(\sqrt{2} U \sqrt{Z^3+Z}+2 \Sigma
Z\right)}{Z^2}
\end{equation}
where the function $Z(\Sigma,U)$ of the complex coordinates is the
positive real root, depending on the $\Sigma,U$ region of the sextic
equation:
\begin{equation}\label{sesticina}
2\left(Z^2+1\right) \left(s Z-U Z^2+U\right)^2-\Sigma ^2 Z
\left(Z^2-1\right)^2\, = \,0
\end{equation}
\end{description}
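Two of the quantitative statements above lend themselves to a quick numerical verification: the existence of a real positive branch of the sextic \eqref{sesticina}, and the value of the period integral \eqref{califragilisti}. The following sketch is purely illustrative (the sample values of $s,U,\Sigma$ are arbitrary):

```python
import numpy as np

def kampe_sextic(s, U, Sigma):
    """Descending coefficients of 2(Z^2+1)(s Z - U Z^2 + U)^2 - Sigma^2 Z (Z^2-1)^2."""
    p = np.polymul([2.0, 0.0, 2.0], np.polymul([-U, s, U], [-U, s, U]))
    q = Sigma**2 * np.array([1.0, 0.0, -2.0, 0.0, 1.0, 0.0])  # Z (Z^2-1)^2
    return np.polysub(p, q)

def positive_real_roots(coeffs, tol=1e-9):
    r = np.roots(coeffs)
    r = r[np.abs(r.imag) < tol].real
    return r[r > tol]

s, U, Sigma = 1.0, 1.0, 1.0
roots = positive_real_roots(kampe_sextic(s, U, Sigma))
assert len(roots) >= 1                      # a real positive branch exists
assert abs(np.polyval(kampe_sextic(s, U, Sigma), roots[0])) < 1e-8

# Period integral: 2 pi * Int_0^oo rho |l| / (pi (rho^2+1)^2) drho = |l|,
# since Int_0^oo rho/(rho^2+1)^2 drho = 1/2 (antiderivative -1/(2(rho^2+1))).
rho = np.linspace(0.0, 200.0, 400001)
f = rho / (rho**2 + 1)**2
integral = float(np.sum((f[1:] + f[:-1]) * np.diff(rho)) / 2.0)
assert abs(integral - 0.5) < 1e-4           # cutoff error ~ 1/(2*200^2)
```

The existence of the positive branch can also be argued directly: the sextic is positive at $Z=0$ and non-positive at the positive zero of $sZ-UZ^2+U$, so a root lies in between.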
In the next Section we discuss the geometry of the crepant
resolution of the singularity $\frac{\mathbb{C}^3}{\mathbb{Z}_4}$
utilizing toric geometry. Then we return to the formulae for the
K\"ahler potential displayed in the present Section in order to see
how the K\"ahler geometry of the entire space and in particular of
the various components of the exceptional divisor is realized in the
various corners of the moduli space. This will allow us to discuss
the Chamber Structure of this particular instance of K\"ahler
quotient resolution \`a la Kronheimer.
As we are going to see, all four cases analyzed in the present
Section, namely the two Cardano cases, the Eguchi-Hanson case and the
Kamp\'e case, correspond to partial resolutions of the orbifold
singularity and indeed they are located on walls or even on edges
where some homology cycles shrink to zero. The K\"ahler geometry of
the full resolution will be analyzed in Section \ref{Ysezia}.
\section{Toric geometry description of the crepant resolution}
\label{toricresolution}
As announced above in the present Section we study the full and partial resolutions of
the singularity $\mathbb{C}^3/\mathbb{Z}_4$ in terms of
toric geometry. Both resolutions turn out to be the total space of the canonical line bundle over an algebraic surface, respectively the second Hirzebruch surface $\mathbb F_2$ and the
weighted projective plane $\mathbb P[1,1,2]$.
The main output of this study consists of two
pieces of information:
\begin{enumerate}
\item The identification as algebraic varieties of the irreducible components of the
exceptional divisor $\mathcal{D}_E$ introduced by the resolution.
\item The explicit form of the atlas of coordinate patches that describe the
resolved manifold and the coordinate transformation from the
original $Z_i$ to the new $u,v,w$ (appropriate to each patch) that constitute
the blowup of the singularities.
\end{enumerate}
The second piece of information in the above list, and in particular the
equation of the exceptional divisor in each patch, is the main tool
that allows us to connect the K\"ahler quotient description outlined in
the previous Section with the algebraic description. In particular
by this token we arrive at the determination of the K\"ahler metric
of the exceptional divisor components induced by the Kronheimer
construction.
\subsection{An initial cautionary remark}
Let $\Gamma$ be a finite subgroup of $\operatorname{SL}(3,\mathbb{C})$, and
let $X_0 = (\mathbb C^3)^{ss}/\!\!/_0\ \Gamma$.\footnote{{For an affine scheme, the GIT quotient is constructed
as the spectrum of the $\Gamma$-invariant subring of the coordinate ring of the affine scheme.
In the non-affine case the GIT quotient is obtained by glueing local affine quotients. It is a {\em categorical} quotient of the open subscheme of $\theta$-semistable points and is denoted by the symbol} $X^{ss} /\!\!/\!_\theta\,\Gamma$, after choosing a stability parameter $\theta$.
See e.g.~\cite{mumford-GIT,newstead-GIT}.}
By general theory (see e.g.~\cite{Hauzer-Langer}) for every generic value of the stability parameter $\theta$ there is a morphism
$\mathbb C^3/\!\!/_\theta\ \Gamma \to X_0 $. Sardo Infirri \cite[Thm.~4.4 and Rmk.~4.5]{SardoInfirri:1996ga} and Craw-Ishii \cite[Prop.~2.2]{CrawIshii}
notice that there always is a closed embedding $\mathbb C^3/\Gamma \to X_0$ (this makes
$\mathbb C^3/\Gamma $ into an irreducible component of $X_0$), and that this is an isomorphism if and only if $\Gamma$ acts freely away from 0. This happens for $\mathbb C^2/{\mathbb Z}_n$ both for the standard $\mbox{SU}(2)$ and $\mbox{U}(2)$ actions, and for the $\mathbb C^3/{\mathbb Z}_3$ case treated in
\cite{Bruzzo:2017fwj}, but not for the present $\mathbb C^3/{\mathbb Z}_4$ case, where each point of the $z$ axis has a ${\mathbb Z}_2$ isotropy subgroup. On the other hand, one can show the existence
of at least one stability chamber such that for generic values of $\theta$ in that chamber,
$ \mathbb C^3 /\!\!/_\theta\ \Gamma$ is a resolution of singularities of $\mathbb C^3/\Gamma$ \cite{CrawIshii}.
So in that case one has a commutative diagram
$$ \xymatrix{ \mathbb C^3 /\!\!/_\theta\ \Gamma \ar[d]\ar[dr] \\
\mathbb C^3/\Gamma\ \ar@{^{(}->}[r] & X_0
}$$
Actually we shall see in Sections \ref{camerataccademica} and \ref{Periodstaut} that in the present $\mathbb C^3/{\mathbb Z}_4$ case {\em all} stability chambers
correspond to the full resolution of singularities.
\subsection{The singular variety $Y_0=\mathbb C^3/\mathbb Z_4$}
We shall denote by
$(x,y,z)$ the coordinates of $\mathbb{C}^3$, by $\{\mathbf e_i\}$
the standard basis of $\mathbb R ^3$ and by $\{ \epsilon^i\}$ the
dual basis. The action of $\mathbb Z_4$ is given by
$$(x,y,z) \mapsto (\omega x, \omega y, \omega^2 z )$$
with $\omega^4=1$. It is easy to find a basis for the lattice of
invariant Laurent monomials:
\begin{equation}\label{formulauno}
\mathcal{I}_1 \, = \,x \, y^{-1}, \quad \mathcal{I}_2 \, =
\,y^2\,z^{-1}, \quad \mathcal{I}_3 \, = \, z^2
\end{equation}
so that the three vectors
$$ \mathbf u^1 = \epsilon^1 - \epsilon^2, \qquad
\mathbf u^2 =2 \epsilon^2 - \epsilon^3, \qquad
\mathbf u^3 = 2\epsilon^3
$$
generate the lattice $M$ of invariants, which is a sublattice of the
standard (dual) lattice $M_0$. The lattice $N$ dual to $M$ is a
superlattice of the standard lattice $N_0$, and is generated by the
vectors
\begin{equation}\label{doppievu}
\mathbf w_1 = \mathbf e_1, \qquad
\mathbf w_2 =\tfrac12 \mathbf e_1+\tfrac12 \mathbf e_2, \qquad
\mathbf w_3 =\tfrac14 \mathbf e_1+\tfrac14 \mathbf e_2+\tfrac12 \mathbf e_3.
\end{equation}
The generators of the rays giving the cone associated with the variety $Y_0$
are obtained by inverting these relations, i.e.,
$$\mathbf v_1 = \mathbf w_1, \qquad
\mathbf v_2 = 2 \mathbf w_2 - \mathbf w_1, \qquad
\mathbf v_3 = - \mathbf w_2+2 \mathbf w_3.
$$
From now on, unless otherwise stated, coordinate expressions will
always refer to this basis $\{\mathbf w_i\}$
of $N$. So the rays of the fan $\Sigma_0$ of $Y_0$
are
$$ \mathbf v_1 = (1,0,0),\qquad \mathbf v_2=(-1,2,0),\qquad \mathbf v_3=(0,-1,2)$$
They do not form a basis of $N$, in accordance with the fact that
$Y_0$ is singular (note indeed that $N/\sum_i{\mathbb Z}\mathbf v_i={\mathbb Z}_4$).
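The linear algebra above is elementary but error-prone; a few lines of numpy (a sketch only; the fractional entries $1/2$, $1/4$ are exact in binary floating point) confirm both the inversion and the order of the quotient $N/\sum_i{\mathbb Z}\mathbf v_i$:

```python
import numpy as np

# generators w_i of the lattice N, written in the standard basis {e_i}
w = np.array([[1.0,  0.0,  0.0],
              [0.5,  0.5,  0.0],
              [0.25, 0.25, 0.5]])

# v_1 = w_1,  v_2 = 2 w_2 - w_1,  v_3 = -w_2 + 2 w_3  (components in w-basis)
V = np.array([[ 1.0,  0.0, 0.0],
              [-1.0,  2.0, 0.0],
              [ 0.0, -1.0, 2.0]])

# in the e-basis the rays of the singular cone are just e_1, e_2, e_3
assert np.array_equal(V @ w, np.eye(3))

# |det V| is the index of the sublattice spanned by the v_i in N:
# the quotient has order 4, matching the order of the orbifold group Z_4
assert round(abs(np.linalg.det(V))) == 4
```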
\subsection{The full resolution $Y$ of $Y_0=\mathbb C^3/\mathbb Z_4$}
In this Section we study the full (smooth) resolution $Y$ of the singular quotient
$Y_0$, describing its torus-invariant divisors and curves and the natural coordinate systems
on its affine patches.
\subsubsection{The fan}
The fan of $Y$ is obtained by adding to the fan
$\Sigma_0$ the rays generated by the lattice points lying
on the triangle with vertices $\{\mathbf v_i\}$. These
are
$$\mathbf w_2 = (0,1,0),\qquad \mathbf w_3 = (0,0,1).$$
The torus invariant divisors corresponding to the two new rays
of the fan are the components of the exceptional divisor. Since $\mathbf w_3$ is in the interior of the triangle,
the corresponding component of the exceptional divisor is compact, while
the component corresponding to $\mathbf w_2$, which lies on the border, is noncompact. Note indeed that,
according to the equations \eqref{doppievu}, $\mathbf w_2$ and $\mathbf w_3$
correspond to the junior classes $\frac12(1,1,0)$ (noncompact) and $\frac14(1,1,2)$ (compact)
associated with the given representation of $\mathbb Z_4$.
We shall denote by $Y$ this resolution of singularities. Figure \ref{SigmaY} shows the fan of $Y$ and the associated
planar graph. The planar graph is obtained by projecting the generators of the rays onto the triangle formed by the three original vertices; this is shown in a 3-dimensional perspective in Figure
\ref{z4trian}.
One can explicitly check that all cones of $\Sigma_Y$ are smooth, so that $Y$ is indeed smooth.
Note that all cones of $\Sigma_Y$ are contained in the cones of $\Sigma_0$, which corresponds to the existence of
a morphism $Y \to Y_0$.
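Smoothness of the maximal cones is a finite check: a three-dimensional cone is smooth precisely when its generators form a lattice basis, i.e. have determinant $\pm 1$. A sketch of this check, with the cone assignments read off from the planar graph of Figure \ref{SigmaY}:

```python
import numpy as np

# ray generators of Sigma_Y in the w-basis of N
rays = {"v1": (1, 0, 0), "v2": (-1, 2, 0), "v3": (0, -1, 2),
        "w2": (0, 1, 0), "w3": (0, 0, 1)}

# the four maximal cones of Sigma_Y, as in the planar graph
cones = {"sigma1": ("v1", "w2", "w3"), "sigma2": ("w2", "v2", "w3"),
         "sigma3": ("v2", "v3", "w3"), "sigma4": ("v1", "v3", "w3")}

for name, gens in cones.items():
    det = np.linalg.det(np.array([rays[g] for g in gens], dtype=float))
    assert round(abs(det)) == 1, name   # unimodular generators: smooth cone
```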
If we first make the blowup corresponding to the ray $\mathbf w_3$,
i.e., to the junior class $\frac14(1,1,2)$, according to the general theory the exceptional divisor is
a copy of the weighted projective plane $\mathbb P[1,1,2]$. When we make the second blowup, i.e.\ we blow up the $z$ axis,
we also blow up $\mathbb P[1,1,2]$ at its singular point, so that the compact component of
the exceptional divisor of the resolution of $Y_0$ is a copy of the second Hirzebruch surface
$\mathbb F_2$. Moreover, the noncompact component of the exceptional divisor is isomorphic to $\mathbb P^1\times\mathbb C$
(which, by the way, turns out to be the weighted projective space $\mathbb P[1,1,0]$). This will be shown in more detail in the next sections (in particular, the compact exceptional divisor will be characterized as the Hirzebruch surface $\mathbb F_2$ by computing its fan).
By general theory \cite{itoriddo} (see also \cite{Bruzzo:2017fwj}) we know
$$h_2(Y,{\mathbb Q}) = 2, \qquad h^2(Y,{\mathbb Q})=2, \qquad h^2_c(Y,{\mathbb Q})=1, \qquad h^4(Y,{\mathbb Q})=1.$$
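These Betti numbers can be recovered by counting conjugacy classes of $\mathbb Z_4$ by age, following \cite{itoriddo}: junior (age-one) classes count generators of $H^2$, age-two classes generators of $H^4$. A short sketch of this count:

```python
from fractions import Fraction

# Z_4 acts on C^3 with weights (1,1,2); the age of g^k is the sum of
# the normalized exponents (k*a mod n)/n over the three weights
weights, n = (1, 1, 2), 4
ages = [sum(Fraction(k * a % n, n) for a in weights) for k in range(1, n)]

assert ages.count(1) == 2   # two junior classes  -> h^2 = 2
assert ages.count(2) == 1   # one age-two class   -> h^4 = 1
```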
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=1.50]
\draw [thick,->] (0,0) -- (-0.5,1.5) ;
\draw [thick,->] (0,0) -- (1.5,0.5) ;
\draw [thick,->] (0,0) -- (-0.8,-0.8) ;
\draw [thick,->,green] (0,0) -- (1,-0.2);
\node at (1.2,-0.4){\blu{\footnotesize $\mathbf w_2= (0,1,0)$}};
\node at (-1,-1) {\footnotesize$\mathbf v_1=(1,0,0)$};
\node at (2.7,0.5) {\footnotesize$\mathbf v_2=(-1,2,0)$};
\node at (-0.8,1.7) {\footnotesize$\mathbf v_3=(0,-1,2)$};
\draw [thick,->,green] (0,0) -- (0,1);
\node at (1,1.1){\blu{\footnotesize $\mathbf w_3= (0,0,1)$}};
\draw [fill] (0,0) circle (1.5pt) ;
\end{tikzpicture}\hskip1cm
\begin{tikzpicture}[scale=0.50]
\path [fill=pink] (0,0) to (4,2.3) to (4,6.9) to (0,0) ;
\path [fill=yellow] (8,0) to (4,2.3) to (4,6.9) to (8,0) ;
\path [fill=green] (0,0) to (4,2.3) to (4,0) to (0,0);
\path [fill=brown] (4,0) to (4,2.3) to (8,0) to (4,0);
\draw [fill] (0,0) circle (3pt);
\draw [fill] (8,0) circle (3pt);
\draw [fill] (4,6.9) circle (3pt);
\draw [fill] (4,2.3) circle (3pt);
\draw (0,0) -- (8,0); \draw (0,0) -- (4,6.9); \draw (8,0) --
(4,6.9); \draw (0,0) -- (4,2.3); \draw (8,0) -- (4,2.3); \draw
(4,6.9) -- (4,2.3); \node at (-0.5,0) {$\mathbf v_1$}; \node at (8.6,0)
{$\mathbf v_2$}; \node at (4.2,7.4)
{$\mathbf v_3$}; \node at
(4.7,2.6) {$\mathbf w_3$};
\node at (3,3) {$\sigma_4$}; \node at (5,3.7) {$\sigma_3$};
\node at
(4,-0.5) {$\mathbf w_2$}; \draw [fill] (4,0) circle (3pt); \draw
(4,0) -- (4,4);
\node at (5,1) {$\sigma_2$}; \node at (3,1) {$\sigma_1$};
\end{tikzpicture}
\caption{\label{SigmaY} \small The fan $\Sigma_Y$ of the resolution $Y$ and the associated planar graph}
\end{center}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=8cm]{triangulus.png}
\vskip -2cm \caption{ \label{z4trian} The figure displays a finite
portion of the lattice $N$ dual to the lattice $M$ of
$\mathbb{Z}_4$-invariants and the generators of the cone
$\sigma$ describing the singular quotient $\mathbb{C}^3/\mathbb Z_4$ marked as fat dark arrows. The
extremal points of the three generators single out a triangle,
which intersects the lattice $N$ in two
additional points, namely the extremal points of the vector
$\mathbf w_3$ and of the vector $\mathbf w_2$, marked as lighter
arrows in the figure. These vectors have to be added to the
fan and divide the original cone into four maximal cones,
corresponding to as many open charts of the resolved smooth toric
variety.}
\end{figure}
\subsubsection{Divisors} We analyze the toric divisors of $Y$; they are summarized in Table \ref{tableDivY}. Each of these is associated with a ray of the fan $\Sigma_Y$. The divisors corresponding
to $\mathbf w_3$, $\mathbf w_2$, $\mathbf v_1$, $\mathbf v_3$, $\mathbf v_2$ will be denoted $D_c$, $D_{nc}$, $D_{EH}$,
$D_{4}$, $D'_{EH}$ respectively. Since $Y$ is smooth all of them are Cartier. Table \ref{divisors} shows the fans of these divisors and which varieties they are intrinsically. The fans are depicted in Figure \ref{DivY}.\footnote{In the cases when the ray associated to the divisor is not a coordinate axis we made a change of basis.} The fan of $D_c$ is
generated by the rays $\mathbf v_2$, $\mathbf w_2$, $\mathbf v_1$, $\mathbf v_3$, which shows that $D_c$ is the second Hirzebruch surface $\mathbb F_2$. The corresponding curves in $D_c$ have been denoted $E_1$, $E_2$, $E_3$, $E_4$ respectively. From the self-intersections of these curves (in $D_c$)
$$E_1^2=0, \qquad E_2^2 = -2,\qquad E_3^2 = 0, \qquad E_4^2 = 2 $$
(see \cite[Example 6.4.6]{CoxLS}) we see that $E_2$ is the section of
$\mathbb F_2 \to \mathbb P^1$ which squares to $-2$, i.e., the exceptional divisor of the blowup
$\mathbb F_2 \to \mathbb P[1,1,2]$, while $E_4$ is the section that squares to 2, and $E_1$, $E_3$ are the toric fibers of $\mathbb F_2 \to \mathbb P^1$.
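The quoted self-intersection numbers follow from the standard relation $\mathbf v_{i-1}+\mathbf v_{i+1} = -(D_i^2)\,\mathbf v_i$ among the cyclically ordered rays of a smooth complete toric surface. A sketch of the computation for the fan of $D_c$:

```python
import numpy as np

# rays of the fan of D_c = F_2, ordered counterclockwise
order = ["E3", "E2", "E1", "E4"]
ray = {"E3": np.array([1, 0]), "E2": np.array([0, 1]),
       "E1": np.array([-1, 2]), "E4": np.array([0, -1])}

selfint = {}
for i, name in enumerate(order):
    s = ray[order[i - 1]] + ray[order[(i + 1) % len(order)]]
    v = ray[name]
    j = int(np.flatnonzero(v)[0])       # a coordinate where v is nonzero
    a = int(s[j]) // int(v[j])          # s = a v by smoothness of the fan
    assert np.array_equal(s, a * v)
    selfint[name] = -a                  # E_i^2 = -a

assert selfint == {"E1": 0, "E2": -2, "E3": 0, "E4": 2}
```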
\begin{table}[ht]
\renewcommand{\arraystretch}{1.50}
\caption{Toric Divisors in $Y$. The last column shows the components of the divisor class on the basis given by $(D_{nc}, D_c)$. The variety $\mbox{ALE}_{A_1}$ is the Eguchi-Hanson space,
the crepant resolution of the singular space $\mathbb C^2/{\mathbb Z}_2$.
\label{tableDivY}}
\vskip10pt
\centering
\begin{tabular}{|c |c| c| c| c |}
\hline
Ray & Divisor & Fan & Variety & Components \\ [1ex]
\hline
$\mathbf w_3$ & $D_c$ & \small $(1,0),\ (-1,2),\ (0,-1),\ (0,1)$ & $\mathbb F_2$ & $ (0,1)$\\ \hline
$\mathbf w_2$ & $D_{nc}$ & \small $(1,0),\ (-1,0),\ (0,1)$ & $\mathbb P^1\times\mathbb C$ & $(1,0)$ \\ \hline
$\mathbf v_1$ & $D_{EH}$ & $ (1,0),\ (-1,2),\ (0,1) $& $\mbox{ALE}_{A_1}$ & $(-\frac12,-\frac14)$ \\ \hline
$\mathbf v_3$ & $D_{4}$ & $ (1,0),\ (-1,4),\ (0,1) $ & \small tot $(\mathcal O(-4) \to \mathbb P^1$) & $(0,-\frac12)$\\ \hline
$\mathbf v_2$ & $D'_{EH}$ & $ (1,0),\ (-1,2),\ (0,1) $ & $\mbox{ALE}_{A_1}$ & $(-\frac12,-\frac14)$ \\ [1ex]
\hline
\end{tabular}
\label{divisors}
\end{table}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw [thick,->] (0,0) -- (1,0) ; \node at (1.2,0) {\small $E_3$};
\draw [thick,->] (0,0) -- (-1,2) ; \node at (-1.1,2.2) {\small $E_1$};
\draw [thick,->] (0,0) -- (0,-1) ; \node at (0,-1.4) {\small $E_4$};
\draw [thick,->] (0,0) -- (0,1); \node at (0,1.2) {\small $E_2$};
\draw [fill] (0,0) circle (1.5pt) ;
\node at (0,-2) {$D_c$};
\node at (0.5,0.5) {$\tau_2$} ;
\node at (0.5,-0.5) {$\tau_3$} ;
\node at (-0.7,-0.2) {$\tau_4$} ;
\node at (-0.4,1.5) {$\tau_1$} ;
\end{tikzpicture}
\hskip5mm
\begin{tikzpicture}
\draw [thick,->] (0,0) -- (1,0) ;
\draw [thick,->] (0,0) -- (-1,0) ;
\draw [thick,->] (0,0) -- (0,1);
\draw [fill] (0,0) circle (1.5pt) ;
\node at (0,-1.3) {$D_{nc}$};
\end{tikzpicture}
\hskip5mm
\begin{tikzpicture}
\draw [thick,->] (0,0) -- (1,0) ;
\draw [thick,->] (0,0) -- (-1,2) ;
\draw [thick,->] (0,0) -- (0,1);
\draw [fill] (0,0) circle (1.5pt) ;
\node at (0,-1.3) {$D_{EH}$, $D'_{EH}$};
\end{tikzpicture}
\hskip5mm
\begin{tikzpicture}
\draw [thick,->] (0,0) -- (1,0) ;
\draw [thick,->] (0,0) -- (-1,4) ;
\draw [thick,->] (0,0) -- (0,1);
\draw [fill] (0,0) circle (1.5pt) ;
\node at (0,-1.3) {$D_4$};
\end{tikzpicture}
\caption{\label{DivY} \small The fans of the 5 toric divisors of $Y$. In the fan of $D_c$ we have labelled the rays with the names of the corresponding divisors; the $\tau_i$'s are the maximal cones. }
\end{center}
\end{figure}
Among the 5 divisors only 2 are independent in cohomology. We consider the exact sequence \cite[Thm.~4.2.1]{CoxLS}
\begin{equation}\label{Pic}
0 \to M \xrightarrow{A} \operatorname{Div}_{\mathbb T}(Y) \xrightarrow{B} \operatorname{Pic}(Y) \to 0
\end{equation}
where $M$ is the dual lattice, $ \operatorname{Div}_{\mathbb T}(Y) $ is the group of torus-invariant divisors,
and $\operatorname{Pic}(Y) $ is the Picard group\footnote{The Picard group $\operatorname{Pic}(X)$ of a complex variety $X$ is the group of isomorphism classes of holomorphic line bundles on $X$. Using \v Cech cohomology it can be represented as the cohomology group $H^1(X,{\mathcal O}_X^\ast)$, where $
{\mathcal O}_X^\ast$ is the sheaf of nowhere vanishing holomorphic functions on $X$.} of the full resolution $Y$. The morphism $B$ simply takes the
class of a divisor in the Picard group, while for every $m\in M$, $A(m)$ is the divisor associated with the rational
function defined by $m$.
Moreover we know that the classes of the divisors $D_{nc}$ and $D_{c}$ generate $\operatorname{Pic}(Y) $ over ${\mathbb Q}$ \cite{itoriddo}.
With this choice of basis of $\operatorname{Pic}(Y) \otimes{\mathbb Q}$, the basis of $\operatorname{Div}_{\mathbb T}(Y) \otimes{\mathbb Q}$ given by the 5 toric divisors, and the basis of $M\otimes{\mathbb Q}$ given by the duals of the $\mathbf w_i$, the morphisms $A$ and $B$ are represented over the rationals by the matrices
$$ A =\begin{pmatrix} 1 & 0 & 0 \\ -1 & 2 & 0 \\ 0 & -1 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},\qquad
B =\begin{pmatrix} -\tfrac12 & -\tfrac12 & 0 & 1 & 0 \\
-\tfrac14 & -\tfrac14 & -\tfrac12 & 0 & 1
\end{pmatrix}
$$
We deduce that the relations among the classes of the 5 toric divisors in the Picard group are\footnote{The notation $[D]$ means the class in the Picard group of the line bundle ${\mathcal O}_X(D)$.}
\begin{gather} [D_{EH}] = [D'_{EH}]= -\tfrac12 [D_{nc}] - \tfrac14 [D_c] \label{Div1} \\
{[}D_{4} ] = -\tfrac12 [D_c] \label{eqDiv2}
\end{gather}
Since the canonical divisor can be written as minus the sum of the torus-invariant divisors,
one has
$$ [ K_Y] = - [D_{nc}] - [D_c] - [D_{EH}] - [D'_{EH}] - [D_4] = 0 $$
consistently with the fact that the resolution $Y \to Y_0$ is crepant. (Note that $\operatorname{Pic}(Y)$ is free
over ${\mathbb Z}$ \cite[Prop.~4.2.5]{CoxLS}, so that the equality $ [ K_Y] =0$ in $\operatorname{Pic}(Y) \otimes{\mathbb Q}$ also implies that $ [ K_Y] =0$ in $\operatorname{Pic}(Y)$).
The matrix $B$ can also be chosen as
$$ B = \begin{pmatrix} 1 & 1 & 0 & -2 & 0 \\
0 & 0 & 1 & 1 & -2 \end{pmatrix} $$
which corresponds to taking the classes of $D_{EH}$ and $D_4$ as basis of the Picard group.
In this way the matrix $B$ is integral, which means that $D_{EH}$ and $D_4$ generate
$\operatorname{Pic}(Y)$ over the integers.
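As an illustrative cross-check (a computational sketch, not part of the toric derivation), one can verify that both matrices $B$ annihilate $A$, as exactness of the sequence \eqref{Pic} requires, and that the integral $B$ is surjective onto ${\mathbb Z}^2$, since the gcd of its $2\times 2$ minors is 1:

```python
# Cross-check of the matrices A and B above (rows of A: v1, v2, v3, w2, w3).
from fractions import Fraction
from math import gcd
from itertools import combinations

A = [[1, 0, 0], [-1, 2, 0], [0, -1, 2], [0, 1, 0], [0, 0, 1]]
B_rat = [[Fraction(-1, 2), Fraction(-1, 2), 0, 1, 0],
         [Fraction(-1, 4), Fraction(-1, 4), Fraction(-1, 2), 0, 1]]  # basis (D_nc, D_c)
B_int = [[1, 1, 0, -2, 0], [0, 0, 1, 1, -2]]                         # basis (D_EH, D_4)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# exactness of 0 -> M -> Div_T(Y) -> Pic(Y) -> 0 requires B o A = 0
assert all(x == 0 for row in matmul(B_rat, A) for x in row)
assert all(x == 0 for row in matmul(B_int, A) for x in row)

# a 2x5 integer matrix is surjective onto Z^2 iff the gcd of its 2x2 minors is 1
minors = [B_int[0][i] * B_int[1][j] - B_int[0][j] * B_int[1][i]
          for i, j in combinations(range(5), 2)]
g = 0
for m in minors:
    g = gcd(g, m)
assert g == 1
print("B o A = 0 for both choices, and the integral B is surjective over Z")
```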
\subsubsection{Toric curves and intersections} \label{curvesY}
The planar graph in Figure \ref{SigmaY} shows that $Y$ has 4 compact toric curves, corresponding to the inner edges of the graph.
The intersection between a smooth irreducible curve $C$ and a Cartier divisor $D$ is defined as
$$ C\cdot D = \deg (f^\ast {\mathcal O}_Y(D))$$
where $f\colon C \to Y$ is the embedding, and ${\mathcal O}_Y(D)$ is the line bundle associated to the divisor\footnote{This also allows one to compute the intersection between a curve $C$ and a Weil divisor $D$. Indeed simplicial toric varieties are ${\mathbb Q}$-factorial, i.e., every Weil divisor has a multiple that is Cartier. So if $mD$ is Cartier, one defines
$$ C \cdot D = \tfrac1m \,C\cdot(mD).$$ This may be a rational number. \label{nota}}
(we shall use this definition in Section \ref{Periodstaut} to compute the periods of the tautological line bundles).
Inspection of the fan allows one to detect when the intersection is transversal (in which case the intersection number is 1), when it is empty (intersection number 0), or when the curve lies inside the divisor.
The intersections are shown in Table \ref{tableintersec}.
\begin{table}[ht]
\caption{Intersections among the toric curves and divisors in the full resolution $Y$. For the curves that are inside $D_c$ the last two columns also show the identifications with the curves corresponding to the rays in the fan of $D_c$ of Figure \ref{DivY}, and what they are inside the second Hirzebruch surface. Note that $C_1$ is the intersection between the two components of the exceptional divisor.
The base of the fibration $D_c \to \mathbb P^1$ may be identified with $C_1$, while $C_2$, $C_4$
are the fibers over two toric points, which correspond to the cones $\sigma_1$ and $\sigma_2$.
\label{tableintersec}}
\vskip10pt
\centering
\begin{tabular}{|c|c |c| c| c| c| c| c| c| }
\hline
Edge/face & Curve & $D_c$ & $D_{nc}$ & $D_{EH}$ & $D'_{EH}$ & $D_4$ & \multicolumn{2}{c|}{Inside $D_c$} \\ [1ex]
\hline
$(\mathbf w_2\mathbf w_3)$ & $C_1$ & 0 & -2 & 1 & 1 & 0 & $E_2$ & $-2$-section \\ \hline
$(\mathbf v_1\mathbf w_3)$ & $C_2$ & -2 & 1 & 0 & 0 & 1 & $E_3$ & fiber \\ \hline
$(\mathbf v_2\mathbf w_3)$ & $C_4$ & -2 & 1 & 0 & 0 & 1 & $E_1$ & fiber \\ \hline
$(\mathbf v_3\mathbf w_3)$ & $C_5$ & -4 & 0 & 1 & 1 & 2 & $E_4$ & 2-section\\ \hline
$(\mathbf v_1\mathbf w_2)$ & $C_3$ & 1 & 0 & 0 & 0 & 0 & & \\
\hline
\end{tabular}
\label{intersections}
\end{table}
The intersection numbers of $D_{EH}$ and $D_4$ with $C_1$, $C_2$ show that
the latter form a basis of $H_2(Y,{\mathbb Z})$ dual to $\{[D_{EH}],[D_4]\}$.
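The intersection numbers of the compact curves in Table \ref{tableintersec} can be rederived from the standard wall relations of toric geometry: if the wall $\tau=\mbox{Cone}(u_1,u_2)$ separates the maximal cones $\tau+u_3$ and $\tau+u_4$, the relation $u_3+u_4+a_1u_1+a_2u_2=0$ gives $C\cdot D_{u_1}=a_1$, $C\cdot D_{u_2}=a_2$, $C\cdot D_{u_3}=C\cdot D_{u_4}=1$, and $C\cdot D=0$ for the remaining toric divisors. An illustrative sketch of this computation:

```python
# Re-derivation of the intersection numbers of the compact toric curves of Y.
import sympy as sp

rays = {
    'v1': sp.Matrix([1, 0, 0]), 'v2': sp.Matrix([-1, 2, 0]),
    'v3': sp.Matrix([0, -1, 2]), 'w2': sp.Matrix([0, 1, 0]),
    'w3': sp.Matrix([0, 0, 1]),
}
# curve -> (rays of the wall, opposite rays of the two adjacent maximal cones)
curves = {
    'C1': (('w2', 'w3'), ('v1', 'v2')),
    'C2': (('v1', 'w3'), ('w2', 'v3')),
    'C4': (('v2', 'w3'), ('w2', 'v3')),
    'C5': (('v3', 'w3'), ('v1', 'v2')),
}
table = {}
for name, ((r1, r2), (r3, r4)) in curves.items():
    a1, a2 = sp.symbols('a1 a2')
    # solve u3 + u4 + a1*u1 + a2*u2 = 0 (drop the identically zero entries)
    eqs = [e for e in (rays[r3] + rays[r4] + a1*rays[r1] + a2*rays[r2]) if e != 0]
    sol = sp.solve(eqs, [a1, a2], dict=True)[0]
    row = dict.fromkeys(rays, 0)
    row[r1], row[r2], row[r3], row[r4] = sol[a1], sol[a2], 1, 1
    table[name] = row

# the columns (D_c, D_nc, D_EH, D'_EH, D_4) correspond to (w3, w2, v1, v2, v3)
assert table['C1'] == {'w3': 0, 'w2': -2, 'v1': 1, 'v2': 1, 'v3': 0}
assert table['C2'] == {'w3': -2, 'w2': 1, 'v1': 0, 'v2': 0, 'v3': 1}
assert table['C4'] == {'w3': -2, 'w2': 1, 'v1': 0, 'v2': 0, 'v3': 1}
assert table['C5'] == {'w3': -4, 'w2': 0, 'v1': 1, 'v2': 1, 'v3': 2}
print("intersection numbers of the compact curves reproduced")
```

In particular the rows for $C_1$ and $C_2$ exhibit the duality with $\{[D_{EH}],[D_4]\}$ noted above.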
\subsubsection{Coordinate systems and curves}\label{coorcurves}
The four 3-dimensional cones in the fan of $Y$ correspond to four affine open varieties, and since all cones are smooth (basic), these are copies of $\mathbb C^3$. The variables attached to the rays generating a cone provide a coordinate system on the corresponding affine set. A face between two 3-dimensional cones corresponds to the intersection between the two open sets. Note that all charts have a common intersection,
as they all contain the 3-dimensional torus corresponding to the origin of the fan.
Table \ref{coordinates} shows the association among cones, rays, coordinates and coordinate expressions of toric curves.
Below we provide a list of coordinate systems, with all transition functions between them, and the expressions of the toric curves in the coordinate systems of the charts they belong to.
Table \ref{changes} displays the coordinate transformations among the four coordinate systems.
The curves $C_i$, $i=1,\dots,5$, are denoted as before; moreover $C_6$, $C_7$, $C_8$ are the noncompact toric curves in the charts 2, 3 and 4 (analogously, $C_3$ is the noncompact curve in chart 1). The column ``Dual gen.'' displays the generators of the dual cone.
In each chart, the coordinates $(u,v,w)$ are related to the invariants \eqref{formulauno} as follows:
\begin{equation}\label{settorio}
\begin{aligned}
\left\{u,v,w\right\}_1 &= \Big\{\frac{x}{y},\ \frac{y^2}{z},\ z^2\Big\} \\
\left\{u,v,w\right\}_2 &= \Big\{\frac{y}{x},\ \frac{x^2}{z},\ z^2\Big\} \\
\left\{u,v,w\right\}_3 &= \Big\{\frac{y}{x},\ \frac{z}{x^2},\ x^4\Big\} \\
\left\{u,v,w\right\}_4 &= \Big\{\frac{x}{y},\ y^4,\ \frac{z}{y^2}\Big\}
\end{aligned}
\end{equation}
Eqs.~\eqref{settorio} can be easily inverted, and one obtains:
\begin{equation}\label{subberulle}
\begin{array}{llll}
\mbox{Chart } X_{\sigma_1}\colon & x\to u \sqrt{v}\, \sqrt[4]{w}, & y\to \sqrt{v}\, \sqrt[4]{w}, & z\to \sqrt{w} \\
\mbox{Chart } X_{\sigma_2}\colon & x\to \sqrt{v}\, \sqrt[4]{w}, & y\to u \sqrt{v}\, \sqrt[4]{w}, & z\to \sqrt{w} \\
\mbox{Chart } X_{\sigma_3}\colon & x\to \sqrt[4]{w}, & y\to u \sqrt[4]{w}, & z\to v \sqrt{w} \\
\mbox{Chart } X_{\sigma_4}\colon & x\to u \sqrt[4]{v}, & y\to \sqrt[4]{v}, & z\to \sqrt{v}\, w
\end{array}
\end{equation}
The irrational coordinate transformations \eqref{subberulle} derived
from the toric construction are the essential tool to relate the
results of the K\"ahler quotient construction with the geometry of
the exceptional divisor as identified by the toric resolution of the
singularity.
The coordinates $x,y,z$ in the above equation are to be identified
with the $Z^{1,2,3}$ that parameterize the locus $L_{\mathbb{Z}_4}$
composed by the matrices $A_0,B_0,C_0$ of eq.~\eqref{baldovinus}. As
we know this locus is lifted to the resolved variety $Y$ by the
action of the quiver group element $\exp [\pmb{\Phi}]$, whose
corresponding Lie algebra element $\pmb{\Phi}$ satisfies the moment
map equations \eqref{sakerdivoli}.
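As a sanity check, one can verify symbolically that the substitutions \eqref{subberulle} invert the invariants \eqref{settorio} chart by chart; an illustrative sketch using SymPy:

```python
# Check that the chart substitutions (subberulle) invert the invariants (settorio).
import sympy as sp

u, v, w = sp.symbols('u v w', positive=True)
q = sp.Rational(1, 4)  # exponent of the fourth root

# (x, y, z) in each chart, from eq. (subberulle)
charts = {
    1: (u*sp.sqrt(v)*w**q, sp.sqrt(v)*w**q, sp.sqrt(w)),
    2: (sp.sqrt(v)*w**q, u*sp.sqrt(v)*w**q, sp.sqrt(w)),
    3: (w**q, u*w**q, v*sp.sqrt(w)),
    4: (u*v**q, v**q, sp.sqrt(v)*w),
}

# (u, v, w) as invariants of (x, y, z), from eq. (settorio)
def invariants(chart, x, y, z):
    return {
        1: (x/y, y**2/z, z**2),
        2: (y/x, x**2/z, z**2),
        3: (y/x, z/x**2, x**4),
        4: (x/y, y**4, z/y**2),
    }[chart]

for i, xyz in charts.items():
    assert tuple(sp.simplify(e) for e in invariants(i, *xyz)) == (u, v, w), i
print("the substitutions invert the invariants in all four charts")
```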
\newcommand{\eps}[1]{\epsilon_{#1}}
\begin{table}[ht] \small
\caption{For each cone the table assigns names to the coordinates associated to its rays. The third column lists the generators of the dual cone; the $\epsilon_i$ here form the basis dual to the $\mathbf w_i$. We also write the equations of the toric curves in these coordinates.}
\vskip10pt
\centering
\begin{tabular}{|c|c |c| c| c| }
\hline
Cone & Rays & Dual gen. & Coordinates & Curves \\ \hline
$\sigma_1$ &$ \mathbf v_1 \mathbf w_2 \mathbf w_3 $ & $\epsilon_1,\epsilon_2,\epsilon_3 $ &
$u_1,v_1,w_1$ & $C_1: v_1=w_1=0, \ C_2: u_1=w_1=0,\ C_3: u_1 = v_1 =0 $\\ \hline
$\sigma_2$ & $ \mathbf v_2 \mathbf w_2 \mathbf w_3 $ &$ -\eps1,\eps2+2\eps1,\eps3$ &
$u_2,v_2,w_2$ & $ C_1: v_2=w_2=0, \ C_4: u_2=w_2=0, \ C_6 : u_2 = v_2=0 $\\
\hline
$\sigma_3$ & $ \mathbf v_2 \mathbf v_3 \mathbf w_3 $ &$-\eps1,-2\eps1-\eps2,\eps3+2\eps2+4\eps1$ &
$u_3,v_3,w_3$ & $C_4: u_3=w_3=0, \ C_5: v_3=w_3=0, C_7: u_3=v_3=0 $ \\
\hline
$\sigma_4$ & $ \mathbf v_1 \mathbf v_3 \mathbf w_3 $ &$\eps1,2\eps2+\eps3,-\eps2$ &
$u_4,v_4,w_4$ & $C_2: u_4=w_4=0,\ C_5: v_4=w_4=0, C_8: u_4=v_4=0$ \\
\hline
\end{tabular}
\label{coordinates}
\end{table}
\begin{table}[ht] \footnotesize
\hskip-2mm\parbox{\textwidth}{
\caption{Coordinate changes among the charts described in Table \ref{coordinates}}
\vskip10pt
\centering
\begin{tabular}{|c|c |c| c| c| }
\hline
& $\sigma_1$ & $\sigma_2$ & $\sigma_3$ & $\sigma_4$ \\ \hline
$\sigma_1$ & id & $ u_2=\frac1{u_1}, v_2 = u_1^2v_1, w_2=w_1 $ & $ u_3=\frac1{u_1}, v_3 = \frac{1}{u_1^2v_1}, w_3 = u_1^4v_1^2w_1 $ &
$u_4=u_1, \ v_4 = v_1^2w_1,\ w_4=\frac1{v_1}$ \\ \hline
$\sigma_2$ & $u_1 = \frac1{u_2},v_1=u_2^2v_2,w_1=w_2 $ & id & $u_3=u_2,v_3=\frac1{v_2},w_3=v_2^2w_2$ &
$u_4=\frac1{u_2},v_4=u_2^4v_2^2w_2,w_4=\frac1{u_2^2v_2}$\\ \hline
$\sigma_3$ &$ u_1=\frac1{u_3},v_1=\frac{u_3^2}{v_3},w_1=v_3^2w_3 $ & $u_2=u_3,v_2=\frac1{v_3},w_2={v_3^2}{w_3}$& id &
$u_4=\frac1{u_3},v_4=u_3^4w_3,w_4=\frac{v_3}{u_3^2}$\\ \hline
$\sigma_4$ & $ u_1=u_4,v_1=\frac1{w_4},w_1=v_4w_4^2$& $u_2=\frac1{u_4},v_2=\frac{u_4^2}{w_4},w_2=v_4w_4^2$&$u_3=\frac1{u_4},v_3=
\frac{w_4}{u_4^2},w_3=u_4^4v_4$& id \\ \hline
\end{tabular}
\label{changes}
}
\end{table}
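As a further consistency check, the entries of Table \ref{changes} can be rederived by composing the substitutions \eqref{subberulle} with the invariants \eqref{settorio}; a sketch for two of the transitions:

```python
# Re-derive the transition functions sigma_1 -> sigma_3 and sigma_4 -> sigma_2
# from the chart substitutions (subberulle) and the invariants (settorio).
import sympy as sp

u1, v1, w1, u4, v4, w4 = sp.symbols('u1 v1 w1 u4 v4 w4', positive=True)
q = sp.Rational(1, 4)

# (x, y, z) in charts sigma_1 and sigma_4
x1, y1, z1 = u1*sp.sqrt(v1)*w1**q, sp.sqrt(v1)*w1**q, sp.sqrt(w1)
x4, y4, z4 = u4*v4**q, v4**q, sp.sqrt(v4)*w4

# sigma_1 -> sigma_3: (u3, v3, w3) = (y/x, z/x^2, x^4)
assert sp.simplify(y1/x1 - 1/u1) == 0
assert sp.simplify(z1/x1**2 - 1/(u1**2*v1)) == 0
assert sp.simplify(x1**4 - u1**4*v1**2*w1) == 0   # note the exponent 4 of u1

# sigma_4 -> sigma_2: (u2, v2, w2) = (y/x, x^2/z, z^2)
assert sp.simplify(y4/x4 - 1/u4) == 0
assert sp.simplify(x4**2/z4 - u4**2/w4) == 0
assert sp.simplify(z4**2 - v4*w4**2) == 0
print("transitions sigma_1 -> sigma_3 and sigma_4 -> sigma_2 confirmed")
```

The remaining entries of the table can be checked in exactly the same way.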
\subsubsection{$Y$ as a line bundle on $\mathbb F_2$}\label{Ylinebundle}
The full resolution $Y$ is the total space
of the canonical bundle of the second Hirzebruch surface $\mathbb F_2$; this is quite clear
from the blowup procedure (cf. Section 2.3 in \cite{itoriddo})
and was
explicitly noted in \cite{bouchard}.
Following \cite[\S 7.3]{CoxLS} we give a toric description of
this fact. The canonical bundle of $\mathbb F_2$ is
the line bundle ${\mathcal O}_{\mathbb F_2}(-2H)$, where $H$ is the section of $\mathbb F_2\to\mathbb P^1$ squaring to 2.
We regard $H$ as the toric divisor $E_4$, see Figure \ref{DivY}. To each of the cones $\tau_i$ of the fan
of $\mathbb F_2$
one associates a 3-dimensional cone $\tilde\sigma_i$, obtaining
\begin{eqnarray*}
\tilde\sigma_1 &=& \mbox{Cone}((0,0,1),(0,1,0),(1,0,0)) \\
\tilde\sigma_2 &=& \mbox{Cone}((0,0,1),(-1,2,0),(0,1,0)) \\
\tilde\sigma_3 &=& \mbox{Cone}((0,0,1),(0,-1,2),(-1,2,0)) \\
\tilde\sigma_4 &=& \mbox{Cone}((0,0,1),(1,0,0),(0,-1,2))
\end{eqnarray*}
This is the fan of $Y$. So,
$ \mbox{tot}({\mathcal O}_{\mathbb F_2}(-2H)) \simeq Y$,
i.e., $Y$ is the total space of the canonical bundle of $\mathbb F_2$.
Thus, the canonical bundle of $Y$ is trivial.\footnote{Let $L$ be the total space
of ${\mathcal O}_{\mathbb F_2}(-2H)$, with projection $\pi\colon L\to \mathbb F_2$. Then we have an exact sequence
$$ 0 \to \pi^\ast\Omega^1_{\mathbb F_2} \to \Omega^1_L \to \Omega^1_{L/\mathbb F_2} \to 0.$$
The bundle of relative differentials $ \Omega^1_{L/\mathbb F_2}$ is isomorphic
to $\pi^\ast L^\ast$. As a result,
$$ K_L = \det (\Omega^1_L ) \simeq \pi^\ast K_{\mathbb F_2}\otimes \pi^\ast L^\ast
\simeq \pi^\ast{\mathcal O}_{\mathbb F_2}(-2H) \otimes \pi^\ast{\mathcal O}_{\mathbb F_2}(2H) \simeq {\mathcal O}_L.$$}
This will be the key to the computations of the K\"ahler potential of $Y$, of its K\"ahler metric,
and of the K\"ahler 2-form integrals on the homology cycles that
we shall perform in Section \ref{Ysezia}.
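That the cones $\tilde\sigma_i$ reproduce the fan of $Y$ can also be checked mechanically; since all the cones involved are simplicial, it suffices to compare their sets of generators. An illustrative sketch:

```python
# Compare the maximal cones of Sigma_Y with the cones sigma-tilde built
# from the fan of F_2 (as sets of ray generators).
rays = {'v1': (1, 0, 0), 'v2': (-1, 2, 0), 'v3': (0, -1, 2),
        'w2': (0, 1, 0), 'w3': (0, 0, 1)}
sigma_Y = {frozenset({rays['v1'], rays['w2'], rays['w3']}),   # sigma_1
           frozenset({rays['v2'], rays['w2'], rays['w3']}),   # sigma_2
           frozenset({rays['v2'], rays['v3'], rays['w3']}),   # sigma_3
           frozenset({rays['v1'], rays['v3'], rays['w3']})}   # sigma_4
sigma_tilde = {frozenset({(0, 0, 1), (0, 1, 0), (1, 0, 0)}),
               frozenset({(0, 0, 1), (-1, 2, 0), (0, 1, 0)}),
               frozenset({(0, 0, 1), (0, -1, 2), (-1, 2, 0)}),
               frozenset({(0, 0, 1), (1, 0, 0), (0, -1, 2)})}
assert sigma_Y == sigma_tilde
print("tot(O_F2(-2H)) and Y have the same fan")
```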
\subsection{The partial resolution $Y_3$} \label{Y3sezia}
The full resolution $Y$ is obtained by adding two rays to the fan of $Y_0=\mathbb C^3/{\mathbb Z}_4$. If we add just one we obtain a partial resolution. Here we examine the partial resolution that will occur in correspondence of some walls of the stability parameters space.
\subsubsection{The fan}
We consider the toric 3-fold $Y_3$, a partial resolution of $Y_0$, whose fan $\Sigma_3$ is generated by the 4 rays $\mathbf v_1$, $\mathbf v_2$, $\mathbf v_3$,
$\mathbf w_3$. It will appear
as the partial desingularization occurring at some of the walls of the $\zeta$ parameter space (the space of stability conditions). The fan and the associated planar graph are shown in Figure \ref{Sigma3}. The cone $\sigma_1$ is singular, while $\sigma_2$ and $\sigma_3$ are smooth, i.e., $Y_3$ has one singular toric fixed point.
By general theory we know that
$$ h_2(Y_3,{\mathbb Q}) = 1, \qquad h^2(Y_3,{\mathbb Q}) = h^2_c(Y_3,{\mathbb Q}) = h^4(Y_3,{\mathbb Q}) = h^4_c(Y_3,{\mathbb Q}) =1.$$
\subsubsection{Divisors} The divisors corresponding
to $\mathbf w_3$, $\mathbf v_1$, $\mathbf v_2$, $\mathbf v_3$ will be denoted $D_c$, $D_{EH}$,
$D'_{EH}$, $D_{4}$, respectively. They are described in Table \ref{divisors3}. The corresponding fans are shown in Figure \ref{Div3}.
For the variety $Y_3$, which is not smooth, the Picard group in the sequence \eqref{Pic} must be replaced by the class group $\operatorname{Cl}(Y_3)$,
however after tensoring by the rationals the two groups coincide, so that we may ignore this fact.
The group $\operatorname{Pic}(Y_3)\otimes {\mathbb Q}$ is generated by the class of $D_c$.
The matrices $A$ and $B$ are now
$$ A =\begin{pmatrix} 1 & 0 & 0 \\ -1 & 2 & 0 \\ 0 & -1 & 2 \\ 0 & 0 & 1 \end{pmatrix},\qquad
B =\begin{pmatrix}
-\tfrac14 & -\tfrac14 & -\tfrac12 & 1
\end{pmatrix}
$$
(the order of the toric divisors is $D_{EH}$, $D'_{EH}$, $D_4$, $D_c$). The relations we get among the divisor classes are
$$[D_{EH}]=[D'_{EH}] = -\tfrac14 [D_c],\qquad [D_4] = -\tfrac12 [D_c].$$
Again, this implies $[K_{Y_3}]=0$.
The integral generator of $\operatorname{Pic}(Y_3)$ is $D_4$. $D_{EH}$ and $D'_{EH}$
are linearly equivalent, and both generate the class group.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=1.50]
\draw [thick,->] (0,0) -- (-0.5,1.5) ;
\draw [thick,->] (0,0) -- (1.5,0.5) ;
\draw [thick,->] (0,0) -- (-0.8,-0.8) ;
\draw [thick,->,green] (0,0) -- (0,1);
\node at (1,1.1){\blu{\footnotesize $\mathbf w_3= (0,0,1)$}};
\node at (-1,-1) {\footnotesize$\mathbf v_1=(1,0,0)$};
\node at (2.7,0.5) {\footnotesize$\mathbf v_2=(-1,2,0)$};
\node at (-0.8,1.7) {\footnotesize$\mathbf v_3=(0,-1,2)$};
\draw [fill] (0,0) circle (1.5pt) ;
\end{tikzpicture}\hskip1cm
\begin{tikzpicture}[scale=0.50]
\path [fill=pink] (0,0) to (4,2.3) to (4,6.9) to (0,0) ;
\path [fill=yellow] (8,0) to (4,2.3) to (4,6.9) to (8,0) ;
\path [fill=green] (0,0) to (4,2.3) to (4,0) to (0,0);
\path [fill=green] (4,0) to (4,2.3) to (8,0) to (4,0);
\draw [fill] (0,0) circle (3pt);
\draw [fill] (8,0) circle (3pt);
\draw [fill] (4,6.9) circle (3pt);
\draw [fill] (4,2.3) circle (3pt);
\draw (0,0) -- (8,0); \draw (0,0) -- (4,6.9); \draw (8,0) --
(4,6.9); \draw (0,0) -- (4,2.3); \draw (8,0) -- (4,2.3); \draw
(4,6.9) -- (4,2.3); \node at (-0.5,0) {$\mathbf v_1$}; \node at (8.6,0)
{$\mathbf v_2$}; \node at (4.2,7.4)
{$\mathbf v_3$}; \node at
(4.7,2.6) {$\mathbf w_3$};
\node at (3,3) {$\sigma_3$}; \node at (5,3.5) {$\sigma_2$};
\node at (4,1) {$\sigma_1$};
\end{tikzpicture}
\caption{\label{Sigma3} \small The fan $\Sigma_3$ of the partial resolution $Y_3$ and the associated planar graph}
\end{center}
\end{figure}
\begin{table}[ht]\renewcommand{\arraystretch}{1.50}
\caption{Toric Divisors in $Y_3$ \label{divisors3}}
\vskip10pt
\centering
\begin{tabular}{|c |c| c| c| c| c| }
\hline
Ray & Divisor & Fan & Variety & Type & Component \\ [1ex]
\hline
$\mathbf w_3$ & $D_c$ & \small $(1,0),\ (-1,2),\ (0,-1) $ & $\mathbb P[1,1,2]$ & Cartier & 1 \\ \hline
$\mathbf v_1$ & $D_{EH}$ & \small$ (1,0),\ (-1,2),\ (0,1) $& $\mbox{ALE}_{A_1}$ & Weil & $-\frac14$ \\ \hline
$\mathbf v_2$ & $D'_{EH}$ & \small$ (1,0),\ (-1,2),\ (0,1) $ & $\mbox{ALE}_{A_1}$ & Weil & $-\frac14$ \\ \hline
$\mathbf v_3$ & $D_{4}$ & \small$ (1,0),\ (-1,4),\ (0,1) $ & \small tot $(\mathcal O(-4) \to \mathbb P^1$)& Cartier & $-\frac12$ \\ [1ex]
\hline
\end{tabular}
\end{table}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw [thick,->] (0,0) -- (1,0) ;
\draw [thick,->] (0,0) -- (-1,2) ;
\draw [thick,->] (0,0) -- (0,-1) ;
\draw [fill] (0,0) circle (1.5pt) ;
\node at (0,-1.6) {$D_c$};
\node at (0.4,-1.1) {\small $D_3$};
\node at (1.4,0) {\small $D_1$};
\node at (-1.4,2) {\small $D_2$};
\end{tikzpicture}
\hskip10mm
\begin{tikzpicture}
\draw [thick,->] (0,0) -- (1,0) ;
\draw [thick,->] (0,0) -- (-1,2) ;
\draw [thick,->] (0,0) -- (0,1);
\draw [fill] (0,0) circle (1.5pt) ;
\node at (0,-1.3) {$D_{EH}$, $D'_{EH}$};
\end{tikzpicture}
\hskip10mm
\begin{tikzpicture}
\draw [thick,->] (0,0) -- (1,0) ;
\draw [thick,->] (0,0) -- (-1,4) ;
\draw [thick,->] (0,0) -- (0,1);
\draw [fill] (0,0) circle (1.5pt) ;
\node at (0,-1.3) {$D_4$};
\end{tikzpicture}
\caption{\label{Div3} \small The fans of the 4 toric divisors of $Y_3$}
\end{center}
\end{figure}
\subsubsection{Toric curves and intersections}
Inspection of the planar graph in Figure \ref{Sigma3} shows that $Y_3$ has 3 compact toric curves; however, we know that
there is only one independent class in $H_2(Y_3,\mathbb Q)$.
The intersection numbers of the curves with the 4 toric divisors are shown in Table \ref{intersections3}.
The curves $C_2$, $C_4$, $C_5$ are the 3 compact curves, while $C_8$
is a noncompact curve.
\begin{table}[ht]
\renewcommand{\arraystretch}{1.50}
\caption{Intersections among the toric curves and divisors in $Y_3$. The toric curves are images of curves in $Y$ via the natural map $Y\to Y_3$, and we have used the same notation for a curve in $Y$ and its image in $Y_3$. The variety $Y_3$ is singular along the curve $C_8$ (which is the only one of the 6 toric curves of $Y_3$ contained in the singular locus).
Note that $C_8$ is indeed the strict transform of the $z$ axis, whose points have nontrivial isotropy,
and $Y_3$ does not resolve this singularity as $D_{nc}$ has been shrunk onto $C_8$. The curve $C_1$ of $Y$ has shrunk to a point. This will be explicitly checked in Section \ref{Y3Kallero} by noting that
the period of the K\"ahler form on $C_1$ goes to zero under the blow-down morphism
$Y\to Y_3$.
}
\vskip10pt
\centering
\begin{tabular}{|c|c |c| c| c| c| }
\hline
Face & Curve & $D_c$ & $D_{EH}$ & $D'_{EH}$ & $D_4$ \\ \hline
$(\mathbf v_1\mathbf w_3)$ & $C_2$ & $-2 $ & $\frac12$ & $\frac12$ & 1 \\ \hline
$(\mathbf v_2\mathbf w_3)$ & $C_4$ & $-2 $ & $\frac12$ & $\frac12$ & 1 \\ \hline
$(\mathbf v_1\mathbf v_2)$ & $C_8$ & 1 & & & 0 \\ \hline
$(\mathbf v_3\mathbf w_3)$ & $C_5$ & $-4 $ & $1$ & 1 & 2 \\
\hline
\end{tabular}
\label{intersections3}
\end{table}
\subsubsection{The class group}
The class group enters the exact sequence
$$ 0 \to M \xrightarrow{A} \operatorname{Div}_{\mathbb T}(Y_3) \to \operatorname{Cl}(Y_3)\to 0.$$
So the problem is that of computing the quotient of two free abelian groups; it may have torsion.
The matrix $A$ can be put into a normal form called the Smith normal form \cite{smith}.
This is diagonal up to rows of zeros, and its entries determine the class group: each diagonal entry equal to $1$ contributes nothing, each entry equal to an integer $m>1$ contributes a torsion summand ${\mathbb Z}_m$, and each zero row contributes a free summand ${\mathbb Z}$. For $Y_3$ the Smith normal form of the matrix $A$ is the $3\times3$ identity matrix on top of a row of zeros, so that the quotient is ${\mathbb Z}$, i.e.,
$$\operatorname{Cl}(Y_3) = {\mathbb Z}.$$
Comparing with Table \ref{divisors3} we see that the morphism $ \operatorname{Pic}(Y_3)\to \operatorname{Cl}(Y_3)$
is multiplication by $2$ (indeed, $2[D'_{EH}]=[D_4]$).
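The invariant factors of $A$ can be computed directly, using the fact that the product $d_1\cdots d_k$ equals the gcd of the $k\times k$ minors of $A$; an illustrative sketch:

```python
# Invariant factors of the matrix A for Y_3 (rows: v1, v2, v3, w3):
# d_1...d_k = gcd of the k x k minors, giving the Smith normal form.
from itertools import combinations
from math import gcd
from functools import reduce

A = [[1, 0, 0], [-1, 2, 0], [0, -1, 2], [0, 0, 1]]

def det(m):
    # Laplace expansion along the first row (fine for k <= 3)
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def minors_gcd(k):
    vals = [det([[A[r][c] for c in cols] for r in rows])
            for rows in combinations(range(4), k)
            for cols in combinations(range(3), k)]
    return reduce(gcd, (abs(v) for v in vals))

g1, g2, g3 = (minors_gcd(k) for k in (1, 2, 3))
d = (g1, g2 // g1, g3 // g2)
assert d == (1, 1, 1)   # no torsion: Cl(Y_3) = Z^4 / Z^3 = Z
print("invariant factors:", d)
```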
\subsubsection{$Y_3$ as a line bundle over $\mathbb P[1,1,2]$}
Also $Y_3$ is the total space of a line bundle, in this case over $\mathbb P[1,1,2]$.
The fan of $\mathbb P[1,1,2]$ is depicted on the left in Figure \ref{Div3}; it is generated by
the vectors $(1,0)$, $(-1,2)$, $(0,-1)$, corresponding respectively to the divisors $D_1$, $D_2$, $D_3$. The divisors $D_1$ and $D_2$ are Weil,
while $D_3$ is Cartier. In the class group, which is $\mathbb Z$, they are related by
$$ [D_3] = 2 [D_2] = 2 [D_1].$$
Each of $D_1$ and $D_2$ generates the class group, and $D_3$ generates the Picard group.
We study the line bundle ${\mathcal O}_{\mathbb P[1,1,2]}(-2D_3) = {\mathcal O}_{\mathbb P[1,1,2]}(-4)$. By applying the algorithm in \cite[\S 7.3]{CoxLS} we
see that its fan is that of $Y_3$, i.e., $Y_3=\mbox{tot}({\mathcal O}_{\mathbb P[1,1,2]}(-4))$. Again, since
$-2D_3$ is a canonical divisor of $\mathbb P[1,1,2]$, the canonical bundle $K_{Y_3}$ is trivial.
Again the toric divisors of $Y_3$ may be obtained from this description: $D_{EH}$ and $D'_{EH}$ are the inverse
images of $D_1$ and $D_2$, while $D_4$ is the inverse image of $D_3$; since $D_3\cdot D_3= 2$, the line bundle ${\mathcal O}_{\mathbb P[1,1,2]}(-2D_3)$ restricts to $D_3\simeq\mathbb P^1$ with degree $-4$, so that $D_4$ is the total space
of ${\mathcal O}(-4)$ over $\mathbb P^1$. Moreover, $D_c$ is the image of the zero section.
Note that $Y_3$ is obtained from $Y$ by shrinking $D_{nc}$ onto the curve $C_8$; actually $D_{nc}$ is the total space of the trivial line bundle on
the divisor $E_2$ in $\mathbb F_2$, and $\mathbb P[1,1,2]$ is indeed obtained by shrinking that divisor to a point.
In Section \ref{Y3Kallero} using the generalized Kronheimer construction we shall calculate the K\"ahler potential, K\"ahler metric and
K\"ahler form of $Y_3$, verifying that the base of the bundle is indeed singular, as the
periods of the K\"ahler form on the cycle $C_1$ vanish. This means that $C_1$ shrinks to
a point, and since $C_1$ inside the compact exceptional divisor $\mathbb F_2$ is
the exceptional divisor of the blow-down $\mathbb F_2\to \mathbb P[1,1,2]$, the base variety becomes singular.
\subsection{The degeneration $Y_{EH}$}
Another degeneration occurring on some edges of the space of stability parameters
is the product $Y_{EH} = \mbox{ALE}_{A_1} \times \mathbb C$.
\subsubsection{The fan}
We consider the toric 3-fold whose fan is generated by the rays
$\mathbf v_1$, $\mathbf w_2$, $\mathbf v_3$, $\mathbf w_3$. The
three rays $\mathbf w_2$, $\mathbf w_3$, $\mathbf v_3$ generate the
fan of the ALE space $\mbox{ALE}_{A_1}$ and are orthogonal to
$\mathbf v_1$, so that this is indeed a product manifold $Y_{EH} =
\mbox{ALE}_{A_1} \times \mathbb C$. Its fan and planar graph are
shown in Figure \ref{EH}. All cones are contained in the cones of
$\Sigma_Y$ (this is easily visualized by noting that the planar
graph is a subgraph of that of $Y$), so that there is a morphism
$Y_{EH} \to Y$; on the contrary, there does not seem to be a morphism
$Y \to Y_{EH}$, so that $Y_{EH}$ does not appear to be a
degeneration of $ Y$, and $Y_{EH} $ is not a desingularization of
$Y_0$.
\begin{remark} The fans generated by collections of rays $\mathbf v_2$, $\mathbf w_2$, $\mathbf v_3$, $\mathbf w_3$,
and $\mathbf v_1$, $\mathbf w_2$, $\mathbf v_2$, $\mathbf w_3$ describe the same variety.
\end{remark}
The cohomology of this variety is
$$h_2(Y_{EH},{\mathbb Q}) = 1, \qquad h^2_c(Y_{EH},{\mathbb Q}) = 0, \qquad h^2(Y_{EH},{\mathbb Q}) =1, \qquad h^4_c(Y_{EH},{\mathbb Q}) = 1, \qquad h^4(Y_{EH},{\mathbb Q}) = 0.$$
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=1.50]
\draw [thick,->] (0,0) -- (0,1) ;
\draw [thick,->] (0,0) -- (-0.5,1.7) ;
\draw [thick,->] (0,0) -- (-0.8,-0.8) ;
\draw [thick,->] (0,0) -- (1,-0.2);
\node at (1.2,-0.4){\blu{\footnotesize $\mathbf w_2$}};
\node at (0.1,1.2){\blu{\footnotesize $\mathbf w_3$}};
\node at (-0.6,1.9){\blu{\footnotesize $\mathbf v_3$}};
\node at (-0.95,-0.95){\blu{\footnotesize $\mathbf v_1$}};
\draw [fill] (0,0) circle (1.5pt) ;
\end{tikzpicture}\hskip1cm
\begin{tikzpicture}[scale=0.50]
\path [fill=yellow] (0,0) to (4,3.45) to (4,6.9) to (0,0) ;
\path [fill=green] (0,0) to (4,0) to (4,3.45) to (0,0);
\draw [fill] (0,0) circle (3pt);
\draw [fill] (4,6.9) circle (3pt);
\draw (0,0) -- (4,3.45);
\draw (0,0) -- (4,0); \draw (0,0) -- (4,6.9); \draw
(4,6.9) -- (4,2.3); \node at (-0.5,0) {$\mathbf v_1$}; \node at (4.2,7.4)
{$\mathbf v_3$}; \node at (4.7,3.45)
{$\mathbf w_3$};
\node at
(4,-0.6) {$\mathbf w_2$}; \draw [fill] (4,0) circle (3pt); \draw
(4,0) -- (4,4); \draw [fill] (4,3.45) circle (3pt); \draw
(4,0) -- (4,4);
\node at (2.8,3.4) {$\sigma_2$}; \node at (2.8,0.8) {$\sigma_1$};
\end{tikzpicture}
\caption{\label{EH} \small The fan $\Sigma_{EH}$ of the product variety $Y_{EH}$ and the associated planar graph}
\end{center}
\end{figure}
\subsubsection{Divisors}
The 4 toric divisors corresponding to the rays generated by $\mathbf v_1$, $\mathbf w_3$, $\mathbf w_2$, $\mathbf v_3$ will be denoted
$D_{EH}$, $D_{3}$, $D_0$, $D_4$ respectively. They are described in Table \ref{divisors4}. They are all Cartier as $Y_{EH}$ is smooth. Their fans are depicted in Figure \ref{Div4}. The matrices $A$, $B$ in this case are
$$ A =\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 2 \\ 0 & 0 & 1\end{pmatrix},\qquad
B =\begin{pmatrix}
0 & -\tfrac12 & -\tfrac12 & 1
\end{pmatrix}
$$
with the divisors ordered as $D_{EH}$, $D_0$, $D_4$, $D_3$. The relations among the divisor classes are
$$[D_{EH}]=0,\qquad [D_0]=[D_4]=-\tfrac12 [D_3] .$$
Again, $[K_{Y_{EH}}]=0$. The fact that $D_{EH}$ is cohomologous to zero is consistent with
$\operatorname{Pic}(Y_{EH}) \simeq p_1^\ast \operatorname{Pic}(\mbox{ALE}_{A_1})$,
with $p_1\colon Y_{EH} \to \mbox{ALE}_{A_1}$ the projection onto the first factor.
\begin{table}[ht]\renewcommand{\arraystretch}{1.50}
\caption{Toric Divisors in $Y_{EH}$ }
\vskip10pt
\centering
\begin{tabular}{|c |c| c| c| c| }
\hline
Ray & Divisor & Fan & Variety & Component \\ [1ex]
\hline
$\mathbf w_2$ & $D_{0} $ & \small $(1,0), \ (0,1) $ & $\mathbb C^2$ & $-\frac12$ \\ \hline
$\mathbf v_1$ & $D_{EH}$ & \small$ (1,0),\ (-1,2),\ (0,1) $& $\mbox{ALE}_{A_1}$ & 0 \\ \hline
$\mathbf w_3$ & $D_{3}$ & \small $(1,0),\ (-1,0),\ (0,1) $ & $\mathbb P^1\times\mathbb C$ & 1 \\ \hline
$\mathbf v_3$ & $D_{4}$ & \small$ (1,0),\ (-1,4),\ (0,1) $ & \small tot $(\mathcal O(-4) \to \mathbb P^1$) & $ -\frac12 $\\ [1ex]
\hline
\end{tabular}
\label{divisors4}
\end{table}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw [thick,->] (0,0) -- (1,0) ;
\draw [thick,->] (0,0) -- (-1,0) ;
\draw [thick,->] (0,0) -- (0,1) ;
\draw [fill] (0,0) circle (1.5pt) ;
\node at (0,-1.3) {$D_3$};
\end{tikzpicture}
\hskip10mm
\begin{tikzpicture}
\draw [thick,->] (0,0) -- (1,0) ;
\draw [thick,->] (0,0) -- (-1,2) ;
\draw [thick,->] (0,0) -- (0,1);
\draw [fill] (0,0) circle (1.5pt) ;
\node at (0,-1.3) {$D_{EH}$};
\end{tikzpicture}
\hskip10mm
\begin{tikzpicture}
\draw [thick,->] (0,0) -- (1,0) ;
\draw [thick,->] (0,0) -- (0,1) ;
\draw [fill] (0,0) circle (1.5pt) ;
\node at (0,-1.3) {$D_0$};
\end{tikzpicture}
\hskip10mm
\begin{tikzpicture}
\draw [thick,->] (0,0) -- (1,0) ;
\draw [thick,->] (0,0) -- (-1,4) ;
\draw [thick,->] (0,0) -- (0,1);
\draw [fill] (0,0) circle (1.5pt) ;
\node at (0,-1.3) {$D_4$};
\end{tikzpicture}
\caption{\label{Div4} \small The fans of the 4 toric divisors of $Y_{EH}$}
\end{center}
\end{figure}
\subsubsection{Toric curves and intersections}
There is one compact toric curve, corresponding to the face
$(\mathbf v_1 \mathbf w_3)$. It lies in $D_{EH}$ and $D_3$.
The intersection numbers are shown in Table \ref{intersections4}.
\begin{table}[ht]
\caption{Intersections among the toric curves and divisors in $Y_{EH}$}
\vskip10pt
\centering
\begin{tabular}{|c |c| c| c| c| c|}
\hline
& $D_0$ & $D_{EH}$ & $D_3$ & $D_4$ \\ \hline
$C$ & 1 & $0 $ & $ -2 $ & 1 \\
\hline
\end{tabular}
\label{intersections4}
\end{table}
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}[scale=1.50]
\draw [thick,->] (0,0) -- (0,1) ;
\draw [thick,->] (0,0) -- (1.5,0.5) ;
\draw [thick,->] (0,0) -- (-0.8,-0.8) ;
\draw [thick,->] (0,0) -- (1,-0.2);
\node at (1.3,-0.4){\blu{\footnotesize $\mathbf w_2$}};
\node at (-1,-1){\blu{\footnotesize $\mathbf v_1$}};
\node at (1.7,0.6){\blu{\footnotesize $\mathbf v_2$}};
\node at (0.1,1.2){\blu{\footnotesize $\mathbf w_3$}};
\draw [fill] (0,0) circle (1.5pt) ;
\end{tikzpicture}\hskip1cm
\begin{tikzpicture}[scale=0.40]
\path [fill=yellow] (0,0) to (4,2.3) to (4,6.9) to (0,0) ;
\path [fill=green] (8,0) to (4,2.3) to (4,6.9) to (8,0) ;
\path [fill=yellow] (0,0) to (4,2.3) to (4,0) to (0,0);
\path [fill=green] (4,0) to (4,2.3) to (8,0) to (4,0);
\draw [fill] (0,0) circle (3pt);
\draw [fill] (8,0) circle (3pt);
\draw [fill] (4,6.9) circle (3pt);
\draw (0,0) -- (8,0); \draw (0,0) -- (4,6.9); \draw (8,0) --
(4,6.9); \draw
(4,6.9) -- (4,2.3); \node at (-0.5,0) {$\mathbf v_1$}; \node at (8.6,0)
{$\mathbf v_2$}; \node at (4.2,7.4)
{$\mathbf w_3$};
\node at
(4,-0.5) {$\mathbf w_2$}; \draw [fill] (4,0) circle (3pt); \draw
(4,0) -- (4,4);
\node at (5.6,1.5) {$\sigma_2$}; \node at (2.4,1.5) {$\sigma_1$};
\end{tikzpicture}
\caption{\label{SigmaEH2} \small The fan of the second realization of $Y_{EH}$ and the associated planar graph}
\end{center}
\end{figure}
\subsubsection{A relation with the full resolution} \label{relation}
Let $g=(\omega,\omega,\omega^2)$ be the generator of the action of ${\mathbb Z}_4$.
The square $g^2$ leaves every point of the $z$ axis fixed, so that
$$\mathbb C^3/ \langle g^2 \rangle \simeq \mathbb C^2/{\mathbb Z}_2\times\mathbb C.$$
Blowing up we get $Y_{EH}$. Now $g$ still acts on $Y_{EH}$ producing a quotient which
is singular along a $\mathbb P^1$.
Blowing up we get $Y$; the corresponding exceptional divisor is $D_c$,
the second Hirzebruch surface. So $Y_{EH}$ is the desingularization of a partial quotient of
$\mathbb C^3$, and by a further quotient and subsequent desingularization it produces $Y$. The following diagram depicts this situation.
\begin{equation}\label{diagYEH} \xymatrix{
& Y \ar[d] \\
Y_{EH} \ar[r]\ar[d] & Y_{EH}/{\mathbb Z}_2 \ar[d] \\
\mathbb C^3/{\mathbb Z}_2 \ar[r] & \mathbb C^3/{\mathbb Z}_4}
\end{equation}
Note that $ Y_{EH}/{\mathbb Z}_2$ is therefore a partial desingularization of $\mathbb C^3/{\mathbb Z}_4$,
and indeed it corresponds to the fan obtained by adding the ray $\mathbf w_2$ to the
fan of $\mathbb C^3/{\mathbb Z}_4$. Note also that the composed map $Y_{EH}\to {\mathbb C}^3/{\mathbb Z}_4$ in diagram \eqref{diagYEH} is not
a resolution of singularities as it is 2:1.
\subsubsection{$Y_{EH}$ as a fibration}
As happens also for $Y$ and $Y_3$, $Y_{EH}$ is the total space of a vector bundle, actually in two different ways.
In particular it is the pullback of the line bundle ${\mathcal O}(-2)$ on $\mathbb P^1$ via the projection $\mathbb P^1\times \mathbb C \to \mathbb P^1$
(namely, the total space of the canonical bundle of $\mathbb P^1\times \mathbb C $).
\subsection{Summary: the exceptional divisor, curves and homology $2$-cycles}
Some of the main upshots of the discussion and computations made in this Section
are the following.
$\bullet$ The general theory
encoded in the Ito-Reid theorem \cite{itoriddo}, confirmed by the
explicit toric constructions, tells us that the quotient ${\mathbb{C}^3}/{\mathbb{Z}_4}$ has a
crepant resolution of singularities $Y$. This may be computed using
toric geometry. The exceptional divisor has a compact component,
$D_c$, isomorphic to the second Hirzebruch surface
$\mathbb F_2$, and a noncompact component $D_{nc}$,
isomorphic to $\mathbb C\times\mathbb P^1$. This agrees with the age
computations which show that the groups $H^{1,1}_{c}\left(Y\right)$
and $H^{2,2}\left(Y\right)$ are both one-dimensional.
\smallskip
$\bullet$ Let
$ \pi \colon Y \to
\mathbb{C}^3/\mathbb{Z}_4 $
be the blow-down morphism. The compact exceptional
divisor $D_c$ is the inverse image $\pi^{-1}(0)$ of the fixed
point $0\in \mathbb{C}^3$,
while the noncompact component $D_{nc}$
is the preimage of the fixed line $
\{x=y=0\}$.
\smallskip
$\bullet$
Using this information we can immediately identify the
equation of the compact exceptional divisor in the open chart
associated with the cone $\sigma_1$ as
\begin{equation}\label{totointurchia} w=0. \end{equation}
This, together with the substitutions \eqref{subberulle},
is all that we need to compute the periods of the first Chern
classes of the tautological bundles represented by the following closed $(1,1)$-forms:
\begin{equation}\label{giraldo} \omega^{(1,1)}_{1,2,3} \, = \, \frac{i}{2\pi} \, {\partial} \,
\bar\partial \, \log \left[ X_{1,2,3} \right]^{\alpha_{1,2,3}}
\end{equation}
As we explain further on, although we are not able to compute the forms
$\omega^{(1,1)}_{1,2,3}$ in the general case, we nevertheless succeed in
calculating their periods on the basis of the homology cycles by
restricting the moment map equations to the latter, using for this
purpose the equations of such loci as derived from the toric
description.
Again in the chart corresponding to the cone $\sigma_1$, the noncompact
component $D_{nc}$ of the exceptional divisor has equation
\begin{equation}\label{totoinegitto} v=0. \end{equation}
The two components
intersect along the curve
\begin{equation}\label{c2tilde}
C_1 \, = \,\{u,v,w \mid w=0, \, v=0\}.
\end{equation}
\smallskip
$\bullet$ Finally, we consider the curve
\begin{equation}\label{c1}
C_2 \, = \,\{u,v,w \mid w=0,\, u=0\},
\end{equation}
i.e., the intersection between $D_{EH}$ and
$D_c$. It corresponds to the face $xz$ of the fan of
$Y$. As a curve in $\mathbb F_2$, it corresponds to the ray
generated by $(1,0)$, squares to $-2$, and is the exceptional
divisor of the blowup $\mathbb F_2 \to \mathbb P^2/\mathbb Z_2$, and
a section of the fibration $\mathbb F_2\to\mathbb P^1$ (and of course it
is a copy of $\mathbb P^1$). It intersects $D_{nc}$ again
in the point $u=v=w=0$, which corresponds to the cone generated by
$(1,0)$ and $(0,1)$ in the fan of $D_{nc}$. This is the
$\mathbb P^1$ which generates the Picard group of the Eguchi-Hanson
space $D_{EH}$.
\smallskip
$\bullet$ Note that $\dim \, H_2(Y)=2$, and indeed the curves $ C_1$
and $ C_2$ provide a basis for $H_2(Y,\mathbb Z)$.
\section{Chamber Structure and the tautological bundles}
\label{camerataccademica}
\begin{figure}
\centering
\includegraphics[height=7cm]{Valletti.png}
\caption{\label{hilton1} The structure of the stability
chambers. The space $\mathbb{R}^3$ where the moment map equations
always admit real nonnegative solutions is divided into two halves by
the presence of a wall of type 0, named $\mathcal{W}_0$, which is
defined by the equation $\zeta_2=0$ and is marked in
the figure as a cyan transparent surface without meshing. In
addition there are three other walls, respectively described by
$\mathcal{W}_1 \Leftrightarrow \zeta=\{x+y,x,y\}$,
$\mathcal{W}_2 \Leftrightarrow \zeta=\{x,x+y,y\}$ and
$\mathcal{W}_3 \Leftrightarrow \zeta=\{x,y,x+y\}$, where
$x,y\in \mathbb{R}$. The planes $\mathcal{W}_{1,3}$ are of type
0, while $\mathcal{W}_{2}$ is of type $1$. These three infinite
planes provide the partition of $\mathbb{R}^3$ into eight disjoint
chambers that are described in the main text. The three planes
$\mathcal{W}_{1,2,3}$ are marked in the figure as meshed surfaces
of three different colors. On the three intersections of two of
these planes we find the already discussed lines where the moment map
equations can be solved by radicals,
corresponding to the Eguchi-Hanson degeneration $ Y_{EH}\times
\mathbb{C}$ (blue line) and to the two Cardano degenerations (red
lines).}
\end{figure}
We can now compare the analytical results obtained from the K\"ahler
quotient \`a la Kronheimer with the general predictions of the
resolution of the singularity provided by toric geometry. This
provides a concrete example of how the chamber structure of the
$\zeta$ parameter space controls the topology of the
resolutions of the singularity \cite{CrawIshii}.
In the following we evaluate the periods of the differential forms
arising from the Kronheimer construction on the cycles given by the
curves $C_1$, $C_2$ that both are contained in
$D_c$, namely in the compact component of the
exceptional divisor (actually we are pulling back the differential
forms from $\mathcal{M}_{a,b,c}$ to $Y$ via the relevant contraction
morphism $\gamma\colon Y \to \mathcal{M}_{a,b,c}$, and we use the
fact that the pullback is injective in cohomology; however we shall
understand that pullback in the notation). We succeed in evaluating the
differential form periods on the considered curves by restricting
the moment map equations to the relevant exceptional divisor
$D_c$ and then to its relevant sub-loci.
According {to} the general theory of the Kronheimer-like
construction presented in \cite{Bruzzo:2017fwj}, there are three
tautological bundles; their first Chern classes are encoded in the
{triple} of (1,1)-forms given in eq.~\eqref{giraldo}.
\subsection{The stability chambers}
\label{camerataccademicaWeyl} The result of the various calculations
presented in subsequent sections singles out the structure of the
stability chambers which is summarized in
figs.~\ref{hilton1} and \ref{hilton2}.
\begin{figure}
\centering
\includegraphics[height=7cm]{camerata.png}
\caption{\label{hilton2} In fig.~\ref{hilton1} we displayed only
the walls defining the chamber structure. In the present picture
besides the walls we show also one of the eight chambers, namely
Chamber 1. It is marked as a transparent greenish-blue colored portion of
three dimensional space. It is a convex cone delimited by the
aforementioned walls. }
\end{figure}
Let us illustrate this structure in detail. Understanding the
geometry of these pictures is a very useful guide through the
subsequent computations. The original data from which we start are
the following ones. To begin with we know that the entire
$\zeta$ space is just $\mathbb{R}^3$.
In figs.~\ref{hilton1} and \ref{hilton2} we have drawn 3 lines. These
lines correspond to the first three of the following 4 instances of
degenerate spaces where the algebraic system of the moment map
equations becomes solvable or partially solvable:
\begin{description}
\item[1)] Eguchi-Hanson case
\begin{equation}\label{ehcasus}
\zeta_1 \, = \, s \quad ; \quad \zeta_2 \, = \, 0 \quad ; \quad
\zeta_3 \, = \, s
\end{equation}
In pictures \ref{hilton1}, \ref{hilton2} this line is fat and drawn
in blue color.
\item[2)] Cardano 1
\begin{equation}\label{Carcasus1}
\zeta_1 \, = \, s \quad ; \quad \zeta_2 \, = \, s \quad ; \quad
\zeta_3 \, = \, 0
\end{equation}
In pictures \ref{hilton1}, \ref{hilton2} this line is solid fat and
drawn in red color. We remind the reader that the name Cardano is
due to the fact that the solution of the entire moment map equation
system reduces to the solution of a single algebraic equation of the
fourth order (see eq.~\eqref{baldop}).
\item[3)] Cardano 2
\begin{equation}\label{Carcasus2}
\zeta_1 \, = \, 0 \quad ; \quad \zeta_2 \, = \, s \quad ; \quad
\zeta_3 \, = \, s
\end{equation}
In pictures \ref{hilton1}, \ref{hilton2} this line is solid fat and
drawn in red color.
\end{description}
In addition we have a 4th line that we have not drawn in
figs.~\ref{hilton1} and \ref{hilton2}. This line entirely lies on one of
the walls to be described below.
\begin{description}
\item[4)] Kamp\'{e}
\begin{equation}\label{kampus1}
\zeta_1 \, = \, s \quad ; \quad \zeta_2 \, = \, 2 \,s \quad ; \quad
\zeta_3 \, = \, s
\end{equation}
In fig.~\ref{pianoiwdue} this line is dashed fat and drawn in black
color. We remind again the reader that the manifold was named
Kamp\'{e} because the solution of the moment map equations reduces
to finding the roots of a single algebraic equation of the sixth
order.
\end{description}
In all these cases $s$ is a nonzero real number.
Since the exceptional solvable cases must lie on some walls, we have
tried to conjecture which planar cones might partition the space
$\mathbb{R}^3$ into chambers so that the exceptional lines could lie
on such planar cones and possibly be edges at some of their intersections.
With some ingenuity we introduced the following four planar walls
(here $x,y$ are real parameters)
\begin{align}
\mathcal{W}_1 :\qquad & \{x+y,x,y\} \label{IW1muro} \\
\mathcal{W}_2 : \qquad &\{x,x+y,y\} \label{IW2muro} \\
\mathcal{W}_3 : \qquad & \{x,y,x+y\} \label{IW3muro} \\
\mathcal{W}_0 : \qquad & \{x,0,y\} \label{IW0muro}
\end{align}
that were depicted in figs.~\ref{hilton1} and \ref{hilton2}, and split
the space $\mathbb{R}^3$ into eight convex three--dimensional cones.
The list of the eight convex cones that provide as many stability chambers is obtained through the following argument. The three
planes $\mathcal{W}_{1,2,3}$ are respectively orthogonal to the
following three vectors:
\begin{eqnarray}
\pmb{n}_1 &=& \left\{-1,1,1 \right\} \\
\pmb{n}_2 &=& \left\{1,-1,1 \right\} \\
\pmb{n}_3 &=& \left\{1,1,-1 \right\} .\label{normalini}
\end{eqnarray}
The eight convex regions are defined by choosing the signs of the
projections $\pmb{n}_{1,2,3}\cdot \zeta$ in all possible ways.
In this way we obtain:
\begin{equation}
\begin{array}{rcl}
\mbox{Chamber I}
& \equiv &
\, \left\{ \zeta_1\, -\,\zeta_2\,-\,\zeta_3\,> \,0
\quad , \quad
-\zeta_1\, +\,\zeta_2\,-\,\zeta_3\,> \,0 \quad , \quad
-\zeta_1\, -\,\zeta_2\,+\,\zeta_3\,> \,0
\right\} \\
\mbox{Chamber II}
& \equiv &
\, \left\{ \zeta_1\, -\,\zeta_2\,-\,\zeta_3\,> \,0
\quad , \quad
-\zeta_1\, +\,\zeta_2\,-\,\zeta_3\,> \,0 \quad , \quad
-\zeta_1\, -\,\zeta_2\,+\,\zeta_3\,< \,0
\right\} \\
\mbox{Chamber III}
& \equiv &
\, \left\{ \zeta_1\, -\,\zeta_2\,-\,\zeta_3\,> \,0
\quad , \quad
-\zeta_1\, +\,\zeta_2\,-\,\zeta_3\,< \,0 \quad , \quad
-\zeta_1\, -\,\zeta_2\,+\,\zeta_3\,> \,0
\right\} \\
\mbox{Chamber IV}
& \equiv &
\, \left\{ \zeta_1\, -\,\zeta_2\,-\,\zeta_3\,< \,0
\quad , \quad
-\zeta_1\, +\,\zeta_2\,-\,\zeta_3\,> \,0 \quad , \quad
-\zeta_1\, -\,\zeta_2\,+\,\zeta_3\,> \,0
\right\} \\
\mbox{Chamber V}
& \equiv &
\, \left\{ \zeta_1\, -\,\zeta_2\,-\,\zeta_3\,< \,0
\quad , \quad
-\zeta_1\, +\,\zeta_2\,-\,\zeta_3\,< \,0 \quad , \quad
-\zeta_1\, -\,\zeta_2\,+\,\zeta_3\,> \,0
\right\} \\
\mbox{Chamber VI}
& \equiv &
\, \left\{ \zeta_1\, -\,\zeta_2\,-\,\zeta_3\,< \,0
\quad , \quad
-\zeta_1\, +\,\zeta_2\,-\,\zeta_3\,> \,0 \quad , \quad
-\zeta_1\, -\,\zeta_2\,+\,\zeta_3\,< \,0
\right\} \\
\mbox{Chamber VII}
& \equiv &
\, \left\{ \zeta_1\, -\,\zeta_2\,-\,\zeta_3\,> \,0
\quad , \quad
-\zeta_1\, +\,\zeta_2\,-\,\zeta_3\,< \,0 \quad , \quad
-\zeta_1\, -\,\zeta_2\,+\,\zeta_3\,< \,0
\right\} \\
\mbox{Chamber VIII}
& \equiv &
\, \left\{ \zeta_1\, -\,\zeta_2\,-\,\zeta_3\,< \,0
\quad , \quad
-\zeta_1\, +\,\zeta_2\,-\,\zeta_3\,< \,0 \quad , \quad
-\zeta_1\, -\,\zeta_2\,+\,\zeta_3\,< \,0
\right\} \\
\end{array}
\label{celle8}
\end{equation}
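The sign-based classification above can be cross-checked with a short numerical sketch; the normal vectors are those of \eqref{normalini}, while the helper name \texttt{sign\_pattern} is ours:

```python
import itertools

# Normals to the walls W_1, W_2, W_3 (eq. (normalini) of the text).
normals = [(-1.0, 1.0, 1.0), (1.0, -1.0, 1.0), (1.0, 1.0, -1.0)]

def sign_pattern(zeta):
    """Signs of the three projections n_i . zeta; constant inside each chamber."""
    return tuple(1 if sum(n * z for n, z in zip(nv, zeta)) > 0 else -1
                 for nv in normals)

# Inverting the linear map zeta -> (n_1.zeta, n_2.zeta, n_3.zeta) gives
# zeta = ((t_2+t_3)/2, (t_1+t_3)/2, (t_1+t_2)/2); hence every one of the
# eight sign patterns defining the chambers is realized by some zeta:
found = set()
for t in itertools.product((1, -1), repeat=3):
    zeta = ((t[1] + t[2]) / 2, (t[0] + t[2]) / 2, (t[0] + t[1]) / 2)
    assert sign_pattern(zeta) == t
    found.add(sign_pattern(zeta))
assert len(found) == 8   # the eight chambers
```

The pattern is constant on each chamber because each projection $\pmb{n}_i\cdot\zeta$ can change sign only across the wall $\mathcal{W}_i$.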
Four of the above eight chambers are displayed in fig.~\ref{pinetti}.
\begin{figure}
\centering
\includegraphics[height=9cm]{pianoIW2.png}
\caption{ \label{pianoiwdue} This picture shows the type I plane
$\mathcal{W}_2$. The two Cardano manifolds (red lines) and the
Kamp\'{e} manifold (dashed black line) all lie in this plane, whose
generic point corresponds, as we are going to see, to the
degeneration $Y_3$. The Eguchi-Hanson degeneration (blue line) is
instead out of this plane and intersects it only in the origin. }
\end{figure}
\begin{figure}
\centering
\includegraphics[height=9cm]{partizione.png}
\caption{ \label{pinetti} The partition of $\mathbb{R}^3$ into 8
convex cones. In the picture, out of the eight regions, we show only
four, marking them in different transparent colors.}
\end{figure}
\subsection{Edges} The special solvable cases that we have found
all sit at the intersection of two of the three walls
$\mathcal{W}_{1,2,3}$. In particular the Cardano manifolds are
edges at the intersection of the following walls:
\begin{equation}\label{cardanici}
\text{Cardano 1} \, = \, \mathcal{W}_{1}\bigcap \mathcal{W}_{2} \quad ;
\quad \text{Cardano 2} \, = \, \mathcal{W}_{2}\bigcap \mathcal{W}_{3}
\end{equation}
while the Eguchi-Hanson case is the intersection:
\begin{equation}\label{ansoniani}
Y_{EH} \, = \, \mathcal{W}_{1}\bigcap
\mathcal{W}_{3}
\end{equation}
Note also that this edge lies entirely on the
wall $\mathcal{W}_0$. From this point of view the Eguchi-Hanson
case is similar to the Kamp\'{e} case, which lies entirely on the wall
$\mathcal{W}_2$. The difference however is that, as we advocate
below, the wall $\mathcal{W}_0$ is of type 0 while
$\mathcal{W}_{2}$ is of type 1. In the first case the Eguchi-Hanson
line is the only degeneracy pertaining to the wall $\mathcal{W}_0$,
while in the second case the Kamp\'{e} line yields an instance of
the degeneracy $Y_3$ as any other point of the same wall. Actually
also the Cardano cases that lie on the same wall correspond to a
different realization of $Y_3$.
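These intersection statements can be verified directly: a point lies on $\mathcal{W}_i$ ($i=1,2,3$) precisely when its projection on the normal $\pmb{n}_i$ vanishes, and on $\mathcal{W}_0$ when $\zeta_2=0$. A quick numerical check (the helper name \texttt{on\_wall} is ours):

```python
def on_wall(zeta, normal):
    """A point lies on a wall iff its projection on the wall's normal vanishes."""
    return abs(sum(n * z for n, z in zip(normal, zeta))) < 1e-12

n1, n2, n3 = (-1, 1, 1), (1, -1, 1), (1, 1, -1)

for s in (0.5, 1.0, -2.0):   # any nonzero parameter along the edge
    cardano1, cardano2 = (s, s, 0), (0, s, s)
    eguchi_hanson, kampe = (s, 0, s), (s, 2 * s, s)
    # Cardano 1 = W1 /\ W2 and Cardano 2 = W2 /\ W3 (eq. (cardanici)):
    assert on_wall(cardano1, n1) and on_wall(cardano1, n2) and not on_wall(cardano1, n3)
    assert on_wall(cardano2, n2) and on_wall(cardano2, n3) and not on_wall(cardano2, n1)
    # Eguchi-Hanson = W1 /\ W3 (eq. (ansoniani)), and it also lies on W0:
    assert on_wall(eguchi_hanson, n1) and on_wall(eguchi_hanson, n3)
    assert eguchi_hanson[1] == 0
    # The Kampe line lies entirely on the wall W2:
    assert on_wall(kampe, n2)
```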
\section{Periods of the Chern classes of the tautological bundles}
\label{Periodstaut}
The most appropriate instrument to verify the
degeneracy/non-degeneracy of the singularity resolutions provided by
the K\"ahler quotient with given $\zeta$ parameters is
provided by the calculation of the period
matrix:
\begin{equation}\label{periodare}
\pmb{\Pi} \, \equiv \Pi_{i,J} \, = \, \int_{C_i} \omega_{J}
\quad ; \quad
i=(1,2), \quad J=(1,2,3)
\end{equation}
where $\omega_J$ are the first Chern classes of the tautological
bundles and the curves $C_i$ provide a basis of homology
2-cycles. In particular the combination:
\begin{equation}\label{saccius}
\pmb{Vol}_i \, = \zeta_I \, \mathfrak{C}^{IJ} \, \int_{C_i} \omega_J
=
\, \tfrac{i}{2\pi}\, \zeta_I \, \mathfrak{C}^{IJ} \,
\int_{C_i} \, \partial\bar\partial \log (X_J)^{\alpha_\zeta}\, = \, \int_{C_i}
\mathbb{K}_\zeta
\end{equation}
is the volume of the cycle $C_i$ in the resolution
identified by the level parameters $\zeta$, having denoted by
$\mathbb{K}_\zeta$ the corresponding K\"ahler 2-form.\footnote{Let us remark that in agreement with
eq.~\eqref{caramboletta} the contribution
$\partial\bar{\partial}\mathcal{K}_0$ to the K\"ahler
2-form is an exact form, whose integral on homology cycles therefore vanishes.
Hence the volume of the homology cycles is provided by the linear
combination of periods specified in eq.~\eqref{saccius}.
We also remark that, as the volume of a nonzero cycle is always positive,
this is consistent with the positivity of the so-called Hodge line bundle
$\otimes_{J=1}^3 L_J^{\Theta(\mathcal{D}_J)}$.} If the
volume of the two homology cycles yielding the homology basis is
nonzero there is no degeneration. Instead in case of degenerations at
least one of such volumes vanishes. This is precisely what happens
on the walls of type 1, while in the interior of all chambers no
degeneration appears.
Through the calculations detailed in the following subsections we
have been able to compute the periods of the first Chern forms
$\omega_{1,2,3}^{(1,1)}$ on the basis of homology cycles
$C_{1,2}$ both for the interior points of all the
chambers and for all the walls. As far as the interior chamber case
is concerned our results are summarized in Table \ref{periodico}.
For the walls the results are instead summarized in Table
\ref{muraria}, while for the edges they are given in Table
\ref{spigolosa}.
\begin{table}
\centering
$$ \begin{array}{|c||c|}
\hline
\hline
\text{Chamber 1} &\begin{array}{c|c|c|c}
\text{cycle} & \int \omega_1 & \int \omega_2 & \int \omega_3 \\
\hline \hline
C_1 & 1& 0 & 1\\
\hline
C_2 & 1& 1 & 1\\
\end{array}\\
\hline \hline
\text{Chamber 2} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 & 1& 0 & 1\\
\hline
C_2 & 1& 1 & 1\\
\end{array} \\
\hline
\hline
\text{Chamber 3} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 & 1& 0 & 1\\
\hline
C_2 & 0 & -1 & 0 \\
\end{array} \\
\hline
\hline
\text{Chamber 4} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 & 1& 0 & 1\\
\hline
C_2 & 1& 1 & 1\\
\end{array} \\
\hline
\hline
\text{Chamber 5} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 & 1& 0 & 1\\
\hline
C_2 & 0 & -1 & 0 \\
\end{array} \\
\hline
\hline
\text{Chamber 6} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 & 1& 0 & 1\\
\hline
C_2 & -1 & -1 & -1 \\
\end{array} \\
\hline
\hline
\text{Chamber 7} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 & 1& 0 & 1\\
\hline
C_2 & 0 & -1 & 0 \\
\end{array} \\
\hline
\hline
\text{Chamber 8} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 & 1& 0 & 1\\
\hline
C_2 & 0 & -1 & 0 \\
\end{array} \\
\hline
\hline
\end{array}
$$
\caption{The periods of the tautological bundle first Chern classes on the basis of homological cycles
calculated in the interior points of all the chambers.}\label{periodico}
\end{table}
\begin{table}\renewcommand{\arraystretch}{1.50}
\centering
$$ \begin{array}{|c||c|}
\hline
\hline
\text{Wall $\mathcal{W}_0$} &\begin{array}{c|c|c|c}
\text{cycle} & \int \omega_1 & \int \omega_2 & \int \omega_3 \\
\hline \hline
C_1 & 3 & 0 & 3\\
\hline
C_2 & 2 & 0 & -2 \\
\end{array}\\
\hline \hline
\text{Wall $\mathcal{W}_1$} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 & 1 & 0 & 1 \\
\hline
C_2 &0 & -2 & 0 \\
\end{array} \\
\hline
\hline
\text{Wall $\mathcal{W}_2$} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 & 0 & 0 & 0 \\
\hline
C_2 & 1 & 4 & 1 \\
\end{array} \\
\hline
\hline
\text{Wall $\mathcal{W}_3$} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 & 1 & 0 & 1 \\
\hline
C_2 &0 & -2 & 0 \\
\end{array} \\
\hline
\hline
\end{array}
$$
\caption{The periods of the tautological bundle first Chern classes on the basis of homological cycles
calculated on the 4 walls. For the walls $\mathcal{W}_0$ and $\mathcal{W}_2$
we have chosen $\alpha=4$ instead of $\alpha=2$ to get integer values for the periods.
\label{muraria}}
\end{table}
\begin{table}\renewcommand{\arraystretch}{1.50}
\centering
$$ \begin{array}{|c||c|}
\hline
\hline
\text{Cardano $\zeta = \{0,s,s\}$} &\begin{array}{c|c|c|c}
\text{cycle} & \int \omega_1 & \int \omega_2 & \int \omega_3 \\
\hline \hline
C_1 & 0 & 0 & 0 \\
\hline
C_2 & -1 & -1 & 0 \\
\end{array}\\
\hline \hline
\text{Cardano $\zeta = \{s,s,0\}$} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 &0 & 0 & 0 \\
\hline
C_2 &0 & -1 & -1 \\
\end{array} \\
\hline
\hline
\text{Eguchi-Hanson $\zeta = \{s,0,s\}$} & \begin{array}{c|c|c|c}
{\text{cycle}} & {\int \omega_1} & {\int \omega_2} & {\int \omega_3} \\
\hline
C_1 & -1 & 0 & -1 \\
\hline
C_2 & 0 & 0 & 0 \\
\end{array} \\
\hline
\hline
\end{array}
$$
\caption{The periods of the tautological bundle first Chern classes on the basis of homological cycles
calculated on the edges. Again, on the Eguchi-Hanson edge we have taken $\alpha=4$.
}\label{spigolosa}
\end{table}
Let us discuss the results in Table \ref{periodico}. The three columns of periods display the degrees of the tautological line bundles
$\mathcal R_I$ restricted to the curves $C_1$, $C_2$; as the classes of the latter provide a basis of $H_2(Y,{\mathbb Z})$, these numbers give the
first Chern classes of the line bundles over the integral basis of the Picard group $\operatorname{Pic}(Y)$ given by the divisors
$D_{EH}$, $D_4$, dual to the previously mentioned basis of $H_2(Y,{\mathbb Z})$. Note that the three columns correspond to the compact junior conjugacy class, the noncompact junior class, and the senior class respectively, and the Poincar\'e duality between $H^2_c(Y) $ and
$H^4(Y)$ explains why in all chambers $\mathcal R_1$ and $\mathcal R_3$ have the same Chern classes, and are therefore isomorphic.
We know from the McKay correspondence that the cohomology of $Y$ is generated by algebraic classes which are in a one-to-one correspondence with the elements of ${\mathbb Z}_4$. One issue is how these classes are expressible in terms of the Chern characters of the tautological bundles.
In general, this correspondence is highly nontrivial and is governed by a complicated combinatorics \cite{Reid-Kino,Craw-Hilb}. This is indeed exemplified by our calculations. For instance, one can check that in the chambers 1, 2 and 4 one has
\begin{equation}\label{chern} \langle \operatorname{ch}_2(\mathcal R_1), w\rangle = \langle\operatorname{ch}_2(\mathcal R_3) ,w\rangle= 1, \qquad
\langle\operatorname{ch}_2(\mathcal R_2), w\rangle = 0 ,\end{equation}
(the triangular brackets are the pairing between cohomology and homology in dimension 4),\footnote{In general, homology and cohomology are not dual to each other, but are rather related by the exact sequence
$$ 0 \to \operatorname{Ext}^1_{\mathbb Z}(H_{n-1}(Y,{\mathbb Z}),{\mathbb Z}) \to H^n(Y,{\mathbb Z}) \to H_n(Y,{\mathbb Z})^\vee \to 0 .$$
However in our case the Ext groups are zero as the homology groups are free over ${\mathbb Z}$ (see e.g.~\cite[Thm.~3.2]{Hatch}).}
where $w$ is the generator of $H_4(Y,{\mathbb Z})$ given by the divisor $D_4$. So the second Chern character of $ \mathcal R_1\simeq \mathcal R_3$ does provide a ${\mathbb Z}$-basis for $H^4(Y,{\mathbb Z})$,
and similarly in chamber 6 with $-1$ instead of 1. On the other hand
in the chambers 3, 5, 7 and 8 all line bundles have zero second Chern character, and one needs to take a suitable combination of them to get a generator.
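The isomorphism $\mathcal R_1\simeq \mathcal R_3$ can be read directly off Table \ref{periodico}; the following minimal sketch simply transcribes the table and checks that the first and third period columns agree in every chamber:

```python
# Periods of omega_1, omega_2, omega_3 on C_1, C_2 per chamber
# (transcribed from Table (periodico) of the text).
periods = {
    1: {"C1": (1, 0, 1), "C2": (1, 1, 1)},
    2: {"C1": (1, 0, 1), "C2": (1, 1, 1)},
    3: {"C1": (1, 0, 1), "C2": (0, -1, 0)},
    4: {"C1": (1, 0, 1), "C2": (1, 1, 1)},
    5: {"C1": (1, 0, 1), "C2": (0, -1, 0)},
    6: {"C1": (1, 0, 1), "C2": (-1, -1, -1)},
    7: {"C1": (1, 0, 1), "C2": (0, -1, 0)},
    8: {"C1": (1, 0, 1), "C2": (0, -1, 0)},
}
# In every chamber and on both cycles the degrees of R_1 and R_3 coincide,
# i.e. c_1(R_1) = c_1(R_3), so the two line bundles are isomorphic.
for rows in periods.values():
    for (p1, p2, p3) in rows.values():
        assert p1 == p3
```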
In order to illustrate the procedure which leads to the results
presented in the mentioned tables there are several steps one has to
take that are explained in the following subsections.
\subsection{The fundamental algebraic system and a dense
toric chart covering the variety}\label{ballavantana} The main
difficulty with any explicit calculation within the framework of the
K\"ahler quotient \`{a} la Kronheimer is that, in this case as in
the majority of cases, the system of moment map equations is
algebraic of higher order and explicit analytic solutions are mostly
out of reach.
However, in the case under consideration and similarly in most of
the other cases, the homology cycles on which we would like to
integrate our differential forms are entirely contained in the
compact component of the exceptional divisor $D_c$.
This is not too surprising since all the homology is produced by the
resolution and disappears in the original orbifold case. Hence the
first step in a strategy that leads to the calculation of the period
matrix \eqref{periodare} necessarily foresees a reduction of the
moment map algebraic system \eqref{sistemico} to the exceptional
divisor in the hope that there it becomes of lower effective degree
and therefore manageable.
In view of the above observation we consider the transcription of
the algebraic system \eqref{sistemico} in terms of toric coordinates
that expose the presence of the exceptional divisor. There are 4
toric charts (see eq. \eqref{subberulle}), yet from the point of
view of the algebraic system of moment map equations there are only
two distinguishable charts, namely the chart $X_{\sigma _1}$ and
$X_{\sigma _4}$. Indeed, at this level, chart
$X_{\sigma _2}$ is identical with chart $X_{\sigma _1}$ and
chart $X_{\sigma _3}$ is identical with chart $X_{\sigma _4}$.
We choose chart $X_{\sigma _1}$ since, as is evident from the planar diagram in
Fig.~\ref{SigmaY}, the charts 1 and 2 are the only ones that
contain both curves $C_1$ and $C_2$.
\subsubsection{Chart $X_{\sigma _1}$}
In order to perform our calculations we find it convenient to
introduce the following notation. With reference to the coordinates
$u_1$,$v_1$,$w_1$ we set:
\begin{equation}
\lambda = \sqrt{\left|w_1\right|} \quad ; \quad \sigma =
\left|v_1\right| \quad ; \quad \delta
= \left(1+\left|u_1\right|^2\right) \label{cavedanorosso}
\end{equation}
which yields
\begin{equation}
\Sigma =\delta \lambda \sigma \quad ; \quad U=\lambda ^2
\label{nocdue}
\end{equation}
The next point in the algorithm leading to the calculation of the
period matrix on the compact exceptional divisor consists of the
replacement of the variables $X_{1,2,3}$ with new ones
$T_{1,2,3}$ related to the previous ones in the following way:
\begin{equation}
X_1=\frac{\sqrt{\lambda } T_1}{\sqrt{\sigma }} \quad ;\quad
X_2=\lambda ^2 T_{2 }\quad ;\quad X_3=\frac{\sqrt{\lambda }
T_3}{\sqrt{\sigma }}\label{3subia}
\end{equation}
The rationale of the transformation \eqref{3subia} is as follows.
The reduction to the compact exceptional divisor consists of setting
$w_1=0$ and hence $\lambda \to 0$. From the point of view of the
(1,1)-forms $\omega_I$ defined in eq.~\eqref{giraldo},
multiplication of $X_I$ by any power of the modulus square of any
complex coordinate is ineffective because of the logarithm. In other
words, instead of eq.~\eqref{giraldo} we can write equally
well:
\begin{equation}\label{carnacina}
\omega_I^{(1,1)} \, = \, \frac{i}{2\pi} \, \partial \,
\bar{\partial} \, \log\left[T_I\right]^{\alpha_I}
\end{equation}
The specific choice of powers of $\lambda$ and $\sigma$ utilized in
\eqref{3subia} is the result of a search of the appropriate values
that lead to a finite algebraic system with nontrivial solutions in
the two limits $\lambda\to 0$ and $\sigma\to 0$, corresponding
respectively to the compact and noncompact exceptional divisor.
After these replacements in eqs.~\eqref{sakerdivoli}, the
system of the moment map equations takes the following final form: {\small
\begin{eqnarray}
&&\left(
\begin{array}{c}
-T_1^2 \left(1+\lambda ^4 T_2^2\right)-\delta T_1^3 T_3+\left(1+\lambda ^4 T_2^2\right) T_3^2+T_1 T_3
\left(\delta T_3^2+T_2 \left(-\zeta _1+\zeta
_3\right)\right) \\
\delta \lambda ^4 \sigma ^2 T_2^3-\delta T_1 T_3 \left(T_1^2+T_3^2\right)
+T_2 \left(\delta \sigma ^2-T_1 T_3 \left(\zeta _1-\zeta _2+\zeta _3\right)\right)
\\
-\left(-1+\lambda ^4 T_2^2\right) \left(T_1^2+\delta \sigma ^2 T_2+T_3^2\right)-T_1 T_2 T_3 \zeta _2 \\
\end{array}
\right)\, = \,\left(
\begin{array}{c}
0 \\
0 \\
0 \\
\end{array}
\right)\label{brillo}
\end{eqnarray}
}
\subsubsection{Equations of the two homology cycles}
In order to calculate the periods of the differential forms
\eqref{carnacina} on the basis of the homology curves
$C_1$ and $C_2$ we need the equations of
such loci in the coordinates $\lambda $, $\sigma$, $\delta $ defined
in equation \eqref{cavedanorosso}. We have
\begin{equation}
\begin{array}{lclclcl}
\text{Curve} \ C_1 & : & \sigma = 0 &; &\lambda
\rightarrow 0
& ; & \delta = (1+\Delta )\\
\text{Curve} \ C_2 &: & \sigma = \sqrt{\Delta }& ;
&\lambda \rightarrow 0 & ; & \delta = 1\\
\end{array}
\end{equation}
Conventionally, we denote by $\Delta = \vert t\vert^2$ the modulus
squared of the complex coordinate $t$ spanning the curve, whatever it
might be. In the case $C_1$ we have $\Delta = \vert u\vert^2$, while in the case $C_2$ we have
$\Delta= \vert v\vert^2$.
In the sequel we use also polar coordinates, setting:
\begin{eqnarray}
\begin{array}{rclcrcl}
\rho & \equiv & \sqrt{\left|u\right|^2}& ;& r &\equiv&
\sqrt{\left|v\right|^2} \nonumber\\
u & = & \rho \, \exp [i \theta] & ; & v & = & r \, \exp [i \psi]\\
\end{array}
\label{brchefreddo}
\end{eqnarray}
\subsubsection{Reduction to the compact exceptional divisor}
Furthermore it is convenient to introduce the following
variables, already utilized in eq.~\eqref{mhovarpi}:
\begin{equation}\label{ciuciosardo}
\varpi \, \equiv \, \delta^2 \, \sigma^2 \quad ; \quad \mho \,
\equiv \, \lambda^4
\end{equation}
and the following relation between the unknowns $T_{1,2,3}$ which
follows from linear combinations of the equations in the system
\eqref{brillo}
\begin{equation}\label{banalata}
T_3\, = \, T_1 \sqrt{\frac{\zeta _2+\zeta _3 \left(T_2^2 \mho -1\right)}{\zeta _2+\zeta _1 \left(T_2^2 \mho
-1\right)}}
\end{equation}
We see that in the limit $\mho \to 0$, which is the equation of the
compact exceptional divisor $D_c$, the expression
for $T_3$ is proportional to that for $T_1$ in any point of
$\zeta$ space. This has the nontrivial consequence that the
periods of $\omega_1$ and $\omega_3$ are always equal. In other
words the first Chern classes of the first and third tautological
bundles are always cohomologous.
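Explicitly, on the compact divisor the proportionality stated above amounts to the following one-line check:

```latex
% At \mho = 0 eq. (banalata) reduces to a zeta-dependent constant of
% proportionality between T_3 and T_1:
T_3\Big|_{\mho=0} \;=\; c(\zeta)\, T_1\,,
\qquad c(\zeta)\;=\;\sqrt{\frac{\zeta_2-\zeta_3}{\zeta_2-\zeta_1}}\,,
% whence, since \partial\bar\partial of the constant \log c(\zeta) vanishes,
\frac{i}{2\pi}\,\partial\bar\partial\,
\log\left[c(\zeta)\,T_1\right]^{\alpha}
\;=\;\frac{i}{2\pi}\,\partial\bar\partial\,\log\left[T_1\right]^{\alpha},
% so the restrictions of omega_1 and omega_3 to D_c define the same class.
```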
Inserting eqs.~\eqref{banalata}, \eqref{ciuciosardo} into
\eqref{brillo} and performing the limit $\mho \to 0$ we obtain the
new system:
\begin{equation}\label{sacrabusta}
\left(
\begin{array}{c}
-\frac{\left(\zeta _1-\zeta _3\right) T_1^2 \left(\delta \sqrt{\frac{\zeta _3-\zeta _2}{\zeta _1-\zeta _2}}
T_1^2+\left(\zeta _1-\zeta _2\right) \sqrt{\frac{\zeta _3-\zeta _2}{\zeta _1-\zeta _2}} T_2+1\right)}{\zeta
_1-\zeta _2} \\
T_2 \left(\varpi -\sqrt{\frac{\zeta _3-\zeta _2}{\zeta _1-\zeta _2}} \left(\zeta _1-\zeta _2+\zeta _3\right)
T_1^2\right)-\frac{\delta \left(\zeta _1-2 \zeta _2+\zeta _3\right) \sqrt{\frac{\zeta _3-\zeta _2}{\zeta
_1-\zeta _2}} T_1^4}{\zeta _1-\zeta _2} \\
\frac{\left(\zeta _2-\zeta _3\right) T_1^2}{\zeta _2-\zeta _1}-\zeta _2 \sqrt{\frac{\zeta _3-\zeta _2}{\zeta
_1-\zeta _2}} T_2 T_1^2+T_2 \varpi +T_1^2 \\
\end{array}
\right)\, = \,\left(
\begin{array}{c}
0 \\
0 \\
0 \\
\end{array}
\right)
\end{equation}
which is the appropriate reduction to the exceptional divisor
$D_c$ of the moment map equations. By construction
the three equations in \eqref{sacrabusta} are linearly dependent and
can be solved for the two unknowns $T_{1,2}$ in terms of the
variables $\delta,\varpi$. Indeed they are quadratic, cubic, or
quartic and can be solved by radicals.
\subsection{Periods inside the
chambers.} Let us begin with the periods in the interior points of
the chambers, the result of whose calculation was summarized in
Table \ref{periodico}. We found it convenient to choose eight
rational points as representatives of the eight chambers. Explicitly we
utilize the following ones:
\begin{equation}\label{pirillini}
\begin{array}{|l|ccc||}
\hline
\text{Chamber 1}&\zeta _1\to -\frac{1}{2} & \zeta _2\to -\frac{3}{4} & \zeta _3\to -\frac{1}{2} \\[3pt]
\hline
\text{Chamber 2}&\zeta _1\to -\frac{1}{16} & \zeta _2\to -\frac{3}{16} & \zeta _3\to -\frac{3}{4} \\[3pt]
\hline
\text{Chamber 3}&\zeta _1\to -\frac{1}{16} & \zeta _2\to -\frac{3}{4} & \zeta _3\to -\frac{3}{16} \\[3pt]
\hline
\text{Chamber 4} &\zeta _1\to -\frac{3}{4} & \zeta _2\to -\frac{1}{16} & \zeta _3\to -\frac{3}{16} \\[3pt]
\hline
\text{Chamber 5}& \zeta _1\to -\frac{1}{4} & \zeta _2\to -\frac{1}{2} & \zeta _3\to \frac{1}{2} \\[3pt]
\hline
\text{Chamber 6}& \zeta _1\to -\frac{1}{4} & \zeta _2\to \frac{1}{2} & \zeta _3\to -\frac{1}{2} \\[3pt]
\hline
\text{Chamber 7}& \zeta _1\to \frac{3}{4} & \zeta _2\to -\frac{1}{4} & \zeta _3\to -\frac{1}{2} \\[3pt]
\hline
\text{Chamber 8}& \zeta _1\to \frac{3}{4} & \zeta _2\to \frac{1}{2} & \zeta _3\to \frac{3}{4} \\[3pt]
\hline
\end{array}
\end{equation}
Inserting the values \eqref{pirillini} into the system
\eqref{sacrabusta} we obtain eight algebraic systems that we
specialize to either the $C_1$ or the $C_2$ cycle by setting:
\begin{equation}\label{gramellino}
\begin{array}{|l|rclcrcl|}
\hline
C_1 & \varpi & = & 0 & ; & \delta & = & \left(1+\rho^2\right) \\
\hline
C_2 & \varpi & = & r^2 & ; & \delta & = & 1\\
\hline
\end{array}
\end{equation}
The 16 algebraic systems obtained in this way can be solved for
$T_1,T_2$, and thanks to the relation \eqref{banalata}, each solution
for $T_{1,2}$ yields also a solution for $T_3$. Excluding the
spurious solutions $ T_1 \to 0,\, T_2\to 0$ and $T_1\to 0$, in
the case of the $C_1$ systems we find 2 branches, while
in the case of the $C_2$ systems we find 4 branches. We
know that in each case only one branch is the limit on the
considered curve of the unique positive real solution of the full
system \eqref{brillo}; yet, in the absence of the analytic form of the
solution of \eqref{brillo}, we do not know a priori which is the right branch.
We circumvent this difficulty in the following way. First we remark
that in the case of a form:
\begin{equation}\label{grimaldello}
\Omega \, = \, \tfrac{i}{2\pi} \partial \, \bar{\partial}\, \mathrm{J}(t,\bar{t})
\end{equation}
where $t=\mathfrak{r}\, e^{i \xi}$ is the complex coordinate,
written in polar form, of a $\mathbb{P}^1$; when
$\mathrm{J}(t,\bar{t})$ is a function only of the modulus
$\mathfrak{r}$,
\begin{equation}\label{batuffolo}
\mathrm{J}(t,\bar{t}) \, = \, {J}(\mathfrak{r})
\end{equation}
we have:
\begin{equation}\label{fioredicampo}
\Omega \, = \, \tfrac{1}{4\pi} \,
\left(\frac{d{J}(\mathfrak{r})}{d\mathfrak{r}}+\mathfrak{r}\,\frac{d^2{J}(\mathfrak{r})}{d\mathfrak{r}^2}\right)
\, d\mathfrak{r} \wedge d \xi
\end{equation}
Correspondingly for the integral of $\Omega$ on the supporting space
we find:
\begin{equation}\label{giugurta}
\int_{\mathbb{P}^1} \, \Omega \, = \, \tfrac{1}{2}\,
\int_0^\infty \left(\frac{d{J}(\mathfrak{r})}{d\mathfrak{r}}+\mathfrak{r}\,
\frac{d^2{J}(\mathfrak{r})}{d\mathfrak{r}^2}\right)\,
d\mathfrak{r} \, = \, \left. \mathfrak{r} \,
\frac{d{J}(\mathfrak{r})}{d\mathfrak{r}} \, \right|^{\infty}_{0}
\end{equation}
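As an elementary cross-check of the boundary formula \eqref{giugurta}, one can apply it to the Fubini--Study potential $J(\mathfrak{r})=\log(1+\mathfrak{r}^2)$ on $\mathbb{P}^1$, whose associated $(1,1)$-form is well known to have unit period. The following sympy sketch (an illustration of ours, not part of the original computation) reproduces this value:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
J = sp.log(1 + r**2)  # Fubini-Study potential on P^1: its (1,1)-form has unit period

# boundary term r J'(r) entering eq.(giugurta)
bt = r * sp.diff(J, r)
period = sp.Rational(1, 2) * (sp.limit(bt, r, sp.oo) - sp.limit(bt, r, 0))
# period evaluates to 1
```

The same two-line pattern is applied below to every branch of the reduced moment map equations.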
Utilizing this idea we calculate $\rho \,
\frac{d{J}_{1,2,3}(\rho)}{d\rho}$ in the case when
\begin{equation}\label{Jrho}
\mathrm{J}_{1,2,3}(u,\bar{u}) \, = \,
\log[T_{1,2,3}(\rho)]\,\mid_{C_1}
\end{equation}
is defined in terms of a solution $T_{1,2,3}(\rho)$ of the
moment map equations reduced to $C_1$, and $r \,
\frac{d{J}_{1,2,3}(r)}{dr}$ in the case when
\begin{equation}\label{Jerre}
\mathrm{J}_{1,2,3}(v,\bar{v}) \, = \,
\log[T_{1,2,3}(r)]\,\mid_{C_2}
\end{equation}
is defined in terms of a solution $T_{1,2,3}(r)$ of the moment map
equations reduced to $C_2$. In both cases we used all the
available nontrivial branches.
\paragraph{Cycle $C_1$.} In this case the result is very
simple and uniform. For all chambers and for all branches we always
have:
\begin{equation}\label{garatusco}
\rho \frac{d{J}_{1,3}(\rho)}{d\rho} \, =\, -\frac{1}{ \left(\rho
^2+1\right)} \quad ; \quad \rho \frac{d{J}_{2}(\rho)}{d\rho} \,
=\, 0
\end{equation}
This implies that in all chambers we have the periods:
\begin{equation}\label{risultoC1}
\int_{C_1} \, \omega_1 \, = \, 1 \quad ; \quad
\int_{C_1} \, \omega_2 \, = \, 0 \quad ; \quad \int_{C_1} \, \omega_3 \, = \,1
\end{equation}
which is the result shown in Table \ref{periodico}. This universal
result has, however, an implication for the factor
$\alpha_\zeta$. Utilizing \eqref{risultoC1} in
eq.~\eqref{saccius} we obtain that the volume of the cycle
$C_1$ is given by
\begin{equation}\label{caramellamu}
\left.\pmb{Vol}_1\right|_{\text{Chamber k}} \, = \,
\alpha _k \, \left(\zeta _1-\zeta _2+\zeta _3\right) \, = \,
\alpha_k \, \,\pmb{n}_2\,\cdot\,\zeta
\end{equation}
Hence in order for the volume of the cycle $C_1$ to be
positive in every chamber the factor $\alpha _k$ has to change sign
so as to compensate the negative sign of $
\pmb{n}_2\,\cdot\,\zeta$ when this occurs. Specifically we have:
\begin{equation}\label{alfetta}
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
\text{Ch. I} & \text{Ch. II} & \text{Ch. III} & \text{Ch. IV} & \text{Ch. V} & \text{Ch. VI} &
\text{Ch. VII} & \text{Ch. VIII} \\
\hline
\alpha_1 < 0 & \alpha_2 < 0 & \alpha_3 > 0 & \alpha_4 < 0 & \alpha_5 > 0 & \alpha_6 < 0 & \alpha_7 > 0 & \alpha_8
> 0\\
\hline
\end{array}
\end{equation}
As we have previously mentioned, in the interiors of the chambers we always take
$\vert\alpha\vert=2$, while on some walls we have taken $\vert\alpha\vert=4$.
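The universal result \eqref{garatusco} can also be checked mechanically. A minimal sympy sketch (our own illustration) computes the boundary values entering \eqref{giugurta}; their difference equals $1$, which, combined with the factor $\tfrac{1}{2}$ of \eqref{giugurta} and $\vert\alpha\vert=2$, is consistent with the unit periods of \eqref{risultoC1}:

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
bt13 = -1/(1 + rho**2)  # rho * J'_{1,3}(rho), as in eq.(garatusco)

# boundary difference [rho J']_0^infinity entering eq.(giugurta)
diff13 = sp.limit(bt13, rho, sp.oo) - sp.limit(bt13, rho, 0)
# diff13 = 0 - (-1) = 1
```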
\paragraph{Cycle $C_2$.} In the case of the second cycle,
the result is more complex since, as we already mentioned, we have
four branches. The explicit analytic forms of $r \,
J^\prime_{1,2,3}(r)$ for all the branches and in all the chambers are
displayed in Table \ref{integraluschi}.
\begin{table}
\centering
$$ \begin{array}{|c||l|}
\hline
\hline
\text{Chamber 1} & \begin{array}{l|ccc}
\null & -r \, J^\prime_1(r) & -r \, J^\prime_2(r)&
-r\,J^\prime_3(r)\\
\hline
\text{br. 1} &\frac{4 r^2-1}{4 \sqrt{16 r^4-40 r^2+1}} &
\frac{4 r^2}{\sqrt{16 r^4-40 r^2+1}} & \frac{4 r^2-1}{4 \sqrt{16
r^4-40
r^2+1}} \\
\text{br. 2} & \frac{4 r^2-1}{4 \sqrt{16 r^4-40 r^2+1}} & \frac{4
r^2}{\sqrt{16 r^4-40 r^2+1}} & \frac{4 r^2-1}{4 \sqrt{16 r^4-40
r^2+1}} \\
\text{br. 3} & \frac{1-4 r^2}{4 \sqrt{16 r^4-40 r^2+1}} & -\frac{4
r^2}{\sqrt{16 r^4-40 r^2+1}} & \frac{1-4 r^2}{4 \sqrt{16 r^4-40
r^2+1}} \\
\text{br. 4} & \frac{1-4 r^2}{4 \sqrt{16 r^4-40 r^2+1}} & -\frac{4
r^2}{\sqrt{16 r^4-40 r^2+1}} & \frac{1-4 r^2}{4 \sqrt{16 r^4-40
r^2+1}} \\
\end{array} \\
\hline \hline
\text{Chamber 2} & \begin{array}{l|ccc}
\text{br. 1} & \frac{5-8 r^2}{4 \sqrt{64 r^4+32 r^2+25}} & -\frac{8
r^2}{\sqrt{64 r^4+32 r^2+25}} & \frac{5-8 r^2}{4 \sqrt{64 r^4+32
r^2+25}} \\
\text{br. 2} &\frac{5-8 r^2}{4 \sqrt{64 r^4+32 r^2+25}} & -\frac{8 r^2}{\sqrt{64 r^4+32 r^2+25}}
& \frac{5-8 r^2}{4 \sqrt{64 r^4+32
r^2+25}} \\
\text{br. 3} & \frac{8 r^2-5}{4 \sqrt{64 r^4+32 r^2+25}} & \frac{8
r^2}{\sqrt{64 r^4+32 r^2+25}} & \frac{8 r^2-5}{4 \sqrt{64 r^4+32
r^2+25}} \\
\text{br. 4} & \frac{8 r^2-5}{4 \sqrt{64 r^4+32 r^2+25}} & \frac{8
r^2}{\sqrt{64 r^4+32 r^2+25}} & \frac{8 r^2-5}{4 \sqrt{64 r^4+32
r^2+25}} \\
\end{array} \\
\hline
\hline
\text{Chamber 3} & \begin{array}{l|ccc}
\text{br. 1}&\frac{2 r^2+1}{4 \sqrt{4 r^4-16 r^2+1}} & \frac{2 r^2}{\sqrt{4 r^4-16 r^2+1}} &
\frac{2 r^2+1}{4 \sqrt{4 r^4-16
r^2+1}} \\
\text{br. 2}& \frac{2 r^2+1}{4 \sqrt{4 r^4-16 r^2+1}} & \frac{2
r^2}{\sqrt{4 r^4-16 r^2+1}} & \frac{2 r^2+1}{4 \sqrt{4 r^4-16
r^2+1}} \\
\text{br. 3}& \frac{-2 r^2-1}{4 \sqrt{4 r^4-16 r^2+1}} & -\frac{2
r^2}{\sqrt{4 r^4-16 r^2+1}} & \frac{-2 r^2-1}{4 \sqrt{4 r^4-16
r^2+1}} \\
\text{br. 4}& \frac{-2 r^2-1}{4 \sqrt{4 r^4-16 r^2+1}} & -\frac{2
r^2}{\sqrt{4 r^4-16 r^2+1}} & \frac{-2 r^2-1}{4 \sqrt{4 r^4-16
r^2+1}} \\
\end{array}\\
\hline
\hline
\text{Chamber 4} & \begin{array}{l|ccc}
\text{br. 1}& \frac{7-8 r^2}{4 \sqrt{64 r^4+96 r^2+49}} & -\frac{8
r^2}{\sqrt{64 r^4+96 r^2+49}} & \frac{7-8 r^2}{4 \sqrt{64 r^4+96
r^2+49}} \\
\text{br. 2}& \frac{7-8 r^2}{4 \sqrt{64 r^4+96 r^2+49}} & -\frac{8
r^2}{\sqrt{64 r^4+96 r^2+49}} & \frac{7-8 r^2}{4 \sqrt{64 r^4+96
r^2+49}} \\
\text{br. 3}& \frac{8 r^2-7}{4 \sqrt{64 r^4+96 r^2+49}} & \frac{8
r^2}{\sqrt{64 r^4+96 r^2+49}} & \frac{8 r^2-7}{4 \sqrt{64 r^4+96
r^2+49}} \\
\text{br. 4}& \frac{8 r^2-7}{4 \sqrt{64 r^4+96 r^2+49}} & \frac{8
r^2}{\sqrt{64 r^4+96 r^2+49}} & \frac{8 r^2-7}{4 \sqrt{64 r^4+96
r^2+49}} \\
\end{array}\\
\hline
\hline
\text{Chamber 5} & \begin{array}{l|ccc}
\text{br. 1}& \frac{4 r^2+3}{4 \sqrt{16 r^4-56 r^2+9}} & \frac{4
r^2}{\sqrt{16 r^4-56 r^2+9}} & \frac{4 r^2+3}{4 \sqrt{16 r^4-56
r^2+9}} \\
\text{br. 2}& \frac{4 r^2+3}{4 \sqrt{16 r^4-56 r^2+9}} & \frac{4
r^2}{\sqrt{16 r^4-56 r^2+9}} & \frac{4 r^2+3}{4 \sqrt{16 r^4-56
r^2+9}} \\
\text{br. 3}& \frac{-4 r^2-3}{4 \sqrt{16 r^4-56 r^2+9}} & -\frac{4
r^2}{\sqrt{16 r^4-56 r^2+9}} & \frac{-4 r^2-3}{4 \sqrt{16 r^4-56
r^2+9}} \\
\text{br. 4}& \frac{-4 r^2-3}{4 \sqrt{16 r^4-56 r^2+9}} & -\frac{4
r^2}{\sqrt{16 r^4-56 r^2+9}} & \frac{-4 r^2-3}{4 \sqrt{16 r^4-56
r^2+9}} \\
\end{array} \\
\hline
\hline
\text{Chamber 6} & \begin{array}{l|ccc}
\text{br. 1}& \frac{5-4 r^2}{4 \sqrt{16 r^4+72 r^2+25}} & -\frac{4
r^2}{\sqrt{16 r^4+72 r^2+25}} & \frac{5-4 r^2}{4 \sqrt{16 r^4+72
r^2+25}} \\
\text{br. 2}& \frac{5-4 r^2}{4 \sqrt{16 r^4+72 r^2+25}} & -\frac{4
r^2}{\sqrt{16 r^4+72 r^2+25}} & \frac{5-4 r^2}{4 \sqrt{16 r^4+72
r^2+25}} \\
\text{br. 3}& \frac{4 r^2-5}{4 \sqrt{16 r^4+72 r^2+25}} & \frac{4
r^2}{\sqrt{16 r^4+72 r^2+25}} & \frac{4 r^2-5}{4 \sqrt{16 r^4+72
r^2+25}} \\
\text{br. 4}& \frac{4 r^2-5}{4 \sqrt{16 r^4+72 r^2+25}} & \frac{4
r^2}{\sqrt{16 r^4+72 r^2+25}} & \frac{4 r^2-5}{4 \sqrt{16 r^4+72
r^2+25}} \\
\end{array} \\
\hline
\hline
\text{Chamber 7} & \begin{array}{l|ccc}
\text{br. 1}& \frac{-2 r^2-1}{4 \sqrt{4 r^4-8 r^2+1}} & \quad
-\frac{2 r^2}{\sqrt{4 r^4-8 r^2+1}} \quad & \frac{-2 r^2-1}{4
\sqrt{4 r^4-8
r^2+1}} \\
\text{br. 2}& \frac{-2 r^2-1}{4 \sqrt{4 r^4-8 r^2+1}} & -\frac{2
r^2}{\sqrt{4 r^4-8 r^2+1}} & \frac{-2 r^2-1}{4 \sqrt{4 r^4-8
r^2+1}} \\
\text{br. 3}& \frac{2 r^2+1}{4 \sqrt{4 r^4-8 r^2+1}} & \frac{2
r^2}{\sqrt{4 r^4-8 r^2+1}} & \frac{2 r^2+1}{4 \sqrt{4 r^4-8 r^2+1}}
\\
\text{br. 4}& \frac{2 r^2+1}{4 \sqrt{4 r^4-8 r^2+1}} & \frac{2
r^2}{\sqrt{4 r^4-8 r^2+1}} & \frac{2 r^2+1}{4 \sqrt{4 r^4-8 r^2+1}}
\\
\end{array} \\
\hline
\hline
\text{Chamber 8} & \begin{array}{l|ccc}
\text{br. 1}& \quad\frac{-r^2-1}{4 \sqrt{r^4+1}}\quad &\quad\quad
-\frac{r^2}{\sqrt{r^4+1}}\quad\quad &
\quad \frac{-r^2-1}{4 \sqrt{r^4+1}}\quad \\
\text{br. 2}& \frac{-r^2-1}{4 \sqrt{r^4+1}} & -\frac{r^2}{\sqrt{r^4+1}} & \frac{-r^2-1}{4 \sqrt{r^4+1}} \\
\text{br. 3}& \frac{r^2+1}{4 \sqrt{r^4+1}} & \frac{r^2}{\sqrt{r^4+1}} & \frac{r^2+1}{4 \sqrt{r^4+1}} \\
\text{br. 4}& \frac{r^2+1}{4 \sqrt{r^4+1}} & \frac{r^2}{\sqrt{r^4+1}} & \frac{r^2+1}{4 \sqrt{r^4+1}} \\
\end{array} \\
\hline
\hline
\end{array}
$$
\caption{The indefinite integrals for the calculation of periods of the tautological Chern Classes along
the cycle $C_2$.}\label{integraluschi}
\end{table}
Utilizing eq. \eqref{giugurta} and the results of Table
\ref{integraluschi} we obtain the candidate values of the periods
that are displayed in Table \ref{mamertino}.
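As an illustration of this step, the boundary values entering \eqref{giugurta} for the Chamber 1, branch 1 entries of Table \ref{integraluschi} can be obtained with a few lines of sympy (a sketch of the procedure, not the original code):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
s = sp.sqrt(16*r**4 - 40*r**2 + 1)
mJ1 = (4*r**2 - 1)/(4*s)   # -r J'_1(r), Chamber 1, branch 1
mJ2 = 4*r**2/s             # -r J'_2(r), Chamber 1, branch 1

lim1 = (sp.limit(mJ1, r, 0), sp.limit(mJ1, r, sp.oo))  # (-1/4, 1/4)
lim2 = (sp.limit(mJ2, r, 0), sp.limit(mJ2, r, sp.oo))  # (0, 1)
```

The candidate periods then follow from these boundary differences once the overall normalization factors discussed above are taken into account.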
\begin{table}\small
\renewcommand{\arraystretch}{1.20}
\centering
$$ \begin{array}{|c||l|}
\hline
\hline
\text{Chamber 1} & \begin{array}{l|ccc}
\null & \int_{C_2}\, \omega_1 & \int_{C_2}\,
\omega_2& \int_{C_2}\, \omega_3\\
\hline
\text{branch}_1 & -1 & -2 & -1 \\
\text{branch}_2 & -1 & -2 & -1 \\
\text{branch}_3 & 1 & 2 & 1 \\
\text{branch}_4 & 1 & 2 & 1 \\
\end{array}\\
\hline \hline
\text{Chamber 2} & \begin{array}{l|ccc}
\null & \int_{C_2}\, \omega_1 & \int_{C_2}\,
\omega_2& \int_{C_2}\, \omega_3\\
\hline
\text{branch}_1 & 1 & 2 & 1 \\
\text{branch}_2 & 1 & 2 & 1 \\
\text{branch}_3 & -1 & -2 & -1 \\
\text{branch}_4 & -1 & -2 & -1 \\
\end{array} \\
\hline
\hline
\text{Chamber 3} & \begin{array}{l|ccc}
\null & \int_{C_2}\, \omega_1 & \int_{C_2}\,
\omega_2& \int_{C_2}\, \omega_3\\
\hline
\text{branch}_1 & 0 & -2 & 0 \\
\text{branch}_2 & 0 & -2& 0 \\
\text{branch}_3 & 0 & 2 & 0 \\
\text{branch}_4 & 0 & 2 & 0 \\
\end{array}\\
\hline
\hline
\text{Chamber 4} & \begin{array}{l|ccc}
\null & \int_{C_2}\, \omega_1 & \int_{C_2}\,
\omega_2& \int_{C_2}\, \omega_3\\
\hline
\text{branch}_1 & 1 & 2 & 1 \\
\text{branch}_2 & 1 & 2 & 1 \\
\text{branch}_3 & -1 & -2 & -1 \\
\text{branch}_4 & -1 & -2 & -1\\
\end{array} \\
\hline
\hline
\text{Chamber 5} & \begin{array}{l|ccc}
\null & \int_{C_2}\, \omega_1 & \int_{C_2}\,
\omega_2& \int_{C_2}\, \omega_3\\
\hline
\text{branch}_1 & 0 & -2 & 0 \\
\text{branch}_2 & 0 & -2 & 0 \\
\text{branch}_3 & 0 & 2 & 0 \\
\text{branch}_4 & 0 & 2 & 0 \\
\end{array} \\
\hline
\hline
\text{Chamber 6} & \begin{array}{l|ccc}
\null & \int_{C_2}\, \omega_1 & \int_{C_2}\,
\omega_2& \int_{C_2}\, \omega_3\\
\hline
\text{branch}_1 & 1 & 2 & 1 \\
\text{branch}_2 & 1 & 2 & 1 \\
\text{branch}_3 & -1 & -2 & -1 \\
\text{branch}_4 & -1& -2 & -1\\
\end{array}\\
\hline
\hline
\text{Chamber 7} & \begin{array}{l|ccc}
\null & \int_{C_2}\, \omega_1 & \int_{C_2}\,
\omega_2& \int_{C_2}\, \omega_3\\
\hline
\text{branch}_1 & 0 & 2 & 0 \\
\text{branch}_2 & 0 & 2 & 0 \\
\text{branch}_3 & 0 & -2& 0 \\
\text{branch}_4 & 0 & -2 & 0 \\
\end{array} \\
\hline
\hline
\text{Chamber 8} & \begin{array}{l|ccc}
\null & \int_{C_2}\, \omega_1 & \int_{C_2}\,
\omega_2& \int_{C_2}\, \omega_3\\
\hline
\text{branch}_1 & 0 & 2 & 0 \\
\text{branch}_2 & 0 & 2& 0 \\
\text{branch}_3 & 0 & -2 & 0 \\
\text{branch}_4 & 0 & -2 & 0 \\
\end{array} \\
\hline
\hline
\end{array}
$$
\caption{The candidate period integrals of the tautological Chern classes along
the cycle $C_2$.}\label{mamertino}
\end{table}
As one sees from that table, in every chamber there is only the
ambiguity of an overall sign. In view of the result \eqref{alfetta}
for the factor $\alpha_k$ relative to the various chambers, in each
of them we choose the overall sign for the $C_2$ periods
that leads to a positive value for the volume of that
cycle, according to equation \eqref{saccius}. Performing such a
choice, one finally arrives at the result displayed in Table
\ref{periodico}.
\subsection{Periods on the walls and on the edges}
Utilizing the same algorithm as in the case of the interior of the
chambers, we have calculated the periods also on the walls and on the
edges obtaining the results displayed in Tables \ref{muraria} and
\ref{spigolosa}. We skip the details for the type 0 walls
$\mathcal{W}_{1,3}$, since these calculations are identical with
those presented in the previous subsection, simply with different
values of the $\zeta$ parameters. For the case of the wall
$\mathcal{W}_2$ the detailed calculation of the Kamp\'{e} case
presented in section \ref{Y3Kallero} produces the periods displayed in Table \ref{muraria}. Additional care is
instead required while treating the case of the type 0 wall
$\mathcal{W}_0$ and the Cardano edges.
\subsubsection{The type 0 wall $\mathcal{W}_0$}
If we choose:
\begin{equation}\label{cramenio}
\zeta \, = \, \left\{p,0,q\right\} \quad ; \quad p,q\in \mathbb{R}
\end{equation}
we are, by definition, on the wall $\mathcal{W}_0$. The main
property of this wall is that the moment map algebraic system
\eqref{sistemico} implies
\begin{equation}\label{xdueuno}
X_2 \, = \, 1\,.
\end{equation}
Hence the rescaling ansatz \eqref{3subia} is not appropriate on this
wall for the reduction of the moment map equations to the
exceptional compact divisor $D_c$. Another rescaling
scheme is required.
Choosing \eqref{cramenio} and implementing eq.~\eqref{xdueuno} the
third of eqs.~\eqref{sistemico} is automatically satisfied and we
are left with
\begin{equation}\label{carmencitaabitaqui}
\left(
\begin{array}{c}
X_3 X_1 \left(-p+q+X_3^2 \sqrt[4]{\mho } \sqrt{\delta \varpi }\right)+X_3 X_1^3 \sqrt[4]{\mho } \left(-\sqrt{\delta
\varpi }\right)-2 X_1^2 \sqrt{\mho }+2 X_3^2 \sqrt{\mho } \\
2 \sqrt[4]{\mho } \sqrt{\delta \varpi }-X_3 X_1 \left(p+q+X_3^2 \sqrt[4]{\mho } \sqrt{\delta \varpi }\right)+X_3
X_1^3 \sqrt[4]{\mho } \left(-\sqrt{\delta \varpi }\right) \\
0 \\
\end{array}
\right) \, = \, \left(
\begin{array}{c}
0 \\
0 \\
0 \\
\end{array}
\right)
\end{equation}
where we used the same real variables $\delta,\varpi,\mho$
utilized in previous sections. Given eq.~\eqref{carmencitaabitaqui},
an appropriate rescaling that assures a finite limit $\mho\to 0$ is
provided by the one below:
\begin{equation}\label{iconica}
X_1 \, = \, T_1 \mho ^{3/8} \quad , \quad X_3\, = \, \frac{T_3}{\sqrt[8]{\mho
}}
\end{equation}
Implementing the above substitution in the system
\eqref{carmencitaabitaqui} and factoring out a common factor
$\mho^{1/4}$ we can perform the limit $\mho \to 0$ and we get:
\begin{equation}\label{risculando}
\left(
\begin{array}{c}
T_3 \left(T_1 \left(-p+q+T_3^2 \sqrt{\delta \varpi }\right)+2 T_3\right) \\
2 \sqrt{\delta \varpi }-T_1 T_3 \left(p+q+T_3^2 \sqrt{\delta \varpi }\right) \\
0 \\
\end{array}
\right) \, = \, \left(
\begin{array}{c}
0 \\
0 \\
0 \\
\end{array}
\right)
\end{equation}
The limiting system \eqref{risculando} is solvable by radicals and
it has four branches that can be treated exactly as in the case of
the previous section, calculating the derivatives with respect to the
$\rho$ and $r$ variables that lead to the evaluation of the K\"ahler
form and of its integrals. An additional subtlety in
this case is that one has to calculate the derivatives first and
only then set either $r$ or $\rho$ to zero. Doing the operations in the
opposite order one meets undefined limits for either $T_1$ or $T_3$.
Just as above we have to choose the right branch in order to get a
positive volume for the two homology cycles. Through these steps one
arrives at the result displayed for $\mathcal{W}_0$ in Table
\ref{muraria}.
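For illustration, the four branches can be exhibited explicitly at a sample point of the wall; in the sympy sketch below the values $p=1$, $q=2$, $\delta=\varpi=1$ are hypothetical choices made only to show the mechanics:

```python
import sympy as sp

T1, T3 = sp.symbols('T1 T3')
p, q, s = 1, 2, 1   # sample point on W_0 (hypothetical); s = sqrt(delta*varpi)

eq1 = T3*(T1*(-p + q + s*T3**2) + 2*T3)   # first equation of (risculando)
eq2 = 2*s - T1*T3*(p + q + s*T3**2)       # second equation of (risculando)

# T3 = 0 is inconsistent with eq2, so only the four nontrivial branches survive
sols = sp.solve([eq1, eq2], [T1, T3], dict=True)
```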
\subsubsection{Periods of the Cardano manifold}
The case of the Cardano manifold was treated in section
\ref{kallusquidam}. In particular it was shown that the K\"ahler
potential for the two instances of this manifold is given by
eq.~\eqref{croccus}, where $X=\mathfrak{X}(\varpi,\mho)$ is the 4th
root of the quartic equation \eqref{baldoppo}, explicitly written
down in eq.~\eqref{caponatasiciliana}. At the same time the triplet
of (1,1)-forms $\omega_I$ is defined as
\begin{eqnarray}
\begin{array}{cccccccc}
\zeta=\left\{1,1,0\right\} & \omega_1=0 & \omega_2 = \Omega & \omega_3= \Omega & ; &\Omega & = &
\frac{i}{2\pi} \, \partial \bar{\partial} \,\mathfrak{X}(\varpi,\mho)^2 \\
\zeta=\left\{0,1,1\right\} & \omega_1=\Omega & \omega_2 = \Omega & \omega_3= 0 & ; & \Omega & = &
\frac{i}{2\pi} \, \partial \bar{\partial} \,\mathfrak{X}(\varpi,\mho)^2
\end{array}
\end{eqnarray}
Developing the function $\mathfrak{X}(\varpi,\mho)$ in a power series
in $\sqrt{\mho}$ we find
\begin{equation}\label{sviluserio}
\mathfrak{X}(\varpi,\mho) \, = \, \frac{1}{2} \left(\sqrt{\varpi }+\sqrt{\varpi +4}\right)
+\frac{1}{12} \left(-\frac{6 \left(\varpi ^2+3 \varpi
\right)}{\sqrt{\varpi +4}}-6 \sqrt{\varpi } (\varpi +1)\right) \sqrt{\mho } +\mathcal{O}\left(\mho\right)\,.
\end{equation}
In this case the reduction of the forms to the exceptional compact
divisor can be performed in complete analytic safety by
restricting $\mathfrak{X}(\varpi,\mho)$ to its zeroth order term in
$\mho$. In this way we obtain the precise expression of the
(1,1)-form $\Omega$ whose integrals are then easily calculated. From
eq.~\eqref{sviluserio} and from the definition of the variable
$\varpi$ we arrive at the conclusion that
\begin{eqnarray}\label{Omegone}
\Omega & = & \frac{i}{2\pi} \, \partial \bar{\partial}
\,\mathfrak{Q}(\rho,r) \nonumber\\
\mathfrak{Q}(\rho,r)& = & \log \left(\frac{\left(\rho ^2+1\right)^2
r^2+\sqrt{\left(\rho ^2+1\right)^2 r^2} \sqrt{\left(\rho
^2+1\right)^2
r^2+4}+4}{2 \sqrt{\left(\rho ^2+1\right)^2 r^2+4}}\right)
\end{eqnarray}
From the above result it immediately follows that:
\begin{equation}\label{integomegone}
\left. \Omega \right|_{C_1} \, = \, 0 \quad ; \quad
\int_{C_2} \Omega \, = \, -1\,.
\end{equation}
This yields the values of the periods as displayed in
Table \ref{spigolosa}.
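Both statements in \eqref{integomegone} can be cross-checked directly from \eqref{Omegone} with sympy (an illustrative sketch; only the boundary data entering \eqref{giugurta} are computed, the overall normalization being fixed by the conventions discussed above):

```python
import sympy as sp

rho, r = sp.symbols('rho r', positive=True)
w = (1 + rho**2)**2 * r**2
Q = sp.log((w + sp.sqrt(w)*sp.sqrt(w + 4) + 4)/(2*sp.sqrt(w + 4)))  # eq.(Omegone)

# restriction to C_1 (r = 0): Q vanishes identically, hence Omega|_{C_1} = 0
Q_on_C1 = sp.simplify(Q.subs(r, 0))

# boundary values of r dQ/dr on C_2 (rho = 0), as in eq.(giugurta)
bt = r*sp.diff(Q.subs(rho, 0), r)
lims = (sp.limit(bt, r, 0), sp.limit(bt, r, sp.oo))   # (0, 1)
```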
It appears that the Cardano manifold is another realization of the
$Y_3$ degeneration of the full resolution $Y$ just as the Kamp\'{e}
manifold to be discussed in section \ref{Y3Kallero}, see in particular subsection
\ref{interpretazia}. The same arguments presented there apply here
as well and lead to the same conclusion. Hence also the Cardano
manifold is a line bundle over the singular weighted projective
space $\mathbb{P}[1,1,2]$.
\subsection{Summary of the chamber and wall structure}
We have 8 chambers and 4 walls.
\begin{itemize} \item All interiors of the 8 chambers correspond
to the full resolution $Y$. This is consistent with Theorem 1.1 in \cite{CrawIshii}, which states that there exists
a chamber where the variety corresponding to the generic point is a resolution of singularities, and with
the fact that according to toric geometry there appears to be only one full resolution of singularities.
\item Among the four walls $\mathcal W_i$, only $\mathcal W_2$ is of type III, while the others are type 0.
The generic point on $\mathcal W_2$ corresponds to the partial resolution $Y_3$ (note that $Y_3$ is obtained
from $Y$ by collapsing the noncompact exceptional divisor to a line).
\item The triple intersection $\mathcal W_0\cap\mathcal W_1 \cap \mathcal W_3$
is an edge whose generic point corresponds to
the smooth variety $Y_{EH}$. This is quite interesting as $Y_{EH}$ is not a resolution of singularities
of ${\mathbb C}^3/{\mathbb Z}_4$; a morphism $Y_{EH}\to {\mathbb C}^3/{\mathbb Z}_4$ exists, but it is 2:1, as we discussed in Section
\ref{relation}.
\end{itemize}
\section{The resolved variety $Y$}
\label{Ysezia} As stressed in the previous Section about the chamber
structure, for all points of the $\zeta$ space that do not lie
on walls, the topological and algebraic character of the resolution
obtained from the K\"ahler quotient \`{a} la Kronheimer is always the
same, namely the variety we named $Y$. Hence, in order to describe
the K\"ahler Geometry of the resolved variety $Y
\rightarrow \mathbb{C}^3/\mathbb{Z}_{4 }$, we can utilize any preferred convenient point in $\zeta$ space
that avoids the walls. Furthermore we utilize the crucial
information that $Y$ is the total space of the canonical line bundle over the second Hirzebruch surface $\mathbb{F}_2$. \\
The strategy that we adopt to find the explicit form of the K\"ahler geometry of the variety $Y$ is based on the following steps:
\begin{enumerate}
\item We choose the realization of the $Y$ space provided by the Kronheimer construction
in the plane $\zeta _1 = \zeta _3 =a$ and $\zeta _2 =b \neq
2a$. In this case, as we know, two of the tautological fiber bundles
are identified, so that $X_1=X_3$, and the moment
map equations are simpler.
\item We reduce the moment map equations to the compact exceptional divisor (the second Hirzebruch surface)
and we obtain a system solvable by radicals whose explicit solution
is particularly simple.
\item We obtain the complete solution for the full variety starting from the solution
on the Hirzebruch surface and expressing the required fiber metrics
$T_{1,2}$ in terms of a unique function of two variables that is
defined as a particular root of a sextic equation.
\end{enumerate}
\subsection{The two addends of the K\"ahler potential}
First we write the restriction to the hypersurface
$\mathcal{N}_{\zeta }$ of the K\"ahler potential of the flat
variety $\text{Hom}(Q\otimes R,R)^{{\mathbb Z}_4}$. It is the following
object.
\begin{equation}
\mathcal{K}_0=\frac{U \left(X_2^2+1\right)
\left(X_1^2+X_3^2\right)+\Sigma \left(X_2^3+X_2+X_1 X_3
\left(X_1^2+X_3^2\right)\right)}{X_1 X_2 X_3} \label{Kappazero}
\end{equation}
As we know from eq.~\eqref{caramboletta} the final K\"ahler potential
of the resolved variety is of the form
\begin{equation}
\mathcal{K}=\mathcal{K}_0\,+\,\mathcal{K}_{\log }
\end{equation}
where
\begin{equation}
\mathcal{K}_{\log }\, \equiv \, \sum _{I=1}^3 \, \sum _{J=1}^3\zeta
_I \, \mathfrak{C}^{IJ} \, \log\left[X_J\right]^{\alpha_\zeta}\,
\label{kappalogatto}
\end{equation}
is the logarithmic part of the K\"ahler potential that contains the
information on the tautological bundle Chern classes. The matrix
$\mathfrak{C}^{IJ}$ is defined by
\begin{equation}
\mathfrak{C}^{IJ}= \text{Tr}\left(\tau ^I\tau ^J\right) \, =
\, \left(
\begin{array}{ccc}
2 & -1 & 0 \\
-1 & 2 & -1 \\
0 & -1 & 2 \\
\end{array}
\right) \, = \, \text{Cartan matrix of $\mathfrak{a}_3$}
\end{equation}
where $\tau ^I$ are the generators of the $\mathrm{U}(1)\times
\mathrm{U}(1)\times \mathrm{U}(1)$ gauge group $\mathcal{F}_{\mathbb{Z}_4}$ in the $4\times 4$
dimensional representation corresponding to the regular
representation of $\mathbb{Z}_4$ advocated by the Kronheimer
construction.
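For concreteness, with the standard choice of diagonal generators $\tau^I = E_{I,I} - E_{I+1,I+1}$ in the regular representation (a convention assumed here purely for illustration; the construction fixes the $\tau^I$ only up to such choices) the trace pairing indeed reproduces the $\mathfrak{a}_3$ Cartan matrix:

```python
import numpy as np

# assumed diagonal generators tau^I = E_{I,I} - E_{I+1,I+1} in the regular rep of Z_4
tau = [np.diag(v) for v in ([1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1])]

# trace pairing C^{IJ} = Tr(tau^I tau^J)
C = np.array([[np.trace(a @ b) for b in tau] for a in tau])
# C equals the Cartan matrix of a_3
```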
Next, according to the general strategy outlined at the beginning
of this Section we extract the K\"ahler geometry induced on the
compact exceptional divisor by the Kronheimer--like crepant
resolution of the orbifold singularity.
Explicitly, we derive the K\"ahler potential of the Hirzebruch surface
from the K\"ahler quotient construction when
\begin{equation}
\zeta _1=\zeta _3=a \quad ; \quad \zeta _2 =b \neq 2a.
\end{equation}
\subsubsection{Reduction to the compact exceptional
divisor of the moment map equations} The final form of the moment
map equations in the coordinates \eqref{cavedanorosso} was given
above in eq.~\eqref{brillo}. After some experiments we found that a
convenient point in chamber number 2
is
\begin{equation}
\zeta _1 =\zeta _3=\frac{1}{2} \quad ; \quad \zeta _2=2\,.
\end{equation}
If we choose such values for the moment map levels and furthermore
if we set $T_3 = T_1,$ as indeed we must do in this case, the
system \eqref{brillo} reduces to
\begin{equation}
\left(
\begin{array}{c}
0 \\
-2 \delta T_1^4+T_1^2 T_2+\delta \sigma ^2 T_2 \left(1+\lambda ^4 T_2^2\right) \\
-2 T_1^2 T_2-\left(2 T_1^2+\delta \sigma ^2 T_2\right) \left(-1+\lambda ^4 T_2^2\right) \\
\end{array}
\right)=\left(
\begin{array}{c}
0 \\
0 \\
0 \\
\end{array}
\right)\,.
\end{equation}
\subsection{Exact solution of the moment map equations} Let us
consider the following sixth-order algebraic equation for an unknown
$F$:
\begin{eqnarray}
&&\mathfrak{P}(F) \equiv 2+F (-5-\varpi )+F^2 (3-2 \mho )+2 F^6 \mho
^3+F^3 (2 \mho +2 \varpi \mho )\nonumber\\
&&+F^4 \left(\mho -2 \mho ^2\right)+F^5 \left(3 \mho ^2-\varpi \mho
^2\right) = 0 \label{prillini}
\end{eqnarray}
The coefficients of the above equation are written in terms of
the two quantities:
\begin{equation}\label{varpimho}
\varpi = \delta ^2 \sigma ^2 = \left(1+|u|^2\right)^2
|v|^2 \quad ; \quad \mho = \lambda ^4 = |w|^2
\end{equation}
which depend on the toric coordinates $u,v,w$ of the variety $Y$.
Since $w=0$ is the equation of the compact component of the
exceptional divisor, which is isomorphic to the second Hirzebruch
surface $\mathbb{F}_2$, it follows that $u,v$ are coordinates of
$\mathbb{F}_2$. Furthermore taking into account the fibered
structure of $\mathbb{F}_2$$\longrightarrow $ $\mathbb{P}_1$ which
is a $\mathbb{P}_1$-bundle over $\mathbb{P}_1$, $u$ is a coordinate
for the $\mathbb{P}_1$ basis while $v$ is a coordinate for the
$\mathbb{P}_1$ fiber.
\par
Equation \eqref{prillini} has six roots that implicitly define as many
functions of $\varpi $, $\mho $. The sextic polynomial
$\mathfrak{P}(F)$ has the important property that for $\varpi >0 $
and $\mho >0 $ there are always two real roots and two pairs of
complex conjugate roots. Hence the largest real root provides a unique
identification of one of the six roots. We unambiguously define a
function $\mathfrak{F}(\varpi ,\mho )$ by saying that it is the
largest real root of equation \eqref{prillini}:
\begin{equation}
\mathfrak{F}(\varpi ,\mho ) \, \equiv \, \text{largest real root
of } \, \mathfrak{P}(F)\,.
\end{equation}
\par
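Numerically, the function $\mathfrak{F}(\varpi,\mho)$ is easy to evaluate; the following numpy sketch (our illustration) extracts the largest real root of $\mathfrak{P}(F)$ and, for small $\mho$, reproduces the zeroth-order term $\mathfrak{F}_0(\varpi)$ of the expansion \eqref{Fgotsvilup} given below:

```python
import numpy as np

def F_gothic(varpi, mho):
    # coefficients of P(F), eq.(prillini), in descending powers of F
    c = [2*mho**3,
         mho**2*(3 - varpi),
         mho*(1 - 2*mho),
         2*mho*(1 + varpi),
         3 - 2*mho,
         -(5 + varpi),
         2]
    roots = np.roots(c)
    real = roots[np.abs(roots.imag) < 1e-6].real
    return real.max()      # largest real root

def F0(varpi):
    # zeroth-order term of the mho-expansion, eq.(Fgotsvilup)
    return (5 + varpi + np.sqrt(1 + 10*varpi + varpi**2))/6
```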
The exact solution of the moment map equations can be written as
follows: {
\begin{eqnarray}
T_1&=& \sqrt{\frac{\mathfrak{F}}{2 \delta }} \left(-\varpi
+\mathfrak{F}^2 (3+2 \varpi ) \mho +2 \mathfrak{F}^5 \mho ^3-2
(1+\mho )+\mathfrak{F}^4 \mho ^2 (3-\varpi +2 \mho
)\right.\nonumber\\
&&\left.+\mathfrak{F}^3 \mho (1+\mho -\varpi \mho )+\mathfrak{F}
(3+\mho +\varpi \mho
)\right)^{\frac{1}{2}}\nonumber\\
T_2 &=&\mathfrak{F} \label{esatamence}
\end{eqnarray}
} where by $\mathfrak{F}$ we obviously mean $\mathfrak{F}( \varpi ,
\mho )$.
\subsubsection{Properties of the function $\mathfrak{F}(\varpi,\mho)$}
The function $\mathfrak{F}(\varpi ,\mho )$ is
well defined and it can be developed in a power series in the
parameter $\mho $:
\begin{equation}
\mathfrak{F}(\varpi ,\mho )\, = \, \sum _{n=0}^{\infty }
\mathfrak{F}_n(\varpi )\, \mho ^n
\end{equation}
We display the first two terms of this development:
\begin{eqnarray}
\mathfrak{F}_0(\varpi )&=&\frac{1}{6} \left(5+\varpi +\sqrt{1+10
\varpi +\varpi ^2}\right)\nonumber\\
\mathfrak{F}_1(\varpi )&=&-\left(\left(\left(5+\varpi
+\sqrt{1+\varpi (10+\varpi )}\right)^2 \left(7+11 \sqrt{1+\varpi
(10+\varpi )}\right.\right.\right.\nonumber\\
&&\left.\left.\left. +\varpi \left(46+7 \varpi +7 \sqrt{1+\varpi
(10+\varpi )}\right)\right)\right)/\left(648 \sqrt{1+\varpi
(10+\varpi )}\right)\right) \label{Fgotsvilup}
\end{eqnarray}
The function $\mathfrak{F}(\varpi , \mho)$ can also be plotted and
its behavior is displayed in fig. \ref{effegotica}.
\begin{figure}
\vskip -1.5 cm
\begin{center}
\includegraphics[height=8cm]{Fgothic.png}
\caption{\label{effegotica}{ Plot of the function
$\mathfrak{F}(\varpi , \mho)$ }}
\end{center}
\end{figure}
\subsection{Induced K\"ahler geometry of the exceptional divisor $D_c \sim
\mathbb{F}_2$} \label{inducedF2}
Next we study the K\"ahler geometry of the compact
exceptional divisor induced by the Kronheimer construction of the
K\"ahler geometry of $Y$.
\subsubsection{Solution of the moment map equations reduced to the
compact exceptional divisor}
If we perform the limit $\lambda \rightarrow 0$ in the moment map
equations \eqref{brillo} we reduce them to the compact exceptional divisor, namely
to the second Hirzebruch surface:
\begin{equation}
\left(
\begin{array}{c}
0 \\
-2 \delta T_1^4+\left(\delta \sigma ^2+T_1^2\right) T_2 \\
\delta \sigma ^2 T_2-T_1^2 \left(-2+2 T_2\right) \\
\end{array}
\right)=\left(
\begin{array}{c}
0 \\
0 \\
0 \\
\end{array}
\right) \label{sistema12}
\end{equation}
The system \eqref{sistema12} has five different solutions, but the
only one for which both $T_1$ and $T_2$ are real and positive is the
following one:
\begin{eqnarray}
T_1 &=&\frac{1}{2} \sqrt{\frac{1+\delta ^2 \sigma ^2+\sqrt{1+10
\delta ^2 \sigma ^2+\delta ^4 \sigma ^4} }{\delta }}\nonumber\\
T_2 &=&\frac{1}{6} \left(5+\delta ^2 \sigma ^2+\sqrt{1+10 \delta ^2
\sigma ^2+\delta ^4 \sigma ^4}\right)
\end{eqnarray}
In terms of the invariants this reads as follows:
\begin{eqnarray}
T_1|_{\mathbb{F}_2} &=&\frac{1}{2} \sqrt{\frac{1+\varpi +\sqrt{1+10
\varpi +\varpi ^2} }{\delta }}\nonumber\\
T_2|_{\mathbb{F}_2}&=&\frac{1}{6} \left(5+\varpi +\sqrt{1+10 \varpi
+\varpi ^2}\right) = \mathfrak{F}_0(\varpi ) \label{soluziacinque}
\end{eqnarray}
Comparing with eqs.~\eqref{Fgotsvilup} and \eqref{esatamence} we see
that indeed the reduction to the compact exceptional divisor of the
solution corresponds to the zeroth-order term in the series
expansion in $\mho $.
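This check can be automated: substituting \eqref{soluziacinque} into the nontrivial equations of \eqref{sistema12}, sympy confirms that both vanish identically (an illustrative verification of ours, not the original {\sc mathematica} code):

```python
import sympy as sp

d, s = sp.symbols('delta sigma', positive=True)
S = sp.sqrt(1 + 10*d**2*s**2 + d**4*s**4)

# the solution (soluziacinque)
T1 = sp.sqrt((1 + d**2*s**2 + S)/d)/2
T2 = (5 + d**2*s**2 + S)/6

# the two nontrivial equations of (sistema12)
eq2 = -2*d*T1**4 + (d*s**2 + T1**2)*T2
eq3 = d*s**2*T2 - T1**2*(-2 + 2*T2)

res2 = sp.simplify(sp.expand(eq2))
res3 = sp.simplify(sp.expand(eq3))
# both residuals vanish
```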
\subsubsection{Derivation of the K\"ahler potential of
$\mathbb{F}_2$.} In order to study the K\"ahler geometry we have
first to calculate the K\"ahler potential. Performing the
substitutions \eqref{3subia} and \eqref{nocdue} in $\mathcal{K}_0$
as defined by equation \eqref{Kappazero}, substituting the solution
\eqref{esatamence} for $T_{1,2}$, and performing the limit $\lambda
\rightarrow 0$, namely $\mho \rightarrow 0$, we obtain:
\begin{equation}
\mathcal{K}_0|_{\mathbb{F}_2}=\frac{3+7 \varpi +3 \sqrt{1+10 \varpi
+\varpi ^2}}{1+\varpi +\sqrt{1+10 \varpi +\varpi ^2}}\,.
\end{equation}
For the logarithmic part of the K\"ahler potential defined in
eq. \eqref{kappalogatto} we have instead
\begin{equation}
\mathcal{K}_{\log }|_{\mathbb{F}_2}\equiv \frac{\kappa _1}{2}
\log\left[\frac{1}{2} \sqrt{\frac{1+\varpi +\sqrt{1+10 \varpi
+\varpi ^2} }{\delta }}\right]+\kappa _2\log\left[\frac{1}{6}
\left(5+\varpi +\sqrt{1+10 \varpi +\varpi ^2}\right) \right]\,.
\end{equation}
The parameters $\kappa _{1,2}$ have been introduced to keep track of
the consequences of the choice of the pairing matrix $\mathfrak{C}^{IJ}$.
\paragraph{The K\"ahler potential in polar coordinates.}
The final outcome of the above construction is that the K\"ahler
potential of the metric induced on the second Hirzebruch surface is
the following one
\begin{equation}
\mathcal{K}_{\mathbb{F}_2}=J\left(\rho ,r,\kappa _1,\kappa _2\right)
\label{samotracia}
\end{equation}
where $\rho$ and $r$, defined in eq.~\eqref{brchefreddo} are the
norms of the two complex coordinates and the function $J\left(\rho
,r,\kappa _1,\kappa _2\right)$ is the following explicit one:
\begin{eqnarray}
J(\rho ,r,\kappa _1,\kappa _2)&=&\frac{3+7r^2\left(1+\rho
^2\right)^2+3\sqrt{1+10r^2\left(1+\rho ^2\right)^2+r^4\left(1+\rho
^2\right)^4}}{1+r^2\left(1+\rho
^2\right)^2+\sqrt{1+10r^2\left(1+\rho ^2\right)^2+r^4\left(1+\rho
^2\right)^4}}\nonumber\\
&&+\frac{\kappa
_1}{2}\log\left[\frac{1}{2}\sqrt{\frac{1+r^2\left(1+\rho
^2\right)^2+\sqrt{1+10r^2\left(1+\rho ^2\right)^2+r^4\left(1+\rho
^2\right)^4}}{1+\rho ^2}}\right]\nonumber\\
&&+\kappa _2 \,\log\left[\frac{1}{6}\left(5+r^2\left(1+\rho
^2\right)^2+\sqrt{1+10r^2\left(1+\rho ^2\right)^2+r^4\left(1+\rho
^2\right)^4}\right)\right]\nonumber\\
\label{Jpolare}
\end{eqnarray}
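As a cross-check of eq.~\eqref{Kmetricozza}, one can differentiate $J$ symbolically and verify that in the limit $r\to 0$ the $(u,\bar u)$ component of the metric reduces to $-\kappa_1/\bigl(4(1+|u|^2)^2\bigr)$, the coefficient that governs the restriction of $\mathbb{K}$ to the cycle $C_1$. A minimal sympy sketch, assuming the $\alpha=2$ normalization $\kappa_1=-8$, $\kappa_2=6$ adopted later in the text:

```python
import sympy as sp

u, ub, v, vb = sp.symbols('u ub v vb', positive=True)
k1, k2 = -8, 6                       # kappa_1 = -4 alpha, kappa_2 = 3 alpha, alpha = 2

rho2 = u * ub                        # |u|^2
w = v * vb * (1 + rho2) ** 2         # the invariant varpi = r^2 (1+rho^2)^2
S = sp.sqrt(1 + 10 * w + w ** 2)
X = 1 + w + S

# the Kaehler potential J(rho, r, kappa_1, kappa_2) of eq. (Jpolare):
J = (3 + 7 * w + 3 * S) / X \
    + sp.Rational(k1, 2) * sp.log(sp.sqrt(X / (1 + rho2)) / 2) \
    + k2 * sp.log((5 + w + S) / 6)

g_uub = sp.diff(J, u, ub)            # the (u, u-bar) metric component

# on the compact exceptional divisor (v -> 0) this must tend to
# -k1 / (4 (1+|u|^2)^2):
eps = sp.Rational(1, 10**8)
val = float(g_uub.subs({u: sp.Rational(1, 2), ub: sp.Rational(1, 2),
                        v: eps, vb: eps}))
expected = -k1 / (4 * (1 + 0.25) ** 2)   # = 1.28 at |u|^2 = 1/4
assert abs(val - expected) < 1e-6
```

The full symbolic components are, as stated in the text, too long to display; the check above only probes the divisor limit.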
\subsubsection{Calculation of the K\"ahler metric and of the Ricci tensor of $\mathbb{F}_2$}
From the above data we can calculate the K\"ahler metric, the K\"ahler
2-form and also the determinant of the metric which finally yields the
Ricci tensor and the Ricci 2-form. We performed this calculation
utilizing a {\sc mathematica} code and in the following lines we present
these results. The best way to display them is in terms of polar
coordinates, namely performing the transformation
\begin{equation}
u=\exp[i \theta ]\,\rho \quad ;\quad v= \exp[i \psi ]\, r
\end{equation}
\paragraph{The K\"ahler 2-form.}
We do not display the explicit form of the K\"ahler metric since it
is too long and messy. As usual it is just given by the derivatives
with respect to the original complex coordinates of the K\"ahler
potential:
\begin{equation}
g_{ij^*} = \partial _i\partial _{j^*}J(\rho
,r,\kappa_1,\kappa_2) \quad ; \quad z_i \equiv \{u,v\} \quad ;
\quad \bar{z}_{j^*} \equiv
\left\{\bar{u},\bar{v}\right\}\,;\label{Kmetricozza}
\end{equation}
once transformed to the polar coordinates $\rho,r,\theta,\psi$ the K\"ahler form has
the following structure:
\begin{eqnarray}
\mathbb{K}& = & \mathbb{K}_{\text{r$\theta $}} \text{dr}\wedge
\text{d$\theta $}+\mathbb{K}_{\text{r$\rho $}} \text{dr}\wedge
\text{d$\rho $}-\mathbb{K}_{\text{$\psi $r}} \text{dr}\wedge
\text{d$\psi $} \nonumber\\ & + & \mathbb{K}_{\theta \rho } \text{d$\theta $}\wedge
\text{d$\rho $}-\mathbb{K}_{\psi \theta } \text{d$\theta $}\wedge
\text{d$\psi $}+\mathbb{K}_{\rho \psi } \text{d$\rho $}\wedge
\text{d$\psi $}
\label{Kformafredda}
\end{eqnarray}
where the explicit form of the components calculated by the
{\sc mathematica} code are also too long and messy to be displayed.
\paragraph{The Ricci tensor and the Ricci 2-form.}
Using the same {\sc mathematica} code we have calculated the Ricci tensor
of the above metric, defined by:
\begin{equation}
R_{ij^*} = \partial _i\partial_{j^*}\log[\text{Det}[g]]
\end{equation}
and the Ricci 2-form defined by
\begin{equation}
\text{$\mathbb{R}$ic}= - \, \tfrac{i}{2\pi }R_{ij^*}
\text{dz}^i\wedge d\bar{z}^{j^*} \, = \, \tfrac{i}{2\pi }
\bar{\partial} \, \partial \log[\text{Det}[g]]\label{F2riccioide}
\end{equation}
Once transformed to polar coordinates the Ricci 2-form has the same
structure as the K\"ahler 2-form, namely
\begin{eqnarray}
\text{$\mathbb{R}$ic} &=&\text{Ric}_{\text{r$\theta $}}
\text{dr}\wedge \text{d$\theta $}+\text{Ric}_{\text{r$\rho $}}
\text{dr}\wedge \text{d$\rho $}-\text{Ric}_{\text{$\psi $r}}
\text{dr}\wedge \text{d$\psi $}+\text{Ric}_{\theta \rho }
\text{d$\theta $}\wedge \text{d$\rho $}\nonumber\\
& - & \text{Ric}_{\psi \theta } \text{d$\theta $}\wedge \text{d$\psi
$}+\text{Ric}_{\rho
\psi } \text{d$\rho $}\wedge \text{d$\psi $}
\label{Ricformozza}
\end{eqnarray}
The explicit form of the components of $\text{$\mathbb{R}$ic}$ is
even lengthier than that of the K\"ahler 2-form components and is not
displayed. An important issue is whether the constructed K\"ahler
metric might be a K\"ahler-Einstein metric, namely whether the Ricci
tensor might be proportional to the metric coefficients. However it
is well known that Hirzebruch surfaces cannot carry K\"ahler-Einstein metrics \cite{Besse}.
An easy way of checking this fact is to note that, since the Ricci form represents the
first Chern class of the tangent bundle to the Hirzebruch surface, we have
\begin{equation} \int_{C_1} \mathbb{R}\text{ic} = \int_{C_1} c_1(T_{\mathbb F_2})
= 2 H\cdot C_1 = 0\,, \label{periodRicci1}\end{equation} where $H$
is the divisor described in Sections \ref{coorcurves} and
\ref{Ylinebundle}, while on the other hand the integral of K\"ahler
form on the curve $C_1$ is of course positive; note that one also
must have:
\begin{equation} \int_{C_2} \mathbb{R}\text{ic} = 2 H \cdot C_2 = 2.\label{periodRicci2} \end{equation}
To check the robustness of our computations we verified these equations explicitly, as shown below.
\paragraph{Reduction to the
homology cycle $C_1$.} The reduction to the homology cycle
$C_1$ is obtained by setting $r = \psi = 0$ together
with the vanishing of their differentials. Applying such a procedure
to the K\"ahler 2-form and to the Ricci 2-form we obtain:
\begin{equation}
\mathbb{K}|_{C_1}= -\kappa _1\frac{ \rho \,
d\rho\wedge d\theta}{4 \pi \left(1+\rho ^2\right)^2} \quad
;\quad \text{$\mathbb{R}$ic}|_{C_1}= 0
\end{equation}
This confirms eq.~\eqref{periodRicci1}.
\paragraph{Period of the K\"ahler two form on $C_1$.}
Next we can calculate the period of the K\"ahler form on
$C_1$ and we obtain:
\begin{equation}
\int _{C_1}\mathbb{K} = - \int _0^{2\pi }d\theta
\int_0^{\infty } \frac{ \kappa _1 \rho }{4 \pi \left(1+\rho
^2\right)^2} \, d\rho = -\frac{\text{ }\kappa _1 }{4}
\end{equation}
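The $\rho$-integral equals $1/2$, so the period is $-\kappa_1/4$; a quick numerical confirmation with scipy (using the $\alpha=2$ value $\kappa_1=-8$, for which the period equals $2$):

```python
import math
from scipy.integrate import quad

kappa1 = -8.0   # the alpha = 2 normalization chosen later in the text

# radial part of the C_1 period integral:
I, err = quad(lambda rho: rho / (1 + rho**2)**2, 0, math.inf)

# theta integration contributes 2 pi; the integrand carries 1/(4 pi):
period_C1 = -2 * math.pi * (kappa1 / (4 * math.pi)) * I   # = -kappa1/4

assert abs(I - 0.5) < 1e-9
assert abs(period_C1 - (-kappa1 / 4)) < 1e-9              # = 2 = alpha
```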
\paragraph{Period of the K\"ahler two form on $C_2$.}
Here we calculate the restriction of the K\"ahler 2-form to the
homology cycle $C_2$ and we obtain
\begin{equation}
\mathbb{K}|_{C_2}\, = \,\left(f_0(r)+\kappa_1
\,f_1(r)+\kappa_2 \,f_2(r)\right)\, dr\wedge d\psi
\end{equation}
where {\small
\begin{eqnarray}
f_0(r) &=& \frac{r \left(r^6-\left(\sqrt{r^4+10 r^2+1}-15\right) r^4+\left(27-10
\sqrt{r^4+10 r^2+1}\right) r^2-\sqrt{r^4+10 r^2+1}+5\right)}{2 \pi
\left(r^4+10 r^2+1\right)^{3/2}}\nonumber \\
f_1(r) &=& \frac{3 r \left(r^2+1\right)}{4 \pi \left(r^4+10 r^2+1\right)^{3/2}} \nonumber\\
f_2(r) &=& \frac{r \left(5 r^2+1\right)}{\pi \left(r^4+10 r^2+1\right)^{3/2}}
\end{eqnarray}
} Performing also the $\psi $ integration we find
\begin{equation}
2\pi \int_0^{\infty } f_1(r) \, dr \,=\, \frac{1}{4}\quad; \quad
2\pi \int_0^{\infty } f_2(r) \, dr\, = \, 1 \quad ; \quad 2\pi \int_0^{\infty
} f_0(r) \, dr \,=\,0
\end{equation}
So that the period of the K\"ahler 2-form on $C_2$ is
the following one:
\begin{equation}
\int _{C_2}\mathbb{K} = \frac{\kappa _1
}{4} + \kappa _2\,.
\end{equation}
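The radial integrals can be evaluated numerically; quoting the $\psi$-integrated values (i.e.\ including the factor $2\pi$), with the orientation conventions that reproduce the period $\int_{C_2}\mathbb{K}=\kappa_1/4+\kappa_2$, one finds $2\pi\int_0^\infty f_1\,dr=\tfrac14$, $2\pi\int_0^\infty f_2\,dr=1$ and $2\pi\int_0^\infty f_0\,dr=0$. A scipy sketch, using the $\alpha=2$ values $\kappa_1=-8$, $\kappa_2=6$, for which the period equals $4=2\alpha$:

```python
import math
from scipy.integrate import quad

def f0(r):
    S = math.sqrt(r**4 + 10*r**2 + 1)
    num = r * (r**6 - (S - 15)*r**4 + (27 - 10*S)*r**2 - S + 5)
    return num / (2 * math.pi * S**3)

def f1(r):
    S = math.sqrt(r**4 + 10*r**2 + 1)
    return 3 * r * (r**2 + 1) / (4 * math.pi * S**3)

def f2(r):
    S = math.sqrt(r**4 + 10*r**2 + 1)
    return r * (5*r**2 + 1) / (math.pi * S**3)

# psi-integrated radial integrals:
I0 = 2 * math.pi * quad(f0, 0, math.inf)[0]
I1 = 2 * math.pi * quad(f1, 0, math.inf)[0]
I2 = 2 * math.pi * quad(f2, 0, math.inf)[0]

assert abs(I0) < 1e-6 and abs(I1 - 0.25) < 1e-6 and abs(I2 - 1.0) < 1e-6

kappa1, kappa2 = -8, 6           # alpha = 2 normalization
period_C2 = I0 + kappa1 * I1 + kappa2 * I2
assert abs(period_C2 - (kappa1 / 4 + kappa2)) < 1e-5     # = 4 = 2 alpha
```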
Recalling eq.~\eqref{kappalogatto}, and with the present choice of $\zeta$, the
values of $\kappa_1$ and $\kappa_2$ are
$$\kappa_1 = -4\alpha, \quad \kappa_2 = 3\alpha, $$
so that the volumes of $C_1$ and $C_2$ are
$$\int_{C_1}\mathbb K = \alpha,\qquad
\int_{C_2}\mathbb K = 2\alpha. $$
As we have previously discussed, we choose $\alpha=2$ so that the
periods of the tautological bundles are all integral.
\par
In a similar way we calculated the period of the Ricci 2-form on
$C_2$ and, as expected, we found the value $2$, in agreement with
eq.~\eqref{periodRicci2}.
\subsection{The isometry group of the second Hirzebruch surface and of the full resolution $Y$}
The K\"ahler potential is function only of the following combination
\begin{equation}
\varpi = |v|^2\left(1+\left|u|^2 \right)^2\right)\,.
\end{equation}
Given an element of $\mathrm{SU (2)}$, namely a 2$\times $2
matrix
\begin{equation}
\gamma =\left(
\begin{array}{cc}
a & b \\
c & d \\
\end{array}
\right) ;\quad ad-bc \, = \, 1 \, ; \quad \gamma
^{\dagger }=\gamma ^{-1}\,,
\end{equation}
the object $\varpi $ is invariant under the holomorphic
transformation
\begin{equation}
(u,v)\longrightarrow \left(\frac{a u + b}{ c u + d\text{ }},v (c
u+d)^2\right)\,. \label{pretarabaccus}
\end{equation}
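The invariance can also be checked numerically: parametrizing $\gamma\in\mathrm{SU}(2)$ by $c=-\bar b$, $d=\bar a$ with $|a|^2+|b|^2=1$, the quantity $\varpi$ is unchanged under \eqref{pretarabaccus}. A minimal sketch:

```python
import cmath

def varpi(u, v):
    # the invariant |v|^2 (1 + |u|^2)^2
    return abs(v)**2 * (1 + abs(u)**2)**2

# an SU(2) matrix: c = -conj(b), d = conj(a), |a|^2 + |b|^2 = 1
a = complex(0.6, 0.3)
b = cmath.sqrt(1 - abs(a)**2) * cmath.exp(0.7j)
c, d = -b.conjugate(), a.conjugate()
assert abs(a*d - b*c - 1) < 1e-12          # unit determinant

u, v = complex(0.4, -1.1), complex(2.0, 0.5)
u2 = (a*u + b) / (c*u + d)                 # the transformation (pretarabaccus)
v2 = v * (c*u + d)**2
assert abs(varpi(u2, v2) - varpi(u, v)) < 1e-9
```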
According to the description we give in the Appendix, the coordinates $(u,
t)$ transform as follows:
\begin{equation}
(u,t)\longrightarrow \left(\frac{a u + b}{ c u + d\text{ }},t (c
u+d)^{-2}\right) \,.\label{tarabaccus}
\end{equation}
Hence we conclude that the coordinate $t$ spanning the fiber of the
Hirzebruch surface is related to the toric coordinate $v$ by \begin{equation}
t= \frac{1}{v}
\end{equation}
It is also evident that the complete isometry group of the K\"ahler metric $\mathbb K$ is
$\mathrm{SU(2)\times U(1)}$, where $\text{SU}(2)$ acts as in eq.~\eqref{pretarabaccus}, while
$\text{U}(1)$ is simply the phase transformation of $v$.
This is inherited by the
K\"ahler metric of the full variety $Y$. Actually in the case of $Y$
the isometry group extends by means of an extra
$\mathrm{U(1)}$ factor corresponding to the phase transformation of
the coordinate $w$ spanning the fiber of the line bundle
$Y\longrightarrow \mathbb{F}_2$:
\begin{equation}\label{balordus}
\mathrm{Iso}_{Y} \, = \, \mathrm{SU(2)\times U(1)\times U(1)}\,.
\end{equation}
\subsection{Ricci-flat metrics on Y} All smooth resolutions of singularities of ${\mathbb C}^3/\Gamma$, where
$\Gamma$ acts as a subgroup of $\operatorname{SU}(3)$, carry
Ricci-flat K\"ahler metrics, as proved in \cite{Joyce-QALE}.
Therefore, the variety $Y$ carries a Ricci-flat metric --- actually,
as $\dim H^2(Y,{\mathbb Q})=2$, it carries a 2-parameter family of such
metrics. However {\em one should not expect} the metric coming from
the generalized Kronheimer construction to be one of these metrics;
we shall check this point in calculations that will appear in a
future publication \cite{conmasbia}. Moreover, as the action of
${\mathbb Z}_4$ on ${\mathbb C}^3-\{0\}$ is not free (all points of the $z$ axis have
a ${\mathbb Z}_2$ isotropy, compare discussion in Section \ref{relation}),
the Ricci-flat metrics are not ALE; they do have a suitable
asymptotically Euclidean behaviour away from the singular locus, but
as the latter is not compact, their asymptotics is more complicated
than that of an ALE metric (these metrics have been called ``QALE''
by Joyce).
\section{The partial resolution $Y_3$ and its K\"ahler geometry}
\label{Y3Kallero} In this Section we construct the K\"ahler
Geometry of the partial desingularization $P_3$ of the quotient
$\mathbb{C}^3/\mathbb{Z}_{4 }$ that occurs at some walls of the
$\zeta$ space; the computations suggest that the partial
desingularization $P_3$ is again the total space of a line bundle,
this time over a singular variety $Q_2$. Actually $P_3$
is $ Y_3$, one of the degenerations arising in our toric analysis,
and the base variety $Q_2$ is the weighted projective space $\mathbb{P}[1,1,2]$.
The strategy that we adopt is analogous to that utilized for the
nondegenerate variety $Y$ and it goes
along the following steps:
\begin{enumerate}
\item We choose the partial desingularization space $P_3$ provided by the Kronheimer
construction in the plane $\zeta _1 = \zeta _3 = a$, $\zeta _2 = 2a$.
\item Reducing the moment map equations to the compact exceptional
divisor (the singular surface $Q_2$) we obtain a system
solvable by radicals and the explicit solution is particularly simple.
\item We obtain the complete solution for the
full variety starting from the solution on the $Q_2$
surface and expressing the sought-for fiber metrics $T_{1,2}$ as
power series in the coordinate $w$ that represents the section of
the line bundle over $Q_2$.
\end{enumerate}
\subsection{Construction of the K\"ahler geometry of $Q_2$}
With the same logic utilized in the case of the full resolution we
begin with the analysis of the K\"ahler Geometry of the singular base
manifold $Q_2$ of the line bundle $P_3\longrightarrow
Q_2$. Our main weapon in this analysis is the reduction of
the algebraic system of moment map equation to the exceptional
divisor by means of the limit $\lambda \longrightarrow 0$. We
construct the K\"ahler potential of $Q_2$ performing the
limit $\lambda \longrightarrow 0$ both on the nonlogarithmic part
of the K\"ahler potential of the resolution and on the logarithmic
one.
\subsubsection{Construction of the K\"ahler potential}
We choose the following special point on the chamber wall
$\mathcal{W}_1$
\begin{equation}\label{briscola}
\zeta _1= 1, \quad \zeta _3= 1, \quad \zeta _2= 2
\end{equation}
so that the system in eq.~\eqref{brillo} becomes
\begin{equation}\label{eqqa1}
\left(
\begin{array}{c}
\left(-1-\lambda ^4 T_2^2-\delta T_1 T_3\right) \left(T_1^2-T_3^2\right) \\
\delta \left(\sigma ^2 T_2+\lambda ^4 \sigma ^2 T_2^3-T_1 T_3 \left(T_1^2+T_3^2\right)\right) \\
-2 T_1 T_2 T_3-\left(-1+\lambda ^4 T_2^2\right) \left(T_1^2+\delta \sigma ^2 T_2+T_3^2\right) \\
\end{array}
\right)=\left(
\begin{array}{l}
0 \\
0 \\
0 \\
\end{array}
\right)
\end{equation}
According to what we discussed in Section \ref{ballavantana} and
eq.~\eqref{raschiotto} we perform the replacement
\begin{equation}\label{eqqa2}
T_1=\sqrt{\frac{\sigma }{\lambda }} \frac{
\sqrt[4]{Z^3+Z}}{\sqrt[4]{2}} \quad;\quad T_2=\frac{1}{\lambda ^2}
Z \quad ;\quad T_3=\sqrt{\frac{\sigma }{\lambda }}\frac{
\sqrt[4]{Z^3+Z}}{\sqrt[4]{2}}
\end{equation}
and introduce the appropriate rescaling
\begin{equation}\label{eqqa3}
Z= \lambda ^2 z\,.
\end{equation}
In this way the system \eqref{eqqa1} becomes
\begin{equation}\label{eqqa4}
\left(
\begin{array}{c}
0 \\
0 \\
-\sqrt{2} \left(-1+z+z^2 \lambda ^4\right) \sqrt{z+z^3 \lambda ^4} \sigma +\left(z \delta -z^3 \delta \lambda ^4\right) \sigma ^2 \\
\end{array}
\right)=\left(
\begin{array}{c}
0 \\
0 \\
0 \\
\end{array}
\right)\,.
\end{equation}
Then we reduce the system to the exceptional divisor performing the
limit $\lambda \to 0$ and we obtain the algebraic equation
\begin{equation}\label{eqqa5}
\sqrt{z} \sigma \left(\sqrt{2}-\sqrt{2} z+\sqrt{z} \delta
\sigma \right) =0
\end{equation}
whose unique everywhere positive real solution is \begin{equation}\label{eqqa6}
z = \frac{1}{4} \left(4+\delta ^2 \sigma ^2+\delta \sigma
\sqrt{8+\delta ^2 \sigma ^2}\right)\,.
\end{equation}
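That \eqref{eqqa6} indeed solves \eqref{eqqa5} follows by rewriting the second factor as $\sqrt{2}(z-1)=\sqrt{z}\,\delta\sigma$ and squaring, which yields the quadratic $2z^2-(4+\delta^2\sigma^2)z+2=0$; a quick numerical confirmation:

```python
import math

def z_root(T):
    # the solution (eqqa6) with T = delta * sigma
    return (4 + T * (T + math.sqrt(8 + T**2))) / 4

# residue of sqrt(2) - sqrt(2) z + sqrt(z) T, relative to the size of z:
worst = max(abs(math.sqrt(2) * (1 - z_root(T)) + math.sqrt(z_root(T)) * T)
            / max(1.0, z_root(T))
            for T in (0.1, 1.0, 3.7, 25.0))
assert worst < 1e-9
```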
\subsection{The K\"ahler potential addends}
As in the case of the fully resolved variety $Y$, we begin by writing
the restriction to the hypersurface $\mathcal{N}_{\zeta }$ of the
K\"ahler potential of the flat variety $\text{Hom}(Q\otimes R,R)^{{\mathbb Z}_4}$. Inserting the above choices in
eq.~\eqref{Kappazero} we obtain
\begin{equation}\label{eqqa7}
\mathcal{K}_0 = \frac{2+2 z^2 \lambda ^4+2 \sqrt{2}\,
\sqrt{z+z^3 \lambda ^4}\, \delta \sigma }{z}
\end{equation}
For the logarithmic part of the K\"ahler potential we have
\begin{equation}\label{eqqa8}
\mathcal{K}_{\log }\equiv \zeta_I\, \mathfrak{C}^{IJ } \,
\log \left[X_J\right]^{\alpha_{\zeta_J}}
\end{equation}
with the above choices, and disregarding $\log[\lambda]$ addends
since $\lambda $ is the squared modulus of a holomorphic function, we
find:
\begin{equation}\label{eqqa9}
\mathcal{K}_{\log } = 2 \alpha_{\zeta_2}\log\left[T_2\right] = 2 \alpha_{\zeta_2} \log[z]
\end{equation}
so that
\begin{equation}\label{eqqa10}
\mathcal{K} = \frac{2+2 z^2 \lambda ^4+2 \sqrt{2}
\delta \sigma
\sqrt{z+z^3 \lambda ^4} }{z}+ 2 \alpha_{\zeta_2}\log[z]
\end{equation}
This determines the K\"ahler geometry of the singular variety
$P_3$, provided we are able to write the appropriate positive real
solution $z=\hat{\mathfrak{F}}(\lambda ,\delta \sigma )$ of the
moment map equation \eqref{eqqa4}.
\subsubsection{The K\"ahler potential of $Q_2$}
Performing the limit $\lambda \to 0$ and substituting for
$z$ the solution of the reduced moment map equations presented in
eq.~\eqref{eqqa6} we obtain the K\"ahler potential of the divisor
$Q_2$:
\begin{eqnarray}\label{eqqa11}
\mathcal{K} |_{Q_2}&=& \frac{2 \left(4+2 \sqrt{2} \delta \sigma \sqrt{4+\delta \sigma \left(\delta \sigma +\sqrt{8+\delta
^2 \sigma ^2}\right)}\right) }{4+\delta \sigma \left(\delta \sigma
+\sqrt{8+\delta ^2 \sigma
^2}\right)}\nonumber\\
&&+2 \alpha_{\zeta_2} \log\left[\frac{1}{4} \left(4+\delta
\sigma \left(\delta \sigma +\sqrt{8+\delta ^2 \sigma ^2}\right)\right)\right]
\end{eqnarray}
Naming
\begin{equation}\label{eqqa12}
T= \delta \sigma \,=\, r \left(1+\rho ^2\right)\quad ;\quad
W = \frac{1}{4} \left(4+T
\left(T+\sqrt{8+T^2}\right)\right)
\end{equation}
where, just as before,
\begin{equation}\label{eqqa13}
r= |v| \quad ; \quad \rho = |u|
\end{equation}
the result \eqref{eqqa11} can be rewritten in the following much
simpler form:
\begin{equation}\label{eqqa14}
\mathcal{K} |_{Q_{2 }}= 4-\frac{2}{W}+2 \alpha_{\zeta_2}\log[W]
\approx -\frac{2}{W}+2 \alpha_{\zeta_2}\log[W]
\end{equation}
From these data one can compute the K\"ahler metric; when $ \alpha_{\zeta_2} =1$
this takes the
particularly simple form
\begin{equation}\label{eqqa15}
g_{ij^*}\, = \,\left(
\begin{array}{cc}
\displaystyle \frac{2 r \left(r+\frac{2 \sqrt{2}}{\sqrt{W}}\right)}{1+W}
& \displaystyle \frac{2 \sqrt{2} e^{-i (\theta -\psi )} \rho }{\sqrt{W} (1+W)} \\[10pt]
\displaystyle \frac{2 \sqrt{2} e^{i (\theta -\psi )} \rho }{\sqrt{W} (1+W)}
& \displaystyle \frac{2 (-1+W)}{r^2 \left(W+W^2\right)} \\
\end{array}
\right)
\end{equation}
where $\theta $ and $\psi $ are the phases of $u$ and $v$,
respectively. This is enough for our purposes since the periods we are interested in
scale multiplicatively with respect to $ \alpha_{\zeta_2}$.
\subsubsection{The determinant of the K\"ahler metric and the Ricci tensor}
Calculating the determinant of $g_{ij^*}$ we find
\begin{equation}\label{eqqa16}
\det [g] = \frac{4}{W+W^2}
\end{equation}
then calculating the Ricci tensor we obtain:
\begin{eqnarray}
\text{Ric}_{\text{11}^*}&=& -\frac{(-1+W) W (1+5 W)}{r^2 (1+W)^4}\nonumber\\
\text{Ric}_{\text{12}^*}&=& -\frac{\sqrt{2} e^{i (\theta -\psi )}
W^{3/2} (1+5 W)
\rho }{(1+W)^4}\nonumber\\
\text{Ric}_{\text{21}^*}&=& -\frac{\sqrt{2} e^{-i (\theta -\psi )}
W^{3/2}
(1+5 W) \rho }{(1+W)^4}\nonumber\\
\text{Ric}_{\text{22}^*}&=& \frac{1}{\sqrt{2} \sqrt{W} (1+W)^5}r
(140-4 W (70
+W (-34+W (6+5 W)))\nonumber\\
&&+\left.\sqrt{2} r \sqrt{W} \left(140+W \left(-139+W \left(4+W-2
W^2\right)\right)\right)\right.\nonumber\\
&&\left.-70 r^2 W \left(-1+\rho ^4\right)\right)\label{eqqa17}
\end{eqnarray}
Again, from here we can see that the constructed metric is not K\"ahler-Einstein.
Next considering the transformation to polar coordinates
\begin{equation}\label{eqqa18}
u=e^{i \theta } \rho \quad ; \quad v=e^{i \psi } r
\end{equation}
we write out the form of the K\"ahler 2-form explicitly:
\begin{eqnarray}\label{eqqa19}
\mathbb{K}&=&\frac{2 \sqrt{2} \rho ^2 \text{dr}\wedge \text{d$\theta
$}}{\pi \sqrt{W} (1+W)}+\frac{2 (-1+W) \text{dr}\wedge \text{d$\psi
$}}{\pi
r W (1+W)}\nonumber\\
&&-\frac{2 r \left(2 \sqrt{2}+r \sqrt{W}\right) \rho \text{d$\theta $}\wedge
\text{d$\rho $}}{\pi \sqrt{W} (1+W)}+\frac{2 \sqrt{2} r \rho
\text{d$\rho $}\wedge \text{d$\psi $}}{\pi \sqrt{W} (1+W)}
\end{eqnarray}
\subsubsection{Calculations of periods of the K\"ahler 2-form
on homology cycles} Starting from eq.~\eqref{eqqa19} we
can calculate the periods of the K\"ahler 2-form on the
homology cycles $C_1$ and $C_2$. This
allows one to get a clear picture of the degeneration of the $Y$ variety,
showing which cycles in $Y_3$ shrink to a vanishing volume.
\paragraph{Cycle $C_1$.}
Setting $r=0$ and $\psi =0$ we obtain the reduction of the K\"ahler
2-form to the cycle $C_1$. Expanding in power series of
$r $ around $ r=0$ we obtain\begin{equation}\label{eqqa20}
\mathbb{K} = -\frac{2 \left(\sqrt{2} \rho \text{d$\theta $}\wedge
\text{dr}\right) r}{\pi }+\mathcal{O}\left(r^2\right)
\end{equation}
It follows that for $r = 0$ (equation of the cycle
$C_1$) the K\"ahler 2-form goes to zero, namely the
$C_1$ cycle shrinks to zero.
\paragraph{Cycle $C_2$.}
The reduction to the cycle $C_2$ is obtained in the
limit $\rho \rightarrow 0$, $\theta \rightarrow 0$. We obtain
\begin{equation}\label{eqqa21}
\mathbb{K}|_{C_2}=\frac{\left(4+r^2-r\sqrt{8+r^2}\right)\text{dr}
\wedge\text{d$\psi $}}{2\pi \sqrt{8+r^2}}
\end{equation}
so that for $\alpha_{\zeta_J}=1$ we get
\begin{equation}\label{eqqa22}
\int _{C_2}\mathbb{K}\text{ }=\text{ }2\pi
\int_0^{\infty } \frac{\left(4+r^2-r \sqrt{8+r^2}\right) }{2 \pi
\sqrt{8+r^2}} \, dr = 2
\end{equation}
and the result for a generic $\alpha_{\zeta_J}$ is therefore
$$ \int _{C_2}\mathbb{K} =2 \alpha_{\zeta_J}.$$
It follows that for $\rho = 0$ (equation of the cycle
$C_2$) the K\"ahler form has period $2\alpha_{\zeta_J}$.
\subsection{Interpretation}
\label{interpretazia}
The interpretation of the above results is sufficiently clear. The
cycle $C_1$ is the intersection of the two components of the
exceptional divisor $D_c$ and $D_{nc}$, where $D_c$ is the
Hirzebruch surface $\mathbb{F}_2$ and $D_{nc}$ = $\mathbb{P}^1
\times \mathbb{C}$. The vanishing of the cycle $C_1$ means that the
exceptional divisor $D_{nc}$ has disappeared, while the compact one
$D_c$ remains in the form of the singular variety
$\mathbb{P}[1,1,2]$. This means that $P_3$ is precisely the partial
resolution of the orbifold singularity called $Y_3$ in Section
\ref{Y3sezia}.
\subsubsection{Periods of the Chern forms $\omega _{1,2,3}$.}
The fiber metrics of the three tautological bundles are for the
chosen point of the $\zeta $ space given by
\begin{equation}\label{eqqa24}
\mathcal{H}_{1,2,3}=\left\{
\alpha_{\zeta_J}\, \log\left[\frac{\left(Z+Z^3\right)^{1/4}}{2^{1/4}}\right],\
\alpha_{\zeta_J}\, \log[Z],\
\alpha_{\zeta_J}\, \log\left[\frac{\left(Z+Z^3\right)^{1/4}}{2^{1/4}}\right]\right\}\;
\end{equation}
substituting $Z\, \to \, \lambda ^2 \, z$ we obtain:
\begin{equation}\label{eqqa25}
\mathcal{H}_{1,2,3} = \left\{\frac{1}{4}
\alpha_{\zeta_J}\, \log\left[\frac{1}{2} \left(z \lambda ^2+z^3 \lambda
^6\right)\right],\ \alpha_{\zeta_J}\, \log\left[z \lambda ^2\right],\ \frac{1}{4} \alpha_{\zeta_J}\,
\log\left[\frac{1}{2} \left(z \lambda ^2+z^3 \lambda
^6\right)\right]\right\}
\end{equation}
Performing the reduction to the compact exceptional divisor we have:
\begin{eqnarray}\label{eqqa26}
\mathcal{H}_{1,2,3}
&=&\left\{\alpha_{\zeta_J}\, \frac{\log[z]}{4}, \
\alpha_{\zeta_J}\, \log[z],\ \alpha_{\zeta_J}\, \frac{\log[z]}{4}\right\}\nonumber\\
z &=&\frac{1}{4} \left(4+r^2 \left(1+\rho ^2\right)^2+r \left(1+\rho
^2\right) \sqrt{8+r^2 \left(1+\rho ^2\right)^2}\right)\,.
\end{eqnarray}
So that for the period integrals, whatever the supporting cycle,
we obtain:
\begin{equation}\label{eqqa27}
\left\{\int \, \omega_1,\, \int \, \omega_2,\, \int \,
\omega_3\right\} \, = \,\left\{ \frac 14 \, \alpha_{\zeta_J}\, \int \Omega , \ \alpha_{\zeta_J}\, \int
\Omega , \ \frac 14 \, \alpha_{\zeta_J}\, \int \Omega \right\}\quad ; \quad \Omega =
\frac{i}{2\pi }\partial \bar{\partial } \log[z]
\end{equation}
The periods of $\Omega $ are very easily calculated since in
cohomology we have:
\begin{equation}\label{eqqa28}
\alpha_{\zeta_J}\, \left[\Omega \right] = \frac{1}{2}\, \left[\mathbb{K}\right]
\end{equation}
where $\mathbb{K}$ is the K\"ahler 2-form. The above equation follows
from eq.~\eqref{eqqa10} and the observation that the non-logarithmic
part of the K\"ahler potential gives rise to a cohomologically
trivial addend in $\mathbb{K}$.
Hence we have:
\begin{equation}\label{eqqa29}
\int _{C_1}\Omega = 0 \quad;\quad
\int_{C_2}\Omega = 1
\end{equation}
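For a rotationally invariant potential $\phi(b)$, $b=|v|^2$, the period of $\frac{i}{2\pi}\partial\bar{\partial}\phi$ over the $v$-plane equals the boundary value $\lim_{b\to\infty} b\,\partial_b\phi$; applied to $\phi=\log z$ restricted to $C_2$ (namely $\rho=0$) this reproduces $\int_{C_2}\Omega=1$. A numerical sketch:

```python
import math

def z_of(b):
    # restriction of z (eqqa26) to the cycle C_2 (rho = 0), b = r^2
    return (4 + b + math.sqrt(b * (8 + b))) / 4

def period_density(b, h=1e-6):
    # b * d(log z)/db via a central finite difference
    db = h * b
    return b * (math.log(z_of(b + db)) - math.log(z_of(b - db))) / (2 * db)

# the period of Omega on C_2 is the b -> infinity limit:
assert abs(period_density(1e8) - 1.0) < 1e-6
```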
In conclusion we have that there is only one independent
tautological bundle corresponding to $\omega _2$ and we have:
\begin{equation}\label{eqqa30}
\int _{C_1}\omega _{1,2,3}= 0 \quad ; \quad
\int_{C_2}\omega _2 =\alpha_{\zeta_J} \quad ; \quad
\int_{C_2}\omega _{1,3 }= \frac{\alpha_{\zeta_J}}{4}\quad ; \quad
\left[\mathbb{K}\right]= 2\left[\omega _2\right]\,.
\end{equation}
\subsection{The K\"ahler geometry of the singular variety $Y_3$}
Let us finally derive the K\"ahler geometry of the singular threefold
$Y_3$ which is the total space of a line bundle over the singular
exceptional divisor $\mathbb{P}[1,1,2]$. To this
effect let us consider the following sextic equation for an unknown
$F$, where the coefficients are functions of the same invariants
$\varpi $ and $\mho $ previously defined in eqs.~\eqref{prillini}:
\begin{eqnarray}
\mathcal{P}(F) &\equiv& 2+F (-4-\varpi )+F^2 (2-2 \mho )+2 F^3
\varpi \mho +2 F^6 \mho ^3+F^4 \left(2 \mho -2 \mho
^2\right)\nonumber\\
&&+F^5 \left(4 \mho ^2-\varpi
\mho ^2\right) \, = \, 0 \label{eqqa31}
\end{eqnarray}
Just as in the case of the full variety $Y$, the sextic polynomial
$\mathcal{P}(F)$ has the property that, for all positive values of
the parameters $\varpi $ and $\mho $, it has two positive real roots
and four complex roots arranged in two pairs of complex conjugate
roots. Hence the largest real root is uniquely identified and
singles out a precise function $\mathfrak{G}(\varpi ,\mho )$ of
the parameters:
\begin{equation}\label{eqqa32}
\mathfrak{G}(\varpi ,\mho ) \equiv \text{largest real root of }
\mathcal{P}(F)\,.
\end{equation}
The function $\mathfrak{G}(\varpi ,\mho )$ can be developed in
power series of $\mho $ and we have: \begin{eqnarray}
\mathfrak{G}(\varpi ,\mho )&=&\frac{1}{4}
\left(4+\varpi +\sqrt{\varpi (8+\varpi )}\right)\nonumber\\
&&-\frac{\left(\left(4+\varpi +\sqrt{\varpi (8+\varpi )}\right)^2
\left(3 \varpi ^2+4 \sqrt{\varpi (8+\varpi )}+\varpi \left(16+3
\sqrt{\varpi (8+\varpi )}\right)\right)\right) \mho }{64
\sqrt{\varpi (8+\varpi )}}\nonumber\\&&+O[\mho ]^2 \label{eqqa33}
\end{eqnarray}
In terms of this function and of the above variables the K\"ahler
potential of the complete $Y_3$ variety takes the form
\begin{equation}\label{eqqa34}
\mathcal{K}_{\text{Y3}} = \frac{2 \left(1+\mathfrak{G}^2 \mho
+\sqrt{2} \sqrt{\mathfrak{G} \varpi \left(1+\mathfrak{G}^2 \mho
\right)}+\mathfrak{G} \log[\mathfrak{G}]\right)}{\mathfrak{G}}
\end{equation}
The function $\mathfrak{G}(\varpi , \mho)$ is displayed in Figure \ref{gigotica}.
\begin{figure}
\begin{center}
\includegraphics[height=8cm]{KFgothic.png}
\caption{\label{gigotica} Plot of the function
$\mathfrak{G}(\varpi , \mho)$. }
\end{center}
\end{figure}
\section{Summary of the Chamber Structure Discussion} \label{summatheologica}
We can now try to summarize our long and detailed discussion of the
chamber structure pertaining to the K\"ahler quotient resolution
\`{a} la Kronheimer of the singularity $\mathbb{C}^3/\mathbb{Z}_4$.
First of all let us stress that the chamber structure is one of the
most relevant aspects of the entire construction from the point of
view of any physical application in the context of the gauge/duality
correspondence. Indeed the $\zeta$ parameters are the
Fayet-Iliopoulos parameters in the gauge theory side of the
correspondence, while they should be retrievable as fluxes of
suitable $p$-forms on the supergravity side of the correspondence
and hence as \textit{parameters of the Ricci flat K\"ahler metric}
existing on the same resolved smooth manifold. Therefore, loosely
speaking, the chamber structure is a mathematical synonym of the
\textit{Phase Diagram} of the Gauge Theory.
\par
Having clarified the physical relevance of the topic, let us state
what results we found.
The toric analysis of the problem has revealed several possible
degenerations of the full resolution
$Y\,\longrightarrow\mathbb{C}^3/\mathbb{Z}_4$. Only three of such
manifolds are realized by the K\"ahler quotient:
\begin{description}
\item[a)] The complete resolution $Y$, which happens to be the
total space of the canonical line bundle of the compact exceptional divisor
$D_c\,\simeq \, \mathbb{F}_2$ where $\mathbb{F}_2$
is the second Hirzebruch surface. In this case the exceptional
divisor has an additional non-compact component $D_{nc}\,\simeq \, \mathbb{P}^1\times \mathbb{C}$.
\item[b)] The partial resolution $Y_3$ which happens to be the
total space of the canonical line bundle over the singular compact exceptional
divisor ${D}_c\,\simeq \,
\mathbb{P}[1,1,2]$. In this case the non-compact exceptional
divisor $D_{nc}$ disappears since its compact factor
$\mathbb{P}^1$ shrinks to zero.
\item[c)] The partial resolution $Y_{EH} = \text{ALE}_{A_1}\times{\mathbb C}$,
which again can be seen as the total
space of the canonical line bundle of the noncompact exceptional divisor
$D_{nc}$.
In this case it is the compact exceptional divisor that
disappears.
\end{description}
The three aforementioned manifolds are realized in
$\zeta$ space $\mathbb{R}^3$ in the way we summarize
below.
There are four intersecting planar walls $\mathcal{W}_{0,1,2,3}$
that partition the entire $\zeta$ space into eight disjoint
convex cones (stability chambers).
\begin{enumerate}
\item
The smooth space $Y$ is realized in all interior points of all the
chambers and in all generic points of the three walls of type 0,
namely $\mathcal{W}_0$, $\mathcal{W}_1$, $\mathcal{W}_3$.
\item The singular space $Y_3\longrightarrow \mathbb{P}[1,1,2]$ is
realized in all generic points of the type III wall $\mathcal{W}_2$. The
latter contains the Kamp\'{e} line and the two Cardano edges that
are also realization of $Y_3$.
\item The smooth space $Y_{EH}$ is realized on
the homonymous edge, which is the intersection of the wall
$\mathcal{W}_0$ with the wall $\mathcal{W}_2$.
\end{enumerate}
\paragraph{Wall Crossing.} Crossing a wall is the mathematical
analog of a physical phase transition. When the wall we cross is
of type 0, we simply go from one realization of the $Y$ manifold
to another one: on the two sides of the wall the supporting
variety is the same, yet the tautological bundles may be
different, and on the wall itself we have yet a third realization of
$Y$, again with a possibly different configuration of the tautological line bundles.
Most dramatic is the crossing of a wall of type III. In this case we
go from one realization of the $Y$ manifold to another one passing
through a degenerate singular manifold that is located on the wall.
There is a simple numerical procedure to visualize such a
phenomenon. Let us explain how it works.
Considering the three-dimensional Euclidean space with coordinates $X_1,X_2,X_3$, a
solution of the moment map system \eqref{sistemico} defines a
two-dimensional surface in this space. This is so because the
coefficients of the system depend on the two parameters $\Sigma$, $U$,
or alternatively $\varpi,\mho$. Creating a grid of
$\varpi,\mho$-values we can obtain a picture of the aforementioned
two-dimensional surface by interpolating the numerical solutions of
the system \eqref{sistemico} in all points of the grid at fixed
$\zeta$ parameters. Such a picture is like a photogram of a movie.
The other photograms of this movie are provided by repeating the
same procedure with other $\zeta$ parameters. The effect of
crossing is better visualized when all the photograms are arranged
along a line in $\zeta$ space that crosses the wall and is
orthogonal to it.
Let us consider the unique type III wall $\mathcal{W}_2$ and the
line that crosses it orthogonally, displayed in
fig.\ref{direttoria}.
\begin{figure}
\begin{center}
\includegraphics[height=8cm]{traversmuroIW2.png}
\caption{\label{direttoria}{Path crossing the $\mathcal{W}_2$ wall.
The line with an arrowhead shown in this figure is the one we have
chosen to create a movie of wall crossing. }}
\end{center}
\end{figure}
Along this line we have numerically constructed a few photograms as
previously described. In fig.~\ref{fotogrammi} we display three of
them: one before crossing, one on the wall, and one after crossing.
\begin{figure}
\begin{center}
\includegraphics[height=5cm]{IW2cross2.png}
\includegraphics[height=5cm]{IW2cross3.png}
\includegraphics[height=5cm]{IW2cross4.png}
\caption{\label{fotogrammi}{Crossing the $\mathcal{W}_2$ wall.
Before crossing the solution of the algebraic system
\eqref{sistemico} traces a surface in $\pmb{X}$ space. After
crossing the solution traces another surface. Just on the wall the
solution traces a line rather than a surface. This is just the
symptom of degeneration of the corresponding manifold. }}
\end{center}
\end{figure}
As is evident from the figure, exactly on the wall the surface
representing the solution is replaced by a line. This happens
because the solution for $X_1,X_2,X_3$ is expressed in terms of
simple functions of a single function $Z(\varpi,\mho)$ of the two
variables. This obviously yields the parametric equation
of a line. We already know this for the Kamp\'{e}
line, where we have eq.~\eqref{raschiotto} and the function $Z$ is
defined as a positive real root of the sextic equation
\eqref{sesticina}. Actually a similar result applies to all points of
the $\mathcal{W}_2$ plane, namely for:
\begin{equation}\label{circopedestre}
\zeta \, = \, \left\{x,x+y,y\right\}
\end{equation}
Indeed we can easily prove by direct substitution that for such a
choice of the level parameters the solution of the moment map
equations \eqref{sistemico} can be written as follows:
\begin{eqnarray}
X_1 &=& \sqrt[4]{\frac{Z \left(x Z^2+y\right)^{3/2}}{(x+y) \sqrt{x+y
Z^2}}} \nonumber \\[5pt]
X_2 &=& Z \label{portapannolini} \\
X_3 &=& \frac{\sqrt[4]{\frac{Z}{x+y}} \left(x+y
Z^2\right)^{3/8}}{\sqrt[8]{x Z^2+y}} \nonumber
\end{eqnarray}
where $Z$ is, by definition, the real positive solution of the
following equation:
\begin{equation}
\frac{U \left(Z^4-1\right) \sqrt{Z (x+y)} \left(x
Z^2+y\right)^{3/4}}{\sqrt[4]{x+y Z^2}}+Z \left(x Z^2+y\right)
\left(\sqrt{Z (x+y)} \sqrt[4]{\left(x Z^2+y\right) \left(x+y
Z^2\right)}+\Sigma \left(Z^2-1\right)\right) \, = \,
0\label{ekaterina}
\end{equation}
Note that the solution \eqref{portapannolini}, \eqref{ekaterina}
becomes that of the Kamp\'{e} manifold
\eqref{raschiotto}, \eqref{sesticina} when $x=y=s$. Similarly for
either $x=0$ or $y=0$ the solution
\eqref{portapannolini}, \eqref{ekaterina} degenerates into either version
of the solution defining the Cardano manifold. In that case the
degree of the equation reduces from six to four allowing for the use
of Cardano's formula. This is the conclusive proof that at any point
of the type III wall we realize the same manifold $Y_3$. Why is it
degenerate? The answer is that the very fact that we express all the
$X_I$ in terms of a single function $Z(\varpi,\mho)$ implies that
the Chern classes of all the tautological bundles are cohomologous
and, correspondingly, that the homology group has dimension $1$
rather than $2$.
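To make the wall solution concrete, the root $Z$ of eq.~\eqref{ekaterina} can be computed numerically. The following is a minimal pure-Python sketch (all function names are ours; the bisection bracket $(0,1)$ assumes $\Sigma, U > 0$, for which the left-hand side is negative near $Z=0$ and positive at $Z=1$):

```python
import math

def resolvent(Z, x, y, Sigma, U):
    """Left-hand side of the degree-six condition defining Z(varpi, mho)
    on the type III wall zeta = {x, x+y, y} (eq. (ekaterina))."""
    A = x * Z**2 + y
    B = x + y * Z**2
    s = math.sqrt(Z * (x + y))
    term1 = U * (Z**4 - 1) * s * A**0.75 / B**0.25
    term2 = Z * A * (s * (A * B)**0.25 + Sigma * (Z**2 - 1))
    return term1 + term2

def solve_Z(x, y, Sigma, U, lo=1e-12, hi=1.0, iters=200):
    """Bisection for the positive real root; for Sigma, U > 0 a sign
    change is guaranteed on (0, 1)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if resolvent(lo, x, y, Sigma, U) * resolvent(mid, x, y, Sigma, U) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def wall_solution(x, y, Sigma, U):
    """Moment-map solution on the wall, eq. (portapannolini)."""
    Z = solve_Z(x, y, Sigma, U)
    X1 = (Z * (x * Z**2 + y)**1.5 / ((x + y) * math.sqrt(x + y * Z**2)))**0.25
    X2 = Z
    X3 = (Z / (x + y))**0.25 * (x + y * Z**2)**0.375 / (x * Z**2 + y)**0.125
    return X1, X2, X3
```

Setting $x=y=s$ one recovers the Kamp\'{e} solution, while $x=0$ or $y=0$ reduces to the Cardano case discussed above.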
\section{Conclusions}
What we have done in this paper was already described at the end of
the introductory Section \ref{introito} and we do not repeat it
here. We rather point out the perspectives opened up by our results
and the steps that still need to be taken in the development of our
program.
As we have advocated, in view of the conceivable physical
applications of quotient singularity resolutions in the context of
brane theory, it is quite relevant to understand the explicit
analytic structure of the exceptional divisors and of the homology
2-cycles, while at the same time having command of the explicit form
of the forms $\omega^{(1,1)}$ representing the first Chern classes
of the tautological bundles. In this paper we succeeded in
these two tasks because we had two very strong allies:
\begin{enumerate}
\item Toric Geometry, applicable to the case of abelian cyclic
groups $\Gamma$, which allows one to identify the possible full
or partial resolutions of the quotient singularity, to study their
geometry in great detail, and to determine the appropriate
coordinate transformations leading to the equations of the divisors.
\item The technique of localizing the moment map equations on
divisors and curves, which allows one to solve them explicitly on these loci.
\end{enumerate}
Furthermore, we were able to clarify the Chamber Structure which, as
already stressed, encodes the phase diagram of the corresponding
gauge theory.
However, it is on the two fronts mentioned above that mathematical
work is still needed. The Ito-Reid theorem applies in general, and in
particular to nonabelian groups: an urgent task is the extension of
the sort of explicit results obtained in this paper to nonabelian
cases. A second urgent task, equally important for abelian
and nonabelian cases, is the explicit solution of the moment map
equations in cases where the algebraic equation is of degree $d\, >
\, 4$. To this effect, quite valuable help might come from the
recent developments in algebraic geometry that represent roots of
higher order algebraic equations in terms of theta constants
associated with suitable hyperelliptic Riemann surfaces
\cite{gicovo6istica}. We plan to study that case.
From the physical side, as we already stated, we plan to utilize the
present explicit singularity resolution to study the corresponding
supergravity brane solutions and dual gauge theories. In this
context the geometrical results obtained here might be the origin of
new interesting D3-brane and M2-brane physics. Such applications to
the holographic scheme demand the construction of \textit{Ricci flat
metrics} on the same manifolds $Y$ that are derived from the
Kronheimer construction and the precise identification of their
deformation parameters with the Fayet-Iliopoulos parameters $\zeta$.
These questions and issues will be addressed in forthcoming
publications \cite{conmasbia}.
\section*{Acknowledgements}
We acknowledge with great pleasure important and illuminating
discussions with our colleagues and close friends, Mario Trigiante,
Massimo Bianchi, Francisco Morales and Francesco Fucito.
U.B.~likes to thank CEMPI (Centre Europ\'een pour les Math\'ematiques,
la Physique et leur Interactions) for supporting his visit to Universit\'e
de Lille in February-March 2019 (Labex CEMPI ANR-11-LABX-0007-01).
U.B.'s research is also supported by GNSAGA-INdAM and by the PRIN
project ``Geometria delle Variet\`a Algebriche''.
\addcontentsline{toc}{section}{Appendix: structure of the Hirzebruch surfaces}
\section*{Appendix: structure of the Hirzebruch surfaces}
\label{hirzegeo}
Let us give some details about the geometry of the second Hirzebruch surface
$\mathbb F_2$, which appears as the compact component of the exceptional
divisor of the resolution $Y\to \mathbb C^3/Z_4$.
Let $(U,V)$ be homogeneous coordinates on
$\mathbb{P}^1$ and $(X,Y,Z)$ homogeneous
coordinates of $\mathbb{P}^2$.
\begin{definizione}
The $n$-th Hirzebruch surface $\mathbb F_n$ is defined as
the locus cut out in $\mathbb{P}^1\times \mathbb{P}^2$ by the
following equation of degree $n+1$:
\begin{equation}\label{irgobrutto}
0 \, = \, \mathcal{P}_n(U,V,X,Y,Z) \, = \, X \, U^n \, - \, Y\,
V ^n
\end{equation}
\end{definizione}
It is convenient to describe the Hirzebruch surface in terms of
inhomogeneous coordinates choosing open charts both for
$\mathbb{P}^1$ and for $\mathbb{P}^2$. We can
cover $\mathbb{P}^1$ with two charts: that where $V\neq 0$ and that
where $U\neq 0$. For $\mathbb{P}^2$, we need instead three charts
respectively corresponding to $Z \neq 0$, $Y\neq 0$ and $X\neq 0$.
Hence we have a total of six charts.
\paragraph{Description of $\mathbb F_n$ in the chart $V\neq 0, Z\neq 0$.} In
accordance with the chosen conditions we set:
\begin{equation}\label{nonmolli1}
s=\frac{U}{V} \quad ; \quad v= \frac{X}{Z} \quad ; \quad w = \frac{Y}{Z}
\end{equation}
and from eq.~\eqref{irgobrutto} we obtain:
\begin{equation}\label{irgobruttissimo}
0 \, = \, v \, s^n \, - \, w.
\end{equation}
Introducing a new complex variable $t$, a simple parametric
description of the locus satisfying the constraint
\eqref{irgobruttissimo} is provided by setting:
\begin{equation}\label{compiacio}
v(u,t)= tu^{-1} \quad ; \quad w(u,t) = t \, u^{n-1} \quad ;
\quad s(u,t) \, = \, u
\end{equation}
Hence $(u,t)$ can be identified with a system of local coordinates
on $\mathbb F_n$. Let us now recall that the group $\mathrm{SU(2)}$
is the isometry group of $\mathbb{P}^1$ equipped with the
Fubini-Study K\"ahler metric. Given a group element
\begin{equation}\label{gelemento}
\mathbf{g} = \left(
\begin{array}{cc}
a & b \\
c & d \\
\end{array}
\right) \, \in \, \mathrm{SU(2)}
\end{equation}
its action on the coordinate $u$ (regarded as a coordinate of
$\mathbb{P}^1$) is given by the corresponding fractional linear
transformation
\begin{equation}\label{marescalzo}
\mathbf{g}(u) \, = \, \frac{a \, u + b}{c\, u + d}
\end{equation}
The action of $\mathbf{g}$ on the coordinate $u$ can be extended
also to the variable $t$ in such a way that the image point still
belongs to the Hirzebruch surface $\mathbb F_n$. We just
set:
\begin{equation}\label{ciabattarotta}
\forall \mathbf{g} \in \mathrm{SU(2)} \quad : \quad
\mathbf{g}\left(u,t\right) \, = \,\left(\frac{a \, u + b}{c\, u +
d}, \quad t \, \left(c \,u+d\right)^{-n}\right)
\end{equation}
and we easily verify:
\begin{description}
\item[a)] The transformation \eqref{ciabattarotta} respects the
group product and provides a nonlinear representation since:
\begin{equation}\label{belan}
\mathbf{g}_1\left(\mathbf{g}_2\left(u,t\right)\right) \, =
\,\mathbf{g_1}\cdot\mathbf{g}_2\left(u,t\right)
\end{equation}
\item[b)] The transformation \eqref{ciabattarotta} maps points of
the Hirzebruch surface into points of the same surface since:
\begin{equation}\label{gerogammo}
v\left(\mathbf{g}(u,t)\right) \, \left(s\left(\mathbf{g}(u,t)\right)\right)^n -
w\left(\mathbf{g}(u,t) \right) \, = \,0
\end{equation}
if:
\begin{equation}\label{gerogammus}
v\left(u,t\right) \, \left(s(u,t)\right)^n -
w\left(u,t \right) \, = \,0
\end{equation}
\end{description}
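Both properties can be checked numerically in the local coordinates $(u,t)$. Below is a small pure-Python sketch (all names are ours) verifying the group law \eqref{belan} for the nonlinear action \eqref{ciabattarotta} in the case $n=2$ relevant for $\mathbb F_2$; since $n$ is an integer, the power $(c\,u+d)^{-n}$ is single-valued and the identity holds up to rounding:

```python
import cmath
import math

N = 2  # the second Hirzebruch surface F_2

def su2(theta, phi, psi):
    """Generic SU(2) element: |a|^2 + |b|^2 = 1, c = -conj(b), d = conj(a)."""
    a = math.cos(theta) * cmath.exp(1j * phi)
    b = math.sin(theta) * cmath.exp(1j * psi)
    return (a, b, -b.conjugate(), a.conjugate())

def act(g, u, t, n=N):
    """Action (ciabattarotta) on the local coordinates (u, t) of F_n."""
    a, b, c, d = g
    return (a * u + b) / (c * u + d), t * (c * u + d) ** (-n)

def mul(g1, g2):
    """Matrix product of two 2x2 group elements."""
    a1, b1, c1, d1 = g1
    a2, b2, c2, d2 = g2
    return (a1 * a2 + b1 * c2, a1 * b2 + b1 * d2,
            c1 * a2 + d1 * c2, c1 * b2 + d1 * d2)
```

Property b) then holds automatically, because $v=t\,u^{-1}$, $w=t\,u^{n-1}$, $s=u$ satisfy $v\,s^n-w=0$ identically at the transformed point as well.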
On the Hirzebruch surface, as described in the current open chart
(which is dense in the manifold), we can introduce nice K\"ahler
metrics that are invariant under the action of $\mathrm{SU(2)}$ as
given in eq.~\eqref{ciabattarotta}. In Section \ref{inducedF2} we
derive one of them (see eqs.~\eqref{samotracia} and \eqref{Jpolare})
induced on the compact exceptional divisor by the K\"ahler quotient
construction.
\bigskip \frenchspacing
\addcontentsline{toc}{section}{References}
\section{Introduction}
Recent research has suggested that deep neural networks are dramatically over-parametrized. In natural language processing alone, most state-of-the-art neural networks have computational and memory complexities that scale with the size of the vocabulary. Practitioners have developed numerous methods to reduce the complexity of these models---either before, during, or after training---while retaining existing performance. Some of these methods include quantization \cite{gong2014compressing, hubara2017quantized} and different flavors of pruning \cite{zhu2017prune,liu2018rethinking,frankle2018the,gale2019state}.
In particular, the Lottery Ticket Hypothesis \cite{frankle2018the} proposes that small, sparse subnetworks are embedded within large, over-parametrized neural networks. When trained in isolation, these subnetworks can achieve commensurate performance using the same initialization as the original model. The lottery ticket training procedure is formalized as an iterative three-stage approach: (1) train an over-parametrized model with initial parameters $\theta_0$; (2) prune the trained model by applying a mask $m \in \{0,1\}^{|\theta|}$ identified by a sparsification algorithm; (3) reinitialize the sparse subnetwork by resetting its non-zero weights to the initial values ($m \odot \theta_0$) and retrain it. These three stages are repeated for multiple rounds. If the final subnetwork achieves similar (or better) test performance in comparison to the original network, a winning \textit{lottery ticket} has been identified.
Evidence of the existence of winning tickets has been empirically shown on a range of tasks, including computer vision, reinforcement learning, and natural language processing \cite{frankle2018the, yu2019play}. However, the merits of lottery ticket training have recently been called into question. In particular, (1) whether keeping the same initialization (e.g., $\theta_0$) is crucial for acquiring tickets \cite{liu2018rethinking}; and (2) whether tickets can generalize across multiple datasets \cite{morcos2019one}.
Our paper investigates the efficacy of lottery tickets when the data distribution changes. We define multiple data domains such that their input distributions are varied. Then, we consider whether subnetworks obtained in a source domain $\mathcal{D}_s$ can be used to specify and train subnetworks in a target domain $\mathcal{D}_t$ where $s \ne t$. Inspired by \citet{liu2018rethinking}, we also experiment with different initialization methods at transfer-time, probing the importance of initial (source domain) values in disparate target domains. We find that subnetworks obtained through lottery ticket training do not completely overfit to particular input distributions, showing some generalization potential when distributional shifts occur. In addition, we discover a \textit{phase transition} point, at which subnetworks reset to their initial values show better and more stable generalization performance when transferred to an arbitrary target domain.
In summary, our contributions are (1) continuing the line of work on the Lottery Ticket Hypothesis \cite{frankle2018the}, showing that tickets exist in noisy textual domains; (2) performing comprehensive experiments pointing towards the transferability of lottery tickets under distributional shifts in natural language processing; and (3) publicly releasing our code and datasets to promote further discussion on these topics\footnote{https://github.com/facebookresearch/pytext}.
\section{Related Work}
There is a large body of work on transfer learning for neural networks \cite{deng2013sparse, yosinski2014how, liu2017sparse, zoph2018learning, kornblith2019better}. Most of these works focus on improving the transferred representation across tasks and datasets. The representation from a source dataset is fine-tuned or learned collaboratively on a target dataset. In contrast, we focus on understanding whether the \textit{architecture} can be transferred and retrained, and whether transferring the initialization is required. Our work is also related to Neural Architecture Search (NAS) \cite{zoph2018learning, liu2018darts, elsken2018neural}. The goal of NAS is to identify well-performing neural networks automatically. Network pruning can be viewed as a form of NAS, where the search space is the sparse topologies within the original over-parameterized network \cite{liu2018rethinking, gale2019state, frankle2018the}.
Iterative magnitude pruning \cite{frankle2018the, frankle2019stable} is a recently proposed method for finding small, sparse subnetworks from large, over-parameterized neural networks that can be trained in isolation to reach a similar (or better) test accuracy. To obtain these re-trainable sparse subnetworks, \citet{frankle2018the} uses an iterative pipeline that involves training a model, removing ``redundant'' network connections identified by a sparsification algorithm, and re-training the subnetwork with the remaining connections. In particular, the experiments in \citet{frankle2018the} show that it is critical to re-initialize the subnetworks using the \textit{same} initial values after each round of the iterative pipeline.
However, the importance of re-using the original initialization is questioned in \citet{liu2018rethinking}, where the authors show that competitive performance of the sparse subnetworks can be achieved with random initialization as well. \citet{morcos2019one} investigate the transferability of lottery tickets across multiple optimizers and datasets for supervised image classification, showing that tickets can indeed generalize \cite{morcos2019one}. Beyond the differences between our domain, task, and datasets, our work carries an important distinction. In \citet{morcos2019one}, the authors refer to the \textit{transfer of initialization} as both the \textit{transfer of the sparse topologies} and the \textit{transfer of the initial values} of the subnetworks. Therefore, it is unclear whether the \textit{sparse topology} alone can be transferred across datasets or the topology combined with the initial values must be exploited jointly to achieve transferability. In our work, we decouple this question by investigating the influence of different initialization strategies on the sparse architecture during the process of finding the winning tickets and after the transfer to other domains.
\section{Task and Datasets}
\paragraph{Distributional Shifts}
Let $(x^s_i, y^s_i) \in \mathcal{X} \times \mathcal{Y}$ denote a pair of training samples from domain $\mathcal{D}_s$. Let $f(x; \theta)$ be a function (e.g., a deep neural network) that maps an input from $\mathcal{X}$ to the label space $\mathcal{Y}$, parameterized by $\theta$. In this work, the sparsity of $\theta$ is induced by the lottery ticket training process \cite{frankle2018the}. To model distributional shifts, we characterize each domain $\mathcal{D}_i$ as a dataset from the Amazon Reviews corpus \cite{mcauley2013hidden}. The differences in unigram frequencies, semantic content, and random noise mimic the type of distributional shifts that occur in machine learning.
\paragraph{Subword Vocabulary} We ensure each domain $\mathcal{D}$ shares an identical support on $\mathcal{X}$ by encoding the inputs using a vocabulary common across all datasets. Word-level vocabularies may introduce problems during domain transfer as certain words potentially only appear within a particular domain. On the other end of the spectrum, character-level vocabularies ameliorate this issue but may not contain enough expressive power to model the data. We elect to use a subword vocabulary, balancing the out-of-vocabulary and effectiveness problems introduced by the word- and character-level vocabularies, respectively. Technical details for creating the shared subword vocabulary are presented in \S\ref{method:vocab}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{jsd.png}
\caption{Jensen-Shannon Divergence scores on subword unigram distributions for each domain pair $(\mathcal{D}_i, \mathcal{D}_{i'})$. Domains include Books (B), Electronics (E), Movies (M), CDs (C), and Home (H). Values are scaled by $1e^5$ for presentation.}
\label{fig:jsd}
\end{figure}
\paragraph{Divergence Scores} Given an identical support for all data distributions, we now quantify the distributional shifts between our domains using Jensen-Shannon Divergence (JSD). JSD is a symmetric measure of similarity between two (continuous) probability distributions $p$ and $q$ with a proxy, averaged distribution $m = \frac{1}{2}(p+q)$:
\begin{equation}
\mathrm{JSD}(p||q) = \frac{1}{2}\mathrm{KL}(p||m) + \frac{1}{2}\mathrm{KL}(q||m)
\label{eq:jsd}
\end{equation}
where $\mathrm{KL}(p||q)$ in Eq. \ref{eq:jsd} denotes the Kullback-Leibler divergence, defined as:
\begin{equation}
\mathrm{KL}(p||q) = \int_{-\infty}^{\infty} p(x)\log \frac{p(x)}{q(x)}dx
\end{equation}
Figure \ref{fig:jsd} displays the divergence scores between our datasets. On average, there is high disagreement with respect to the prevalence and usage of subwords in each domain, with Electronics$\rightarrow$Home the most similar and CDs$\rightarrow$Home the most dissimilar.
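For discrete subword unigram distributions, the JSD of Eq. \ref{eq:jsd} reduces to finite sums. The following is a minimal pure-Python sketch (function names are ours, not taken from our released code):

```python
import math
from collections import Counter

def unigram_dist(tokens):
    """Empirical unigram distribution over a token sequence."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def kl(p, q):
    """Discrete KL divergence; with m = (p + q) / 2, q always dominates p."""
    return sum(pi * math.log(pi / q[tok]) for tok, pi in p.items() if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    support = set(p) | set(q)
    m = {tok: 0.5 * (p.get(tok, 0.0) + q.get(tok, 0.0)) for tok in support}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

JSD is bounded above by $\log 2$ (in nats), with the bound attained for distributions with disjoint supports.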
\paragraph{Sentiment Analysis} Finally, we introduce our base task for experimentation. Our models are evaluated on a binary sentiment analysis task constructed from five categories in the Amazon Reviews corpus: books (B), electronics (E), movies (M), CDs (C), and home (H). The dataset originally provides fine-grained sentiment labels ($1$ through $5$), so we group $1$, $2$ as negative and $4$, $5$ as positive. Following \citet{peng2018cross}, reviews with neutral ratings ($3$) are discarded. We sample 20K train, 10K validation, and 10K test samples from each category, ensuring there is an equal distribution of positive and negative reviews.
\section{Methods}
In this section, we discuss our technical methods. First, we describe the subword vocabulary creation process (\S\ref{method:vocab}). Second, we cover the underlying model used in the sentiment analysis task (\S\ref{method:model}). Third, we detail the lottery ticket training and transferring methods (\S\ref{method:tickets}).
\subsection{Vocabulary} \label{method:vocab}
We use the SentencePiece\footnote{https://github.com/google/sentencepiece} library to create a joint subword vocabulary for our datasets \cite{kudo2018sentencepiece}. The subword model is trained on the concatenation of all five training datasets (100K sentences) using the byte-pair encoding algorithm \cite{sennrich2016neural}. We set the vocabulary size to 8K. The final character coverage is 0.9995, ensuring minimal out-of-vocabulary problems during domain transfer.
\subsection{Model} \label{method:model}
We use convolutional networks (CNN) as the underlying model given their strong performance on numerous text classification tasks \cite{kim2014convolutional, mou2016transferable, gehring2017convolutional}. Let $V$ and $n$ represent the vocabulary of the corpus and maximum sequence length, respectively. Sentences are encoded as an integer sequence $t_1, \cdots, t_n$ where $t_i \in V$. The embedding layer replaces each token $t_i$ with a vector $\mathbf{t}_i \in \mathbb{R}^d$ that serves as the corresponding $d$-dimensional embedding. The vectors $\mathbf{t}_1, \cdots, \mathbf{t}_n$ are concatenated row-wise to form a token embedding matrix $\mathbf{T} \in \mathbb{R}^{n \times d}$.
Our model ingests the embedding matrix $\mathbf{T}$, then performs a series of convolutions to extract salient features from the input. We define a convolutional filter $\mathbf{W} \in \mathbb{R}^{h \times d}$ where $h$ represents the \textit{height} of the filter. The filter is not strided, padded, or dilated. Let $\mathbf{T}[i:j] \in \mathbb{R}^{h\times d}$ represent a sub-matrix of $\mathbf{T}$ extracted from rows $i$ through $j$, inclusive. The feature map $\mathbf{c} \in \mathbb{R}^{n-h+1}$ is induced by applying the filter to each possible window of $h$ words, i.e.,
\begin{equation}
c_i = f\Big( \big\langle \mathbf{T}[i:i+h],\mathbf{W}\big\rangle_{\fro} + b\Big)
\end{equation}
for $1\leq i \leq n-h+1$, where $b \in \mathbb{R}$ is a bias term, $f$ is a non-linear function, and the Frobenius inner product is denoted by $\langle \mathbf{A},\mathbf{B}\rangle_{\fro} = \sum_{i=1}^h \sum_{j=1}^d \mathbf{A}_{ij} \mathbf{B}_{ij}$. 1-max pooling \cite{collobert2011natural} is applied on $\mathbf{c}$, defined as $\hat{c} = \textnormal{max}\{\mathbf{c}\}$. This is performed to propagate the maximum signal throughout the network and reduce the dimensionality of the input.
The process described above creates \textit{one} feature from \textit{one} convolution with window $h$ followed by a pooling operation. To extract multiple features, the model uses several convolutions with varying $h$ to obtain features from different sized $n$-grams in the sequence. The convolutional (and pooled) outputs are concatenated along the channel dimension, then fed into a one-layer MLP to obtain a distribution over the $c$ classes.
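The per-filter feature extraction described above can be sketched in a few lines of plain Python (a didactic re-implementation with names of our choosing, not our actual model code): one filter of height $h$ slides over the embedding matrix, each window is scored with the Frobenius inner product, and 1-max pooling keeps the strongest activation:

```python
def relu(x):
    """Non-linearity f applied to each window score."""
    return max(0.0, x)

def frobenius(A, B):
    """Frobenius inner product <A, B>_F of two h x d matrices (nested lists)."""
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def conv_feature(T, W, b):
    """Feature map c_i = f(<T[i:i+h], W>_F + b) followed by 1-max pooling."""
    h, n = len(W), len(T)
    c = [relu(frobenius(T[i:i + h], W) + b) for i in range(n - h + 1)]
    return max(c)
```

In the full model, several such filters with different heights $h$ are applied in parallel and their pooled outputs concatenated, as described in the text.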
\subsection{Lottery Tickets} \label{method:tickets}
\subsubsection{Initialization}
\label{sec:init}
The embedding matrix is initialized from a unit Gaussian, $\mathbf{T} \sim \mathcal{N}(0,1)$. The convolutional and MLP layers use He initialization \cite{he2015delving}, whose bound is defined as
\begin{equation}
b = \sqrt{\frac{6}{(1+a^2) \times \mathrm{fan\_in}}}
\end{equation}
where $a$ and $\mathrm{fan\_in}$ are parameters calculated for each weight. The resulting weights have values uniformly sampled from $\mathcal{U}(-b,b)$.
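As a concrete sketch, the bound and sampling can be written as follows (a hypothetical helper, not from our released code; in practice $a$ and $\mathrm{fan\_in}$ are computed per layer by the framework):

```python
import math
import random

def he_uniform(fan_in, a=0.0, size=1):
    """He initialization: draw `size` weights from U(-b, b) with
    b = sqrt(6 / ((1 + a^2) * fan_in))."""
    b = math.sqrt(6.0 / ((1.0 + a * a) * fan_in))
    return [random.uniform(-b, b) for _ in range(size)]
```

With $a=0$ (e.g., for ReLU activations) the bound reduces to the familiar $\sqrt{6/\mathrm{fan\_in}}$.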
\subsubsection{Training}
\label{sec:train}
We use iterative pruning with alternating cycles of training and pruning to obtain the tickets \cite{han2015learning, frankle2018the}.
For clarity, we define a \textit{round} as training a network for a fixed number of epochs. We begin with a seed round $r_0$ in which the model does not undergo any pruning, and then procure tickets in a series of lottery ticket training rounds.
In each successive round $r_{i>0}$, a fraction $p$ of the weights that survived round $r_{i-1}$ are pruned (according to a sparsification algorithm, discussed below) to obtain a smaller, sparser subnetwork; this is denoted by $f(x;m_i\odot \theta_i)$ where $m_i$ and $\theta_i$ represent the sparse mask and weights at round $r_i$. The weights $\theta_i$ of this subnetwork are set according to an \textit{initialization strategy} and the subnetwork is re-trained to convergence. We define the \textit{sparsity} as the fraction of weights in the network that are exactly zero. In each round, we prune $p\%$ of the surviving weights. Therefore, the resulting ticket has sparsity $1-(1 - p\%)^{r_{total}}$, where $r_{total}$ is the total number of lottery ticket training rounds.
Next, we discuss the sparsification algorithm used to prune weights in each round $r_i$. Let $\mathbf{p}_i$ denote the vectorized collection of trainable parameters in layer $i \ge 0$, with the embedding layer as layer $0$. After re-training the (sub-)networks in each round, we apply the $\ell_0$ projection on the parameters in each layer, i.e.
\begin{equation}
\argmin_{\mathbf{p}} ||\mathbf{p}-\mathbf{p}_i||^{2}_{2}
\label{eq:10}
\end{equation}
subject to $\card(\mathbf{p}) \le k_i$, where $\card(\mathbf{p})$ denotes the number of non-zeros in $\mathbf{p}$. The optimization problem in Eq. \ref{eq:10} can be solved analytically by sorting the elements of $\mathbf{p}_i$ with respect to their absolute values and picking the top $k_i$ elements with the largest magnitude \cite{jain2017non, zhu2017prune}. We use the sparsity hyperparameter $p$ introduced above to decide $k_i$ for each layer. Let $\mathrm{len}(\mathbf{p}_i)$ denote the total number of trainable parameters in layer $i$. We set $k_i = (1-p\%) \times \mathrm{len}(\mathbf{p}_i)$ for each layer, so that $p\%$ of the surviving weights are pruned in each round. In accordance with our training procedure, once a weight is pruned, it is no longer a trainable parameter; hence, $\mathrm{len}(\mathbf{p}_i)$ is strictly decreasing after each round.
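The per-layer $\ell_0$ projection and the resulting sparsity schedule can be sketched as follows (a simplified list-based version with names of our choosing; our released code operates on full parameter tensors):

```python
def magnitude_prune(weights, keep):
    """Analytic solution of the per-layer l0 projection: keep the `keep`
    largest-magnitude entries, zero out the rest, and return the mask."""
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]), reverse=True)
    mask = [0] * len(weights)
    for i in order[:keep]:
        mask[i] = 1
    pruned = [w * m for w, m in zip(weights, mask)]
    return pruned, mask

def sparsity_after(p, rounds):
    """Overall sparsity after `rounds` lottery ticket training rounds,
    each pruning a fraction p of the surviving weights."""
    return 1.0 - (1.0 - p) ** rounds
```

For example, with $p=35\%$ and $20$ rounds the final sparsity exceeds $99.9\%$, which is the extreme regime discussed in our experiments.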
\begin{figure}[t]
\centering
\includegraphics[scale=0.15]{transfer/transfer.png}
\caption{Visualization of the subnetwork transfer process. Purple denotes elements from the source domain, while blue denotes elements from the target domain. Tickets are composed of two elements: (1) the sparsified mask ($m_i$) and (2) the initial parameter values ($\theta_i$). During transfer, we create subnetworks in the target domain with the mask borrowed from the source domain, but with potentially different parameters. We use $\theta_i'$ to denote that these parameters are set according to some \textit{initialization strategy}, which we discuss further in our experiments (\S\ref{sec:exp}).}
\label{fig:clarify}
\end{figure}
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=0.325\textwidth]{transfer/books.png}
\includegraphics[width=0.325\textwidth]{transfer/movies.png}
\includegraphics[width=0.325\textwidth]{transfer/electronics.png}
\includegraphics[width=0.325\textwidth]{transfer/cds.png}
\includegraphics[width=0.325\textwidth]{transfer/home.png}
\end{center}
\caption{Results obtaining lottery tickets on the Books, Movies, Electronics, CDs, and Home categories of the Amazon Reviews dataset \cite{mcauley2013hidden}. Experiments are repeated five times, where the solid lines represent the mean and shaded regions represent the standard deviation. Note that the $x$-axis ticks are \textit{not} uniformly spaced.}
\label{fig:obtain}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.325\textwidth]{transfer/books_cds.png}
\includegraphics[width=0.325\textwidth]{transfer/electronics_home.png}
\includegraphics[width=0.325\textwidth]{transfer/cds_books.png}
\includegraphics[width=0.325\textwidth]{transfer/books_electronics.png}
\includegraphics[width=0.325\textwidth]{transfer/electronics_movies.png}
\includegraphics[width=0.325\textwidth]{transfer/cds_home.png}
\includegraphics[width=0.325\textwidth]{transfer/books_home.png}
\includegraphics[width=0.325\textwidth]{transfer/electronics_cds.png}
\includegraphics[width=0.325\textwidth]{transfer/cds_movies.png}
\end{center}
\caption{Results transferring lottery tickets on nine transfer tasks constructed from the five categories of the Amazon Reviews dataset \cite{mcauley2013hidden}. Experiments are repeated five times, where the solid lines represent the mean and shaded regions represent the standard deviation. Note that the $x$-axis ticks are \textit{not} uniformly spaced.}
\label{fig:transfer}
\end{figure*}
\subsubsection{Transferring}
The lottery ticket training procedure outlined in \S\ref{sec:train} yields a batch of subnetworks $f(x^s;m_1\odot \theta), \cdots, f(x^s;m_n\odot \theta)$ where $x^s$ represents the inputs from a \textit{source} domain $\mathcal{D}_s$ and $m_i$ represents the sparse mask used to prune weights at round $r_i$. During transfer, we construct a new batch of subnetworks $f(x^t;m_1\odot \theta'), \cdots, f(x^t;m_n\odot \theta')$ to be evaluated on inputs from a (non-identical) \textit{target} domain $\mathcal{D}_t$ with masks derived from the \textit{source} domain. The change in parameter notation ($\theta \rightarrow \theta'$) implies that the subnetworks evaluated in a disparate domain can potentially use a different \textit{transfer} initialization strategy. We clarify this process in Figure \ref{fig:clarify}. In contrast, \citet{morcos2019one} transfers the entire ticket (sparse masks and initial values) to the target domain. Finally, using the new batch of subnetworks, we evaluate each subnetwork $f(x^t;m_i \odot \theta')$ in the target domain for $r_{total}$ rounds. Unlike the canonical ticket training rounds, we do not (additionally) sparsify the subnetworks during transfer. All in all, our transfer task is designed to answer the following question: can the \textit{sparse masks} found in a source domain using lottery ticket training (\S\ref{method:tickets}) be transferred to a target domain with \textit{different initialization strategies} to match the performance of a ticket obtained in the same target domain?
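The decoupling of the sparse mask from the initial values can be made explicit in code. A toy list-based sketch (names are ours; our released code operates on full parameter tensors):

```python
def transfer_subnetwork(source_mask, theta0, strategy="reset", init_fn=None):
    """Instantiate a target-domain subnetwork f(x_t; m * theta'): the sparse
    mask m comes from the source domain, while theta' is either the original
    initial values theta0 ('reset') or freshly sampled values drawn by
    `init_fn` ('random')."""
    theta = theta0 if strategy == "reset" else [init_fn() for _ in theta0]
    return [w * m for w, m in zip(theta, source_mask)]
```

The resulting masked parameter vector is then trained on target-domain data without any additional sparsification.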
\section{Experiments} \label{sec:exp}
\subsection{Settings}
\label{sec:expsetting}
Our CNN uses three filters ($h \in [3,4,5]$), each with $127$ channels, and ReLU activation \cite{nair2010rectified}. We fix the maximum sequence length to $500$ subwords. The embeddings are $417$-dimensional and trained alongside the model. We opt not to use pre-trained embeddings to ensure the generalizability of our results. Additionally, we regularize the embeddings with dropout \cite{srivastava2014dropout}, $p=0.285$. The MLP contains one hidden layer with a dimension of $117$. Hyperparameters were discovered using Bayesian hyperparameter optimization \cite{snoek2012practical} on the Books validation set. The models are trained with a batch size of $32$ for a maximum of $15$ epochs. Early stopping is used to save iterative model versions that perform well on a development set. We use the Adam optimizer \cite{kingma2014adam} with a learning rate of $1e^{-3}$ and $\ell_2$ regularization with a weight of $1e^{-5}$.
\subsection{Obtaining Tickets} \label{exp:obtain}
First, we use the lottery ticket training procedure outlined in \S\ref{sec:train} to obtain tickets for our five datasets with $p=35\%$ and $r_{total}=20$. We compare the test performance of the subnetworks using the following baselines:
\begin{itemize}
\item \textsc{Full-Model:} This baseline evaluates the performance of the original network \textit{without} any pruning. In other words, we train a model for a seed round $r_0$, then record its performance.
\item \textsc{Ticket-Reset:} The values of the subnetwork are reset to their \textit{original values} before training. This initialization strategy was used in the earliest formation of the Lottery Ticket Hypothesis \cite{frankle2018the}.
\item \textsc{Ticket-Random:} The values of the subnetwork are reset to \textit{random values} drawn from the initialization distribution(s) of the original network. We sample weights from the distributions outlined in \S\ref{sec:init} to initialize the subnetworks.
\end{itemize}
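Under the common reading of iterative pruning in which $p=35\%$ of the \textit{remaining} weights are removed per round (our assumption about the schedule; the exact procedure is defined in \S\ref{sec:train}), the sparsity levels reached over $r_{total}=20$ rounds follow directly:

```python
# Sparsity schedule implied by removing p = 35% of the remaining weights
# per round (our reading of the schedule, not a quote from the paper).

p, r_total = 0.35, 20
density = [(1 - p) ** r for r in range(r_total + 1)]
sparsity = [1 - d for d in density]

print(f"after 10 rounds: {sparsity[10]:.4f}")
print(f"after 20 rounds: {sparsity[20]:.6f}")
```

After 20 such rounds the surviving fraction of weights is below 0.1\%, which is consistent with the extreme-sparsity regime (over 99.9\%) examined in the experiments.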
The results are shown in Figure \ref{fig:obtain}. For all datasets, \textsc{Ticket-Reset} shows the best performance, notably outperforming \textsc{Full-Model} in early stages of sparsification (0-90\%) for the Books, Electronics, and Home datasets. This demonstrates that deep neural networks---especially those for sentiment analysis---are highly over-parameterized, and the sparsity induced by lottery ticket training can help to increase performance. This observation is consistent with \citet{louizos2018learning}, which also showed that sparse networks can have a regularization effect that results in better generalization performance. In addition, we observe that \textsc{Ticket-Reset} and \textsc{Ticket-Random} have similar test performance until about 96\% sparsity. This casts some doubt on whether the initial values truly matter for sparse models, as the randomly sampled values seem to fit sparse masks well.
However, a \textit{phase transition} occurs in the high sparsity regime, where the differences between \textsc{Ticket-Reset} and \textsc{Ticket-Random} are significantly enlarged. The performance of \textsc{Ticket-Random} becomes highly unstable and drops off much faster than \textsc{Ticket-Reset} after 96\% sparsity. In contrast, \textsc{Ticket-Reset} remains relatively stable---even with sparsity levels over 99.9\%---pointing towards the enigmatic importance of original values in extreme levels of sparsity.
\subsection{Transferring Tickets}
Next, we use the lottery ticket transferring procedure outlined in \S\ref{method:tickets} to transfer (obtained) subnetworks from a \textit{source} domain to a non-identical \textit{target} domain. Identical to the previous experiment, we use $r_{total}=20$. We compare the test performance of the \textit{transferred} subnetworks using the following baselines:
\begin{itemize}
\item \textsc{Ticket-Target:} This baseline comprises the subnetworks obtained in the target domain using lottery ticket training. We borrow the values for this baseline (without modification) from the \textsc{Ticket-Reset} subnetworks shown in Figure \ref{fig:obtain}, albeit from the domain of interest.
\item \textsc{Masks-Reset:} Under this initialization strategy, the masks obtained in the source domain are used on the target domain and the subnetwork is trained from the \textit{same} initial values as in the source domain.
\item \textsc{Masks-Random:} Under this initialization strategy, \textit{only} the masks are used from the subnetwork obtained in the source domain. The parameters are randomly initialized from the distributions outlined in \S\ref{sec:init} before training on the target domain.
\end{itemize}
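Concretely, the transfer evaluation forms a grid of ordered (source, target) pairs over the five review domains; only four domain names appear in this paper, so the fifth below is a hypothetical placeholder:

```python
from itertools import permutations

# Transfer-evaluation grid sketch: every obtained ticket is re-trained on
# each other domain. Four domain names appear in the text; "Kitchen" is a
# hypothetical placeholder for the fifth dataset.

domains = ["Books", "Electronics", "Home", "CDs", "Kitchen"]
pairs = list(permutations(domains, 2))   # ordered (source, target) pairs
print(len(pairs))  # 20
```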
The results are shown in Figure \ref{fig:transfer}. Both \textsc{Masks-Reset} and \textsc{Masks-Random} show signs of generalization in the early stages of sparsification. Most notably, subnetworks obtained in the CDs domain are extremely robust; both the \textsc{Masks-Reset} and \textsc{Masks-Random} results show stronger performance than \textsc{Ticket-Target}, even in sparsity levels over 99\%. This is relatively surprising as the \textsc{Full-Model} in \S\ref{exp:obtain} achieved the worst performance in the CDs domain. Further inspection of representations learned in this domain will be required to understand its strong ticket performance, which may or may not be a coincidence.
We see a 3-5\% dropoff in performance (up to 90\% sparsity) from tickets identified from the Books and Electronics tasks after transferring. These results together imply that tickets are not completely immune to distributional shifts, although the degradation in test accuracy is not substantial until reaching high sparsity. Nevertheless, we notice the accuracies of \textsc{Masks-Reset} and \textsc{Masks-Random} stay relatively stable from 0-90\% sparsity; they only begin to steadily decline after this point.
Finally, we compare the performance of \textsc{Masks-Reset} and \textsc{Masks-Random}. In the Books task, \textsc{Masks-Random} performs better overall in comparison to \textsc{Masks-Reset}. Its performance is slightly worse in the Electronics and CDs tasks, although it remains comparable to \textsc{Masks-Reset} up to 96\% sparsity. Similar to the results in \S\ref{exp:obtain}, we notice a \textit{phase transition} point after which the initial values (e.g., \textsc{Masks-Reset}) play a much bigger role in maintaining stability and performance in the deeper stages of sparsification.
\section{Discussion}
In this section, we briefly recap our findings, highlighting key points observed through our ticket procuring and transfer experiments. For each section, we also touch on areas for future work.
\paragraph{Evidence of transferability of winning tickets in natural language processing.}
Our experiments show that ``winning tickets'' can indeed be identified in a sentiment task formulated from noisy, user-generated datasets. Moreover, the ``winning tickets'', up to extreme levels of sparsity (e.g., $>90\%$), can be transferred across domains without much loss in accuracy. The fact that tickets can be obtained in noisy environments shows the robustness of the phenomenon across multiple data sources. However, our work only considers a binary sentiment analysis task. Future work can explore other tasks such as multi-class text classification, language modeling, and machine translation.
\paragraph{Randomly initialized tickets are strong baselines.} Consistent with the observations in \citet{liu2018rethinking}, initializing tickets to their \textit{original values} before training is not necessarily required for strong performance. In our experiments, we show that up to high sparsity levels (around 90\%), there is no noticeable difference between the performance of the \textit{originally} and \textit{randomly} initialized subnetworks. Although the sparse masks build on top of each other from round $r_i$ to $r_{i+1}$, randomly initialized subnetworks are still able to settle in local minima with performance comparable to that of the originally initialized subnetworks. However, our work fixes the optimizer and learning rate across experiments. It may be possible that randomly initialized subnetworks reach better minima under different optimization settings.
\paragraph{A \textit{phase transition} point largely influences ticket performance.} As alluded to above, there is almost no difference in performance between originally and randomly initialized subnetworks at moderate sparsity. However, our experiments point towards a crucial turning point---the \textit{phase transition}---at which the initialization begins to matter. In particular, at extreme levels of sparsity (e.g., 99.99\%), originally initialized networks exhibit less variance in test accuracy than randomly initialized tickets. However, the specific sparsity at which the phase transition happens is dataset-dependent. Understanding why this occurs and its relation to other models, datasets, and optimization algorithms can further unveil and explain the phenomena behind lottery tickets.
\section{Applications in Federated Learning}
Federated learning is a scenario where a centralized model is trained over decentralized data, distributed across millions (if not billions) of clients (e.g., electronic devices) \cite{jakub2016federated,bonawitz2019towards}. Crucially, the clients are not allowed to exchange \textit{data} with the central server or each other. Instead, each client can fine-tune a model for a couple of iterations on their own data, then send their (encrypted) parameters or gradients to a server for aggregation. This ``collaborative learning" setup effectively maintains a level of user privacy by ensuring the data always stays on-device. However, this poses several challenges for optimization; as the centralized server does not have access to the data distribution of each client, any neural architecture selection has to be done on either (a) a \textit{different} data source the server has access to or (b) on each individual client. Since (b) is generally quite expensive, the server usually maintains some seed data, as alluded to in (a).
With the transferability of lottery tickets, the server can procure lottery tickets on server-accessible data, then retrain the tickets on client data under the federated learning framework. While there may be a large performance drop when transferring \textit{extremely} sparse networks, our results show that clients can still re-train \textit{moderately} sparse networks with commensurate performance. We believe that this ``sparsify and transfer" procedure has two immediate benefits: (1) past work---including the original incarnation of the lottery ticket hypothesis---has shown that sparse networks can be, under certain conditions, easier to optimize \cite{frankle2018the, morcos2019one, gale2019state}; and (2) sparser sub-networks have significantly less capacity than their large, over-parameterized counterparts, which can alleviate client-server communication costs (e.g., model uploading and downloading) \cite{jakub2016federated,sattler2019robust}.
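As a rough, illustrative calculation (the numbers are ours, not from the paper), sending a sparse update as (index, value) pairs instead of a dense float32 vector already cuts communication substantially at moderate sparsity:

```python
# Illustrative communication-cost estimate for "sparsify and transfer";
# model size and wire encoding are arbitrary assumptions.

n_params = 13_000_000            # hypothetical dense model size
sparsity = 0.90                  # a moderately sparse transferred ticket
bytes_dense = 4 * n_params                      # float32 weights
nnz = round(n_params * (1 - sparsity))          # surviving weights
bytes_sparse = nnz * (4 + 4)                    # int32 index + float32 value

print(bytes_dense / bytes_sparse)  # 5.0x smaller per model exchange
```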
\section{Conclusion}
The Lottery Ticket Hypothesis \cite{frankle2018the} posits that large, over-parameterized networks contain small, sparse subnetworks that can be re-trained in isolation with commensurate test performance. In this paper, we examine whether these tickets are robust against distributional shifts. In particular, we set up domain transfer tasks with the Amazon Reviews dataset \cite{mcauley2013hidden} to obtain tickets in a \textit{source} domain and transfer them in a disparate \textit{target} domain. Moreover, we experiment with the \textit{transfer} initialization of the networks, determining if resetting to initial values (obtained in the source domain) are required for strong performance in the target domain. Our experiments show that tickets (under several initialization strategies) can be transferred across different text domains without much loss up to a very high level of sparsity.
In addition, there is a lot of debate on whether initial value resetting is critical to achieve commensurate test performance. While \citet{frankle2018the, frankle2019stable} present evidence supporting the importance of resetting, \citet{gale2019state, liu2018rethinking} show that sparse re-trainable subnetworks can be found independent of resetting. Our experiments show that this is \textit{not} a yes or no question. Specifically, we show there is a \textit{phase transition} related to sparsity. Resetting is not critical before extreme levels of sparsity (i.e., below 99\%), but the effect of resetting is magnified in high sparsity regimes. Finally, we demonstrate the practical applications of our results in federated learning.
\section*{Acknowledgments}
Thanks to Veselin Stoyanov and our anonymous reviewers for their helpful comments.
\bibliographystyle{acl_natbib_nourl}
\section{Introduction}
The control of molecular dynamics takes an important role in quantum physics and chemistry because of the variety of its applications,
starting from well-established ones such as rotational state-selective excitation of chiral molecules (\cite{perez,sandra}), and going further to applications in quantum information (\cite{yu}). For a general overview of controlled molecular dynamics one can see, for example, \cite{koch}.
Rotations can, in general, couple to vibrations in the so-called ro-vibrational states. In our mathematical analysis, however, we shall restrict ourselves to the rotational states of the molecule. Due to its discrete quantization, molecular dynamics
perfectly fits the mathematical quantum control theory which has been established until now. In fact, the control of
the Schr{\"o}dinger equation has attracted substantial interest in the last 15 years
(see
\cite{Altafini,Coron,
BGRS,
Glaser2015,Keyl,nersesyan} and references therein).
Rigid molecules are subject to the classification of rigid rotors in terms of their inertia moments $I_1\leq I_2\leq I_3$: one distinguishes asymmetric-tops ($I_1<I_2<I_3$), prolate symmetric-tops ($I_1<I_2=I_3$), oblate symmetric-tops ($I_1=I_2<I_3$), spherical-tops ($I_1=I_2=I_3$), and linear-tops ($I_1=0,\,I_2=I_3$).
The problem of controlling the rotational dynamics of a planar molecule
by means of two orthogonal
electric fields
has been analyzed in \cite{BCCS}, where
approximate controllability has been proved using
a suitable non-resonance property of
the spectrum of the rotational Hamiltonian.
In \cite{BCS} the approximate controllability of a linear-top controlled by three orthogonal
electric fields has been established. There, a new sufficient condition for controllability, called the Lie--Galerkin tracking condition, has been introduced in an abstract framework, and applied to the linear-top system.
Here, we study the symmetric-top (prolate, oblate, or spherical) as a generalization of the linear one, characterizing its controllability in terms of the position of its electric dipole moment.
While for the linear-top two quantum numbers $j,m$ are needed to describe the motion, the main and more evident difference here is the presence of a third quantum number $k$, which classically represents the projection of the total angular momentum on the symmetry axis of the molecule. This should not be a surprise, since the configuration space of a linear-top is the $2$-sphere $S^2$, while the symmetric-top evolves on the Lie group ${\rm SO}(3)$, a three-dimensional manifold. As a matter of fact, by fixing $k=0$, one recovers the linear-top as a subsystem inside the symmetric-top. It is worth mentioning that the general theory developed in \cite{BCCS,BCMS,nersesyan} is based on non-resonance conditions on the spectrum of the internal Hamiltonian. A major difficulty in studying the controllability properties of the rotational dynamics is that, even in the case of the linear-top, the spectrum of the rotational Hamiltonian has severe degeneracies at the so-called $m$-levels. The symmetric-top is even more degenerate, due to the additional presence of the so-called $k$-levels.
The Schr{\"o}dinger equation for a rotating molecule controlled by three orthogonal
electric fields reads
\[
\mathrm{i}\dfrac{\partial}{\partial t} \psi(R,t)= H\psi(R,t)+\sum_{l=1}^3u_l(t)B_l(R,\delta)\psi(R,t), \quad \psi(\cdot,t) \in L^2({\rm SO}(3)),
\]
where $H=\frac{1}{2}\Big(\frac{P_1^2}{I_1}+\frac{P_2^2}{I_2}+\frac{P_3^2}{I_3}\Big)$ is the rotational Hamiltonian, $I_1,I_2,I_3$ are the moments of inertia of the molecule, $P_1,P_2,P_3$ are the angular momentum differential operators, and $B_i(R,\delta)=-\langle R \delta, e_i\rangle$ is the interaction Hamiltonian between the dipole moment $\delta$ of the molecule and the direction $e_i$, $i=1,2,3$. Finally, $R \in {\rm SO}(3)$ is the matrix which describes the configuration of the molecule in the space.
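For later reference, the spectrum of $H$ in the symmetric case $I_1=I_2$ has a well-known closed form, recalled here for concreteness (in units $\hbar=1$):

```latex
% Standard symmetric-top spectrum (I_1 = I_2), units \hbar = 1.
% Each level is (2j+1)-fold degenerate in m ("m-degeneracy") and,
% for k \neq 0, additionally degenerate under k <-> -k ("k-degeneracy").
\[
E_{j,k} \;=\; \frac{j(j+1)}{2I_2} \;+\; \frac{k^2}{2}\left(\frac{1}{I_3}-\frac{1}{I_2}\right),
\qquad j \in \mathbb{N}, \quad k=-j,\dots,j .
\]
```

These are exactly the $m$- and $k$-degeneracies that make the controllability analysis delicate.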
\begin{figure}[ht!]
\subfigure[]{
\includegraphics[width=0.3\linewidth, draft = false]{top1.png} \label{top1} }
\subfigure[]{
\includegraphics[width=0.3\linewidth, draft = false]{top2.png} \label{top2} }
\subfigure[]{
\includegraphics[width=0.3\linewidth, draft = false]{top3.png} \label{top3} }
\label{top}
\end{figure}
We shall study the symmetric-top, and set $I_1=I_2$. Our analysis does not depend on whether $I_3\geq I_2$ or $I_3\leq I_2$, so it covers both the prolate and the oblate symmetric-top.
The principal axis of inertia with associated inertia moment $I_3$ is then called \emph{symmetry axis} of the molecule. The position of the electric dipole with respect to the symmetry axis plays a crucial role in our controllability analysis: a symmetric molecule with electric dipole collinear to the symmetry axis will be called \emph{genuine}, otherwise it will be called \emph{accidental} (\cite[Section 2.6]{gordy}).
Most symmetric molecules present in nature are genuine. Nevertheless, it can happen that two moments of inertia of a real molecule are almost equal, by ``accident", although the molecule does not possess an $n$-fold axis of symmetry with $n\geq 3$.\footnote{The existence of an $n$-fold axis of symmetry (i.e., an axis such that a rotation of angle $2\pi/n$ about it leaves the distribution of atoms in space unchanged) with $n\geq 3$ implies that the top is genuinely symmetric.} For instance, the inertia moments of the
chiral molecule HSOH are $I_1\sim I_2\ll I_3$,
while its dipole components are $\delta_1>\delta_2=\delta_1/2\gg \delta_3\neq 0$ (\cite{WinnewisserCEJ2003}). Such slightly asymmetric-tops
are often studied in chemistry and physics in their symmetric-top approximations (see, e.g., \cite{WinnewisserCEJ2003},\cite[Section 3.4]{gordy}), which correspond in general to accidentally symmetric-tops. In this case, closed expressions for the spectrum and the eigenfunctions of $H$ are known. The case of the asymmetric-top goes beyond the scope of this paper, but we remark that accidentally symmetric-tops may be used to obtain controllability of asymmetric-tops via a perturbative approach.
The idea of studying the controllability of quantum systems in general configurations starting from symmetric cases (even if the latter have more degeneracies) has already been exploited, e.g., in \cite{panati,mehats}.
The position of the dipole moment turns out to play a decisive role: when it is neither along the symmetry axis, nor orthogonal to it, as in
Figure~\ref{top}\subref{top2}, then approximate controllability holds, under some non-resonance conditions, as it is stated in Theorem~\ref{rare}. To prove it, we introduce in Theorem \ref{LGTC}
a new controllability test for the multi-input Schr{\"o}dinger equation,
closely related to the Lie--Galerkin tracking condition.
We then apply this result to the symmetric-top system.
The control strategy is based on the excitation of the system with external fields in resonance with three families of frequencies, corresponding to internal spectral gaps.
One frequency is used to overcome the $m$-degeneracy in the spectrum, and this step is quite similar to the proof of the linear-top approximate controllability (Appendix~\ref{appendixA}). The other two frequencies are used in a next step to break the $k$-degeneracy, in a three-wave mixing scheme (Appendix~\ref{appendixB}) typically used in quantum chemistry to obtain enantio- and state-selectivity for chiral molecules (\cite{AGGT,GKL,YY}).
The two dipole configurations to which Theorem \ref{rare} does not apply are extremely relevant from the physical point of view. Indeed, the dipole moment of a symmetric-top lies usually along its symmetry axis (Figure~\ref{top}\subref{top1}), and if not, for accidentally symmetric-tops, it is often found in the orthogonal plane (Figure~\ref{top}\subref{top3}). Here two different symmetries arise, implying the non-controllability of these systems, as we prove, respectively, in Theorems~\ref{genuine} and \ref{accidentally}.
These two conserved quantities stimulated and motivated the study of the classical dynamics of the symmetric-top, presented in the first part of the paper: the first conserved quantity, appearing in Theorem~\ref{genuine}, corresponds to a classical observable, that is, the component of the angular momentum along the symmetry axis, and it turns out to be a first integral also for the classical controlled dynamics, as remarked in Theorem~\ref{genuinecla}.
The second conserved quantity, appearing in Theorem~\ref{accidentally}, is more challenging, because it does not have a counterpart in the classical dynamics, being mainly due to the superposition of $k$ and $-k$ states in the quantum dynamics. We show that this position of the dipole still corresponds to a controllable system for the classical-top, while it does not for the quantum-top. Thus, the latter is an example of a system whose quantum dynamics are not controllable even though the classical dynamics are. The possible discrepancy between quantum and classical controllability has been already noticed, for example, in the harmonic oscillator dynamics (\cite{Mirra}).
It should be noticed that the classical dynamics of a rigid body controlled with external torques (e.g., opposite pairs of gas jets) or internal torques (momentum exchange devices such as wheels) as studied in the literature (see, e.g., \cite[Section 6.4]{AS},
\cite{krishna},\cite{crouch}, \cite[Section 4.6]{Jurdje}) differ from the ones considered here, where the controlled fields (i.e., the interaction between
the
electric field and the electric dipole) are not left-invariant and their action depends on the configuration of the rigid body in the space.
The paper is organized as follows: in Section \ref{classical} we study the controllability of the classical Hamilton equations for a symmetric-top. The main results are
Theorems \ref{genuinecla} and \ref{accidentallycla}, where we prove, respectively, the non-controllability when the dipole lies along the symmetry axis of the body and the controllability in any other case. In Section \ref{quantum} we study the controllability of the Schr{\"o}dinger equation for a symmetric-top. The main controllability result is Theorem \ref{rare}, where we prove the approximate controllability when the dipole is neither along the symmetry axis, nor orthogonal to it. In the two cases left, we prove the non-controllability in Theorems~\ref{genuine} and \ref{accidentally}.
\section{Classical symmetric-top molecule}\label{classical}
\subsection{Controllability of control-affine systems with recurrent drift}
We recall in this section some useful results on the controllability properties of (finite-dimensional) control-affine systems.
Let $M$ be an $n$-dimensional manifold, $X_0,X_1,\dots,X_\ell$ a family of smooth (i.e., $C^\infty$) vector fields on $M$,
$U\subset \mathbb{R}^\ell$ a set of control values which is
a neighborhood of the origin.
We consider the control system
\begin{equation}\label{control}
\dot{q}=X_0(q)+\sum_{i=1}^\ell u_i(t)X_i(q), \qquad q \in M,
\end{equation}
where the control functions $u$ are taken in $L^\infty(\mathbb{R},U)$. The vector field $X_0$ is called the \emph{drift}.
The \emph{reachable set} from $q_0\in M$ is
\begin{align*}
\mathrm{Reach}(q_0):= &\{q \in M \mid \exists \; u,T \text{ s.t. the solution to (\ref{control}) with } q(0)=q_0 \\ & \text{ satisfies } q(T)=q \}.
\end{align*}
System (\ref{control}) is said to be \emph{controllable} if $\mathrm{Reach}(q_0)=M$ for all $q_0\in M$.
The family of vector fields $X_0,X_1,\dots,X_\ell$ is said to be \emph{Lie bracket generating} if
$$\dim(\mathrm{Lie}_q\{X_0,X_1,\dots,X_\ell \})=n $$
for all $q\in M$, where $\mathrm{Lie}_q\{X_0,X_1,\dots,X_\ell \}$ denotes the evaluation at $q$ of the Lie algebra generated by $X_0,X_1,\dots,X_\ell$.
The following is a basic result in geometric control theory (see, for example, \cite[Section 4.6]{Jurdje}). Recall that a complete vector field $X$ on $M$
is said to be \emph{recurrent} if for every open nonempty subset $V$ of $M$ and every time $t>0$, there exists $\tilde{t}>t$ such that $\phi_{\tilde{t}}(V)\cap V \neq \emptyset$, where $\phi_{\tilde{t}}$ denotes the flow of $X$ at time ${\tilde{t}}$.
\begin{theorem}\label{basic}
Let $U\subset \mathbb{R}^\ell$ be a neighborhood of the origin.
If $X_0$ is recurrent and the family $X_0,X_1,\dots,X_{\ell}$ is Lie bracket generating, then system (\ref{control}) is controllable.
\end{theorem}
A useful test to check that the Lie bracket generating condition holds true
is given by the following simple lemma, whose proof is given for completeness.
\begin{lemma}\label{lemmino}
If the family of analytic vector fields $X_0,X_1,\dots,X_{\ell}$ is Lie bracket generating on the complement of a subset
$N\subset M$ and $\mathrm{Reach}(q)\not\subset N$, for all $q\in N$, then the family is Lie bracket generating on $M$.
\end{lemma}
\begin{proof}
Let $q\in N$ and $q_1\in \mathrm{Reach}(q)\setminus N$.
By the Orbit theorem applied to the case of analytic vector fields (see, e.g., \cite[Chapter 5]{AS}) the dimension of
$\mathrm{Lie}_q\{X_0,X_1,\dots,X_{\ell} \}$ and $\mathrm{Lie}_{q_1}\{X_0,X_1,\dots,X_{\ell} \}$ coincide. By assumption the latter is equal to $n$, which implies that the same is true for the former.
\end{proof}
\subsection{The classical dynamics of a molecule subject to electric fields}
Since the translational motion (of the center of mass) of a rigid body is decoupled from the rotational motion, we shall assume that the molecule can only rotate around its center of mass. In detail, for any vector $v\in \mathbb{R}^3$, denoting by $e_1,e_2,e_3$ a fixed orthonormal frame of $\mathbb{R}^3$ and by $a_1,a_2,a_3$ a moving orthonormal frame with the same orientation, both attached to the rigid body's center of mass, the configuration of the molecule is identified with the unique $g \in {\rm SO}(3)$ such that $g\;(x,y,z)^T=(X,Y,Z)^T$, where $(x,y,z)$ are the coordinates of $v$ with respect to $a_1,a_2,a_3$, and $(X,Y,Z)$ are the coordinates of $v$ with respect to $e_1,e_2,e_3$. In order to describe the equations on the tangent bundle ${\rm SO}(3)\times \mathfrak{so}(3)$,
we shall make use of the isomorphism of Lie algebras
\[
A:(\mathbb{R}^3,\times) \rightarrow (\mathfrak{so(3)},[\cdot,\cdot]), \quad P=
\begin{pmatrix}
P_1 \\
P_2\\
P_3
\end{pmatrix} \mapsto A(P)=
\begin{pmatrix}
0 & -P_3 & P_2\\
P_3 & 0 & -P_1\\
-P_2 & P_1 & 0
\end{pmatrix}
\]
where $\times$ is the vector product. As external forces to control the rotation of the molecule, we consider three orthogonal
electric fields with intensities $u_1(t)$, $u_2(t)$, $u_3(t)$ and directions $e_1,e_2,e_3$.
We assume that
$$(u_1,u_2,u_3)\in U\subset \mathbb{R}^3, \quad (0,0,0)\in {\rm Interior}(U), $$
that is, the set $U\subset \mathbb{R}^3$ of admissible values
for the triple $(u_1,u_2,u_3)$ is
a neighborhood of the origin.
Denoting by $\delta$ the dipole of the molecule written in the moving frame, the three forces due to the interaction with the electric fields are
$
u_i(t)(g^{-1}(t)e_i)\times \delta$, $i=1,2,3.$
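The hat-map property of $A$ used throughout (that it intertwines the vector product with the matrix commutator) can be checked numerically; the following sketch is our own illustration, not part of the paper:

```python
import numpy as np

# Sanity check of the Lie algebra isomorphism A defined above:
# A(P x Q) = [A(P), A(Q)] and A(P) v = P x v.

def A(P):
    return np.array([[0.0, -P[2], P[1]],
                     [P[2], 0.0, -P[0]],
                     [-P[1], P[0], 0.0]])

P = np.array([0.3, -1.2, 0.7])
Q = np.array([1.1, 0.4, -0.2])

commutator = A(P) @ A(Q) - A(Q) @ A(P)   # [A(P), A(Q)]
assert np.allclose(A(np.cross(P, Q)), commutator)
assert np.allclose(A(P) @ Q, np.cross(P, Q))
```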
Then, the equations for the classical rotational dynamics of a molecule with inertia moments $I_1,I_2,I_3$ controlled with electric fields read
\begin{equation}\label{euler1}
\begin{pmatrix}
\dot{g} \\
\dot{P}
\end{pmatrix}=X(g,P)+\sum_{i=1}^3u_i(t)Y_i(g,P), \quad (g,P)\in {\rm SO}(3) \times \mathbb{R}^3,\ u\in U,
\end{equation}
where
\begin{equation}\label{fields}
X(g,P):=\begin{pmatrix}
gA(\beta P) \\
P \times (\beta P)
\end{pmatrix}, \quad Y_i(g,P):=\begin{pmatrix}
0\\
(g^{-1}e_i) \times \delta
\end{pmatrix}, \quad i=1,2,3,
\end{equation}
and $P=(P_1,P_2,P_3)^T,\;\beta P=(P_1/I_1,P_2/I_2,P_3/I_3)^T$. Similarly to
\cite[Section 12.2]{Jurdje} (where this is done for the heavy rigid body), these equations can be derived as Hamilton equations corresponding to the Hamiltonian
$$H=\frac{1}{2}\left(\frac{P_1^2}{I_1}+\frac{P_2^2}{I_2}+\frac{P_3^2}{I_3}\right)+V(g),\quad V(g)=-\sum_{i=1}^3u_i\langle(g^{-1}e_i), \delta\rangle $$
on ${\rm SO}(3)\times \mathbb{R}^3$.
System \eqref{euler1} can be seen as a control-affine system with $\ell=3$
controlled fields.
Rotating molecule dynamics can also be represented in terms of quaternions, lifting the dynamics from ${\rm SO}(3)$ to the $3$-sphere $S^3$, as follows.
We
denote by $\mathbb{H}$ the space of quaternions and we
identify $S^3\subset \mathbb{R}^4$ with $\{q_0+\mathrm{i} q_1+\mathrm{j}q_2+\mathrm{k}q_3\in \mathbb{H} \mid q_0^2+q_1^2+q_2^2+q_3^2=1\}$. We also identify $\mathbb{R}^3$ with $\{\mathrm{i} P_1+\mathrm{j}P_2+\mathrm{k}P_3 \in \mathbb{H}\mid (P_1,P_2,P_3)\in \mathbb{R}^3\}$.
Via this identification, the vector product $P\times \Omega$ becomes $\frac{1}{2}[P,\Omega]:=\frac{1}{2}(P\Omega-\Omega P)$, for any $P,\Omega \in \mathbb{R}^3$.
Moreover, given $q=\cos(\alpha)+(q_1,q_2,q_3)\sin(\alpha) \in S^3$ and $P \in \mathbb{R}^3$,
the quaternion product
$qP\overline{q}$ is in $\mathbb{R}^3$ and corresponds to the rotation
of $P$ of angle $2\alpha$ around the axis $(q_1,q_2,q_3)$.
Hence, $S^3$ can be seen as a double covering space of ${\rm SO}(3)$ (see \cite[Section 5.2]{Ratiu} for further details).
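The stated rotation identity can be verified directly with quaternion arithmetic; the sketch below (our own, with an arbitrary angle and axis) checks that $qP\overline{q}$ rotates $P$ by $2\alpha$ about the axis of $q$:

```python
import numpy as np

# Check: for q = cos(a) + u sin(a) (unit axis u), the map P -> q P qbar
# rotates the pure quaternion P by angle 2a about u.
# Quaternions are stored as (w, x, y, z) arrays.

def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

a = 0.3
q = np.array([np.cos(a), np.sin(a), 0.0, 0.0])    # axis u = e1
qbar = q * np.array([1.0, -1.0, -1.0, -1.0])      # conjugate quaternion
P = np.array([0.0, 0.0, 1.0, 0.0])                # P = e2 as a pure quaternion

rot = qmul(qmul(q, P), qbar)
# e2 rotated by 2a about e1 is (0, cos 2a, sin 2a)
expected = np.array([0.0, 0.0, np.cos(2*a), np.sin(2*a)])
assert np.allclose(rot, expected)
```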
System (\ref{euler1}) is lifted to $S^3\times \mathbb{R}^3$ to the system
\begin{equation}\label{quaternion}
\begin{cases}
\begin{aligned}
\dfrac{dq(t)}{dt}=&q(t)\beta P(t), \\
\dfrac{dP(t)}{dt}=&\frac{1}{2}[P(t), \beta P(t)]+\dfrac{u_1(t)}{2}[\overline{q(t)}\mathrm{i} q(t),\delta]+\frac{u_2(t)}{2}[\overline{q(t)}\mathrm{j}q(t),\delta]\\
&+\frac{u_3(t)}{2}[\overline{q(t)}\mathrm{k}q(t),\delta].
\end{aligned}
\end{cases}
\end{equation}
We are going to use the quaternion representation in order to prove that
the vector fields characterizing
\eqref{quaternion} form a Lie bracket generating family. As a consequence,
the same will be true for \eqref{euler1}.
\subsection{Non-controllability of the classical genuine symmetric-top}
In most cases of physical interest, the electric dipole $\delta$ of a symmetric-top molecule lies along the symmetry axis of the molecule. If $I_1=I_2$, the symmetry axis is the third one, and we have that $\delta=
(0,0,\delta_3)^T$, $\delta_3 \neq 0$, in the body frame. The corresponding molecule is called a \emph{genuine symmetric-top} (\cite[Section 2.6]{gordy}).
\begin{theorem}\label{genuinecla}
The third angular momentum $P_3$ is a conserved quantity for the controlled motion \eqref{euler1} of the genuine symmetric-top molecule.
\end{theorem}
\begin{proof}
In order to compute the equation satisfied by $P_3$ in \eqref{euler1}, notice that
\begin{align*}
P(t)\times \beta P(t)=\begin{pmatrix}
P_1(t) \\
P_2(t) \\
P_3(t)
\end{pmatrix}\times\begin{pmatrix}
P_1(t)/I_2 \\
P_2(t)/I_2 \\
P_3(t)/I_3
\end{pmatrix}=\begin{pmatrix}
\Big(\frac{1}{I_3}- \frac{1}{I_2}\Big)P_2(t)P_3(t) \\
\Big(\frac{1}{I_2}- \frac{1}{I_3}\Big)P_1(t)P_3(t)\\
0
\end{pmatrix}.
\end{align*}
Moreover, $u_i(t)(g^{-1}(t)e_i)\times\delta=u_i(t)(g^{-1}(t)e_i)\times (0,0,\delta_3)^T=
(\star,\star,0)^T$.
Hence, for a genuine symmetric-top,
the equation for $P_3$ becomes $\frac{dP_3(t)}{dt}=0$.
\end{proof}
As a consequence, the controlled dynamics live in the hypersurfaces $\{P_3=\mathrm{const}\}$ and hence system (\ref{euler1}) is not controllable in the $6$-dimensional manifold ${\rm SO}(3)\times \mathbb{R}^3$.
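The vanishing of the third component in both terms of the proof can be checked numerically; the following sketch (our own, with arbitrary numbers) is only an illustration:

```python
import numpy as np

# For a genuine symmetric-top (dipole along a_3), both the free term
# P x (beta P) and any control torque w x delta have zero third component,
# so dP_3/dt = 0. An accidental dipole generically breaks this.

I2, I3 = 1.0, 2.5
beta = np.array([1/I2, 1/I2, 1/I3])
P = np.array([0.2, -0.5, 1.3])
w = np.array([0.3, -1.1, 0.5])           # stands for g^{-1} e_i, arbitrary

free = np.cross(P, beta * P)
assert abs(free[2]) < 1e-12              # third component vanishes

delta = np.array([0.0, 0.0, 0.7])        # genuine: dipole on symmetry axis
assert np.cross(w, delta)[2] == 0.0

delta_acc = np.array([0.4, 0.0, 0.7])    # accidental dipole
assert abs(np.cross(w, delta_acc)[2]) > 1e-6
```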
\subsection{Controllability of the classical accidentally symmetric-top}
In Theorem \ref{genuinecla} we proved that $P_3$ is a first integral for equations (\ref{euler1}), using both the symmetry of the mass and the symmetry of the charge, meaning that $I_1=I_2$ and $\delta=(0,0,\delta_3)^T$.
We consider now a symmetric-top molecule with electric dipole $\delta$ not along the symmetry axis of the body, that is,
$\delta=(\delta_1,\delta_2,\delta_3)^T$, with $\delta_1 \neq 0$ or $\delta_2 \neq 0$. This system is usually called \emph{accidentally symmetric-top} (\cite[Section 2.6]{gordy}).
\begin{theorem}\label{accidentallycla}
For an accidentally symmetric-top molecule, system \eqref{euler1} is controllable.
\end{theorem}
\begin{proof}
The drift $X$ is recurrent, as observed
in \cite[Section 8.4]{AS}. Thus, by Theorem~\ref{basic}, to prove controllability it suffices to show that, for any $(g,P) \in {\rm SO}(3) \times \mathbb{R}^3$, $\mathrm{dim}\Big( \mathrm{Lie}_{(g,P)}\{X,Y_1,Y_2,Y_3\}\Big)=6$. Actually, we will
find six vector fields in
$\mathrm{Lie}\{X,Y_1,Y_2,Y_3\}$ whose span is six-dimensional everywhere but on a set of positive codimension, and we will conclude by applying Lemma \ref{lemmino}. Notice that
$[X,Y_i](g,P)=\begin{pmatrix}
-gS(\beta[(g^{-1}e_i) \times \delta]) \\
\star
\end{pmatrix}.$
Denote by $\Pi_{{\rm SO}(3)}$ the projection onto the ${\rm SO}(3)$ part of the tangent bundle, that is, $\Pi_{{\rm SO}(3)}: T({\rm SO}(3)\times \mathbb{R}^3)\rightarrow T{\rm SO}(3).$
Then we have
\begin{align*}
\mathrm{span}&\{\Pi_{{\rm SO}(3)}X(g,P),\Pi_{{\rm SO}(3)}[X,Y_1](g,P), \Pi_{{\rm SO}(3)}[X,Y_2](g, P),\Pi_{{\rm SO}(3)}[X,Y_3](g,P) \} \\ &=gS\Big(\beta[\{\delta\}^\perp \oplus \mathrm{span}\{P\}]\Big).
\end{align*}
Hence, if $\langle P, \delta \rangle \neq 0$, we have
\begin{align}\nonumber
\dim & \Big( \mathrm{span}\{\Pi_{{\rm SO}(3)}X(g,P),\Pi_{{\rm SO}(3)}[X,Y_1](g,P),\Pi_{{\rm SO}(3)}[X,Y_2](g,P),\\ &\Pi_{{\rm SO}(3)}[X,Y_3](g,P) \} \Big)
=3 .\label{3dim}
\end{align}
To go further in the analysis, it is convenient to use the quaternion parametrization (\ref{quaternion}) in which every field is polynomial. We have, in coordinates $q=(q_0,q_1,q_2,q_3) \in S^3, P=(P_1,P_2,P_3) \in \mathbb{R}^3$,
\[
X(q,P)=\begin{pmatrix}
q\beta P \\
\frac{1}{2}[P,\beta P]
\end{pmatrix}=\begin{pmatrix}
-q_1\frac{P_1}{I_2}-q_2\frac{P_2}{I_2}-q_3\frac{P_3}{I_3}\\[1mm]
q_0\frac{P_1}{I_2}+q_2\frac{P_3}{I_3}-q_3\frac{P_2}{I_2}\\[1mm]
q_0\frac{P_2}{I_2}-q_1\frac{P_3}{I_3}+q_3\frac{P_1}{I_2}\\[1mm]
q_0\frac{P_3}{I_3}+q_1\frac{P_2}{I_2}-q_2\frac{P_1}{I_2}\\[1mm]
\Big(\frac{1}{I_3}- \frac{1}{I_2}\Big)P_2P_3 \\[1mm]
\Big(\frac{1}{I_2}- \frac{1}{I_3}\Big)P_1P_3\\[1mm]
0
\end{pmatrix}, \]
\[
Y_1(q,P)=\begin{pmatrix}
0\\
\frac{1}{2}[\overline{q}\mathrm{i} q,\delta]
\end{pmatrix}=\begin{pmatrix}
0\\
0\\
0\\
0\\
(q_1q_2-q_0q_3)\delta_3-(q_1q_3+q_0q_2)\delta_2\\
(q_1q_3+q_0q_2)\delta_1-\frac{1}{2}(q_0^2+q_1^2-q_2^2-q_3^2)\delta_3\\
\frac{1}{2}(q_0^2+q_1^2-q_2^2-q_3^2)\delta_2-(q_1q_2-q_0q_3)\delta_1
\end{pmatrix},
\]
\[
Y_2(q,P)=\begin{pmatrix}
0\\
\frac{1}{2}[\overline{q}\mathrm{j}q,\delta]
\end{pmatrix}=\begin{pmatrix}
0\\
0\\
0\\
0\\
\frac{1}{2}(q_0^2-q_1^2+q_2^2-q_3^2)\delta_3-(q_2q_3-q_0q_1)\delta_2\\
(q_2q_3-q_0q_1)\delta_1-(q_1q_2+q_0q_3)\delta_3\\
(q_1q_2+q_0q_3)\delta_2-\frac{1}{2}(q_0^2-q_1^2+q_2^2-q_3^2)\delta_1
\end{pmatrix}.
\]
Let us consider
the six vector fields $X,Y_1,Y_2,[X,Y_1],[X,Y_2],[[X,Y_1],Y_1]$: we have that the
determinant of the
matrix
obtained by removing the first row from the $7\times 6$ matrix
$$(X(q,P),Y_1(q,P),Y_2(q,P),[X,Y_1](q,P),[X,Y_2](q,P),[[X,Y_1],Y_1](q,P))$$
is equal to $S(q,P):=S_1(q)S_2(q)S_3(q)S_4(q)S_5(P)$, where
\begin{align*}
&S_1(q):=\frac{I_2-I_3}{32\,I_2^{3}I_3^2}q_1, \\
&S_2(q):=(-2q_1q_2\delta_1+2q_0q_3\delta_1+q_0^2\delta_2+q_1^2\delta_2-(q_2^2+q_3^2)\delta_2), \\
&S_3(q):=(q_0(-2q_2\delta_1+2q_1\delta_2)+2q_3(q_1\delta_1+q_2\delta_2)+(q_0^2-q_1^2-q_2^2+q_3^2)\delta_3)^2, \\
&S_4(q):=(-2(q_0q_2+q_1q_3)(\delta_1^2+\delta_2^2)+((q_0^2+q_1^2-q_2^2-q_3^2)\delta_1+2(q_1q_2-q_0q_3)\delta_2)\delta_3),\\
&S_5(P):=P_1\delta_1+P_2\delta_2+P_3\delta_3=\langle P,\delta \rangle.
\end{align*}
Hence, for all $(q,P) $ such that $S(q,P)\neq0$,
\begin{align*}
\dim\Big(\mathrm{span}&\{X(q,P),Y_1(q,P),Y_2(q,P),[X,Y_1](q,P),[X,Y_2](q,P),\\ & [[X,Y_1],Y_1](q,P) \}\Big)=6,
\end{align*}
that is, outside the set $N:=
\{(q,P)\in S^3 \times \mathbb{R}^3\mid S(q,P)=0\}$ the
family $X,Y_1,Y_2$
is Lie bracket generating.
Now we are left to prove that $\mathrm{Reach}(q,P)\not\subset N$ for every $(q,P) \in N$,
and then to apply Lemma \ref{lemmino}.
Let us start by considering the factor $S_5$ of $S$ and notice that, for any fixed $q \in S^3$, $\{S_5=0\}$ defines a surface inside $\{q\}\times \mathbb{R}^3$. Denote by $\Pi_{\mathbb{R}^3}:T(S^3\times \mathbb{R}^3)\to T\mathbb{R}^3$ the projection onto the $\mathbb{R}^3$ part of the tangent bundle.
The vector field $\Pi_{\mathbb{R}^3}X$ is tangent to $\{S_5=0\}$ when
\[
\langle \nabla_P S_5,\Pi_{\mathbb{R}^3}X \rangle=\frac12 \langle \delta , [P , \beta P] \rangle=0,
\]
that is, if and only if $P_3=0$ or $P_2 \delta_1-P_1 \delta_2=0$. Notice that at least one of the vectors $\Pi_{\mathbb{R}^3}Y_1,\Pi_{\mathbb{R}^3}Y_2,\Pi_{\mathbb{R}^3}Y_3$ is not tangent to $\{P_3=0\}$: otherwise
\[\mathrm{span}\{ \Pi_{\mathbb{R}^3}Y_1,\Pi_{\mathbb{R}^3}Y_2,\Pi_{\mathbb{R}^3}Y_3 \}\subset\{P_3=0 \}=
\begin{pmatrix}
0\\
0\\
1
\end{pmatrix}
^\perp.
\]
However,
$\mathrm{span}\{ \Pi_{\mathbb{R}^3}Y_1,\Pi_{\mathbb{R}^3}Y_2,\Pi_{\mathbb{R}^3}Y_3 \}=
\delta
^\perp
,$
which would imply that $\delta$ is collinear to $(0,0,1)^T$; this is impossible since the molecule is accidentally symmetric.
Concerning the hypersurface $\{P_2 \delta_1-P_1 \delta_2=0\}$, we consider again $\Pi_{\mathbb{R}^3}X$, which is tangent to it when $\langle \nabla_P ( P_2 \delta_1-P_1 \delta_2) ,\Pi_{\mathbb{R}^3}X \rangle=0$, that is, if and only if $P_3=0$ or $P_1 \delta_1+P_2 \delta_2=0$. We treat the second case, since the case $P_3=0$ has already been dealt with. Hence, we consider the intersection
\[
\begin{cases}
P_2 \delta_1-P_1 \delta_2=0, & \\
P_1 \delta_1+P_2 \delta_2=0. &
\end{cases}
\]
The only solution of the system is $P_1=P_2=0$, because the molecule is accidentally symmetric. Finally, when $P_1=P_2=0$, we consider the two-dimensional distribution $\mathrm{span}\{ \Pi_{\mathbb{R}^3}Y_1,\Pi_{\mathbb{R}^3}Y_2,\Pi_{\mathbb{R}^3}Y_3 \}$, which cannot be tangent to the $P_3$ axis.
Summarizing, if $\delta$ is not collinear to $(0,0,1)^T$, we have
$$\mathrm{Reach}(q,P)\not\subset \{S_5=0\}, \quad \forall (q,P) \in \{S_5=0\}.$$
To conclude, if $(q,P) \in \{S_i=0\}$, $i=1,\dots,4$, then we fix $P$ and we get two-dimensional strata $\{ q \in S^3\mid S_i(q)=0\} \subset S^3$. Now the projections of the vector fields $X,[X,Y_1],[X,Y_2],[X,Y_3]$ on the base part of the bundle span a three-dimensional vector space if $\langle P, \delta\rangle \neq 0$, as observed in \eqref{3dim}. So, by possibly steering $P$ to a point where $\langle P, \delta\rangle \neq 0$, it is possible to exit from the union of $\{S_i=0\}$. This concludes the proof of the theorem.
\end{proof}
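The factorization used above for the tangency condition on $\{S_5=0\}$, namely $\langle \delta, P\times\beta P\rangle=(1/I_3-1/I_2)\,P_3\,(P_2\delta_1-P_1\delta_2)$, can be cross-checked with sympy (a minimal sketch, our own illustration):

```python
import sympy as sp

P1, P2, P3, I2, I3, d1, d2, d3 = sp.symbols(
    'P1 P2 P3 I2 I3 delta1 delta2 delta3', real=True)

P = sp.Matrix([P1, P2, P3])
betaP = sp.Matrix([P1/I2, P2/I2, P3/I3])   # beta = diag(1/I2, 1/I2, 1/I3)
delta = sp.Matrix([d1, d2, d3])

# <delta, P x (beta P)> factors through P3 and P2*delta1 - P1*delta2
expr = delta.dot(P.cross(betaP))
target = (1/I3 - 1/I2) * P3 * (P2*d1 - P1*d2)
assert sp.simplify(expr - target) == 0
```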
\subsection{Reachable sets of the classical genuine symmetric-top}
Theorem \ref{genuinecla} states that each hypersurface $\{P_3=\mathrm{const}\}$ is invariant for the controlled motion. Next we prove that the restriction of system \eqref{euler1} to any such hypersurface is controllable.
\begin{theorem}\label{reachcla}
Let $I_1=I_2$ and $\delta = (0,0,\delta_3)^T$, $\delta_3\neq 0$. Then for $({g}_0,{P}_0)\in {\rm SO}(3)\times \mathbb{R}^3$, ${P}_0=(P_{01},P_{02},P_{03})$, one has
$$\mathrm{Reach}(g_0,P_0)=\{(g,P)\in {\rm SO}(3)\times \mathbb{R}^3\mid P_3=P_{03}\}. $$
\end{theorem}
\begin{proof}
From Theorem \ref{genuinecla} we know that $\{P_3=\mathrm{const}\}$ is invariant. Since the drift $X$ is recurrent, it suffices to prove that system (\ref{euler1}) is Lie bracket generating on the $5$-dimensional manifold $\{P_3=\mathrm{const}\}$.
We recall from \eqref{3dim} that, if $\langle P,\delta \rangle \neq 0$, that is, if $P_3\neq 0$, we have
\begin{align*}
\dim &\Big( \mathrm{span}\{\Pi_{{\rm SO}(3)}X(g,P),\Pi_{{\rm SO}(3)}[X,Y_1](g,P),\Pi_{{\rm SO}(3)}[X,Y_2](g,P),\\ &\Pi_{{\rm SO}(3)}[X,Y_3](g,P) \} \Big)=3.
\end{align*}
Moreover, since
$\Pi_{\mathbb{R}^3} Y_i(g,P)=(g^{-1}e_i) \times \delta $ for $i=1,2,3$, we have that
\begin{equation}\label{localcontr}
\dim\Big( \mathrm{span}\{\Pi_{\mathbb{R}^3} Y_1,\Pi_{\mathbb{R}^3} Y_2,\Pi_{\mathbb{R}^3} Y_3 \} \Big)=2
\end{equation}
everywhere. Thus, if $P_3\neq 0$, it follows that
\begin{align*}
\dim &\Big( \mathrm{span} \{X(g,P),Y_1(g,P),Y_2(g,P),Y_3(g,P),[X,Y_1](g,P),[X,Y_2](g,P),\\ &[X,Y_3](g,P) \} \Big)=5.
\end{align*}
So the system is Lie bracket generating on the manifold $\{P_3=\text{const} \neq 0\}$.
We are left to consider the case $P_3=0$. Notice that $\Pi_{\mathbb{R}^3} Y_1,\Pi_{\mathbb{R}^3} Y_2,\Pi_{\mathbb{R}^3} Y_3$ span a two-dimensional distribution for any value of $P_3$. So we consider in the quaternion parametrization the projections of $X,[X,Y_1],[X,Y_2],[[X,Y_1],X]$ on the $S^3$ part of the bundle and we obtain
\begin{align*}
\dim& \Big( \mathrm{span} \{\Pi_{S^3}X(q,P),\Pi_{S^3}[X,Y_1](q,P),\Pi_{S^3}[X,Y_2](q,P),\\ &\Pi_{S^3}[[X,Y_1],X](q,P) \} \Big)=3,
\end{align*}
for $P_3=0$, except when $q_3[2P_2(q_1q_2-q_0q_3)+ P_1(q_0^2+q_1^2-q_2^2-q_3^2) ]=0$.
This equation defines the union of two surfaces inside $S^3$. (Notice that we can assume $P_1 \neq 0$ and $P_2\neq 0$ because \eqref{localcontr} gives local controllability in $(P_1,P_2)$). On $\{q_3=0\}$, we have that $\Pi_{S^3}X$ is tangent if and only if $q_1P_2-q_2P_1=0$. On the curve $\gamma \subset S^3$ of equation
\[
\begin{cases}
q_3=0, & \\
q_1P_2-q_2P_1=0, &
\end{cases}
\]
we can consider the two-dimensional distribution spanned by $\Pi_{S^3}[X,Y_1],\Pi_{S^3}[X,Y_2]$, $\Pi_{S^3}[X,Y_3]$, which is clearly not tangent to $\gamma$. Following Lemma \ref{lemmino}, the system is Lie bracket generating also on $\{q_3=0\}$.
Analogously, on $\{2P_2(q_1q_2-q_0q_3)+ P_1(q_0^2+q_1^2-q_2^2-q_3^2)=0 \}$ we consider the vector field $\Pi_{S^3}[[[X,Y_1],X],Y_2]$ which is tangent if and only if $(q_0q_2+q_1q_3)(P_1q_0q_1+P_2q_0q_2-P_2q_1q_3+P_1q_2q_3)=0$. Again, since the distribution spanned by $\Pi_{S^3}[X,Y_1],\Pi_{S^3}[X,Y_2]$, $\Pi_{S^3}[X,Y_3]$ is two-dimensional, we can exit from the set of equations
\[
\begin{cases}
2P_2(q_1q_2-q_0q_3)+ P_1(q_0^2+q_1^2-q_2^2-q_3^2)=0, & \\
(q_0q_2+q_1q_3)(P_1q_0q_1+P_2q_0q_2-P_2q_1q_3+P_1q_2q_3)=0, &
\end{cases}
\]
whose strata have dimension at most one. Thus, applying again Lemma \ref{lemmino}, we can conclude that
the restriction of
the system to the manifold $\{P_3=0\}$ is Lie bracket generating.
\end{proof}
\section{Quantum symmetric-top molecule}\label{quantum}
\subsection{Controllability of the multi-input Schr{\"o}dinger equation}\label{notations}
Let $\ell \in \mathbb{N}$
and $U\subset \mathbb{R}^\ell$ be a neighborhood of the origin.
Let $\mathcal{H}$ be an infinite-dimensional Hilbert space with scalar product $\langle \cdot, \cdot \rangle $ (linear in the first entry and conjugate linear in the second), $H, B_1,\dots,B_\ell$ be (possibly unbounded) self-adjoint operators on $\mathcal{H}$, with domains $D(H),D(B_1),\dots,D(B_\ell)$. We consider the controlled Schr{\"o}dinger equation
\begin{equation}\label{quantumcontrol}
\mathrm{i}\frac{d\psi(t)}{dt}=(H+\sum_{j=1}^\ell u_j(t)B_j)\psi(t), \quad \psi(t) \in \mathcal{H}, \quad u(t) \in U.
\end{equation}
\begin{definition}
\begin{itemize}
\item We say that the operator $H$ satisfies ($\mathbb{A}1$) if it has discrete spectrum with infinitely many distinct eigenvalues (possibly degenerate).\\
Denote by $\mathcal{B}$ a Hilbert basis $(\phi_k)_{k\in \mathbb{N}}$ of $\mathcal{H}$ made of eigenvectors of $H$ associated with the family of eigenvalues $(\lambda_k)_{k\in \mathbb{N}}$ and let $\mathcal{L}$ be the set of finite linear combinations of eigenstates, that is,
$$\mathcal{L}= \mathrm{span} \{\phi_k\mid k\in \mathbb{N} \}. $$
\item We say that $(H,B_1,\dots,B_\ell,\mathcal{B})$ satisfies ($\mathbb{A}2$) if $\phi_k \in D(B_j)$ for every $k \in \mathbb{N}$, $j=1,\dots,\ell$.
\item We say that $(H,B_1,\dots,B_\ell,\mathcal{B})$ satisfies ($\mathbb{A}3$) if
$$H+\sum_{j=1}^\ell u_jB_j:\mathcal{L}\rightarrow \mathcal{H}$$
is essentially self-adjoint for every $u \in U$.
\item We say that $(H,B_1,\dots,B_\ell,\mathcal{B})$ satisfies ($\mathbb{A}$) if $H$ satisfies ($\mathbb{A}1$) and\\ $(H,B_1,\dots,B_\ell,\mathcal{B})$ satisfies ($\mathbb{A}2$) and ($\mathbb{A}3$).
\end{itemize}
\end{definition}
If $(H,B_1,\dots,B_\ell,\mathcal{B})$ satisfies ($\mathbb{A}$) then, for every $(u_1,\dots,u_\ell)\in U$, $H+\sum_{j=1}^\ell u_jB_j$ generates a one-parameter group $e^{-\mathrm{i} t(H+\sum_{j=1}^\ell u_jB_j)}$ inside the group of unitary operators $U(\mathcal{H})$. It is therefore possible to define the propagator $\Gamma_T^u$ at time $T$ of system (\ref{quantumcontrol}) associated with a
piecewise constant control law $u(\cdot)=(u_1(\cdot),\dots,u_\ell(\cdot))$ by composition of flows of the type $e^{-\mathrm{i} t(H+\sum_{j=1}^\ell u_jB_j)}$.
\begin{definition}
Let $(H,B_1,\dots,B_\ell,\mathcal{B})$ satisfy ($\mathbb{A}$).
\begin{itemize}
\item Given $\psi_0,\psi_1$ in the unit sphere $\mathcal{S}$ of $\mathcal{H}$, we say that $\psi_1$ is reachable from $\psi_0$ if there exist a time $T>0$ and a piecewise constant control law $u:[0,T]\rightarrow U$ such that $\psi_1=\Gamma_T^u(\psi_0)$. We denote by $\mathrm{Reach}(\psi_0)$ the set of reachable points from $\psi_0$.
\item We say that (\ref{quantumcontrol}) is approximately controllable if for every $\psi_0\in \mathcal{S}$ the set $\mathrm{Reach}(\psi_0)$ is dense in $\mathcal{S}$.
\end{itemize}
\end{definition}
As a byproduct of the techniques
used to prove approximate controllability of (\ref{quantumcontrol})
for our problem,
we will actually obtain a slightly stronger controllability property. For this reason,
let us introduce the notion of module-tracker (m-tracker, for brevity), that is, a system for which any given curve of unitary operators can be tracked up to (relative) phases. The identification up to phases of elements of $\mathcal{H}$ in the basis $\mathcal{B}=(\phi_k)_{k\in \mathbb{N}}$ can be accomplished by the projection
$$\mathcal{M}:\psi \mapsto \sum_{k\in \mathbb{N}} |\langle \phi_k,\psi \rangle| \phi_k.$$
\begin{definition}
Let $(H,B_1,\dots,B_\ell,\mathcal{B})$ satisfy ($\mathbb{A}$). We say that system (\ref{quantumcontrol}) is an \emph{m-tracker} if, for every $r\in \mathbb{N}$, $\psi_1,\dots,\psi_r$ in $\mathcal{H}$, $\widehat{\Gamma}:[0,T]\rightarrow U(\mathcal{H})$ continuous with $\widehat{\Gamma}_0=\mathrm{Id}_{\mathcal{H}}$, and $\epsilon >0$, there exists an invertible increasing continuous function $\tau:[0,T]\rightarrow [0,T_\tau]$ and a piecewise constant control $u:[0,T_\tau]\rightarrow U$ such that
$$\|\mathcal{M}(\widehat{\Gamma}_t\psi_k)-\mathcal{M}(\Gamma_{\tau(t)}^u\psi_k) \| <\epsilon, \qquad k=1,\dots,r,$$
for every $t\in [0,T_\tau]$.
\end{definition}
\begin{remark}
We recall that if system \eqref{quantumcontrol} is an m-tracker, then it is also approximately controllable, as noticed in \cite[Remark 2.9]{BCS}.
\end{remark}
Following \cite{BCS}
and \cite{CS},
we now introduce some objects
that we later use
to state a
sufficient condition for a system to be an m-tracker.
The proposed sufficient condition
can be seen as a generalization of the main controllability result in \cite{BCS}.
The main difference is that here, instead of testing a sequence of finite-dimensional properties on an increasing sequence of linear subspaces of $\mathcal{H}$, we test them on a sequence of overlapping finite-dimensional spaces, not necessarily ordered by inclusion. This allows the sufficient condition to be checked block-wise.
Let $\{I_j\mid j\in \mathbb{N}\}$ be a family of finite subsets of $\mathbb{N}$ such that $\cup_{j\in \mathbb{N}}I_j=\mathbb{N}$.
Denote by $n_j$
the cardinality of $I_j$. Consider the subspaces
\[\mathcal{M}_j:=\mathrm{span}\{\phi_n \mid n\in I_j\}\subset \mathcal{H}\]
and their associated orthogonal projections
\[\Pi_{\mathcal{M}_j}:\mathcal{H}\ni \psi \mapsto \sum_{n\in I_j} \langle \phi_n,\psi \rangle \phi_n \in \mathcal{H}.
\]
Given a linear operator $Q$ on $\mathcal{H}$ we identify the linear operator $\Pi_{\mathcal{M}_j} Q \Pi_{\mathcal{M}_j}$
preserving
$\mathcal{M}_j$
with its
complex matrix representation with respect to the basis
$(\phi_n)_{n\in I_j}$.
The set $\Sigma_j=\{|\lambda_l-\lambda_{l'}|\mid l,l'\in I_j \}$ is then the collection of the spectral gaps of
$\Pi_{\mathcal{M}_j} H \Pi_{\mathcal{M}_j}$.
We define $B_i^{(j)}:= \Pi_{\mathcal{M}_j} B_i \Pi_{\mathcal{M}_j}$
for every $i=1,\dots,\ell$.
If the element $(B_i)_{l,k}$ is different from zero, then a control $u_i$ oscillating at frequency $|\lambda_l-\lambda_k|$ induces a population transfer between the states $\phi_l$ and $\phi_k$ (\cite{chambrion}). The dynamics of such a population transfer depend on the other pairs of states $\phi_{l'}$, $\phi_{k'}$ having the same spectral gap and whose corresponding element $(B_i)_{l',k'}$ is different from zero. We are interested in controlling the induced
population dynamics within a space ${\cal M}_j$.
This motivates the definition of the sets
\begin{align*}
\Xi_j^0=\{(\sigma,i)\in \Sigma_j \times \{1,\dots,\ell\} \mid \mbox{}&
(B_i)_{l,k}=0\ \mbox{for every }l\in \mathbb{N},\ k\in \mathbb{N}\setminus I_j\\
&\mbox{ such that }|\lambda_l-\lambda_k|=\sigma
\},
\end{align*}
and
\begin{align*}
\Xi_j^1=\{(\sigma,i)\in \Sigma_j \times \{1,\dots,\ell\} \mid \mbox{}& (B_i)_{l,k}=0\ \mbox{for every }l\in I_j,\ k\in \mathbb{N}\setminus I_j\\
&\mbox{ such that }|\lambda_l-\lambda_k|=\sigma \}.
\end{align*}
While the definition of $\Xi_j^1$ involves only pairs of states $\phi_{l},\phi_k$ with $\phi_l$ in ${\cal M}_j$, such a restriction is not present in the definition of $\Xi_j^0$. This means that for $(\sigma,i)\in \Xi_j^0$ the induced population dynamics obtained by a control $u_i$ oscillating at frequency $\sigma$
not only does not produce population transfer out of ${\cal M}_j$, but also
is trivial within the orthogonal complement to ${\cal M}_j$.
For every $\sigma \geq 0$, and every square matrix $M$ of dimension $m$, let
$$\mathcal{E}_\sigma(M)=(M_{l,k}\delta_{\sigma,|\lambda_l-\lambda_k| })_{l,k=1,\dots,m}, $$
where $\delta_{\cdot,\cdot}$ is the Kronecker delta. The $n_j\times n_j$ matrix $\mathcal{E}_\sigma(B_i^{(j)})$ corresponds to the activation in $B_i^{(j)}$ of the spectral gap $\sigma\in \Sigma_j$: every entry is $0$ except for the $(l,k)$-entries such that $|\lambda_l-\lambda_k|=\sigma$.
A control $u_i$ oscillating at frequency $\sigma$ can induce the dynamics in ${\cal M}_j$ described by the matrix $\mathcal{E}_\sigma(B_i^{(j)})$ and also, by phase modulation, those described by the matrices $W_\xi(\mathcal{E}_\sigma(B_i^{(j)}))$, $\xi \in S^1\subset \mathbb{C}$, where $W_\xi$ is defined by
\begin{equation}
(W_\xi(M))_{l,k}=\begin{cases}
\xi M_{l,k}, & \lambda_l< \lambda_k, \\
0, & \lambda_l = \lambda_k, \\
\bar{\xi} M_{l,k}, & \lambda_l > \lambda_k.
\end{cases}
\end{equation}
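As a toy illustration of $\mathcal{E}_\sigma$ and $W_\xi$ (our own numerical sketch; the spectrum and the coupling matrix are arbitrary placeholders, not taken from the molecular model):

```python
import numpy as np

lam = np.array([0.0, 1.0, 3.0])   # toy eigenvalues lambda_l
M = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=complex)  # toy coupling

def E(sigma, M, lam):
    """Keep only the (l,k)-entries of M with |lambda_l - lambda_k| = sigma."""
    gaps = np.abs(lam[:, None] - lam[None, :])
    return np.where(np.isclose(gaps, sigma), M, 0)

def W(xi, M, lam):
    """Multiply entries with lambda_l < lambda_k by xi, those with
    lambda_l > lambda_k by conj(xi); entries with equal eigenvalues vanish."""
    phases = np.zeros_like(M)
    phases[lam[:, None] < lam[None, :]] = xi
    phases[lam[:, None] > lam[None, :]] = np.conj(xi)
    return phases * M

A = E(1.0, M, lam)   # activates only the gap |lambda_1 - lambda_2| = 1
B = W(1j, A, lam)    # phase-modulated version of the same excited mode
```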
Let us
consider
the sets of excited modes
\begin{equation}\label{eq:modes}
\nu_{j}^s:= \{W_\xi(\mathcal{E}_\sigma(\mathrm{i} B_i^{(j)})) \mid (\sigma,i)\in \Xi_{j}^s, \sigma \neq 0, \xi \in S^1 \}, \quad s=0,1.
\end{equation}
Notice that $\nu_{j}^0 \subset \nu_{j}^1 \subset \mathfrak{su}(n_j)$. Indeed, we have the following picture:
$$\mathcal{E}_\sigma(\mathrm{i} B_i^{(j)})\in\nu_j^{0} \Rightarrow \mathcal{E}_\sigma(\Pi_{j-1,j,j+1}\mathrm{i} B_i\Pi_{j-1,j,j+1})=\left[
\begin{array}{c|c|c}
0 & 0 &0\\
\hline
0 & \mathcal{E}_\sigma(\mathrm{i} B_i^{(j)}) &0\\
\hline
0 & 0 &0
\end{array}
\right] $$
$$\mathcal{E}_\sigma(\mathrm{i} B_i^{(j)})\in\nu_j^{1} \Rightarrow \mathcal{E}_\sigma(\Pi_{j-1,j,j+1}\mathrm{i} B_i\Pi_{j-1,j,j+1})=\left[
\begin{array}{c|c|c}
* & 0 &*\\
\hline
0 & \mathcal{E}_\sigma(\mathrm{i} B_i^{(j)}) &0\\
\hline
* & 0 &*
\end{array}
\right] $$
where $\Pi_{j-1,j,j+1}$ denotes the projection onto $\mathcal{M}_{j-1}\oplus\mathcal{M}_{j}\oplus\mathcal{M}_{j+1}$.
We denote by $\mathrm{Lie}(\nu_{j}^s)$ the Lie subalgebra of $\mathfrak{su}(n_j)$ generated by the matrices in $\nu_{j}^s$, $s=0,1$, and define
$\mathcal{T}_j$ as the minimal ideal of $ \mathrm{Lie}(\nu_j^1)$ containing $\nu_j^0$.
Finally, we introduce the graph $\mathcal{G}$ with vertices $\mathcal{V}=\{I_j\mid j\in \mathbb{N}\}$ and edges $\mathcal{E}=\{(I_j,I_k)\mid j,k\in\mathbb{N},\;I_j\cap I_k\neq \emptyset\}$. We are now in a position to state a new sufficient condition for a system to be an m-tracker, and thus approximately controllable.
\begin{theorem}\label{LGTC}
Assume that ($\mathbb{A}$) holds true. If the graph $\mathcal{G}$ is connected and $\mathcal{T}_j=\mathfrak{su}(n_j)$ for every $j\in \mathbb{N}$, then \eqref{quantumcontrol} is an m-tracker.
\end{theorem}
\begin{proof}
The proof works by applying Theorem~2.8 in \cite{BCS}, which
guarantees that \eqref{quantumcontrol} is an m-tracker if a suitable condition, called Lie--Galerkin tracking condition (\cite[Definition 2.7]{BCS}), holds true. In terms of the notation introduced here, the Lie--Galerkin tracking condition is true if there exists a sequence $\{\widetilde{I}_j\mid j\in \mathbb{N}\}$ of finite subsets of $\mathbb{N}$, strictly increasing with respect to the inclusion, such that $\cup_{j\in \mathbb{N}}\widetilde{I}_j=\mathbb{N}$ and $\mathcal{T}_j=\mathfrak{su}(n_j)$ for every $j\in \mathbb{N}$.
Up to reordering the sets $I_j$, we can assume that
\begin{equation}\label{eq:ordering}
I_{j+1}\cap (\cup_{k=1}^j I_k)\not= \emptyset,\qquad \forall j\in\mathbb{N}.
\end{equation}
For $j\in \mathbb{N}$, let $\widetilde{I}_j=\cup_{i=1}^jI_i$ and $ {\cal Z}_j=
\sum_{k=1}^j \mathcal{M}_{k}.$
The Lie--Galerkin tracking condition holds true if
\begin{equation}\label{induction}
\mathrm{Lie}(\cup_{j=1}^m\widetilde{\mathcal{T}}_j)=\mathfrak{su}(\dim({\cal Z}_m)),\qquad m\in\mathbb{N},
\end{equation}
where the set of operators $\widetilde{\mathcal{T}}_j$ is obtained similarly to $\mathcal{T}_j$,
replacing
$\nu_j^s$, $s=0,1$, by
\begin{equation*}
\{W_\xi(\mathcal{E}_\sigma(\mathrm{i} \Pi_{{\cal Z}_m
}B_i\Pi_{{\cal Z}_m
})) \mid (\sigma,i)\in \Xi_{j}^s, \sigma \neq 0, \xi \in S^1 \}, \quad s=0,1.
\end{equation*}
We proceed by induction on $m$. For $m=1$, \eqref{induction} is true, since we have that $\mathrm{Lie}(\mathcal{T}_1)=\mathcal{T}_1=\mathfrak{su}(n_1)=\mathfrak{su}( \dim(\mathcal{Z
}_1))$.
Assume now that \eqref{induction} is true for $m$, and consider the vertex $I_{m+1}\in {\cal V}$.
Consider $t,p\in \cup_{j=1}^{m+1}I_{j}$ and let us prove that $G_{t,p}:= e_{t,p}- e_{p,t}$ is in
$\mathrm{Lie}(\cup_{j=1}^{m+1}\widetilde{\mathcal{T}}_j)$,
where $e_{a,b}$ is the matrix with all entries equal to 0 except for the one in row $a$ and column $b$, which is equal to 1 (and the indices in $\cup_{j=1}^{m+1}I_{j}$ are identified with the elements of $\{1,\dots,\dim({\cal Z}_{m+1}
)\}$).
Decomposing ${\cal Z}_{m+1}$ as a direct orthogonal sum $V_1\oplus ({\cal Z}_{m}\cap {\cal M}_{m+1}) \oplus V_2$ with $V_1\subset {\cal Z}_{m}$ and $V_2\subset {\cal M}_{m+1}$,
a matrix in $\widetilde{\mathcal{T}}_{m+1}$ has the
form
\[\left[
\begin{array}{c|c|c}
0 & 0 &0\\
\hline
0 & Q_{11} &Q_{12}\\
\hline
0 & Q_{21} &Q_{22}
\end{array}
\right],\qquad \left[
\begin{array}{c|c}
Q_{11} &Q_{12}\\
\hline
Q_{21} &Q_{22}
\end{array}
\right]\in \mathfrak{su}(n_{m+1}),\]
as it follows from the definition of $\Xi_{m+1}^0$ and $\Xi_{m+1}^1$ and the fact that $\widetilde{\mathcal{T}}_{m+1}$ is the ideal generated by the modified set $\nu_{m+1}^0$ inside the Lie algebra generated by the modified set $\nu_{m+1}^1$.
Similarly, a matrix in $\cup_{j=1}^{m}\widetilde{\mathcal{T}}_j$ has the form
\[\left[
\begin{array}{c|c|c}
Q_{11}&Q_{12} & 0 \\
\hline
Q_{21}&Q_{22} & 0 \\
\hline
0 & 0 &0
\end{array}
\right],\qquad \left[
\begin{array}{c|c}
Q_{11} &Q_{12}\\
\hline
Q_{21} &Q_{22}
\end{array}
\right]\in \mathfrak{su}(\dim({\cal Z}_{m}
)).\]
If $t,p\in \cup_{j=1}^mI_{j}$ or $t,p\in I_{m+1}$ the conclusion follows from the induction hypothesis and the identity $\mathcal{T}_{m+1}=\mathfrak{su}(n_{m+1})$.
Let then $t\in I_{m+1}\setminus (\cup_{j=1}^mI_{j})$ and $p\in \cup_{j=1}^mI_{j}$.
Fix, moreover, $r\in I_{m+1}\cap (\cup_{j=1}^mI_{j})$, whose existence is guaranteed by \eqref{eq:ordering}.
Again by the induction hypothesis and the identity $\mathcal{T}_{m+1}=\mathfrak{su}(n_{m+1})$,
we have that
$G_{p,r}
$ and $G_{r,t}
$ are in $\mathrm{Lie}(\cup_{j=1}^{m+1}\widetilde{\mathcal{T}}_j)$.
The bracket $[G_{p,r},G_{r,t}]=G_{p,t}$ is therefore also in $\mathrm{Lie}(\cup_{j=1}^{m+1}\widetilde{\mathcal{T}}_j)$. By similar arguments, we deduce that every element of a basis of $\mathfrak{su}(\dim({\cal Z}_{m+1}
))$ is in $\mathrm{Lie}(\cup_{j=1}^{m+1}\widetilde{\mathcal{T}}_j)$.
\end{proof}
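The elementary bracket identity $[G_{p,r},G_{r,t}]=G_{p,t}$ invoked at the end of the proof can be checked numerically for distinct indices (a minimal sketch with arbitrary toy values):

```python
import numpy as np

def G(a, b, n):
    """The generator e_{a,b} - e_{b,a} of so(n) (0-based indices)."""
    M = np.zeros((n, n))
    M[a, b], M[b, a] = 1.0, -1.0
    return M

n, p, r, t = 5, 0, 2, 4   # toy dimension and three distinct indices
bracket = G(p, r, n) @ G(r, t, n) - G(r, t, n) @ G(p, r, n)
assert np.array_equal(bracket, G(p, t, n))
```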
\subsection{The Schr{\"o}dinger equation of a symmetric-top subject to electric fields}
We recall in this section some general facts about Wigner functions and the theory of angular momentum in quantum mechanics (see, for instance, \cite{Varshalovich,gordy}).
We use Euler's angles $(\alpha,\beta,\gamma)\in[0,2\pi)\times[0,\pi]\times [0,2\pi)$ to describe the configuration space ${\rm SO}(3)$ of the molecule. More precisely, the coordinates of a vector change from the body fixed frame $a_1,a_2,a_3$ to the space fixed frame $e_1,e_2,e_3$ via three rotations
\begin{equation}\label{3rot}
\begin{pmatrix}
X\\
Y\\
Z
\end{pmatrix}=R_{e_3}(\alpha)R_{e_2}(\beta)R_{e_3}(\gamma)\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}=:R(\alpha,\beta,\gamma)\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}
\end{equation}
where $(x,y,z)^T$ are the coordinates of the vector in the body fixed frame, $(X,Y,Z)^T$ are the coordinates of the vector in the space fixed frame and $R_{e_i}(\theta)\in {\rm SO}(3)$ is the rotation of angle $\theta$ around the axis $e_i$. The explicit expression of the matrix $R(\alpha,\beta,\gamma)\in {\rm SO}(3)$ is
\begin{equation}\label{rotation}
R=\begin{pmatrix}
\cos\alpha \cos\beta \cos\gamma-\sin\alpha \sin\gamma & -\cos\alpha \cos\beta \sin\gamma-\sin\alpha \cos\gamma & \cos\alpha \sin\beta\\
\sin\alpha \cos\beta \cos\gamma+\cos\alpha \sin\gamma &-\sin\alpha \cos\beta \sin\gamma+\cos\alpha \cos\gamma & \sin\alpha \sin\beta\\
-\sin\beta \cos\gamma& \sin\beta \sin\gamma& \cos\beta
\end{pmatrix}.
\end{equation}
In Euler coordinates, the angular momentum operators are given by
\begin{equation}\label{wigner}
\begin{cases}
\begin{aligned}
J_1&=\mathrm{i}\cos\alpha \cot\beta \dfrac{\partial}{\partial \alpha}+\mathrm{i}\sin\alpha \dfrac{\partial}{\partial \beta} -\mathrm{i}\dfrac{\cos\alpha}{\sin \beta} \dfrac{\partial}{\partial \gamma} , \\
J_2&=\mathrm{i}\sin\alpha \cot\beta \dfrac{\partial}{\partial \alpha}-\mathrm{i}\cos\alpha \dfrac{\partial}{\partial \beta} -\mathrm{i}\dfrac{\sin\alpha}{\sin \beta} \dfrac{\partial}{\partial \gamma} , \\
J_3&=-\mathrm{i}\dfrac{\partial}{\partial \alpha}.
\end{aligned}
\end{cases}
\end{equation}
These are linear operators acting on the Hilbert space $L^2({\rm SO}(3))$, self-adjoint with respect to the Haar measure $\frac{1}{8}d\alpha d\gamma \sin\beta d\beta$. Using (\ref{wigner}), the self-adjoint operator $P_3:=-\mathrm{i}\frac{\partial}{\partial \gamma}$ can be written as $P_3=\sin\beta \cos\alpha J_1+\sin\beta \sin\alpha J_2+\cos\beta J_3$, that is,
\[
P_3=\sum_{i=1}^3R_{i3}(\alpha,\beta,\gamma)J_i,
\]
where $R=(R_{ij})_{i,j=1}^3$ is given in (\ref{rotation}).
In the same way we define $P_1=\sum_{i=1}^3R_{i1}(\alpha,\beta,\gamma)J_i,P_2=\sum_{i=1}^3R_{i2}(\alpha,\beta,\gamma)J_i$. The operators $J_i$ and $P_i$, $i=1,2,3$, are the angular momentum operators expressed in the fixed and in the body frame, respectively. Finally, we consider the square norm operator $J^2:=J_1^2+J_2^2+J_3^2=P_1^2+P_2^2+P_3^2$. Now, $J^2,J_3,P_3$ can be considered as the three commuting observables needed to describe the quantum motion of a molecule.
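The identity $P_3=\sum_{i=1}^3R_{i3}J_i=-\mathrm{i}\,\partial/\partial\gamma$ can be verified symbolically by applying both sides of (\ref{wigner}) to a generic smooth test function (a minimal sympy sketch, our own illustration):

```python
import sympy as sp

a, b, g = sp.symbols('alpha beta gamma', real=True)
f = sp.Function('f')(a, b, g)   # generic smooth test function
I = sp.I

# Angular momentum operators in Euler coordinates, applied to f
J1f = I*sp.cos(a)*sp.cot(b)*f.diff(a) + I*sp.sin(a)*f.diff(b) \
      - I*sp.cos(a)/sp.sin(b)*f.diff(g)
J2f = I*sp.sin(a)*sp.cot(b)*f.diff(a) - I*sp.cos(a)*f.diff(b) \
      - I*sp.sin(a)/sp.sin(b)*f.diff(g)
J3f = -I*f.diff(a)

# sin(b)cos(a) J1 + sin(b)sin(a) J2 + cos(b) J3 reduces to -i d/d(gamma)
P3f = sp.sin(b)*sp.cos(a)*J1f + sp.sin(b)*sp.sin(a)*J2f + sp.cos(b)*J3f
assert sp.simplify(P3f + I*f.diff(g)) == 0
```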
Indeed, $[J^2,J_3]=[J^2,P_3]=[J_3,P_3]=0$, and hence there exists an orthonormal Hilbert basis of $L^2({\rm SO}(3))$ which diagonalizes simultaneously $J^2,J_3$ and $P_3$. In terms of Euler coordinates, this basis is made by the so-called Wigner functions
\begin{equation}\label{explicit}
D_{k,m}^j(\alpha,\beta,\gamma):=e^{\mathrm{i}(m\alpha+k\gamma)}d_{k,m}^j(\beta), \qquad j\in \mathbb{N},\quad k,m=-j,\dots,j,
\end{equation}
where the function $d^j_{k,m}$ solves
a suitable Legendre differential equation, obtained by separation of variables (see, e.g., \cite[Section 2.5]{gordy} for the separation of variables ansatz and \cite[Chapter 4]{Varshalovich} for a detailed description of the properties of these functions).
Summarizing, the family of Wigner functions $\{ D_{k,m}^j\mid j\in \mathbb{N},k,m=-j,\dots,j \}$ forms an orthonormal Hilbert basis for $L^2({\rm SO}(3))$.
Moreover,
\[
J^2 D_{k,m}^j=j(j+1) D_{k,m}^j, \quad J_3 D_{k,m}^j=mD_{k,m}^j, \quad P_3D_{k,m}^j=kD_{k,m}^j.
\]
Thus, $m$ and $k$ are the quantum numbers which correspond to the projections of the angular momentum on the third axis of, respectively, the fixed and the body frame.
The rotational Hamiltonian of a molecule is $H=\frac{1}{2}\Big(\frac{P_1^2}{I_1}+\frac{P_2^2}{I_2}+\frac{P_3^2}{I_3}\Big)$, which is seen here as a self-adjoint operator acting on the Hilbert space $L^2({\rm SO}(3))$.
From now on, we impose the symmetry relation
$I_1=I_2$, which implies that $H=\frac{J^2}{2I_2}+\Big(\frac{1}{2I_3}-\frac{1}{2I_2}\Big)P_3^2$. Thus,
\begin{equation}\label{spectrum}
HD_{k,m}^j=\Big(\dfrac{j(j+1)}{2I_2}+\Big(\dfrac{1}{2I_3}-\dfrac{1}{2I_2}\Big)k^2\Big)D_{k,m}^j=:E_k^jD_{k,m}^j.
\end{equation}
Hence, the Wigner functions are the eigenfunctions of $H$.
Since the eigenvalues
of $H$ do not depend on $m$, the energy level $E_k^j$ is $(2j+1)$-degenerate with respect to $m$.
This property is common to every molecule in nature: the spectrum $\sigma(H)$ does not depend on $m$, just as, in classical mechanics, the kinetic energy does not depend on the direction of the angular momentum.
Moreover, when $k \neq 0$ the energy level $E_k^j$ is also $2$-degenerate with respect to $k$. This extra degeneracy is actually a characterizing property of
symmetric molecules.
Breaking this $k$-symmetry will be one important feature of our controllability analysis.
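For illustration, the degeneracy pattern of \eqref{spectrum} can be enumerated numerically (a minimal sketch with arbitrary toy moments of inertia, our own illustration):

```python
from collections import defaultdict

I2, I3 = 1.0, 0.5   # toy moments of inertia, with I1 = I2

def E(j, k):
    """Rotational eigenvalue E_k^j of the symmetric-top Hamiltonian."""
    return j*(j + 1)/(2*I2) + (1/(2*I3) - 1/(2*I2))*k**2

degeneracy = defaultdict(int)
for j in range(3):
    for k in range(-j, j + 1):
        for m in range(-j, j + 1):   # E_k^j does not depend on m
            degeneracy[E(j, k)] += 1

# E_0^0 is simple; E_0^1 carries the (2j+1)-fold m-degeneracy;
# E_{+-1}^1 carries in addition the 2-fold k-degeneracy
assert degeneracy[E(0, 0)] == 1
assert degeneracy[E(1, 0)] == 3
assert degeneracy[E(1, 1)] == 6
```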
The interaction Hamiltonian between the dipole $\delta$ inside the molecule and the external electric field in the direction $e_i$, $i=1,2,3$, is given by the Stark effect (\cite[Chapter 10]{gordy})
\[
B_i(\alpha,\beta,\gamma)=-\langle R(\alpha,\beta,\gamma) \delta, e_i\rangle,
\]
seen as a multiplicative self-adjoint operator acting on $L^2({\rm SO}(3))$. Thus, the rotational Schr{\"o}dinger equation for a symmetric-top molecule subject to three orthogonal
electric fields reads
\begin{equation}\label{schro}
\mathrm{i}\dfrac{\partial}{\partial t} \psi(\alpha,\beta,\gamma;t)= H\psi(\alpha,\beta,\gamma;t)+\sum_{l=1}^3u_l(t)B_l(\alpha,\beta,\gamma)\psi(\alpha,\beta,\gamma;t),
\end{equation}
with $\psi(t) \in L^2({\rm SO}(3))$ and $u(t) \in U$, for some neighborhood $U$ of $0$ in $\mathbb{R}^3$.
\subsection{Non-controllability of the quantum genuine symmetric-top}
We recall that the genuine symmetric-top molecule is a symmetric rigid body with electric dipole $\delta$ along the symmetry axis:
$\delta=(0,0,\delta_3)^T$ in the principal axis frame on the body. We then introduce the subspaces $S_k:=\overline{\mathrm{span}}\{D_{k,m}^j \mid j\in \mathbb{N}, m=-j,\dots,j\}$, where $\overline{\mathrm{span}}$ denotes the closure of the linear hull in $L^2({\rm SO}(3))$.
\begin{theorem}\label{genuine}
The quantum number $k$ is invariant in the controlled motion of the genuine symmetric-top molecule. That is, if $I_1=I_2$ and $\delta=(0,0,\delta_3)^T$, the subspaces $S_k$ are invariant for any propagator of the Schr{\"o}dinger equation (\ref{schro}).
\end{theorem}
\begin{proof}
We have to show that $H$ and $B_1,B_2,B_3$ do not couple different levels of $k$, that is,
\begin{equation}\label{alternativa}
\begin{cases}
\langle D_{k,m}^j,\mathrm{i} H D_{k',m'}^{j'}\rangle_{L^2({\rm SO}(3))}=0, \quad k \neq k', \\
\langle D_{k,m}^j, \mathrm{i} B_l D_{k',m'}^{j'}\rangle_{L^2({\rm SO}(3))}=0, \quad k \neq k',\; l=1,2,3.
\end{cases}
\end{equation}
The first equation of (\ref{alternativa}) is obvious since the orthonormal basis $\{D_{k,m}^j\}$ diagonalizes $H$.
Under the genuine symmetric-top assumption, the second equation of (\ref{alternativa}) is also true: for $l=1$ and $k\neq k'$ we compute
\begin{align*}
\langle D_{k,m}^j,&\mathrm{i} B_1 D_{k',m'}^{j'}\rangle_{L^2({\rm SO}(3))}\\ ={}& -\int_0^{2\pi}d\alpha\int_0^{2\pi}d\gamma\int_0^{\pi}d\beta \sin(\beta) D_{k,m}^j(\alpha,\beta,\gamma)\mathrm{i} B_1(\alpha,\beta,\gamma)\overline{D_{k',m'}^{j'}}(\alpha,\beta,\gamma) \\
={}&\mathrm{i} \delta_3\left(\int_0^{2\pi}d\gamma e^{\mathrm{i} k\gamma}e^{-\mathrm{i} k'\gamma}\right)\left(\int_0^{2\pi}d\alpha\cos(\alpha) e^{\mathrm{i} m\alpha}e^{-\mathrm{i} m'\alpha}\right)
\\ &\left(\int_0^{\pi}d\beta \sin^2(\beta)d_{k,m}^j(\beta)\overline{d_{k',m'}^{j'}}(\beta)\right) =0,
\end{align*}
using the orthogonality of the functions $e^{ik\gamma}$ and the explicit form (\ref{rotation}) of the matrix $R$, which yields
\[B_1(\alpha,\beta,\gamma)=-\langle \begin{pmatrix}
0\\
0\\
\delta_3
\end{pmatrix},R^{-1}(\alpha,\beta,\gamma)\begin{pmatrix}
1\\
0\\
0
\end{pmatrix}\rangle=-\delta_3 \sin \beta \cos\alpha.\]
The computations for $l=2,3$ are analogous, since the multiplicative potentials $B_l$ do not depend on $\gamma$.
\end{proof}
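As a numerical sanity check of the computation above, one can verify $B_1(\alpha,\beta,\gamma)=-\delta_3\sin\beta\cos\alpha$ directly. The sketch below assumes the standard $z$-$y$-$z$ Euler parametrization $R=R_z(\alpha)R_y(\beta)R_z(\gamma)$; since the convention of (\ref{rotation}) is not reproduced in this excerpt, that parametrization is an assumption of the sketch.

```python
import math

# Minimal sketch: check B_1 = -<delta, R^{-1} e_1> against -delta_3 sin(beta) cos(alpha),
# assuming the z-y-z Euler convention R = Rz(alpha) Ry(beta) Rz(gamma).

def Rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def Ry(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def B1(alpha, beta, gamma, delta3):
    R = matmul(Rz(alpha), matmul(Ry(beta), Rz(gamma)))
    # R^{-1} = R^T for rotations, so R^{-1} e_1 is the first row of R; only its
    # third component pairs with the dipole delta = (0, 0, delta3)^T.
    return -delta3 * R[0][2]

alpha, beta, gamma, delta3 = 0.7, 1.1, 2.3, 1.5
assert abs(B1(alpha, beta, gamma, delta3)
           + delta3 * math.sin(beta) * math.cos(alpha)) < 1e-12
```

Note in particular that $B_1$ is independent of $\gamma$, which is the key point used at the end of the proof.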
\begin{remark}
Equation (\ref{alternativa}) also shows that, for a genuine symmetric-top, the third component of the angular momentum $P_3$ commutes with $H$ and $B_l$, $l=1,2,3$, hence
\[
\Big[P_3,H+\sum_{l=1}^3u_lB_l\Big]=0, \quad \forall u\in U.
\]
Thus,
$\langle \psi(t),P_3\psi(t)\rangle$ is a conserved quantity, where $\psi$ is the solution of (\ref{schro}).
\end{remark}
\subsection{Controllability of the quantum accidentally symmetric-top}
So far we have studied the dynamics of a symmetric-top molecule with electric dipole moment along its symmetry axis and we have proven that its dynamics are trapped in
the eigenspaces of $P_3$.
Nevertheless, for applications to molecules charged in the laboratory, or to particular molecules present in nature such as $D_2S_2$ (Figure \ref{d2s2}) or $H_2S_2$, it is interesting to consider also the case in which the dipole
is not along
the symmetry axis: this case is called the \emph{accidentally symmetric molecule}.
Under a non-resonance condition, we are going to prove that, if the dipole moment is not orthogonal to the symmetry axis of the molecule, the rotational dynamics of an accidentally symmetric-top are approximately controllable. To prove this statement, we are going to apply
Theorem \ref{LGTC}
to \eqref{schro}.
\begin{theorem}\label{rare}
Assume that $I_1=I_2$ and $\frac{I_2}{I_3}\notin \mathbb{Q}$. If $\delta = (\delta_1,\delta_2,\delta_3)^T$ is such that $\delta \neq (0,0,\delta_3)^T$ and $\delta \neq (\delta_1,\delta_2,0)^T$, then system
(\ref{schro}) is an m-tracker, and in particular approximately controllable.
\end{theorem}
\begin{proof}
First of all, one can check, for example in \cite[Table 2.1]{gordy}, that the pairings induced by the interaction Hamiltonians satisfy
\begin{equation}\label{rules}
\langle D_{k,m}^j , \mathrm{i} B_l D_{k',m'}^{j'} \rangle=0,
\end{equation}
when $|j'-j|>1$, or $|k'-k|>1$ or $|m'-m|>1$, for every $l=1,2,3$. Equation (\ref{rules}) is the general form of the so-called selection rules.
We then define
for every $j \in \mathbb{N}$ the set $I_j:=\{\rho(l,k,m)\mid l=j,j+1, \; k,m=-l,\dots,l\}\subset \mathbb{N}$, where $\rho: \{(l,k,m) \mid l\in \mathbb{N},k,m=-l,\dots,l\}\rightarrow \mathbb{N}$ is the lexicographic ordering.
The graph $\mathcal{G}$ whose vertices are the sets $I_j$ and whose edges are
$\{(I_j,I_{j'})\mid I_j\cap I_{j'} \neq \emptyset\}=\{ (I_j,I_{j+1})\mid j\in\mathbb{N}\}$ is
linear.
In order to apply Theorem~\ref{LGTC} we
shall
consider the projection of \eqref{schro} onto each space $\mathcal{M}_j:=\mathcal{H}_j \oplus \mathcal{H}_{j+1}$, where $\mathcal{H}_l:= \mathrm{span}\{D_{k,m}^l \mid k,m=-l,\dots,l\}$. The dimension of $\mathcal{M}_j$ is $(2j+1)^2+(2(j+1)+1)^2$, and we identify $\mathfrak{su}(\mathcal{M}_j)$ with $\mathfrak{su}((2j+1)^2+(2(j+1)+1)^2)$.
According to \eqref{rules}, the three types of spectral gaps in $\mathcal{M}_j$, $j\in \mathbb{N}$, which we should consider are
\begin{equation}\label{usami}
\lambda_k^j:= |E_{k+1}^{j+1}-E_k^j| =\Big| \frac{j+1}{I_2}+\Big(\frac{1}{2I_3}-\frac{1}{2I_2}\Big)(2k+1)\Big|, \quad k=-j,\dots,j,
\end{equation}
corresponding to pairings for which both $j$ and $k$ move (see Figure \ref{lambda}),
\begin{equation}\label{usami1}
\eta_k:= |E_{k+1}^j-E_k^j| =\Big| \Big(\dfrac{1}{2I_3}-\dfrac{1}{2I_2}\Big)(2k+1) \Big|, \quad k=-j,\dots,j,
\end{equation}
and
\begin{equation}\label{usami2}
\sigma^j:= |E_k^{j+1}-E_k^j| = \dfrac{j+1}{I_2}, \quad k=-j,\dots,j,
\end{equation}
for which, respectively, only $k$ or $j$ moves (see
Figures \ref{transitionsfigure}\subref{eta} and \ref{transitionsfigure}\subref{sigma}).
\begin{figure}[ht!]\begin{center}
\includegraphics[width=0.5\linewidth, draft = false]{lambda.png}
\caption{Graph of the transitions associated with the frequency $\lambda_k^j$ between eigenstates $| j,k\rangle=| j,k,m\rangle:=D_{k,m}^j$ ($m$ fixed). Same-shaped arrows correspond to equal spectral gaps.} \label{lambda}
\end{center}\end{figure}
\begin{figure}[ht!]
\subfigure[]{
\includegraphics[width=0.47\linewidth, draft = false]{eta.png} \label{eta} }
\subfigure[]{
\includegraphics[width=0.47\linewidth, draft = false]{sigma.png} \label{sigma} }
\caption{Transitions between states:
\subref{eta} at frequency $\eta_k$; \subref{sigma} at frequency $\sigma^j$. Same-shaped arrows correspond to equal spectral gaps.} \label{transitionsfigure}\end{figure}
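The gap formulas \eqref{usami}--\eqref{usami2} follow from the symmetric-top spectrum and can be checked numerically. The sketch below assumes the standard eigenvalues $E_k^j=\frac{j(j+1)}{2I_2}+\big(\frac{1}{2I_3}-\frac{1}{2I_2}\big)k^2$ for (\ref{spectrum}), which is not reproduced in this excerpt, so that normalization is an assumption.

```python
import math

I2, I3 = 1.0, math.sqrt(2.0)          # sample moments of inertia, I1 = I2
A = 1.0 / (2 * I3) - 1.0 / (2 * I2)   # the coefficient (1/(2 I3) - 1/(2 I2))

def E(j, k):
    # assumed symmetric-top spectrum, see the remark in the lead-in
    return j * (j + 1) / (2 * I2) + A * k * k

for j in range(6):
    for k in range(-j, j + 1):
        # lambda_k^j: both j and k move
        lam = abs(E(j + 1, k + 1) - E(j, k))
        assert abs(lam - abs((j + 1) / I2 + A * (2 * k + 1))) < 1e-12
        # sigma^j: only j moves
        sig = abs(E(j + 1, k) - E(j, k))
        assert abs(sig - (j + 1) / I2) < 1e-12
    for k in range(-j, j):            # keep k+1 inside -j..j
        # eta_k: only k moves
        eta = abs(E(j, k + 1) - E(j, k))
        assert abs(eta - abs(A * (2 * k + 1))) < 1e-12
```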
We now classify the spectral gaps
in terms of the sets $\Xi_j^0$ and $\Xi_j^1$ introduced in Section~\ref{notations}.
\begin{lemma}
\label{ext}
Let $I_2/I_3\notin \mathbb{Q}$. Then $(\lambda_k^j,l), (\sigma^j,l)\in \Xi_j^0$, and $(\eta_k,l) \in \Xi_j^1$, for all $k=-j,\dots,j$, $l=1,2,3$.
\end{lemma}
\begin{proof}
Because of the selection rules \eqref{rules}, we only need to check if there are common spectral gaps in the spaces $\mathcal{M}_j$ and $\mathcal{M}_{j'}$ for $j'\ne j$.
We start by
proving that $(\lambda_k^j,l), (\sigma^j,l)\in \Xi_j^0$ by showing that
a spectral gap of the type $\lambda_k^j$ (respectively, $\sigma^j$)
is different from any spectral gap of the type $\lambda_{k'}^{j'}$, $\sigma^{j'}$, or
$\eta_{k'}$ unless $\lambda_k^j=\lambda_{k'}^{j'}$ and $(k,j)=(k',j')$ (respectively, $\sigma^j=\sigma^{j'}$ and $j=j'$).
Using the explicit structure of the spectrum (\ref{spectrum}), any spectral gap of the type
$\lambda_{k'}^{j'}$, $\sigma^{j'}$, or
$\eta_{k'}$ can be written as
\[ \Big|\frac{q_1}{I_2}+q_2\Big(\dfrac{1}{I_3}-\dfrac{1}{I_2}\Big)\Big|,\qquad q_1,q_2\in \mathbb{Q}.\]
Since, moreover,
$\frac {1}{I_2}$ and $\left(\frac1{I_3}-\frac1{I_2}\right)$ are $\mathbb{Q}$-linearly independent,
one easily deduces that, indeed, $(\lambda_k^j,l), (\sigma^j,l)\in \Xi_j^0$.
Notice that the gaps of the type
$\eta_k$ correspond to internal pairings in the spaces ${\cal H}_j$. Hence,
in order to prove that
$(\eta_k,l) \in \Xi_j^1$ it is enough to check that $\eta_k$ is different from any gap of the type
$\lambda_{k'}^j,\sigma^j$. This fact has already been noticed
in the proof of the first part of the statement. The proof of the lemma is then concluded.
\end{proof}
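The $\mathbb{Q}$-linear-independence argument can be illustrated by a brute-force check for small $j$, with the hypothetical choice $I_2=1$ and $I_3=\sqrt{2}$ (so that $I_2/I_3\notin\mathbb{Q}$): gaps from different families indeed never coincide.

```python
import math
from itertools import product

I2, I3 = 1.0, math.sqrt(2.0)          # I2/I3 irrational, hypothesis of the lemma
A = 1.0 / (2 * I3) - 1.0 / (2 * I2)

J = 8                                 # brute-force bound on j
lams = {(j, k): abs((j + 1) / I2 + A * (2 * k + 1))
        for j in range(J) for k in range(-j, j + 1)}
etas = {k: abs(A * (2 * k + 1)) for k in range(-J, J)}
sigs = {j: (j + 1) / I2 for j in range(J)}

tol = 1e-9
# cross-family gaps never coincide (a small-j illustration of the lemma)
for lv, ev in product(lams.values(), etas.values()):
    assert abs(lv - ev) > tol
for lv, sv in product(lams.values(), sigs.values()):
    assert abs(lv - sv) > tol
for ev, sv in product(etas.values(), sigs.values()):
    assert abs(ev - sv) > tol
```

Within the $\eta$ family the degeneracy $\eta_k=\eta_{-k-1}$ does occur; it corresponds to the $s=\pm k$ ambiguity appearing in the lemma below.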
Next, we introduce the family of
excited modes
associated with the spectral gap $\lambda_k^j$, that is,
\[
\mathcal{F}_j:=\{\mathcal{E}_{\lambda_k^j}(\mathrm{i} B_l), W_\mathrm{i}(\mathcal{E}_{\lambda_k^j}(\mathrm{i} B_l)) \mid l=1,2,3, \; k=-j,\dots,j \},
\]
where the operators $\mathcal{E}_\mu$ and $W_\xi$
are defined in
Section~\ref{notations}, and where, with a slight abuse of notation, we write $B_l$ instead of $\Pi_{\mathcal{M}_j}B_l\Pi_{\mathcal{M}_j}$.
Notice that $\mathcal{F}_j\subset \nu_j^0$ as it follows from Lemma~\ref{ext}, where $\nu_j^0$
is defined as in \eqref{eq:modes}.
In order to write down the matrices in $\mathcal{F}_j$, we need to study the resonances
between the spectral gaps inside $\mathcal{M}_j$.
We claim that there are no
internal resonances except those due to
the degeneracy
$E_k^j=E_{-k}^j$.
Indeed, we
already noticed in Lemma~\ref{ext}
that a spectral gap of the type $\lambda_k^j$
is different from any spectral gap of the type $\lambda_{k'}^{j'}$, $\sigma^{j'}$, or
$\eta_{k'}$ unless $\lambda_k^j=\lambda_{k'}^{j'}$ and $(k,j)=(k',j')$.
We collect in the lemma below also the similar observations that
$\sigma^j$ is different from any spectral gap of the type $\lambda_{k'}^{j'}$, $\sigma^{j'}$, or
$\eta_{k'}$ unless
$\sigma^j=\sigma^{j'}$ and $j=j'$, and that $\eta_{k}\ne \eta_{k'}$ if $k\ne k'$.
\begin{lemma}
\label{important}
Let $I_2/I_3\notin \mathbb{Q}$. Then
\begin{enumerate}
\item\textbf{$\lambda_k^j$-resonances:} the equation
\begin{equation*}
|E_{k+1}^{j+1}-E_k^j|=|E_{s+h}^{j''}-E_{s}^{j'}|,\ j\le j'\leq j''\le j+1,\ -j'\le s\le j',\ h\in \{-1,0,1\},
\end{equation*}
implies that $j'=j$, $j''=j+1$, $s=\pm k$, $s+h=\pm(k+1)$;
\item\textbf{$\eta_k$-resonances:} the equation
\begin{equation*}
|E_{k+1}^j-E_k^j|=|E_{s+h}^{j''}-E_{s}^{j'}|,\ j\le j'\leq j''\le j+1,\ -j'\le s\le j',\ h\in \{-1,0,1\}, \end{equation*}
implies that $j'=j''=j$ or $j'=j''=j+1$ and $s=\pm k$, $s+h=\pm(k+1)$;
\item\textbf{$\sigma^j$-resonances:} the equation
\begin{equation*}
|E_k^{j+1}-E_k^j|=|E_{s+h}^{j''}-E_{s}^{j'}|,\ j\le j'\leq j''\le j+1,\ -j'\le s\le j',\ h\in \{-1,0,1\}, \end{equation*}
implies that $j'=j$, $j''=j+1$, $h=0$, $s=\pm k$.
\end{enumerate}
\end{lemma}
Denote by $\mathrm{L}_j:=\mathrm{Lie}(\mathcal{F}_j)$ the Lie algebra generated by the matrices in $\mathcal{F}_j$. Let us introduce the generalized Pauli matrices
$$G_{j,k}=e_{j,k}-e_{k,j}, \quad F_{j,k}=\mathrm{i} e_{j,k}+\mathrm{i} e_{k,j},\quad D_{j,k}=\mathrm{i} e_{j,j}-\mathrm{i} e_{k,k},$$
where $e_{j,k}$ denotes the $(2j+1)^2+(2(j+1)+1)^2$-square matrix whose entries are all zero, except the one at row $j$ and column $k$, which is equal to $1$. Consider again the lexicographic ordering $\rho:\{(l,k,m) \mid l=j,j+1,\;k,m=-l,\dots,l\}\rightarrow \mathbb{N}$. By a slight abuse of notation, also set $e_{(l,k,m),(l',k',m')}=e_{\rho(l,k,m),\rho(l',k',m')}$. The analogous identification can be used to define $G_{(l,k,m),(l',k',m')}, F_{(l,k,m),(l',k',m')}, D_{(l,k,m),(l',k',m')}$.
The next proposition describes what the elements of $\mathrm{L}_j$ look like. For a
proof, see Appendix~\ref{appendixA}.
\begin{proposition}
\label{propA}
Let $m=-j,\dots,j$ and $k=-j,\dots,j$ with $k\neq 0$.
Then the matrices $X_{(j,k,m),(j+1,k+1,m)}-X_{(j,-k,m),(j+1,-k-1,m)}$ and $X_{(j,k,m),(j+1,k+1,m\pm1)}-X_{(j,-k,m),(j+1,-k-1,m\pm1)}$ are in $\mathrm{L}_j$, where $X\in \{G,F\}$.
\end{proposition}
To break the degeneracy between $k$ and $-k$ which appears in the
matrices that we found in
Proposition~\ref{propA}, and obtain all the elementary matrices that one needs to generate $\mathfrak{su}(\mathcal{M}_j)$, we need to exploit the other two types of spectral gaps that we
have introduced in \eqref{usami1} and \eqref{usami2} (see Figure \ref{transitionsfigure}).
Let us introduce the family of
excited modes at the frequencies $\sigma^j$ and $\eta_k$,
\begin{align*}
\mathcal{P}_j:=\{\mathcal{E}_{\sigma^j}(\mathrm{i} B_l), W_\mathrm{i}(\mathcal{E}_{\sigma^j}(\mathrm{i} B_l)),\mathcal{E}_{\eta_k}(\mathrm{i} B_l), W_\mathrm{i}(\mathcal{E}_{\eta_k}(\mathrm{i} B_l)) \mid l=1,2,3, \; k=-j,\dots,j\},
\end{align*}
and notice that, by Lemma~\ref{ext},
$\mathcal{P}_j \subset \nu_j^1$ (cf.~\eqref{eq:modes}).
Therefore,
\[
\widetilde{\mathcal{P}}_j:=\{A, [B,C] \mid A,B \in \mathrm{L}_j, C \in \mathcal{P}_j\} \subset \mathcal{T}_j,
\]
where
we recall that
$\mathcal{T}_j$ is the minimal ideal of $ \mathrm{Lie}(\nu_j^1)$ containing $\nu_j^0$.
The following proposition, whose proof is given in Appendix~\ref{appendixB}, concludes the proof of Theorem~\ref{rare}.
\begin{proposition}\label{su}
$\mathrm{Lie}(\widetilde{\mathcal{P}}_j)=\mathfrak{su}(\mathcal{M}_j)$.
\end{proposition}
\end{proof}
\begin{remark}
The assumption $I_2/I_3\notin \mathbb{Q}$ on the moments of inertia appearing in Theorem~\ref{rare} is technical, and prevents the system from having both external resonances (as we saw in Lemma~\ref{ext}) and internal ones (
Lemma~\ref{important}). However, we have not proven that controllability fails if the ratio $I_2/I_3$ is
rational.
\end{remark}
\subsection{Reachable sets of the quantum genuine symmetric-top}
In (\ref{alternativa}) we see that, when $\delta = (0,0,\delta_3)^T$, transitions $k \rightarrow k'$ are forbidden if $k \neq k'$. Thus, if the quantum system is prepared in the initial state $\psi(0)$ with $P_3 \psi(0)=k\psi(0)$, the wave function $\psi$ evolves in the subspaces $S_k=\overline{\mathrm{span}}\{D_{k,m}^j \mid j \in \mathbb{N},m=-j,\dots,j\}$. The next theorem tells us that
the restriction of (\ref{schro}) to this subspace is approximately controllable.
\begin{theorem}
Let $I_1=I_2$ and fix $k\in \mathbb{Z}$. If $\delta = (0,0,\delta_3)^T$, $\delta_3\neq 0$, then the Schr{\"o}dinger equation (\ref{schro}) is an m-tracker in the Hilbert space $S_k$. In particular, $\mathrm{Reach}(\psi)$ is dense in $S_k \cap \mathcal{S}$ for all $\psi \in S_k \cap \mathcal{S}$.
\end{theorem}
\begin{proof}
For every integer $j \ge |k|$, let $I_{j,k}:=\{\rho(l,m)\mid l=j,j+1, \; m=-l,\dots,l\}$, where $\rho: \{(l,m) \mid l\ge |k|,\;m=-l,\dots,l\}\rightarrow \mathbb{N}$ is the lexicographic ordering.
Then the graph $\mathcal{G}_k$ with vertices $\{I_{j,k}\}_{j=|k|}^\infty$ and edges
$\{(I_{j,k},I_{j',k})\mid I_{j,k}\cap I_{j',k} \neq \emptyset\}$ is linear.
In order to apply Theorem~\ref{LGTC} to the restriction of \eqref{schro} to $S_k$, we should consider the projected dynamics onto $\mathcal{N}_{j,k}:= \mathcal{L}_{j,k} \oplus \mathcal{L}_{j+1,k} $, where $\mathcal{L}_{l,k}:= \mathrm{span}\{D_{k,m}^l\mid m=-l,\dots,l\}$.
The only spectral gaps in $S_k$ are $\sigma^j=|E_k^{j+1}-E_k^j|=\frac{j+1}{I_2}$, $j\ge |k|$. Notice that $(\sigma^j,l)\in \Xi_j^0$.
We write the electric potentials projected onto $\mathcal{N}_{j,k}$:
\begin{align*}
&\mathcal{E}_{\sigma^j}(\mathrm{i} B_1)=\sum_{m=-j,\dots,j}a_{j,k,m}\delta_3G_{(j,k,m),(j+1,k,m+1)}+a_{j,k,-m}\delta_3G_{(j,k,m),(j+1,k,m-1)}, \\
&\mathcal{E}_{\sigma^j}(\mathrm{i} B_2)=\sum_{m=-j,\dots,j}a_{j,k,m}\delta_3F_{(j,k,m),(j+1,k,m+1)}-a_{j,k,-m}\delta_3F_{(j,k,m),(j+1,k,m-1)}, \\
&\mathcal{E}_{\sigma^j}(\mathrm{i} B_3)=\sum_{m=-j,\dots,j}-b_{j,k,m}\delta_3F_{(j,k,m),(j+1,k,m)},
\end{align*}
having used the explicit pairings \eqref{kk}, which can be found in Appendix~\ref{appendixB}, and which describe the transitions excited by the frequency $\sigma^j$.
Note that here the sum does not run over $k$ since we are considering the dynamics restricted to $S_k$.
We consider the family of excited modes
$$\mathcal{F}_{j,k}=\{\mathcal{E}_{\sigma^j}(\mathrm{i} B_l), W_\mathrm{i}(\mathcal{E}_{\sigma^j}(\mathrm{i} B_l)) \mid l=1,2,3\}\subset \nu_j^0.$$
We claim that the Lie algebra generated by $\mathcal{F}_{j,k}$, seen as a subset of $\mathfrak{su}((2j+1)^2+(2(j+1)+1)^2)$, is equal to $\mathfrak{su}((2j+1)^2+(2(j+1)+1)^2)$.
Such an identity has been proved in \cite[Section 3.3]{BCS}, since the projection
to $\mathcal{N}_{j,k}$ is isomorphic to an analogous projection for the linear molecule. Hence, we conclude that system \eqref{schro} is an m-tracker in $S_k$.
\end{proof}
\subsection{Non-controllability of the quantum orthogonal accidentally symmetric-top}
Let us consider separately the case where $\delta=(\delta_1,\delta_2,0)^T$, left out by Theorem~\ref{rare}.
The situation in which the dipole lies in the plane orthogonal to the symmetry axis of the molecule (that is, the \emph{orthogonal} accidentally symmetric-top) is interesting from the point of view of chemistry, since the accidentally symmetric-top molecules present in nature are usually of that kind (see Figure \ref{d2s2}).
\begin{figure}[ht!]\begin{center}
\includegraphics[width=0.3\linewidth, draft = false]{accidentally.png}
\caption{Diagram of the orthogonal accidentally symmetric-top approximation of the molecule $D_2S_2$. The electric dipole $\delta$ lies in the plane orthogonal to the symmetry axis.}\end{center} \label{d2s2}
\end{figure}
In order to study this problem, let us introduce the Wang functions
{\cite[Section 7.2]{gordy}}
\[
S_{0,m,0}^j:=D_{0,m}^j,\qquad
S_{k,m,\gamma}^j:=\dfrac{1}{\sqrt{2}}(D_{k,m}^j+(-1)^\gamma D_{-k,m}^j), \quad k=1,\dots,j,
\]
for $j\in \mathbb{N}$, $m=-j,\dots,j$, and $\gamma=0,1$. Due to the $k$-degeneracy $E_k^j=E_{-k}^j$ in the spectrum of the rotational Hamiltonian $H$, the functions $S_{k,m,\gamma}^j$ still form an orthogonal basis of eigenfunctions of $H$. Then we consider the change of basis $D_{k,m}^j \rightarrow e^{-\mathrm{i} k\theta}D_{k,m}^j$, and we choose $\theta \in [0,2\pi)$ such that
\begin{equation}\label{thetachange}
\begin{cases}
e^{-\mathrm{i} \theta}(\delta_2+\mathrm{i}\delta_1)=\mathrm{i} \sqrt{\delta_1^2+\delta_2^2}, & \\
e^{\mathrm{i} \theta}(\delta_2-\mathrm{i}\delta_1)=-\mathrm{i} \sqrt{\delta_1^2+\delta_2^2}. &
\end{cases}
\end{equation}
System \eqref{thetachange} describes the rotation of angle $\mp \theta$ in the complex plane of the vector $\delta_2\pm \mathrm{i}\delta_1$.
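Concretely, one admissible closed form is $\theta=\arg(\delta_2+\mathrm{i}\delta_1)-\pi/2$ (mod $2\pi$); this explicit formula is our own illustration rather than part of the text, and it can be checked numerically.

```python
import cmath
import math

delta1, delta2 = 0.8, -1.3            # sample dipole components, delta3 = 0
r = math.hypot(delta1, delta2)

# hypothetical closed form for the angle solving (thetachange)
theta = cmath.phase(delta2 + 1j * delta1) - math.pi / 2

assert abs(cmath.exp(-1j * theta) * (delta2 + 1j * delta1) - 1j * r) < 1e-12
assert abs(cmath.exp(1j * theta) * (delta2 - 1j * delta1) + 1j * r) < 1e-12
```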
The composition of these two changes of basis gives us the rotated Wang states $S_{k,m,\gamma}^j(\theta):=\frac{1}{\sqrt{2}}(e^{-\mathrm{i} k\theta}D_{k,m}^j+(-1)^\gamma e^{\mathrm{i} k\theta} D_{-k,m}^j)$, for $k\neq 0$, and $S_{0,m,0}^j=D_{0,m}^j$.
In the next theorem
we express in this new basis
a symmetry which prevents the system from being approximately controllable.
\begin{theorem}\label{accidentally}
Let $I_1=I_2$ and $\delta = (\delta_1,\delta_2,0)^T$.
Then the parity of $j+\gamma+k$ is conserved, that is,
the spaces $\overline{\mathrm{span}}\{S_{k,m,\gamma}^j\mid j+\gamma+k\mbox{ is odd}\}$ and $\overline{\mathrm{span}}\{S_{k,m,\gamma}^j\mid j+\gamma+k\mbox{ is even}\}$
are invariant for the propagators of \eqref{schro}.
\end{theorem}
\begin{proof}
We need to prove that the pairings allowed by the controlled vector fields $B_1,B_2$ and $B_3$ conserve the parity of $j+\gamma+k$. To do so, let us compute
\begin{align}
\langle S_{k,m,\gamma}^j(\theta),\mathrm{i} B_1 S_{k+1,m+1,\gamma}^{j+1}(\theta)\rangle&=-c_{j,k,m}e^{-\mathrm{i} \theta}(\delta_2+\mathrm{i}\delta_1)+c_{j,k,m}e^{\mathrm{i} \theta}(\delta_2-\mathrm{i}\delta_1)\nonumber\\
&=-2\mathrm{i} c_{j,k,m}\sqrt{\delta_1^2+\delta_2^2},\label{pairingwang1} \\
\langle S_{k,m,\gamma}^j(\theta),\mathrm{i} B_1 S_{k+1,m+1,\gamma'}^{j+1}(\theta)\rangle&=-c_{j,k,m}e^{-\mathrm{i} \theta}(\delta_2+\mathrm{i}\delta_1)-c_{j,k,m}e^{\mathrm{i} \theta}(\delta_2-\mathrm{i}\delta_1)\nonumber\\
&=0, \qquad \gamma\neq \gamma', \nonumber
\end{align}
having used the expression of the Wang functions as linear combinations of Wigner functions, the explicit pairings (\ref{rotatedwigner}) which can be found in Appendix~\ref{appendixA}, and the choice of $\theta$ made in (\ref{thetachange}). Then we also have
\begin{equation}\label{pairingwang2}
\begin{cases}
\langle S_{k,m,\gamma}^j(\theta),\mathrm{i} B_1 S_{k+1,m+1,\gamma}^{j}(\theta)\rangle=0, & \\
\langle S_{k,m,\gamma}^j(\theta),\mathrm{i} B_1 S_{k+1,m+1,\gamma'}^{j}(\theta)\rangle=-2\mathrm{i} h_{j,k,m} \sqrt{\delta_1^2+\delta_2^2}, \quad \gamma\neq\gamma', &
\end{cases}
\end{equation}
having used this time the pairings (\ref{jj}), which can be found in Appendix~\ref{appendixB}. From (\ref{pairingwang1}) and (\ref{pairingwang2}) we can see that the allowed transitions only depend on the parity of $j+\gamma$ and $k$; indeed, we have either transitions between states of the form
\[
\begin{cases}
j+\gamma & \text{even} \\
k & \text{even} \\
\end{cases}\longleftrightarrow \begin{cases}
j'+\gamma' & \text{odd} \\
k' & \text{odd},
\end{cases}
\]
or transitions between states of the form
\[
\begin{cases}
j+\gamma & \text{even} \\
k & \text{odd} \\
\end{cases}\longleftrightarrow \begin{cases}
j'+\gamma' & \text{odd} \\
k' & \text{even}.
\end{cases}
\]
The same happens if we replace $m+1$ with $m-1$ and $k+1$ with $k-1$ in \eqref{pairingwang1} and \eqref{pairingwang2}. Because of the selection rules (\ref{rules}), these are the only transitions allowed by the field $B_1$.
One can easily check, in the same way, that every transition induced by $B_2,B_3$ also conserves the parity of $j+\gamma+k$. \end{proof}
\section{The pure glueball meson is not necessary}
In current opinion the quark-gluon picture \cite{Fri} well describes strong
interactions. According to this picture the mesons are built out of quark (q) and
antiquark ($\bar{q}$) which are coupled by gluon ($g$) exchange. It is supposed
that this picture is valid for mesons having any signature $J^{PC}$ and in any
mass region. This feature is described as the \it{universality}
\rm of the quark-gluon picture. The hypothetical $g$ is a quark-less, flavor-less,
electrically neutral particle. It has the property of self-interaction, which
implies the existence of bound states of two or more gluons. Such an object is
called a glueball (G) and is a singlet state of SU(3) symmetry. Hence, G can
interfere with $q\bar{q}$ singlet and isoscalar octet states.
The quark-gluon picture of strong interaction would be confirmed by the existence
of G. This stimulates its experimental search. However, the hypothesized
properties of G are too scanty for the needs of experimental investigation. The
investigation turned out to be very difficult and for almost half a century
has not provided a satisfactory result. The desired outcome would be to
detect a separate particle that is a pure G state. Such an object has not been
found so far, but in the course of the investigation a considerable collection of particles
which possibly are not $q\bar{q}$ states was discovered. These particles are not
pure G but probably include a considerable G component (see \cite{Clo} for
the most recent reviews). The quest
for G still continues.
Although the pure G meson is not observed, one cannot claim that G states do
not exist. This may simply mean that the particles which are pure G states are
unobservable. To be observable, G must interact with other hadrons. For this to be
the case,
it should have the ability to mix with a larger unitary multiplet. Being mixed with
a nonet of $q\bar{q}$ states, it forms a decuplet. Then it is subjected
to the restrictions imposed on decuplet components by SU(3) symmetry.
G can be detected as one of the three interfering unphysical components creating
the physical isoscalar states of the decuplet. If it dominates one of the
states then this state may be considered as the "G candidate". Hence, the
existence of the G state can be established on the grounds of unitary
symmetry, and observation of a pure G is not necessary.
The unitary symmetry is perceptible due to the property of mesons to form
multiplets which are collections of particles having different but definite flavors and
the same signature $J^{PC}$. They form octets (O), nonets (N) and decuplets (D).
Each multiplet is described by some representation of the unitary symmetry
$SU(3)$. The $SU(3)$ multiplets are analogous to the $SU(2)$ ones describing
electromagnetic interaction. However, the mass differentiation within SU(2)
submultiplets of SU(3) multiplets is usually neglected; therefore, this
symmetry is considered exact. In the strong interactions
the multiplets gather the particles having \it{a priori different}
\rm masses. We call them "multiplets of broken $SU(3)$ symmetry".
It is believed that the symmetry breaking signals the interaction.
Several interactions can break $SU(3)$ symmetry. The most apparent effect
of breaking is the difference between the isotriplet and isodoublet masses of the
particles belonging to the same multiplet (e.g. $\pi$ and $K$ mesons). This
difference is attributed to the nonperturbative $g$ interaction and cannot be
calculated. However, the effect of this interaction can be described in
the phenomenological approach by experimentally verifiable Gell-Mann - Okubo
(GMO) formula for octet mesons. This formula fits the data and has the property
of universality. We call this procedure the GMO-breaking
\footnote{$K-a$ determines all differences between masses of the octet
states as $3(x_8-K)=K-a$}.
Other breakings are much weaker and are masked by the GMO one. They can
be exhibited if the description of the multiplet complies with the GMO requirement.
Such a GMO-restricted multiplet is named the multiplet of \it{flavor} symmetry.
\rm The effect of its breaking is described as an \it{anomaly}. \rm The anomalies
can be observed in flavor multiplets larger than the octet. Anomalies of flavor
symmetry comprise information on the unknown interactions which we want to
investigate. They can be observed through an anomaly of the multiplet mass pattern.
However, they can also be seen through a pattern difference between two (or more)
identical multiplets differing only by $J^{PC}$ signatures or belonging to
various mass regions. Perhaps the investigation of anomalies is the most direct
way from observation to understanding.
We begin with the question of how the anomalies can be recognized. In the next two
sections we recall the VEC
description of the light meson (LM) multiplets and
define the benchmark multiplet which provides the pattern for anomalies search.
\section{VEC description of light meson multiplets}
The approach refers to phenomenological investigations performed during the eighties
of the twentieth century. They were devoted to the search for G as an object causing
deformation of $2^{++}$ and $0^{-+}$ nonet structures \cite{Ros}.
These attempts did not clarify much as they were premature. However, the very idea
of such a line of investigation cannot be questioned. The problem is in its actual
realization. Now we have a much larger sample of data and more tools for their
analysis. Therefore, it is probably a right time to come back to these ideas.
\newpage
An opportunity for such an approach to be successful is provided by the model of
VEC\footnote{Model of vanishing exotic commutators (formerly described as ECM model
\cite {TM})}
which describes all multiplets of LM using the system of \it{master equations} \rm(ME)
\cite{TM,MT,Loc}
\begin{equation} \label{A}
\sum_{i=1}^n l_i^2x_i^r=\frac{1}{3}a^r +\frac{2}{3}b^r, \quad b\doteq 2K-a,
\quad r=0,1,2,\ldots
\end{equation}
where $r$ is the power index. The particle symbols $x_i$, $a$, $K$
stand for the masses squared of the physical mesons, and $l_i$ is the amplitude of the octet
content of the isoscalar state $x_i$:
\begin{equation}
|x_8\rangle=\sum_{i=1}^nl_i|x_i\rangle. \label{C}
\end{equation}
Since isoscalar octet state $x_8$ and physical $x_i$ states describe uncharged
states, the $l_i$ are real numbers, hence
\begin{equation}
l_i^2> 0, \qquad i=1,2,\ldots n. \label{D}
\end{equation}
The numbering of the isoscalar mesons is chosen so that $i$
increases with growing mass:
\begin{equation}
x_i<x_{i+1}. \label{E}
\end{equation}
ME are linear with respect to unknown $l_i^2$'s. The solutions of such systems
of equations are well known. We are looking for solutions which satisfy the
conditions of positivity (\ref{D}). The $l_i^2$'s are functions of masses
$a, b, x_1,...,x_n$. Conditions (\ref{D}), which restrict these masses, help to
test the affiliation of a given set of particles with the supposed multiplet. Thus
they provide a criterion for the existence of the relevant multiplet.
Knowing the positive solution $l_i^2$ of ME we can diagonalize the mass operator
of the multiplet and determine its wave function. We thus have fully described
the broken SU(3) multiplet in terms of physical masses \cite{Loc}.
If we want to describe the \it{flavor multiplet} \rm we should take into account
the GMO-breaking. As argued above, this can be achieved at
phenomenological level of description by accepting observed values of $a$ and
$b$. This property distinguishes the $(a, b)$ from other masses and appoints
them to the role of basic input. Below we also use the property that for a given
pair $(a,b)$ ME may predict the existence of multiplets including different
numbers of states $x_i$. Therefore, the masses $(a,b)$ can be viewed as
"theory constants" and the $x_i$'s as "parameters".
VEC predicts the existence of D \cite{Loc,Where}.
The mesons $a$, $b$, $x_1$, $x_2$, $x_3$ belong to $D$ if they fit criteria
(\ref{D}). Then the mass operator can be diagonalized and the wave function
of $D$ can be constructed. The wave function is expressed completely in
terms of masses of $D$ particles. The $G$ state can be distinguished and its
unphysical mass can be determined \cite{Loc}.
The determination of wave function requires very accurate data on masses.
This is the merit of description, not its fault, as it transforms small
mass differences into remarkable variation of the D shape. However, excessive
sensitivity can weaken predictive power of the procedure. Therefore it is
desirable to have also simpler criteria. They can be also formulated within
the VEC model. We explain below how this can be done but begin by
presenting some further features of ME which justify the procedure we
propose.
\section{Varieties of flavor multiplets}
VEC predicts several multiplets which arise from solution of ME. The
description of multiplets which include $n$ isoscalar mesons $x_i$ requires
solving of the ME with respect to unknown quantities $l_i^2$. To calculate the
$l_i^2$'s we need the set of equations ME for $r=0, 1, \dots,(n-1)$. However,
this is merely the minimal system of ME describing this multiplet.
The same multiplet can be described by larger ME system provided this
system satisfies some solvability conditions. We use a particular form of
such conditions which is suggested by open structure of the ME set. We take
into account subsequent ME for $r=n, n+1,\ldots$. The calculated $l_i^2$,
expressed as the functions of the multiplet masses, should be inserted
into these equations. It may happen that one of the equations (say, for $r=n$)
is satisfied by these masses. Then this equation becomes the mass formula (MF)
of the multiplet. Obviously, a multiplet may have more than one MF.
The MF arises due to a restriction on the masses of the $x_i$. Therefore, the
number of MF cannot exceed $n$. The actual number $k$ of MF ($0\leq k\leq n$)
should be determined from data fit for each multiplet separately. The
number of ME to be considered for such a multiplet is $n+k-1$.
The multiplets built on the same base (a,b) and having the same $n$ but
different $k$ are independent and have different patterns. They are
considered as \it{varieties} \rm of the same multiplet and marked
by its currently used name indexed by $k$. The very existence of
different varieties of the multiplet testifies to the existence of various
interactions influencing the structure of the multiplet.
There may exist three $N$ multiplets ($N_0, N_1, N_2$)
and two $D$ multiplets $(D_0, D_1)$ which can be built on the same
$(a,b)$ basis.
\section{Ideal nonet as a pattern}
A nonet arises due to the mixing of the octet isoscalar state with an SU(3)
singlet and is described as the ME multiplet for $n=2$. Three long-standing
varieties of N are known: \\
$N_0$ - known as Gell-Mann - Okubo (GMO-nonet) having no MF\\
$N_1$ - described as Schwinger (S-nonet) having one MF\\
$N_2$ - ideal (I-nonet) having two MF \\
The I nonet is described by system of ME:
\begin{subequations}\label{F}
\begin{align}
l^2_1+l^2_2&=1,\\
l^2_1x_1+l^2_2x_2&=\frac{1}{3}a +\frac{2}{3}b,\\
l^2_1x^2_1+l^2_2x^2_2&=\frac{1}{3}a^2 +\frac{2}{3}b^2,\\
l^2_1x^3_1+l^2_2x^3_2&=\frac{1}{3}a^3 +\frac{2}{3}b^3.
\end{align}
\end{subequations}
Solving the first two equations we calculate $l^2_1$, $l^2_2$ as the functions
of masses. Next, substituting $l_i^2$'s into third and fourth equations
we obtain two MF's which determine the masses of mesons $x_1$, $x_2$.
These MF's can be transformed into a more familiar form:\\
\begin{equation} \label{G}
x_1=a, \quad x_2=b, \quad l^2_1=\frac{1}{3}, \quad l^2_2=\frac{2}{3}; \\
\quad |x_1> = |a>, \quad |x_2> =|b>.
\end{equation}
where $|a>$, $|b>$ are the nonet basic states:
\begin{equation} \label{H}
a=\frac{1}{\sqrt{2}}(u\bar{u}+d\bar{d}), \quad\ b=s\bar{s}. \\
\end{equation}
This solution is determined by the GMO breaking mechanism acting alone
on the nonet states. It describes the shape of the nonet in a very simple
way and is universal. Therefore, it can be used as a pattern for searching
anomalies of flavor multiplets.
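The solution (\ref{G}) can be checked against the master equations by direct substitution. The sketch below does this in exact rational arithmetic for illustrative (hypothetical) values of the masses squared $a$, $b$; since $x_1=a$ and $x_2=b$ with weights $1/3$ and $2/3$, every ME holds for all $r$, not only the four equations of (\ref{F}).

```python
from fractions import Fraction as F

a, b = F(1), F(2)                     # illustrative masses squared (hypothetical)
x1, x2 = a, b                         # ideal-nonet solution (G)
l1sq, l2sq = F(1, 3), F(2, 3)

for r in range(8):                    # the four ME (F) and beyond
    lhs = l1sq * x1**r + l2sq * x2**r
    rhs = F(1, 3) * a**r + F(2, 3) * b**r
    assert lhs == rhs
```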
\section{$D_1$ contents of the S-nonet components}
$\bullet$ $D$ includes three physical isoscalar states $x_i$ which can be
represented as superpositions of the basis states $a$, $b$, $G$.
The multiplet $D_1$ is determined by the solution $l_i^2$ $(i=1,2,3)$ of the
system of four ME
\begin{subequations}
\begin{align}\label{J}
l^2_1+l^2_2+l_3^2&=1,\\
l^2_1x_1+l^2_2x_2+l^2_3x_3&=\frac{1}{3}a +\frac{2}{3}b,\\
l^2_1x^2_1+l^2_2x^2_2+l_3^2x^2_3&=\frac{1}{3}a^2 +\frac{2}{3}b^2,\\
l^2_1x^3_1+l^2_2x^3_2+l^2_3x^3_3&=\frac{1}{3}a^3 +\frac{2}{3}b^3.
\end{align}
\end{subequations}
One can show that $D_1$ can be represented as a superposition of $N_2$ and some
SU(3) singlet. The nature of the singlet is undetermined. The VEC requirements
accept all ``extending singlets'' which are usually mentioned, such as $G$,
hybrid, or multiquark states. The singlet $G$ is favored, as only this state
is supposed to have the property of universality. The MF is:
\begin{equation} \label{K}
(x_1-a)(x_2-a)(x_3-a)+2(x_1-b)(x_2-b)(x_3-b)=0.
\end{equation}
Combining (\ref{K}) with the criterion (\ref{D}) we find that the masses of
$D_1$ are subject to a further restriction: they have to satisfy the mass
ordering rule (MOR) \cite{Loc}
\begin{equation} \label{Ja}
x_1<a<x_2<b<x_3. \quad (MOR-D_1)
\end{equation}
MOR-$D_1$ divides the accessible region of the $x_i$ mesons into three
isolated subregions which are separated by $a$ and $b$.
In each of the subregions the states $x_i$ are uniformly dominated by $a$,
$G$, $b$, respectively:
\begin{equation} \label{Jb}
x_1 \sim a, \quad x_2 \sim G, \quad x_3 \sim b.
\end{equation}
Therefore, it is convenient to introduce another notation
\begin{equation} \label{Jc}
x_a\doteq x_1,\quad x_G\doteq x_2,\quad x_b\doteq x_3,
\end{equation}
which makes MOR-$D_1$ still more transparent:
\begin{equation} \label{L}
x_a <a<x_G<b<x_b.
\end{equation}
$\bullet$ The S-nonet ($N_1$) is considered to be a firmly established
multiplet. It has been announced for many $J^{PC}$ mesons and dominates the
perception of LM spectroscopy. However, the constituents of its isoscalar
components and the diversity of S-nonet shapes remain vague. We argue that
these problems arise from $G$ mixing.
$N_1$ is described by the first three equations of the system (\ref{F}). Its MF is
\begin{equation} \label{I}
(a-x_1)(a-x_2)+2(b-x_1)(b-x_2)=0.
\end{equation}
As the S-nonet has one MF, we need extra information on the masses of the
$x_1$, $x_2$ mesons for evaluating $l_1^2$, $l_2^2$. One can use for that the
known value of one of the masses and calculate the other one with the help of
the MF. We can see that the pair of masses determined this way is different
from the values of the masses of the I-nonet (\ref{G}). The change of the
S-nonet masses $x_1$, $x_2$ relative to the I-nonet ones shows the anomaly.
It is thus compatible with the existence of an extra state.
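As a sketch (with hypothetical input masses in arbitrary units), the MF (\ref{I}) is linear in $x_2$ once $x_1$ is known, so the partner mass, and then the couplings $l_i^2$ from the first two ME, follow directly:

```python
# The S-nonet MF (a-x1)(a-x2) + 2(b-x1)(b-x2) = 0 is linear in x2, so a
# known mass x1 determines its partner x2; the first two moment equations
# then give the couplings l1^2 and l2^2.
def snonet_partner(a, b, x1):
    x2 = (a * (a - x1) + 2.0 * b * (b - x1)) / ((a - x1) + 2.0 * (b - x1))
    mbar = a / 3.0 + 2.0 * b / 3.0     # r.h.s. of the second ME
    l1sq = (x2 - mbar) / (x2 - x1)
    l2sq = 1.0 - l1sq
    return x2, l1sq, l2sq

a, b = 1.0, 2.0     # hypothetical basis masses, arbitrary units
x1 = 0.9            # assumed known isoscalar mass, slightly below a
x2, l1sq, l2sq = snonet_partner(a, b, x1)
assert abs((a - x1) * (a - x2) + 2.0 * (b - x1) * (b - x2)) < 1e-12
assert 0.0 < l1sq < 1.0
```

For $x_1=a$ the routine reproduces the ideal values $x_2=b$, $l_1^2=1/3$, $l_2^2=2/3$, so the deviation of the pair $(x_1,x_2)$ from $(a,b)$ directly quantifies the anomaly.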
The components of $N_1$ have to comply with one of the two MOR conditions \cite{In}
\begin{subequations}\label{H}
\begin{align}
\label{Ha}
(a) \quad a<x_1<b<x_2, \quad (MOR-N_{1(a)}),\\
\label{Hb}
(b) \quad x_1<a<x_2<b. \quad (MOR-N_{1(b)}).
\end{align}
\end{subequations}
These conditions determine two completely different nonets which we describe
as the $N_{1(a)}$ and $N_{1(b)}$ ones. Both of them are observed.%
\footnote{The current description of $N_1$ as the S-nonet uses the
mixing angle $\vartheta$ for determining the isoscalar states. The allowed
regions of $\vartheta$ are:
for (\ref{Ha}) $\tan^2\vartheta>\tan^2\vartheta^{id}$ and for (\ref{Hb})
$\tan^2\vartheta<\tan^2\vartheta^{id}$,
where $\vartheta^{id}=35.26^\circ$ is the ideal mixing angle \cite{Sum}.}
If $N_1$ is built on the same $(a,b)$ basis as $D_1$, then the MORs
(\ref{H}) may be considered as incomplete MOR-$D_1$ (\ref{L}). The comparison
shows that\\
--- if $x_1,x_2\in N_{1(a)}$ then they are dominated by the ($G,b$) components of $D_1$, \\
--- if $x_1,x_2\in N_{1(b)}$ then they are dominated by the ($a,G$) components of $D_1$, \\
respectively. Both types of $N_1$ include the $G$ state. Therefore, the very
existence of $N_1$ justifies the existence of $G$, which can only be seen
as a state of $D_1$. This suggests that it plays an essential role in the
structure of $D_1$. Perhaps within this multiplet a suitable $G$ is always
``ready for use'', since it is built of the gluons which mediate the
interactions between the quarks present there. This is the way $G$ becomes a
constituent of the isoscalar mesons of $D_1$.
The current description of $N$ does not explain the origin of the S-nonet
anomalies. Moreover, the $N$ multiplets themselves are distinguished by the
results of biased experiments which ignore the possibility of $D$ appearance.
Perhaps the results of these measurements should be reanalyzed. It is also
necessary to extend these experiments and increase their accuracy.
It is possible that the nature of these multiplets has long been
misunderstood. The explanation of this confusion may have far-reaching
implications for meson spectroscopy. Some of the implications can be seen
immediately.
\newpage
\section{Unrecognised glueballs and missing mesons}
$\bullet$
We have established that all S-nonets include an $x_G$-dominated state.
Reviewing the particle data \cite{RPP} we find that the mesons
\begin{equation} \label{N}
f_1(1285),\quad h_1(1380),\quad \eta(1405),\quad f_2(1430)
\end{equation}
should be $G$ dominated. The decay modes of $f_1(1285)$ and $h_1(1380)$
do not contradict these assignments; the $G$-dominated structure of the
$\eta(1405)$ meson, established earlier \cite{FF,Where}, is now confirmed;
the $f_2(1430)$ should have a $G$-dominated structure if it exists \cite{Loc}. \\
$\bullet$
The long-standing puzzle of the exceptional properties of $f_1(1230)$
\begin{equation} \label{O}
m=1230\pm 40\ MeV, \quad \Gamma =250\div 600\ MeV
\end{equation}
is solved by changing its affiliation from $N_1$ to $D_1$.
If this signal belongs to $D_1$, then it should be attributed to two different
particles: the isosinglet meson $x_a$ and the isotriplet meson $a_1$:
\begin{equation} \label{P}
x_a, \quad \quad a_1
\end{equation}
which have similar modes of decay. \\
$\bullet$ The observed axial-vector mesons are
collected into the $N_1$ multiplets with $1^{++}$ and $1^{+-}$ (described
as $N_{1A}$ and $N_{1B}$), where instead of the physical $K_1(1270)$
and $K_1(1400)$ stand their C-even and C-odd combinations:
\begin{subequations} \label{Q}
\begin{align}
K_{1A}=K_1(1270)\cos\phi -K_1(1400)\sin\phi, \\
K_{1B}=K_1(1270)\sin\phi +K_1(1400)\cos\phi.
\end{align}
\end{subequations}
A joint MF analysis of the data on $N_{1A}$ and $N_{1B}$ gives the following
values for the squared masses \cite{Sum}
\begin{subequations} \label{R}
\begin{align}
K_{1A}=(1340\pm 8)^2\ MeV^2,\\
K_{1B}=(1324\pm 8)^2\ MeV^2,
\end{align}
\end{subequations}
and identifies the nonets $N_{1A}$ and $N_{1B}$ as $N_{1(a)}$ and $N_{1(b)}$,
respectively. Observe that the slight difference between the ``bare'' $K_{1A}$
and $K_{1B}$ masses is strongly amplified by hadronic interactions.
Consequently, we also define the $D_1$ multiplets for these mesons, as
$D_{1A}$ and $D_{1B}$.
The states of $N_{1A}$ and $N_{1B}$ are not comparable, but the states of
$D_{1A}$ and $D_{1B}$ can be compared.
The fact that the basic masses $(a,b)$ of $D_{1A}$ and $D_{1B}$ are not
identical can only increase the interest in this comparison, because it
demonstrates the dependence of the $D_1$ properties on C-parity.
Therefore, such a comparison demonstrates the influence of the weak
interaction on meson multiplets (cf. the $K_S$ and $K_L$ states of
pseudoscalar mesons).
\section{Call for new data}
Anomalies of the S-nonets provide evidence for the existence of further
(beyond GMO) mechanisms of SU(3) breaking. The anomalies are caused by
interactions which are as yet unknown; it is precisely the purpose of the
S-nonet investigation to recognize their nature.
The anomalies produce much weaker breaking than the GMO one \cite{Loc},
which requires much more accurate data to make them observable.
The present data (partly old and skimpy) enable us to select a few nonets but
are insufficient for completing decuplets. Therefore, for the sake of the
present and future development of meson spectroscopy it is necessary
to increase the accuracy of the data and extend the measurements to other
$J^{PC}$ mesons.
\section{Acknowledgments}
The author thanks Professor Anna Urbaniak-Kucharczyk, Dean of the Physical
Department of the University of Lodz, and Professor Pawe\l{} Ma\'slanka and
Professor Jakub Rembieli\'nski, Chiefs of the Theoretical Departments, for
their support. Special thanks are
expressed to Professor Piotr Kosi\'nski for many interesting discussions,
valuable comments and reading the manuscript. Dr Bartosz Zieli\'nski's help
with computer manipulations is gratefully appreciated.
\newpage
\section{Introduction}
Pathline tracing is a popular technique for flow visualization and analysis, for its ability to depict scientific features in different domains such as aerodynamics and cosmology.
\clr{Eulerian and Lagrangian flow simulations generate flow fields in two different formats: mesh-based and particle-based. For mesh-based flow fields, velocities are stored at the vertices of the mesh, while the particle-based method stores the velocity information at particle positions, without explicit connectivity defined between particles. }
For certain particle simulations such as smoothed particle hydrodynamics (SPH), even though the trajectories of the particles already exist, to have a clear view of the underlying features, sometimes it is necessary to have more pathlines around the locations of interest.
The widely used numerical integration approaches, such as Euler or Runge-Kutta methods for computing pathlines, require an extensive amount of velocity interpolation both spatially and temporally to obtain reasonable accuracy. The computation time of numerical integration largely depends on the number of integration steps and the complexity of the interpolation method. For large-scale datasets, the I/O overhead can also be very high.
Therefore, for particle-based flow fields, using numerical integration to generate pathlines is very expensive.
However, the positions of a particle, or the particle displacement, over time often show particular patterns governed by the underlying physical conditions.
Making use of the particle displacement, a previous study \cite{Chandler2015} proposed an interpolation-based tracing approach for particle datasets, which avoids performing the velocity interpolations at every time step that numerical integration needs. Instead, it directly uses the particle displacement between time steps to generate particle traces.
The performance of their method is determined by three factors: neighborhood search time, the number of interpolated pathlines, and the number of interpolation steps.
Neighborhood search time is optimized using a modified $k$-d tree data structure in their work and pathline advection from different starting points can be parallelized.
However, the tracing time for each pathline cannot be easily reduced.
Considering that current flow simulations can output data at higher spatial and temporal resolutions, a large number of tracing steps introduces a computational challenge.
To address the aforementioned challenge, we propose a new approach for interpolation-based pathline tracing. B-spline \cite{gordon1974b} curves are first used to fit the traces of the existing particle data.
We optimize the accuracy by using an adaptive knot placement method \cite{yeh2020fast} in the B-spline approximation, where more control points are placed at positions of higher complexity along the curve.
Interpolation is performed only between the control points of the parametric curves instead of the original particles, which reduces the number of advection iterations from the number of particle integration steps to the number of control points used.
With our method, we demonstrate a significant reduction in the computation cost for particle tracing while achieving similar tracing quality.
\section{Related Works}
Most of the approaches to calculating pathlines rely on particle tracing by numerical integration \cite{mcloughlin2010over, post2002feature}.
Since particle tracing on regular-grid flow fields is a well-studied area with mature techniques, while studies on pathline generation for particle-based flow fields are relatively scarce, we focus on the works related to the latter in this section.
One of the most relevant studies to our work is interpolation-based pathline tracing \cite{Chandler2015}, where the next time step particle location is calculated from the position displacements of the neighboring particles. Their method avoids the multiple neighborhood searches needed by numerical integration at each time step. However, their approach faces the problem that the total tracing time is dominated by the number of time steps of the existing pathlines.
Particle-based flow fields can also be interpreted as flow maps stored at scattered point locations. Much recent research that uses machine learning techniques to generate \cite{han2021exploratory, lee2021deep} or super-resolve flow maps \cite{jakob2020fluid,agus_euro} is also related to this study. However, machine-learning-based methods usually suffer from long training times and unbounded tracing errors.
Another field related to our study is the use of parametric curves to represent pathlines or streamlines in flow visualization. An exploratory study \cite{bujack2015lagrangian} examines different kinds of parametric curves and their fitting quality. Hong et al. \cite{hong2017compression} make use of B\'ezier curves for flow visualization under a compression-based pathline or streamline reuse framework. Liu and Wang \cite{liu2022topological} proposed a streamline compression method using B-spline curves that preserves topological relations and has bounded error. Chen et al. \cite{chen2015uncertainty} model pathlines with composite B\'ezier curves with uncertainty and reduce the error using forward and backward traces.
In all of these studies, parametric curves show small and controllable fitting errors in representing pathlines and streamlines, as well as fast fitting times.
\section{Background}
\label{sect:background}
In this section, we review the interpolation-based pathline tracing method introduced by Chandler et al. \cite{Chandler2015} for particle-based flow fields. Each pathline in the dataset can be described as a function: $f_i: \tau \mapsto \rho$,
where $\tau\in\mathbb{N}$ is the time step and $\rho\in\mathbb{R}^3$ is the particle position at that time step.
When a new particle is inserted at time $\tau$, to compute its trajectory in other time steps, an interpolation-based pathline tracing method described in the following steps is used:
\begin{enumerate}
\setlength\itemsep{-0.3em}
\item Load all particles of time step $\tau$.
\item Find neighbor particles around the inserted particle.
\item Load all particles of time step $\tau+1$.
\item Calculate the interpolation weights based on the spatial positions of the neighboring particles and the inserted particle at time $\tau$.
\item Reconstruct the position of inserted pathline at time $\tau + 1$ based on neighboring particles' displacement.
\item Update the neighbor particles at time step $\tau+1$.
\item Repeat from step 3 until the desired time step is reached.
\end{enumerate}
The reconstruction in step 5 can be expressed by the following equation:
\begin{equation}
\label{equ:reconstruction}
f_{insert}(\tau+1) = \sum_{f_i(\tau)\in N(f_{insert}(\tau))}w_i(f_i(\tau+1)-f_i(\tau))+f_{insert}(\tau),
\end{equation}
where we use $N(f_{insert}(\tau))$ to denote the set of neighboring particles of the particle at position $\rho=f_{insert}(\tau)$ and $w_i$ to denote the interpolation weight.
It is worth noting that the interpolation-based pathline tracing approach is agnostic to the interpolation method and $w_i$ is calculated using the interpolation method of choice, for example, SPH kernels in their study.
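As a minimal sketch (not the authors' implementation; inverse-distance weighting is used here as one possible choice in place of the SPH kernels), a single advection step of \autoref{equ:reconstruction} might look like:

```python
# One step of interpolation-based tracing: advect the inserted particle
# from time tau to tau+1 using the displacements of its matched neighbors.
# Inverse-distance weighting (IDW) is one possible interpolation choice.
def advect_step(p, neighbors_tau, neighbors_tau1, eps=1e-9):
    dists = [sum((pk - qk) ** 2 for pk, qk in zip(p, q)) ** 0.5
             for q in neighbors_tau]
    raw = [1.0 / (d + eps) for d in dists]
    total = sum(raw)
    w = [r / total for r in raw]              # weights sum to 1
    disp = [sum(wi * (b[k] - a[k])
                for wi, a, b in zip(w, neighbors_tau, neighbors_tau1))
            for k in range(3)]                # weighted neighbor displacement
    return tuple(p[k] + disp[k] for k in range(3))

# Two neighbors both translated by (1, 0, 0) carry the particle with them.
p_next = advect_step((0.5, 0.0, 0.0),
                     [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                     [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
```

Steps 3 through 6 of the procedure above amount to repeating this computation while carrying the matched neighbor set forward in time.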
A modified $k$-d tree data structure is constructed at every time step to accelerate neighborhood search in their proposed method.
The major limitation of this method is that the total computation time is determined by the number of integration (time) steps needed to calculate the pathline.
Our method aims to solve this limitation by applying interpolation-based pathline tracing on B-spline representations of the existing particle traces. The interpolation iteration needed is effectively reduced to the number of control points used to represent the particle traces.
\section{Method}
Our B-spline curve interpolation-based pathline tracing approach can be described at a high level in two steps. First, we process the particle-based flow field and fit a B-spline curve for each existing particle trace. Second, we perform interpolation-based pathline tracing on the control points and knots for a given new particle position and time. We explain these two steps in detail next.
\subsection{B-spline Approximation for Particle Traces}
A B-spline curve of order $k$ is a piecewise polynomial function of degree $k-1$ defined by:
\begin{equation}
C(u) = \sum^{n-1}_{i=0}B_{i,k}(u)P_i, \;\;\;\;\; u \in [t_0,t_{n+k-1}],
\end{equation}
where $P_i$ denotes one of the $n$ control points and $C(u)$ evaluates the B-spline curve at parameter location $u$. Knot vector $T=\{t_0,t_1,t_2,...,t_{n+k-1}\}$ defines the parameter $u$ range that is influenced by the control points $P$. The B-spline basis function $B_{i,k}(u)$ is defined recursively with respect to the knot vector and the curve order $k$ as:
\begin{equation}
\begin{aligned}
B_{i,1}(u) = &\begin{cases}
1, & \text{if $t_{i} \leq u < t_{i+1}$} \\
0, & \text{otherwise} \\
\end{cases},\\
B_{i,k}(u) = &\frac{u-t_i}{t_{i+k-1}-t_i}B_{i,k-1}(u) + \frac{t_{i+k}-u}{t_{i+k}-t_{i+1}}B_{i+1,k-1}(u).
\end{aligned}
\end{equation}
More in-depth content about B-spline curves can be found in this book by Farin\cite{farin2002curves}.
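As a sketch, the recursive definition above can be transcribed directly into a plain (non-optimized) Cox-de Boor evaluation; the clamped knot vector below is a hypothetical example:

```python
# Cox-de Boor recursion for the B-spline basis function B_{i,k}(u),
# with order k (degree k-1) and knot vector T; the 0/0 cases of the
# recursion are treated as 0 via the inequality guards.
def bspline_basis(i, k, u, T):
    if k == 1:
        return 1.0 if T[i] <= u < T[i + 1] else 0.0
    left = 0.0
    if T[i + k - 1] != T[i]:
        left = (u - T[i]) / (T[i + k - 1] - T[i]) * bspline_basis(i, k - 1, u, T)
    right = 0.0
    if T[i + k] != T[i + 1]:
        right = (T[i + k] - u) / (T[i + k] - T[i + 1]) * bspline_basis(i + 1, k - 1, u, T)
    return left + right

# Hypothetical clamped cubic example: order k=4, n=6 control points,
# and n+k knots. The nonzero basis functions sum to 1 (partition of unity).
k, n = 4, 6
T = [0, 0, 0, 0, 0.3, 0.7, 1, 1, 1, 1]
s = sum(bspline_basis(i, k, 0.5, T) for i in range(n))
assert abs(s - 1.0) < 1e-12
```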
Given a pathline represented by $m$ particles whose positions are $\rho_0, \rho_1, ..., \rho_{m-1}$, the approximation of a B-spline curve involves three steps: First, we need to parameterize the data points into a monotonically increasing list $u_0, u_1,...,u_{m-1}$, which determines the distribution of data points along the B-spline curve.
Second, we determine the knot vector along the curve, which decides the control points' distribution in the parameter space.
And last, we optimize the control point positions with respect to the parameters and the knot vector as a least-square solution:
\begin{equation}
\label{equ:least_square}
\argmin_P\sum_{i=0}^{m-1}{\lVert \rho_i - C(u_i) \rVert}_2
\end{equation}
\subsubsection{Parameterization}
There are two ways to parameterize this trajectory. The first is to use the time steps as parameters: $\tau$ as the parameter for $\rho$.
And the second is to use the chord length in 4D space-time to determine a parameter for each particle.
\clr{We choose to use time steps to parameterize the trajectory for its simplicity. When representing the curve in 4D space, we need to determine how to normalize the spatial dimensions and the temporal dimension. The choice of different normalizations may have an impact on the fitting accuracy. Moreover, parameterizing the pathlines by the 4D chord length also requires us to use 4D control points to describe the B-spline curve, which can be less efficient when applying the interpolation algorithm.}
Therefore, we normalize the time index to be in the range $[0,1]$, and each B-spline curve is a function from time to 3D spatial positions.
\subsubsection{Knot Placement}
Considering a B-spline curve of order $k$, control point $P_i$ is used when calculating the spline segment between $t_i$ and $t_{i+k}$. Thus, knot vector $T=\{t_0,t_1,t_2,...,t_{n+k-1}\}$ determines the control point density along the pathline in the parameter space. We duplicate the first $k$ and the last $k$ knots to ensure the curve passes through the first and the last control points.
Knot placement is a well-known problem in B-spline approximation to optimize the spline quality. We adopted a recent fast automatic knot placement method proposed by Yeh et al.\cite{yeh2020fast} to determine an optimal knot vector for spline fitting.
The automatic knot placement method is inspired by the idea that an order-$k$ B-spline curve has a piecewise constant $(k-1)$th derivative \cite{yeh2020fast}, so derivative discontinuities mark the knot locations. A feature function is derived from the $k$th derivative of the data points, and its cumulative form is used to guide the knot placement (more knots where the cumulative feature function changes fast). Their method is empirically found to yield better fitting accuracy than directly using the $(k-1)$th derivative of the data points.
The only hyper-parameter to choose in automatic knot placement is the number of knots used. Choosing the number of knots also determines the number of control points used for B-spline approximation, since the number of knots is $n+k$ for a B-spline with $n$ control points and order $k$. This choice is a balance between pathline tracing efficiency and accuracy. We discuss in detail the evaluation of the choice in \autoref{sect:fitting_eval}.
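A minimal sketch of this idea (our simplified reading of the cumulative-feature heuristic, not the exact algorithm of Yeh et al.): estimate a feature function by finite differences, accumulate it, and place the interior knots at equal increments of the accumulated curve:

```python
# Simplified cumulative-feature knot placement: each knot interval
# receives an equal share of the accumulated feature, here the magnitude
# of a repeated finite-difference estimate of the derivative.
def place_knots(values, params, n_interior, diff_order=2):
    feat = values[:]
    for _ in range(diff_order):            # repeated finite differences
        feat = [feat[j + 1] - feat[j] for j in range(len(feat) - 1)]
    feat = [abs(f) + 1e-12 for f in feat]  # small floor avoids empty bins
    cum = [0.0]
    for f in feat:
        cum.append(cum[-1] + f)
    total = cum[-1]
    knots, j = [], 0
    for i in range(1, n_interior + 1):     # equal increments of cum
        target = total * i / (n_interior + 1)
        while cum[j] < target:
            j += 1
        knots.append(params[min(j, len(params) - 1)])
    return knots

u = [i / 99.0 for i in range(100)]
y = [t ** 3 for t in u]                    # curvature concentrated near u = 1
knots = place_knots(y, u, n_interior=3)    # knots skew toward high curvature
```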
After a knot vector has been calculated for the given data points, we find the control points of the approximated B-spline curve using the least square method following \autoref{equ:least_square}.
\subsection{Interpolation-Based B-spline Tracing}
\begin{figure}
\centering
\vspace{-30pt}
\includegraphics[width=\columnwidth]{figures/b-spline_interp.pdf}
\caption{
An example for forward tracing of control points.
Given the pathline tracing starting point (red dot), we first identify the neighbor curves by evaluating $C(u)$ (blue).
We find the nearest knots and control points (green) to $C(u)$ in future time steps.
And then neighbor particles (blue dots in the square) and the nearest knots and control points $P_{d}$ are used to reconstruct knots and control points $P_{r}$ for the new pathline.
We then similarly interpolate knots and control points to get the next iteration knots and control points $P_{r+1}$ for the new pathline.
}
\label{fig:b-spline_interp}
\end{figure}
Based on the method introduced by Chandler et al.\cite{Chandler2015}, we propose a pathline tracing approach based on B-spline control point interpolation. All particle data are processed first and a B-spline curve $C$ is generated for each particle trajectory.
The B-spline approximation only needs to be done once.
The control points $P_0, P_1,..., P_{n-1}$ and the knot vector $T: t_0, t_1,..., t_{n+k-1}$ are saved as the representation for the data instead of the original particle trajectories.
\clr{Since the automatic knot placement algorithm cannot guarantee that the knots are placed at the same time steps for different pathlines, we need to synchronize the control points before interpolation. However, for higher efficiency, this synchronization is only performed at the starting point of the traced pathline. For later control points, we simply choose the nearest neighbors among the next control point from each pathline, even though these control points may correspond to the knots of different time steps. The errors introduced by this choice are small for two reasons. First, nearby pathlines will likely share similar shapes so that the knot intervals of nearby pathlines are similar.
Second, we interpolate both the knots and the control points, which is akin to interpolating a point in 4D space-time and eases the problem of unsynchronized time steps. }
Next, we describe our algorithm in detail.
An illustration of the interpolation scheme can be found in \autoref{fig:b-spline_interp}.
Given a tracing starting point $(\tau, \rho)$, we present the modified interpolation-based pathline tracing as follows:
\begin{enumerate}
\setlength\itemsep{-0.3em}
\item Normalize $\tau$ to get the parameter $u$ and evaluate all existing B-spline at $u$.
\item For each existing B-spline, find the minimum index $d\in [\lfloor k/2 \rfloor, n +k -1 - \lceil k/2 \rceil]$ so that $t_d \ge u$. For special cases, where $u = 0$ and $1$, we define $d=\lfloor k/2 \rfloor$ and $n +k-1 - \lceil k/2 \rceil$.
\item Find neighboring B-splines $C$ and calculate the interpolation weights based on ${\lVert \rho - C(u)\rVert}_2$.
\item Reconstruct the knot $t_r$ and the control point $P_{r-\lfloor k/2 \rfloor}$ of the inserted pathline based on the neighbors' $t_d$ and $P_{d-\lfloor k/2 \rfloor}$, where $r$ denotes the knot and control point indices for the interpolated pathline.
\item Load knots $t_{d+1}$ and control points $P_{d-\lfloor k/2 \rfloor + 1}$.
\item Find the neighbor control points and knots. Calculate interpolation weights for iteration $r$.
\item Reconstruct the knot $t_{r+1}$ and the control point $P_{r-\lfloor k/2 \rfloor+1}$ for the inserted pathline based on the interpolation weights.
\item Update the neighbor knots and control points for iteration $r+1$.
\item Repeat from step 5 until $t_r > 1$.
\end{enumerate}
The steps above describe how we perform forward tracing. Backward tracing, which is necessary to represent the full new B-spline, can be described similarly. The reconstructions in step 4 and step 7 are defined similarly to \autoref{equ:reconstruction}:
\begin{equation}
\begin{aligned}
\label{equ:reconstruction_2}
t_{r} &= \sum_i w_i \cdot t_{i,d_i}, \\
P_{r-\lfloor k/2 \rfloor} &= \sum_i w_i(P_{i,d_i-\lfloor k/2 \rfloor}-C_i(u))+\rho, \\
t_{r+1} &= \sum_{i}w_i(t_{i,d_i+1}-t_{i,d_i})+t_{r}, \\
P_{r-\lfloor k/2 \rfloor+1} &= \sum_i w_i (P_{i,d_i-\lfloor k/2 \rfloor+1}-P_{i,d_i-\lfloor k/2 \rfloor})+P_{r-\lfloor k/2 \rfloor}, \\
\end{aligned}
\end{equation}
where we use $i$ to denote the indices of all neighbors of the inserted pathline. In the first two equations, neighbors are calculated based on the Euclidean distance between the positions of the synchronized first time step. In the last two equations, neighbors are calculated using Euclidean distance between the control points of different B-spline curves.
Similar to the original interpolation-based method, we can use any interpolation method to calculate the interpolation weights $w_i$.
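As a sketch, one iteration of the control-point advection (the last two relations of \autoref{equ:reconstruction_2}, with the interpolation weights $w_i$ assumed to be precomputed) could look like:

```python
# One iteration of interpolation-based B-spline tracing: advance the
# inserted pathline's knot t_r and control point P_r using the neighbor
# knot and control-point increments; the weights w are precomputed.
def advance(t_r, P_r, w, t_d, t_d1, P_d, P_d1):
    t_next = t_r + sum(wi * (tb - ta) for wi, ta, tb in zip(w, t_d, t_d1))
    P_next = tuple(P_r[k] + sum(wi * (pb[k] - pa[k])
                                for wi, pa, pb in zip(w, P_d, P_d1))
                   for k in range(3))
    return t_next, P_next

# Hypothetical data: two equally weighted neighbors whose knots advance
# by 0.25 and whose control points shift by (0.1, 0, 0).
t_next, P_next = advance(
    0.5, (1.0, 0.0, 0.0), [0.5, 0.5],
    [0.5, 0.5], [0.75, 0.75],
    [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
    [(1.1, 0.0, 0.0), (2.1, 0.0, 0.0)])
```

Repeating this until $t_r > 1$ (step 9) yields the full forward set of knots and control points for the inserted pathline.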
\section{Results}
\label{sect:results}
The dataset that we use to evaluate our B-spline control point interpolation method is generated by a cosmology simulation called $\nu$bhlight \cite{miller2019nubhlight} for solving general relativistic magnetohydrodynamics. The simulation generates particle traces for 2001 time steps.
We implemented both our method and the baseline method \cite{Chandler2015} with Python for a fair comparison. The B-spline approximation is performed using SciPy API to FITPACK, which is a Fortran routine for B-spline fitting and evaluation. We calculate the knot vectors for each curve before fitting by implementing the method described by fast automatic knot placement\cite{yeh2020fast}. Our implementation can be found here\footnote{\url{https://github.com/harviu/interp\_based\_spline\_tracing}}.
\subsection{B-spline Fitting Evaluation}
\label{sect:fitting_eval}
As the first step of our approach, we evaluate the quality of the B-spline approximation on pathlines. The approximation accuracy depends on the knot placement and on how we parameterize the curve. We calculate the fitting error as the root mean squared error (RMSE) across different integration steps over time.
Overall, we achieve an RMSE of $1.31 \times 10^{-5}$, about $0.000095\%$ of the data range, across all time steps using 3D B-spline curves with 100 control points parameterized by time.
We compared the fitting accuracy under two different conditions: 4D curves parameterized by the chord length and 3D curves parameterized by time with 100 control points as shown in the left part of \autoref{fig:fitting_accuracy}, and four different numbers of control points for 3D curves in the right figure.
\clr{These errors are calculated by interpolating the B-spline curves at the parameters $u_0, u_1, ..., u_{m-1}$, which corresponds to the time steps in B-spline fitting.
Since 4D errors are not comparable to 3D errors, we only calculate the 3D spatial error in the 4D B-spline case.}
We can observe that 3D curves have lower errors, especially in the first 750 time steps, when the pathlines have higher curvature in the dataset. The reason for the error difference is that 4D spatio-temporal curves have more complicated geometry because of the additional dimension, which means more control points are needed to achieve a similar spatial error.
The comparison between the different number of control points clearly shows that increasing from 10 to 100 control points dramatically decreases the approximation error. However, using more than 100 control points does not decrease the error much, while increasing the computational burden in spline tracing. Since the RMSE of the time step with the largest error is already low enough ($0.01$, $0.07\%$ of the data range), we choose to use $100$ control points for the latter experiments. For other datasets or under different use cases, we may also use a linear regression model as a heuristic method \cite{yeh2020fast} to determine the number of control points.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/Fitting_accuracy.pdf}
\vspace{-20pt}
\caption{
B-spline fitting error in RMSE. The left figure compares the 3D curve parameterized by time and the 4D curve parameterized by chord length. The right figure compares the 3D curve using different numbers of control points.
}
\label{fig:fitting_accuracy}
\end{figure}
\subsection{Interpolation-based B-spline Tracing Results}
In this section, we quantitatively and qualitatively compare the B-spline tracing results with the particle tracing results. For all $83274$ pathlines in the dataset, we randomly sample $25\%$ of them as the test data. We start the pathline tracing at different time steps using the sampled test data position.
We use the same inverse distance weighting (IDW) interpolation for both methods for a fair comparison.
The pathline tracing results are compared with the ground truth test data to calculate spatial RMSE across different time steps.
Quantitative results are shown in \autoref{fig:tracing_error}. Our method and the compared method have a similar error trend when tracing from different starting points. Errors are low around the starting points and increase when the traces advance further. Errors are generally smaller in the first 100 time steps because the data ranges are smaller in these time steps, which is the property of this dataset.
Our method has slightly lower tracing error across all time steps, possibly because the B-spline is a smoother representation of the original particle trajectories, and thus removing the fluctuations in the original trajectories could lead to lower error.
Next, we show the qualitative pathline tracing results using two different methods in \autoref{fig:qualitative}.
\clr{In \autoref{fig:qualitative} (a), we show three pathlines (first 100 time steps) generated by our method, by the baseline method, and from the original test data. The pathlines traced using our method and the baseline method are similar. The errors between the traced pathlines and the ground truth pathline accumulate at later time steps. However, this problem exists for both approaches and can be eased by using a more sophisticated interpolation method.
In \autoref{fig:qualitative} (b), we show all the traced pathlines from the test dataset. }
The color denotes the tracing error at different time steps for each pathline. We can observe similar tracing quality using two different methods.
Since it has already been shown\cite{Chandler2015} that the baseline method has similar tracing accuracy compared to numerical integration methods like adaptive Runge-Kutta 4/5 \cite{cash1990variable}, our method has a comparable tracing error with both the baseline method \cite{Chandler2015} and numerical integration, and moreover, requires much less computation, which we will show in the next section.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/tracing_error.pdf}
\vspace{-20pt}
\caption{
The pathline tracing errors for traces starting at different time steps (0 and 750) using our method and the method proposed by Chandler et al. \cite{Chandler2015}. Our method has a slightly lower error under almost all conditions.
}
\label{fig:tracing_error}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/qualitative.pdf}
\vspace{-20pt}
\caption{
The single-pathline tracing results (a) and the tracing results for all test pathlines (b). Our method is on the left and the baseline approach is on the right. Color denotes the error relative to the ground-truth pathlines. The two methods produce similar results in both (a) and (b).
}
\label{fig:qualitative}
\end{figure}
\subsection{Timing}
We measure the B-spline fitting time and the interpolation time of 20818 test pathlines. All experiments are performed on the same machine and with similar implementation optimizations. The particle dataset is assumed to be pre-arranged so that particles on the same pathline are close to each other in memory.
We report the B-spline fitting time and the control-point interpolation time for different numbers of control points in \autoref{tab:time}.
The number of control points has little influence on the B-spline fitting time. However, the interpolation time increases linearly with the number of control points.
For the 100 control points we choose in our experiments, the interpolation time is about 10\% of the time needed to interpolate the particles directly.
\clr{For most flow datasets, choosing a number of control points similar to this dataset (about 5\% of the number of time steps) can achieve high B-spline fitting accuracy \cite{bujack2015lagrangian}. If the flow data is especially complex, e.g., with strong fluctuations along the particle trajectories, so that the number of control points cannot be reduced much, the computation cost of our method could exceed that of directly interpolating particles. However, this is unlikely for real flow simulations.
The B-spline fitting time is much less than that of interpolation and the fitting only needs to be done once for the same dataset.
In terms of scalability, the time for the fitting process increases linearly with the number of pathlines. Since the neighborhood search takes most of the time when interpolating between control points, the interpolation time grows logarithmically with the number of pathlines.
}
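To make the fitting and evaluation stages concrete, the following minimal Python sketch (not the implementation used in this paper: it relies on SciPy's least-squares B-spline fitting with uniformly placed knots instead of our adaptive knot placement, and the synthetic helix trajectory and the helper name \texttt{fit\_trajectory} are purely illustrative) fits one pathline with 100 control points and then reconstructs positions from the spline alone:

```python
# Illustrative sketch only: least-squares cubic B-spline fit of one pathline,
# with uniform (not adaptive) interior knot placement.
import numpy as np
from scipy.interpolate import make_lsq_spline

def fit_trajectory(times, positions, n_ctrl=100, degree=3):
    """Fit a vector-valued B-spline with n_ctrl control points to one pathline."""
    # Clamped knot vector: (degree + 1) copies at each end plus uniform
    # interior knots, giving exactly n_ctrl spline coefficients.
    interior = np.linspace(times[0], times[-1], n_ctrl - degree + 1)[1:-1]
    knots = np.concatenate(([times[0]] * (degree + 1),
                            interior,
                            [times[-1]] * (degree + 1)))
    return make_lsq_spline(times, positions, knots, k=degree)

# Synthetic helix standing in for one particle trajectory from the simulation.
ts = np.linspace(0.0, 1.0, 2000)
traj = np.stack([np.cos(8.0 * ts), np.sin(8.0 * ts), ts], axis=1)

spline = fit_trajectory(ts, traj, n_ctrl=100)
# Positions are now reconstructed from 100 control points instead of
# 2000 particle samples; the fit error for this smooth curve is tiny.
rmse = float(np.sqrt(np.mean((spline(ts) - traj) ** 2)))
print(f"control points: {spline.c.shape[0]}, RMSE: {rmse:.2e}")
```

Evaluating positions from far fewer control points than time steps is what drives the reduction of interpolation time reported in \autoref{tab:time}.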
\section{Conclusion and Future Work}
We presented a pathline tracing method for particle simulation data based on interpolating B-spline control points.
The B-spline approximation is optimized by using an adaptive knot placement method, which is fast and has controllable errors. The computation time of our method is greatly reduced because far fewer interpolation iterations are needed, while our method achieves pathline tracing quality similar to interpolating the particles directly.
Besides pathline tracing, B-splines can also be used for data reduction or as a proxy representation for neural network training.
Our future work will focus on B-spline and neural-network representations of pathlines and their application to feature-driven visualization.
\begin{table}[tb]
\caption{
B-spline approximation time and control-point interpolation time for different numbers of control points.
The ratio shows the computation time compared to the baseline method.
}
\label{tab:time}
\scriptsize
\centering
\begin{tabularx}{\columnwidth}{cccc}
\toprule
Number of Control Points & Fitting Time & Interpolation Time & Ratio \\
\midrule
10 & 67.36s & 3.19s & 0.011 \\
25 & 67.52s & 6.50s & 0.023 \\
50 & 68.80s & 11.04s & 0.040 \\
100 & 69.28s & 26.71s & 0.096 \\
Baseline\cite{Chandler2015} & - & 278.90s & 1 \\
\bottomrule
\end{tabularx}
\end{table}
\acknowledgments{
This work is supported in part by US Department of Energy SciDAC program DE-SC0021360, National Science Foundation Division of Information and Intelligent Systems IIS-1955764, and National Science Foundation Office of Advanced Cyberinfrastructure OAC-2112606.
}
\bibliographystyle{abbrv-doi}
\section{Introduction}
Let $(A,{\underline \theta})$ be a complex $g$-dimensional principally polarized abelian variety.
This paper is concerned with the set of $n$-torsion points lying on the theta divisors, where $n$ is any fixed integer $\ge 2$.
We choose once and for all a \emph{symmetric} divisor $\Theta$ representing the polarization. For $x\in A$ we denote by $t_x:A\rightarrow A$ the translation by $x$, and by $\Theta_x$ the effective divisor corresponding to the line bundle $t_x^*{\mathcal O}_A(\Theta)$ (i.e. $\Theta_x=\Theta-x$).
We set
\[\Theta_{x}(n):=\#A[n]\cap \Theta_{x},\]
where $A[n]$ is the group of all $n$-torsion points of $A$.
A result of Kempf (\cite[Theorem 3]{K1}) asserts that, for $x,y\in A$, the corank of the multiplication map of global sections
\begin{equation}
\label{m2}H^0(A, t_x^*{\mathcal O}_A(2\Theta))\otimes H^0(A, t_y^*{\mathcal O}_A(2\Theta))\longrightarrow H^0(A, t_x^*{\mathcal O}_A(2\Theta)\otimes t_y^*{\mathcal O}_A(2\Theta))
\end{equation}
coincides with the number $\Theta_{y-x}(2)$ (we refer to \cite[\S2]{PS} for the translation into the present setting of Kempf's statement, which contains a slight mistake).
Our first result is an extension of Kempf's theorem to $n$-torsion points, for arbitrary $n$. This is achieved using certain semihomogeneous vector bundles, denoted $\mathrm{W}_{a,b}$, introduced and systematically studied by Oprea in \cite{oprea} (as a consequence of Mukai's theory of semihomogeneous vector bundles, \cite{mukai-semi}). When $a$ and $b$ are coprime positive integers, the $\mathrm{W}_{a,b}$'s are defined as simple, semihomogeneous and symmetric vector bundles such that
\begin{equation}\label{defining}
\mathrm{rk}\,\mathrm{W}_{a,b}=a^g\qquad \det\mathrm{W}_{a,b}={\mathcal O}_A(\Theta)^{a^{g-1}b}.
\end{equation}
If $a$ is odd there is a unique such vector bundle, while if $a$ is even they are not unique when $g\ge 2$. We refer to the next section for generalities on such vector bundles. Our generalization of Kempf's theorem (which is recovered for $a=b=1$) is the following
\begin{theoremalpha} \label{kempf-gen}
Let $a$ and $b$ be coprime positive integers. Let $\mathrm{W}_{a,a+b}$ and $\mathrm{W}_{b,a+b}$ be two vector bundles as above. For $x,y\in A$ the number $\Theta_{y-x}(a+b)$ is equal to the corank of the multiplication map of global sections
\begin{equation}\label{W}m_{a,b}(x,y): H^0(A,t_x^*\mathrm{W}_{a,a+b})\otimes H^0(A, t_y^*\mathrm{W}_{b,a+b})\longrightarrow H^0(A,t_x^*\mathrm{W}_{a,a+b}\otimes t^*_y\mathrm{W}_{b,a+b})
\end{equation}
\end{theoremalpha}
(As it is easy to check, the source and target of the above map have the same dimension, namely $(a+b)^{2g}$.) Note that if $a$ or $b$ is even, say $a$, the above map $m_{a,b}$ depends on the choice of a vector bundle $\mathrm{W}_{a,a+b}$, but we will neglect this in the notation. A special role will be played by
the particular case
\begin{equation}\label{n-1}
m_{1,n-1}(x,y): H^0(A,t_x^*{\mathcal O}_A(n\Theta))\otimes H^0(A, t_y^*\mathrm{W}_{n-1,n})\longrightarrow H^0(A,t_x^*{\mathcal O}_A(n\Theta)\otimes t^*_y\mathrm{W}_{n-1,n}).
\end{equation}
obtained for $a=1$ and $n:=a+b$.
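To spell out the dimension count in the parenthetical remark above, using facts recalled later in the paper: by the Euler characteristic computation (\ref{rk-etc}) of Section \ref{prelim}, $h^0(A,t_x^*\mathrm{W}_{a,a+b})=h^0(A,t_y^*\mathrm{W}_{b,a+b})=(a+b)^g$, so the source of $m_{a,b}(x,y)$ has dimension $(a+b)^{2g}$. For the target, the vector bundle $t_x^*\mathrm{W}_{a,a+b}\otimes t^*_y\mathrm{W}_{b,a+b}$ has rank $a^gb^g$, satisfies IT(0) (cf. \cite[Proposition 2.9]{PP1}), and, by multiplicativity of the Chern character (see Section \ref{multip-section}),
\[
h^0(A,t_x^*\mathrm{W}_{a,a+b}\otimes t^*_y\mathrm{W}_{b,a+b})=a^gb^g\Bigl(\frac{a+b}{a}+\frac{a+b}{b}\Bigr)^{g}=a^gb^g\,\frac{(a+b)^{2g}}{(ab)^g}=(a+b)^{2g}.
\]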
In view of Theorem \ref{kempf-gen}, it is useful to have criteria for the surjectivity of the multiplication of global sections of semihomogeneous vector bundles, in analogy with well known classical theorems for line bundles (due to Mumford, Koizumi, Kempf and others, see e.g. \cite[\S 7.2]{BL}, \cite[\S 6.2]{Kempf}, \cite[\S 8]{JP}). In fact an optimal result in this direction was already proved years ago by Popa and the author (\cite[Theorem 7.30]{PP2}). We restate it more expressively as Theorem \ref{pp} below.
In turn, this is an ingredient of the proof of the following lower bound for the rank of the multiplication maps appearing in Theorem \ref{kempf-gen}.
\begin{theoremalpha}\label{cor}
In the notation of Theorem \ref{kempf-gen}, $\text{rk}\,(m_{a,b}(x,y))\ge((a+b)^2-1)^g$ for all $x,y\in A$.
\end{theoremalpha}
The last result of this note, in fact our original motivation, is the proof of the following conjecture of Auffarth, Pirola and Salvati Manni on the maximal number of $n$-torsion points on a theta divisor (see \cite{APS}). The case $n=2$, which was conjectured earlier by Marcucci and Pirola (\cite{MP}), was proved by Salvati Manni and the author in \cite{PS}.
\begin{theoremalpha} \label{main} For all $x\in A$
$$\Theta_{x}(n)\leq n^{2g} -(n^2-1)^g.$$
Moreover equality holds if and only if $A$ is a product of elliptic curves and ${\mathcal O}_A(\Theta_{x})=\boxtimes_{i}{\mathcal O}_{E_i}(z_i)$, where the $z_i$ are $n$-torsion points.
\end{theoremalpha}
Note that the upper bound of the statement is just the combination of Theorems \ref{kempf-gen} and \ref{cor}. The remaining part is proved in Section \ref{proof}.
It would be interesting to have effective results along these lines for irreducible principal polarizations. To this purpose it should be kept in mind that, thanks to a recent result of Auffarth and Codogni (\cite{AC}), there are irreducible theta divisors containing abelian subvarieties of dimension up to $g-2$, hence containing at least $n^{2(g-2)}$ \ $n$-torsion points for all $n$. On the other hand, by Raynaud's theorem on the Manin-Mumford conjecture, the overall number of torsion points contained in a theta divisor is finite unless it contains translates of positive-dimensional abelian subvarieties by torsion points.
\noindent\textbf{Acknowledgements. } The author thanks: Dragos Oprea for pointing out a gap in an earlier draft of this paper, Mihnea Popa for many conversations about semihomogeneous vector bundles a long time ago, Riccardo Salvati Manni for his encouragement and many discussions and suggestions, and the referee for very accurate remarks.
\section{Preliminaries on the vector bundles $\mathrm{W}_{a,b}$}\label{prelim}
Here we recall some basic facts about the vector bundles $\mathrm{W}_{a,b}$ introduced by Oprea in \cite{oprea}. Let $(A,{\underline \theta})$ be a $g$-dimensional p.p.a.v. and let $\Theta$ be a fixed symmetric theta divisor. For a pair of coprime positive integers $a$ and $b$ we consider simple semihomogeneous vector bundles $\mathrm{W}$ such that
\begin{equation}\label{defining1}
\mathrm{rk}\,\mathrm{W}=a^g\qquad \det\mathrm{W}={\mathcal O}_A(\Theta)^{a^{g-1}b} \>.
\end{equation}
Vector bundles with the above properties do exist and they are unique up to tensorization with an $a^g$-torsion line bundle (\cite[Theorem 7.11 and Remark 7.13]{mukai-semi}). Denoting $a_A:A\rightarrow A$ the isogeny $x\mapsto ax$, we have that
\begin{equation}\label{fund}
a_A^*\mathrm{W}\cong \bigl({\mathcal O}_A(\Theta)^{ab}\bigr)^{\oplus a^g}
\end{equation}
(see \cite[2.3.1]{oprea}). Moreover such $\mathrm{W}$'s satisfy \emph{the index theorem with index $0$} (IT(0) for short), meaning that $h^i(\mathrm{W}_{a,b}\otimes \alpha)=0$ for all $i>0$ and $\alpha\in \widehat A$ (we denote $\widehat A={\rm Pic}^0 A$ the dual abelian variety). Recalling that the degree of the isogeny $a_A$ is $a^{2g}$, it follows that
\begin{equation}\label{rk-etc}
h^0(A,\mathrm{W})=\chi(\mathrm{W})=a^g\bigl(\frac{b}{a}\bigr)^g=b^g.
\end{equation}
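Spelled out: the Euler characteristic is multiplied by the degree of an isogeny under pullback, so (\ref{fund}) gives
\[
a^{2g}\,\chi(\mathrm{W})=\chi(a_A^*\mathrm{W})=a^g\,\chi\bigl({\mathcal O}_A(\Theta)^{ab}\bigr)=a^g(ab)^g,
\]
whence $\chi(\mathrm{W})=b^g$.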
Another useful fact about the vector bundles satisfying the above condition (\ref{defining1}) is that they are globally generated as soon as $b>a$. This follows from the criterion asserting that a vector bundle $E$ on $A$ is globally generated as soon as $E(-\Theta)$ is IT(0) (\cite[Theorem 2.1]{pareschi}). Indeed (\ref{fund}) yields that $\mathrm{W}(-\Theta)$ is IT(0) if and only if $b>a$.
For odd $a$, imposing the supplementary condition that $\mathrm{W}$ is \emph{symmetric}, i.e. $(-1)_A^*\mathrm{W}\cong \mathrm{W}$, it turns out that there is a \emph{unique} such vector bundle up to isomorphism (\cite[\S2.1]{oprea}). It is denoted $\mathrm{W}_{a,b}$.
Also for even $a$ such symmetric vector bundles do exist (for example the dual of the Fourier-Mukai transform of the vector bundle $\mathrm{W}_{b,a}$) but they are not unique for $g\ge 2$. For even $a$ Oprea defines a unique such
vector bundle $\mathrm{W}_{a,b}$ by means of the Schr\"odinger representation (\cite[\S2.1 and (16)]{oprea}). However this is not important for our purposes, since we will consider \emph{any} simple symmetric semihomogeneous vector bundle $\mathrm{W}$ satisfying (\ref{defining1}). We denote by $\mathcal{W}_{a,b}$ the set of all isomorphism classes of such simple symmetric semihomogeneous vector bundles, and a
vector bundle $\mathrm{W}\in\mathcal W_{a,b}$ will be usually denoted $\mathrm{W}_{a,b}$.\footnote{Here our notation differs from the one of Oprea, as he denotes $\mathrm{W}_{a,b}$ the unique vector bundle in $ \mathcal{W}_{a,b}$ defined, as mentioned above, via the Schr\"odinger representation.}
We consider the subgroup
\[\Sigma(\mathrm{W}_{a,b})=\{\alpha\in\widehat A\>|\> \mathrm{W}_{a,b}\otimes \alpha\cong\mathrm{W}_{a,b}\}\]
We have that, independently of the parity of $a$,
\begin{equation}\label{subgroup}\Sigma(\mathrm{W}_{a,b})=\widehat A[a]
\end{equation}
($a$-torsion line bundles, see \cite[Corollary 7.2]{mukai-semi}).
Given $\mathrm{W}_{a,b}\in \mathcal W_{a,b}$, the other vector bundles, say $\mathrm{W}_{a,b}^\prime$ (possibly isomorphic to $\mathrm{W}_{a,b}$) whose isomorphism class lies in $\mathcal{W}_{a,b}$ are those of the form
\[\mathrm{W}_{a,b}^\prime\cong \mathrm{W}_{a,b}\otimes \beta\]
for $\beta\in \widehat A[a^g]$ such that $(-1)_A^*(\mathrm{W}_{a,b}\otimes\beta)\cong \mathrm{W}_{a,b}\otimes\beta^{-1}\cong\mathrm{W}_{a,b}\otimes\beta$. Therefore
$\beta^2\in \widehat A[a]$. This, together with the condition $\beta\in \widehat A[a^g]$, implies that, if $a$ is odd (or $g=1$), then $\beta\in\widehat A[a]=\Sigma(\mathrm{W}_{a,b})$. Hence, as mentioned above, there is a unique such isomorphism class (\cite[\S2.1]{oprea}).
In the proof of Theorem \ref{kempf-gen} the following (slight variant of a) result of Oprea will be used.
For $a$ and $b$ coprime positive integers
one considers the isogeny
\[\mu_{b, a}:A\times A\rightarrow A\times A\quad (z,t)\mapsto (bz+at, z-t).\]
\begin{proposition}\label{maledetta} \emph{(Oprea)}
Keeping the above notation and assumptions,
given a pair of vector bundles $(\mathrm{W}_{a,a+b},\mathrm{W}_{b,a+b}) \in\mathcal{W}_{a,a+b}\times\mathcal W_{b,a+b}$ there is a vector bundle $\mathrm{W}_{ab,1}\in\mathcal W_{ab,1}$ such that
\begin{equation}\label{wirt2}
\mu_{b,a}^*(\mathrm{W}_{ab,1}\boxtimes {\mathcal O}_A(\Theta))\cong \mathrm{W}_{a,a+b}\boxtimes \mathrm{W}_{b,a+b}
\end{equation}
\end{proposition}
\proof
Let $a$ and $b$ be coprime positive integers. For $a$ and $b$ both odd (or $g=1$ and arbitrary $a$ and $b$, see \cite{a}) all vector bundles appearing in the statement are unique and the Proposition is exactly Oprea's \cite[Proposition 1]{oprea}.
Next, we assume that $g\ge 2$ and that $a$ and $b$ are still coprime, but one of them, say $a$, is even.
We fix a reference bundle $\overline\mathrm{W}_{ab,1}\in \mathcal{W}_{ab,1}$. Oprea's argument still proves that the determinant of $\mu_{b,a}^*\bigl(\overline\mathrm{W}_{ab,1}\boxtimes {\mathcal O}_A(\Theta)\bigr)$ is equal to the one of $\mathrm{W}_{a,a+b}\boxtimes\mathrm{W}_{b,a+b}$, and that
\begin{equation}\label{wirt3}
\mu_{b,a}^*(\overline\mathrm{W}_{ab,1}\boxtimes {\mathcal O}_A(\Theta)) \cong (\mathrm{W}_{a,a+b}\otimes \delta)\boxtimes (\mathrm{W}_{b,a+b} \otimes \gamma)
\end{equation}
for suitable $(\delta,\gamma)\in \widehat A[a^g]\times\widehat A[b^g]$. We claim that, moreover, both the vector bundles $E:=\mathrm{W}_{a,a+b}\otimes \delta$ and $F:=\mathrm{W}_{b,a+b}\otimes \gamma$ are symmetric. Indeed, an immediate computation shows that $(-1_A,1_A)\circ\mu_{b,a}= (1_A,-1_A)\circ \mu_{b,a}\circ (-1_A,-1_A)$. Therefore, since $\overline\mathrm{W}_{ab,1}$ and ${\mathcal O}_A(\Theta)$ are both symmetric, pulling back $\overline\mathrm{W}_{ab,1}\boxtimes {\mathcal O}_A(\Theta)$ under the morphism $(-1_A,1_A)\circ\mu_{b,a}$ we get that $(-1_A,1_A)^*(E\boxtimes F)\cong (1_A,-1_A)^*(E\boxtimes F)$. This proves the claim. Hence, since $b$ is odd, $\mathrm{W}_{b,a+b}\cong \mathrm{W}_{b,a+b}\otimes\gamma$, i.e. $\gamma\in\widehat A[b]$. Moreover $\delta^2\in \Sigma(\mathrm{W}_{a,a+b})=\widehat A[a]$.
To conclude the proof, we show that we can replace $\overline \mathrm{W}_{ab,1}$ with another vector bundle $\mathrm{W}_{ab,1}\in \mathcal W_{ab,1}$ such that (\ref{wirt2}) is satisfied. For any $\alpha\in \widehat A$ we have that $\mu_{b,a}^*(\alpha\boxtimes {\mathcal O}_A)=\alpha^b\boxtimes\alpha^a$. Therefore
\[\mu_{b,a}^*\bigl((\overline\mathrm{W}_{ab,1}\otimes\alpha)\boxtimes {\mathcal O}_A(\Theta)\bigr) \cong (\mathrm{W}_{a,a+b}\otimes \delta\otimes \alpha^b)\boxtimes (\mathrm{W}_{b,a+b} \otimes \gamma\otimes\alpha^a).\]
Taking any $\alpha$ such that $\alpha^b=\delta^{-1}$ we have that $\alpha^2\in\widehat A[ab]$ (hence also $\alpha\in\widehat A[(ab)^g]$ as soon as $g> 1$). Therefore $\overline\mathrm{W}_{ab,1}\otimes\alpha\in\mathcal{W}_{ab,1}$. As above, by uniqueness when $b$ is odd, we have that $\mathrm{W}_{b,a+b}\cong \mathrm{W}_{b,a+b}\otimes \gamma\otimes \alpha^a$. Hence
\[
\mu_{b,a}^*\bigl((\overline\mathrm{W}_{ab,1}\otimes\alpha)\boxtimes {\mathcal O}_A(\Theta)\bigr) \cong \mathrm{W}_{a,a+b}\boxtimes \mathrm{W}_{b,a+b}.
\]
\section{Proof of Theorem \ref{kempf-gen} }\label{kempf-section}
We essentially follow Kempf's argument, with some simplifications. Let $ (\mathrm{W}_{a,a+b}, \mathrm{W}_{b,a+b})\in \mathcal W_{a,a+b}\times\mathcal W_{b,a+b}$. To render the argument more transparent we first prove the result for $(x,y)=(0,0)$.
The multiplication map $m_{a,b}(0,0)$ is the map $H^0(r_\Delta)$, where $r_\Delta$ is the restriction to the diagonal
\[
r_\Delta: \mathrm{W}_{a,a+b}\boxtimes \mathrm{W}_{b,a+b}\rightarrow \bigl(\mathrm{W}_{a,a+b}\boxtimes \mathrm{W}_{b,a+b}\bigr)_{|\Delta}
\]
We apply Proposition \ref{maledetta}, ensuring that
\[
\mu_{b,a}^*\bigl((\mathrm{W}_{ab,1})\boxtimes {\mathcal O}_A(\Theta)\bigr)\cong \mathrm{W}_{a,a+b}\boxtimes \mathrm{W}_{b,a+b}
\]
Moreover, since $\mu_{b,a}^{*}({\mathcal O}_{A\times \{0\}})={\mathcal O}_\Delta$, it follows that $r_\Delta=\mu_{b,a}^*(\rho)$, where $\rho$ is the restriction map
\[\rho: \mathrm{W}_{ab,1}\boxtimes {\mathcal O}_A(\Theta)\rightarrow \bigl(\mathrm{W}_{ab,1}\boxtimes {\mathcal O}_A(\Theta)\bigr)_{|A\times\{0\}}\]
(By the way we note that, since the restricted map $(\mu_{b,a})_{|\Delta}:\Delta\rightarrow A\times\{0\}$ is identified to the isogeny $(a+b)_A:A\rightarrow A$, it follows that $\mathrm{W}_{a,a+b}\otimes \mathrm{W}_{b,a+b}\cong (a+b)_A^*\mathrm{W}_{ab,1}$).
The kernel of the isogeny $\mu_{b,a}$ is $\Delta[a+b]:=\{(z,z)\>|\>(a+b)z=0\}$. Therefore, since
\[
r_\Delta=\mu_{b,a}^*(\rho),
\]
the multiplication map $m_{a,b}(0,0)=H^0(r_\Delta)$ decomposes as
\[\bigoplus_{\alpha\in\widehat A[a+b]}\Bigl(H^0(A\times A, (\mathrm{W}_{ab,1}\otimes P_\alpha)\boxtimes ({\mathcal O}_A(\Theta)\otimes P_\alpha))\rightarrow H^0(A\times A, ((\mathrm{W}_{ab,1}\otimes P_\alpha)\boxtimes ({\mathcal O}_A(\Theta)\otimes P_\alpha))_{|A\times\{0\}})\Bigr)\>.\]
Via the isomorphism induced by the principal polarization $A\rightarrow \widehat A$,
the above can be written as
\[\bigoplus_{z\in A[a+b]}\Bigl(H^0(A\times A, t_z^*\mathrm{W}_{ab,1}\boxtimes t_z^*{\mathcal O}_A(\Theta))\buildrel{\lambda_z}\over\longrightarrow H^0(A\times A, (t_z^*\mathrm{W}_{ab,1}\boxtimes t_z^*{\mathcal O}_A(\Theta))_{|A\times\{0\}})\Bigr)\>.\]
Notice that, by (\ref{rk-etc}), $h^0(A,\mathrm{W}_{ab,1})=1$, so that the individual maps $\lambda_z$ appearing above are maps of $1$-dimensional vector spaces. Hence the assertion of the theorem follows from the fact that the scalar $\lambda_z$ vanishes if and only if $A\times\{0\}\subset A\times \Theta_{z}$, i.e. $z\in\Theta$.
In the general case the proof is similar. In the first place, applying $t_{-x}^*$ we can assume that $x=0$. Via translation on the second factor we identify the map $m_{a,b}(0,y)$ of the statement to the map
\begin{equation}\label{variant} H^0(A,\mathrm{W}_{a,a+b})\otimes H^0(A, \mathrm{W}_{b,a+b})\longrightarrow H^0(A,\mathrm{W}_{a,a+b}\otimes t^*_y\mathrm{W}_{b,a+b}).
\end{equation}
This is the $H^0$ of the restriction map
\begin{equation}\label{variant2}
r_{\Delta_y}: \mathrm{W}_{a,a+b}\boxtimes \mathrm{W}_{b,a+b}\rightarrow \bigl(\mathrm{W}_{a,a+b}\boxtimes \mathrm{W}_{b,a+b}\bigr)_{|\Delta_y}
\end{equation}
where $\Delta_y=d^{-1}(y)$ (here $d$ is the difference map $A\times A\rightarrow A$, $(z,t)\mapsto z-t$). We have that ${\mathcal O}_{\Delta_y}=\mu_{b,a}^{*}({\mathcal O}_{A\times\{y\}} )$. The rest of the proof is unchanged.
\section{Multiplication of global sections of semihomogeneous vector bundles}\label{multip-section}
\noindent\textbf{A surjectivity criterion for multiplication maps. } We recall \cite[Theorem 7.30]{PP2}, mentioned in the Introduction. In the case of interest for this paper, namely semihomogeneous vector bundles whose first Chern class is a power of a principal polarization, it can be stated as follows.
Following Mukai (\cite{mukai-semi}), for a vector bundle $E$ on $A$ we denote
\begin{equation}\label{delta}
\delta_E=\frac{c_1(E)}{\text{rk}\,(E)}\in NS(A)\otimes \mathbb Q \>.
\end{equation}
If $c_1(E)$ is a multiple of ${\underline \theta}$ we denote also $\mu_E$ the rational number defined by
\[\delta_E=\mu_E{\underline \theta} \>.
\]
\begin{theorem}\label{pp}(Pareschi-Popa) Let $E$ and $F$ be semihomogeneous vector bundles on $A$ such that $c_1(E)$ and $c_1(F)$ are multiples of ${\underline \theta}$. If \[\mu_F >1\qquad \hbox{and}\qquad
\mu_E>\frac{\mu_F}{\mu_F-1}
\]
then the multiplication map of global sections
\[
H^0(A,E)\otimes H^0(A,F)\rightarrow H^0(A,E\otimes F)
\]
is surjective.
\end{theorem}
Note that for line bundles one recovers the classical fact that the multiplication map of a second power and a third power of a line bundle representing ${\underline \theta}$ is surjective.
Here we show that Theorem \ref{pp} is just the restatement of \cite[Theorem 7.30]{PP2}, asserting that the multiplication map as in the statement is surjective as soon as both $E(-\Theta)$ and $F(-\Theta)$ satisfy IT(0) and
\begin{equation}\label{mascherata}
\delta_{E(-\Theta)}+\delta_{\widehat{\Phi}_\mathcal{P}(F(-\Theta))}>0
\end{equation}
where $\widehat{\Phi}_{\mathcal{P}}: D(A)\rightarrow D(\widehat A)$ is the Fourier-Mukai transform associated to the Poincar\'e bundle. Let us explain how to get the statement of Theorem \ref{pp} from this. In the first place we recall that, for a semihomogeneous vector bundle $G$, the IT(0) condition is equivalent to $\delta_G>0$. This follows, for example, from \cite[Lemma 6.11]{mukai-semi}, stating that
\begin{equation}\label{homog}
r_A^*G\cong (\det G)^r\otimes H
\end{equation}
where: $r:=\text{rk}\,G$, $r_A$ denotes, as usual, the isogeny $x\mapsto rx$, and $H$ is a homogeneous vector bundle (indeed a homogeneous vector bundle is a direct sum of vector bundles of the form ${\mathrm U}\otimes \alpha$, where $\alpha\in \widehat A$ and ${\mathrm U}$ is a \emph{unipotent} vector bundle, namely a vector bundle having a filtration $0={\mathrm U}_0\subset {\mathrm U}_1\subset\cdots\subset{\mathrm U}_{n-1}\subset {\mathrm U}_n={\mathrm U}$, with ${\mathrm U}_i/{\mathrm U}_{i-1}\cong{\mathcal O}_A$ for $i=1,\dots ,n$, see \cite[Theorem 4.17]{mukai-semi}).
Using the formulas
\begin{equation}\label{sum}
\delta_{F\otimes G}=\delta_F+\delta_G\qquad\hbox{ and }\qquad \delta_{G^\vee}=-\delta_G,
\end{equation}
it follows that the condition that $\mu_F-1>0$, i.e. the first hypothesis of Theorem \ref{pp}, is equivalent to the fact that $F(-\Theta)$ satisfies IT(0).
If this is the case then, by base change, the complex $\widehat{\Phi}_\mathcal{P}(F(-\Theta))$ is a sheaf in cohomological degree $0$, in fact a locally free sheaf.
Next, we recall that, for a semihomogenous vector bundle $G$ satisfying IT(0), the vector bundle $\widehat{\Phi}_\mathcal{P}(G)$ is again semihomogeneous (this follows from the fact that $\widehat{\Phi}_\mathcal{P}$ exchanges translation with tensorization with a line bundle in $\widehat A$, see \cite[(3.1)]{mukai}).
Finally we claim that if $G$ is such that $c_1(G)$ is a multiple of ${\underline \theta}$ then also $c_1(\widehat{\Phi}_\mathcal{P}(G))$ is a multiple of ${\underline \theta}$ and the following beautiful formula holds
\begin{equation}\label{nice}
\mu_{\widehat{\Phi}_\mathcal{P}(G)}=-\frac{1}{\mu_G}.
\end{equation}
This translates the hypothesis (\ref{mascherata}) into the numerical condition
\[\mu_E-1-\frac{1}{\mu_F-1}>0,\]
i.e. the second inequality in the hypothesis of Theorem \ref{pp}.
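In detail: by (\ref{sum}), $\delta_{E(-\Theta)}=(\mu_E-1){\underline \theta}$ and $\mu_{F(-\Theta)}=\mu_F-1>0$, so that (\ref{nice}) yields
\[
\delta_{\widehat{\Phi}_\mathcal{P}(F(-\Theta))}=-\frac{1}{\mu_F-1}\,{\underline \theta},
\]
and the left hand side of (\ref{mascherata}) equals $\bigl(\mu_E-1-\frac{1}{\mu_F-1}\bigr)\,{\underline \theta}$.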
Finally, we briefly indicate the proof of (\ref{nice}). This is certainly well known to the experts but we couldn't find an explicit reference.
We recall that, for $\lambda\in\mathbb Q$, $r_A^*(\lambda{\underline \theta})=r^2\lambda{\underline \theta}$. Therefore from (\ref{homog}) it follows that $ch(G)=r\exp (\mu_G{\underline \theta})$. Then a well known calculation using GRR and the Fourier-Mukai transform at the level of Chow rings modulo numerical equivalence (see e.g. the proof of \cite[Lemma 2]{oprea}) shows that $ch(\widehat{\Phi}_\mathcal{P}(G))=r(\mu_G)^{g}\exp (-\mu_G^{-1}{\underline \theta})$. Therefore
$\delta_{\widehat{\Phi}_\mathcal{P}(G)}=-\mu_G^{-1}{\underline \theta}$.
\noindent\textbf{Proof of Theorem \ref{cor}. } By Theorem \ref{kempf-gen} for fixed $x,y\in A$ the maps $m_{a,b}(x,y)$ have the same rank for all $a,b$ (coprime) with $a+b=n$ (and for all representatives $(\mathrm{W}_{a,a+b}, \mathrm{W}_{b,a+b})\in\mathcal {W}_{a,a+b}\times\mathcal{W}_{b,a+b}$). Therefore it is enough to prove the statement for $(a,b)= (n-1,1)$. As in the proof of Theorem \ref{kempf-gen}, we can furthermore assume that $x=0$. Let us fix $y\in A$. For general $z\in A$ we consider the commutative diagram
\begin{equation}\label{diagram}\xymatrix{H^0(\mathrm{W}_{n-1,n})\otimes H^0(t_y^*{\mathcal O}_A(n\Theta))\otimes H^0(t_z^*\mathrm{W}_{n-1,n})\ar[rr]\ar[dd]^{m_{n-1,1}(0,y)\otimes \,\mathrm{id}}&&H^0(\mathrm{W}_{n-1,n})\otimes H^0(t_y^*{\mathcal O}_A(n\Theta)\otimes t_z^*\mathrm{W}_{n-1,n})\ar[dd]\\ \\
V_n(y)\otimes H^0(t_z^*\mathrm{W}_{n-1,n})\ar[rr]&&H^0(\mathrm{W}_{n-1,n}\otimes t_y^*{\mathcal O}_A(n\Theta)\otimes t_z^*\mathrm{W}_{n-1,n})}
\end{equation}
where $V_n(y)$ denotes the image of the map $m_{n-1,1}(0,y)$.
By Theorem \ref{kempf-gen} the top horizontal map is surjective for general $z\in A$. By Theorem \ref{pp} the right vertical map is surjective for all $z\in A$ (here we use that $\mu_{\mathrm{W}_{a,b}}=\frac{b}{a}$ and (\ref{sum})). Therefore the bottom horizontal map is surjective for general $z\in A$. Hence
\[
\dim V_n(y)\ge \frac{\chi(\mathrm{W}_{n-1,n}^{\otimes 2}\otimes{\mathcal O}_A(n\Theta))}{\chi(\mathrm{W}_{n-1,n})} \>.
\]
By (\ref{rk-etc}) we have that $\chi(\mathrm{W}_{n-1,n})=n^g$. Using (\ref{fund}) one gets easily that $\chi(\mathrm{W}_{n-1,n}^{\otimes 2}\otimes {\mathcal O}_A(n\Theta))=(n-1)^gn^g(n+1)^g$. The result follows.
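Alternatively, the latter Euler characteristic can be checked via the Chern character: since $ch(\mathrm{W}_{n-1,n})=(n-1)^g\exp\bigl(\frac{n}{n-1}{\underline \theta}\bigr)$ (see the proof of (\ref{nice}) above), multiplicativity of $ch$ and integration over $A$ give
\[
\chi\bigl(\mathrm{W}_{n-1,n}^{\otimes 2}\otimes {\mathcal O}_A(n\Theta)\bigr)=(n-1)^{2g}\Bigl(\frac{2n}{n-1}+n\Bigr)^{g}=(n-1)^{2g}\Bigl(\frac{n(n+1)}{n-1}\Bigr)^{g}=(n-1)^gn^g(n+1)^g.
\]
Dividing by $\chi(\mathrm{W}_{n-1,n})=n^g$ gives $\dim V_n(y)\ge\bigl((n-1)(n+1)\bigr)^g=(n^2-1)^g$, as claimed in Theorem \ref{cor}.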
\section{Proof of Theorem \ref{main}}\label{proof}
It is easy to check that the bound of Theorem \ref{main} is attained by line bundles of the form
$ {\mathcal O}_A(\Theta_x)= \boxtimes_{i}{\mathcal O}_{E_i}(z_i)$ on products of elliptic curves $E_i$,
where the $z_i$'s are $n$-torsion points. Conversely, the second part of Theorem \ref{main} asserts that this is the only case. To prove this, the main point consists in showing that if the bound is attained then $\Theta$ must be reducible and therefore the p.p.a.v. $(A,{\underline \theta})$ decomposes as a product of lower dimensional p.p.a.v's.
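Let us include the easy check. If $A=E_1\times\cdots\times E_g$ and ${\mathcal O}_A(\Theta_x)=\boxtimes_i{\mathcal O}_{E_i}(z_i)$ with $z_i\in E_i[n]$, then $\Theta_x$ is the sum of the pullbacks of the points $z_i$ under the projections, so a point $(w_1,\dots,w_g)\in A[n]$ lies off $\Theta_x$ exactly when $w_i\ne z_i$ for all $i$. Since each $E_i[n]$ consists of $n^2$ points, one of which is $z_i$, there are $(n^2-1)^g$ such points, whence
\[
\Theta_x(n)=n^{2g}-(n^2-1)^g.
\]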
\noindent\textbf{Proof for $\mathbf{g>2}$. } According to Theorem \ref{kempf-gen}, what we need to show is that in the irreducible case the rank of the multiplication maps $m_{n-1,1}(0,y)$ is $>(n^2-1)^g$ for all $y$.
For $y\in A$ and $n>1$ let us consider the following divisor
\begin{equation}\label{E}
E_{y,n}:=\sum_{\eta\in A[n]}\Theta_{y+\eta}
\end{equation}
An immediate consequence of Theorem \ref{kempf-gen} is the following
\begin{corollary}\label{aggiunto} Let $y\in A$. The map
\[
m_{a,b}(y,z): H^0(A,t_y^*\mathrm{W}_{a,a+b})\otimes H^0(A, t_z^*\mathrm{W}_{b,a+b})\longrightarrow H^0(A,t_y^*\mathrm{W}_{a,a+b}\otimes t^*_z\mathrm{W}_{b,a+b})
\]
is singular if and only if $z\in \text{Supp}\,E_{y,a+b}$.
\end{corollary}
\noindent (where, for reasons apparent in what follows, the notation $(x,y)$ in the statement of Theorem \ref{kempf-gen} has been switched to $(y,z)$). In order to prove Theorem \ref{main} we view the map of Corollary \ref{aggiunto}, as well as all the maps of diagram (\ref{diagram}), as fiberwise maps of morphisms of locally free sheaves. This is well known and is done as follows.
Following \cite{pareschi} (see also \cite{PP1}), given two coherent sheaves $\mathcal{F}$ and $\mathcal{G}$, we define their (derived) skew Pontryagin product
\[
\mathcal{F}\hat *\mathcal{G}:=d_*(\mathcal{F}\boxtimes \mathcal{G})
\]
where $d:A\times A\rightarrow A$ is the difference map. We make the simplifying assumption that both the sheaves $\mathcal{F}$ and $\mathcal{G}$ are locally free, semihomogeneous, and satisfy IT(0) (these conditions will always be satisfied by the sheaves appearing in what follows). By \cite[Proposition 2.9]{PP1} the IT(0) condition for $\mathcal{F}$ and $\mathcal{G}$ implies that the vector bundle $\mathcal{F}\otimes \mathcal{G}$ also satisfies IT(0). This implies in turn, by base change, that $R^id_*(\mathcal{F}\boxtimes \mathcal{G})=0$ for $i\ne 0$, that $\mathcal{F}\hat *\mathcal{G}$ is a locally free sheaf (in degree 0), and that
\[ d_*(\mathcal{F}\boxtimes \mathcal{G})\otimes \mathbb C(z)\cong H^0(A\times A, (\mathcal{F}\boxtimes \mathcal{G})_{|\Delta_z})=H^0(A,\mathcal{F}\otimes t_z^*\mathcal{G})\]
for all $z\in A$ (see (\ref{variant2})). Thus the multiplication map of global sections
\[
H^0(A, \mathcal{F})\otimes H^0(A, t_z^*\mathcal{G})\rightarrow H^0(A, \mathcal{F}\otimes t_z^*\mathcal{G}),
\]
is naturally identified, via the isomorphism
\[
\mathrm{id}\otimes t_z^*: H^0(A, \mathcal{F})\otimes H^0(A, \mathcal{G})\rightarrow H^0(A, \mathcal{F})\otimes H^0(A, t_z^*\mathcal{G}),
\] to the fiber map at $z$ of the map of ${\mathcal O}_A$-modules:
\begin{equation}\label{global-version}
H^0(A,\mathcal{F})\otimes H^0(A,\mathcal{G})\otimes {\mathcal O}_A\cong
{d_{23}}_*(\mathcal{F}\boxtimes{\mathcal O}_A\boxtimes\mathcal{G})\rightarrow {d_{23}}_*((\mathcal{F}\boxtimes{\mathcal O}_A\boxtimes\mathcal{G})_{|\Delta_{12}})\cong \mathcal{F}\hat *\mathcal{G}
\end{equation}
where $d_{23}(x_1,x_2,x_3)= (x_2-x_3)$ and $\Delta_{12}=\{(x_1,x_2,x_3)\>|\>x_1=x_2\}$.
(Note that $d_*({\mathcal O}_A\boxtimes \mathcal{G})$ is trivial and canonically isomorphic to $H^0(A,\mathcal{G})\otimes {\mathcal O}_A$, as it is most easily seen via the automorphism of $A\times A$, $(x,y)\mapsto (x,x-y)$, sending $p_2$ to $d$ and leaving $p_1$ unchanged. Therefore ${d_{23}}_*(\mathcal{F}\boxtimes{\mathcal O}_A\boxtimes\mathcal{G})\cong H^0(A,\mathcal{F})\otimes H^0(A,\mathcal{G})\otimes {\mathcal O}_A$.)
More generally, given another IT(0) sheaf on $A$, say $\mathcal{H}$, the multiplication map of global sections
\[
H^0(A, \mathcal{F})\otimes H^0(A, \mathcal{H}\otimes t_z^*\mathcal{G})\rightarrow H^0(A, \mathcal{F}\otimes\mathcal{H}\otimes t_z^*\mathcal{G})
\]
is naturally identified with the fiber map at $z$ of the map of ${\mathcal O}_A$-modules:
\[H^0(A,\mathcal{F})\otimes ( \mathcal{H}\hat * \mathcal{G})\cong
{d_{23}}_*(\mathcal{F}\boxtimes\mathcal{H}\boxtimes\mathcal{G})\rightarrow {d_{23}}_*((\mathcal{F}\boxtimes\mathcal{H}\boxtimes\mathcal{G})_{|\Delta_{12}})\cong (\mathcal{F}\otimes\mathcal{H})\hat *\mathcal{G}.
\]
After these preliminaries, we can proceed with the proof. Let us fix $y\in A$. Then diagram (\ref{diagram}) is identified to the diagram of fiber maps at $z\in A$ of the commutative diagram of ${\mathcal O}_A$-modules, with surjective vertical maps,
\begin{equation}\label{diagram1}\xymatrix{H^0(\mathrm{W}_{n-1,n})\otimes H^0( t_y^*{\mathcal O}_A(n\Theta))\otimes H^0( \mathrm{W}_{n-1,n})\otimes {\mathcal O}_A\ar[r]\ar[d]&H^0( \mathrm{W}_{n-1,n})\otimes (t_y^*{\mathcal O}_A(n\Theta)\hat * \mathrm{W}_{n-1,n})\ar[d]\\
V_n(y)\otimes H^0( \mathrm{W}_{n-1,n})\otimes {\mathcal O}_A\ar[r]&\bigl(\mathrm{W}_{n-1,n}\otimes t_y^*{\mathcal O}_A(n\Theta)\bigr)\hat * \mathrm{W}_{n-1,n}} \ .
\end{equation}
After routine calculations (summarized below), based on well known results of Mukai and Oprea, one computes
\begin{equation}\label{c1}
c_1(\mathrm{W}_{a,a+b}\hat * \mathrm{W}_{b,a+b} )=(a+b)^{2g}{\underline \theta}
\end{equation}
and
\begin{equation}\label{c11}
c_1((\mathrm{W}_{a,a+b}\otimes \mathrm{W}_{b,a+b})\hat * \mathrm{W}_{a,a+b} )=(a+b)^{g+2}a^{g-1}(a+2b)^{g-1}{\underline \theta}
\end{equation}
In particular, for $a=n-1$ and $b=1$ one gets
\begin{equation}\label{c1bis}
c_1(\mathrm{W}_{n-1,n}\hat * {\mathcal O}_A(n\Theta) )=n^{2g}{\underline \theta}
\end{equation}
and
\begin{equation}\label{c11bis}
c_1((\mathrm{W}_{n-1,n}\otimes {\mathcal O}_A(n\Theta))\hat *\mathrm{W}_{n-1,n} )=n^{g+2}(n-1)^{g-1}(n+1)^{g-1}{\underline \theta}
\end{equation}
Assume that $\Theta$ is irreducible, and, as above, let us fix $y\in A$. From Corollary \ref{aggiunto} and (\ref{c1bis}) it follows that the effective divisor defined by the vanishing of the determinant of the map
\[
H^0( t_y^*{\mathcal O}_A(n\Theta))\otimes H^0( \mathrm{W}_{n-1,n})\otimes {\mathcal O}_A\rightarrow (t_y^*{\mathcal O}_A(n\Theta))\hat * \mathrm{W}_{n-1,n}
\]
is the divisor $E_{y,n}$ of (\ref{E}) (note that if $\Theta$ is irreducible then $E_{y,n}$ has no multiple components).
Now assume that $\dim V_n(y)=n^{2g}-(n^2-1)^g$, which corresponds precisely to the bound in Theorem \ref{main}. Then, as shown in the previous section, the source and target of the bottom horizontal map of (\ref{diagram1}) have the same rank. The determinant of this map vanishes on an effective divisor $D_{y,n}$, which is invariant under translation by $n$-torsion points (since the vector bundles $\mathrm{W}_{n-1,n}$ are so, see e.g. \cite[(14)]{oprea}). Therefore, since the support of $D_{y,n}$ is contained in $E_{y,n}$, it must be equal to $E_{y,n}$. Hence $c_1(D_{y,n})$ should be a multiple of $n^{2g}{\underline \theta}$. On the other hand, (\ref{c11bis}) yields that
\begin{equation}\label{contradiction}
c_1(D_{y,n})=n^{g+2}(n-1)^{g-1}(n+1)^{g-1}{\underline \theta}
\end{equation}
This is a contradiction as soon as $g\ge 3$. Hence, for $g\ge 3$, if the rank of $\Theta_y(n)$ attains the maximum for some $y\in A$ then the polarization must be reducible.
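The contradiction can also be checked arithmetically: since $\gcd(n,n^2-1)=1$, the coefficient $n^{g+2}(n-1)^{g-1}(n+1)^{g-1}$ is divisible by $n^{2g}$ if and only if $g+2\geq 2g$, i.e. $g\leq 2$. The short script below (an illustrative check, not part of the argument) confirms this for small values of $g$ and $n$:

```python
# c_1(D_{y,n}) = n^(g+2) (n-1)^(g-1) (n+1)^(g-1) theta can only be an integer
# multiple of n^(2g) theta when g <= 2, which is the contradiction for g >= 3.
for g in range(2, 7):
    for n in range(2, 10):
        coeff = n**(g + 2) * (n - 1)**(g - 1) * (n + 1)**(g - 1)
        divisible = coeff % n**(2 * g) == 0
        # gcd(n, n^2 - 1) = 1, so divisibility holds iff g + 2 >= 2g
        assert divisible == (g <= 2)
print("no obstruction for g = 2; contradiction for every g >= 3")
```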
Finally we show the computation of (\ref{c1}) and (\ref{c11}). We use the Fourier-Mukai transform $\widehat\Phi_\mathcal{P}:D(A)\rightarrow D(\widehat A)$
already invoked in \S\ref{multip-section}, and also the transform in the opposite direction \break $\Phi_\mathcal{P}:D(\widehat A)\rightarrow D(A)$, as well as their versions at the level of Chow rings modulo numerical equivalence $\widehat\Phi_{CH}:\mathcal A(A)\rightarrow \mathcal A(\widehat A)$ and $\Phi_{CH}:\mathcal A ( \widehat A)\rightarrow \mathcal A(A)$ (see \cite[Proposition 1.21]{mukaiF}). By GRR they commute with the Chern character. We use the following facts: (a) $\Phi_\mathcal{P}\circ\widehat \Phi_\mathcal{P}=(-1)_A^*[-g]$; \break
\ (b) $ch(\mathrm{W}_{a,b})=a^g\exp(\frac{b}{a}{\underline \theta})$ (this follows from (\ref{fund})); \ (c) $\Phi_{CH}(\exp\frac{b}{a}{\underline \theta})=(\frac{b}{a})^g\exp(-\frac{a}{b}{\underline \theta})$ (\cite[\S 3.3]{oprea}); (d) If $G$ is symmetric then $\Phi_\mathcal{P}(F\otimes G)=\Phi_\mathcal{P}(F)\hat *\Phi_\mathcal{P}(G)[g]$ and $\widehat\Phi _\mathcal{P}(\mathcal{F}\hat *\mathcal{G})=\widehat\Phi_\mathcal{P}(\mathcal{F})\otimes \widehat\Phi_\mathcal{P}(\mathcal{G})$ (this follows from \cite[(3.7)]{mukai} using that $\mathcal{F}\hat*\mathcal{G}\cong \mathcal{F}* (-1)^*\mathcal{G})$, where $*$ is the Pontryagin product).
Therefore
\begin{eqnarray*}
ch(\mathrm{W}_{a,a+b}\hat * \mathrm{W}_{b,a+b})
& \buildrel{(d)(b)(c)}\over =&(-1)^g\Phi_{CH}\bigl((a+b)^{g}\exp(-\frac{a}{a+b}{\underline \theta})(a+b)^{g}\exp(-\frac{b}{a+b}{\underline \theta})\bigr)\\
&=&(a+b)^{2g}\exp({\underline \theta})
\end{eqnarray*}
This proves (\ref{c1}). Moreover
\begin{eqnarray*}
&&ch\bigl(\,(\mathrm{W}_{a,a+b}\otimes \mathrm{W}_{b,a+b})\hat *\mathrm{W}_{a,a+b}\bigr)=\\
& \buildrel{(a)(b)(d)}\over=&(-1)^g\Phi_{CH}\bigl(\,\widehat \Phi_{CH}\bigl(a^g\exp(\frac{a+b}{a}{\underline \theta})b^g\exp(\frac{a+b}{b}{\underline \theta})\bigr)\cdot \widehat\Phi_{CH} \bigl(a^g\exp(\frac{a+b}{a}{\underline \theta})\bigr)\,\bigr)\\
& =&(-1)^g \Phi_{CH}\bigl(\,\widehat\Phi_{CH}\bigl((ab)^g\exp(\frac{(a+b)^2}{ab}{\underline \theta})\bigr)\cdot \widehat\Phi_{CH} \bigl(a^g\exp(\frac{a+b}{a}{\underline \theta})\bigr)\,\bigr)\\
& \buildrel{(c)}\over = &(-1)^g\Phi_{CH}\bigl(\,\bigl((ab)^g\frac{(a+b)^{2g}}{(ab)^g}\exp(-\frac{ab}{(a+b)^2}{\underline \theta})\bigr)\cdot \bigl(a^g\frac{(a+b)^g}{(a)^g}\exp(-\frac{a}{a+b}{\underline \theta})\bigr)\,\bigr)\\
& =
&\Phi_{CH}\bigl((a+b)^{3g}\exp (-\frac{a(2b+a)}{(a+b)^2}{\underline \theta})\bigr)\\
& \buildrel{(c)}\over = &(a+b)^ga^g(2b+a)^g\exp(\frac{(a+b)^2}{a(2b+a)}{\underline \theta}),
\end{eqnarray*}
where in the first equality we used that the vector bundles appearing in the calculation are symmetric, so that one can neglect the $(-1)_A^*$ in the formula $\Phi_\mathcal{P}\circ \widehat \Phi _\mathcal{P}=(-1)^*[-g]$. This proves~(\ref{c11}).
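The algebra above only involves classes of the form $c\exp(s{\underline \theta})$, on which rule (c) acts by $(c,s)\mapsto (cs^g,-1/s)$, tensor product acts by multiplying the coefficients and adding the exponents, and $\hat *$ acts by transforming, multiplying and transforming back, the shift $[-g]$ contributing $(-1)^g$. The following script (an illustrative sanity check of (\ref{c1}) and (\ref{c11}), not part of the proof) carries out this bookkeeping with exact rational arithmetic:

```python
from fractions import Fraction

def phi(c, s, g):
    """Rule (c) on a class c*exp(s*theta): returns (c*s^g, -1/s)."""
    return c * s**g, -1 / s

def conv(cl1, cl2, g):
    """ch of the skew convolution F *hat* G via rule (d): transform both
    factors, multiply, transform back; the shift [-g] contributes (-1)^g."""
    c1, s1 = phi(*cl1, g)
    c2, s2 = phi(*cl2, g)
    c, s = phi(c1 * c2, s1 + s2, g)
    return (-1)**g * c, s

for g in (2, 3, 4):
    for a, b in [(1, 1), (2, 1), (3, 2)]:
        W_a = (a**g, Fraction(a + b, a))          # ch(W_{a,a+b}), rule (b)
        W_b = (b**g, Fraction(a + b, b))          # ch(W_{b,a+b})
        C, s = conv(W_a, W_b, g)                  # c_1 of C*exp(s*theta) is C*s
        assert C * s == (a + b)**(2 * g)          # formula (c1)
        W_ab = (a**g * b**g, Fraction(a + b, a) + Fraction(a + b, b))
        C, s = conv(W_ab, W_a, g)
        assert C * s == (a + b)**(g + 2) * a**(g - 1) * (a + 2 * b)**(g - 1)  # (c11)
print("formulas (c1) and (c11) verified on sample values of (g, a, b)")
```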
\noindent\textbf{Proof for $ \mathbf{g=2}$. } In this case the irreducibility means that $\Theta$ is a smooth irreducible curve of genus $2$, and $A$ is its Jacobian. Assuming this, we claim that the isogeny $n_A$ restricted to any translate $\Theta_y$ is birational onto its image. We postpone this for the moment (see below), and we proceed with the proof. We note that the $n$-torsion points in $\Theta_y$ (if any) map all to $0$, which is therefore a point of multiplicity $\Theta_y(n)$ of the curve $n_{A,y}(\Theta_y)$. The class of the curve $n_A(\Theta_y)$ is $n^2{\underline \theta}$ (indeed $n_A^*(n_A(\Theta_y))$ is the divisor
$ E_{y,n}$
of (\ref{E}), whose class is $n^4{\underline \theta}$). Hence
\[
m(A,\Theta,0):=\inf_{0\in C\subset A}\Big\{\frac{\Theta\cdot C}{\mathrm{mult}_0\>C}\Big\}\le \frac{2n^2}{\Theta_y(n) },
\] the infimum being taken over all reduced irreducible curves $C$ in $A$ passing through $0$. But $m(A,\Theta ,0)$ is the Seshadri constant of $\Theta$ at the point $0$, (actually it is constant on all points of $A$, see \cite[\S 5.1]{pag1}), and it is known that, for irreducible principally polarized abelian surfaces $A$, $m(A,\Theta,0)=\frac 4 3$ (as a particular case of a more general result concerning jacobians of hyperelliptic curves, see \cite[Theorem 7]{debarre}, where Debarre attributes it to Lazarsfeld). This proves that, if $g=2$ and $\Theta$ is irreducible, $\Theta_y(n)\le \frac{3}{2}n^2<2n^2-1=n^4-(n^2-1)^2$. This proves the desired bound for $g=2$.
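The elementary inequalities used in this step are immediate to confirm (an illustrative check): $\frac{3}{2}n^2<2n^2-1$ amounts to $n^2>2$, and $n^4-(n^2-1)^2=2n^2-1$ is an algebraic identity.

```python
# Check the chain  (3/2) n^2  <  2 n^2 - 1  =  n^4 - (n^2 - 1)^2  for all n >= 2.
for n in range(2, 100):
    assert n**4 - (n**2 - 1)**2 == 2 * n**2 - 1   # algebraic identity
    assert 3 * n**2 < 2 * (2 * n**2 - 1)          # i.e. (3/2) n^2 < 2 n^2 - 1
print("bound for g = 2 confirmed")
```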
Finally, for the reader's convenience, we provide a proof of the previous claim, which is however a well known fact.
We denote the restriction $n_{A,y}:=(n_A)_{|\Theta_y}:\Theta_y\rightarrow A$. Let $x\in \Theta_y$. The fiber of $n_{A,y}$ at $x$ is, set theoretically, a subset of $\Theta_y$ of the form $\{x+\eta_1, x+\eta_2,\dots ,x+\eta_{k(x)}\}$, with $\eta_i\in A[n]$ and $\eta_1=0$. Hence $x\in \cap_{i=1}^{k(x)}\Theta_{y+\eta_i}$. In conclusion, the points $x$ such that $k(x)>1$ belong to the set of singular points of the effective divisor $E_{y,n}$ of (\ref{E}), which is finite since $\Theta$ is irreducible. This proves the claim.
\vskip0.5truecm In conclusion for all $g\ge 2$, and $n\ge 2$, if there is a $y\in A$ such that the translate $\Theta_y$ contains the maximal number of $n$-torsion points, namely $n^{2g}-(n^2-1)^g$, then $\Theta$ must be reducible. Therefore, by the decomposition theorem for p.p.a.v.'s (\cite[Theorem 4.3.1]{BL}), $A$ splits as the polarized product of irreducible p.p.a.v.'s $(A_i,{\underline \theta}_i)$ for $i=1,\dots ,k$, of dimension $g_i$, with $g=\sum_{i=1}^k g_i$. Furthermore $\Theta_y=\sum_{i=1}^kp_i^*\Theta_{i,y_i}$, where $p_i$ denotes the projection $A\rightarrow A_i$, and it follows that, for all $i$, the translates $\Theta_{i,y_i}$ contain the maximal number of $n$-torsion points. Therefore $g_i=1$ for all $i$, since otherwise some of $\Theta_i$'s would be reducible. It follows also that, for all $i$, $y_i\in A_i[n]$. This concludes the proof of Theorem \ref{main}.
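When $(A,\Theta)$ is the polarized product of $g$ elliptic curves, the maximal count is attained exactly: identifying $E_i[n]\cong(\mathbb{Z}/n)^2$, the $n$-torsion points on $\Theta_y$ with $y_i\in E_i[n]$ are those with some coordinate equal to $y_i$, and inclusion-exclusion gives $n^{2g}-(n^2-1)^g$. A brute-force check (illustrative only):

```python
from itertools import product

def torsion_on_theta(g, n):
    """Count n-torsion points of E_1 x ... x E_g lying on Theta_y, with y = 0."""
    pts = list(product(range(n), repeat=2))   # E_i[n] identified with (Z/n)^2
    zero = (0, 0)
    return sum(any(xi == zero for xi in x) for x in product(pts, repeat=g))

for g, n in [(1, 2), (2, 2), (2, 3), (3, 2)]:
    assert torsion_on_theta(g, n) == n**(2 * g) - (n**2 - 1)**g
print("maximal count n^(2g) - (n^2 - 1)^g attained on split abelian varieties")
```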
\section{Introduction}
The accelerating cosmic concordance model (flat $\Lambda$CDM) is in agreement with all
the existing observations both at the background and perturbative levels. However, while more data are
being gathered, there is accumulating evidence
that a more realistic description beyond the ``precision era'' requires a
better comprehension of systematic effects in order to reach the desired accuracy.
Local inhomogeneities are not only possible sources of different systematics, but may also signal an intrinsic incompleteness of the cosmic description. This occurs because the Universe is homogeneous and
isotropic only on large scales ($\gtrsim 100 \, Mpc$). However, on smaller scales, a variety of structures involving galaxies, clusters, and superclusters of galaxies are observed. Permeating these structures there are also
voids or ``black regions'' (as dubbed long ago by Zel'dovich \cite{Z1}) where galaxies are almost or totally absent, as recently suggested by the N-body {\it Millennium} simulations \cite{Mill}. This means that statistically uniform
cosmologies are only
coarse-grained representations of what is actually present in the real Universe. As a consequence, the description of light propagation by taking into account such richness of structures is a challenging task to improve the
cosmic concordance model, but the correct method still remains far from a consensus \cite{rasanen2009,quartin,bolejkodr2011,Kolb2011,Clarkson1,bl12}.
Zel'dovich \cite{Ze64}, followed by
Bertotti \cite{bert66}, Gunn \cite{gunn67} and Kantowski
\cite{Kant69} were the first to investigate the influence of small-scale inhomogeneities in the light
propagation from distant sources. Later on, Dyer and Roeder (DR) \cite{Dy72} assumed explicitly that only a fraction of the average matter density
must affect the light propagation in the intergalactic medium. Phenomenologically, the unknown physical
conditions along the path, associated with the clumpiness effects, were
described by the smoothness parameter:
\begin{equation}
\alpha = \frac {\rho_{h}}{ \rho_{h} + \rho_{cl}},
\end{equation}
where $\rho_{h}$ and $\rho_{cl}$ are the homogeneous and clumped matter densities, respectively. This parameter quantifies the fraction of
homogeneously distributed matter within a given light cone. For $\alpha=0$ (empty beam), all matter is clumped
while for $\alpha=1$ the fully homogeneous case is recovered, and
for a partial clumpiness the smoothness parameter is restricted to the interval $[0,1]$. The reader should keep in mind that such a restriction
clearly excludes the possibility
of light rays traveling in regions denser than average. In principle, it should be very interesting to see how the presence of cosmic voids - a
key entity nowadays - could be considered in the above prescription.
More recently, many studies concerning the light propagation and its effects on the derived distances have been performed \cite{Mattsson10,Rasanen2010,bolejkodr2011,Clarkson1}.
Current constraints on the smoothness parameter are still weak \cite{BSL2012,AL04,SL07,hzdata}; however, it is intriguing that the quoted analyses had their best fits for $\alpha$ equal to unity, which corresponds to a
perfectly homogeneous $\Lambda$CDM model at all scales \cite{BSL2012,AL04}. More recently, some authors have also argued for a crucial deficiency of the DR approach, and, as such, that it should be replaced by a more detailed description, probably based on the weak lensing approach \cite{Rasanen2010,bolejkodr2011}.
In this letter we advocate a slightly different but complementary point of view. It will be assumed that the DR approach is a useful tool in the sense that it provides the simplest one-parametric description of the effects
caused by local inhomogeneities, but its initial conception needs to be somewhat extended. This is done in two steps: (i) by allowing $\alpha$ (here denoted by $\alpha_E$) to be greater than unity in the statistical data analyses, (ii) by interpreting the obtained results
in terms of the existence of an uncompensated distribution of cosmic voids or ``black regions" in the Universe (see section V). As we shall see, by performing a statistical analysis involving 557 SNe Ia from the Union2 compilation data \cite{Union2}, we obtain $\alpha_E=1.26^{+0.68}_{-0.54}$ ($1\sigma$) for a flat $\Lambda$CDM model. This $1\sigma$ confidence region shows that $\alpha >1$ has a very significant probability. We also show
that $\alpha$ greater than unity is also able to harmonize the low redshift (Supernovae Ia) and baryon acoustic oscillations (BAO) data with the observations from cosmic microwave background (CMB).
\begin{figure*}
\centerline{\epsfig{figure=grafa.eps,width=2.4truein,height=2.2truein}
\epsfig{figure=grafb.eps,width=2.4truein,height=2.2truein}
\epsfig{figure=fig1c.eps,width=2.4truein,height=2.2truein}\hskip
0.1in} \caption{(color online). {\bf{a)}} The $(\Omega_m,\alpha_E)$ plane for a flat $\Lambda$CDM model. The contours represent the 68.3\% and 95.4\%
confidence levels. The best fit is $\Omega_m=0.25$ and $\alpha_E=1.26$.
Note that a flat model with only matter and inhomogeneities ($\Omega_m=1$) is ruled out with great statistical confidence. {\bf{b)}} Likelihood
of $\alpha_E$. The smoothness parameter is restricted to the
interval $0.72 \leq \alpha_E \leq 1.94$ (1$\sigma$). {\bf{c)}} Likelihood of $\Omega_m$. We see that the density parameter $\Omega_m$ is restricted to the interval
$0.21 \leq \Omega_m \leq 0.29$ (1$\sigma$).} \label{figstat}
\end{figure*}
\section{The Dyer-Roeder Distance}
The differential equation driving the light propagation in curved spacetimes is the Sachs optical equation
\begin{eqnarray}\label{sachs}
{\sqrt{A}}'' +\frac{1}{2}R_{\mu \nu}k^{\mu}k^{\nu} \sqrt{A}=0,
\end{eqnarray}
where a prime denotes differentiation with respect to the affine parameter
$\lambda$, $A$ is the cross-sectional area of the light beam,
$R_{\mu\nu}$ the Ricci tensor, $k^{\mu}$ the photon four-momentum ($k^\mu k_{\mu} =0$), and the shear was neglected \cite{Sachs61}.
Five steps are needed to achieve the luminosity distance in the
Dyer-Roeder approach:
\begin{itemize}
\item the assumption that the angular diameter
distance $d_A \propto \sqrt{A}$,
\item the relation between the Ricci
tensor and the energy-momentum tensor $T_{\mu\nu}$ through Einstein's field
equations
\begin{equation}
R_{\mu\nu}-\frac{1}{2} R g_{\mu\nu} = 8\pi G T_{\mu\nu},
\end{equation}
where in our units $c=1$, $R$ is the scalar curvature, $g_{\mu\nu}$ is the metric
describing a FRW geometry, $G$ is Newton's constant and $R_{\mu\nu} k^\mu k^\nu =
8\pi G T_{\mu\nu} k^\mu k^\nu$.
\item the relation between the affine parameter $\lambda$
and the redshift $z$
\be
\frac{dz}{d\lambda}=(1+z)^2 \frac{H(z)}{H_0},
\ee
where $H(z)$ is the Hubble parameter, whose present day value, $H_0$, is the Hubble constant,
\item the {\it ansatz} that $\rho_m$ is replaced by $\alpha
\rho_m$, and, finally,
\item the validity of the duality relation between the
angular diameter and luminosity distances
\cite{ETHER33,Holanda10,Ellis2012}
\be
d_L(z)=(1+z)^2 d_A(z).
\ee
\end{itemize}
For a general XCDM model, where the dark energy component is described by a perfect fluid with equation of state $p_x=w \rho_X$ ($w$ constant), the Dyer-Roeder distance
$(d_L=H_0^{-1} D_L)$ can be written as:
\begin{eqnarray}
\frac{3}{2} \left[ \alpha_{E}(z)\Omega_m (1+z)^3 + \Omega_X (1+w)(1+z)^{3(1+w)} \right] D_L(z) \nonumber
\\ + (1+z)^2 E(z) \frac{d}{dz} \left[ (1+z)^2 E(z) \frac{d}{dz} \frac{D_L(z)}{(1+z)^2} \right] = 0,
\label{angdiamalpha}
\end{eqnarray}
where $\alpha_{E}(z)$ denotes the extended Dyer-Roeder parameter, $\Omega_X$ and $w$ are the density and equation of state parameters of dark energy, while the dimensionless Hubble
parameter, $E(z)= H/H_0$, reads:
\begin{equation}
E(z)= \sqrt{\Omega_m (1+z)^3 + \Omega_X (1+z)^{3(1+w)} + \Omega_k(1+z)^2},
\end{equation}
where $\Omega_k=(1-\Omega_m - \Omega_X)$ and the limiting case ($w =-1, \, \Omega_X = \Omega_{\Lambda}$) of all the above expressions describes an arbitrary $\Lambda$CDM model. The above Eq.(\ref{angdiamalpha}) must be solved
with two initial conditions, namely: $D_L\left(z=0\right) =0$ and $\frac{dD_L}{dz}|_{z=0}=1$. As in the original DR approach, from now on it will be assumed that $\alpha_E$ is a constant parameter (see, however, \cite{Linder88,SL07}).
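For concreteness, Eq. (\ref{angdiamalpha}) with $w=-1$ (flat $\Lambda$CDM) can be integrated numerically by rewriting it as a first-order system for the angular diameter distance $D_A=D_L/(1+z)^2$, the initial conditions becoming $D_A(0)=0$ and $D_A'(0)=1$. The sketch below (illustrative, not the analysis code of the paper) uses a hand-rolled RK4 integrator and checks that $\alpha_E=1$ recovers the filled-beam FRW distance $D_L=(1+z)\int_0^z dz'/E(z')$:

```python
import math

Om, OL = 0.25, 0.75                      # flat LCDM with the paper's best-fit Omega_m

def E(z):
    return math.sqrt(Om * (1 + z)**3 + OL)

def dyer_roeder_DL(z, alpha, steps=4000):
    """Dimensionless luminosity distance H0*d_L from Eq. (6) with w = -1 (RK4)."""
    def rhs(x, DA, dDA):
        Ex = E(x)
        F = (1 + x)**2 * Ex                                 # (1+z)^2 E(z)
        Fp = 2 * (1 + x) * Ex + 1.5 * Om * (1 + x)**4 / Ex  # dF/dz
        Q = 1.5 * alpha * Om * (1 + x)**3                   # Ricci focusing term
        return dDA, -(Fp / F) * dDA - Q * DA / ((1 + x)**2 * Ex**2)
    h, x, DA, dDA = z / steps, 0.0, 0.0, 1.0                # D_A(0)=0, D_A'(0)=1
    for _ in range(steps):
        k1 = rhs(x, DA, dDA)
        k2 = rhs(x + h / 2, DA + h / 2 * k1[0], dDA + h / 2 * k1[1])
        k3 = rhs(x + h / 2, DA + h / 2 * k2[0], dDA + h / 2 * k2[1])
        k4 = rhs(x + h, DA + h * k3[0], dDA + h * k3[1])
        DA += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dDA += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return (1 + z)**2 * DA                                  # D_L = (1+z)^2 D_A

# Sanity check: alpha = 1 must recover the filled-beam FRW result.
z, N = 1.0, 4000
chi = sum(z / N / E((i + 0.5) * z / N) for i in range(N))   # comoving distance
assert abs(dyer_roeder_DL(z, 1.0) - (1 + z) * chi) < 1e-5
print(round(dyer_roeder_DL(z, 0.7), 4), round(dyer_roeder_DL(z, 1.26), 4))
```

As expected, a more clumped beam ($\alpha_E<1$) yields a larger distance at fixed redshift, while $\alpha_E>1$ yields a smaller one.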
\section{Determining $\alpha_E$ from Supernova Data}
In order to show the physical interest of the approach proposed here we have performed a statistical analysis involving 557 SNe Ia from the Union2
compilation
data \cite{Union2}. Following standard lines, we have applied the maximum likelihood estimator [we refer the reader to Ref. \cite{Union2,BSL2012}
for details on statistical analysis involving Supernovae data].
In Fig. \ref{figstat}(a) we display the results obtained by assuming a flat $\Lambda$CDM model.
The contours correspond to 68.3\% (1$\sigma$) and 95.4\% (2$\sigma$) confidence levels.
The best fits are $\Omega_m=0.25$ and $\alpha_E=1.26$. As we can see from Figs. \ref{figstat}(b) and \ref{figstat}(c), the matter density parameter is well constrained, being restricted to the interval $0.21 \leq \Omega_m \leq 0.29$ (1$\sigma$),
while the smoothness parameter is in the interval $0.72 \leq \alpha_E \leq 1.94$ (1$\sigma$). Although $\alpha_E$ is poorly constrained, we see that the probability peaks at $\alpha_E>1$, and, therefore, denser than average regions in the line of
sight are fully compatible with the data. It is interesting to compare the bounds on $\Omega_m$ with our previous analysis under the restriction $\alpha \leq 1.0$ \cite{BSL2012}, where the interval $0.24 \leq \Omega_m \leq 0.35$ (2$\sigma$)
was obtained. As should be expected, by dropping the restriction $\alpha \leq 1.0$ smaller values of $\Omega_m$ are allowed
by the data.
\section{Supernovae-CMB tension and $\alpha_E$}
The tension between low and high redshift data has been reported by many authors (see, for instance, \cite{sss2009}). A numerical weak lensing
approach to solve this problem was recently discussed by Amendola et al. \cite{quartin} based on a meatball model. Can such a tension be alleviated
by our extended DR approach?
In order to answer that, let us consider an arbitrary $\Lambda$CDM model and plot the bounds on the ($\Omega_m,\Omega_\Lambda$) plane by fixing
three different values of $\alpha_E$. By selecting $\alpha_E=0.7$, 1.0 and 1.3 we may study what happens with the ($\Omega_m,\Omega_\Lambda$) contours when higher values are considered.
In Fig. \ref{figtension}(a) we show the contours obtained for the chosen values of $\alpha_E$. Note that when $\alpha_E$ grows from $0.7$ to $1.3$ the best fit moves by around $1\sigma$ towards
lower values of the pair ($\Omega_m,\Omega_\Lambda$) thereby becoming more compatible with the cosmic concordance flat $\Lambda$CDM model. This is a remarkable result since it improves the agreement with independent constraints
coming from baryon acoustic oscillations (BAO) and the angular power spectrum of the cosmic microwave background (CMB) while, more importantly,
maintaining the same reduced $\chi^2_{red}$.
\begin{table}[htbp]
\caption{Best fits for $\Omega_m$ and $\Omega_\Lambda$.}
\label{tab1}
\begin{center}
\begin{tabular}{@{}cccc@{}}
\hline $\alpha_E$ \hspace{0.4cm}& $\Omega_m$ \hspace{0.2cm} & $\Omega_\Lambda$ \hspace{0.4cm}&
$\chi^2_{red}$
\\ \hline\hline
0.7 \hspace{0.4cm}& 0.39 \hspace{0.2cm} & 0.83 \hspace{0.4cm}& 0.978 \\
1.0 \hspace{0.4cm}& 0.30 \hspace{0.2cm} & 0.78 \hspace{0.4cm}& 0.977 \\
1.3 \hspace{0.4cm}& 0.24 \hspace{0.2cm} & 0.74 \hspace{0.4cm}& 0.977 \\
\hline
\end{tabular}
\end{center}
\end{table}
In Table \ref{tab1}, the basic results are summarized. Note that the greatest value of $\alpha_E$ yields the minimum
reduced $\chi^2_{red} = \chi^2/\nu$ ($\nu$ is the number of degrees of freedom).
\begin{figure*}
\centerline{\epsfig{figure=contoursalphawm.eps,width=2.8truein,height=2.8truein,angle=-90}
\epsfig{figure=contourswmw.eps,width=2.8truein,height=2.8truein,angle=-90}
\hskip
0.1in} \caption{(color online) {\bf{a)}} The influence of the smoothness parameter on the $(\Omega_m,\Omega_\Lambda)$ plane. The contours for
three values of the
smoothness parameter $\alpha_E$ using 557 SNe Ia from the Union2 compilation data \cite{Union2} correspond to 1, 2 and $3\sigma$. Greater values
of $\alpha_E$ provide results more compatible with a flat model. {\bf{b)}} Contours for
the $(\Omega_m,w)$ plane in a flat XCDM model. The same trend is observed, greater values of $\alpha_E$ imply greater values of $w$ thereby
alleviating the tension among the low and high redshift data.}
\label{figtension}
\end{figure*}
In Fig. \ref{figtension}(b), we display the statistical results for a flat XCDM model and the same values for $\alpha_E$ adopted in the previous $\Lambda$CDM analysis. Again, we see that for higher values of $\alpha_E$, the contours
are displaced towards regions with higher values for $w$ and smaller values for $\Omega_m$, again helping to reduce the tension between the low and high redshift data.
In Table II, we summarize the best fits for $\Omega_m$ and $w$ along with their respective
minimum reduced $\chi^2_{red}$.
\begin{table}[htbp]
\caption{Best fits for $\Omega_m$ and $w$.}
\label{tab2}
\begin{center}
\begin{tabular}{@{}cccc@{}}
\hline $\alpha_E$ \hspace{0.4cm}& $\Omega_m$ \hspace{0.2cm} & $w$ \hspace{0.4cm}&
$\chi^2_{red}$
\\ \hline\hline
0.7 \hspace{0.4cm}& 0.35 \hspace{0.2cm} & -1.18 \hspace{0.4cm}& 0.978 \\
1.0 \hspace{0.4cm}& 0.29 \hspace{0.2cm} & -1.06 \hspace{0.4cm}& 0.978 \\
1.3 \hspace{0.4cm}& 0.23 \hspace{0.2cm} & -0.96 \hspace{0.4cm}& 0.977 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Why is $\alpha_E$ bigger than unity?}
Here we propose a simple toy model based on the existence of cosmic voids in order to explain why $\alpha_E$ can be bigger than unity. Recent studies have pointed out that cosmic voids not only represent a key constituent of the cosmic mass distribution, but, potentially, may become one of the cleanest probes to constrain cosmological parameters \cite{Cvoids}. The idea is to consider that very large voids are relatively rare entities, i.e.
their formation suffers from the same kind of size (mass) segregation `felt' by the largest galaxies and clusters. By assuming that the three basic entities filling the observed Universe are: (i) homogeneously distributed matter ($\rho_h$), (ii) the clustered component ($\rho_{cl}$) and (iii) voids ($\rho_{vd}$) of small and moderate sizes, we define the extended DR parameter (see Eq.~(1)):
\begin{equation}\label{voids}
\alpha_E = \frac {\rho_{h}}{\rho_{h} + \rho_{cl} + \rho_{vd}}.
\end{equation}
The important task now is to quantify the contribution of voids representing the local underdensities in the Universe. The presence of a void means that its matter was somehow redistributed to the clustered and the homogeneous components. The gravitational effect of a void in an initially homogeneous distribution is equivalent to superimposing a negative density (for small densities the nonrelativistic superposition principle is approximately valid). For simplicity, it will be assumed here that the overall contribution of the void component can be approximated by the linear expression, $\rho_{vd} = -\delta(\rho_{h} + \rho_{cl})$, where $\delta$ is a positive number smaller than unity. Therefore, $\alpha_E$ given by Eq. (\ref{voids}) can be rewritten as:
\begin{equation}\label{voids1}
\alpha_E = \frac {\rho_{h}}{(\rho_{h} + \rho_{cl}) (1 - \delta)} \equiv \frac{\alpha}{1 - \delta},
\end{equation}
which clearly satisfies the inequality $\alpha_E \geq \alpha$, where $\alpha$ is the standard DR parameter. In particular, when the clustered component
does not contribute we find $\alpha_E = \frac {1}{1 - \delta} \geq 1$. The previous analyses using supernovae data imply that we have effectively constrained the extended parameter, $\alpha_E$. How can we roughly estimate the void contribution from this crude model? By applying the standard DR approach to the Union2 sample, the best fit is $\alpha=1$, and combining with the result for a flat $\Lambda$CDM model (section III), one may check that the void contribution has a best fit of $\delta \sim 0.2$. It should be important to search for a possible connection between the present approach and more sophisticated methods from weak lensing.
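Inverting Eq. (\ref{voids1}) with the standard-DR best fit $\alpha=1$ and the extended best fit $\alpha_E=1.26$ immediately gives the quoted estimate (an illustrative one-liner):

```python
# Toy-model estimate of the void contribution: alpha_E = alpha / (1 - delta).
alpha, alpha_E = 1.0, 1.26        # best fits from the standard and extended analyses
delta = 1.0 - alpha / alpha_E
print(round(delta, 2))            # prints 0.21, i.e. delta ~ 0.2 as quoted above
```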
\section{Conclusions}
In this Letter we have discussed the role played by local inhomogeneities on the light propagation based on an extended Dyer-Roeder approach.
In the new interpretation light can travel in regions denser than average, a possibility phenomenologically described by a smoothness parameter $\alpha_E > 1$.
In order to test such a hypothesis we have performed a statistical analysis in a flat $\Lambda$CDM model and the best fit achieved was $\alpha_E=1.26$ and $\Omega_m=0.25$, the parameters being restricted to the intervals $0.72 \leq \alpha_E \leq 1.94$ and $0.21 \leq \Omega_m \leq 0.29$ within the 68.3\% confidence level. Although $\alpha_E$ is poorly constrained, the results are fully compatible with the hypothesis of light traveling in denser than average regions. We have also analyzed how different values for the smoothness parameter affect the bounds over $(\Omega_m,\Omega_\Lambda)$ in an arbitrary $\Lambda$CDM model. Interestingly, $\alpha_E>1$ improves the cosmic concordance model since it provides a better agreement between low and high redshift data (Supernovae, CMB and BAO). The same happened when a flat XCDM model was considered with the assumption that $\alpha_E>1$.
Such results suggest that the hypothesis of light traveling in regions denser than the cosmic average seems to be quite realistic. A toy model justifying why this may occur, with values of $\alpha_E$ greater than unity, was also discussed by taking into account the possible influence of cosmic voids on the Dyer-Roeder approach. The simplicity of the model and the obtained results reinforce the interest in the influence of local inhomogeneities and may pave the way for a more fundamental description.
\begin{acknowledgements}
JASL is partially supported by CNPq and
FAPESP while VCB and RCS are supported by CNPq and INCT-Astrof\'isica, respectively.
\end{acknowledgements}
\section{Introduction}
Pinterest’s mission is to bring everyone the inspiration to create a life they love. Users browse Pinterest to get inspired for their next home decoration project or to stay up to date with the latest fashion and beauty trends. Common feedback we hear from our users is that once they discover a product that matches their taste, they want to be able to purchase it as seamlessly as possible. In order to build a delightful shopping experience, we need our recommendation systems to evolve beyond image-based recommendations by leveraging the multi-faceted information available for products in our shopping catalog. Unlike pins - the main type of content on Pinterest, products consist of several images displaying the product from different angles or in different contexts and have high quality textual metadata provided by merchants including title, description, colors, and patterns in which the product is available for sale (see Figure \ref{fig:product_item} for an example). Our shopping recommendation systems also need to optimize for new types of outcomes like purchases and add-to-cart actions in addition to typical engagement metrics like clicks or saves.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{product_item.png}
\caption{Products on Pinterest consist of several images and rich textual metadata.}
\label{fig:product_item}
\end{figure}
This paper introduces Pinterest’s learned embedding representation for products named ItemSage. Embeddings are a powerful tool for building recommendation systems at scale for e-commerce \cite{amazonsearch,amazonsearch2,jd} and social media \citep{pinsage,visual,pinnersage,youtube,facebooksearch,twitter} applications. From our experience, one of the key reasons to focus on building embedding representations lies in their versatility: we have successfully experimented with using ItemSage embeddings (1) for generating candidates to upstream ranking systems via approximate nearest neighbor search, (2) as features in the ranking models responsible for determining the final ordering of the products shown to users and (3) as signals in classification models aimed at inferring missing information from the shopping catalog (e.g. the category or the gender affinity for a specific product).
With ItemSage, we made a few conscious design decisions that distinguish it from other approaches in several key aspects:
\textbf{Multi-modal features.} Earlier approaches typically focus on building embeddings from content in a single modality, e.g. text \cite{pintext,amazonsearch,detext} or images \cite{visual,visualtransformer}. Product information spans both modalities. Since Pinterest is a visually dominant platform, it is important to capture the nuanced information available in a product’s images to make sure shopping recommendations feel natural with the rest of the product (e.g. users tend to prefer fashion products shown in lifestyle photos over images of the products on a white background). At the same time, product images may contain other products (e.g. a sofa might be shown in a living room with a coffee table and a rug) so textual matches are crucial for providing highly relevant recommendations. We introduce a transformer-based architecture capable of combining features from these different modalities which is different from earlier work \cite{pinsage,youtube} that extends to multi-modal features.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{itemsage_surfaces.pdf}
\caption{Screenshots of ItemSage being used for product recommendations on Home, Closeup and Search surfaces.}
\label{fig:surfaces}
\end{figure}
\textbf{Multi-modal vertical recommendations.} Pinterest has 3 main surfaces (Figure \ref{fig:surfaces}) that provide personalized recommendations: (1) in the \textbf{Home} surface, users are provided with recommendations based on their past activity, (2) in the \textbf{Closeup} surface, we provide similar recommendations to a pin the user is currently viewing, while (3) in the \textbf{Search} surface, we provide recommendations in response to a query string that the user has typed. Note that in each surface, the query comes from a different modality: (1) in Home, the query is essentially a \textbf{sequence of pins}, (2) for Closeup, the query is a \textbf{single pin}, (3) while in Search, the query is a \textbf{text string}. In contrast with other works that typically target a single vertical application (e.g. product search \cite{amazonsearch, jd}), ItemSage can provide relevant candidates via approximate neighbor search \cite{hsnw} for all these surfaces and, therefore, in response to queries formulated in each of these modalities. We achieve this by training ItemSage embeddings to be compatible with the learned representations for pins \cite{pinsage} and search queries. Recommendations based on user activities are a more general case of pin-based recommendations where the activity history is first partitioned into clusters and then a few representative pins are sampled from different clusters to generate pin to product recommendations \cite{pinnersage}.
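The retrieval pattern described above can be sketched as follows (an illustrative toy sketch: the dimensions, random embeddings and function names are ours, not from the production system, and exact cosine search stands in for the HNSW index \cite{hsnw}):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8                                        # toy embedding dimension
catalog = rng.normal(size=(100, dim))          # ItemSage-style product embeddings
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

def retrieve(query, k=3):
    """Top-k products by cosine similarity (an ANN index in production)."""
    q = query / np.linalg.norm(query)
    return np.argsort(catalog @ q)[::-1][:k]

pin_query = rng.normal(size=dim)               # Closeup: a single pin embedding
text_query = rng.normal(size=dim)              # Search: a query-string embedding
history = rng.normal(size=(5, dim))            # Home: representative pins sampled
                                               # from clusters of the user's activity
home_results = [retrieve(p) for p in history]  # one pin-to-product lookup each
print(retrieve(pin_query), retrieve(text_query), len(home_results))
```

Because the pin, search-query and product embeddings are trained to be compatible, all three surfaces reduce to nearest-neighbor lookups in the same space.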
\textbf{Multi-task Learning.} To strike the right balance between inspiration and conversion optimization, Pinterest shopping recommendation systems optimize for several objectives including purchases, add-to-cart actions, saves and clicks. Traditionally, recommendation systems tackle multi-objective optimization in the ranking phase by building models capable of computing a score for every engagement type under consideration conditioned on the query context and the ranked candidate \cite{youtubemtl,ple}. In our work, we learn embeddings that \textbf{additionally} optimize the candidate generation phase for all engagement types under consideration, leading to an overall more efficient recommendation funnel. We show that in cases where the labels are sparse, multi-task learning leads to improved results over models specialized only on sparse objectives. For objectives where labeled data is plentiful, we show that we can optimize our embeddings for new objectives with little or no performance penalty on existing objectives.
\section{Related Work}
The main concepts that underpin our work building product embeddings at Pinterest are multi-modal learning and multi-task learning. ItemSage aggregates multi-modal information from images and text. The embeddings are learned in a multi-task regime that supports cross-modal candidate retrieval and joint optimization for several engagement types. In this section, we briefly review these two concepts and related work. Another goal of our work is to create embeddings that are \textbf{compatible} with the learned representations of other entities in the Pinterest ecosystem, namely pins and search queries. We briefly review our approach for generating these “target” embeddings.
\subsection{Multi-Modal Representation Learning}
Multi-modal representation learning aims to aggregate information from different modalities into a common subspace \cite{multimodal}. It has been extensively studied in areas like video classification \cite{videoclassification1,videoclassification2}, speech recognition \cite{speechrecognition1,speechrecognition2}, and visual question answering \cite{visualqa,tan2019lxmert}, where information is often available in audio, text, and visual formats.
The fusion of different modalities often follows a projection and concatenation pattern. For example, \cite{liu2016multimodal} first projects the image, audio, and text features into the same space using autoencoders and then concatenates the hidden features to produce the final embedding with a further projection. Inspired by the success of Transformer models like BERT \cite{bert}, more works adopt Transformer models for modality fusion \cite{tan2019lxmert,zhou2020unified}. Among these, the single-stream Transformer model \cite{chen2019uniter,li2019visualbert}, which also uses the same projection and concatenation idea, is the most suitable for our use case given our modalities.
We should note that although multi-modal representation learning has been studied in various areas, few works \cite{youtube,pinsage}
have successfully applied it to large-scale recommendation systems. To our knowledge, this work is the first one that uses the Transformer architecture to aggregate image and text modalities to learn product representations for production-scale recommendation systems.
\subsection{Multi-Task Representation Learning}
Multi-task learning is designed to improve the model performance on individual tasks by sharing model parameters across related tasks \cite{mtloverview}. Typical multi-task learning models are deployed in the ranking phase of recommendation systems \cite{youtubemtl,ple}, where a model has several outputs, one for each task. Similar model architectures are used in representation learning \cite{liu2015representation}, where multiple task-specific embeddings are learned by a model. However, from a production point of view, it is most convenient and economical to use a single version of the embeddings for all tasks. Therefore, we take the same approach as \cite{visual,visualtransformer} and utilize multi-task learning to optimize for a single embedding. While those models are optimized for multiple classification tasks, ours is trained on retrieval tasks with several engagement types for multi-modal vertical recommendation systems. This kind of multi-modal multi-task learning helps us solve the particular challenge of making our learned product embeddings compatible with the embeddings of both query images and search queries.
\subsection{Image and Search Query Representations at Pinterest}\label{sec:xsage}
PinSage \cite{pinsage} is a highly-scalable implementation of the GraphSage GCN algorithm \cite{graphsage} that is deployed in production at Pinterest to produce the image embeddings of billions of pins. It aggregates the visual and textual features along with the graph information of pins to produce a rich and compact representation for various use cases including retrieval, ranking, and classification.
SearchSage\footnote{\url{https://medium.com/pinterest-engineering/searchsage-learning-search-query-representations-at-pinterest-654f2bb887fc}} is our search query embedding model trained by fine-tuning DistilBERT \cite{distilbert}. It is trained on search query and engaged pin pairs from search logs. The loss optimizes the cosine similarity between the embedding of the global \texttt{[CLS]} token (which can be seen as an aggregation of the input query) and the output of an MLP that summarizes the pin into an embedding based on several text features and its PinSage embedding. Because features other than PinSage contribute to the candidate pin representation, and because of the MLP transformation, PinSage and SearchSage embeddings are not directly compatible with one another. We will refer to this later when we discuss baselines for ItemSage embeddings.
When training ItemSage, the PinSage model is used to provide the embeddings of both the feature images of the product and the query images, while the SearchSage model embeds the search query string. Since the PinSage and SearchSage models both have multiple applications in production, they are frozen at ItemSage training time due to considerations of development velocity and ease of adoption.
\section{Method}
In this section, we introduce our approach to building product embeddings at Pinterest. We first formally define the notion of compatibility across learned representations for different entities. Then, we introduce our model starting with the features used as input by the model and the process for collecting the training labels for the different tasks. We provide a detailed description of the model architecture, loss function, the multi-task learning regime and the inference and serving setup.
\subsection{Embedding Compatibility}
One of the requirements for ItemSage is to create product embeddings that are compatible with PinSage embeddings for images and SearchSage embeddings for search queries. In this case, compatibility means that the distance between a query (image or text string) embedding and a product embedding should be an informative score indicating how relevant the product is as a result for the query. We use cosine similarity as a measure of the embedding distance due to its simplicity and efficiency. The compatibility requirement originates from our desire to support candidate generation via approximate nearest neighbor (ANN) search techniques like Hierarchical Navigable Small Worlds (HNSW) \cite{hsnw}. We cannot afford to apply expensive transformations to achieve compatibility as they would significantly increase retrieval latency. On the other hand, our experience indicates that for ranking and classification applications, compatibility plays a less important role as most deep learning models operating on pretrained embeddings can learn MLP transformations that are sufficient to map embeddings into a shared latent space.
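To make the compatibility requirement concrete, the sketch below (illustrative only; all embeddings are random stand-ins) shows why $L_2$-normalization matters: cosine similarity reduces to a dot product, so retrieval becomes a maximum-inner-product search that an ANN index such as HNSW can serve. A brute-force scan stands in for the index here.

```python
import numpy as np

def normalize(v):
    # L2-normalize rows so that dot product equals cosine similarity.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def retrieve(query_emb, product_embs, k=10):
    # Brute-force scan as a stand-in for an ANN index (e.g. HNSW):
    # return the indices of the k most similar products.
    scores = product_embs @ query_emb
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
products = normalize(rng.normal(size=(1000, 256)))  # catalog embeddings
query = normalize(rng.normal(size=256))             # image or text query
top = retrieve(query, products, k=10)
```

In production the exhaustive scan is replaced by an HNSW index over the same normalized vectors; the scoring function is unchanged.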
\subsection{Features and Labels}
\begin{table}
\caption{Training data volume.}
\label{tab:data-volume}
\begin{tabular}{c c c}
\toprule
Surface & Engagement & No. Examples \\
\midrule
Closeup & Clicks + Saves & 93.3M \\
Closeup & Checkouts + Add-to-Cart & 3.7M \\
Search & Clicks + Saves & 99.4M \\
Search & Checkouts + Add-to-Cart & 3.5M \\
\bottomrule
\end{tabular}
\end{table}
A product consists of a list of images depicting the product from different angles or in different contexts and a list of text features. We truncate the list of images to at most 20 (99.7\% of products in our catalog have less than or equal to 20 images). Each image is represented by its pretrained PinSage embedding \cite{pinsage} and for products with fewer than 20 images, each empty slot is represented by a zero embedding. We use 12 text features as input to our model: title, description, merchant domain, product links, google product category\footnote{The google product category represents the node from a standard taxonomy that merchants may tag their products with. The taxonomy can be found at \url{https://www.google.com/basepages/producttype/taxonomy.en-US.txt}}, product types\footnote{The product types are html breadcrumbs scraped from the merchant product page, e.g. \texttt{Home Decor > New Arrivals}.}, brand, colors, materials, patterns, size, and size type. In cases where a product may have several values for a particular feature (e.g. links, colors, etc.) these strings are concatenated into one string. Standard word-level tokenization and lowercasing are applied to all text features. Each processed string is represented as a bag of word unigrams, bigrams and character trigrams \cite{dssm, amazonsearch}. The tokens are mapped to numerical IDs using a vocabulary $\mathcal{V}$ of the most frequent 200,000 word unigrams, 1,000,000 word bigrams and 64,000 character trigrams and out-of-vocabulary tokens are discarded.
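The token scheme above can be illustrated with a small sketch (the tiny vocabulary and exact tokenizer details here are simplified stand-ins for the production pipeline):

```python
def char_trigrams(word):
    # Character trigrams over the word padded with boundary markers.
    padded = "#" + word + "#"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def tokenize(text):
    # Lowercase, word-level tokenization, then emit word unigrams,
    # word bigrams and character trigrams.
    words = text.lower().split()
    bigrams = [" ".join(p) for p in zip(words, words[1:])]
    trigrams = [t for w in words for t in char_trigrams(w)]
    return words + bigrams + trigrams

def to_ids(tokens, vocab):
    # Map tokens to numerical IDs; out-of-vocabulary tokens are discarded.
    return [vocab[t] for t in tokens if t in vocab]

vocab = {"red": 0, "sofa": 1, "red sofa": 2, "#re": 3, "ofa": 4}
ids = to_ids(tokenize("Red Sofa"), vocab)
```

For example, `tokenize("Red Sofa")` yields the unigrams `red`, `sofa`, the bigram `red sofa`, and trigrams such as `#re` and `ofa`; tokens missing from the vocabulary are dropped.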
We construct our training dataset by collecting labels from the Closeup and Search engagement logs. Each positive example is a pair of query and engaged product where the query represents either an image for examples mined from Closeup logs or a text string for search logs. The dataset is deduplicated such that only one instance of a query and engaged product pair is kept. We select 4 engagement types to train our models: clicks and saves are the main actions that users can take on any Pinterest content, while add-to-cart actions and checkouts are actions that express shopping intent. The labels for all tasks are collected from the same date range. The number of positive labels is summarized in Table \ref{tab:data-volume}. In addition to positive labels, our loss uses random negative labels which we sample randomly from the shopping catalog. The negative labels are joined with the features offline and streamed into the model trainer through a separate data loader. The trainer alternates between consuming a batch of positive labels and a batch of negative labels which are then concatenated into a single training batch.
\subsection{Model Architecture}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{itemsage_model_architecture_v2.pdf}
\caption{ItemSage model architecture.}
\label{fig:itemsage}
\end{figure}
We use a transformer encoder as the basic building block for learning product embeddings. It takes as input a sequence of 32 embeddings representing the image and text features of each product. The image embeddings are generated by the pretrained PinSage model \cite{pinsage}, while the text embeddings are learned jointly with the encoder. In order to deal with the large vocabulary size, we apply the hash embedding trick from \cite{svenstrup2017hash} to learn a compact embedding table.
The hash embedder consists of an embedding table $E$ of size $100,000 \times 256$ and an importance weight table $W$ of size $|\mathcal{V}| \times 2$. We use two hashing functions $h_1$, $h_2$ to map each token ID $i = 1, \ldots, |\mathcal{V}|$ into two slots in the embedding table $h_1(i)$, $h_2(i)$. The embedding of token with ID $i$ is then the weighted interpolation of the two embeddings: $W_{i1} E_{h_1(i)} + W_{i2} E_{h_2(i)}$. The final embedding of a feature string is the summation of all its token embeddings.
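A scaled-down sketch of this hash embedding scheme follows; the table sizes and the toy hash functions are illustrative stand-ins, not the production choices:

```python
import numpy as np

NUM_BUCKETS, DIM, VOCAB_SIZE = 1000, 64, 5000  # scaled-down sizes
rng = np.random.default_rng(0)
E = rng.normal(size=(NUM_BUCKETS, DIM))        # shared embedding table
W = rng.normal(size=(VOCAB_SIZE, 2))           # per-token importance weights

def h1(i):
    # Toy hash functions; production code would use proper hashing.
    return (i * 2654435761) % NUM_BUCKETS

def h2(i):
    return (i * 40503 + 7) % NUM_BUCKETS

def token_embedding(i):
    # Weighted interpolation of the two bucket embeddings.
    return W[i, 0] * E[h1(i)] + W[i, 1] * E[h2(i)]

def feature_embedding(token_ids):
    # A feature string's embedding is the sum of its token embeddings.
    return sum(token_embedding(i) for i in token_ids)
```

The memory footprint is dominated by the bucket table and the two importance weights per vocabulary entry, which is far smaller than a full $|\mathcal{V}| \times 256$ table.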
As shown in Figure \ref{fig:itemsage}, we apply a linear transformation with output size 512 to each group of feature embeddings. This allows us to relax the requirement that all input features must have the same dimension. Similar to \cite{bert}, we use a global token \texttt{[CLS]} to aggregate the information from the input sequence. The transformed embeddings are concatenated together with the global token and then passed through a one-layer transformer block consisting of a self-attention module with 8 heads and a feed-forward module with one hidden layer. The output corresponding to the global token then goes through a two-layer MLP head to produce the 256-dimensional product embedding. The final ItemSage embedding is $L_2$-normalized for easier computation of cosine similarity with query embeddings when performing ANN search. We experiment with deeper transformer encoders in Section \ref{sec:arch-experiments}, but do not see an improvement in offline metrics.
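The forward pass can be sketched as follows. This is a simplified stand-in: random weights, an assumed text embedding width, a single attention head instead of 8, and a one-layer output head instead of the two-layer MLP.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512                                      # shared transformer width
imgs = rng.normal(size=(20, 256))            # 20 PinSage image embeddings
txts = rng.normal(size=(12, 128))            # 12 learned text embeddings

# Per-modality linear projections into the shared width D.
W_img = rng.normal(size=(256, D)) * 0.02
W_txt = rng.normal(size=(128, D)) * 0.02
cls = rng.normal(size=(1, D)) * 0.02         # learned global [CLS] token
x = np.concatenate([cls, imgs @ W_img, txts @ W_txt])  # (33, D)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Single-head self-attention (8 heads in the actual model).
Wq, Wk, Wv = [rng.normal(size=(D, D)) * 0.02 for _ in range(3)]
q, k, v = x @ Wq, x @ Wk, x @ Wv
attn = softmax(q @ k.T / np.sqrt(D)) @ v     # (33, D)

# Read out the [CLS] position and project to the 256-d product embedding.
W_out = rng.normal(size=(D, 256)) * 0.02
emb = attn[0] @ W_out
emb = emb / np.linalg.norm(emb)              # L2-normalize for ANN search
```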
\subsection{Loss} \label{sec:loss}
We frame the problem of learning product embeddings as an extreme classification problem where given a query entity, our goal is to predict the product from the catalog that the user will engage with next \cite{youtube}. Formally, let $\{x_i, y_i\}_{i=1}^{|\mathcal{B}|}$ be a training batch of query, engaged product pairs sampled from the engagement logs and $\mathcal{B} = \{y_i\}_{i=1}^{|\mathcal{B}|}$ be the set of engaged products in the batch. Let $\mathcal{C}$ denote the catalog containing all products. Given the pretrained embeddings $q_{x_i} \in \mathbb{R}^d$ for the queries $x_i$, our goal is to learn embeddings $p_{y_i} \in \mathbb{R}^d$ for the engaged products $y_i$ such that $p_{y_i}$ is more similar to $q_{x_i}$ than to all of the embeddings of other products from the catalog. This can be achieved by minimizing the softmax loss
\begin{equation}
L_{S} = -\frac{1}{|\mathcal{B}|} \sum_{i=1}^{|\mathcal{B}|} \log \frac{ e^{\langle q_{x_i}, p_{y_i} \rangle} }{ \sum_{y \in \mathcal{C}} e^{\langle q_{x_i}, p_y \rangle} },
\end{equation}
where $\langle \cdot, \cdot \rangle$ denotes the dot product function. In our case $\langle q_{x_i}, p_{y_i} \rangle$ is the same as the cosine similarity between the two embeddings since they are $L_2$-normalized.
The main challenge with computing the softmax loss $L_{S}$ is the expensive nature of the normalization step $\sum_{y \in \mathcal{C}} e^{\langle q_{x_i}, p_y \rangle}$ which involves a summation over the entire catalog. To make the loss computation tractable, a common technique is to approximate the normalization term by treating all the other positive examples from the same training batch as negatives and ignoring all the remaining products in the catalog. This approach is very efficient as no additional product embeddings need to be generated to compute the loss. However, naively replacing the whole catalog $\mathcal{C}$ with $\mathcal{B}$ introduces a sampling bias to the full softmax. We address this issue by applying the logQ correction \cite{corrected_softmax} that updates the logits $\langle q_{x_i}, p_{y} \rangle$ to be $\langle q_{x_i}, p_{y} \rangle - \log Q_p(y | x_i)$, where $Q_p(y | x_i)$ is the probability of $y$ being included as a positive example in the training batch. The loss becomes:
\begin{equation}\label{eq:sampled_softmax_pos}
L_{S_{pos}} = -\frac{1}{|\mathcal{B}|} \sum_{i=1}^{|\mathcal{B}|} \log \frac{e^{\langle q_{x_i}, p_{y_i} \rangle - \log Q_p(y_i | x_i)}}{\sum_{y \in \mathcal{B}} e^{\langle q_{x_i}, p_y \rangle - \log Q_p(y | x_i)}}.
\end{equation}
We estimate the probabilities $Q_p(y | x_i)$ in a streaming fashion using a count-min sketch that tracks the frequencies with which entities appear in the training data. The count-min sketch \cite{cms} is a probabilistic data structure that tracks the frequency of events in data streams using sub-linear memory.
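A minimal count-min sketch for this kind of streaming frequency estimation is shown below; the width/depth parameters are illustrative, and Python's built-in \texttt{hash} stands in for proper pairwise-independent hash functions.

```python
import numpy as np

class CountMinSketch:
    # depth hash rows of width counters; memory is O(width * depth),
    # independent of the number of distinct items tracked.
    def __init__(self, width=2048, depth=4, seed=0):
        self.counts = np.zeros((depth, width), dtype=np.int64)
        rng = np.random.default_rng(seed)
        self.salts = [int(s) for s in rng.integers(1, 2**31, size=depth)]
        self.width = width

    def _cols(self, item):
        return [hash((salt, item)) % self.width for salt in self.salts]

    def add(self, item):
        for row, col in enumerate(self._cols(item)):
            self.counts[row, col] += 1

    def estimate(self, item):
        # Each row can only overcount, so the row minimum is the estimate.
        return int(min(self.counts[row, col]
                       for row, col in enumerate(self._cols(item))))

cms = CountMinSketch()
for product in ["a", "a", "a", "b"]:
    cms.add(product)
# Q_p can then be estimated as cms.estimate(y) / total_examples_seen.
```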
One problem we have experienced with using in-batch positives as negatives is that unengaged products in the catalog will never appear as negative labels. This treatment unfairly penalizes popular products, as they are more likely to be selected as negatives. To counter this effect, we adopt a mixed negative sampling approach inspired by \cite{yang2020mixed}. For each training batch, we further select a set of random negatives $\mathcal{N}$ (where $|\mathcal{N}|=|\mathcal{B}|$) based on which we compute a second loss term:
\begin{equation}\label{eq:sampled_softmax_neg}
L_{S_{neg}} = -\frac{1}{|\mathcal{N}|} \sum_{i=1}^{|\mathcal{N}|} \log \frac{e^{\langle q_{x_i}, p_{y_i} \rangle - \log Q_n(y_i)}}{\sum_{y \in \mathcal{N}} e^{\langle q_{x_i}, p_y \rangle - \log Q_n(y)}},
\end{equation}
where $Q_n(y)$ represents the probability of random sampling product $y$. The loss term $L_{S_{neg}}$ helps reduce the negative contribution that popular products receive when used as negative labels. The main distinction between our approach and \cite{yang2020mixed} is that we optimize for $L_{S_{pos}} + L_{S_{neg}}$, while \cite{yang2020mixed} uses both $\mathcal{B}$ and $\mathcal{N}$ to calculate the normalization term and minimizes
\begin{equation}\label{eq:sampled_softmax_mixed}
\begin{split}
L_{S_{mixed}} &= -\frac{1}{|\mathcal{B}|} \sum_{i=1}^{|\mathcal{B}|} \log \frac{e^{\langle q_{x_i}, p_{y_i} \rangle - \log Q_p(y_i | x_i)}}{Z} \\
Z &= \sum_{y \in \mathcal{B}} e^{\langle q_{x_i}, p_y \rangle - \log Q_p(y | x_i)} + \sum_{y \in \mathcal{N}} e^{\langle q_{x_i}, p_y \rangle - \log Q_n(y)}.
\end{split}
\end{equation}
Section \ref{sec:loss-ablation} shows that we obtain better results by separating the loss terms of the two negative sampling approaches.
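A numpy sketch of the combined objective $L_{S_{pos}} + L_{S_{neg}}$ follows. Random embeddings and constant sampling probabilities stand in for real data; in training, $Q_p$ comes from the count-min sketch estimates and $Q_n$ from the random negative sampler.

```python
import numpy as np

def sampled_softmax(q, p_pos, candidates, log_q_pos, log_q_cand):
    # Numerator: corrected logit of the engaged product; denominator:
    # corrected logits of the candidate set (in-batch positives for
    # L_pos, random negatives for L_neg).
    pos_logit = np.sum(q * p_pos, axis=1) - log_q_pos          # (B,)
    cand_logits = q @ candidates.T - log_q_cand[None, :]       # (B, C)
    lse = np.log(np.exp(cand_logits).sum(axis=1))              # log-sum-exp
    return float(np.mean(lse - pos_logit))

def l2norm(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

rng = np.random.default_rng(0)
B, d = 8, 32
q = l2norm(rng.normal(size=(B, d)))      # query embeddings
pos = l2norm(rng.normal(size=(B, d)))    # engaged product embeddings
neg = l2norm(rng.normal(size=(B, d)))    # random negative embeddings
log_qp = np.full(B, np.log(1.0 / B))     # stand-in for log Q_p(y | x)
log_qn = np.full(B, np.log(1.0 / B))     # stand-in for log Q_n(y)

loss = (sampled_softmax(q, pos, pos, log_qp, log_qp)    # L_pos
        + sampled_softmax(q, pos, neg, log_qn, log_qn)) # L_neg
```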
\subsection{Multi-task Learning}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{itemsage_training_batch_v2.pdf}
\caption{The construction of a training batch. The white squares on the diagonal indicate a mask applied to prevent using an example's positive label also as a negative. Squares of different shades denote different queries and products.}
\label{fig:training_batch}
\end{figure}
We implement multi-task learning by combining positive labels from all the different tasks into the same training batch (Figure \ref{fig:training_batch}). This technique is effective even when the query entities come from different modalities, the only difference being that the query embedding needs to be inferred with a different pretrained model. In a training batch of size $|\mathcal{B}|$, we allocate $T_k$ positive examples for each task of interest $k \in \{1, \cdots, K\}$ such that $|\mathcal{B}| = \sum_{k=1}^K T_k$. Tuning the values $T_k$ is therefore an effective way to create trade-offs between different tasks. When introducing new tasks, we find that setting $T_k = |\mathcal{B}|/K$ can be a good starting point to achieve significant improvements on the new task without hurting the performance of other tasks. We believe the lack of negative impact on existing tasks can be attributed to the correlation between tasks. For example, to purchase a product users are likely to click on it first, thus adding the click task will not hurt the performance of the purchase task.
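The batch construction described above can be sketched as follows; the task names, allocations and dummy label streams are illustrative, not the production configuration.

```python
import itertools
import random

def make_batch(task_streams, allocations, seed=0):
    # task_streams: task name -> iterator over (query, product) positives.
    # allocations: task name -> T_k, with sum(T_k) equal to the batch size.
    batch = [(task, next(task_streams[task]))
             for task, t_k in allocations.items()
             for _ in range(t_k)]
    random.Random(seed).shuffle(batch)
    return batch

tasks = ["closeup_click", "closeup_checkout", "search_click", "search_checkout"]
streams = {task: itertools.count() for task in tasks}  # dummy label streams
allocations = {"closeup_click": 1024, "closeup_checkout": 341,
               "search_click": 1024, "search_checkout": 683}  # |B| = 3072
batch = make_batch(streams, allocations)
```

Trade-offs between tasks are then controlled purely by editing the allocation dictionary, without touching the model or loss.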
\subsection{Inference and Serving}
Figure \ref{fig:serving} illustrates how ItemSage embeddings are deployed to power Pinterest shopping recommendations. The embeddings are inferred in a batch workflow, which ensures that the model inference latency does not impact the end-to-end latency of the vertical recommendation systems. The inference workflow runs daily to update the embeddings based on the latest features and to extend the coverage to newly ingested products. Each new set of embeddings is used to create an HNSW index \cite{hsnw} for candidate generation and is also pushed to the online feature store for ranking applications. The HNSW index and the feature set are reused by all of the different vertical systems for Home, Closeup and Search recommendations. The Home and Closeup surfaces use precomputed PinSage embeddings fetched from the feature store as queries, while in Search, the query embeddings are inferred on the fly to support the long tail of previously unseen queries. The main thing to note is the simplicity of this design enabled by multi-task learning. By creating a single set of embeddings for all 3 vertical applications, we can use a single inference workflow and a single HNSW index to serve recommendations, thereby reducing infrastructure and maintenance costs threefold.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{itemsage_serving.pdf}
\caption{ItemSage inference and serving.}
\label{fig:serving}
\end{figure}
After ItemSage embeddings are published to the offline feature store, they can be used as features in classification models designed to infer missing information about the products in our catalog. One example with which we have seen early success is inferring whether a product (e.g. in the fashion vertical) is intended to be sold to male users, female users or whether it is unisex. Training a simple MLP on top of ItemSage embeddings yielded a 3.6\% improvement in top-1 accuracy over our previous baseline.\footnote{We do not provide a detailed presentation on using ItemSage for classification applications in this work as our approach does not control for several important factors including dataset, features and model architecture compared to our baseline. Nonetheless, it is encouraging to see that our product embeddings can deliver impact to other applications with little effort.}
\section{Experiments} \label{sec:experiments}
In this section, we provide an empirical analysis of ItemSage embeddings, focused on evaluating the design choices outlined in this paper. We first conduct an extensive evaluation of the embeddings on offline benchmarks, followed by sharing results obtained via A/B experiments on live traffic.
\subsection{Offline Evaluation} \label{sec:offline-eval}
\subsubsection{Evaluation Protocol} \label{sec:eval-protocol}
Our goal is to build embeddings that are effective throughout the recommendation stack starting from candidate generation. Therefore we use recall as the main metric to evaluate the quality of embeddings and potential trade-offs.
We set up eight offline benchmarks for model evaluation, including four engagement types (clicks, saves, add-to-cart actions and checkouts) for two surfaces with different query modalities (Closeup and Search). Each benchmark $\mathcal{P}$ consists of a set of 80,000 pairs of query and engaged products $(x_i, y_i)$ that are sampled from image-based recommendations or search results not included in the training data. We also randomly sampled a distractor set $\tilde{\mathcal{C}}$ of one million products from the shopping catalog $\mathcal{C}$ to measure the engagement-weighted recall at $k$, which we define as the weighted proportion of $(x_i, y_i)$ pairs for which the engaged product $y_i$ is ranked within the top $k$ among $\tilde{\mathcal{C}} \cup \{y_i\}$,
\begin{equation}
Recall@k = \frac{1}{\sum_{i} w_i} \sum_{i=1}^{|\mathcal{P}|} w_i \mathbf{1}\left\{ \left| \left\{\langle q_{x_i}, p_y \rangle \ge \langle q_{x_i}, p_{y_i} \rangle | y \in \tilde{\mathcal{C}}\right\}\right| \leq k \right\},
\end{equation}
where $w_i$ represents the engagement count associated with $(x_i, y_i)$. In the following experiments, we fix the value of $k$ to 10 since the results for other values of $k$ are similar.
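A direct numpy implementation of this metric, using small hand-crafted arrays for illustration:

```python
import numpy as np

def weighted_recall_at_k(q, p_pos, distractors, weights, k=10):
    # Engagement-weighted share of (query, engaged product) pairs whose
    # engaged product ranks within the top k among the distractors plus
    # itself, i.e. at most k distractors score at least as high.
    pos_scores = np.sum(q * p_pos, axis=1)           # (n,)
    dist_scores = q @ distractors.T                  # (n, m)
    n_at_least = (dist_scores >= pos_scores[:, None]).sum(axis=1)
    hits = (n_at_least <= k).astype(float)
    return float(np.sum(weights * hits) / np.sum(weights))

q = np.array([[1.0, 0.0], [0.0, 1.0]])
p_pos = np.array([[0.9, 0.1], [1.0, 0.0]])   # second pair is a poor match
distractors = np.array([[0.5, 0.5], [0.1, 0.9], [0.2, 0.8]])
weights = np.array([3.0, 1.0])               # engagement counts
recall = weighted_recall_at_k(q, p_pos, distractors, weights, k=1)
```

In this toy example the first pair (weight 3) is a hit and the second (weight 1) is a miss, so the weighted recall is 0.75.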
\subsubsection{Model Architecture} \label{sec:arch-experiments}
\begin{table*}[t]
\caption{Comparison of different model architectures with baselines.}
\label{tab:architectures}
\begin{tabular}{l|c|cccc|cccc}
\toprule
& Number of & \multicolumn{4}{c|}{Closeup} & \multicolumn{4}{c}{Search}\\
& Parameters & Clicks & Saves & Add-to-Cart & Checkouts & Clicks & Saves & Add-to-Cart & Checkouts\\
\midrule
Sum & - & 0.663 & 0.647 & 0.669 & 0.699 & - & - & - & - \\
Sum-MLP & - & - & - & - & - & 0.577 & 0.533 & 0.561 & 0.629 \\
MLP-Concat-MLP & 30.8M & 0.805 & 0.794 & 0.896 & \textbf{0.916} & 0.723 & 0.736 & 0.834 & 0.861 \\
ItemSage & 33.1M & \textbf{0.816} & 0.812 & \textbf{0.897} & \textbf{0.916} & 0.749 & 0.762 & \textbf{0.842} & \textbf{0.869} \\
2-Layer Transformer & 36.3M & 0.815 & 0.809 & 0.895 & 0.913 & 0.745 & 0.759 & 0.837 & 0.867 \\
3-Layer Transformer & 39.4M & 0.815 & 0.810 & 0.896 & 0.915 & 0.747 & 0.758 & 0.841 & \textbf{0.869} \\
4-Layer Transformer & 42.6M & \textbf{0.816} & \textbf{0.813} & \textbf{0.897} & 0.915 & \textbf{0.750} & \textbf{0.764} & 0.840 & \textbf{0.869} \\
\bottomrule
\end{tabular}
\end{table*}
In this section we compare the model architecture described above with several baselines and deeper versions of the ItemSage model with additional transformer encoder blocks. The first baseline simply applies sum-pooling and $L_2$ normalization to the PinSage embeddings of the images associated with each product (denoted as Sum). While this baseline is fairly simplistic, note that PinSage embeddings have been independently trained to effectively generate image (pin) candidates for non-shopping use cases at Pinterest. Additionally, other works \cite{amazonsearch} have reported simple pooling as being a baseline difficult to outperform for e-commerce search.
The Sum baseline cannot generate candidates for search results since the PinSage image model and the fine-tuned DistilBERT models produce embeddings that are not compatible with one another. We introduce a second baseline for search, Sum-MLP, which applies the pretrained MLP for image candidates from SearchSage (introduced in Section \ref{sec:xsage}) on every product image, followed by a sum-pooling and $L_2$ normalization to obtain product level embeddings.
The Sum and Sum-MLP baselines require little effort to generate. Consequently, they were adopted in production to support shopping recommendations while ItemSage embeddings were under development. We will refer to these baselines again in Section \ref{sec:online-experiments} when discussing results in A/B experiments.
The final baseline (denoted as MLP-Concat-MLP) is more competitive. It first maps each input feature into a latent space by applying a 3-layer MLP module with 256 hidden units. We learn separate MLPs for image and text features. The latent representations are concatenated into a single vector and passed through a second 3-layer MLP.
The results are presented in Table \ref{tab:architectures}. We observe that ItemSage strongly outperforms the Sum and Sum-MLP baselines. The transformer architecture yields improvements over the MLP-Concat-MLP baseline on all tasks; the most notable improvements can be seen in the search tasks for clicks and saves. We attribute these gains to the self-attention module in ItemSage. However, using deeper architectures does not significantly improve the model performance: the results using 2 or 3 layers are worse than the 1-layer baseline, while the model with 4 layers has mixed results. In all cases, the relative change in metrics is small and, considering the increased cost of deeper architectures, we chose to deploy the variant with a single transformer layer.
\subsubsection{Feature Ablation} \label{sec:feature-ablation}
In this section, we analyze the benefit of learning the ItemSage embeddings from multi-modal features. We compare our final model with two different models using the same transformer architecture, but limited to using features corresponding to a single modality: image or text (Table \ref{tab:ablation}, Row ``Feature''). The image-only model takes as input just the PinSage embeddings of the product's images. The text-only model is trained based on the text attributes from the shopping catalog (title, description, etc.). We observe that the model trained on features from both modalities has significantly better performance than both baselines on all 8 tasks. Also, the image only model has significantly stronger performance over the text-only model, which could be attributed to the additional information summarized into the image embeddings: (1) PinSage is a graph neural network (GNN) aggregating information from the Pinterest pin-board graph in addition to the image itself and (2) the learnings presented in this paper regarding multi-modal learning have also been applied within PinSage to textual metadata available with each image. As one might expect, the models trained on a single feature modality have stronger performance when the query comes from the same modality, i.e., the image-only model shows better performance on Closeup tasks, while the text-only model performs better on Search tasks.
Inspired by PinSage and related work on GNNs \cite{graphsage, gtn1}, we conducted an experiment augmenting ItemSage with information obtained from the Pinterest pin-board graph \cite{pixie}. Products can naturally be embedded into this graph by creating edges between a product and its own images. For each product, we performed random walks starting from its corresponding node (reusing the same configuration as PinSage \cite{pinsage}) and kept the most frequently occurring 50 image neighbors that are different from the product's own images. The images are mapped to their corresponding PinSage embeddings which are appended to the sequence of features provided as input to the transformer. This extension to ItemSage showed neutral results (Table \ref{tab:ablation}, Row ``Feature'') and increased training cost. We attribute this result to the fact that the PinSage embeddings of the product's own images are already aggregating the graph information, making the explicit extension redundant.
\subsubsection{Loss Ablation} \label{sec:loss-ablation}
In Section \ref{sec:loss}, we introduce our approach for sampling negative labels which consists of two sources: (1) other positive labels from the same batch as the current example and (2) randomly sampled negative labels from the entire catalog. In this section, we compare how our model performs if the negative labels are selected from only one of these sources. The results are presented in the Row ``Negative Sampling'' of Table \ref{tab:ablation}. We observe a steep drop in recall if only one source of negatives is used. Moreover, if we only select random negatives, the model converges to a degenerate solution (and thus we need to apply early stopping) where a few products become very popular and appear in the top 10 results of more than 10\%-15\% of the queries in the evaluation set depending on the task.
We also compare our mixed negative sampling approach with the approach from \cite{yang2020mixed}, and observe that our approach which introduces separate loss terms for in batch positives and random negatives provides an improvement of at least 3.5\% on every task.
\subsubsection{Task Ablation} \label{sec:task-ablation}
\begin{table*}[t]
\caption{Ablation study for ItemSage models. Relative differences from the ItemSage model are shown in the parentheses.}
\label{tab:ablation}
\begin{tabular}{ll|cccc|cccc}
\toprule
&& \multicolumn{4}{c|}{Closeup} & \multicolumn{4}{c}{Search}\\
&& Clicks & Saves & Add Cart & Checkouts & Clicks & Saves & Add Cart & Checkouts\\
\midrule
\multicolumn{2}{c|}{ItemSage}
& 0.816 & 0.812 & 0.897 & 0.916 & 0.749 & 0.762 & 0.842 & 0.869 \\
\midrule
\multirow{6}{40pt}{Feature}
& \multirow{2}{*}{Image Only}
& 0.795 & 0.787 & 0.882 & 0.908 & 0.670 & 0.698 & 0.798 & 0.830 \\
&& (-2.6\%) & (-3.1\%) & (-1.7\%) & (-0.9\%) &(-10.5\%) & (-8.4\%) & (-5.2\%) & (-4.5\%) \\
& \multirow{2}{*}{Text Only}
& 0.683 & 0.658 & 0.832 & 0.859 & 0.669 & 0.665 & 0.790 & 0.820 \\
&&(-16.3\%) &(-19.0\%) & (-7.2\%) & (-6.2\%) &(-10.7\%) &(-12.7\%) & (-6.2\%) & (-5.6\%) \\
& \multirow{2}{*}{Image + Text + Graph}
& 0.814 & 0.812 & 0.893 & 0.905 & 0.743 & 0.767 & 0.842 & 0.860 \\
&& (-0.2\%) & (0.0\%) & (-0.4\%) & (-1.2\%) & (-0.8\%) & (0.7\%) & (0.0\%) & (-1.0\%) \\
\midrule
\multirow{6}{40pt}{Negative Sampling}
& \multirow{2}{*}{$L_{S_{pos}}$ Only}
& 0.597 & 0.602 & 0.717 & 0.772 & 0.553 & 0.544 & 0.662 & 0.724 \\
&& (-26.8\%) & (-25.9\%) &(-20.1\%) & (-15.7\%) & (-26.2\%) & (-28.6\%) &(-21.4\%) &(-16.7\%) \\
& \multirow{2}{*}{$L_{S_{neg}}$ Only}
& 0.774 & 0.768 & 0.868 & 0.897 & 0.655 & 0.670 & 0.804 & 0.840 \\
&&(-5.1\%) &(-5.2\%) & (-3.2\%) & (-2.1\%) &(-12.6\%) &(-12.1\%) & (-4.5\%) & (-3.3\%) \\
& \multirow{2}{*}{$L_{S_{mixed}}$}
& 0.781 & 0.774 & 0.860 & 0.884 & 0.687 & 0.706 & 0.809 & 0.838 \\
&&(-4.3\%) &(-4.7\%) & (-4.1\%) & (-3.5\%) &(-8.3\%) &(-7.3\%) & (-3.9\%) & (-3.6\%) \\
\midrule
\multirow{4}{40pt}{Surface}
& \multirow{2}{*}{Closeup}
& 0.815 & 0.811 & 0.891 & 0.909 & - & - & - & - \\
&& (-0.1\%) & (-0.1\%) & (-0.7\%) & (-0.8\%) & - & - & - & - \\
& \multirow{2}{*}{Search}
& - & - & - & - & 0.760 & 0.766 & 0.830 & 0.861 \\
&& - & - & - & - & ( 1.5\%) & ( 0.5\%) & (-1.4\%) & (-0.9\%) \\
\midrule
\multirow{4}{40pt}{Engagement Type}
& \multirow{2}{*}{Clicks + Saves}
& 0.819 & 0.812 & 0.869 & 0.894 & 0.755 & 0.768 & 0.689 & 0.765 \\
&& ( 0.4\%) & ( 0.0\%) & (-3.1\%) & (-2.4\%) & ( 0.8\%) & ( 0.8\%) &(-18.2\%) &(-12.0\%) \\
& \multirow{2}{*}{Add Cart + Checkouts}
& 0.503 & 0.503 & 0.850 & 0.882 & 0.382 & 0.392 & 0.768 & 0.793 \\
&&(-38.4\%) &(-38.1\%) & (-5.2\%) & (-3.7\%) &(-49.0\%) &(-48.6\%) & (-8.8\%) & (-8.7\%) \\
\bottomrule
\end{tabular}
\end{table*}
In this section, we evaluate the effectiveness of the multi-task learning regime used to train ItemSage.
We first evaluate the recall of ItemSage embeddings against two baseline models trained on vertical-specific tasks: ``Closeup'' and ``Search'' (Table \ref{tab:ablation}, Row ``Surface''). Each baseline uses the same model architecture and represents the performance we would expect if we deployed separate embeddings per vertical. We find it encouraging that the model optimized for both verticals performs better on 6 out of 8 tasks. This suggests that multi-task learning improves performance even across product verticals and that, in the cases where performance degrades, the degradation is not substantial compared to a vertical-specific model. This is a remarkable result considering that the PinSage and SearchSage embeddings are not compatible with one another. We also find it interesting that the largest improvements are seen on the shopping-specific objectives (add-to-cart actions and checkouts), whose labels are 30 times sparser than clicks and saves, suggesting that the cross-vertical setup helps the model extract more information about which products are more likely to be purchased by users.
The next experiment focuses on the impact of mixing regular engagement tasks (clicks and saves) with shopping engagement tasks (add-to-cart actions and checkouts). The results are summarized in the Row ``Engagement Type'' of Table \ref{tab:ablation}. As expected, regardless of the task, we see an improvement in recall whenever we optimize the model on that specific task. Furthermore, we observe substantial gains in add-to-cart actions and checkouts compared to a model specialized at modeling just shopping engagement, and minimal losses on the clicks and saves tasks compared to the corresponding specialized model. We believe the substantial gains (3.7\%-8.7\%) of the joint model on add-to-cart actions and checkouts can be explained by the significant difference in training data volume between shopping and regular engagement: the additional click and save labels help the embeddings converge towards a more robust representation.
\subsection{Online Experiments} \label{sec:online-experiments}
\begin{table}
\caption{Results of A/B experiments. The three columns show the relative difference between ItemSage and the baselines in number of total clicks, average checkouts per user, and average Gross Merchandise Value (GMV) per user.}
\label{tab:online-results}
\begin{tabular}{c c c c}
\toprule
Surface & Clicks & Purchases & GMV \\
\midrule
Home & 11.34\% & 2.61\% & 2.94\% \\
Closeup & 9.97\% & 5.13\% & 6.85\% \\
Search & 2.3\% & 1.5\% & 3.7\% \\
\bottomrule
\end{tabular}
\end{table}
We report results from A/B experiments applying ItemSage in each of the main surfaces at Pinterest: Home, Closeup and Search. In these experiments, ItemSage was used to generate candidates for upstream recommendation systems via approximate nearest neighbor retrieval \cite{hsnw}. As shown in Section \ref{sec:task-ablation}, ItemSage is directly applicable to Search and Closeup recommendations; to extend it to Home recommendations, we cluster the user activity history and sample pins from several clusters to reduce the problem to the same setting as producing Closeup recommendations \cite{pinnersage}. The embeddings used in the control group are generated with the baselines introduced in Section \ref{sec:arch-experiments}: the Sum baseline is used for Home and Closeup recommendations and the Sum-MLP baseline is used in Search. The results are summarized in Table \ref{tab:online-results}. We observed a strong impact on both engagement and shopping-specific key business indicators from deploying ItemSage embeddings for product recommendations on all surfaces.
\section{Conclusion}
In this work, we presented our end-to-end approach of learning ItemSage, the product embeddings for shopping recommendations at Pinterest.
In contrast to other work focused on representation learning for e-commerce applications, our embeddings are able to extract information from both text and image features. Visual features are particularly effective for platforms with a strong visual component like Pinterest, while modeling text features leads to improved relevance, especially in search results.
Furthermore, we describe a procedure to make our embeddings compatible with the embeddings of other entities in the Pinterest ecosystem \textbf{at the same time}. Our approach enables us to deploy a single embedding version to power applications with different inputs: sequences of pins (images) for user activity based recommendations in the Home surface, single images for image based recommendations in the Closeup surface and text queries to power search results. This leads to a 3X reduction in infrastructure costs as we do not need to infer separate embeddings per vertical. From our experience, embedding version upgrades are a slow process taking consumers up to a year to completely adopt a new version before an old one can be fully deprecated. A unified embedding for all applications means less maintenance throughout this period and a more efficient, consolidated process for upgrades and deprecation.
Finally, we show that by applying multi-task learning, ItemSage embeddings can optimize for multiple engagement types leading to improved performance for objectives with sparse labels and no penalty for objectives with sufficient training labels. By optimizing our product embeddings this way, we can ensure that the shopping recommendation stack is efficient with respect to all objectives starting from the candidate generation layer.
The effectiveness of our approach is demonstrated by thorough offline ablation studies and online A/B experiments. There are several promising areas for future work, such as replacing the bag of words model used for text features with a pretrained Transformer model or using neighborhood sampling to extract additional multi-modal features from the Pinterest entity graph from shopping specific entities (e.g. merchants) or edges (e.g. co-purchases).
\begin{acks}
The authors would like to thank Pak-Ming Cheung, Van Lam, Jiajing Xu, Cosmin Negruseri, Abhishek Tayal, Somnath Banerjee, Vijai Mohan and Kurchi Subhra Hazra who contributed or supported us throughout this project.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The low-energy effective models of QCD, such as the
Nambu--Jona-Lasinio (NJL) model \cite{nambu61} and the linear sigma
model (also called the chiral quark or quark-meson (QM) model)
\cite{gell-mann60}, are based on the global chiral symmetry of
QCD. They have proved very useful for a qualitative understanding of
many aspects of the spontaneous breaking of chiral
symmetry and its restoration at finite temperature and density, but
they share the lack of confinement as a major drawback. As a
consequence of the absence of gluonic effective degrees of freedom and
of the lack of color clustering \cite{megias06}, there are
unsuppressed contributions of constituent quarks in the
low-temperature phase. Both features, which are in fact related, reduce
the reliability of the quantitative thermodynamic predictions of these
models, such as the equation of state or the location of the critical
end point (CEP) in the $\mu_q-T$ phase diagram.
Since the QCD phase transition involves both the restoration of chiral
symmetry measured by the evaporation of the chiral condensate
$\grdstate{\bar\psi\psi}$ and the liberation of quarks and gluons
encoded in the change of the Polyakov loop $\Phi$, much effort has been
devoted to understanding the relation between the chiral and
deconfinement phase transitions. An argument by Casher \cite{casher79}
states that in the vacuum, that is at $\mu_q=T=0,$ confinement implies
the breaking of the chiral symmetry. The connection between
$\grdstate{\bar\psi\psi}$ and $\Phi$ is revealed by the spectral
density of the Dirac operator $\rho(\lambda)$. In the vacuum the
Banks-Casher relation $\rho(0)=\grdstate{\bar\psi\psi}/\pi$
\cite{banks80} states that the spectral density of the Dirac operator
in the deep infrared is proportional to the quark condensate. Finite
temperature lattice studies at $\mu_q=0$ show that the connection
between chiral symmetry and confinement suggested by Casher's
argument holds. The infrared part of $\rho(\lambda)$ undergoes a
pronounced change as one crosses from the confined to the deconfined
phase \cite{edwards00,kovacs08}. As shown recently in
\cite{gattringer06}, the phase of the Polyakov loop $\Phi,$ which can
be expressed as a spectral sum of eigenvalues and eigenvectors of the
Dirac operator with different boundary conditions, receives its main
contribution from the infrared end of $\rho(\lambda).$ It is generally
true for both $N_c=2$ and $N_c=3$ that at high temperature the fermion
determinant favors the sector where the Polyakov loop lies along the
positive real axis \cite{edwards00,kovacs08}. For this type of
configuration in which the phase of $\Phi$ vanishes the chiral
symmetry is restored, because a sizable gap develops in the spectral
density of the Dirac operator which implies in view of the
Banks-Casher relation $\grdstate{\bar\psi\psi}=0.$ For configurations
in which the phase of the Polyakov loop does not vanish, chiral
symmetry is not restored, as observed in a lattice study of quenched
QCD \cite{chandrasekharan96a} and also in a random matrix model
calculation \cite{stephanov96}.
Casher's argument suggests that the temperature $T_d$ for the
deconfinement phase transition is somewhat lower than the restoration
temperature $T_\chi$ of the chiral symmetry. As explained in
\cite{glozman09}, at finite density Casher's argument could fail, so that it
does not contradict the existence of a dense phase in which, at a given
temperature, chiral symmetry is restored while quarks remain confined.
Such a phase can exist inside the so-called quarkyonic phase, which
was suggested as a new phase of the QCD at finite temperature and
density, based on its existence within a large-$N_c$
analysis~\cite{mcLerran07}.
In the Polyakov-loop extended NJL model (PNJL) where the coupling of
the Polyakov loop to the quark sector is achieved by the propagation
of the quarks on a constant temporal gauge field background, the
simultaneous crossover-type transition of deconfinement and chiral
restoration was obtained \cite{fukushima04b}. As shown in
\cite{meisinger96,chandrasekharan96b} this model is able to reproduce
the main features of the quenched lattice result of
\cite{chandrasekharan96a}. The phase transition was recently
intensively investigated in the PNJL model with two
\cite{hansen07,abuki08,kashiwa08,costa09a,sakai09} and three flavors
\cite{ciminale08,fu2008,costa09b}, also in the nonlocal formulation
of the model \cite{sasaki07,dumm10}. The interplay between chiral and
deconfinement transitions was investigated in the PNJL model using
large-$N_c$ expansion in \cite{mcLerran09}.
By coupling the Polyakov loop to the quark degrees of freedom of the
QM model the thermodynamics of the resulting Polyakov quark-meson
model (PQM) was studied for two
\cite{schaefer07,tuominen08,nakano10,skokov10,skokov10b,herbst10} and three
quark flavors \cite{schaefer10,mao10,gupta10}. The effect on the
chiral and deconfining phase transitions of the strong magnetic field
expected to be generated at the LHC in noncentral high-energy
heavy-ion collisions was studied recently in \cite{mizher10} within
the two-flavor PQM model. The possibility of coupling the Polyakov loop
to meson models without quarks was considered in
\cite{sannino04,fraga}.
The coupling of the Polyakov loop to the chiral effective models
mimics the effect of confinement by statistically suppressing at low
temperature the contribution of one- and two-quark states relative to
the three-quark states. This feature makes the Polyakov-loop extended
effective models more appropriate for the description of the
low-temperature phase and for quantitative comparison with the
thermodynamic observables on the lattice
\cite{weise08,schaefer07,schaefer10}. Better agreement is expected up
to $T\simeq (1.5-2) T_c$ above which the transverse gluonic degrees of
freedom dominate in thermodynamic quantities, such as the pressure,
over the longitudinal ones represented by the Polyakov loop.
Despite this success, one should keep in mind that the solution of the
Polyakov-loop extended effective models is mainly obtained at the
lowest one-loop (trace-log) order of the fermionic sector; hence,
studying their stability against the inclusion of higher loops would
certainly be of interest. Some approximations to the PQM model
\cite{schaefer07,tuominen08,schaefer10,mao10,gupta10} neglect the
fermionic vacuum fluctuations and, by treating the mesonic potential at
tree level, completely disregard quantum effects in the mesonic
sector. The effect of including quantum fluctuations in the PQM
model was recently studied in \cite{nakano10,skokov10,herbst10}
using functional renormalization group methods.
In this work we address two questions using a large-$N_f$
approximation to the $SU(2)_L\times SU(2)_R\simeq O(4)$ model. The
first is to what extent the inclusion of the Polyakov loop
modifies the $\mu_q-T$ phase diagram obtained previously in
\cite{toni04} in the chiral limit of the two-flavor QM model using the
large-$N_f$ approximation. The second concerns the effect of
different partial resummations on the quantitative results. To this
end several approximate resummations of the perturbative series will
be investigated and the obtained results compared.
The paper is organized as follows. In Sec.~II we overview some basic
facts about the Polyakov loop, including different forms of the
effective potential and we introduce and parametrize the PQM model,
presenting also the approximations exploited for its solution. The
renormalization of the model and the determination of the counterterms
is discussed in Sec.~III. In Sec.~IV we present the numerical results
on the $\mu_q-T$ phase diagram obtained in the chiral limit and for
the physical value of the pion mass by using different forms of the
Polyakov-loop effective potential and various approximations to the
resummed pion propagator. Section V is devoted to discussion and
summary.
\section{The PQM model within a large-$N_f$ approximation
\label{sec:model}}
\subsection{The Polyakov loop as an order parameter\label{ss:POP}}
We briefly review a few well-known facts about the Polyakov loop,
which is incorporated as a new effective degree of freedom in the
chiral quark model. This is usually done by considering the
propagation of quarks on the homogeneous background of a temporal
gauge field $A_0(x).$ At finite temperature $T=1/\beta$, after
analytical continuation to imaginary time $t\to i\tau,$ $A_0\to i A_4,$
the temporal component of the Euclidean gauge field
$A_4$ enters in the definition of the Polyakov-loop operator (path
ordered Wilson line in temporal direction) $L(\vec x)$ and its
Hermitian (charge) conjugate $L^\dagger(\vec x)$
\begin{equation}
L(\vec x)={\cal P} \exp\left[i\int_0^\beta d\tau A_4(\tau,\vec x) \right],
\qquad
L^\dagger(\vec x)={\cal P} \exp\left[-i\int_0^\beta d\tau A_4^*(\tau,\vec x)
\right],
\label{Eq:Polyakov_op}
\end{equation}
which are matrices in the fundamental representation of the $SU(N_c)$
color gauge group ($N_c=3$). In the so-called Polyakov gauge, the temporal
component of the gauge field is time independent and can be gauge rotated
to a diagonal form in the color space
$A_{4,d}(\vec x)=\phi_3(\vec x) \lambda_3+\phi_8(\vec x) \lambda_8$
\cite{reinhardt97,ford98,schaden05},
where $\lambda_3,\lambda_8$ are the two diagonal Gell-Mann matrices.
Then the Polyakov-loop operator simplifies
\begin{eqnarray}
L(\vec x)&=&
\tn{diag}(e^{i\beta\phi_+(\vec x)},
e^{i\beta\phi_-(\vec x)},e^{-i\beta(\phi_+(\vec x)+\phi_-(\vec x))}),
\label{Eq:Polyakov_op_diag}
\end{eqnarray}
where $\phi_\pm(\vec x)=\pm \phi_3(\vec x)+\phi_8(\vec x)/\sqrt{3},$
with a similar form for the conjugate $L^\dagger(\vec x).$
Topologically nontrivial gauge transformations
$U(\tau,\vec x)\in SU(N_c)$ that are periodic up to a twist, that is
$U(\tau+\beta,\vec x)=z U(\tau,\vec x),$ were introduced in \cite{thooft},
where $z$ is an element of the center of the $SU(N_c)$ group which is
isomorphic with
$\mathbb{Z}_{N_c}=\{z|z=\exp(2\pi n i/N_c), n=0,1, \dots, N_c-1\},$
the cyclic group of order $N_c.$
Under such transformations the color trace of the Polyakov-loop
operator and its conjugate, that is $l(x)=\textrm{tr}_c L(\vec x)/N_c$ and
$l^\dagger(x)=\textrm{tr}_c L^\dagger(\vec x)/N_c,$ are transformed by an element of
the center: $l(x)\to z l(x), l^\dagger(x)\to z^* l^\dagger(x).$
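The diagonal parametrization (\ref{Eq:Polyakov_op_diag}) makes this transformation property easy to verify explicitly: multiplying the loop by a nontrivial center element $z$ preserves $\det L=1$ (since $z^3=1$) while rotating the traced loop by exactly $z$. A small numerical check, with arbitrary illustrative values for $\beta$ and $\phi_\pm$:

```python
import cmath

def polyakov_eigenvalues(beta, phi_p, phi_m):
    """Eigenvalues of the diagonal Polyakov-loop matrix of Eq. (2)."""
    return [cmath.exp(1j * beta * phi_p),
            cmath.exp(1j * beta * phi_m),
            cmath.exp(-1j * beta * (phi_p + phi_m))]

def traced_loop(eigs):
    """l = tr_c L / N_c for N_c = 3."""
    return sum(eigs) / 3.0

# Arbitrary illustrative values of the background-field components.
beta, phi_p, phi_m = 5.0, 0.10, 0.25
eigs = polyakov_eigenvalues(beta, phi_p, phi_m)

z = cmath.exp(2j * cmath.pi / 3)      # nontrivial element of Z_3
twisted = [z * e for e in eigs]

# z^3 = 1, so the twisted matrix still has unit determinant (stays in SU(3)),
det_twisted = twisted[0] * twisted[1] * twisted[2]
# while the traced loop is rotated by exactly the center phase z.
l_rotated = traced_loop(twisted)
```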
Consequently, in the pure gauge theory, which has an exact
$\mathbb{Z}_{N_c}$ global symmetry, the thermal expectation value
of the traced Polyakov-loop operator and its conjugate,
\begin{eqnarray}
\Phi(\vec x)=\frac{1}{N_c} \grdstate{\textrm{tr}_c L(\vec x)},\qquad
\bar\Phi(\vec x)=\frac{1}{N_c} \grdstate{\textrm{tr}_c L^\dagger(\vec x)},
\label{Eq:Polyakov_field}
\end{eqnarray}
are order parameters for the center symmetry and must vanish if the
symmetry is unbroken. However, the Polyakov
loop $\Phi(\vec x)$ and its conjugate $\bar\Phi(\vec x)$ can acquire
a nonvanishing value, signaling the spontaneous breaking of
the $\mathbb{Z}_{N_c}$ symmetry. These complex quantities can be regarded as
order parameters of the deconfinement phase transition, because the
free energy of a heavy (static) quark-antiquark pair with spatial separation
$\vec r=\vec x-\vec y$ is related to the expectation value of the
correlator of the traced Polyakov-loop operator for which cluster
decomposition is expected to hold at infinite separation
\begin{equation}
\exp\left[-\beta F_{q\bar q}(\vec r)\right]=
\frac{1}{N_c^2} \grdstate{\textrm{tr}_c L(\vec x)\, \textrm{tr}_c L^\dagger(\vec y)} \to
\Phi(\vec x)\bar\Phi(\vec y).
\label{Eq:Polyakov_cluster}
\end{equation}
If $\Phi(\vec x)=\bar\Phi(\vec y)=0$ then
$F_{q\bar q}(\vec r)\to \infty,$ while if $\Phi(\vec x),\bar\Phi(\vec y) \ne 0$
then $F_{q\bar q}(\vec r)$ stays finite, corresponding to confinement and
deconfinement, respectively \cite{mclerran81,svetitsky86}.
In the presence of dynamical fermions the $\mathbb{Z}_{N_c}$ symmetry
is not exact anymore. Nevertheless, the Polyakov loop gives through its
distribution information about the confinement (low $T$) or
deconfinement phase (high $T$) of the system in both the canonical and
the grand-canonical formulation of QCD \cite{faber95,kratochvila06}.
Since its absolute value can be related to the free energy difference
between two systems, one containing the quark-antiquark source pair and the
other not containing it, by renormalizing the free energy a
renormalized Polyakov loop can be defined \cite{peter02}, which
provides information on the temperature of the deconfinement phase
transition.
\subsection{The mean-field Polyakov-loop potentials \label{ss:PEP}}
In the mean-field approximation $\Phi(x)$ and $\bar\Phi(x)$ are
replaced by $x$-independent constant fields which satisfy
$|\Phi|=|\bar\Phi|$ at vanishing chemical potential. We review here
several forms and some basic features of the mean-field effective
potential for the Polyakov loop frequently used in the literature.
This effective potential will be incorporated in the thermodynamic
potential of the PQM model. The simplest effective potential is of a
Landau type, constructed with terms consistent with the $\mathbb{Z}_3$
symmetry \cite{pisarski00}:
\begin{equation}
\beta^4\,{\cal U}_\tn{poly}(\Phi,\bar\Phi)=
-\frac{b_2(T)}{2}\Phi\bar\Phi -\frac{b_3}{6}(\Phi^3 + \bar\Phi^3)
+\frac{b_4}{4}(\Phi\bar\Phi)^2\, ,
\label{Eq:P_eff_pot_poly}
\end{equation}
where the temperature-dependent coefficient which makes
spontaneous symmetry breaking possible is
\begin{equation}
b_2(T)=a_0 + a_1\left(\frac{T_0}{T}\right) +a_2\left(\frac{T_0}{T}\right)^2
+a_3 \left(\frac{T_0}{T}\right)^3\,.
\end{equation}
$T_0$ is the transition temperature of the confinement/deconfinement
phase transition; in the pure gauge theory $T_0=270$~MeV. The
parameters $a_i, i=0,\dots, 3$ and $b_3, b_4$ determined in
\cite{ratti06} reproduce the pressure and the entropy and energy
densities measured in pure $SU(3)$ lattice gauge theory
\cite{boyd96}. The minimum of this potential is at $\Phi=0$ for low
temperature and $\Phi\to 1$ for $T\to \infty$ in accordance with the
definitions (\ref{Eq:Polyakov_op}) and
(\ref{Eq:Polyakov_field}). However, when this potential is used in
either the PNJL or the PQM model, the minimum of the resulting
thermodynamic potential is at $\Phi>1$ for $T\to\infty$, and the
potential also leads to negative susceptibilities \cite{sasaki07}.
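As a concrete illustration of how (\ref{Eq:P_eff_pot_poly}) generates the symmetry-breaking pattern, the sketch below minimizes the potential at $\mu_q=0$ (where $\Phi=\bar\Phi$ is real) over a grid. The coefficient values are those commonly quoted from \cite{ratti06} ($a_0=6.75$, $a_1=-1.95$, $a_2=2.625$, $a_3=-7.44$, $b_3=0.75$, $b_4=7.5$); since the text does not list them, treat them as an assumption of this sketch.

```python
def b2(T, T0=270.0):
    """Temperature-dependent quadratic coefficient of the polynomial potential."""
    x = T0 / T
    return 6.75 - 1.95 * x + 2.625 * x**2 - 7.44 * x**3

def U_poly(phi, T):
    """beta^4 * U_poly at mu_q = 0, where Phi = Phibar = phi is real."""
    return -0.5 * b2(T) * phi**2 - (2.0 * 0.75 / 6.0) * phi**3 + (7.5 / 4.0) * phi**4

def phi_min(T, n=2001):
    """Grid minimization of the potential over phi in [0, 1.2]."""
    grid = [1.2 * i / (n - 1) for i in range(n)]
    return min(grid, key=lambda p: U_poly(p, T))
```

At $T=150$~MeV the minimum sits at $\Phi\approx 0$, while at $T=400$~MeV it is near $\Phi\approx 0.8$, approaching 1 from below as $T\to\infty$; the $\Phi>1$ pathology mentioned above only arises once the potential is coupled to the quark sector.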
An effective potential for the Polyakov loop inspired by a
strong-coupling expansion of the lattice QCD action was derived in
\cite{fukushima04b,fukushima04a}. Using the part coming from the
$SU(3)$ Haar measure of group integration an effective potential was
constructed in \cite{ratti07,roessner07}
\begin{equation}
\beta^4\,{\cal U}_{\text{log}}(\Phi,\bar\Phi)= -\frac{1}{2}a(T) \Phi\bar \Phi
+ b(T) \ln \left[1-6 \Phi\bar\Phi + 4\left(\Phi^{3}+\bar\Phi^{3}\right)
- 3 \left(\Phi\bar\Phi\right)^{2}\right]\, ,
\label{Eq:P_eff_pot_log}
\end{equation}
with the temperature-dependent coefficients
\begin{equation}
a(T)=a_0+a_1 \left(\frac{T_0}{T}\right)+a_2 \left(\frac{T_0}{T}\right)^2,
\qquad
b(T)=b_3\left(\frac{T_0}{T}\right)^3\ .
\end{equation}
The parameters $a_i,i=0,1,2$ and $b_3$ determined in \cite{ratti07}
reproduce the thermodynamic quantities in the pure $SU(3)$ gauge theory
measured on the lattice. The use of this effective potential
cures the problem with negative susceptibilities \cite{sasaki07}
and since the logarithm in ${\cal U}_{\text{log}}(\Phi,\bar\Phi)$ diverges
as $\Phi,\bar\Phi\to 1$ it will also guarantee that
$\Phi,\bar\Phi\to 1$ for $T\to \infty.$
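The same grid-minimization exercise with the logarithmic potential (\ref{Eq:P_eff_pot_log}) exhibits the qualitative difference: the logarithm diverges on the boundary of the target region, so the minimum can never reach $\Phi=1$. The coefficients below are the values usually quoted from \cite{ratti07} ($a_0=3.51$, $a_1=-2.47$, $a_2=15.2$, $b_3=-1.75$), again an assumption of this sketch rather than numbers taken from the text.

```python
import math

def a_coef(T, T0=270.0):
    x = T0 / T
    return 3.51 - 2.47 * x + 15.2 * x**2

def b_coef(T, T0=270.0):
    return -1.75 * (T0 / T)**3

def U_log(phi, T):
    """beta^4 * U_log at mu_q = 0 with Phi = Phibar = phi real."""
    arg = 1.0 - 6.0 * phi**2 + 8.0 * phi**3 - 3.0 * phi**4
    if arg <= 0.0:
        return float("inf")   # outside the domain: infinite barrier
    return -0.5 * a_coef(T) * phi**2 + b_coef(T) * math.log(arg)

def phi_min(T, n=4001):
    grid = [i / (n - 1) for i in range(n)]   # phi in [0, 1]
    return min(grid, key=lambda p: U_log(p, T))
```

Because the $\ln$ term supplies an infinite barrier at the boundary, the minimum stays strictly below $\Phi=1$ at any temperature.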
A third effective potential is the one determined in
Refs.~\cite{fukushima04b,fukushima04a}:
\begin{equation}
\beta\, {\cal U}_{\tn{Fuku}}(\Phi,\bar\Phi) =
-b \left[
54 e^{-a/T} \Phi \bar\Phi +\ln\left[1 - 6 \Phi \bar\Phi
+ 4 (\Phi^3 + \bar\Phi^3)
- 3 (\Phi \bar\Phi)^2 \right]
\right],
\label{Eq:P_eff_pot_Fuku}
\end{equation}
where $a$ controls the temperature of the deconfinement phase
transition in pure gauge theory, while $b$ controls the weight of
gluonic effects in the transition. In this case, the parameters
$a=664$~MeV and $b=(196.2 \tn{MeV})^3$ are obtained from the
requirement of having a first order transition at about $T=270$~MeV
\cite{fukushima08,schaefer07}.
It was shown in \cite{fukushima08} that there is little difference in
the pressure calculated from the three effective potentials for the
Polyakov loop in their validity region up to $T\simeq (1.5-2) T_c.$
The presence of dynamical quarks influences an effective treatment
based on the Polyakov loop which in this case is not an exact order
parameter. Defining effective Polyakov-loop potentials for
nonvanishing chemical potential when $|\Phi|\ne|\bar\Phi|$ is
somewhat ambiguous \cite{schaefer07}. In the present analysis we will
use at $\mu_q\ne 0$ the $\mathbb{Z}_3$ symmetric Polyakov-loop
potentials given above. The effect of the dynamical quarks was modeled by
considering the $N_f$ and $\mu_q$ dependence of the $T_0$ parameter
of the Polyakov-loop effective potential. Using renormalization group
arguments this dependence was parametrized in \cite{schaefer07}
to be of the form $T_0(\mu_q,N_f)=T_\tau \exp(-1/(\alpha_0 b(\mu_q)))$.
The parameters were chosen to have $T_0(\mu_q=0,N_f=2)=208.64$~MeV. When using
the polynomial and logarithmic effective potential for the Polyakov loop
given in (\ref{Eq:P_eff_pot_poly}) and (\ref{Eq:P_eff_pot_log})
we will consider in addition to $T_0=270$~MeV two more cases,
one with a constant value $T_0=208$~MeV and the other with the
above-mentioned $\mu_q$-dependent $T_0$ taken at $N_f=2.$
\subsection{Constructing the grand potential of the PQM
in a ``$\Phi$-derivable'' approximation}
The Lagrangian of the $SU(2)_L\times SU(2)_R$ chiral quark model
\cite{gell-mann60} is written in the usual matrix form
\cite{bardeen68,roder03} by introducing two $N_f\times N_f$ matrices
\begin{equation}
M(x)=\frac{\sigma(x)}{\sqrt{2N_f}}\mathbf{1}+i T^a \pi^a(x),\qquad
M_5(x)=\frac{\sigma(x)}{\sqrt{2N_f}}\mathbf{1}+i \gamma_5 T^a \pi^a(x),
\label{Eq:M_and_M5}
\end{equation}
in terms of which one has
\begin{eqnarray}
{\cal L}&=&\textrm{tr}_f\left[\partial_\mu M^\dagger\partial^\mu M-m^2 M^\dagger M \right]
-\frac{\lambda}{6 N}\left[\textrm{tr}_f\left(M^\dagger M\right)\right]^2 + N_f h\sigma
+\bar\psi\left(i\gamma^\mu D_\mu-\tilde g M_5\right)\psi +\tn{c.t.}
\,,
\label{Eq:Lagrange_matrix}
\end{eqnarray}
where in the mesonic part we have introduced an explicit symmetry
breaking term and ``c.t.'' stands for counterterms.
After performing the trace, one can see that without the fermionic term
the Lagrangian (\ref{Eq:Lagrange_matrix}) is that of the $O(N)$ model,
which describes the system of sigma and $N-1$ pion fields and is appropriately
parametrized for a large-$N$ treatment \cite{patkos02}. Vanishing background
is considered for the spatial components of the gauge
field and a constant mean-field $A_0$ for the temporal component, so that
the covariant derivative is $D_\mu=\partial_\mu-i \delta_{\mu 0} A_0.$
The trace in (\ref{Eq:Lagrange_matrix}) is to be taken in the flavor
space and to simplify notations the flavor, color, and Dirac indices
of the fermionic fields $\psi,\bar\psi$ are not indicated. The
$SU(N_f)$ generators in the fundamental representation $T^a$
($a=1,\dots, N_f^2-1$) satisfy the normalization condition
$\textrm{tr}_f(T^a T^b)=\delta^{ab}/2.$ Some rescalings with $N_f=\sqrt{N}$
were performed, and since in the mesonic sector we only want to increase
the number of pions, we do not introduce another invariant,
$\textrm{tr}_f\big[\big(M^\dagger M\big)^2\big],$ which for $N_f>2$
is independent of $\big[\textrm{tr}_f\big(M^\dagger M\big)\big]^2.$ For a
recent treatment of the $U(N_f)_L\times U(N_f)_R$ meson model having both
invariants see \cite{fejos10}.
The constituent quarks become massive only after spontaneous/explicit
symmetry breaking. In (\ref{Eq:Lagrange_matrix}) the sigma field is
shifted by the vacuum expectation value $v$ as $\sigma\to v\sqrt{N}+\sigma,$
where on the right-hand side of the arrow $\sigma$ denotes the
fluctuating part of the original field. Then evaluating the trace,
one obtains
\begin{eqnarray}
{\cal L}&=&
-N\left[\frac{\lambda}{24} v^4+\frac{1}{2}m^2v^2 -h v\right]-
\sqrt{N}\left[\frac{\lambda}{6} v^3+m^2 v - h\right]\sigma
\nonumber\\
&&
+\frac{1}{2}\bigl[(\partial\sigma)^2 + (\partial\vec\pi)^2 \bigr]
-\frac{1}{2}m^2_{\sigma 0}\sigma^2
-\frac{1}{2}m^2_{\pi 0}\vec\pi^2
-\frac{\lambda v}{6\sqrt{N}}\sigma \rho^2-
\frac{\lambda}{24N}\rho^4
\nonumber\\
&&+\bar \psi\big[(i\partial^\mu+\delta^{\mu0}A_0)\gamma_\mu-m_q\big]\psi
-\frac{g}{\sqrt{N}}\left[\bar\psi
\left(\sigma +i\sqrt{2N_f}\gamma_5T^a\pi^a\right)\right]\psi+
\tn{c.t.},
\label{Eq:Lagrange_field}
\end{eqnarray}
where $\rho^2=\sigma^2+\vec\pi^2.$
Here a rescaled Yukawa coupling $g=\tilde g \sqrt{N/(2N_f)}$ was
introduced in order to assure the finiteness of the tree-level quark
mass $m_q=g v$ in the $N\to\infty$ limit. In this limit,
due to the $N_f$ scaling of the vacuum expectation value of the
sigma field in (\ref{Eq:M_and_M5}) and
of the coupling $\lambda$ in (\ref{Eq:Lagrange_matrix}) the
tree-level sigma and pion masses $m^2_{\sigma 0}=m^2+\lambda v^2/2$
and $m^2_{\pi 0}=m^2+\lambda v^2/6$ are also finite.
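As a quick check, the quoted tree-level masses follow from expanding the classical potential around the shifted field:

```latex
% With \sigma \to v\sqrt{N}+\sigma one has
\rho^2 \to N v^2 + 2\sqrt{N}\, v\,\sigma + \sigma^2 + \vec\pi^{\,2},
\qquad
\left(\rho^2\right)^2 \supset
  2 N v^2 \left(\sigma^2+\vec\pi^{\,2}\right) + 4 N v^2 \sigma^2 ,
% so the part of V=\frac{1}{2}m^2\rho^2+\frac{\lambda}{24N}\left(\rho^2\right)^2
% quadratic in the fluctuations is
V_2 = \frac{1}{2}\left(m^2+\frac{\lambda v^2}{2}\right)\sigma^2
    + \frac{1}{2}\left(m^2+\frac{\lambda v^2}{6}\right)\vec\pi^{\,2},
```

reproducing $m^2_{\sigma 0}$ and $m^2_{\pi 0}$, with all factors of $N$ canceling.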
After continuation to imaginary time, the grand partition function $Z$
and the grand potential $\Omega(T,\mu_B)$ of the spatially uniform
system defined by (\ref{Eq:Lagrange_field}) are introduced as follows:
\begin{equation}
Z=\textrm{tr}\big\{
\exp\big[-\beta\big( H_0(A_4)+H_\tn{int}-\mu_B Q_B\big)\big]
\big\}=e^{-\beta \Omega},
\label{Eq:Z_def}
\end{equation}
where $\mu_B$ is the baryon chemical potential. $H_\tn{int}$ is the
interacting part of the Hamiltonian constructed from (\ref{Eq:Lagrange_field}).
With the help of $H_0$, the quadratic part of the Hamiltonian at vanishing
$A_4,$ one can define the $A_4$-dependent free Hamiltonian $H_0(A_4),$
which for two quark flavors $u$ and $d$ reads as
\begin{equation}
H_0(A_4)=H_0+\int d^3 x \left[i u^\dagger_i(x) A_{4,ij} u_j(x) +
i d^\dagger_i(x) A_{4,ij} d_j(x)\right],
\end{equation}
where $A_4=\tn{diag}(\phi_+,\phi_-,-(\phi_++\phi_-))$ is diagonal in color
space and $i,j$ denotes color indices. In (\ref{Eq:Z_def}) $Q_B$ is
the conserved baryon charge which can be written in terms of the particle
number operators as $Q_B=\frac{1}{3}\sum\limits_{i=1}^3 N_{q,i},$ with
$N_{q,i}=N_{u,i}+N_{d,i}-N_{\bar u,i}-N_{\bar d,i},$ so that in terms of
the fields $N_{q,i}=\int d^3 x \big(u_i^\dagger u_i+d_i^\dagger d_i\big).$
Then combining $H_0(A_4)$ and $\mu_B Q_B$ one can see that the effect of
fermions propagating on the constant background $A_4$, diagonal in
the color space, is like having imaginary chemical potential for color.
Following Ref.~\cite{korthals_altes00} one introduces
color-dependent chemical potential for fermions
\begin{equation}
\mu_{1,2}=\mu_q-i\phi_\pm,\quad
\mu_3=\mu_q+i(\phi_++\phi_-),
\label{Eq:c-dep_mu}
\end{equation}
where $\mu_q=\mu_B/3$ is the quark chemical potential.
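A practical consequence of this observation, used throughout the PNJL/PQM literature, is that for a single fermionic mode the product over the three colors reorganizes into the traced Polyakov loops via the SU(3) identity $\prod_{i=1}^{3}(1+z_i t)=1+3\Phi t+3\bar\Phi t^2+t^3$, where $z_i$ are the eigenvalues of $L$ and $t=e^{-\beta(E-\mu_q)}$. It holds because $\sum_i z_i=3\Phi$, $\sum_{i<j}z_i z_j=3\bar\Phi$, and $\prod_i z_i=\det L=1$. The identity is not spelled out in the text, so here is a quick numerical verification:

```python
import cmath

def color_product_residual(beta, phi_p, phi_m, t):
    """Check prod_i (1 + z_i t) = 1 + 3 Phi t + 3 Phibar t^2 + t^3
    for the eigenvalues z_i of the diagonal Polyakov-loop matrix."""
    z = [cmath.exp(1j * beta * phi_p),
         cmath.exp(1j * beta * phi_m),
         cmath.exp(-1j * beta * (phi_p + phi_m))]
    Phi = sum(z) / 3.0
    Phibar = sum(zi.conjugate() for zi in z) / 3.0
    lhs = (1 + z[0] * t) * (1 + z[1] * t) * (1 + z[2] * t)
    rhs = 1 + 3 * Phi * t + 3 * Phibar * t**2 + t**3
    return abs(lhs - rhs)
```

The residual vanishes (to rounding) for any real $t$ and any background $\phi_\pm$.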
Then, introducing the notation
${\cal H}=H_0- \sum\limits_{i=1}^3\mu_i N_{q,i},$
one can write $Z$ as a path integral over the fields, generically
denoted by~$\Psi$
\begin{equation}
Z=e^{-\beta \Omega_0}
\frac{\displaystyle
\int\big[{\cal D}\Psi\big] \bigg\{
e^{-\beta {\cal H}}
{\cal P} \exp\Big[-\int_0^\beta d\tau H_\tn{int}(\tau)\Big] \bigg\}
}
{\displaystyle \int\big[{\cal D}\Psi\big]
e^{-\beta {\cal H}}},
\end{equation}
where $\Omega_0$ is the grand potential of the
unperturbed system with fermions having color-dependent chemical potential
\begin{equation}
e^{-\beta \Omega_0}=
\int\big[{\cal D}\Psi\big] e^{-\beta {\cal H}}.
\end{equation}
In the one-particle irreducible (1PI) formalism the grand potential is
given by the sum of the grand potential of the unperturbed system and
of perturbative quantum corrections represented by closed loops
constructed with the tree-level (free) propagators. In the
``$\Phi$-derivable'' approximation of Ref.~\cite{ward60}, also called
two-particle irreducible (2PI) approximation, the grand potential is a
functional of the full propagators and field expectation values, and is
of the following form:
\begin{eqnarray}
\beta\Omega[G_\pi,G_\sigma,G,v,\Phi,\bar\Phi]&=&U(\Phi,\bar\Phi)+
\frac{N}{2} m^2 v^2+N\frac{\lambda}{24} v^4 - N h v
-(N-1)\frac{i}{2}\int_k
\left[\ln G_\pi^{-1}(k)+D^{-1}_\pi(k) G_\pi(k)\right]
\nonumber\\
&&-
\frac{i}{2}\int_k\left[
\ln G_\sigma^{-1}(k)+D^{-1}_\sigma(k) G_\sigma(k)\right]
+\sqrt{N} i\, \textrm{tr}_{D,c}\int_k
\left[\ln G^{-1}(k)+D^{-1}(k) G(k)\right]
\nonumber\\
&&+
\Gamma_\tn{2PI}[G_\pi,G_\sigma,G,v,\Phi,\bar\Phi]+\tn{c.t.}\, ,
\label{Eq:Omega_grand_pot}
\end{eqnarray}
where the trace is taken in Dirac and color space. $U(\Phi,\bar\Phi)$
is a particular version of the effective Polyakov-loop potential
reviewed in Sec.~\ref{ss:PEP}; the tree-level propagators of
the pion, sigma, and constituent quark fields are
\begin{equation}
i D_\pi^{-1}(k)=k^2-m^2_{\pi0},\qquad
i D_\sigma^{-1}(k)=k^2-m^2_{\sigma0},\qquad
i D^{-1}(k)=\mbox{\,\slash \hspace{-0.5em}$k$}-m_q,\qquad
\end{equation}
while $G_\pi,$ $G_\sigma,$ and $G$ are the respective full propagators.
$\Gamma_\tn{2PI}[G_\pi,G_\sigma,G,v,\Phi,\bar\Phi]$ denotes the set
of 2PI skeleton diagrams constructed with full
propagators, which to ${\cal O}(1/\sqrt{N})$ accuracy is given by
\begin{eqnarray}
\Gamma_\tn{2PI}[G_\pi,G_\sigma,G,v,\Phi,\bar\Phi]
&=&
N\frac{\lambda}{24}\left(\int_k G_\pi(k)\right)^2
+\frac{\lambda}{12} \int_k G_\pi(k) \int_p G_\sigma(p)
-\frac{\lambda}{12}i\int_k \Pi(k)
-\frac{i}{2}\int_k\ln\bigg(1-\frac{\lambda}{6}\Pi(k)\bigg)
\nonumber\\
&&
-\frac{\lambda}{6} v^2\int_k G_\sigma(k)
+\frac{\lambda}{6} v^2\int_k \frac{G_\sigma(k)}{1-\lambda \Pi(k)/6}
\nonumber\\
&&
-\sqrt{N}\frac{g^2}{2}i\textrm{tr}_{D,c}\int_k\int_p\gamma_5 G(k)\gamma_5 G(k+p)G_\pi(p)
+\frac{g^2}{2\sqrt{N}}i\textrm{tr}_{D,c}\int_k\int_p G(k) G(k+p) G_\sigma(p)\,, \ \ \
\label{Eq:Omega_2PI}
\end{eqnarray}
where the notation $\displaystyle \Pi(k)=-i\int_p G_\pi(p) G_\pi(k+p)$
was introduced. The mesonic part of $\Gamma_\tn{2PI}$ contains the 2PI
diagrams of the $O(N)$ model as given in Eq.~(2.13) and Figs.~2 and 4 of
\cite{dominici93} and also in Eq.~(48) and Fig.~2 of \cite{fejos09}.
We see that the contribution of the fermions, which comes with
fractional powers of $N$, interleaves between the leading-order (LO)
and next-to-leading-order (NLO) contributions of the pions, which come
with integer powers of $N.$ This can also be seen by comparing the
expression of the pion propagator derived in (\ref{Eq:Gp}) with
Eqs.~(53) and (54) of \cite{fejos09}.
We use the imaginary-time formalism of finite-temperature field theory,
in which the four-momentum is $k=(i\omega_n,{\bf k}).$
The Matsubara frequencies are $\omega_n=2\pi n T$ for bosons,
while for fermions they also depend on the color index through the
color-dependent chemical potential $\mu_i$ introduced in
(\ref{Eq:c-dep_mu}) and are given by
$\omega_n=(2 n+1)\pi T-i \mu_i.$ The meaning of the integration symbol
in (\ref{Eq:Omega_2PI}) is
\begin{equation}
\int_k = i T\sum_{n} \int_{{\bf k}}\equiv i T\sum_{n}\int\frac{d^3 {\bf k}}{(2\pi)^3}.
\label{Eq:sum_int_def}
\end{equation}
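As a quick numerical illustration of the frequency sums behind (\ref{Eq:sum_int_def}), the stdlib-only Python sketch below (an illustration with arbitrary values of $E$ and $T$, not part of the computations of this paper) checks the standard fermionic sum at $\mu=0$, $T\sum_n \big[\omega_n^2+E^2\big]^{-1}=\tanh\big(E/(2T)\big)/(2E)$, with $\omega_n=(2n+1)\pi T$:

```python
import math

def matsubara_sum(E, T, nmax):
    """Truncated fermionic Matsubara sum T * sum_n 1/(w_n^2 + E^2)
    at mu = 0, with w_n = (2n+1)*pi*T."""
    total = 0.0
    for n in range(-nmax, nmax):   # symmetric truncation of the tower
        w = (2 * n + 1) * math.pi * T
        total += 1.0 / (w * w + E * E)
    return T * total

def closed_form(E, T):
    """Known closed form of the same sum: tanh(E/(2T)) / (2E)."""
    return math.tanh(E / (2.0 * T)) / (2.0 * E)

E, T = 0.5, 0.2                    # arbitrary illustrative values
approx = matsubara_sum(E, T, 100000)
exact = closed_form(E, T)
```

The same truncation strategy applies to the color-dependent case, in which $\omega_n$ acquires the shift $-i\mu_i$.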
The dependence on $\Phi$ and $\bar\Phi$ of the fermionic trace-log term
in the grand potential $\Omega$ and of the quark-pion setting-sun in
$\Gamma_\tn{2PI}$ results from the fact that, as shown in the Appendix
between Eqs.~(\ref{Eq:I1_def}) and (\ref{Eq:I1_final}), as well as between
Eqs.~(\ref{Eq:I2}) and (\ref{Eq:T2_decompose}), respectively,
after performing the Matsubara sum the color trace can be
expressed in closed form in terms of the mean-field ($\vec x$-independent)
Polyakov loop $\Phi$ and its conjugate $\bar \Phi.$
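The closed form just mentioned can be checked numerically. With $x=e^{-\beta(E-\mu_q)}$ and eigenvalue phases $\phi_i\in\{\phi_+,\phi_-,-(\phi_++\phi_-)\}$, the color-averaged occupation number obeys the standard identity $\frac{1}{3}\sum_i\big[e^{\beta(E-\mu_i)}+1\big]^{-1}=\big(\Phi x+2\bar\Phi x^2+x^3\big)/\big(1+3\Phi x+3\bar\Phi x^2+x^3\big)$, which follows from $\det(1+Lx)=1+3\Phi x+3\bar\Phi x^2+x^3$. The stdlib-only Python sketch below verifies this with arbitrary background values; note that here $\Phi=\frac{1}{3}\sum_i e^{-i\beta\phi_i}$, which may differ by complex conjugation from the convention adopted for $\Phi$ and $\bar\Phi$ elsewhere in the text:

```python
import cmath

beta, E, mu_q = 5.0, 0.6, 0.1      # arbitrary illustrative values
phi_p, phi_m = 0.07, 0.03          # arbitrary background fields

# Eigenvalue phases of the Polyakov-loop matrix; they sum to zero.
phis = [phi_p, phi_m, -(phi_p + phi_m)]
lams = [cmath.exp(-1j * beta * p) for p in phis]
Phi = sum(lams) / 3.0                       # traced Polyakov loop
Phibar = sum(1.0 / l for l in lams) / 3.0   # its conjugate (|lam| = 1)

x = cmath.exp(-beta * (E - mu_q))

# Color average of the occupations with mu_i = mu_q - i*phi_i, using
# 1/(exp(beta*(E - mu_i)) + 1) = lam_i*x / (1 + lam_i*x).
n_color = sum(l * x / (1.0 + l * x) for l in lams) / 3.0

# Closed form in terms of Phi and Phibar.
n_closed = (Phi * x + 2.0 * Phibar * x**2 + x**3) \
           / (1.0 + 3.0 * Phi * x + 3.0 * Phibar * x**2 + x**3)
```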
What we evaluate in this work is not the grand potential
(\ref{Eq:Omega_2PI}), but rather its
derivatives, that is the equations for the two-point functions and the
field equations, which are given by the stationary conditions
\begin{equation}
\frac{\delta \Omega}{\delta G}=\frac{\delta \Omega}{\delta G_\pi}=
\frac{\delta \Omega}{\delta G_\sigma}=\frac{\delta \Omega}{\delta v}=
\frac{\delta \Omega}{\delta \Phi}=\frac{\delta \Omega}{\delta \bar\Phi}=0.
\label{Eq:stac_cond}
\end{equation}
In each of these equations we will keep the contribution of the
fermions only at the leading order in the large-$N_f$ expansion. The LO
contribution of the fermions is ${\cal O}(\sqrt{N})$ in the field
equations of $\Phi$ and $\bar\Phi,$ ${\cal O}(1)$ in the equation for
the fermion propagator $G,$ and ${\cal O}(1/\sqrt{N})$ in the
remaining equations, that is the field equation of $v$, and the equations
of $G_\pi$ and $G_\sigma.$
It is easy to see that the third and fourth terms on the right-hand side of
(\ref{Eq:Omega_2PI}) do not contribute to any of the equations at the
order of interest, and that the second term contributes
only in the equation for the sigma propagator
\begin{equation}
i G_\sigma^{-1}(p)=i D_\sigma^{-1}(p)+\frac{\lambda v^2}{3}-
\frac{\lambda}{6}\int_k G_\pi(k)-
\frac{\lambda v^2}{3} \frac{1}{1-\lambda \Pi(p)/6}
-\frac{i g^2}{\sqrt{N}}\textrm{tr}_{D,c}\int_k G(k) G(k+p) + \tn{c.t.}.
\label{Eq:Gs}
\end{equation}
In fact, the equation for $G_\sigma$ decouples, in the sense that $G_\sigma$
will not appear in any of the remaining five equations. Nevertheless,
it plays an important role in the parametrization of the model,
as will be shown in Sec.~\ref{ss:param}.
\subsection{Approximations made to solve the model \label{ss:approx}}
In this work we use some approximations to solve the set of
coupled equations coming from (\ref{Eq:stac_cond}).
1.~As a first approximation we disregard the self-consistent equation for
the fermions arising from $\delta\Omega/\delta G=0,$ that is
\begin{equation}
i G^{-1}(k)=i D^{-1}(k)-i g^2\int_p \gamma_5 G(p) \gamma_5 G_\pi(p-k)
+ \tn{c.t.},
\label{Eq:G}
\end{equation}
and simply use in the remaining five equations the tree-level fermion
propagator $D(k).$ A study based on the solution
of the self-consistent equation for the fermion propagator is
left for a forthcoming publication.
Within this approximation the field equation for $v$, hereinafter
called equation of state (EoS), and the pion propagator simplify
considerably. Upon working out the Dirac structure, the contribution
of the penultimate term of (\ref{Eq:Omega_2PI}) to the pion propagator
breaks up into a linear combination of a fermionic tadpole
$\tilde T(m_q)$ and a bubble integral $\tilde I(p;m_q)$.
Introducing the propagator
\begin{equation}
D_0(k)=\frac{i}{k^2-m_q^2},
\label{Eq:D0_prop}
\end{equation}
these integrals are defined as
\begin{eqnarray}
\label{Eq:T_q_def}
\tilde T(m_q)&=&\frac{1}{N_c}\sum_{i=1}^{N_c}\int_k D_0(k),\\
\tilde I(p;m_q)&=&\frac{1}{N_c}\sum_{i=1}^{N_c}
\left[-i\int_q D_0(q) D_0(q+p)\right].
\label{Eq:I_q_def}
\end{eqnarray}
In terms of these integrals which are evaluated in the Appendix between
Eqs.~(\ref{Eq:T_q}) and (\ref{Eq:I_q_beta_finite}) one obtains:
\begin{eqnarray}
0&=&N v\left[m^2+\frac{\lambda}{6}\left(v^2+\int_k G_\pi(k)\right)
-\frac{4 g^2 N_c}{\sqrt{N}}\tilde T(m_q)-\frac{h}{v}
\right] + \tn{c.t.}\, ,
\label{Eq:EoS}
\\
i G_\pi^{-1}(k)&=&k^2-m^2-\frac{\lambda}{6}\left(v^2+\int_p G_\pi(p)\right)
+ \frac{4g^2 N_c}{\sqrt{N}} \tilde T(m_q) -
\frac{2 g^2 N_c}{\sqrt{N}} k^2 \tilde I(k;m_q) + \tn{c.t.}\, .
\label{Eq:Gp}
\end{eqnarray}
One can see that the Goldstone theorem is fulfilled, since using the
EoS in the equation for the pion propagator one obtains
$i G_\pi^{-1}(k=0)=-h/v.$ This is, however, accidental, because the Ward identity
relating the inverse fermion propagator and the proper vertex
$\Gamma_{\pi^a\psi\bar\psi}=\delta^3 \Gamma/\delta \bar\psi\delta \psi \delta \pi^a$ (see {\it e.g.} Eq.~(13.102) of \cite{zinn-justin})
\begin{equation}
-\frac{i}{2} T_a\Big\{\gamma_5,i G^{-1}(p)\Big\}=v\sqrt{\frac{N}{2 N_f}}
\Gamma_{\pi^a\psi\bar\psi}(0,p,-p),
\label{Eq:WI}
\end{equation}
is satisfied only with tree-level propagators and vertices. The
relation above is violated at any order of the perturbation theory in
the large-$N_f$ approximation, since in view of (\ref{Eq:G}) the
corrections to the inverse tree-level fermion propagator are of
${\cal O}(1),$ while the corrections to the tree-level
$\pi-\psi-\bar\psi$ vertex are suppressed by $1/N.$
2.~A further approximation concerns the self-consistent pion
propagator (\ref{Eq:Gp}). In this work four approximations for $G_{\pi}$
are considered: two local ones, and two nonlocal ones
obtained using an expansion in $1/\sqrt{N}.$ In the local
approximation one parametrizes the pion propagator as
\begin{equation}
G_{\pi,l}(p)=\frac{i}{p^2-M^2},
\label{Eq:Gp_local}
\end{equation}
and uses this form in all equations. In the {\it first} variant
$M^2$ is determined as a pole mass from
$i G^{-1}_{\pi,l}(p_0^2=M^2,{\bf p}=0)=0$ by the self-consistent gap equation
arising from (\ref{Eq:Gp})
\begin{equation}
M^2=m^2+\frac{\lambda}{6} \left(v^2+T_F(M)\right)
-\frac{4 g^2 N_c}{\sqrt{N}}\tilde T_F(m_q)
+\frac{2 g^2 N_c}{\sqrt{N}}M^2 \tilde I_F(M,\bm{0};m_q).
\label{Eq:gap_pole}
\end{equation}
In a {\it second} variant $M^2$ is determined from
$M^2=-i G^{-1}_{\pi,l}(p=0)$, in which case the gap equation becomes
\begin{equation}
M^2=m^2+\frac{\lambda}{6}\left(v^2+T_F(M)\right)
-\frac{4g^2 N_c}{\sqrt{N}} \tilde T_F(m_q).
\label{Eq:gap_p0}
\end{equation}
The subscript $F$ denotes the finite part of the integrals defined in
Eqs.~(\ref{Eq:T_q_def}) and (\ref{Eq:I_q_def}), which are given
explicitly in Eq.~(\ref{Eq:Tad_q_F_decomp}) and
Eqs.~(\ref{Eq:I_q_finite})-(\ref{Eq:I_q_beta_finite}). In this way
the finite parts of all vacuum pieces are contained in our
equation. The importance of these terms for the thermodynamics of the
PQM model was pointed out in \cite{nakano10}. In view of the EoS
(\ref{Eq:EoS}) the two definitions of $M^2$ coincide in the chiral
limit $h=0,$ where for both variants one has $M^2=0.$ We note that due
to their self-consistent nature, when (\ref{Eq:gap_pole}) or
(\ref{Eq:gap_p0}) is solved, a series containing all orders of
$1/\sqrt{N}$ is in fact resummed.
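To illustrate how such a self-consistent gap equation is solved in practice, the Python sketch below (stdlib only) iterates a simplified version of (\ref{Eq:gap_p0}) to its fixed point. This is a toy setting: the fermionic tadpole is dropped, only the thermal part of the pion tadpole is kept, and all numbers are illustrative choices in arbitrary units, not the parametrization of Sec.~\ref{ss:param}:

```python
import math

def thermal_tadpole(M2, T, kmax=20.0, nk=2000):
    """Thermal part of the pion tadpole,
    int d^3k/(2 pi)^3  n_B(E_k)/E_k with E_k = sqrt(k^2 + M2),
    evaluated with a simple midpoint rule."""
    dk = kmax / nk
    total = 0.0
    for i in range(nk):
        k = (i + 0.5) * dk
        E = math.sqrt(k * k + M2)
        total += k * k / (E * (math.exp(E / T) - 1.0))
    return total * dk / (2.0 * math.pi ** 2)

def solve_gap(m2, lam, v, T, tol=1e-10, itmax=1000):
    """Fixed-point iteration of M^2 = m^2 + (lam/6)*(v^2 + T_F(M))."""
    M2 = m2 + lam * v * v / 6.0          # tree-level starting guess
    for _ in range(itmax):
        M2_new = m2 + lam / 6.0 * (v * v + thermal_tadpole(M2, T))
        if abs(M2_new - M2) < tol:
            return M2_new
        M2 = M2_new
    return M2

# Illustrative numbers in arbitrary units, chosen so M^2 stays positive.
M2 = solve_gap(m2=0.1, lam=10.0, v=0.3, T=0.5)
```

Each pass through the loop adds one more layer of the resummed series, so the converged $M^2$ indeed contains all orders of the iteration.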
The {\it third}, nonlocal variant of the pion equation is
derived using a $1/\sqrt{N}$ expansion of the pion propagator (\ref{Eq:Gp})
after exploiting the EoS (\ref{Eq:EoS}). One obtains
\begin{eqnarray}
G_\pi(p)&=&\frac{i}{p^2-\frac{h}{v}-\frac{2 g^2 N_c}{\sqrt{N}}p^2
\tilde I_F(p;m_q)}
=\frac{i}{p^2-\frac{h}{v}}+\frac{2 g^2 N_c}{\sqrt{N}}
\frac{i p^2 \tilde I_F(p;m_q)}{\left(p^2-\frac{h}{v}\right)^2}
+{\cal O}\left(\frac{1}{N}\right).
\label{Eq:Gp_third}
\end{eqnarray}
With this form of the pion propagator the EoS reads
\begin{equation}
m^2+\frac{\lambda}{6}\left(v^2+T_F(M)\right)
+\frac{2 g^2 N_c}{\sqrt{N}} J_F(M,m_q)
-\frac{4 g^2 N_c}{\sqrt{N}} \tilde T_F(m_q)=\frac{h}{v},
\label{Eq:EoS_third}
\end{equation}
where in this case $M^2=h/v$ and we have introduced the integral
\begin{equation}
J(M,m_q)=-i\int_p G^2_{\pi,l}(p) p^2 \tilde I_F(p;m_q).
\label{Eq:J_def}
\end{equation}
Solving this equation for $v$ shows that this approximation still resums
infinitely many orders in $1/\sqrt{N}$.
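The structure of the truncation in (\ref{Eq:Gp_third}) is simply that of expanding $i/(A-\epsilon)$ to first order in $\epsilon$. The short Python check below (with arbitrary stand-in numbers, $\epsilon$ playing the role of the $2g^2N_c\,p^2\tilde I_F/\sqrt{N}$ term) verifies that the neglected remainder scales as ${\cal O}(\epsilon^2)$, that is as ${\cal O}(1/N)$:

```python
# Toy check of i/(A - eps) = i/A + i*eps/A**2 + O(eps**2); here A stands
# for p^2 - h/v and eps for the fermionic self-energy term of the
# expanded pion propagator.
A = 2.0

def remainder(eps):
    """Distance between the full expression and its first-order expansion."""
    full = 1j / (A - eps)
    first_order = 1j / A + 1j * eps / A**2
    return abs(full - first_order)

r1 = remainder(0.1)
r2 = remainder(0.05)
ratio = r1 / r2   # close to 4: halving eps quarters the O(eps^2) remainder
```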
A {\it fourth} variant of $G_{\pi},$ which, being a strict expansion
in $1/\sqrt{N},$ includes terms only up to
${\cal O}(1/\sqrt{N}),$ can be obtained by expanding not only the
nonlocal, momentum-dependent part of the self-energy in the pion
propagator (\ref{Eq:Gp}), but also its local part. This is explicitly
constructed including counterterms in Sec.~\ref{sec:renorm}, where a
diagrammatic illustration of the approximation is also given.
For this approximation, the pion propagator is given by
Eqs.~(\ref{Eq:Gp_expanded}), (\ref{Eq:M2_LO_finite}),
and (\ref{Eq:M2_NLO_finite}), while the EoS is given in (\ref{Eq:d_all}).
\subsection{Parametrization\label{ss:param}}
The mass parameter $m^2,$ the couplings $g,\lambda,$ the
renormalization scale $M_{0B},$ the vacuum expectation value $v_0$,
and the external field $h,$ which vanishes in the chiral limit, are
determined at $T=\mu_q=0$ using some information from the sigma sector
and the following physical quantities: the pion decay constant
$f_\pi=93$~MeV and its mass $m_\pi=140$~MeV, and the constituent quark
mass taken to be $M_q=m_N/3=313$~MeV. From the sigma sector we use the
mass and the width of the sigma particle and the behavior of the
spectral function. $v_0$ is determined from
the matrix element of the axial vector current between the vacuum
state and a one-pion state, which due to the rescaling of the vacuum
expectation value by $\sqrt{N}$ gives $v_0=f_\pi/2.$ The value of the
Yukawa coupling $g=6.7$ is obtained by equating the tree-level fermion
mass $m_q$ with $M_q.$ The parameters $\lambda$ and $M_{0B}$ are
determined from the sigma propagator, as will be detailed below.
Having determined them, in the chiral limit $m^2$ is fixed from the EoS,
while in the case of the physical pion mass the remaining parameters $m^2$
and $h$ are determined as follows. If the local approximation for the
pion propagator is used $m^2$ is determined from the gap equation by
requiring $M^2=m_\pi^2,$ and $h$ is obtained from the EoS. When
$G_\pi$ is approximated using a large-$N_f$ expansion $h$ is fixed by
requiring $h=m_\pi^2 v_0,$ and $m^2$ is determined from the EoS.
Now we turn to the issue of fixing $\lambda$ and $M_{0B}.$
Using in (\ref{Eq:Gs}) the tree-level fermion propagator together with
the local approximation (\ref{Eq:Gp_local}) for the pion propagator
and also the equation of state (\ref{Eq:EoS}), one obtains after working out
the Dirac structure the following form for the sigma propagator:
\begin{equation}
i G_\sigma^{-1}(p)=p^2-\frac{h}{v}-\frac{\lambda v^2}{3}
\frac{1}{1-\lambda I_F(p;M)/6}+\frac{2 g^2 N_c}{\sqrt{N}} (4m_q^2-p^2)
\tilde I_F(p;m_q).
\label{Eq:Gs_param}
\end{equation}
The integral $I_F(p;M),$ obtained using the local approximation
(\ref{Eq:Gp_local}) for the pion propagator with $M^2=m_\pi^2,$
can be found in Eqs.~(10) and (11) of Ref.~\cite{patkos02} with
$M_0$ replaced by $M_{0B},$ while $\tilde I_F(p;m_q)$ is given in
Eqs.~(\ref{Eq:I_q_finite})-(\ref{Eq:I_q_beta_finite}).
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio,width=0.5\textwidth,angle=0]{sigma}
\caption{The $\lambda$ dependence of the real and imaginary parts of the
complex sigma pole $p_0=M_\sigma-i\Gamma_\sigma/2$ and of the Landau ghost
$M_L$ in the chiral limit and for the physical pion mass. The upper curve for
$M_\sigma$ and the lower one for $\Gamma_\sigma$ represent the case of
the physical pion mass, as indicated on the plot. $M_L$ is shown only
in this case, for in the chiral limit there is very little difference.}
\label{Fig:sigma}
\end{center}
\end{figure}
Both in the chiral limit $M=0$ and for $M=m_\pi$ the self-energy has
two cuts along the positive real axis of the complex $p_0$ plane,
above the thresholds of the pion and fermion bubble integrals, which start
at $p^2=4 M^2$ and $p^2=4 m_q^2,$ respectively. Above these thresholds
the respective pion and fermion bubble integrals have nonvanishing
imaginary parts. We search for poles of the sigma propagator
analytically continued between the two cuts to the second Riemann
sheet in the form $i G^{-1}_\sigma(p_0=\kappa e^{-i\phi},{\bf p}=0)=0$.
The pole is parametrized as $p_0=M_\sigma-i \Gamma_\sigma/2,$ with the
real and imaginary parts corresponding to the mass and the half-width
of the sigma particle. The solution for $M_\sigma$ and
$\Gamma_\sigma$ is shown in Fig.~\ref{Fig:sigma} both in the chiral
limit ($h=m_\pi=0$) and for the $h\ne 0$ case. Similar to the case of
the $O(N)$ model studied in Ref.~\cite{patkos02}, in the chiral limit
the value of $M_\sigma$ is a little smaller and the value of
$\Gamma_\sigma$ larger than in the $h\ne 0$ case. Comparing
Fig.~\ref{Fig:sigma} with Fig.~2 of Ref.~\cite{patkos02}, obtained in
the $O(N)$ model, that is without fermions, one sees that the
$M_\sigma(\lambda)$ curve moves slightly upward while the
$\Gamma_\sigma(\lambda)$ curve moves significantly downward, which
means that in the present case the
phenomenologically expected value \cite{caprini06}
$M_\sigma/\Gamma_\sigma\sim 1$ cannot be achieved for any value of the
coupling $\lambda.$ Another difference is that for low values of
$\lambda$ there are two poles of $G_\sigma$ on the negative imaginary
axis in contrast to only one such pole in the $O(N)$ model. These
poles approach each other as $\lambda$ increases and after they
collide at a given value of $\lambda$ there are two complex poles at
higher $\lambda$, one with positive and one with negative real
part. The imaginary part of the complex pole having positive real part
is shown in Fig.~\ref{Fig:sigma} for the renormalization scale
$M_{0B}=885$~MeV. As explained in the chiral-limit study of
\cite{toni04}, for lower values of the renormalization scale the
scale $M_L$ of the lower Landau ghost on the imaginary axis comes even
closer to $M_\sigma$, and as a result the spectral function of the
sigma is heavily distorted. In order to avoid this, and based on the
ratio $M_\sigma/\Gamma_\sigma$, we have chosen $\lambda=400$ and
$M_{0B}=885$~MeV. For these values $M_\sigma=456$~MeV and
$\Gamma_\sigma=221$~MeV in the chiral case, while $M_\sigma=474$~MeV
and $\Gamma_\sigma=152$~MeV for the case of a physical pion
mass. These values are used throughout Sec.~IV also in the case when
the pion propagator is expanded in $1/\sqrt{N},$ that is in the third
and fourth variants of $G_\pi$ discussed in
Sec.~\ref{ss:approx}.
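Numerically, the pole search described above is a complex root-finding problem for $iG_\sigma^{-1}(p_0)$ continued to the second Riemann sheet. The stdlib-only Python sketch below isolates this ingredient on a toy inverse propagator with a constant width term, $f(p_0)=p_0^2+ic\,p_0-m^2$, whose pole $p_0=\sqrt{m^2-c^2/4}-ic/2$ is known analytically; the actual self-energy of (\ref{Eq:Gs_param}) is of course more involved, so this is only an illustration of the method:

```python
def newton_complex(f, df, z0, tol=1e-12, itmax=100):
    """Newton iteration for a complex root of f, starting from z0."""
    z = z0
    for _ in range(itmax):
        step = f(z) / df(z)
        z = z - step
        if abs(step) < tol:
            break
    return z

# Toy inverse sigma propagator with a constant width term c
# (illustrative numbers, loosely inspired by the MeV values quoted
# above, here in GeV-like units).
m, c = 0.456, 0.221

def f(p0):
    return p0 * p0 + 1j * c * p0 - m * m

def df(p0):
    return 2.0 * p0 + 1j * c

# Start near the would-be mass and parametrize the pole as
# p0 = M_sigma - i*Gamma_sigma/2.
pole = newton_complex(f, df, complex(m, 0.0))
M_sigma = pole.real
Gamma_sigma = -2.0 * pole.imag
```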
\section{Renormalization \label{sec:renorm}}
The lesson one can learn from the successful renormalization in 2PI
\cite{hess02,blaizot04,borsanyi05,reinosa06,patkos08} or large-$N$
\cite{fejos09,toni09,fejos08} approximations is that, due to the
systematic nature of these expansions, the counterterms can be obtained
by analyzing the structure of the equations. In these cases there is
no need for an order-by-order detailed study of the counterterm
diagrams which becomes rather complicated due to the proliferation of
the diagrams. If one uses some ad hoc approximation which spoils the
self-consistent nature of the propagator equations or is not
systematic, then one loses the possibility of explicitly or uniquely
determining the counterterms from the equations.
In this section we discuss the renormalization of the model in the
case of a strict $1/\sqrt{N}$ expansion in the pion propagator and the
equation of state. This expansion is not entirely consistent because
as mentioned in Sec.~\ref{ss:approx} we use for simplicity
tree-level fermion propagators despite the fact that this expansion
produces ${\cal O}(1)$ corrections in the fermion propagator
(\ref{Eq:G}) which all have to be resummed. We will see that as a
consequence of this approximation the best one can achieve is to
determine the counterterms to some order in the Yukawa coupling. It
will turn out that the order depends on the equation and the type of
subdivergence we are looking at.
The counterterm functional needed to renormalize the pion propagator
and the equation of state reads
\begin{eqnarray}
\nonumber
\Omega_\tn{ct}[G_\pi,v]&=&
\frac{N}{2}\delta m_0^2 v^2+\frac{N}{24}\delta\lambda_4 v^4
+\frac{N-1}{2}\left(\delta m_2^2+\frac{\delta\lambda_2}{6} v^2\right)
\int_k G_\pi(k) \\
&&+(N-1)\frac{\delta\lambda_0}{24}\left[\int_k G_\pi(k)\right]^2
-(N-1)\frac{\delta Z}{2}\int_k k^2 G_\pi(k).
\label{Eq:Omega_ct}
\end{eqnarray}
Compared to the counterterm functional used in Eq.~(48) of
\cite{fejos09} to renormalize the stationary equations of the $O(N)$
model the only difference in (\ref{Eq:Omega_ct}) is the appearance of
the term containing the wave-function renormalization counterterm
$\delta Z.$ This is needed to remove the momentum-dependent divergence
of the fermionic contribution to the pion propagator (\ref{Eq:Gp})
rewritten as
\begin{equation}
i G_\pi^{-1}(k)=(1+\delta Z)k^2-M^2-\frac{2 g^2 N_c}{\sqrt{N}}\,
k^2 \tilde I(k;m_q).
\label{Eq:Gp_ct}
\end{equation}
The local part $M^2$ containing the remaining counterterms reads
\begin{equation}
M^2=m^2+\delta m_2^2+\frac{\lambda+\delta\lambda_2}{6} v^2
+\frac{\lambda+\delta\lambda_0}{6} \int_p G_\pi(p)
-\frac{4 g^2 N_c}{\sqrt{N}}\tilde T(m_q).
\label{Eq:M2_ct}
\end{equation}
Using in (\ref{Eq:Gp_ct}) the notation introduced in (\ref{Eq:I_q_div})
one can readily determine $\delta Z:$
\begin{equation}
\delta Z=\frac{2 g^2 N_c}{\sqrt{N}} \tilde I_\tn{div}(k;m_q)
=\frac{2 g^2 N_c}{\sqrt{N}} T_d^{(0)}.
\end{equation}
Separating the LO and NLO contributions in the local part given in
(\ref{Eq:M2_ct}) by writing $M^2=M^2_\tn{LO}+M^2_\tn{NLO}/\sqrt{N},$
and expanding the pion propagator (\ref{Eq:Gp_ct}) in powers of
$1/\sqrt{N},$ one obtains
\begin{eqnarray}
G_\pi(p)
=G_\tn{LO}(p)-
i \frac{G^2_\tn{LO}(p)}{\sqrt{N}}\left[
M^2_\tn{NLO}+2 N_c g^2 p^2 \tilde I_F(p;m_q)\right]
+{\cal O}\left(\frac{1}{N}\right),
\label{Eq:Gp_expanded}
\end{eqnarray}
where $G_\tn{LO}(p)=i/(p^2-M^2_\tn{LO}).$
The counterterms are also written as the sum of LO and NLO
contributions $\delta m_i^2=\delta {m^2_i}^{(0)}+\delta {m^2_i}^{(1)}/\sqrt{N},$
$\delta\lambda_i=\delta\lambda_i^{(0)}+\delta\lambda_i^{(1)}/\sqrt{N}$
and used together with (\ref{Eq:Gp_expanded}) in (\ref{Eq:M2_ct})
to obtain the equations for the LO and NLO local parts
\begin{subequations}
\begin{eqnarray}
&&
M^2_\tn{LO}=m^2+\delta {m^2_2}^{(0)}+
\frac{\lambda+\delta\lambda_2^{(0)}}{6} v^2+
\frac{\lambda+\delta\lambda_0^{(0)}}{6} T(M_\tn{LO}),
\label{Eq:M2_LO}
\\
&&
M^2_\tn{NLO}
\left[
\frac{1}{\lambda_B^{(0)}}-\frac{I(0;M_\tn{LO})}{6}\right]=
\frac{1}{\lambda_B^{(0)}}\left[
\delta {m^2_2}^{(1)}+\frac{\delta\lambda_2^{(1)}}{6}v^2+
\frac{\delta\lambda_0^{(1)}}{6} T(M_\tn{LO})\right]
-\frac{4 g^2 }{\lambda_B^{(0)}} N_c \tilde T(m_q)
+\frac{2 g^2 }{6} N_c J(M_\tn{LO},m_q),\ \ \
\label{Eq:M2_NLO}
\end{eqnarray}
\end{subequations}
where we divided the second equation by
$\lambda_B^{(0)}=\lambda+\delta\lambda_0^{(0)}$ and used the integral
introduced in (\ref{Eq:J_def}) with $G_{\pi,l}$ replaced by $G_\tn{LO}.$
Compared to the perturbative renormalization of the fermionic trace-log
contribution to the effective potential performed in \cite{skokov10b},
the difficulty here is that, due to the self-consistent nature of the
pion propagator, an infinite series of coupling counterterms of the mesonic
part, that is $\delta\lambda_0,$ $\delta\lambda_2,$ $\delta m_0^2,$ and
$\delta m_2^2$ has to be determined to ${\cal O}(\sqrt{N})$ of the
large-$N$ expansion. In order to achieve this we apply the method developed
in Refs.~\cite{fejos08,patkos08,fejos10}. The method for determining the
counterterms appearing in a particular equation resides in separating
the divergent parts of the integrals contained in that
equation. This separation is obtained by expanding the propagators around an
appropriately defined auxiliary propagator (see Appendix~A of
\cite{patkos08}). Then, the finite part of the integrals is used to
write down a finite equation, which, when subtracted from the original
equation, provides a relation between counterterms and
divergences. This relation still involves the vacuum expectation value
$v$ and the finite part of some integrals, {\it e.g.} $T_F,$ the
finite part of the pion tadpole. Requiring as in \cite{fejos08} the
vanishing of the coefficient of $v^2$ and $T_F$ (cancellation of
subdivergences) one obtains the coupling counterterms, while the
remaining part of the relation mentioned above gives the mass
counterterm (cancellation of the overall divergence).
To apply the above method to Eq.~(\ref{Eq:M2_LO}) for
$M^2_\tn{LO}$, which is the gap equation of the $O(N)$ model at
leading order of the large-$N$ approximation, one uses the expression
of the pion tadpole given in (\ref{Eq:Tad_pi}) in terms of finite and
divergent pieces. Retaining in (\ref{Eq:M2_LO}) the finite part of
the tadpole one obtains the LO finite gap equation
\begin{equation}
M^2_\tn{LO}=m^2+\frac{\lambda}{6}\big[v^2+T_F(M_\tn{LO})\big].
\label{Eq:M2_LO_finite}
\end{equation}
Subtracting this from (\ref{Eq:M2_LO}) one requires the vanishing of
the coefficient of $v^2$ and $T_F(M_\tn{LO})$ in the resulting equation. This
determines the LO coupling counterterms
\begin{equation}
\delta\lambda_2^{(0)}=\delta\lambda_0^{(0)}=
-\frac{\lambda^2}{6}\frac{T_d^{(0)}}{1+\lambda T_d^{(0)}/6},
\label{Eq:LO_delta_lambda}
\end{equation}
while requiring the cancellation of the remaining overall divergence
determines the LO mass counterterm
$\delta {m^2_2}^{(0)}=-\big(\lambda+\delta\lambda_0^{(0)}\big)
\left[T_d^{(2)}+[M^2-M_0^2] T_d^{(0)}\right]/2.$
The determination of the counterterms in the equation for the NLO local part
in the pion propagator parallels to some extent the analysis of the
NLO divergences in the $O(N)$ model discussed in Sec.~VI~B
of \cite{fejos09}. One observes that since $I_\tn{div}(0;M_\tn{LO})=T_d^{(0)},$
in view of (\ref{Eq:LO_delta_lambda}) the left-hand side of
(\ref{Eq:M2_NLO}) is finite and it actually enters the finite equation for
$M^2_\tn{NLO}$
\begin{equation}
M^2_\tn{NLO}
\left[
\frac{1}{\lambda}-\frac{I_F(0;M_\tn{LO})}{6}
\right]=
-\frac{4 g^2}{\lambda}N_c \tilde T_F(m_q)
+\frac{2 g^2}{6} N_c J_F(M_\tn{LO},m_q).
\label{Eq:M2_NLO_finite}
\end{equation}
Subtracting (\ref{Eq:M2_NLO_finite}) from (\ref{Eq:M2_NLO}) the following
relation between divergences and counterterms is obtained:
\begin{eqnarray}
0=\frac{1}{\lambda_B^{(0)}}\left[
\delta {m^2_2}^{(1)}+\frac{\delta\lambda_2^{(1)}}{6}v^2+
\frac{\delta\lambda_0^{(1)}}{6} T(M_\tn{LO})-4g^2 N_c \tilde T_\tn{div}(m_q)
\right]
-4 g^2 N_c \frac{T_d^{(0)}}{6}\tilde T_F(m_q)
+\frac{2 g^2}{6} N_c J_\tn{div}(M_\tn{LO},m_q).
\end{eqnarray}
Then we use the expression of $J_\tn{div}(M_\tn{LO},m_q)$ given in
(\ref{Eq:J_div}) in terms of $M_\tn{LO},m_q^2,$ and $\tilde T(m_q)$
in which one substitutes for $M_\tn{LO}$ its expression from
(\ref{Eq:M2_LO_finite}). The terms proportional to $\tilde T_F(m_q)$
cancel. The overall divergences determine the form of $\delta {m^2_2}^{(1)}.$
The remaining terms proportional to $v^2$ and $T_F(M_\tn{LO})$
can be grouped as
\begin{eqnarray}
\nonumber
\dots
&+&
\frac{v^2}{6}\Bigg\{
\delta\lambda_2^{(1)}+
\frac{\lambda \delta\lambda_0^{(1)}}{6} T_d^{(0)}
+\frac{2 g^2}{3} N_c\lambda \lambda_B^{(0)} T_d^{(I)}
+4 g^4 N_c \left[\lambda_B^{(0)}
\big(T_d^{(I)}+(T_d^{(0)})^2\big)-6 g^2 T_d^{(0)}
\right]
\Bigg\}
\\
&+&
\frac{T_F(M_\tn{LO})}{6}\,
\left[
\delta\lambda_0^{(1)}+
\frac{\lambda \delta\lambda_0^{(1)}}{6} T_d^{(0)}
+\frac{2 g^2}{3} N_c\lambda \lambda_B^{(0)} T_d^{(I)}
\right]=0.
\label{Eq:delta_lambda_NLO_Gp}
\end{eqnarray}
Requiring the vanishing of the coefficient of $v^2$ and $T_F(M_\tn{LO})$
determines the NLO coupling counterterms $\delta\lambda_2^{(1)}$ and
$\delta\lambda_0^{(1)}.$ One can see that these counterterms agree at
${\cal O}(g^2)$ but differ at ${\cal O}(g^4)$.
A completely similar analysis performed on the EoS
\begin{equation}
m^2+\delta m_0^2+\frac{\lambda+\delta\lambda_4}{6}v^2
+\frac{\lambda+\delta\lambda_2}{6}\int_k G_\pi(k)
-\frac{4 g^2 N_c}{\sqrt{N}}\tilde T(m_q)=\frac{h}{v}
\end{equation}
gives an equation analogous in form and meaning
to (\ref{Eq:delta_lambda_NLO_Gp})
\begin{eqnarray}
\nonumber
\dots
&+&
\frac{v^2}{6}\,\Bigg\{
\delta\lambda_4^{(1)}+
\frac{\lambda \delta\lambda_2^{(1)}}{6} T_d^{(0)}
+\frac{2 g^2}{3} N_c \lambda \lambda_B^{(0)} T_d^{(I)}
+4 g^4 N_c \left[
\lambda_B^{(0)} \big(T_d^{(I)}+(T_d^{(0)})^2\big)-6 g^2 T_d^{(0)}\right]
\Bigg\}
\\
&+&
\frac{T_F(M_\tn{LO})}{6}
\left[\delta\lambda_2^{(1)}+
\frac{\lambda \delta\lambda_2^{(1)}}{6} T_d^{(0)}
+\frac{2 g^2}{3} N_c\lambda
\Big(\lambda+\delta\lambda_0^{(0)}\Big) T_d^{(I)}
\right]=0.
\label{Eq:delta_lambda_NLO_EoS}
\end{eqnarray}
From this equation one can see that the NLO coupling counterterms
$\delta\lambda_2^{(1)}$ and $\delta\lambda_4^{(1)}$ agree at
${\cal O}(g^2)$ but differ at ${\cal O}(g^4)$. Moreover, one sees that
$\delta\lambda_2^{(1)}$ determined from the requirement to cancel the
coefficient of $v^2$ in (\ref{Eq:delta_lambda_NLO_EoS}) differs at
${\cal O}(g^4)$ from $\delta\lambda_2^{(1)}$ needed to cancel the
coefficient of $T_F(M_\tn{LO})$ in (\ref{Eq:delta_lambda_NLO_EoS}).
These requirements give the same expression for the counterterm only
at ${\cal O}(g^2).$
The above feature is a consequence of the fact that by keeping the
fermions unresummed one does not take into account all the diagrams
which are of the same order in the large-$N_f$ expansion. It does not
necessarily mean that the approximation we use is unrenormalizable.
Rather we suggest the interpretation that the approximation is such
that different subseries of the counterterms are needed to cancel the
subdivergences of different equations. Although we have a method to
determine the counterterms in each equation, corrections are to be
expected starting at ${\cal O}(g^4).$ If one traces back the origin of
the term proportional to $T_F(M_\tn{LO})$ in
(\ref{Eq:delta_lambda_NLO_Gp}) one finds that it comes from the
expression (\ref{Eq:M2_LO_finite}) for $M^2_\tn{LO}$ used in the
divergent contribution $J_\tn{div}(M_\tn{LO},m_q)$ as given by
(\ref{Eq:J_div}). In turn, the integral $J(M_\tn{LO},m_q)$ defined in
(\ref{Eq:J_def}) is generated through the expansion
(\ref{Eq:Gp_expanded}) by the one-loop fermion bubble contribution to
the pion self-energy. But, when the first correction to the fermion
propagator is included, then we have to take into account in the
square bracket of the expanded pion propagator (\ref{Eq:Gp_expanded})
the contribution of the two-loop self-energy
\begin{equation}
i\raisebox{-0.41cm}{\includegraphics[width=1.75cm]{2loop_pion_self_energy}}=\frac{g^4 N_c}{\sqrt{N}} \Sigma_2(p;M_\tn{LO},m_q).
\label{Eq:Sigma2}
\end{equation}
This will generate in the equation for $M^2_\tn{NLO}$ an integral similar to
$J(M_\tn{LO},m_q)$
\begin{equation}
g^4 K(M_\tn{LO},m_q)=-i g^4 \int_p G^2_\tn{LO}(p) \Sigma_2(p;M_\tn{LO},m_q),
\label{Eq:K}
\end{equation}
which is expected to have a divergence proportional to
$T_F(M_\tn{LO}).$ This divergence would result in ${\cal O}(g^4)$
corrections in the $\delta\lambda_0^{(1)}$ counterterm as determined
from (\ref{Eq:delta_lambda_NLO_Gp}). Therefore, it is expected that
with a resummed fermion propagator, as required by the large-$N_f$
resummation, the determined counterterms will eventually agree in all
equations. It is rather nontrivial to check this conjecture, even at
the two-loop level, because the reduction of the two-loop integral in
(\ref{Eq:Sigma2}) performed with the method of \cite{weiglein94}
produces more than a dozen scalar integrals and their contribution
should be analyzed in (\ref{Eq:K}). We have only checked that at
the two-loop level the Goldstone theorem is indeed violated as mentioned
in Sec.~\ref{ss:approx}, based on the violation of the Ward
identity (\ref{Eq:WI}).
\begin{figure}[htbp]
\centering
$\displaystyle i\sum_{\tn{loops}}$\raisebox{-0.4cm}{
\includegraphics[keepaspectratio,width=0.2\textwidth,angle=0]{superdaisy}}
$\displaystyle=i$
\raisebox{-0.4cm}{
\includegraphics[keepaspectratio,width=0.05\textwidth,angle=0]{summed_superdaisy}}
\qquad\qquad
$\displaystyle i\sum_{\tn{loops}}$
\raisebox{-0.4cm}{\includegraphics[keepaspectratio,width=0.15\textwidth,angle=0]{superdaisy_w_fermioncup}}
$\displaystyle=i\sum_{\tn{skeleton loops}}$
\raisebox{-0.4cm}{\includegraphics[keepaspectratio,width=0.05\textwidth,angle=0]{summed_superdaisy_w_fermioncup}}
\caption{Leading order and next-to-leading order diagrams resummed in the
equation of state obtained by expanding the self-consistent pion propagator
to first order in $1/\sqrt{N}.$ The tree-level pion and fermion propagators
are denoted by thin and double lines, while the thick line represents the
resummed LO pion propagator.}
\label{Fig:diagrammatic}
\end{figure}
Before closing this section we give in Fig.~\ref{Fig:diagrammatic} the
diagrammatic illustration of the equation of state obtained by a
strict expansion to first order in $1/\sqrt{N}$ of the self-consistent
pion propagator. This corresponds to the fourth approximation to the
pion propagator discussed in Sec.~\ref{ss:approx}. Because we
do not draw the counterterm diagrams, we actually obtain the
unrenormalized EoS which reads
\begin{subequations}
\begin{eqnarray}
&&M^2_\tn{LO}+\frac{1}{\sqrt{N}}M^2_\tn{NLO}=\frac{h}{v},
\label{Eq:d1}\\
&&\qquad M^2_\tn{LO}=m^2+\frac{\lambda}{6}\big[v^2+T(M_\tn{LO})\big],
\label{Eq:d2}\\
&&\qquad M^2_\tn{NLO}=\frac{4 g^2 N_c}{1-\frac{\lambda}{6} I(0;M_\tn{LO})}
\left[-\tilde T(m_q)+\frac{\lambda}{12} J(M_\tn{LO},m_q)\right],
\label{Eq:d3}
\end{eqnarray}
\label{Eq:d_all}
\end{subequations}
where $I(0;M_\tn{LO})=d T(M)/(d M^2),$ with $T(M)$ defined in
(\ref{Eq:Tad_pi}). The first set of diagrams consists of the ${\cal O}(\sqrt{N})$
superdaisy diagrams made of pions with tree-level propagators. Their
resummation is clearly provided by (\ref{Eq:d2}), as one can check iteratively.
The second set of diagrams is ${\cal O}(1)$ and contains only a single
insertion of a fermion bubble. Using Feynman rules, one can readily
check that when the chain of pion bubbles is resummed one obtains
\begin{equation}
-\frac{\lambda g^2 v/6}{1-\frac{\lambda}{6} I(0;M_\tn{LO})}
\int_k G_\textnormal{LO}^2(k) \int_p \textrm{tr}_{c,D}[\gamma_5 D(p)\gamma_5 D(k+p)]\
=\frac{4 N_c \lambda g^2 v/6}{1-\frac{\lambda}{6} I(0;M_\tn{LO})}
\bigg[-I(0;M_\tn{LO}) \tilde T(m_q)+ \frac{1}{2} J(M_\tn{LO},m_q)
\bigg].
\end{equation}
Adding this to the fermion tadpole and dividing by $v$ one obtains the
expression of $M^2_\tn{NLO}$ given in (\ref{Eq:d3}). The finite
version of the EoS (\ref{Eq:d_all}) is obtained by replacing the
integrals by their finite parts taken from the Appendix.
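
The iterative character of the resummation provided by (\ref{Eq:d2}) can
be made concrete with a minimal numerical sketch (not part of this work;
the renormalized tadpole $T(M)$ is replaced here by its finite-temperature
bosonic part only, and all quantities are in arbitrary units):

```python
import math

def thermal_tadpole(M, T, n=2000):
    # Finite-temperature part of the bosonic tadpole T(M):
    # integral of k^2 / (2 pi^2 E_k) * 1/(exp(E_k/T)-1); the vacuum
    # part is assumed removed by renormalization (a simplification).
    s, dk = 0.0, 50.0 * T / n
    for i in range(1, n + 1):
        k = i * dk
        E = math.sqrt(k * k + M * M)
        s += k * k / (E * math.expm1(E / T)) * dk
    return s / (2.0 * math.pi ** 2)

def solve_gap(m2, lam, v, T, tol=1e-10, itmax=200):
    # Fixed-point iteration of M^2 = m^2 + (lam/6)*(v^2 + T(M)):
    # each iteration adds one more ring of pion bubbles, so the fixed
    # point resums the superdaisy series order by order.
    M2 = max(m2 + lam / 6.0 * v * v, 1e-6)
    for _ in range(itmax):
        new = m2 + lam / 6.0 * (v * v + thermal_tadpole(math.sqrt(M2), T))
        if abs(new - M2) < tol:
            return new
        M2 = max(new, 1e-12)
    return M2
```

The converged fixed point corresponds to the resummed leading-order pion
mass of Fig.~\ref{Fig:diagrammatic}, obtained here without any counterterm
bookkeeping.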
If one tried to use the method described in this section to renormalize
the EoS (\ref{Eq:EoS}) using the local pion propagator (\ref{Eq:Gp_local})
with a mass determined from the gap equation (\ref{Eq:gap_pole}),
one would encounter an uncanceled subdivergence proportional to
$\tilde I_F(M,{\bf 0};m_q).$
This is an artifact of the local approximation used and a self-consistent
treatment of the propagator would unfold this into renormalizable pieces as
happened already when the propagator was expanded consistently in $1/\sqrt{N}.$
As we have seen, in this case only subdivergences proportional to $v^2$ and
$\tilde T_F(m_q)$ appeared. As a consequence, we will not attempt to explicitly
construct the counterterms when using other approximate forms of the
pion propagator given in Sec.~\ref{ss:approx}.
Fortunately, in these cases, since the propagators are of a tree-level form,
the finite part of the integrals can be easily determined and we assume that
in a given equation the subtraction of the infinite part of an integral can
be achieved by a corresponding subset of the full series of
counterterm diagrams.
\section{The $\bm{\mu_q-T}$ Phase diagram}
The thermodynamics is determined by solving the field equations,
{\it i.e.} the EoS (\ref{Eq:EoS}) and the equations giving the
dependence on $T$ and $\mu_q$ of the two real mean fields $\Phi$ and
$\bar\Phi.$ When the full fermion propagator is replaced by the
tree-level one, one has in view of
Eqs.~(\ref{Eq:I1_def})-(\ref{Eq:I1_final}) and
(\ref{Eq:I2})-(\ref{Eq:T2_decompose})
\begin{eqnarray}
\nonumber
&&\frac{d U(\Phi,\bar\Phi)}{d \Phi}
-2 N_c \sqrt{N} \int \frac{d^3 {\bf k}}{(2\pi)^3}
\frac{k^2}{3 E_k} \left(\frac{d \tilde f_\Phi^+(E_k)}{d \Phi}+
\frac{d \tilde f_{\bar\Phi}^-(E_k)}{d \Phi}
\right)\\
&&+g^2\sqrt{N} N_c \left[
2 \left(\tilde T_F^0(m_q) - T_F(M)\right)
\frac{d \tilde T^\beta(m_q)}{d \Phi}+
\frac{d \tilde T_2^{\beta,2}(m_q)}{d \Phi}
-M^2\left(\frac{d S^{\beta,1}(M,m_q)}{d \Phi}
+\frac{d S^{\beta,2}(M,m_q)}{d \Phi}\right)
\right]
=0,
\label{Eq:dU_dPhi}
\end{eqnarray}
where $E_k=({\bf k}^2+m_q^2)^{\frac{1}{2}}$ and $M$ satisfies either one of
the gap equations (\ref{Eq:gap_pole}), (\ref{Eq:gap_p0}), or
(\ref{Eq:M2_LO}), or the relation $M^2=h/v.$ The other equation is
similar to (\ref{Eq:dU_dPhi}), the only difference is that the
derivative is taken with respect to $\bar\Phi.$ The integral in
(\ref{Eq:dU_dPhi}) is the contribution of the fermionic trace-log
integral defined in Eq.~(\ref{Eq:I1_def}), while the term proportional
to $g^2$ is the contribution of the quark-pion two-loop integral in
(\ref{Eq:Omega_2PI}) given in Eq.~(\ref{Eq:I2}). When solving the
field equations for $\Phi$ and $\bar\Phi$, we disregard for simplicity
the contribution of the setting-sun and keep only the one-loop
contribution coming from the fermionic trace-log. The complete
equation (\ref{Eq:dU_dPhi}) is solved only in one case (see the last
row of Table~\ref{tab:phys_data}) in order to estimate the error made
by neglecting this term in all the other cases. To solve the field
equations we use for the pion propagator a given approximation
described in Sec.~\ref{ss:approx} as will be specified below.
The tricritical point (TCP) and the critical end point (CEP) are
identified as the points along the chiral phase transition line of the
$\mu_q-T$ phase diagram where a first order phase transition turns
with decreasing $\mu_q$ into a second order or crossover transition,
respectively. In case of a crossover, the temperature $T_\chi$ of the
chiral transition is defined as the value where the derivative $d v/d
T$ has a minimum (inflection point of $v(T)$), while the temperature
$T_d$ of the deconfinement transition is obtained as the location of
the maximum in $d\Phi/d T.$ The transition point in the case of a
first order phase transition is estimated by the inflection point
located between the turning points of the multivalued curve $v(\mu_q)$
obtained for a given constant temperature. Although the precise
definition of the 1st order transition point is given by that value of
the intensive parameter for which the two minima of the effective
potential are degenerate, we adopt the definition based on the
inflection point, which is also commonly used in the literature,
because we are not computing the effective potential, but only its
derivatives with respect to the fields and propagators.
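
The numerical criteria just described can be sketched as follows (a
schematic helper, not the code used for this work; the arrays are assumed
to hold the solutions $v(T)$ and $\Phi(T)$ on a fine temperature grid):

```python
import numpy as np

def pseudocritical_temperatures(T, v, Phi):
    # T_chi: inflection point of v(T), i.e. the location of the minimum
    # of dv/dT (v decreases through the crossover, so dv/dT is most
    # negative there).  T_d: location of the maximum of dPhi/dT.
    dv = np.gradient(v, T)
    dPhi = np.gradient(Phi, T)
    return T[np.argmin(dv)], T[np.argmax(dPhi)]

# Synthetic crossover profiles just to exercise the helper:
T = np.linspace(100.0, 250.0, 1501)
v = 0.5 * (1.0 - np.tanh((T - 160.0) / 10.0))
Phi = 0.5 * (1.0 + np.tanh((T - 170.0) / 10.0))
T_chi, T_d = pseudocritical_temperatures(T, v, Phi)  # ~160 and ~170
```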
\subsection{Phase transition in the chiral limit}
In the chiral limit we solve the EoS (\ref{Eq:EoS}) using only the
local approximation to the pion propagator (\ref{Eq:Gp_local}) with
$M^2=0.$ The critical temperature of the chiral transition $T_\chi$
and the pseudocritical temperature $T_d$ of the deconfinement
transition at vanishing chemical potential, and the location of the
TCP are summarized in Table~\ref{tab:chiral_data} for various forms of
the Polyakov-loop potential. On one hand, one can see that with the
inclusion of the Polyakov loop $T_\chi(\mu_q=0)$ and $T_\tn{TCP}$
increase significantly compared with the values obtained earlier in
\cite{toni04} without the Polyakov loop. On the other hand, in all
cases, the inclusion of the Polyakov loop has little effect on the
value of $\mu_q^\tn{TCP}.$ The increase in $T_\chi(\mu_q=0)$ obtained
with the inclusion of the Polyakov-loop effective potential is
basically determined by the value of its parameter $T_0$, while the
value of $T_\tn{TCP}$ shows no significant variation among different
cases having the same value of $T_0$. One can also see that, as
explained in \cite{fukushima08}, the use of the polynomial and
logarithmic effective potentials for the Polyakov loop, that is
(\ref{Eq:P_eff_pot_poly}) and (\ref{Eq:P_eff_pot_log}), drags the
value of $T_\chi(\mu_q=0)$ closer to the value of the parameter $T_0$
than the use of $U_\tn{Fuku}(\Phi,\bar\Phi)$ given in
(\ref{Eq:P_eff_pot_Fuku}). In this latter case one obtains the
smallest value for $T_\tn{TCP}.$
\begin{table}[!t]
\centering
\begin{tabular}{|c|c||c|c|r|c|}
\hline
$U(\Phi,\bar\Phi)$ & $\ \ T_0\ \ $ & $T_\chi(\mu_q=0)$ & $T_d(\mu_q=0)$ & $\ \ T_\tn{TCP}\ $ & $\ \ \mu_q^\tn{TCP}\ \ $ \\ \hline \hline
$-$ & $-$ & 139.0 & $-$ & 60.7\ \ & 277.0 \\ \hline \hline
poly & 270 & 185.6 & 229.0 & 104.5\ \ & 261.8 \\ \hline
poly & 208 & 168.2 & 176.5 & 96.2\ \ & 263.4 \\ \hline
log & 270 & 191.4 & 209.0 & 109.4\ \ & 261.2 \\ \hline
log & 208 & 167.6 & 162.4 & 102.6\ \ & 261.2 \\ \hline
log & $T_0(\mu_q)$ & 167.9 & 162.8 & 84.3\ \ & 266.9 \\ \hline
Fuku & $-$ & 176.5 & 193.0 & 99.8\ \ & 262.2 \\ \hline
\end{tabular}
\caption{
The critical temperature $T_\chi$ of the chiral transition and the
pseudocritical temperature $T_d$ of the deconfinement transition at
$\mu_q=0,$ and the location of the TCP in units of MeV obtained in
the chiral limit without the Polyakov loop (first row) and with the
inclusion of the Polyakov loop using various effective potentials
summarized in Sec.~\ref{ss:PEP}.
}
\label{tab:chiral_data}
\end{table}
For $T_0=270$~MeV the deconfinement transition line in the $\mu_q-T$
phase diagram is above the chiral transition line in all three
variants of the effective potential for the Polyakov loop. This is
illustrated in the left panel of Fig.~\ref{Fig:chiral_PD} in case of
the polynomial effective potential, where the chiral phase diagram is
compared to the one obtained without including the Polyakov loop.
When the logarithmic effective potential $U_\tn{log}(\Phi,\bar\Phi)$
is used either with a constant $T_0=208$~MeV or with the
$\mu_q$-dependent $T_0$ proposed in \cite{schaefer07} one finds
$T_d<T_\chi$ at $\mu_q=0,$ but at a given value of the chemical
potential the deconfinement transition line crosses the chiral
transition line and remains above it for higher values of $\mu_q.$
This is shown in the right panel of Fig.~\ref{Fig:chiral_PD}, where
the deconfinement transition line is obtained from the inflection
point of $\Phi(T).$ The transition line obtained from the inflection
point of $\bar\Phi(T)$ is practically indistinguishable from the line
shown in the figure. One can see that in contrast to the case of
constant $T_0,$ where basically the deconfinement transition line is
not affected by the increase of $\mu_q,$ with a $\mu_q$-dependent
$T_0$ the deconfinement transition line strongly bends, staying close
to the chiral line. The two lines cross just above the TCP.
The lowering of the deconfinement transition when $T_0(\mu_q)$ is used,
and the resulting shrinking of the so-called quarkyonic phase, were
already observed in Ref.~\cite{abuki08}. As distinguished from the
mesonic phase, which is confined and has zero quark number density,
and the deconfined phase, which has finite quark number density, the
quarkyonic phase is a confining state made of quarks and is
characterized by a high quark number density and baryonic
(three-quark state) thermal excitations. Based on the fact
that in the PNJL model the quantity measuring the quark content inside
thermally excited particles carrying baryon number shows a pronounced
change along the chiral phase transition line, the region of the
$\mu_q-T$ plane for which $T_\chi<T<T_d$ was identified in
\cite{fukushima08} with the quarkyonic phase. The first numerical evidence
from lattice QCD for the existence of a phase which is neither the
hadronic nor the deconfined phase and is characterized by a high value
of the quark number density was given in \cite{fodor07}. This could be
a candidate for the quarkyonic phase. Further evidence for such a
state was reported also in \cite{miura09} within the strong-coupling
expansion of the lattice QCD.
\begin{figure}[!t]
\begin{center}
\includegraphics[keepaspectratio,width=0.495\textwidth,angle=0]{chiral_mu_T_1}
\includegraphics[keepaspectratio,width=0.495\textwidth,angle=0]{chiral_mu_T_2}
\caption{Left panel: Phase diagrams obtained in the chiral limit without and
with the inclusion of the Polyakov loop. The latter has higher $T_\tn{TCP}$
and was obtained using $U_\tn{poly}(\Phi,\bar\Phi)$ with $T_0=270$~MeV.
Shown are the location of the inflection points of $\Phi(T)$ and
$\bar\Phi(T).$ Right panel: Chiral and deconfinement phase
transitions obtained for $U_\tn{log}(\Phi,\bar\Phi)$ with $T_0=208$~MeV
(upper curves) and with $T_0(\mu_q)$ (lower curves). The deconfinement
transition line is obtained from the inflection point of $\Phi(T).$
}
\label{Fig:chiral_PD}
\end{center}
\end{figure}
Comparing our results on the phase diagram to those obtained in the
chiral limit of the PNJL model one can notice differences of both
qualitative and quantitative nature. In the nonlocal PNJL model of
Ref.~\cite{sasaki07} the deconfinement phase transition line starts at
$\mu_q=0$ below the chiral transition line both for a polynomial and a
logarithmic Polyakov-loop effective potential with $T_0=270$~MeV, so
that the two transition lines cross at finite $\mu_q.$ In our case
this happens only for the logarithmic potential with $T_0=208$~MeV, as
can be seen in Fig.~\ref{Fig:chiral_PD}. In \cite{sasaki07,costa09a}
the values of $T_\chi(\mu_q=0)$ and $T_\tn{TCP}$ are much larger than
in our case, while the value of $\mu_q^\tn{TCP}$ is similar to ours.
\subsection{Phase transition in case of the physical pion mass \label{ss:phys}}
In the case of the physical pion mass we solve the EoS (\ref{Eq:EoS})
using each one of the four approximations to the pion propagator
introduced in Sec.~\ref{ss:approx}. Within the approximation
corresponding to a strict expansion in $1/\sqrt{N}$ of the pion
propagator and of the EoS discussed in detail in
Sec.~\ref{sec:renorm} there is no CEP in the $\mu_q-T$ phase diagram
within a range $0<\mu_q<500$~MeV. Without inclusion of the Polyakov
loop the transition at $\mu_q=0$ is a very weak crossover
characterized by a large value of the width at half maximum of
$-dv/dT,$ $\Gamma_\chi\sim 100$~MeV. Including the Polyakov loop,
although the width at half maximum of $-dv/d T$ decreases by a factor
of 2 compared to the case without the Polyakov loop, the transition
remains a crossover for $\mu_q<500$~MeV. This means that as a result
of resumming in the pion propagator ${\cal O}(1/\sqrt{N})$ fermionic
fluctuations, while keeping the fermion propagator unresummed, the
crossover transition at $\mu_q=0$ softens and increasing $\mu_q$
cannot turn the phase transition into a first order one. For the other
three approximations, which all resum infinitely many orders in
$1/\sqrt{N},$ the phase transition turns with increasing $\mu_q$ from
a crossover type into a first order transition and in consequence
there is a CEP in the $\mu_q-T$ phase diagram. For these cases the
result is summarized in Table~\ref{tab:phys_data} for various forms of
the Polyakov-loop potential reviewed in Sec.~\ref{ss:PEP}.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c||c|c|c|c|c|}
\hline
$U(\Phi,\bar\Phi)$ & $\ \ T_0\ \ $ & $\ \ \ G_\pi(p)\ \ \ $ & $T_\chi(\mu_q=0)$ & $T_d(\mu_q=0)$ & $\Gamma_\chi$ & $\ \ T_\tn{CEP}\ \ $ & $\ \ \mu_q^\tn{CEP}\ \ $ \\ \hline \hline
$-$ & $-$ & local, pole & 152.8 & $-$ & 37.6 & 14.4 & 327.1 \\ \hline
$-$ & $-$ & local, $p=0$ & 158.2 & $-$ & 41.5 & 12.1 & 329.1 \\ \hline
$-$ & $-$ & large-$N$ & 158.6 & $-$ & 40.7 & 13.5 & 328.6 \\\hline\hline
poly & 270 & local, pole & 205.6 & 226.8 & 25.6 & 37.8 & 326.9 \\ \hline
poly & 208 & local, pole & 180.6 & 175.0 & 19.8 & 35.3 & 326.7 \\ \hline
poly & 270 & local, $p=0$ & 211.4 & 217.8 & 27.3 & 32.4 & 329.0 \\ \hline
poly & 208 & local, $p=0$ & 184.6 & 176.7 & 22.7 & 30.1 & 328.9 \\ \hline
poly & 270 & large-$N$ & 212.5 & 217.4 & 28.3 & 32.9 & 328.8 \\ \hline
poly & 208 & large-$N$ & 184.6 & 176.8 & 22.3 & 30.6 & 328.8 \\ \hline
log & 270 & local, pole & 207.2 & 207.7 & 12.3 & 39.3 & 327.0 \\ \hline
log & 208 & local, pole & 168.0 & 167.0 & *30.3& 37.9 & 326.9 \\ \hline
log & 270 & local, $p=0$ & 209.8 & 209.3 & 12.1 & 33.9 & 329.1 \\ \hline
log & 208 & local, $p=0$ & 168.5 & 167.0 & *42.8& 32.7 & 329.0 \\ \hline
log&$T_0(\mu_q)$&local, $p=0$&168.9& 167.4 & *42.5& 25.7 & 328.7 \\ \hline
log & 270 & large-$N$ & 209.7 & 209.3 & 12.0 & 34.5 & 329.0 \\ \hline
log & 208 & large-$N$ & 168.5 & 167.1 & *43.0& 33.0 & 328.9 \\ \hline
Fuku & $-$ & local, pole & 191.0 & 188.7 & 19.8 & 36.2 & 326.8 \\ \hline
Fuku & $-$ & local, $p=0$ & 195.3 & 191.2 & 21.2 & 31.2 & 328.9 \\ \hline
Fuku & $-$ & large-$N$ & 195.2 & 191.3 & 21.2 & 31.8 & 328.8 \\ \hline \hline
poly & 208 & large-$N$, full & 188.1 & 183.1 & 21.4 & 32.2 & 329.0 \\ \hline
\end{tabular}
\caption{The pseudocritical temperatures $T_\chi$ and $T_d$ of the chiral
and deconfinement transitions, the half-width at half maximum
$\Gamma_\chi$ of $-d v/d T$ at $\mu_q=0$
(in the cases marked with $*$, due to an asymmetric shape of $-d v/d T$ the
full width is given)
and the location of the CEP in units of MeV
obtained in various approximations for the pion propagator without and
with the inclusion of the Polyakov loop. The two local approximations are
defined by (\ref{Eq:gap_pole}) and (\ref{Eq:gap_p0}), respectively.
The large-$N$ approximation is defined by (\ref{Eq:Gp_third}) and
(\ref{Eq:EoS_third}). Only for the result in the last row the contribution
of the setting-sun was kept in (\ref{Eq:dU_dPhi}).}
\label{tab:phys_data}
\end{table}
In the cases studied in Table~\ref{tab:phys_data}, increasing $\mu_q$
drives at $T=0$ the restoration of chiral symmetry via a first order
transition at some value $\mu_q^c>M_q.$ With increasing temperature
$\mu_q^c$ decreases, and the first order chiral restoration becomes a
crossover at a much lower temperature $T_\tn{CEP}$ than in the chiral
case. The inclusion of the Polyakov loop increases significantly the
value of $T_\tn{CEP},$ but as in the chiral case it has little effect
on the value of $\mu_q^\tn{CEP}.$ One can see that neither the choice
of the effective potential for the Polyakov loop nor the value of
$T_0$ has a significant effect on the value of $\mu_q^\tn{CEP}.$ Some
variation can be observed among the values of $\mu_q^\tn{CEP}$
obtained using different approximations for the pion propagator. The
result in the last row was obtained by keeping in the field equation
of the Polyakov loop (\ref{Eq:dU_dPhi}) and its conjugate the
contribution of the setting-sun diagram, while in all other cases only
the contribution of the fermionic trace-log was kept. Comparing the
result in the last row with the corresponding one obtained without the
setting-sun contribution (polynomial Polyakov-loop potential with
$T_0=208$~MeV and the large-$N$ propagator), one can see that the
error made by neglecting the setting-sun contribution in all the other
cases is fairly small.
The values of $T_\chi$ and $T_d$ at $\mu_q=0$ are mostly influenced by
the choice of the Polyakov effective potential and the value of $T_0:$
they decrease with the decrease of $T_0$ and by using the logarithmic
potential instead of the polynomial one. Using the polynomial
potential with $T_0=270$~MeV the deconfinement transition line in the
$\mu_q-T$ plane is above the chiral transition line. This can be seen
in the left panel of Fig.~\ref{Fig:phys_PD} where the phase diagram is
compared with the one obtained without the inclusion of the Polyakov
loop. As in the chiral case, when a logarithmic potential is used with
either a fixed value $T_0=208$~MeV or with a $\mu_q$-dependent $T_0,$
the deconfinement transition line starts at $\mu_q=0$ below the chiral
one and the two lines cross at some higher value of $\mu_q.$ This can
be seen in the right panel of Fig.~\ref{Fig:phys_PD}. When
$T_0(\mu_q)$ is used the two lines go together until they cross each
other just above the location of the CEP. This $\mu_q$-dependent $T_0$
gives the lowest value of $T_\tn{CEP},$ similar to the results
reported in \cite{ciminale08} and \cite{herbst10}. Because of the much
lower value of $T_\tn{CEP}$ the shrinking of the quarkyonic phase
is more pronounced than in the chiral case, as the deconfinement
transition line approaches the $\mu_q$ axis. This is even more the
case here, with a physical pion mass, since the deconfinement
transition is a crossover and as such it happens in a relatively large
temperature interval. However, the quarkyonic phase does not vanish
completely as happens in \cite{herbst10}, where quantum fluctuations
are included using functional renormalization group methods.
\begin{figure}[!t]
\centering
\includegraphics[keepaspectratio,width=0.495\textwidth,angle=0]{phys_mu_T_1}
\includegraphics[keepaspectratio,width=0.495\textwidth,angle=0]{phys_mu_T_2}
\caption{Left panel: Phase diagrams obtained for the
physical value of the pion mass using the local approximation to $G_\pi$ with
a mass determined by (\ref{Eq:gap_p0}) without and with the inclusion of
the Polyakov loop. The latter has higher $T_\tn{CEP}$ and was obtained using
$U_\tn{poly}(\Phi,\bar\Phi)$ with $T_0=270$~MeV. The part of the phase
diagram where the transition is of first order is enlarged in the
inset. Shown are the global maxima of $d\Phi(T)/d T$ and
$d\bar\Phi(T)/d T.$ Right panel: Chiral and deconfinement phase
transitions obtained for $U_\tn{log}(\Phi,\bar\Phi)$ with $T_0=208$~MeV
(upper curves) and with $T_0(\mu_q)$ (lower curves). The deconfinement
transition line is obtained from the global maximum of $d\Phi(T)/d T.$
}
\label{Fig:phys_PD}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[keepaspectratio,width=0.495\textwidth,angle=0]{negyes_fogat_208}
\includegraphics[keepaspectratio,width=0.495\textwidth,angle=0]{negyes_fogat_T0_mu}
\caption{Evolution of the maxima of $-d v/d T$ and $d \Phi/d T$
with increasing chemical potential $\mu_q$ when using
$U_\tn{log}(\Phi,\bar\Phi)$ with $T_0=208$~MeV (left panel)
and with a $\mu_q$-dependent $T_0$ parameter (right panel).
}
\label{Fig:maxima}
\end{figure}
By studying the derivatives of the $v(T)$ and $\Phi(T)$ curves one
observes in panel (a) of Fig.~\ref{Fig:maxima} that at low $\mu_q$ it
is the Polyakov loop which plays the driving role in the transition:
for $\mu_q=0$ the $d v/d T$ is much wider and has a small peak in the
temperature range where $\Phi(T)$ shows a pronounced variation. This
happens only for very low values of $\mu_q$: in the region of
$\mu_q$ where the deconfinement transition line lies somewhat
further below the chiral transition line than at $\mu_q=0$, such a
driving role cannot be identified. For values of $\mu_q$ where the two
transition lines cross, and also in the region where the chiral
transition line is below the deconfinement one, one can see the influence
of the chiral transition on the shape of $d\Phi/d T.$ This is the most
pronounced in the case of the $\mu_q$-dependent $T_0$ where the two
transition lines cross near the CEP. In this case the chiral phase
transition plays the driving role as one can clearly see on panel (c)
of Fig.~\ref{Fig:maxima} (right). In panel (d) of
Fig.~\ref{Fig:maxima} (left) one sees that $d\Phi/d T$ has two
peaks. In such cases, as in Ref.~\cite{tuominen08}, the position of
the higher peak is followed to determine the deconfinement transition
temperature, since the first peak is a result of the influence of the
chiral phase transition.
From Table~\ref{tab:phys_data} one can see that there is a correlation
between the strength of the chiral crossover at $\mu_q=0$ as measured
by $\Gamma_\chi$ and the location of CEP: weaker crossover (larger value of
$\Gamma_\chi$) corresponds in general to a larger value of $\mu_q^\tn{CEP}.$
In the cases marked with an asterisk in Table~\ref{tab:phys_data} the
$-d v/d T$ curve is distorted by the temperature dependence of the
Polyakov loop, as one can see in panel (a) of Fig.~\ref{Fig:maxima}.
For this reason, in these cases $\Gamma_\chi$ denotes the full width
at half maximum of $-d v/d T.$ In the other cases the half-width at
half maximum is given. This is measured on the left of the maximum
because, when the gap equation is used, the threshold of the fermionic
bubble generally lies to the right of the maximum and distorts the
$d v/d T$ curve.
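
The width measurement described above can be sketched as follows
(schematic, not the code used for this work; the scan runs to the left of
the maximum of $-d v/d T$):

```python
import numpy as np

def half_width_left(T, v):
    # Half-width at half maximum of -dv/dT, measured on the left of the
    # peak to avoid the distortion from the fermionic-bubble threshold.
    chi = -np.gradient(v, T)
    i = int(np.argmax(chi))
    half = 0.5 * chi[i]
    j = i
    while j > 0 and chi[j] > half:
        j -= 1
    return T[i] - T[j]

# For a tanh profile v = v0*(1 - tanh((T-Tc)/w))/2 the exact half-width
# is w*arccosh(sqrt(2)) ~ 0.881*w:
T = np.linspace(100.0, 250.0, 15001)
v = 0.5 * (1.0 - np.tanh((T - 160.0) / 10.0))
Gamma_half = half_width_left(T, v)  # ~8.81 for w = 10
```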
\section{Discussion and conclusions}
Using the tree-level fermion propagator and several approximate forms
of the pion propagator obtained within a large-$N_f$ expansion, we
studied in the chiral limit and for the physical value of the pion
mass the influence of the Polyakov loop on the chiral phase
transition. We obtained that only when the local part of the
approximate pion propagator resums infinitely many orders in $1/N_f$
of fermionic contributions is it possible to find a CEP on the
chiral phase transition line of the $\mu_q-T$ phase diagram. When the
logarithmic form $U_\tn{log}(\Phi,\bar\Phi)$ of the effective
potential for the Polyakov loop was used with parameter $T_0=208$~MeV
a crossing between the chiral and deconfinement transition lines was
observed, with the latter line starting at $\mu_q=0$ slightly below
the former one. In this case the existence of the quarkyonic phase is
possible.
We have seen at the beginning of Sec.~\ref{ss:phys} that as a
result of resumming in the pion propagator ${\cal O}(1/\sqrt{N})$
fermionic fluctuations obtained with a strict expansion in
$1/\sqrt{N}$, while keeping the fermion propagator unresummed, the
phase transition softens. One can easily demonstrate the same feature
by including the contributions of the fermion vacuum fluctuations and
of the pion tadpole in the equation of state of
Ref.~\cite{schaefer07}, that is the field equation determining the
chiral order parameter. There, because the parameters of the PQM
model were determined at tree-level, the fermionic vacuum fluctuations
coming from the fermion tadpole ($\tilde T^0$) were neglected, while
the pions were treated at tree-level. However, by choosing an
appropriate renormalization scale one can arrange for the entire
$\tilde T^0_F$ to vanish only at $T=0.$ At finite temperature the
vacuum fluctuation is in this way correctly included and, due to the
temperature-dependent fermionic mass, $\tilde T^0_F$ will be
nonvanishing. The values of the renormalization scale for which the
fermion and pion tadpoles vanish at $T=\mu_q=0$ are
$M_{0F}=\sqrt{e}\, m_q$ and $M_{0B}=\sqrt{e}\, m_\pi,$ respectively. The
importance of including the vacuum fluctuations was discussed also in
\cite{nakano10,skokov10b}, where the effect on the location of the CEP
and on the isentropic trajectories in the $\mu_q-T$ plane was shown.
From Table~\ref{tab:compare} one can see that comparing with the
original result of Ref.~\cite{schaefer07} the inclusion of the
fermionic vacuum fluctuations softens the transition at $\mu_q=0$, as
shown by the larger full width $\Gamma_\chi$ at half maximum of
$-d v/d T,$ and in consequence the location of the CEP is moved to
higher values of $\mu_q$ and lower values of $T.$ Inclusion of the
pion vacuum and thermal fluctuations in the equation of state through
a pion tadpole further accentuates this behavior. Inclusion of the
fluctuations using functional renormalization group methods also
pushed the location of the CEP to higher values of $\mu_q,$ as can be
seen by comparing the left panel of Fig.~6 in \cite{herbst10} to
Fig.~6 of \cite{schaefer07}.
\begin{table}[htbp]
\centering
\begin{tabular}{|l|cccc|c|r|c|c|}
\hline
& $\ \tilde T^0_F\ $ & $\ T^0_F\ $ & $\ \tilde T^{\beta}\ $ & $\ T^{\beta}\ $ & $\ T_\chi(\mu_q=0)\ $ & $\ \ \Gamma_\chi\ \ $ & \ \ $T_\tn{CEP}$\ \ & \ \ $\mu_q^\tn{CEP}$\ \ \\
\hline \hline
QP & $-$ & $-$ & $+$ & $-$ & 184.6 & 4.6 & 162.8 & 165.1 \\ \hline
QP & $-$ & $-$ & $+$ & $+$ & 180.2 & 8.6 & 145.3 & 204.3 \\
QFT & $+$ & $-$ & $+$ & $-$ & 173.0 & 26.9 & 91.3 & 241.1 \\
QFT & $+$ & $+$ & $+$ & $+$ & 170.1 & 30.3 & 85.5 & 243.5 \\
\hline
\end{tabular}
\caption{
The pseudocritical temperatures of the chiral transition and the full width
$\Gamma_\chi$ at half maximum of $-d v/d T$ at $\mu_q=0,$ and the
location of the CEP in units of MeV in various treatments of the model
with a physical pion mass. The Polyakov loop is included using
$U_\tn{poly}(\Phi,\bar\Phi)$
and $T_0=208$~MeV. QP stands for the quasiparticle approximation
in which the vacuum fluctuations in the fermion ($\tilde T^0_F$)
or pion ($T^0_F$) tadpoles are disregarded (marked by $-$) and only
the finite temperature part of the tadpoles ($\tilde T^{\beta}$ or
$T^{\beta}$) is kept (marked by $+$). QFT stands for a quantum field
theoretical calculation where the vacuum fluctuations are properly treated.
The first row is the reproduced result of \cite{schaefer07}.
}
\label{tab:compare}
\end{table}
It remains to be seen to what extent our results are stable against
the use of the self-consistent propagator for fermions, as required by
a completely systematic large-$N_f$ expansion. A highly interesting
question which requires going beyond the level of approximations of
this work is whether a completely systematic expansion in
$1/\sqrt{N}$ of the propagator equations could lead to the existence
of the CEP in the phase diagram, and how the results obtained within
such a resummation scheme are related to a numerically even more
demanding resummation represented by the complete self-consistent
solution (without further expansion in $1/\sqrt{N}$)
of the coupled pion and fermion propagator equations.
\begin{acknowledgments}
The authors benefited from discussions with Andr\'as Patk{\'o}s and
Antal Jakov\'ac. This work is supported by the Hungarian Research Fund
under Contracts No.~T068108 and No.~K77534.
\end{acknowledgments}
\section{Introduction}
The paradigm that has emerged for the production of outflows from
active galactic nuclei (AGN) involves the presence of large scale
electromagnetic fields which are instrumental in their formation,
acceleration and collimation, many gravitational radii from the
central supermassive black hole (Nakamura et al, 2008; Meier et al,
2001; Blandford, 1976; Lovelace, 1976). Two models have taken center
stage. Blandford \& Payne (1982; henceforth BP) and extensions of
this model (Li et al, 1992 and Vlahakis \& Konigl, 2003) describe a
centrifugally driven outflow of gas originating in a cold accretion
disk as a solution to ideal MHD within the context of self-similarity.
If the angle between the poloidal component of the magnetic field and
the disk surface is less than 60 degrees, mass-loading of the magnetic
field lines occurs, leading to an imbalance between inward
gravitational and outward centrifugal forces, with gravity being
overwhelmed. Unlike the BP mechanism which taps into the
gravitational potential energy of the accretion flow, the
Blandford-Znajek (1977; henceforth BZ) mechanism produces relativistic
jets from large scale magnetic fields threading the rotating event
horizon by extraction of black hole rotational energy. The
flux-trapping model (Reynolds et al, 2006) is an attempt to understand
ways in which black hole accretion flows can overcome their diffusive
character (see also Bisnovatyi-Kogan \& Lovelace, 2007 and Rothstein
\& Lovelace, 2008) to produce strong magnetic fields on the black hole
(see Bisnovatyi-Kogan \& Ruzmaikin, 1976, for the earliest attempt to
study the accretion of a large-scale ordered magnetic field onto a
black hole). If the flux-trapping behavior of the gap/plunge region is
valid, the BZ mechanism produces its greatest power for a black
hole spin of $a\approx-1$ (Garofalo, 2009). Here we show that the
same is true for the BP mechanism. This means that although the spin
dependence of BZ and BP power is different overall, they both peak for
near maximal retrograde black hole spin. We motivate the idea that
``ordinary'' astrophysical processes (i.e. accretion and/or spin
energy extraction) will tend to shift near maximal retrograde black
hole accretion systems toward more prograde spins. Once formed (e.g. in
galaxy mergers), such systems will evolve toward a state of lower
power output, which implies that the population density of near
maximal retrograde black hole accretion systems that produce outflows
and jets is largest at the redshift of formation of such systems and
naturally tends to drop thereafter, so that the cosmological evolution
of black hole spin is in the direction of
prograde spins. In section 2 we describe the basic geometry of the
flux-trapping model. In section 3 we discuss its implications for the
BP power and those of assuming that outflows and jets in AGN are all
due to either BZ, BP, or a combination of both mechanisms (Meier,
1999). In section \ref{conclusion} we conclude.
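
For reference, the $60$ degree mass-loading criterion of BP quoted
above can be recalled from a standard Newtonian sketch (not part of our
calculation): for cold gas tied to a straight field line anchored at disk
radius $r_0$ and inclined at an angle $\theta$ to the disk surface, the
effective potential in the frame corotating with the footpoint, expanded
in the coordinate $s$ along the line, is
\[
\Phi_{\rm eff}(s)=-\frac{GM}{\sqrt{r_0^2+2r_0s\cos\theta+s^2}}
-\frac{GM}{2r_0^3}\left(r_0+s\cos\theta\right)^2
=-\frac{3GM}{2r_0}-\frac{GM}{2r_0^3}\left(4\cos^2\theta-1\right)s^2
+{\cal O}(s^3),
\]
so the linear term cancels at the footpoint and the gas is unstable to
centrifugally driven outflow, $d^2\Phi_{\rm eff}/ds^2<0$, precisely when
$\cos\theta>1/2$, i.e. $\theta<60$ degrees.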
\section{The model}
The basic feature of our model is illustrated in Figure
\ref{BlackHole_disk} where magnetic field lines threading the black
hole are separated from those threading the disk by a gap region (or
plunge region). The absence of magnetic field threading the gap
region is the fundamental assumption of the flux-trapping model
(Reynolds, et al, 2006). This assumption has implications for both
the BZ and BP effects of which the former are illustrated in Figure
\ref{Flxvsspin}, originating from the numerical solution to the MHD
equations in a Kerr metric (Garofalo, 2009). We emphasize the fact
that maximum BZ power is produced for highly retrograde black hole
spin, and extend the flux-trapping model to outflows of the BP type,
with the basic point to motivate the existence of a spin dependence
in BP power that is also maximized at high retrograde black hole spin
values. The model is further described below.
\begin{enumerate}
\item Our accretion disk is described by a Novikov \& Thorne (1973)
disk truncated at the marginally stable orbit, inwards of which is the
gap region.
\item In the magnetosphere (the region outside of the black hole and
accretion disk) we assume that the plasma density is negligible and
hence that the magnetic field is force free.
\item We assume that no magnetic flux threads the gap or plunge region
of the accretion disk. Any magnetic flux that is advected inwards
across the radius of marginal stability is immediately added to the
flux bundle threading the black hole.
\item Far away from the black hole and at poloidal angles above the
accretion disk, we assume the large-scale field is uniform.
\end{enumerate}
\begin{figure*}[h!]
\centerline{\includegraphics[angle=-90,scale=0.4]{BlackHole_disk.ps}}
\caption{A black hole accretion disk (with the no-flux gap region
boundary condition) threaded by large scale magnetic field that is
parallel to the black hole spin axis far from the accretion
disk. }\label{BlackHole_disk}
\end{figure*}
\begin{figure*}[h!]
\centerline{\includegraphics[angle=-0,scale=0.8]{L_all.ps}}
\caption{Blandford-Znajek power vs. spin according to
flux-trapping. }\label{Flxvsspin}
\end{figure*}
\section{BP outflows and the cosmological evolution of black hole spin in the flux-trapping model}
In this section the focus is on the geometry of the magnetic field as
in figure~\ref{BlackHole_disk} and the changes that occur as the spin
of the black hole varies. Because BP outflows depend on the angle
between the magnetic field and the accretion disk surface, the
emphasis is on how this angle changes with spin. Despite highlighting
MHD force balance in the force-free magnetosphere, the discussion
remains qualitative, limiting the study to identifying the spin value
for which BP power is maximized. Magnetic forces between the flux
bundle on the hole and that threading the disk compete at latitudes
above the equatorial plane where the no-flux boundary condition is
imposed (see arrows in Fig.~\ref{spin_geometry}). Whereas magnetic
pressure/tension of magnetic field lines threading the disk tend to
push the hole-threading flux bundle onto the horizon, the latter
reacts back on the disk-threading magnetic field to limit additional
magnetic field advection onto the black hole. The bend in the
magnetic field threading the disk stems from the fact that while the
radial inflow of the accreting gas attempts to drag the large scale
magnetic field toward the black hole, the aforementioned magnetic
forces from the flux bundle already threading the black hole push the
magnetic field lines threading the disk outward. The greater the
magnetic flux bundle on the black hole, the more effective its ability
to halt additional advection from the disk, and the greater the bend
inflicted on the magnetic field lines threading the disk. As
Fig.~\ref{spin_geometry} illustrates, the magnitude of the black
hole-threading flux bundle depends on the ability of the gap region
to drag magnetic field inward. For high prograde-spinning black
holes, the marginally stable circular orbit is close to the black hole
horizon in both coordinate and proper distance which makes the gap
region ineffective at dragging large magnetic flux to the horizon. In
the low-spin or retrograde case, instead, the inner edge of the
accretion disk at the marginally stable circular orbit is much further
out so the proper distance to the horizon from the disk inner edge is
larger. This means that slowly spinning or retrograde black holes
acquire magnetic flux via the gap region further out in radial
position compared to their high-spin counterparts, resulting in a
larger magnetic flux bundle on the horizon. As BP point out, if the
magnetic field lines and the disk surface meet at an angle that is 60
degrees or less, a centrifugally-driven MHD wind is possible. With
respect to their high-spin counterparts, then, low-spin or retrograde
systems are more likely to exhibit bent magnetic field line
configurations which makes them comparatively better candidates for BP
outflows. This behavior is seen in the steady-state magnetic field
configurations of the numerical solution (figures \ref{spin_negative}
and \ref{spin_positive}). We choose a representative set of disk
parameters such as disk thickness, accretion rate, Prandtl number
etc., and illustrate the geometry of the numerical solution. We find
that the magnetic field lines threading the retrograde system are bent
well out into the disk. The high prograde spin system, on the other
hand, displays bent magnetic flux contours only in the innermost
region of the accretion disk and even there the bend is small. In
short, as the spin of the black hole decreases from high prograde
toward high retrograde values, the magnetic field lines bend
progressively more toward the disk surface. If we associate this
feature with greater BP outflow power, the arguments suggest that the
spin dependence of BP power in the flux trapping model increases
progressively from high prograde spins to high retrograde spins.
Therefore, like the BZ power, the BP power is maximized for
$a\approx-1$.
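The spin dependence of the gap-region size invoked above can be made concrete with the Bardeen, Press \& Teukolsky (1972) expression for the marginally stable circular orbit. The following Python sketch is illustrative only (it is not part of the numerical solution used here) and adopts the convention that negative $a$ denotes a retrograde disk:

```python
import math


def r_ms(a):
    """Radius of the marginally stable circular orbit in units of GM/c^2
    for dimensionless spin a in [-1, 1]; a < 0 denotes a retrograde disk
    (Bardeen, Press & Teukolsky 1972)."""
    z1 = 1 + (1 - a * a) ** (1 / 3) * (
        (1 + abs(a)) ** (1 / 3) + (1 - abs(a)) ** (1 / 3)
    )
    z2 = math.sqrt(3 * a * a + z1 * z1)
    sign = 1.0 if a >= 0 else -1.0
    return 3 + z2 - sign * math.sqrt((3 - z1) * (3 + z1 + 2 * z2))


# r_ms(1.0) = 1, r_ms(0.0) = 6, r_ms(-1.0) = 9: the retrograde disk is
# truncated much further out, leaving a larger gap region to drag flux.
```

Since the gap region extends from $r_\mathrm{ms}$ down to the horizon, the monotonic growth of $r_\mathrm{ms}$ from $a=+1$ to $a=-1$ tracks the growth of the trapped flux bundle described above.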
Assuming that the BZ and BP mechanisms are the dominant path chosen by
nature to produce outflows and jets in AGN within the context of the
flux-trapping model, leads to the conclusion that retrograde black
hole accretion systems threaded by large-scale magnetic fields, tend
toward prograde black hole spin systems unless some external factors
beyond accretion and black hole rotational energy extraction occur.
Accretion in retrograde systems, in fact, adds angular momentum to the
hole in the prograde sense. In addition, the BZ power is largest for
high retrograde spins which means that greatest spin energy extraction
occurs to reduce the absolute value of the hole's angular momentum.
The BP mechanism, on the other hand, does not directly affect the
black hole spin. Clearly, accretion and spin-energy extraction via BZ
both operate to increase the spin away from $a\approx-1$. How fast it
moves away from $a\approx-1$ and whether it crosses $a=0$ depends on
the rate at which angular momentum is extracted by the BZ mechanism as
well as on the rate at which angular momentum is supplied by
accretion. The one thing that is clear, though, is that the most
energetic outflows and jets produced in $a\approx-1$ systems are not
stable, so the spin must change. Thus, if flux-trapping via the
plunge region occurs in nature, the most powerful AGN evolve to lower
energies as their black hole spins become more prograde.
\begin{figure*}[h!]
\centerline{\includegraphics[angle=-90,scale=0.4]{spin_geometry.ps}}
\caption{The difference in the geometry between a high spinning black
hole accretion disk (top) and a low spinning or retrograde black hole
accretion disk (bottom). In the high-spin case the gap region is small
and the black hole acquires little magnetic flux from the dragging of
magnetic field through it. The resulting weak flux bundle on the hole
has little effect on the magnetic field threading the disk whose
geometry remains thus mostly vertical. In the low-spin or retrograde
case, instead, the gap region is large and it drags a strong flux
bundle onto the horizon. The strength of this flux bundle compared to
that in the high spin case is such that it is comparatively more
effective in deforming the disk-threading magnetic flux lines which
bend.}\label{spin_geometry}
\end{figure*}
\begin{figure*}[h!]
\centerline{\includegraphics[angle=-0,scale=0.7]{retrograde_flux.ps}}
\caption{Magnetic configuration for a -0.90 retrograde spinning black
hole and its accretion disk. Notice the large bend at 30
gravitational radii compared to the high prograde figure.
}\label{spin_negative}
\end{figure*}
\begin{figure*}[h!]
\centerline{\includegraphics[angle=-0,scale=0.7]{prograde_flux.ps}}
\caption{Magnetic configuration for a 0.90 prograde spinning black
hole and its accretion disk. Notice how the flux lines in the disk
are only slightly bent at 15 gravitational radii unlike the high
retrograde case where they are considerably bent at that
location. }\label{spin_positive}
\end{figure*}
\section{Conclusions}
This work extends the relativistic flux-trapping model to include
outflows of both the BZ and BP type in an effort to constrain the
cosmological evolution of black hole spin (e.g. Brenneman, 2009) and
its possible connection to powerful outflows such as those in radio
loud galaxies (Evans et al., 2009, in press). Our current picture of
the interaction between black holes in AGN and the host galaxy,
suggests a coherent two-way conversation in which the host galaxy
speaks to the black hole about the galaxy via accretion by funneling
matter toward the black hole, and the black hole speaks to the galaxy
about the black hole via outflows that leave signatures of its mass
(Kormendy \& Richstone, 1995; Magorrian et al. 1998; Marconi \& Hunt,
2003; Gebhardt et al. 2000; Ferrarese \& Merritt, 2000; Tremaine et
al, 2002). If the behavior of the gap region is as fundamental as
assumed here, this two-way conversation carries more content: one in
which the more subtle, highly non-Newtonian features of space and time
that dominate the regions close to the centers of galaxies are
also revealed. In fact, the scenario that emerges is one in which the
spin parameter of the central supermassive black hole is not simply
revealed in the generated outflow, but is an active participant in the
dynamics of the latter, to the extent that it sets the scale for the
magnitude of the outflow power. The overall conclusion of the
assumption that BP and BZ operate within the context of flux-trapping
is that galaxy evolution is tightly coupled to black hole spin.
\label{conclusion}
\section{Acknowledgments}
The author thanks David L. Meier for detailed discussion on the BP
effect. The research described in this paper was carried out at the
Jet Propulsion Laboratory, California Institute of Technology, under a
contract with the National Aeronautics and Space Administration.
D.G. is supported by the NASA Postdoctoral Program at NASA JPL
administered by Oak Ridge Associated Universities through contract
with NASA.
\section*{References}
\noindent Bisnovatyi-Kogan, G.S. \& Lovelace, R.V.E., 2007, ApJ, 667, L167
\noindent Bisnovatyi-Kogan, G.S., \& Ruzmaikin, A.A., 1976, Ap\&SS, 42, 401
\noindent Blandford, R. D., \& Payne, D. G. 1982, MNRAS, 199, 883
\noindent Blandford, R. D., \& Znajek, R. L. 1977, MNRAS, 179, 433
\noindent Blandford, R. D., 1976, MNRAS, 176, 465
\noindent Brenneman, L., 2009, Astro2010 Science White Paper
\noindent Evans, D., et al., 2009, ApJ, in press
\noindent Ferrarese, L., \& Merritt, D., 2000, ApJ, 539, L9
\noindent Garofalo, D., 2009, ApJ, 10, 700
\noindent Gebhardt, K., et al. 2000, ApJ, 539, L13
\noindent Kormendy, J., \& Richstone, D., 1995, ARA\&A, 33, 581
\noindent Li, Z.-Y., Chiueh, T., \& Begelman, M., 1992, ApJ, 394, 459
\noindent Lovelace, R.V.E. 1976, Nature, 262, 649
\noindent Magorrian, J., et al. 1998, AJ, 115, 2285
\noindent Marconi, A., \& Hunt, L.K., 2003, ApJ, 589, L21
\noindent Meier, D. L., 1999, ApJ, 522, 753
\noindent Meier, D.L., et al., 2001, Science, 291, 84
\noindent Nakamura, M., et al., 2008, ASPC, 386, 373
\noindent Reynolds, C. S., Garofalo, D., \& Begelman, M. 2006, ApJ, 651, 1023
\noindent Rothstein D.M. \& Lovelace, R.V.E., 2008, ApJ, 677, 1221
\noindent Tremaine, S., et al., 2002, ApJ, 574, 740
\noindent Vlahakis, N. \& Konigl, A., 2003, ApJ, 596, 1104
\end{document}
\section{Introduction}
In contrast to most other measurements, the rate of the production of a single top quark in association with a Higgs boson (tH production) is sensitive not only to the magnitude but also to the sign of the Yukawa coupling of the top quark. In this production mode, the single top quark is produced via the $t$ channel or via the associated production with a W boson. Due to its small cross section, the $s$-channel production is negligible. The Higgs boson can be emitted either from the top quark or the intermediate W boson (see Fig.~\ref{fig:feynman}), and the amplitudes of these two possibilities interfere. The resulting amplitude depends on the ratios of the actual coupling strengths to the standard model (SM) predictions for the Higgs-top coupling ($\kappa_\mathrm{t}$) and for the coupling of the Higgs boson to vector bosons ($\kappa_\mathrm{V}$), given by $\mathcal{A} \propto (\kappa_\mathrm{t}-\kappa_\mathrm{V})$. A scan over different values of $\kappa_\mathrm{t}$ and $\kappa_\mathrm{V}$, suitable for a test of Higgs boson couplings, is provided. Furthermore, evidence of anomalous couplings would point to physics beyond the SM. The scenario with a negative Yukawa coupling of the top quark ($\kappa_\mathrm{t} = -1.0$, $\kappa_\mathrm{V} = +1.0$), called the inverted top coupling (ITC) scenario, differs from the SM case ($\kappa_\mathrm{t} = +1.0$, $\kappa_\mathrm{V} = +1.0$) only in the sign of $\kappa_\mathrm{t}$ and is particularly relevant for the analysis, as it provides a significantly higher cross section. The increase of the cross section is caused by the constructive interference of the amplitudes of the two emission possibilities of the Higgs boson. In the SM case, the interference of the amplitudes is destructive.
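The sign dependence can be illustrated with a toy interference model in which the tH amplitude is the sum of a top-emission piece scaled by $\kappa_\mathrm{t}$ and a W-emission piece scaled by $\kappa_\mathrm{V}$. The amplitude magnitudes below are purely illustrative assumptions, not fitted values:

```python
def xsec_scale(kt, kv, a_top=1.0, a_w=-1.5):
    """Toy tH cross section from two interfering real amplitudes:
    Higgs emission off the top quark (a_top, scaled by kappa_t) and
    off the W boson (a_w, scaled by kappa_V).  The relative sign
    a_top * a_w < 0 encodes the destructive interference of the SM
    case; the magnitudes are illustrative placeholders."""
    return (kt * a_top + kv * a_w) ** 2


sigma_sm = xsec_scale(+1.0, +1.0)   # destructive interference
sigma_itc = xsec_scale(-1.0, +1.0)  # constructive interference
```

Flipping only the sign of $\kappa_\mathrm{t}$ turns the destructive interference constructive, so the toy gives $\sigma_\mathrm{ITC} \gg \sigma_\mathrm{SM}$, qualitatively as in the full calculation.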
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth]{feyman_thq_ct2.pdf}
\hspace*{1.2em}
\includegraphics[width=0.4\textwidth]{feyman_thq_cv2.pdf}
\caption{Feynman diagrams for the associated production of a single top quark and a Higgs boson in the $t$ channel. Left figure: Higgs boson emitted from the top quark. Right figure: Higgs boson emitted from the intermediate W boson.}
\label{fig:feynman}
\end{figure}
\section{Classification and Limit Calculation}
The analyzed dataset consists of events recorded with the Compact Muon Solenoid (CMS) experiment~\cite{det} during Run II of the Large Hadron Collider (LHC) in 2015, corresponding to an integrated luminosity of $2.3\,\mathrm{fb}^{-1}$. Events are selected that contain exactly one isolated lepton (muon or electron), three or four b-tagged jets, and at least one untagged jet. All jets are required to have $p_\mathrm{T}>30\,\mathrm{GeV}$ ($|\eta|<2.4$) and $p_\mathrm{T}>40\,\mathrm{GeV}$ ($|\eta|\geq 2.4$), respectively. Additionally, a cut on the missing transverse energy is applied: ${\not\mathrel{E}}_\mathrm{T}>45\,\mathrm{GeV}$ (electron channel) and ${\not\mathrel{E}}_\mathrm{T}>35\,\mathrm{GeV}$ (muon channel). According to the number of b-tagged jets, two independent signal regions are defined, namely the 3 tag and the 4 tag region.
For the event classification, 51 boosted decision trees (BDTs) are used. Each of these BDTs corresponds to one point in the two-dimensional $\kappa_\mathrm{t}-\kappa_\mathrm{V}$ plane, which consists of 51 points spanning $-3.0 \leq \kappa_\mathrm{t} \leq +3.0$ and $\kappa_\mathrm{V} = +0.5, +1.0, +1.5$. For all BDTs, the signal events (tH production) are trained against the dominant t$\mathrm{\bar{t}}$ and t$\mathrm{\bar{t}}$H backgrounds in the 3 tag region. Three types of input variables are used for the classification BDTs: variables from one of the 51 tH reconstructions, variables from the t$\mathrm{\bar{t}}$ reconstruction, and global variables, which are independent of any reconstruction. The most discriminating variables are $\log m(\mathrm{t}_\mathrm{had})$ (from the t$\mathrm{\bar{t}}$ reconstruction), $|\eta(\mathrm{recoil\ jet})|$ (from the tH reconstruction), and the aplanarity (a global variable). The outputs of the classification BDTs are applied to both the 3 tag and the 4 tag region.
After the event classification, limits for the 51 points in the $\kappa_\mathrm{t}-\kappa_\mathrm{V}$ plane are determined from a simultaneous fit of the corresponding BDT output in the 3 tag and 4 tag regions. The largest systematic uncertainties of the limit calculation arise from variations in the jet energy scale, from variations in the $Q^2$ scale used in the generation of the t$\mathrm{\bar{t}}$ and tH samples, and from the b-tagging reweighting.
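As a reference point for what an ``upper limit'' means here, the sketch below computes a classical 95\% CL upper limit for a single counting experiment. It is a deliberately simplified stand-in for the binned simultaneous fit actually used, with hypothetical event counts:

```python
import math


def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson variable with mean mu."""
    return sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(n + 1))


def upper_limit(n_obs, bkg, cl=0.95, step=0.01):
    """Smallest signal yield s such that observing n_obs or fewer events
    becomes improbable at the chosen CL given an expected background bkg:
    P(N <= n_obs | s + bkg) <= 1 - cl.  A toy stand-in for the
    profile-likelihood fit of the analysis."""
    s = 0.0
    while poisson_cdf(n_obs, s + bkg) > 1 - cl:
        s += step
    return s
```

For zero observed events and negligible background this reproduces the textbook limit $s \le \ln 20 \approx 3.0$.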
\section{Results}
The resulting postfit distributions of the classification BDT output for the ITC and for the SM coupling scenario in the two signal regions are shown in Fig.~\ref{fig:postfit-BDT1} and \ref{fig:postfit-BDT2}. The expected and observed upper limits on the tH production rate for all 51 studied couplings can be found in Fig.~\ref{fig:limits}. For the SM case, the observed limit is $113.7\times\sigma_\mathrm{SM}$ (expected: 98.6), and for the ITC case, an upper limit of $6.0\times\sigma_\mathrm{ITC}$ is observed (expected: 6.4). For all 51 points in the $\kappa_\mathrm{t}-\kappa_\mathrm{V}$ plane, the observed limit is well within one standard deviation of the expected limit. The sensitivity is already comparable to the Run I analysis~\cite{pas_old} which yielded an expected limit of $5.4\times\sigma_\mathrm{ITC}$ for the ITC scenario.
A more detailed description of the analysis is available in Ref.~\cite{pas}.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{postfit2D__MVA_tH_3m0.pdf}
\hspace*{1.2em}
\includegraphics[width=0.45\textwidth]{postfit2D__MVA_tH_3m12.pdf}
\caption{Postfit distributions of the classification BDT output in the 3 tag region for the ITC (left) and SM (right) scenario. The signal distributions correspond to the expected contributions scaled by the factors given in the legends. Taken from~\cite{pas}.}
\label{fig:postfit-BDT1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{postfit2D__MVA_tH_4m0.pdf}
\hspace*{1.2em}
\includegraphics[width=0.45\textwidth]{postfit2D__MVA_tH_4m12.pdf}
\caption{Postfit distributions of the classification BDT output in the 4 tag region for the ITC (left) and SM (right) scenario. The signal distributions correspond to the expected contributions scaled by the factors given in the legends. Taken from~\cite{pas}.}
\label{fig:postfit-BDT2}
\end{figure}
\clearpage
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{thesis_limit-1_obs.pdf}\\
\includegraphics[width=0.4\textwidth]{thesis_limit-0p5_obs.pdf}
\includegraphics[width=0.4\textwidth]{thesis_limit-1p5_obs.pdf}
\caption{Upper limits on tH scenarios with different $\kappa_\mathrm{t}-\kappa_\mathrm{V}$ configurations. Top figure: $\kappa_\mathrm{V}=+1.0$, bottom left figure: $\kappa_\mathrm{V}=+0.5$, bottom right figure: $\kappa_\mathrm{V}=+1.5$. The tH cross sections are given on the right $y$ axis. Taken from~\cite{pas}.}
\label{fig:limits}
\end{figure}
\section{Introduction}
The spectacular new data obtained in the past year or two with HST, Keck
and other large telescopes are finally giving substance to the adjective
``few'' in Y.B. Zel'dovich's famous 1977 statement: ``It will only be a
few years before the origin and evolution of galaxies is understood.'' A
full understanding of the origin and evolution of galaxies, however, will
require a great deal of detailed theoretical modelling to uncover the
physical processes manifested in the data. At present such modelling is
lagging behind observational advances partly because of the breathtaking
pace of these advances and partly because some of the physical processes at
work, particularly those involving gas dynamics and star formation, are
intrinsically very complex.
Two interrelated techniques are available for theoretical modelling of
galaxy formation and evolution: numerical simulations and semi-analytic
modelling. The overall strategy is the same in both cases: to calculate
how density perturbations emerging from the Big Bang turn into visible
galaxies. This requires following a number of processes: (i) the
growth of dark matter halos by accretion and mergers, (ii) the dynamics of
cooling gas, (iii) the transformation of cold gas into stars, (iv) the
spectrophotometric evolution of the resulting stellar populations, (v) the
feedback from star formation and evolution on the properties of prestellar
gas and (vi) the build-up of large galaxies by mergers. Numerical
simulations so far have focussed on a small subset of these
processes which are treated in as realistic a way as is allowed by current
algorithms and computing power. The semi-analytic approach, on the other
hand, considers the combined effect of all these processes which are
simplified into parametric rules distilled from simulations or analytic
considerations.
The numerical and semi-analytic approaches are clearly complementary and
have different strengths and weaknesses. The simulations generally attempt
to model the relevant physics from first principles, but still require
various approximations and free parameters. For example, when dealing with
gas dynamics one needs to choose between Lagrangian methods like ``Smoothed
Particle Hydrodynamics (SPH)'' (Katz \& Gunn 1991, Navarro \& White 1993,
Steinmetz \& Muller 1995, Evrard \it et al. \rm 1994)
or Eulerian methods (Cen \& Ostriker 1993).
There are also choices to be made regarding the cooling
processes to be included, the mechanism to lay down initial conditions,
the resolution of the calculation, etc. For more realistic modelling of
galaxies, it is also necessary to include ad hoc algorithms for turning
cold gas into stars and for coupling the energy liberated by
stellar winds and supernovae to the gas.
In the semi-analytic approach (Cole 1991, White \& Frenk 1991, Kauffmann
\it et al. \rm 1993, Lacey \it et al. \rm 1993, Cole \it et al. \rm 1994), the required
approximations and free parameters are more readily apparent. The backbone
of this technique is a Monte-Carlo implementation of the ``extended
Press-Schechter theory'' (Bower 1991, Bond \it et al. \rm 1991) used to describe the formation
of dark matter halos by hierarchical clustering and merging (Kauffmann \&
White 1993, Lacey \& Cole 1993). An
attractive feature of this approach is that within a fairly general
framework, a full model of galaxy formation is specified by a surprisingly
small number of free parameters. This is a common feature of the two main
semi-analytic models in existence today, that of G. Kauffmann and
collaborators (Kauffmann \it et al. \rm 1993, 1994, Kauffmann 1995, 1996)
and that of the present authors (Cole \it et al. \rm 1994, Heyl \it et al. \rm 1995, Baugh,
Cole \& Frenk 1996a, 1996b).
We summarize here the free parameters that appear in our semi-analytic
model since this will be used in the remainder of this paper. Within a
given cosmology, the model requires fixing five parameters (see Cole \it et al. \rm
(1994) for further details): (i) the star formation timescale, i.e. the
timescale on which gas that has cooled inside a dark matter halo is turned
into stars; (ii) a feedback parameter which determines the efficiency with
which energy liberated from supernovae and stellar winds reheats gas
cooling inside a halo; (iii) the initial mass function of the stars that
form; (iv) the timescale on which a galaxy falling onto a halo merges with
the central galaxy and (v) the fraction of the stellar mass in stars above
the hydrogen burning limit. To describe the broad morphology of a galaxy
(i.e. its bulge-to-disk ratio) a sixth parameter is required: (vi) a
threshold mass fraction for a merger to turn a disk into a spheroid. It
must be emphasized that these are not fitting parameters but rather
parameters that describe various astrophysical processes, mostly related to
star formation, that are poorly understood. Lacking a full physical
understanding of these processes, it seems sensible to adjust the
parameters so as to obtain as good a match as possible to a few basic
observational data, in our case, to the local galaxy luminosity functions
in the B and K bands.
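To make the flavour of these parametric rules concrete, the sketch below implements a minimal version of parameters (i) and (ii); the functional form is illustrative and is not the exact prescription of Cole \it et al. \rm (1994):

```python
def step_cold_gas(m_cold, dt, tau_star=1.0, beta=0.5):
    """One timestep of a toy star-formation/feedback rule: cold gas is
    turned into stars on a timescale tau_star (parameter (i)), and
    feedback reheats beta units of cold gas per unit mass of stars
    formed (parameter (ii)).  Returns (remaining cold gas, stars formed)."""
    dm_star = m_cold * dt / tau_star
    dm_reheat = beta * dm_star
    return m_cold - dm_star - dm_reheat, dm_star
```

Iterating such rules along a Monte-Carlo halo merger tree, together with a stellar population model, yields the luminosities against which the parameters are calibrated.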
In this article, we summarize some of the lessons learned from
numerical and semi-analytic models of galaxy formation, highlighting a
number of unresolved issues (Section~2). We then deploy our semi-analytic
tools to explore the implications of the recent discovery by Steidel
\it et al. \rm (1996)
of a population of star forming galaxies at redshift $z>3$
(Section~3). We conclude with a brief discussion in Section~4.
\section{A summary of current theoretical issues}
Most theoretical work on galaxy formation is carried out within the
framework of hierarchical clustering and gravitational instability (eg
White \& Rees 1978, Peebles 1980). Within this general picture, the
various relevant processes are understood at different levels. Progress in
several of these areas may be summarized as follows.
\medskip
\noindent $\bullet$ {\it Dark matter halos}.
Processes associated with the gravitational evolution of dark matter halos
are reasonably well understood. This subject has progressed significantly
in the past 15 years as a result of the increased sophistication of N-body
simulations allied to some degree of analytic insight. Thus, in a given
cosmological model, the abundance of dark matter halos, their merging
history and their internal structure can be predicted reliably (Press \&
Schechter 1974, Frenk \it et al. \rm 1988, Lacey \& Cole 1993,
Navarro, Frenk \& White 1996, Cole \& Lacey 1996).
For example, recent high resolution N-body simulations by
Navarro, Frenk \& White (1996) have established that independently of the
cosmological model, dark matter halos of all masses develop a mass density
profile that follows a simple, two-parameter form, scaling like $r^{-1}$
in the central regions and like $r^{-3}$ near the virial radius. The two
parameters of the fit, which can be expressed as the mass and
characteristic density of each halo, turn out to be strongly correlated:
low-mass halos are significantly denser than more massive halos because,
on average, they form earlier. Thus, in effect, the spherically averaged
density profiles of dark matter halos are described by a universal
one-parameter function.
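The universal profile referred to here is, in the notation of Navarro, Frenk \& White (1996), $\rho(r) = \rho_s/[(r/r_s)(1+r/r_s)^2]$. A short numerical check of its limiting slopes:

```python
import math


def nfw_density(r, rho_s=1.0, r_s=1.0):
    """Navarro-Frenk-White profile rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2],
    with rho_s and r_s the characteristic density and scale radius."""
    x = r / r_s
    return rho_s / (x * (1 + x) ** 2)


def log_slope(r, eps=1e-3):
    """Local logarithmic slope d ln(rho) / d ln(r)."""
    return (math.log(nfw_density(r * (1 + eps)))
            - math.log(nfw_density(r))) / math.log(1 + eps)
```

The slope runs smoothly from $-1$ at $r \ll r_s$ to $-3$ at $r \gg r_s$, passing through the isothermal value $-2$ at $r = r_s$.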
\medskip
\noindent $\bullet$ {\it The shape of the luminosity function}. Both
numerical simulations and analytic considerations indicate that the mass
function of galactic halos has a steeper slope at the low-mass end
($\alpha \simeq 2$) than the observed field galaxy luminosity function
($\alpha \simeq 1$) (Loveday \it et al. \rm 1992, but see Marzke \it et al. \rm 1994). The
semi-analytic models, however, have
demonstrated that the faint end of the galaxy luminosity function is determined
by the combined effect of mergers and feedback but, in general, no model
so far has succeeded in producing a faint end slope much flatter than
$\alpha \simeq 1.5$. At the bright end, the galaxy luminosity function cuts
off exponentially, much as observed, as a result of the large cooling time
of gas in massive halos.
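In the Schechter parameterisation used implicitly here, the faint-end behaviour and the exponential bright-end cutoff can be written as follows (sign convention chosen so that $\alpha$ matches the positive slopes quoted above):

```python
import math


def schechter(L, alpha=1.0, L_star=1.0, phi_star=1.0):
    """Schechter luminosity function
    phi(L) = phi* (L/L*)^(-alpha) exp(-L/L*), with alpha > 0 the
    faint-end slope in the convention of the text (alpha ~ 1 observed,
    alpha ~ 2 for the halo mass function)."""
    x = L / L_star
    return phi_star * x ** (-alpha) * math.exp(-x)
```

A steeper slope ($\alpha \simeq 2$) overpredicts faint objects relative to $\alpha \simeq 1$ by a factor $\sim L_*/L$ at the faint end, which is the overabundance that mergers and feedback must suppress.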
\medskip
\noindent $\bullet$ {\it The Tully-Fisher relation}. The Tully-Fisher
relation predicted in semi-analytic models in a variety of cosmologies has
a slope and scatter quite similar to those observed (White \& Frenk 1991,
Kauffmann \it et al. \rm 1993, Lacey \it et al. \rm 1993, Cole \it et al. \rm 1994, Heyl \it et al. \rm 1995).
However, so far it has proved impossible to match simultaneously the
zero-point of this relation and the amplitude of the galaxy luminosity
function. The overall luminosity normalization of the models (parameter
(v) above) can be chosen to match one or the other, but not both. This is
another outstanding problem and results from the overabundance of galactic dark
halos predicted in the models. The problem is particularly severe for
standard CDM, but it is still present in low-density variants of this
cosmology.
\medskip
\noindent $\bullet$ {\it The colours of galaxies}. Standard stellar
population synthesis models and a standard IMF are sufficient to produce
model galaxies with the spread of colours observed in the local population.
This is true in most popular cosmologies except in the mixed dark matter
model in which galaxies are much too young and thus much too blue compared
to observations (Heyl \it et al. \rm 1995).
\medskip
\noindent $\bullet$ {\it The morphologies of galaxies}. N-body/gas dynamic
simulations produce galaxies with spiral disks and bulges (Katz \& Gunn
1991, Steinmetz \& Muller 1995) and merger remnants that resemble
ellipticals (Barnes \& Hernquist 1996, Mihos \& Hernquist 1996). However,
gaseous disks in simulations with realistic initial conditions have much
smaller radii than observed disks because the fragments from which they
form lose angular momentum to the halo as they merge (Navarro \& Benz 1991,
Navarro, Frenk \& White
1995). Thus, contrary to common belief, the origin of the angular momentum
of spiral disks is not yet fully understood. Incorporating a simple
prescription for merger-induced transformations of disks into spheroids in
semi-analytic models reproduces the Dressler (1980) morphology-density
relation (Kauffmann 1995, Baugh \it et al. \rm 1996a). This success provides
suggestive support for the view that accretion of rotating gas and mergers
are the key ingredients in understanding the broad morphological
characteristics of galaxies. The same environmental effects in clusters
that produce the morphology-density relation today are responsible for the
Butcher-Oemler effect (Kauffmann 1996, Baugh \it et al. \rm 1996a).
\medskip
\noindent $\bullet$ {\it The colour-magnitude relation of cluster
ellipticals}. Semi-analytic models tend to produce colour-magnitude
relations with an approximately flat slope and small scatter (Kauffmann
1996, Baugh \it et al. \rm 1996b). This is a counterintuitive result in hierarchical
clustering models in which small objects form first and might therefore
be expected
to be redder. It arises because star formation in subgalactic
fragments generally precedes the assembly of the galaxy by mergers.
Furthermore, elliptical galaxies tend to form from fragments whose mass is
biased towards large values. The traditional argument that the small
scatter in the colour-magnitude relation requires ellipticals to be old
and mergers to be unimportant thus appears to be
incorrect. However, the observed colour-magnitude relation has a small but
non-negligible slope (Bower, Lucey \& Ellis 1992). In the context of current models,
this must arise from metallicity effects which are neglected at
present. It remains a major challenge for the models to reproduce the
observed slope while retaining a small scatter.
\begin{figure}[ht]
{\epsfxsize=11.5truecm \epsfysize=8.truecm
\epsfbox[-50 435 590 720]{nz_bcowie.ps}}
\caption[junk]
{
The redshift distribution of galaxies with magnitudes in the range $22.5 <
B < 24.0$. The solid histogram shows the data of Glazebrook {\it et al}
(1995), while the dashed histogram shows the (more complete) data of Cowie {\it et
al} (1996). The lines show the predictions of the model of Cole {\it
et al} (1994) for a Scalo IMF (solid line) and a Miller-Scalo IMF
(dotted line).
}
\label{fig:dndz}
\end{figure}
\medskip
\noindent $\bullet$ {\it Counts of faint galaxies as a function of
magnitude, redshift and morphology}. A notable success of the semi-analytic
models is the excellent match they provide to the counts of faint galaxies
as a function of magnitude, redshift and morphology. The supporting data
are presented in the papers by White \& Frenk (1991), Lacey \it et al. \rm (1993),
Kauffmann \it et al. \rm (1993), Cole \it et al. \rm (1994), and Baugh {\it et al} (1996a).
The agreement is particularly good in the standard CDM
model but it is also acceptable in low-density CDM models. Particularly
noteworthy is the match to the morphological data from the Hubble Deep
Field discussed by Baugh \it et al. \rm (1996a) and the prediction that faintwards
of $I\simeq 25$ the galaxy counts should become increasingly dominated by
irregulars. Also noteworthy is the successful prediction of the redshift
distribution of $B\simeq 24$ mag galaxies. The model predictions of Cole
\it et al. \rm (1994), published before the observations were made, are compared
with the
recent data of Cowie \it et al. \rm (1996) in Figure~1. This agreement is the most
striking indication so far that the models contain some element of truth.
A consequence of these successes is that the models of Baugh \it et al. \rm (1996a)
also give a reasonable match to the redshift evolution of the luminosity
function recently measured from the CFRS survey by Lilly \it et al. \rm (1995) and
from a combination of surveys by Ellis {\it et al} (1996).
It should be noted, however, that
the good match to faint data is due, in part, to the steep faint end slope
in the model luminosity function for local field galaxies.
\section{The Lyman break galaxies}
Steidel \it et al. \rm (1996) have recently discovered a population of star
forming galaxies at redshift $z\simeq 3-3.5$. In the context of the models
discussed here, these galaxies are among the first objects in
which appreciable star formation has taken place. Because of their great
importance in understanding galaxy formation, we discuss them here in
some detail.
\begin{figure}[ht]
{\epsfxsize=11.7truecm \epsfysize=11.7truecm
\epsfbox[-75 140 590 720]{summary_bw.ps}}
\caption[junk]
{
The properties of Lyman-break or ``UV drop-out'' galaxies identified in our
models using identical selection criteria to those applied to the
observations of Steidel {\it et al} (1996). Results are given for the
standard CDM model (solid lines) and the $\Lambda$CDM model (dashed
lines). The top panel shows the distribution of stellar mass and halo
mass; the middle panel shows the distribution of halo circular velocities;
and the bottom panel shows the distribution of star formation rates. The
arrows in the middle panel indicate the range inferred from the
observations.
}
\label{fig:steidel}
\end{figure}
Candidate high-z galaxies were identified spectroscopically, using $U_n, G$
and $R$ filters (Steidel, Pettini \& Hamilton 1995). At $z\simeq 3$ the
912 \AA\ break produced by the Lyman limit shifts into the $U_n$ filter
passband while, for the roughly flat spectrum characteristic of a
star-forming object, the fluxes in the two other filters are
comparable. Follow-up spectroscopy at Keck revealed that the objects so
identified are indeed star-forming galaxies at $3.0 \lower .75ex \hbox{$\sim$} \llap{\raise .27ex \hbox{$<$}} z \lower .75ex \hbox{$\sim$} \llap{\raise .27ex \hbox{$<$}}
3.5$. Steidel \it et al. \rm find that these galaxies represent 1.3\% of the faint
counts brighter than $R=25$, corresponding to a comoving number density
comparable to that of present day $L_*$ galaxies. The spectra of these
galaxies are similar to those of nearby star-forming regions; their
circular velocities are $250\leq V_c/({\rm km\ s}^{-1}) \leq 450$ (if the line widths of
saturated interstellar lines are assumed to reflect the circular velocity
of the galaxy); and their typical star formation rates are inferred to be
$\sim 2 h^{-2}{\rm M_\odot}$ yr$^{-1}$ for $q_0=0.5$ and $\sim 6 h^{-2}{\rm M_\odot}$ yr$^{-1}$ for
$q_0=0.05$ (where $h$ is Hubble's constant in units of 100~km~s$^{-1}$~Mpc$^{-1}$).
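The cosmology dependence of these inferred rates follows from the luminosity distance: at fixed observed flux the inferred luminosity, and hence the star formation rate, scales as $d_L^2$. The following is an illustrative check (not taken from the paper; the function name is ours) using Mattig's relation for matter-dominated Friedmann models, showing that at $z=3$ the $q_0=0.05$ distance boosts the inferred luminosity by roughly the quoted factor of 3 relative to $q_0=0.5$:

```python
def mattig_dl(z, q0):
    """Luminosity distance in units of c/H0 for a matter-dominated
    Friedmann model (Mattig's relation)."""
    return (q0 * z + (q0 - 1.0) * ((1.0 + 2.0 * q0 * z) ** 0.5 - 1.0)) / q0 ** 2

# Inferred luminosity (and hence SFR) scales as d_L^2 at fixed flux:
ratio = (mattig_dl(3.0, 0.05) / mattig_dl(3.0, 0.5)) ** 2
print(round(ratio, 1))  # ~2.8, close to the quoted factor of 3
```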
At first sight, the existence of a sizeable population of massive
star-forming galaxies at such high redshifts may appear surprising,
particularly in the context of the standard $\Omega=1$, $h=0.5$ CDM
cosmology in which, as has been emphasized for a number of years, galaxy
formation is a relatively recent phenomenon (Frenk \it et al. \rm 1995). Our
semi-analytic machinery allows us to investigate in detail whether galaxies
with the required properties occur in a given cosmological model. Here we
present results for our standard CDM model (normalised to
$\sigma_8=0.67$ so as to give the observed local abundance of rich galaxy
clusters) and for a flat
COBE-normalized ``$\Lambda$CDM'' model ($\Omega=0.3$, $\Lambda=0.7$,
$h=0.6$, $\sigma_8=0.97$). Further details of this analysis and results for
other cosmologies will be presented in a forthcoming paper.
Following our general philosophy, we apply our fully specified model,
i.e. the model in which all free parameters have been previously fixed by
reference to local galaxy data, in essence the model published by Cole {\it
et al} (1994). The only change we have made is to assume a Miller-Scalo
rather than a Scalo IMF. As Cole \it et al. \rm discuss, the choice of IMF has
little effect on the properties of the local galaxy population but it does
affect the properties of galaxies at high redshift.
We first selected galaxies using exactly the same
filters and colour criteria as Steidel {\it et al.}, taking into account the effects of
absorption by intervening cold gas (Madau 1995). These criteria did indeed pick
out galaxies at $2.8 \lower .75ex \hbox{$\sim$} \llap{\raise .27ex \hbox{$<$}} z \lower .75ex \hbox{$\sim$} \llap{\raise .27ex \hbox{$<$}} 3.5$. The standard CDM model produced
2400 galaxies per square degree brighter than $R=25$ satisfying the colour
criteria, of which 1200 lie in the redshift interval $z=3-3.5$. The
corresponding numbers in the $\Lambda$CDM model are 3700 and 1700. From
their 31 robust candidates, Steidel \it et al. \rm estimated $1400\pm 300$ galaxies
per square degree in this redshift interval.
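The colour selection sketched above amounts to a simple cut in the $(U_n-G,\ G-R)$ plane: a strong break across $U_n$ with a roughly flat continuum redward of it. The thresholds below are illustrative placeholders, not the criteria actually used by Steidel {\it et al}:

```python
def is_dropout(u_n, g, r, ug_min=1.5, gr_max=1.2, r_lim=25.0):
    """Flag a Lyman-break candidate: a strong U_n-G break, a roughly
    flat G-R continuum, and a magnitude brighter than the survey limit.
    All thresholds are illustrative, not the published criteria."""
    return (u_n - g) > ug_min and (g - r) < gr_max and r <= r_lim

# A red U_n-G colour with a flat continuum redward of the break passes:
print(is_dropout(u_n=27.0, g=24.5, r=24.3))  # True
# A weak break does not:
print(is_dropout(u_n=25.0, g=24.5, r=24.3))  # False
```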
The properties of our model ``Lyman-break'' galaxies are displayed in
Figure~2. In the standard CDM model their typical stellar masses are a few
times $10^9 h^{-1}{\rm M_\odot}$ ($\sim 10^{10}h^{-1}{\rm M_\odot}$ in the $\Lambda$CDM model)
and these galaxies inhabit dark matter halos with typical mass $10^{12}
h^{-1}{\rm M_\odot}$. The velocity dispersions of the standard model galaxies are
remarkably similar to those measured by Steidel
\it et al. \rm Finally, the model star formation rates also agree well with the
rates inferred by
Steidel \it et al. \rm (Star formation rates are not directly measured in the data
but inferred from the $R$ magnitude assuming an IMF and a stellar
population synthesis model which are similar, but not identical, to those
in our galaxy formation model.)
The success of our CDM models of galaxy formation in accounting for the
observed abundance and overall properties of the Steidel \it et al. \rm galaxies is
both striking and surprising. However, several caveats are in
order. Firstly, the predicted abundance of star-forming galaxies at high
redshift depends sensitively on at least two model assumptions: the IMF and
the normalisation of the linear density fluctuation spectrum,
$\sigma_8$. Adopting a Scalo rather than a Miller-Scalo IMF reduces the
total number of faint R-band counts by only 20\% but it reduces the number
of Lyman-break galaxies in the redshift interval of interest by about a
factor of 10. Similarly, reducing $\sigma_8$ from our adopted value of 0.67
in the standard CDM model to 0.5 reduces the number of high redshift
galaxies also by a factor of approximately 10. These uncertainties dwarf
the changes produced by varying the other cosmological parameters of the model.
\begin{figure}[ht]
{\epsfxsize=12.truecm \epsfysize=7.truecm
\epsfbox[-100 435 640 720]{carlos_mz.ps}}
\caption[junk]
{
The mass in stars formed by redshift z as a fraction of the final mass in
stars at redshift zero. The solid line shows results for the standard CDM
model, while the dotted lines show results for flat COBE-normalised
cosmological models
with $\Omega_0=0.3$, $\Lambda=0.7$ and $h=0.6$. The lower dotted curve has
a spectral shape parameter $\Gamma=0.18$ and the upper dotted curve has
$\Gamma=0.3$. In all cases, less than $10 \% $ of the total mass of stars
has formed by $z=3.5$.
}
\label{fig:mstarsz}
\end{figure}
Regardless of the uncertainties just discussed, the Lyman-break galaxies of
Steidel \it et al. \rm correspond to the first objects in our models in which
significant star formation is taking place. Figure~3 shows how the overall
stellar population builds up in our two models (and in a variant of the
$\Lambda$CDM model in which the initial power spectrum has more
power on galactic scales.) In all cases, only a small fraction of the final stellar component
of the Universe has formed by $z=3.5$. The standard CDM and $\Lambda$CDM
models have almost identical star formation
histories and both have formed less than 5\% of the total stellar
population by $z=3.5$. In the low-density model with more small scale power
this fraction is still less than 10\%. Thus, in the class of models we are
considering, the redshift $z\simeq 3.5$ at which the Steidel \it et al. \rm Lyman
break galaxies are found is close to the onset of galaxy formation. Very
few bright objects should exist beyond this redshift.
\begin{figure}[ht]
{\epsfxsize=13.truecm \epsfysize=11.truecm
\epsfbox[0 150 590 710]{sfr.ps}}
\caption[junk]
{
The distribution of star formation rates at four redshifts: $z=3$
(dot-dashed curves), $z=2.35$ (dashed curve), $z=1$ (dotted curves) and
$z=0$ (solid curves). The upper panel shows results for our standard CDM
model and the lower panel for our $\Lambda$CDM model. The distribution of
star formation rates evolves slowly between the epochs shown and is highest
at $z=1$.
}
\label{fig:sfr}
\end{figure}
Our predicted (differential) star formation rates at
four different redshifts are shown in Figure~4. The upper panel gives
results for the standard CDM model and the lower panel for the
$\Lambda$CDM model. The lower abscissa is labelled by the actual star
formation rate while the corresponding 1500 \AA\ luminosities are given in
the upper abscissa. In both cosmological models, the distribution of star
formation rates has a similar shape at all times, but the rates are higher
at $z=1$ than at $z=3$ or $z=0$. The evolution in the star formation rate
is relatively mild: over most of the range, the comoving abundance of galaxies
varies by less than an order of magnitude between the peak at $z\simeq 1$
and the present.
We can interrogate our galaxy formation model to find out what sort of
objects the Lyman-break galaxies eventually turn into. Two examples taken
from our standard CDM model are shown in the ``tree diagrams'' of
Figure~5. Redshift decreases downward in these plots and the width of the
shaded region is proportional to the mass in stars at each epoch. Stars
generally form in subgalactic fragments at high redshift which grow larger
as gas cools onto a disk and turns into stars. Fragments can merge together
and, if the merger is massive enough, the disks turn into a spheroid; a new
disk may grow by subsequent accretion of gas (Kauffmann \it et al. \rm 1993,
Baugh \it et al. \rm 1996b). The galaxy on the
left of Figure~5 experienced only two very small mergers at $z\simeq 3$ and
grew almost entirely by accretion. This object ends up as a late type
spiral galaxy with a very small bulge. The galaxy on the right formed by
the merger of several fragments, including a major merger at $z\simeq 0.3$.
This galaxy ends up as an elliptical. The asterisks at high $z$ represent
Lyman-break objects that satisfy the Steidel \it et al. \rm selection
criteria. The spiral galaxy is the descendant of a single fairly massive
Lyman-break object; the elliptical harbours the descendants of two less
massive Lyman-break objects which merged at relatively recent epochs.
The present-day luminosity function of galaxies which had a Lyman-break
progenitor at $3<z<3.5$ is shown in Figure~6 for our two cosmological
models and compared with estimates of the local field luminosity function.
In both models the descendants populate the bright end of the luminosity
function and, in the standard CDM model, most galaxies with $M_B - 5\log h
\simeq -21$ once harboured a Lyman-break object.
\begin{figure}[ht]
\begin{picture}(300,200)
\put(5,0)
{\epsfxsize=6.3truecm \epsfysize=6.3truecm
\epsfbox[20 140 590 720]{tree1.ps}}
\put(190,0)
{\epsfxsize=6.3truecm \epsfysize=6.3truecm
\epsfbox[20 140 590 720]{tree2.ps}}
\end{picture}
\caption[junk]
{
Star formation histories of two present day galaxies that contained a
Lyman-break progenitor (marked by an asterisk) satisfying the selection
criteria of Steidel {\it et al} (1996). Redshift decreases downward and
the width of the shaded region is proportional to the mass in stars at
each epoch. The galaxy on the left ends up as a late-type spiral; the
galaxy on the right ends up as an elliptical.
}
\label{fig:tree}
\end{figure}
\begin{figure}[ht]
{\epsfxsize=12.truecm \epsfysize=7.truecm
\epsfbox[-80 400 590 720]{lum_fun.ps}}
\caption[junk]
{
Present-day luminosity functions. The squares show estimates of the field
galaxy luminosity function in the local universe by Loveday \it et al. \rm (1992;
open symbols) and Marzke \it et al. \rm (1994; solid symbols). The curves show
predicted galaxy luminosity functions in the standard CDM (solid curves)
and $\Lambda$CDM (dotted curves) models. The curves that extend over the
entire range of magnitudes are the predicted local field galaxy luminosity
functions. The curves near the bottom left of the diagram are the
predicted present-day luminosity functions of galaxies which had a
Lyman-break progenitor at $3<z<3.5$. Many of the brightest galaxies seen
today harboured a Lyman-break object at high redshift.
}
\label{fig:lf}
\end{figure}
\section{Conclusions}
Theoretical studies of galaxy formation, based on numerical simulations and
semi-analytic techniques are an essential complement to observational studies
of the high redshift universe. Such modelling is required in order to
establish the connection between different types of data and their relation
to the physics of galaxy formation in a cosmological setting. Although
some of the physical processes involved, particularly those associated with
star formation, are very complex and poorly understood, progress can be
made by complementing a physically based description with heuristic rules
to describe star formation.
Semi-analytic models now exist in which the detailed properties of the
galaxy population at all epochs can be predicted {\it ab initio}, starting
from a cosmological spectrum of density fluctuations. The various physical
processes can be characterised by a minimum of free parameters, all of
which are fixed by reference to a small subset of the data for local
galaxies. We have illustrated the predictive power of these semi-analytic
models by comparing our published predictions for the redshift distribution
of galaxies of $B\simeq 24$ mag with recent data from Cowie \it et al. \rm (1996)
(Figure~1). The excellent agreement between them is the most striking
demonstration so far of the virtues of this approach.
In general, the best understood aspects of galaxy formation are those
related to their dark matter component. The abundance, merging history and
internal structure of galactic halos are all reasonably well established
in a variety of cosmological models of hierarchical clustering. Some
understanding also exists of the physical basis of observable
properties such as the general shape of the galaxy luminosity function,
the slope and scatter of the Tully-Fisher relation, the general features
of the colour-magnitude diagram, the gross morphological properties of
galaxies in different environments, and the counts of faint galaxies as a
function of magnitude, redshift and morphology. All of these properties
can be explained, at least at some level, within a broad class of CDM
cosmologies.
Several fundamental properties of the galaxy population remain poorly
understood. Examples include the faint end slope of the field luminosity
function which is predicted to be significantly steeper than the standard
estimate (Loveday \it et al. \rm 1992). None of the existing models can simultaneously
match the zero-point of the Tully-Fisher relation and the overall
amplitude of the galaxy luminosity function, a problem which can be traced
back to an overabundance of dark matter halos predicted in all CDM
cosmologies. While the small scatter in the observed colour-magnitude
relation for cluster ellipticals does not seem incompatible with
hierarchical clustering, none of the models published to date can
account for the measured slope in this relation.
In spite of the unsolved problems just mentioned, semi-analytic modelling
remains a powerful tool to interpret the recent exciting new data on the high
redshift universe. As an example, we presented
in this article results from new calculations which attempt to identify
the evolutionary status of the Lyman-break galaxies at $z\simeq 3-3.5$
recently discovered by Steidel \it et al. \rm (1996). Perhaps surprisingly, we
found that the abundance and global properties of these objects are almost
exactly what is predicted by our fiducial model of galaxy formation based
on the standard CDM cosmology. Although the predicted abundance is, in
fact, quite sensitive to certain model assumptions such as the IMF and the
amplitude of mass fluctuations, in general, these galaxies are among the
first objects in which appreciable star formation is taking place. Within
a broad class of CDM models, it appears that the Steidel \it et al. \rm objects
signal the onset of significant galaxy formation. These objects evolve
into the population of bright normal galaxies seen today. Our models seem
to imply that the long awaited discovery of the early phases of normal
galaxy evolution has now taken place.
\section{Introduction}
Motion planning is a fundamental problem in robotics and control, which aims at finding paths or trajectories that guide mobile robots from their initial conditions to their respective destinations while avoiding collisions with obstacles and other robots. One of the most typical robotic systems is the nonholonomic mobile robot, also referred to as the unicycle-type vehicle. Although often overlooked, two facts regarding the nonholonomic mobile robot should be stated at the outset. One is that the theoretical model of such a robot is essentially a rigid body rather than a point of mass \cite{Bloch2015Nonholonomic}, since the robot states include both position and orientation (or attitude). Unlike a 3-degree-of-freedom (DOF) rigid body moving freely on a plane, such a robot has no lateral velocity due to the nonholonomic constraint, which naturally leads to the other fact: the nonholonomic mobile robot is an underactuated system \cite{Bullo2005Geometric}. To be more specific, the robot has three DOFs (two translational DOFs and one rotational DOF), while possessing only two control inputs (one linear velocity and one angular velocity). These two facts make motion planning for nonholonomic mobile robots more challenging and demanding.
Although a variety of methodologies have been proposed for motion planning, such as roadmaps \cite{Kavraki1996Probabilistic,Bhattacharya2008Roadmap,Lehner2018Repetition}, cell decomposition \cite{Cai2009Information,Zhang2008Efficient,Cowlagi2012Multiresolution}, and sampling-based algorithms \cite{Karaman2011Sampling,Jaillet2010Sampling,Oh2021Chance-Constrained}, they cannot be applied to nonholonomic mobile robots, since they either omit the kinematics or consider only a point-mass model. Regarding the nonholonomic model, some researchers employ optimization-based methods \cite{Hussein2008Optimal,Hausler2016Energy,Zhao2021Pareto,Bloch2021Dynamic,Li2021Efficient,Cichella2021Optimal,Zhao2022Scalable,Li2021Optimal}. For example, Bloch et al. in \cite{Bloch2021Dynamic} formulate the problem as dynamic interpolation on Riemannian manifolds and provide the necessary conditions for optimality. Li et al. in \cite{Li2021Efficient} propose a prioritized optimization method to compute the planning results efficiently. In \cite{Cichella2021Optimal}, Cichella et al. utilize Bernstein polynomials to approximately transform the motion planning into a discrete optimization. Admittedly, optimization methods have the advantage of easily handling unicycle models, because the nonholonomic constraints can always be included as optimization constraints. Nevertheless, the feasibility of such optimization problems cannot be guaranteed, or they suffer from an extremely heavy computational burden. Another drawback is that the control inputs derived from optimization are open-loop, depending only on time, and are therefore not robust to disturbances.
Given the above shortcomings, researchers are motivated to investigate feedback motion planning algorithms for mobile robots. A typical closed-loop methodology is the velocity vector field, where a state-dependent velocity vector is defined at every point in the configuration space and the integral curves of the vector field converge to the goal point. The most common vector field is defined by the gradient of a potential function, also referred to as the potential field \cite{Ge2002Dynamic,Huang2009Velocity,Valbuena2012Hybrid,Karagoz2014Coordinated,Kovacs2016A_novel,Tian2021An_Overall}, but one of its inherent limitations is the possible existence of local minima other than the desired state \cite{Koren1991Potential}. Particular forms of potential function overcome this drawback, namely harmonic functions \cite{Kim1992Real-time,Garrido2010Garrido,Masoud2012Motion} and navigation functions \cite{Rimon1992Exact,Loizou2008Navigation,Li2019Navigation}. However, the former has demanding computational complexity related to PDEs, while the latter is difficult to implement since the lower bound on its tuning parameter that ensures no local minima is unknown in advance.
As a matter of fact, the velocity vector field does not necessarily have to be given by a potential gradient. Instead, it can be defined directly over the configuration space. To the best of our knowledge, few works focus on motion planning via non-gradient-based vector fields, except two seminal papers by Lindemann et al. \cite{Lindemann2009Simple} and Panagou \cite{Panagou2017A_Distributed}. In \cite{Lindemann2009Simple}, the environment with obstacles is decomposed into convex polytopes, in which simple local vector fields are defined and smoothly blended to form a global vector field convergent to the desired point. The author in \cite{Panagou2017A_Distributed} proposes a family of 2-D analytic vector fields, which exhibit different patterns (such as attractive or repulsive) depending on the value of a parameter, so that the overall vector field can finally be obtained by a suitable blending design.
Whether open-loop or feedback, the aforementioned results rarely consider a significant yet implicit fact typically occurring in real-world scenarios. In practice, apart from the position constraints, the orientation of a robot is also usually required to reach a desired direction at the terminal time. For instance, in a multi-robot surveillance mission, the final orientation of each robot should point in a certain direction so as to obtain the largest overall surveillance area. Similarly, the attitudes of missiles are generally specified in terminal guidance so as to achieve better coordinated-attack performance. Thus, these practical demands strongly motivate the incorporation of orientation constraints into motion planning. Such a motivation is further strengthened from a theoretical point of view. As mentioned above, the model of a nonholonomic mobile robot is a rigid body. Hence, serving as a state, the attitude angle should also be steered to a desired value, just like the position, rather than being left arbitrary; this can be referred to as full-state motion planning. Since the nonholonomic mobile robot is an underactuated system, the biggest challenge lies in how to achieve full-state planning with fewer control inputs.
Motivated by the above discussions, in this paper we consider the simultaneous position and orientation planning of nonholonomic multi-robot systems by designing a non-gradient-based velocity vector field. To begin with, the mobile robot is modelled as a rigid body with nonholonomic constraints rather than a point of mass. More importantly, besides the desired position constraint, we take into account the orientation constraint at the final time instant as well. Next, different from common static vector fields on a 2-D plane, we propose a novel dynamic vector field (DVF) in the sense that the dynamics of the attitude angle are introduced into the vector field. This implies that the velocity direction at a certain point is decided not only by the position but also by the orientation of the robot that passes through such a point. Thus, by moving along the integral curve of the DVF, the robot can reach the specified position at the terminal time, and meanwhile the attitude angle can converge to the desired value following the orientation dynamics. Subsequently, the control inputs or velocities are designed based on the DVF, where the feedback of an extra angle in the body-fixed frame is brought in as an additional angular velocity, with the result that the robot orientation can be tuned along the direction of the DVF to deal with the lack of lateral velocity. Moreover, under the framework of the DVF, the problems of obstacle avoidance and mutual-robot-collision avoidance are studied by proposing a circular vector field, where robots move along the tangential directions of the round obstacles so as to evade collisions. Note that both kinds of collision avoidance problems can be solved by a single circular vector field, which greatly simplifies the design of planned trajectories.
Apart from the dynamical characteristics, the following merits also distinguish our method from the existing motion planning results which also utilize non-gradient-based vector fields. As opposed to \cite{Lindemann2009Simple}, the proposed dynamic vector field does not need to be derived from a cell decomposition of the whole environment, so no advanced high-level discrete motion planning is required in this paper. In contrast to \cite{Panagou2017A_Distributed}, the dynamic vector field is global over the state space in the sense that the initial and final configurations can both be chosen arbitrarily, while the vector field in \cite{Panagou2017A_Distributed} has a separatrix (or mirror line) near which the integral curves can possibly diverge.
The organization of this paper is given below. Section~\ref{sec_preli} provides mathematical preliminaries and formulates the motion planning problem. Section~\ref{sec_DVF} proposes the dynamic vector field approach, under which the problems of obstacle avoidance and collision avoidance are solved by designing a circular vector field in Section~\ref{sec_ob_avoid} and Section~\ref{sec_co_avoid}, respectively. Section~\ref{sec_sim} gives several numerical simulation examples to verify the effectiveness of the proposed vector fields. Finally, conclusions end the whole paper in Section~\ref{sec_con}.
\section{Preliminaries and Problem Statement}\label{sec_preli}
Several commonly-used notations are defined in advance. The identity matrix in $\mathbb{R}^n$ is denoted by $\bm{I}_n$. The base vectors of $\mathbb{R}^3$ are denoted by $\bm{e}_1,\bm{e}_2,\bm{e}_3$. The symbol $\bm{0}_{m\times n}$ represents a matrix in $\mathbb{R}^{m\times n}$ with all zero components. The Euclidean norm of a vector is denoted by $\|\cdot\|$. Moreover, variables denoting vectors and matrices are written in bold, while those denoting scalars are not.
As mentioned in the Introduction, the model of the nonholonomic mobile robot is a rigid body. Thus, before providing the nonholonomic model, we firstly consider a fully actuated planar rigid body moving in a 2-D Euclidean space. Let $\bm{\mathcal{F}}_{\mathcal{E}}$ denote the earth-fixed frame, and let $\bm{\mathcal{F}}_{\mathcal{B}}$ represent the body-fixed frame, which is attached to the center of mass of the rigid body. The position of the rigid body in $\bm{\mathcal{F}}_{\mathcal{E}}$ is given by a vector $\bm{p}=[x\ \ y]^{\rm T}\in\mathbb{R}^2$, and the attitude is specified by a rotation matrix $\bm{R}\in\mathbb{R}^{2\times 2}$, which depicts the rotation of $\bm{\mathcal{F}}_{\mathcal{B}}$ relative to $\bm{\mathcal{F}}_{\mathcal{E}}$. Herein, the rotation matrix $\bm{R}$ can be parameterized by a scalar $\theta\in[-\pi,\pi]$, that is,
\begin{equation}\label{eq_rotation_matrix}
\bm{R}=\begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix},
\end{equation}
where $\theta$ is interpreted as the attitude angle of the rigid body. Let $\omega\in\mathbb{R}$ and $\bm{v}=[v_x\ v_y]^{\rm T}\in\mathbb{R}^2$ denote the rigid body's angular velocity and linear velocity in the body-fixed frame $\bm{\mathcal{F}}_{\mathcal{B}}$. Then, the kinematics of the fully actuated rigid body can be given by
\begin{subequations}\label{eq_kine_fully}
\begin{align}
\dot{x} &= v_x\cos\theta-v_y\sin\theta, \\
\dot{y} &= v_x\sin\theta+v_y\cos\theta, \\
\dot{\theta} &= \omega.
\end{align}
\end{subequations}
Regarding nonholonomic mobile robots, since the wheels do not slip sideways, the robot cannot move laterally. In other words, the velocity along the $Y_{\mathcal{B}}$-axis of the body-fixed frame $\bm{\mathcal{F}}_{\mathcal{B}}$ is always zero, that is, $v_y=0$. Such a constraint is called a nonholonomic constraint, since its reformulation in the earth-fixed frame $\bm{\mathcal{F}}_{\mathcal{E}}$, namely the differential equation
\begin{equation*}
\dot{x}\sin\theta-\dot{y}\cos\theta=0
\end{equation*}
cannot be integrated into an algebraic equation. Thus, according to (\ref{eq_kine_fully}), the kinematic model of a nonholonomic mobile robot reduces to
\begin{subequations}\label{eq_kine_nonho}
\begin{align}
\dot{x} &= v_x\cos\theta, \\
\dot{y} &= v_x\sin\theta, \\
\dot{\theta} &= \omega.
\end{align}
\end{subequations}
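As a quick numerical illustration of the model above (a hypothetical sketch with helper names of our own choosing, not part of the planner developed below), the right-hand side of the nonholonomic kinematics satisfies the constraint $\dot{x}\sin\theta-\dot{y}\cos\theta=0$ identically, whatever the inputs:

```python
import math

def unicycle_rates(state, v_x, omega):
    """Right-hand side of the nonholonomic kinematics above."""
    x, y, theta = state
    return (v_x * math.cos(theta), v_x * math.sin(theta), omega)

# The nonholonomic constraint xdot*sin(theta) - ydot*cos(theta) = 0
# holds for any admissible inputs:
state = (1.0, -2.0, 0.7)
xdot, ydot, _ = unicycle_rates(state, v_x=1.3, omega=0.4)
print(xdot * math.sin(state[2]) - ydot * math.cos(state[2]))  # 0.0
```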
Given the fact that the nonholonomic mobile robot moves in an environment possibly populated with obstacles, we assume that each obstacle can be bounded by a circular region with radius $r_o>0$. Moreover, suppose that multiple nonholonomic mobile robots move in a common workspace simultaneously. Therefore, the potential collisions with obstacles and among robots should both be taken into account in the motion planning process. To acquire the position information of obstacles and other robots, the robot $i$ ($i=1,\cdots,N$) is assumed to have a circular sensing range with radius $R_s$, which can be defined by
\begin{equation*}
\mathcal{S}_i=\left\{\bm{q}\in\mathbb{R}^2\ \big| \ \|\bm{q}-\bm{p}_i\|\leq R_s \right\},
\end{equation*}
where $\bm{p}_i$ is the current position vector of the robot $i$. Then, once obstacles or other robots enter the sensing region $\mathcal{S}_i$, robot $i$ can obtain their position information.
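Under this sensing model, the set of detected neighbours reduces to a simple distance test. The following is a sketch with assumed helper names, not part of the formal development:

```python
def sensed(p_i, positions, R_s):
    """Indices of the agents/obstacles lying inside the circular
    sensing region of radius R_s centred at position p_i."""
    return [j for j, q in enumerate(positions)
            if (q[0] - p_i[0]) ** 2 + (q[1] - p_i[1]) ** 2 <= R_s ** 2]

# Points at distances 1, 5 and 10 from the origin; only the first
# two fall inside a sensing radius of 5:
print(sensed((0.0, 0.0), [(1.0, 0.0), (3.0, 4.0), (10.0, 0.0)], R_s=5.0))  # [0, 1]
```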
The motion planning problem for multiple nonholonomic mobile robots can be formulated below.
\emph{Problem Statement:} Regarding $N$ nonholonomic mobile robots described by (\ref{eq_kine_nonho}), design linear velocity $v_{ix}$ and angular velocity $\omega_i$ for the robot $i$ ($i=1,\cdots,N$), such that each controlled trajectory of the robots, which starts from the initial condition $(x_{i0},y_{i0},\theta_{i0})$, can reach the specified destination $(x_{id},y_{id},\theta_{id})$, and meanwhile avoid the obstacles and mutual collisions with other robots.
\section{Dynamic Vector Fields}\label{sec_DVF}
In this section, we construct a non-gradient-based vector field to solve the motion planning problem for a nonholonomic mobile robot in an obstacle-free environment. Without loss of generality, we assume that the destination of the robot $(x_d,y_d,\theta_d)$ is chosen as $(0,0,0)$. For the sake of simplicity, we first rewrite the kinematics (\ref{eq_kine_fully}) and (\ref{eq_kine_nonho}) in a more compact form.
Define the following matrix
\begin{equation}\label{eq_confi_matrix}
\bm{h}=\begin{bmatrix}
\bm{R} & \bm{p} \\
\bm{0}_{1\times2} & 1
\end{bmatrix}=
\begin{bmatrix}
\cos\theta & -\sin\theta & x \\
\sin\theta & \cos\theta & y \\
0 & 0 & 1
\end{bmatrix},
\end{equation}
which is uniquely decided by the rotation matrix $\bm{R}$ and the position vector $\bm{p}$, and we call $\bm{h}$ the configuration of the rigid body. Similarly, the velocity can be formulated in a matrix as
\begin{equation}\label{eq_velo_matrix}
\bm{\eta}=\begin{bmatrix}
\hat{\bm{\omega}} & \bm{v} \\
\bm{0}_{1\times2} & 0
\end{bmatrix}=
\begin{bmatrix}
0 & -\omega & v_x \\
\omega & 0 & v_y \\
0 & 0 & 0
\end{bmatrix},
\end{equation}
where the hat operator $\hat{\cdot}$ defines a map from a scalar to a skew symmetric matrix in $\mathbb{R}^{2\times2}$. Therefore, based on (\ref{eq_confi_matrix}) and (\ref{eq_velo_matrix}), the fully actuated kinematic model (\ref{eq_kine_fully}) can be redefined by
\begin{equation}\label{eq_kine_fully_m}
\dot{\bm{h}}=\bm{h}\bm{\eta},
\end{equation}
where the configuration $\bm{h}$ and the velocity $\bm{\eta}$ serve as the state and control input, respectively. Correspondingly, the nonholonomic kinematic model can be rewritten as (\ref{eq_kine_fully_m}) with an additional nonholonomic constraint
\begin{equation*}
\bm{e}_2^{\rm T}\bm{\eta}\bm{e}_3=0,
\end{equation*}
where $\bm{e}_i$ ($i=1,2,3$) are standard basis in $\mathbb{R}^3$.
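The matrix form (\ref{eq_kine_fully_m}) can be checked against the component kinematics numerically. The sketch below (with hypothetical state and velocity values) assumes the standard fully actuated planar kinematics $\dot{x}=v_x\cos\theta-v_y\sin\theta$, $\dot{y}=v_x\sin\theta+v_y\cos\theta$, $\dot{\theta}=\omega$ for (\ref{eq_kine_fully}), and verifies that the translational entries of $\bm{h}\bm{\eta}$ reproduce exactly $\dot{x}$ and $\dot{y}$, while $\bm{e}_2^{\rm T}\bm{\eta}\bm{e}_3$ extracts $v_y$.

```python
import numpy as np

def h_matrix(x, y, theta):
    # configuration matrix h of (eq_confi_matrix)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def eta_matrix(v_x, v_y, omega):
    # body-velocity matrix eta of (eq_velo_matrix)
    return np.array([[0.0, -omega, v_x],
                     [omega, 0.0, v_y],
                     [0.0, 0.0, 0.0]])

# Hypothetical state and inputs for the check.
x, y, theta = 0.3, -0.8, 0.6
v_x, v_y, omega = 1.2, 0.4, 0.5
h_dot = h_matrix(x, y, theta) @ eta_matrix(v_x, v_y, omega)
```

The translational column of `h_dot` equals the earth-frame velocity, and the rotational block equals $\frac{d}{dt}\bm{R}$ with $\dot{\theta}=\omega$.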
To design the vector field, a transformation is introduced for the configuration $\bm{h}$. By utilizing the matrix logarithmic map proposed in \cite{Bullo2005Geometric,Bullo1995Proportional}, we obtain the logarithm of configuration $\bm{h}$ as follows
\begin{equation}\label{eq_Upsi_log_h_def}
\bm{\Upsilon}=\log(\bm{h})=\begin{bmatrix}
\hat{\bm{\theta}} & \bm{\varphi}(x,y,\theta) \\
\bm{0}_{1\times 2} & 0
\end{bmatrix},
\end{equation}
where $\hat{\bm{\theta}}$ is a $2\times 2$ skew symmetric matrix with respect to $\theta$, similar to the form of $\hat{\bm{\omega}}$ in (\ref{eq_velo_matrix}), and the vector $\bm{\varphi}(x,y,\theta)$ is given by
\begin{equation}\label{eq_phi_def}
\bm{\varphi}(x,y,\theta)=\frac{\theta}{2}
\begin{bmatrix}
\frac{1+\cos\theta}{\sin\theta} & 1 \\
-1 & \frac{1+\cos\theta}{\sin\theta}
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix}
\triangleq
\begin{bmatrix}
\varphi_1(x,y,\theta) \\
\varphi_2(x,y,\theta)
\end{bmatrix}.
\end{equation}
Once defining a matrix
\begin{equation}\label{eq_Xi_for_phi}
\bm{\Xi}=\frac{\theta}{2}
\begin{bmatrix}
\frac{1+\cos\theta}{\sin\theta} & 1 \\
-1 & \frac{1+\cos\theta}{\sin\theta}
\end{bmatrix},
\end{equation}
the formula (\ref{eq_phi_def}) can be simplified as
\begin{equation}\label{eq_phi_def_simp}
\bm{\varphi}(x,y,\theta)=\bm{\Xi}\bm{p},
\end{equation}
where $\bm{p}$ is the position vector. Then, we refer to $\bm{\Upsilon}$ given in (\ref{eq_Upsi_log_h_def}) as the transformed configuration, which is uniquely defined by the attitude angle $\theta$ and position vector $\bm{p}$. According to \cite{Bullo1995Proportional}, under the transformed configuration $\bm{\Upsilon}$, the kinematics (\ref{eq_kine_fully_m}) can be reformulated as
\begin{equation}\label{eq_dot_log_h}
\dot{\bm{\Upsilon}}=\bm{\mathcal{M}}(\bm{\Upsilon})\bm{\eta},
\end{equation}
where the transformed configuration $\bm{\Upsilon}$ is the state, the velocity $\bm{\eta}$ is the control input, and $\bm{\mathcal{M}}(\bm{\Upsilon})\in\mathbb{R}^{3\times 3}$ is a state-dependent matrix satisfying
\begin{equation}\label{eq_M_Upsi_property}
\bm{\mathcal{M}}(\bm{\Upsilon})\bm{\Upsilon}=\bm{\Upsilon}.
\end{equation}
We recommend \cite{Bullo1995Proportional} for readers who are interested in more information about the kinematics (\ref{eq_dot_log_h}) and matrix $\bm{\mathcal{M}}(\bm{\Upsilon})$.
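Although $\bm{\mathcal{M}}(\bm{\Upsilon})$ is not reproduced here, the closed-form logarithm (\ref{eq_Upsi_log_h_def})--(\ref{eq_phi_def}) itself can be checked numerically: exponentiating $\bm{\Upsilon}$ should recover $\bm{h}$. The sketch below performs this round trip with hypothetical test values, a plain Taylor-series matrix exponential, and the assumption $\sin\theta\neq 0$ so that (\ref{eq_Xi_for_phi}) is well defined.

```python
import numpy as np

def Xi(theta):
    # matrix Xi of (eq_Xi_for_phi); requires sin(theta) != 0
    k = (1.0 + np.cos(theta)) / np.sin(theta)
    return 0.5 * theta * np.array([[k, 1.0], [-1.0, k]])

def log_h(x, y, theta):
    # closed-form Upsilon = log(h) from (eq_Upsi_log_h_def), (eq_phi_def)
    phi = Xi(theta) @ np.array([x, y])
    return np.array([[0.0, -theta, phi[0]],
                     [theta, 0.0, phi[1]],
                     [0.0, 0.0, 0.0]])

def expm_taylor(A, terms=30):
    # plain Taylor-series matrix exponential, adequate for small matrices
    E, T = np.eye(3), np.eye(3)
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

# Hypothetical configuration for the round-trip check.
x, y, theta = 0.5, -0.3, 0.7
h = np.array([[np.cos(theta), -np.sin(theta), x],
              [np.sin(theta),  np.cos(theta), y],
              [0.0, 0.0, 1.0]])
h_rec = expm_taylor(log_h(x, y, theta))
```

The reconstructed matrix `h_rec` agrees with `h` to numerical precision, confirming that (\ref{eq_phi_def}) is the translational part of the matrix logarithm on this branch.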
\begin{remark}
As a matter of fact, the formulas (\ref{eq_confi_matrix})-(\ref{eq_M_Upsi_property}) all originate from the geometric control theory of mechanical systems. Specifically, the configuration $\bm{h}$ is an element of the Lie group ${\rm SE}(2)$; the body velocity $\bm{\eta}$ is a twist in the Lie algebra $\mathfrak{se}(2)$; and $\bm{\Upsilon}$ is called the exponential coordinate of $\bm{h}$, which also lies in the Lie algebra $\mathfrak{se}(2)$ but represents the configuration. Since the framework of geometric control is established on fully actuated rigid bodies, we introduce the related preliminaries at the beginning of Section~\ref{sec_preli}, before turning to nonholonomic mobile robots. Although we subsequently design the vector field with these variables from geometric control, for the sake of readability, concepts such as Lie groups and Lie algebras are not introduced in this paper. This is because we do not utilize any complicated properties from geometric control theory, and the aforementioned modelling preliminaries are believed to be sufficient for the completeness and explicitness of this paper. Readers interested in geometric control are referred to \cite{Bullo2005Geometric,Murray1994A_Mathematical,Bloch2015Nonholonomic}.
\end{remark}
Now, we propose a vector field $\bm{\Gamma}_d:\mathbb{R}^2\times\mathbb{S}\to\mathbb{R}^2$, whose components $\Gamma_{d}^x$ and $\Gamma_{d}^y$ are given by
\begin{subequations}\label{eq_DVF}
\begin{align}
\Gamma_{d}^x &= -\varphi_1(x,y,\theta)\cos\theta+\varphi_2(x,y,\theta)\sin\theta, \\
\Gamma_{d}^y &= -\varphi_1(x,y,\theta)\sin\theta-\varphi_2(x,y,\theta)\cos\theta,
\end{align}
\end{subequations}
where $\theta$ serves as an internal parameter satisfying
\begin{equation}\label{eq_dot_theta_minus_theta}
\dot{\theta}=-\theta.
\end{equation}
\begin{remark}
In contrast to common planar vector fields, which are maps of $\mathbb{R}^2\to\mathbb{R}^2$ as in \cite{Panagou2017A_Distributed,Kapitanyuk2018A_Guiding}, one more parameter $\theta\in\mathbb{S}$ is introduced to the vector field $\bm{\Gamma}_d$ in this paper. More importantly, $\theta$ has an explicit physical meaning, namely the attitude angle of the robot. Bringing $\theta$ into the vector field is naturally motivated by the fact that the nonholonomic mobile robot is essentially a rigid body rather than a point mass. Consequently, as a state of the robot, the attitude angle is not supposed to be left free at the destination. Instead, it should be guided to a specified value, like the position, by the motion planning procedure. Moreover, designating the final attitude angle has a practical value in the sense that it specifies the initial velocity direction of the subsequent period of motion. Thus, we incorporate the attitude information into the vector field, so that $\bm{\Gamma}_d$ is decided not only by position but also by orientation. Then, the vector field in $\mathbb{R}^2$ is ``dynamic" instead of ``static". Herein, ``dynamic" means that $\bm{\Gamma}_d$ varies with the initial value of $\theta$, which evolves following (\ref{eq_dot_theta_minus_theta}). Therefore, even at the same point, the vector direction may differ depending on the attitude angle of the robot. In light of this fact, we refer to $\bm{\Gamma}_d$ as a dynamic vector field. Fig.~\ref{fig_DVF} provides two dynamic vector fields with initial conditions ${\theta(0)}=\frac{\pi}{2}$ and ${\theta(0)}=-\frac{\pi}{2}$, respectively.
\end{remark}
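The ``dynamic" behaviour described in the remark is easy to observe numerically: evaluating (\ref{eq_DVF}) at one and the same point with opposite attitude angles yields different vectors. A minimal sketch with a hypothetical point and angles, assuming $\sin\theta\neq0$:

```python
import math

def Gamma_d(x, y, theta):
    # dynamic vector field (eq_DVF); requires sin(theta) != 0
    k = (1.0 + math.cos(theta)) / math.sin(theta)
    p1 = 0.5 * theta * (k * x + y)    # varphi_1 of (eq_phi_def)
    p2 = 0.5 * theta * (-x + k * y)   # varphi_2 of (eq_phi_def)
    return (-p1 * math.cos(theta) + p2 * math.sin(theta),
            -p1 * math.sin(theta) - p2 * math.cos(theta))

# Same point (1, 1), opposite attitudes: the field directions differ.
g_plus = Gamma_d(1.0, 1.0, math.pi / 2)
g_minus = Gamma_d(1.0, 1.0, -math.pi / 2)
```

At $(1,1)$ the field points along $-Y$ for $\theta=\frac{\pi}{2}$ but along $-X$ for $\theta=-\frac{\pi}{2}$, mirroring the two panels of Fig.~\ref{fig_DVF}.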
The convergence of $\bm{\Gamma}_d$ is provided in the following theorem.
\begin{figure}[htp]
\centering
\subfigure[$\theta(0)=\frac{\pi}{2}$]{
\label{fig_DVF_pi_2}
\includegraphics[width=0.23\textwidth,trim=60 5 70 5,clip]{pi_2_VF.eps}}
\subfigure[$\theta(0)=-\frac{\pi}{2}$]{
\label{fig_DVF_minus_pi_2}
\includegraphics[width=0.23\textwidth,trim=60 5 70 5,clip]{pi_2_VF.eps}}
\caption{Dynamic vector field $\bm{\Gamma}_d$ with different initial values of the parameter $\theta$}
\label{fig_DVF}
\end{figure}
\begin{theorem}\label{theo_DVF}
The dynamic vector field $\bm{\Gamma}_d$ given in (\ref{eq_DVF}), (\ref{eq_dot_theta_minus_theta}) converges to $(x,y,\theta)=(0,0,0)$ asymptotically.
\end{theorem}
\begin{IEEEproof}
First, we prove that the transformed configuration $\bm{\Upsilon}$ converges to $\bm{0}$ asymptotically. Let $\dot{x}=\Gamma_{d}^x$ and $\dot{y}=\Gamma_{d}^y$. By comparing with the kinematics (\ref{eq_kine_fully}), we obtain that the velocity, i.e., the control input $\bm{\eta}$, can be expressed as
\begin{equation}\label{eq_eta_minus_Upsi}
\bm{\eta}=-\begin{bmatrix}
\hat{\bm{\omega}} & \bm{v} \\
\bm{0}_{1\times 2} & 0
\end{bmatrix}
=-\begin{bmatrix}
\hat{\bm{\theta}} & \bm{\varphi}(x,y,\theta) \\
\bm{0}_{1\times 2} & 0
\end{bmatrix}=-\bm{\Upsilon},
\end{equation}
where the condition (\ref{eq_dot_theta_minus_theta}) is utilized. Define a Lyapunov function $\Phi=\frac{1}{2}\langle \bm{\Upsilon},\bm{\Upsilon} \rangle$, where $\langle \cdot,\cdot \rangle$ represents the inner product. Taking the time derivative of $\Phi$ along the trajectory of (\ref{eq_dot_log_h}), we have
\begin{equation}\label{eq_dot_Phi_1}
\dot{\Phi}=\langle \bm{\Upsilon},\dot{\bm{\Upsilon}} \rangle = \langle \bm{\Upsilon},\bm{\mathcal{M}}(\bm{\Upsilon})\bm{\eta} \rangle.
\end{equation}
Substituting (\ref{eq_eta_minus_Upsi}) into (\ref{eq_dot_Phi_1}), there holds
\begin{equation}\label{eq_dot_Phi_2}
\dot{\Phi}=-\langle \bm{\Upsilon},\bm{\mathcal{M}}(\bm{\Upsilon})\bm{\Upsilon} \rangle=-\langle \bm{\Upsilon},\bm{\Upsilon} \rangle < 0, \quad \forall\bm{\Upsilon}\ne\bm{0},
\end{equation}
where the property (\ref{eq_M_Upsi_property}) is used. Thus, we obtain that the transformed configuration $\bm{\Upsilon}$ converges to $\bm{0}$ asymptotically, implying $\theta\to 0$ and $\bm{\varphi}(x,y,\theta)\to \bm{0}$ as $t\to\infty$.
Next, we prove that the position vector $\bm{p}=[x\ \ y]^{\rm T}$ converges to $\bm{0}$ asymptotically. By L'H\^opital's rule, we have
\begin{equation*}
\lim_{\theta\to 0}\frac{\theta}{2}\frac{1+\cos\theta}{\sin\theta} = \lim_{\theta\to 0}\frac{\theta}{\sin\theta} = \lim_{\theta\to 0}\frac{1}{\cos\theta}=1,
\end{equation*}
so that there holds
\begin{equation*}
\lim_{\theta\to 0}{\det}(\bm{\Xi})=1,
\end{equation*}
where ${\rm det}(\cdot)$ represents the determinant and the matrix $\bm{\Xi}$ is given in (\ref{eq_Xi_for_phi}). Hence $\bm{\Xi}$ is invertible in a neighborhood of $\theta=0$, and based on the formulation in (\ref{eq_phi_def_simp}), we obtain that $\bm{\varphi}(x,y,\theta)\to \bm{0}$ if and only if $\bm{p}\to\bm{0}$.
\end{IEEEproof}
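The Lyapunov argument above can be checked by simulating $\dot{x}=\Gamma_d^x$, $\dot{y}=\Gamma_d^y$, $\dot{\theta}=-\theta$ directly. The sketch below uses explicit Euler integration; the initial state, step size, and the small-angle guard replacing the $0/0$ form of $\bm{\Xi}$ by its limit (which is the identity, by the determinant computation above) are illustrative assumptions.

```python
import math

def varphi(x, y, theta):
    # varphi from (eq_phi_def); the diagonal factor tends to 1 as theta -> 0
    if abs(theta) < 1e-9:
        a = 1.0
    else:
        a = 0.5 * theta * (1.0 + math.cos(theta)) / math.sin(theta)
    b = 0.5 * theta
    return a * x + b * y, -b * x + a * y

def Gamma_d(x, y, theta):
    # dynamic vector field (eq_DVF)
    p1, p2 = varphi(x, y, theta)
    return (-p1 * math.cos(theta) + p2 * math.sin(theta),
            -p1 * math.sin(theta) - p2 * math.cos(theta))

# Integrate x_dot = Gamma_d^x, y_dot = Gamma_d^y, theta_dot = -theta
# from a hypothetical initial state for 10 seconds.
x, y, theta, dt = 1.0, -1.0, math.pi / 2, 1e-3
for _ in range(10000):
    gx, gy = Gamma_d(x, y, theta)
    x, y, theta = x + gx * dt, y + gy * dt, theta - theta * dt
```

Consistent with Theorem~\ref{theo_DVF}, the state $(x,y,\theta)$ decays to the origin.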
In the following, we extend the dynamic vector field $\bm{\Gamma}_d$ to an arbitrarily-specified final state $(x_d,y_d,\theta_d)$. Based on (\ref{eq_confi_matrix}), once there holds $(x,y,\theta) = (0,0,0)$, the corresponding configuration matrix $\bm{h}$ becomes the $3\times 3$ identity matrix $\bm{I}_3$. Thus, the dynamic vector field $\bm{\Gamma}_d$ given in (\ref{eq_DVF}), (\ref{eq_dot_theta_minus_theta}) indeed drives $\bm{h}$ to $\bm{I}_3$. Let $\bm{h}_d$ denote the configuration matrix corresponding to the final state $(x_d,y_d,\theta_d)$. Then, the problem now becomes how to define a dynamic vector field $\tilde{\bm{\Gamma}}_d$ which can drive the configuration $\bm{h}$ to $\bm{h}_d$. Motivated by the fact that $\bm{\Gamma}_d$ achieves $\bm{h}\to\bm{I}_3$, we define a new configuration
\begin{equation}\label{eq_tilde_h}
\tilde{\bm{h}}=\bm{h}_d^{-1}\bm{h},
\end{equation}
which contains the information of the desired final state $\bm{h}_d$. The parameterized description of $\tilde{\bm{h}}$, denoted by $(\tilde{x},\tilde{y},\tilde{\theta})$, is given by $\tilde{x}=(x-x_d)\cos\theta_d+(y-y_d)\sin\theta_d$, $\tilde{y}=-(x-x_d)\sin\theta_d+(y-y_d)\cos\theta_d$, $\tilde{\theta}=\theta-\theta_d$. Based on the above definitions, we present the following theorem.
\begin{theorem}
For an arbitrarily-specified final state $(x_d,y_d,\theta_d)$, design a dynamic vector field $\tilde{\bm{\Gamma}}_d:\mathbb{R}^2\times\mathbb{S}\to\mathbb{R}^2$, whose components $\tilde{\Gamma}_{d}^x$ and $\tilde{\Gamma}_{d}^y$ are given by
\begin{subequations}\label{eq_DVF_arbitr}
\begin{align}
\tilde{\Gamma}_{d}^x &= -\varphi_1(\tilde{x},\tilde{y},\tilde{\theta})\cos\tilde{\theta}+\varphi_2(\tilde{x},\tilde{y},\tilde{\theta})\sin\tilde{\theta}, \\
\tilde{\Gamma}_{d}^y &= -\varphi_1(\tilde{x},\tilde{y},\tilde{\theta})\sin\tilde{\theta}-\varphi_2(\tilde{x},\tilde{y},\tilde{\theta})\cos\tilde{\theta},
\end{align}
\end{subequations}
where $\tilde{\theta}$ satisfies $\dot{\tilde{\theta}}=-\tilde{\theta}$. Then, the dynamic vector field $\tilde{\bm{\Gamma}}_d$ converges to $(x,y,\theta)=(x_d,y_d,\theta_d)$ asymptotically.
\end{theorem}
\begin{IEEEproof}
According to Theorem~\ref{theo_DVF}, the dynamic vector field $\tilde{\bm{\Gamma}}_d$ will converge to $(\tilde{x},\tilde{y},\tilde{\theta})=(0,0,0)$ asymptotically. That is to say, $\tilde{\bm{\Gamma}}_d$ can drive the configuration $\tilde{\bm{h}}$ to the identity matrix $\bm{I}_3$. Then, according to the definition of $\tilde{\bm{h}}$ in (\ref{eq_tilde_h}), we have $\bm{h}_d^{-1}\bm{h}\to\bm{I}_3$, i.e., $\bm{h}\to\bm{h}_d$, implying that $(x,y,\theta)\to(x_d,y_d,\theta_d)$.
\end{IEEEproof}
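The parameterized description of $\tilde{\bm{h}}$ stated before the theorem can be verified directly against the matrix definition (\ref{eq_tilde_h}); the numerical values below are hypothetical.

```python
import numpy as np

def h_matrix(x, y, theta):
    # configuration matrix of (eq_confi_matrix)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

# Hypothetical current state and destination.
x, y, theta = 1.4, 0.2, 0.9
xd, yd, thd = -0.5, 1.1, 0.3
h_tilde = np.linalg.inv(h_matrix(xd, yd, thd)) @ h_matrix(x, y, theta)

# Closed-form parameterization (x~, y~, theta~) from the text.
xt = (x - xd) * np.cos(thd) + (y - yd) * np.sin(thd)
yt = -(x - xd) * np.sin(thd) + (y - yd) * np.cos(thd)
tt = theta - thd
```

Building $\bm{h}$ from $(\tilde{x},\tilde{y},\tilde{\theta})$ reproduces $\bm{h}_d^{-1}\bm{h}$, since the rotation blocks compose as $\bm{R}(\theta_d)^{\rm T}\bm{R}(\theta)=\bm{R}(\theta-\theta_d)$.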
Having obtained the dynamic vector field $\tilde{\bm{\Gamma}}_d$, we can further design the control inputs $\omega$ and $v_x$. It should be noted that the dynamic vector field $\tilde{\bm{\Gamma}}_d$ in (\ref{eq_DVF_arbitr}), (\ref{eq_dot_theta_minus_theta}) is defined in the earth-fixed frame $\bm{\mathcal{F}}_{\mathcal{E}}$. In contrast, the control inputs are the velocities given in the body-fixed frame $\bm{\mathcal{F}}_{\mathcal{B}}$. Thus, for the purpose of a simple controller design, we transform the dynamic vector field $\tilde{\bm{\Gamma}}_d$ from the earth-fixed frame $\bm{\mathcal{F}}_{\mathcal{E}}$ to the body-fixed frame $\bm{\mathcal{F}}_{\mathcal{B}}$. Let $\tilde{\bm{\Gamma}}_{d}^{\mathcal{B}}$ denote the dynamic vector field in $\bm{\mathcal{F}}_{\mathcal{B}}$. Then, $\tilde{\bm{\Gamma}}_{d}^{\mathcal{B}}$ can be obtained by
\begin{equation}\label{eq_DVF_in_Fb}
\tilde{\bm{\Gamma}}_{d}^{\mathcal{B}}=\bm{R}^{\rm T}\tilde{\bm{\Gamma}}_d\triangleq
\begin{bmatrix}
\tilde{\Gamma}_{d}^{\mathcal{B}x} \\
\tilde{\Gamma}_{d}^{\mathcal{B}y}
\end{bmatrix},
\end{equation}
where $\bm{R}$ is the rotation matrix defined in (\ref{eq_rotation_matrix}). Thus, based on (\ref{eq_DVF_in_Fb}), it will be straightforward to design the motion planning controller of fully actuated rigid bodies, which can be obtained by making the body velocity $\bm{\eta}$ equivalent to $\tilde{\bm{\Gamma}}_{d}^{\mathcal{B}}$, that is
\begin{equation}\label{eq_fully_controller}
v_x=k_v\tilde{\Gamma}_{d}^{\mathcal{B}x},\quad v_y=k_v\tilde{\Gamma}_{d}^{\mathcal{B}y},\quad \omega=-k_{\omega}\theta,
\end{equation}
where $k_v,k_{\omega}$ are positive scalars.
\begin{figure}[htp]
\centering
\includegraphics[width=0.25\textwidth]{vehicle.eps}
\caption{Orientation of the nonholonomic mobile robot and direction of the vector field $\bm{\Gamma}$}
\label{fig_vehicle_attitude}
\end{figure}
However, regarding nonholonomic mobile robots, there does not exist any control input along the direction of $v_y$. Hence, only $v_x$ and $\omega$ can be used to make the robot ``follow" the dynamic vector field $\tilde{\bm{\Gamma}}_{d}^{\mathcal{B}}$. To this end, we introduce an additional rotation in $\omega$, which is a negative feedback of the angle between the orientation of the robot and the direction of $\tilde{\bm{\Gamma}}_{d}^{\mathcal{B}}$. Specifically, as shown in Fig.~\ref{fig_vehicle_attitude}, the orientation of the robot is along the $X_{\mathcal{B}}$-axis of $\bm{\mathcal{F}}_{\mathcal{B}}$, while the integral curve of the vector field moves along the direction of $\tilde{\bm{\Gamma}}_{d}^{\mathcal{B}}$. The angle between these two directions, denoted by $\delta$, is given by
\begin{equation}\label{eq_angle_delta}
\delta=-{\rm atan}\frac{\tilde{\Gamma}_{d}^{\mathcal{B}y}}{\tilde{\Gamma}_{d}^{\mathcal{B}x}},
\end{equation}
where the negative sign means that $\delta$ rotates in the opposite direction with respect to the attitude angle $\theta$. Note that for nonholonomic mobile robots, the linear velocity $v_x$ has the same direction as the orientation. Therefore, in order to make the robot ``follow" the vector field, the orientation of the robot should be rotated toward the direction of $\tilde{\bm{\Gamma}}_{d}^{\mathcal{B}}$. Then, we can construct an additional angular velocity, which is a negative feedback of $\delta$, so as to rotate the robot orientation toward the direction of $\tilde{\bm{\Gamma}}_{d}^{\mathcal{B}}$. Thus, based on (\ref{eq_fully_controller}), the motion planning controller for nonholonomic mobile robots is designed to be
\begin{subequations}\label{eq_nonholo_controller}
\begin{align}
v_x &= k_v\tilde{\Gamma}_{d}^{\mathcal{B}x}, \\
\omega &= -k_{\omega}\theta+k_a{\rm atan}\frac{\tilde{\Gamma}_{d}^{\mathcal{B}y}}{\tilde{\Gamma}_{d}^{\mathcal{B}x}},
\end{align}
\end{subequations}
where $k_v,k_{\omega},k_a$ are all positive scalars.
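One possible reading of the controller (\ref{eq_nonholo_controller}), combined with (\ref{eq_DVF_arbitr}) and the body-frame transformation (\ref{eq_DVF_in_Fb}), is sketched below. The function name, the unit gains, the small-angle guard, and the assumption $\tilde{\Gamma}_{d}^{\mathcal{B}x}\neq0$ in the ${\rm atan}$ term are illustrative choices, not part of the original design.

```python
import math

def varphi(x, y, theta):
    # varphi from (eq_phi_def), with the theta -> 0 limit substituted
    if abs(theta) < 1e-9:
        a = 1.0
    else:
        a = 0.5 * theta * (1.0 + math.cos(theta)) / math.sin(theta)
    b = 0.5 * theta
    return a * x + b * y, -b * x + a * y

def controller(x, y, theta, xd=0.0, yd=0.0, thd=0.0, kv=1.0, kw=1.0, ka=1.0):
    # transformed state (x~, y~, theta~) with respect to the destination
    xt = (x - xd) * math.cos(thd) + (y - yd) * math.sin(thd)
    yt = -(x - xd) * math.sin(thd) + (y - yd) * math.cos(thd)
    tt = theta - thd
    p1, p2 = varphi(xt, yt, tt)
    # field components as in (eq_DVF_arbitr)
    gx = -p1 * math.cos(tt) + p2 * math.sin(tt)
    gy = -p1 * math.sin(tt) - p2 * math.cos(tt)
    # rotate into the body-fixed frame as in (eq_DVF_in_Fb)
    gbx = math.cos(theta) * gx + math.sin(theta) * gy
    gby = -math.sin(theta) * gx + math.cos(theta) * gy
    v_x = kv * gbx
    omega = -kw * theta + ka * math.atan(gby / gbx)  # assumes gbx != 0
    return v_x, omega
```

For instance, a robot at $(1,0)$ facing $+X$ with the destination at the origin receives a pure backward command ($v_x=-1$, $\omega=0$), since the field already points along its body axis.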
\section{Dynamic Vector Fields with Obstacle Avoidance}\label{sec_ob_avoid}
This section focuses on the motion planning problem involving obstacle avoidance. Instead of directly using a repulsive vector field, we design a circular vector field around the obstacle and then blend it with the dynamic vector field that converges to the desired configuration.
Consider a circular obstacle with radius $r_o$ located at $\bm{p}_o=[x_o\ \ y_o]^{\rm T}$. Suppose that the obstacle avoidance vector field lies in the following area
\begin{equation*}
\mathcal{D}=\left\{\bm{q}\in\mathbb{R}^2\ \big| \ r_o<\|\bm{q}-\bm{p}_o\|\leq R_o,\ r_o<R_o<R_s \right\},
\end{equation*}
where $R_s$ is the radius of the sensing region $\mathcal{S}_i$. Once the robot enters the area $\mathcal{D}$, we define a repulsive vector
\begin{equation}\label{eq_Gamma_r_ob_avoid}
\bm{\Gamma}_r=\bm{p}-\bm{p}_o,
\end{equation}
which points from the obstacle to the nonholonomic mobile robot. Then, the distance between the obstacle and the robot can be obtained by $d=\|\bm{\Gamma}_r\|$. Let $\bm{\Gamma}_r^{\perp}$ denote a vector satisfying the following two conditions
\begin{align}\label{eq_Gamma_r_perp_ob_avoid}
\langle \bm{\Gamma}_r^{\perp},\bm{\Gamma}_r \rangle = 0, \quad
\langle \bm{\Gamma}_r^{\perp},\bm{v} \rangle > 0,
\end{align}
where $\bm{v}$ is the velocity vector in the body-fixed frame $\bm{\mathcal{F}}_{\mathcal{B}}$ of the nonholonomic mobile robot. Intuitively, the vector $\bm{\Gamma}_r^{\perp}$ is perpendicular to $\bm{\Gamma}_r$ and has a positive projection onto the velocity direction $\bm{v}$. According to such a definition, the vector $\bm{\Gamma}_r^{\perp}$ always points in a collision-free direction with respect to the obstacle, since $\bm{\Gamma}_r^{\perp}$ is tangent to the circle centered at $\bm{p}_o$ passing through $\bm{p}$, whose radius $d$ satisfies $r_o<d\leq R_o$. Thus, all the vectors $\bm{\Gamma}_r^{\perp}$ in the area $\mathcal{D}$ constitute a circular vector field surrounding the obstacle.
Nevertheless, it would be fairly unreasonable to construct the obstacle avoidance vector field only from $\bm{\Gamma}_r^{\perp}$, since there exist situations where the nonholonomic mobile robot will never collide with the obstacle although it moves in the area $\mathcal{D}$. In such cases, it is unnecessary to construct the obstacle avoidance vector field with the help of $\bm{\Gamma}_r^{\perp}$; otherwise, the results would become rather conservative. Therefore, in the following, we design the obstacle avoidance vector field in the area $\mathcal{D}$, denoted by $\bm{\Gamma}_o$, according to the different situations of the robot with respect to the obstacle. The detailed construction of $\bm{\Gamma}_o$ is provided below.
\begin{figure}[htp]
\centering
\includegraphics[width=0.3\textwidth,trim=0 0 0 0,clip]{ob_avoidance.eps}
\caption{Obstacle avoidance vector fields. Cases~1,2,3 depict three different situations when the robot is approaching the obstacle.}
\label{fig_vehicle_ob_avoid}
\end{figure}
Assume that the robot has entered the area $\mathcal{D}$, that is, the distance $d$ satisfies $r_o<d\leq R_o$. Let $\theta_r$ represent the angle between the velocity direction $\bm{v}$ and the line segment connecting the robot and the obstacle, as shown in Fig.~\ref{fig_vehicle_ob_avoid}, which can be computed by
\begin{align*}
\theta_r &= \arccos\frac{\langle\bm{v},-\bm{\Gamma}_r\rangle}{\|\bm{v}\| \cdot \|\bm{\Gamma}_r\|}.
\end{align*}
Hence, according to the value of $\theta_r$, three different cases can be proposed as follows.
\begin{enumerate}
\item If $\theta_r\geq\frac{\pi}{2}$, as shown in Case~1 of Fig.~\ref{fig_vehicle_ob_avoid}, the robot does not have the risk of colliding with the obstacle. Then, in this case, the obstacle avoidance vector field can absolutely be chosen as the convergent dynamic vector field in the free space, i.e., $\bm{\Gamma}_o=\tilde{\bm{\Gamma}}_d$.
\item If $0<\theta_r<\frac{\pi}{2}$, as shown in Case~2 of Fig.~\ref{fig_vehicle_ob_avoid}, it is possible that the robot will collide with the obstacle in a future time. Thus, we set the obstacle avoidance vector field to be $\bm{\Gamma}_r^{\perp}$, i.e., $\bm{\Gamma}_o=\bm{\Gamma}_r^{\perp}$, which is defined in (\ref{eq_Gamma_r_perp_ob_avoid}). Then, the robot will turn to the direction that is perpendicular to the radius of the obstacle so as to achieve the obstacle avoidance.
\item Except for the above two cases, the remaining one is $\theta_r=0$, as shown in Case~3 of Fig.~\ref{fig_vehicle_ob_avoid}. Generally, such a case is referred to as a deadlock of the robot, which corresponds to a local minimum of an artificial potential. Once $\theta_r=0$, it can be derived that $\langle \bm{\Gamma}_r^{\perp},\bm{v} \rangle = 0$, indicating that the vector $\bm{\Gamma}_r^{\perp}$ is perpendicular to the velocity $\bm{v}$, so that we cannot define the obstacle avoidance vector field by checking whether the projection of $\bm{\Gamma}_r^{\perp}$ onto $\bm{v}$ is positive, as given in (\ref{eq_Gamma_r_perp_ob_avoid}). Actually, the reason why we choose the $\bm{\Gamma}_r^{\perp}$ positively projected onto $\bm{v}$ is that such a vector makes the robot turn through a smaller attitude angle for obstacle avoidance than the negatively projected one. However, when $\theta_r=0$, the rotation angle from $\bm{v}$ to $\bm{\Gamma}_r^{\perp}$ is always $\frac{\pi}{2}$, either clockwise or anticlockwise. Thus, we can directly choose one of these two rotation directions to define the obstacle avoidance vector field $\bm{\Gamma}_o$; in this paper, $\bm{\Gamma}_o$ is defined by rotating $\bm{\Gamma}_r$ clockwise through $\frac{\pi}{2}$, that is, $\bm{\Gamma}_o=\bm{R}_{\frac{\pi}{2}}^{\rm T}\bm{\Gamma}_r$, where $\bm{R}_{\frac{\pi}{2}}$ is the rotation matrix given in (\ref{eq_rotation_matrix}) with $\theta=\frac{\pi}{2}$.
\end{enumerate}
To summarize, based on above definitions under different situations, the obstacle avoidance vector field can be given by
\begin{equation}\label{eq_ob_avoid_VF}
\bm{\Gamma}_o=\left\{
\begin{aligned}
&\bm{R}_{\frac{\pi}{2}}^{\rm T}\bm{\Gamma}_r,\quad\quad \theta_r=0, \\
&\bm{\Gamma}_r^{\perp},\quad\quad 0<\theta_r<\frac{\pi}{2}, \\
&\tilde{\bm{\Gamma}}_d,\quad\quad\quad\ \theta_r\geq\frac{\pi}{2}.
\end{aligned}
\right.
\end{equation}
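The case distinction (\ref{eq_ob_avoid_VF}) can be sketched as a small selector function; the variable names and the convention of passing the value of the convergent field $\tilde{\bm{\Gamma}}_d$ as an argument are illustrative assumptions.

```python
import math

R90 = ((0.0, -1.0), (1.0, 0.0))  # rotation matrix for theta = pi/2

def rot(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

def Gamma_o(p, p_o, v, Gamma_d_val):
    gr = (p[0] - p_o[0], p[1] - p_o[1])        # repulsive vector Gamma_r
    n = math.hypot(*gr) * math.hypot(*v)
    cos_tr = (-(v[0] * gr[0] + v[1] * gr[1])) / n
    theta_r = math.acos(max(-1.0, min(1.0, cos_tr)))
    if theta_r >= math.pi / 2:                 # Case 1: no collision risk
        return Gamma_d_val
    if theta_r > 0.0:                          # Case 2: turn perpendicular,
        cand = rot(R90, gr)                    # positive projection onto v
        if cand[0] * v[0] + cand[1] * v[1] > 0:
            return cand
        return (-cand[0], -cand[1])
    # Case 3 (theta_r = 0): rotate Gamma_r clockwise by pi/2, i.e. R90^T
    return (gr[1], -gr[0])
```

With the obstacle at the origin and the robot at $(1,0)$, a head-on velocity triggers Case~3, an oblique approach Case~2, and a receding velocity Case~1.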
Therefore, we can present the dynamic vector fields with obstacle avoidance in $\mathbb{R}^2$ as follows
\begin{equation}\label{eq_VF_all_V1}
\bm{\Gamma}_P=\left\{
\begin{aligned}
&\tilde{\bm{\Gamma}}_d, \qquad d>R_o, \\
&\bm{\Gamma}_o, \quad r_o<d\leq R_o.
\end{aligned}
\right.
\end{equation}
Note that the vector field given in (\ref{eq_VF_all_V1}) would be discontinuous at $d=R_o$. To make the vector field continuous, we introduce the following transition function
\begin{equation}\label{eq_switch_func}
\varsigma=\left\{
\begin{aligned}
0,\qquad\qquad &\qquad d<R_o,\\
\frac{1}{2}\sin\left(\frac{d-R_o}{\epsilon}\pi-\frac{\pi}{2}\right)+\frac{1}{2}, &\quad R_o\leq d\leq R_o+\epsilon, \\
1,\qquad\qquad &\qquad d>R_o+\epsilon,
\end{aligned}
\right.
\end{equation}
where $\epsilon$ is a small positive constant. It is obvious that the transition function $\varsigma$ varies continuously from $0$ to $1$ as the distance $d$ increases from $R_o$ to $R_o+\epsilon$. Therefore, the vector fields proposed in (\ref{eq_VF_all_V1}) can be revised into the continuous form
\begin{equation}\label{eq_VF_all_V2}
\bm{\Gamma}_P=\varsigma\tilde{\bm{\Gamma}}_d+(1-\varsigma)\bm{\Gamma}_o.
\end{equation}
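The endpoint values of the transition function (\ref{eq_switch_func}) and the resulting blend (\ref{eq_VF_all_V2}) can be checked directly; the radii below are hypothetical.

```python
import math

def varsigma(d, R_o, eps):
    # transition function (eq_switch_func)
    if d < R_o:
        return 0.0
    if d > R_o + eps:
        return 1.0
    return 0.5 * math.sin((d - R_o) / eps * math.pi - math.pi / 2) + 0.5

def Gamma_P(Gamma_d_val, Gamma_o_val, d, R_o, eps):
    # blended field (eq_VF_all_V2)
    s = varsigma(d, R_o, eps)
    return tuple(s * gd + (1 - s) * go
                 for gd, go in zip(Gamma_d_val, Gamma_o_val))
```

The function takes the values $0$ at $d=R_o$, $\frac{1}{2}$ at the midpoint, and $1$ at $d=R_o+\epsilon$, so the blend switches smoothly between $\bm{\Gamma}_o$ and $\tilde{\bm{\Gamma}}_d$.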
We summarize above results into the following theorem.
\begin{theorem}\label{theo_DVF_ob_avoid}
Let $(x_d,y_d,\theta_d)$ denote an arbitrarily-specified final state. The dynamic vector field $\bm{\Gamma}_P$ proposed in (\ref{eq_VF_all_V2}) asymptotically converges to $(x,y,\theta)=(x_d,y_d,\theta_d)$ and avoids the collision with the obstacle meanwhile.
\end{theorem}
\begin{IEEEproof}
The convergence of $\tilde{\bm{\Gamma}}_d$ has been proved in Theorem~\ref{theo_DVF}, so we only need to prove that $\bm{\Gamma}_o$ does not cause a collision of the robot with the obstacle. According to (\ref{eq_Gamma_r_ob_avoid})-(\ref{eq_ob_avoid_VF}), the vector field $\bm{\Gamma}_o$ can be rewritten as
\begin{equation}\label{eq_Gamma_o_proof}
\bm{\Gamma}_o=\bm{R}_{\pm\frac{\pi}{2}}(\bm{p}-\bm{p}_o),
\end{equation}
where $\bm{R}_{\pm\frac{\pi}{2}}$ represents the rotation matrix with $\theta=\pm\frac{\pi}{2}$. Define the following distance function
\begin{equation}
f=\| \bm{p}-\bm{p}_o\|^2-r_o^2,
\end{equation}
then there holds $f>0$ for all $\bm{p}\in\mathcal{D}$. To ensure obstacle avoidance, the distance function $f$ should be kept positive. Assume that when the robot enters the obstacle avoidance region $\mathcal{D}$, the initial value of $f$ is a constant $c_0>0$. Taking the time derivative of $f$ along the vector field $\bm{\Gamma}_o$, we have
\begin{equation}
\frac{\rm d}{{\rm d}t}f=2(\bm{p}-\bm{p}_o)^{\rm T}\bm{\Gamma}_o=2(\bm{p}-\bm{p}_o)^{\rm T}\bm{R}_{\pm\frac{\pi}{2}}(\bm{p}-\bm{p}_o)=0,
\end{equation}
which demonstrates that the distance function maintains $f=c_0>0$ for all future time, indicating that the integral curve of the vector field $\bm{\Gamma}_o$ keeps a constant distance to the obstacle, thus achieving obstacle avoidance via a circular motion.
\end{IEEEproof}
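The invariance identity used in the proof, $\frac{\rm d}{{\rm d}t}f=2(\bm{p}-\bm{p}_o)^{\rm T}\bm{\Gamma}_o=0$, holds pointwise because $\bm{R}_{\pm\frac{\pi}{2}}$ is skew-symmetric; a numerical spot check at a few hypothetical points:

```python
# Check that the circular field Gamma_o = R_{pi/2}^T (p - p_o) makes
# df/dt = 2 (p - p_o)^T Gamma_o vanish identically.
points = [(1.0, 0.0), (0.3, -0.7), (-1.2, 0.5)]
p_o = (0.1, -0.2)
derivs = []
for (px, py) in points:
    rx, ry = px - p_o[0], py - p_o[1]
    gx, gy = ry, -rx          # R_{pi/2}^T applied to (rx, ry)
    derivs.append(2.0 * (rx * gx + ry * gy))
```

Every entry of `derivs` is exactly zero, as the quadratic form of a skew-symmetric matrix.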
Although Theorem~\ref{theo_DVF_ob_avoid} is derived based on only one obstacle, it can be simply extended to the case of multiple obstacles. Regarding $M$ obstacles, the dynamic vector field is designed to be
\begin{equation}\label{eq_VF_all_V3}
\bm{\Gamma}_P=\prod_{i=1}^M\varsigma_i\tilde{\bm{\Gamma}}_d+\sum_{i=1}^M(1-\varsigma_i)\bm{\Gamma}_{oi},
\end{equation}
where $\tilde{\bm{\Gamma}}_d$ is the convergent dynamic vector field, $\bm{\Gamma}_{oi}$ is the obstacle avoidance vector field around the $i$th obstacle, and $\varsigma_i$ is the transition function for the $i$th obstacle.
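The multi-obstacle blend (\ref{eq_VF_all_V3}) can be sketched as follows; the function name and the convention of passing precomputed field values are illustrative assumptions.

```python
def Gamma_P_multi(Gamma_d_val, Gamma_o_vals, sigmas):
    # (eq_VF_all_V3): prod(sigma_i) * Gamma_d + sum (1 - sigma_i) * Gamma_oi
    prod = 1.0
    for s in sigmas:
        prod *= s
    gx = prod * Gamma_d_val[0]
    gy = prod * Gamma_d_val[1]
    for s, go in zip(sigmas, Gamma_o_vals):
        gx += (1.0 - s) * go[0]
        gy += (1.0 - s) * go[1]
    return gx, gy
```

When all $\varsigma_i=1$ (every obstacle is far), the field reduces to $\tilde{\bm{\Gamma}}_d$; when some $\varsigma_i=0$, the product annihilates the convergent term and the corresponding $\bm{\Gamma}_{oi}$ dominates.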
Having obtained the vector field in (\ref{eq_VF_all_V3}), it would not be difficult to design the controller for the nonholonomic mobile robot. Similar to (\ref{eq_DVF_in_Fb}), we transform the vector field $\bm{\Gamma}_{P}$ into the body-fixed frame $\bm{\mathcal{F}}_{\mathcal{B}}$ by
\begin{equation}\label{eq_DVF_ob_avoid_in_Fb}
\bm{\Gamma}_{P}^{\mathcal{B}}=\bm{R}^{\rm T}\bm{\Gamma}_P\triangleq
\begin{bmatrix}
\Gamma_{P}^{\mathcal{B}x} \\
\Gamma_{P}^{\mathcal{B}y}
\end{bmatrix}.
\end{equation}
Then, inspired by (\ref{eq_nonholo_controller}), the controller can be given by
\begin{subequations}\label{eq_nonholo_controller_ob_avoid}
\begin{align}
v_x &= k_v\Gamma_{P}^{\mathcal{B}x}, \\
\omega &= -k_{\omega}\prod_{i=1}^M\varsigma_i\theta+k_a{\rm atan}\frac{\Gamma_{P}^{\mathcal{B}y}}{\Gamma_{P}^{\mathcal{B}x}},
\end{align}
\end{subequations}
where $k_v,k_{\omega},k_a$ are all positive scalars.
\section{Dynamic Vector Fields with Collision Avoidance Among Robots}\label{sec_co_avoid}
In this section, we consider the problem of collision avoidance among multiple nonholonomic mobile robots during the motion planning. For simplicity, the collision avoidance between two robots is taken into account at first.
Motivated by the circular vector field presented in the obstacle avoidance, intuitively, we can introduce a virtual obstacle between two robots so that they are able to avoid each other by avoiding the virtual obstacle. To be more specific, as shown in Fig.~\ref{fig_co_avoid_org_1}, regarding two robots positioned at $\bm{p}_i$ and $\bm{p}_j$, when there is a potential collision risk within their sensing ranges, a virtual obstacle is set on the line segment $\overline{\bm{p}_i\bm{p}_j}$. According to the obstacle avoidance vector field in Section~\ref{sec_ob_avoid}, the robots will turn to the directions of $\bm{\Gamma}_{ri}^{\perp}$ and $\bm{\Gamma}_{rj}^{\perp}$, respectively, shown as the green vectors in Fig.~\ref{fig_co_avoid_org_1}, which are projected positively along $\bm{v}_i$ and $\bm{v}_j$, respectively. Then, the two robots will follow the vectors $\bm{\Gamma}_{ri}^{\perp}$ and $\bm{\Gamma}_{rj}^{\perp}$ to avoid the virtual obstacle clockwise. In this way, the collision avoidance between the two robots is achieved.
Nevertheless, in Fig.~\ref{fig_co_avoid_org_1}, it should be noted that the velocities $\bm{v}_i$ and $\bm{v}_j$ lie on different sides of the line segment $\overline{\bm{p}_i\bm{p}_j}$. Once $\bm{v}_i$ and $\bm{v}_j$ lie on the same side of $\overline{\bm{p}_i\bm{p}_j}$, as given in Fig.~\ref{fig_co_avoid_org_2}, the obstacle avoidance vectors $\bm{\Gamma}_{ri}^{\perp}$ and $\bm{\Gamma}_{rj}^{\perp}$, which are projected positively along $\bm{v}_i$ and $\bm{v}_j$, will point to opposite rotation directions around the virtual obstacle, i.e., clockwise and anticlockwise, respectively. Then, the two mobile robots will turn to such directions and eventually collide. Therefore, the obstacle avoidance vector field cannot be extended to collision avoidance straightforwardly by introducing a virtual obstacle between two robots.
\begin{figure}[htp]
\centering
\subfigure[$\bm{v}_i$ and $\bm{v}_j$ lying on different sides of the line segment $\overline{\bm{p}_i\bm{p}_j}$]{
\label{fig_co_avoid_org_1}
\includegraphics[width=0.2\textwidth,trim=0 0 0 0,clip]{vehicle_co_avoid_sub1.eps}}\quad
\subfigure[$\bm{v}_i$ and $\bm{v}_j$ lying on the same side of the line segment $\overline{\bm{p}_i\bm{p}_j}$]{
\label{fig_co_avoid_org_2}
\includegraphics[width=0.2\textwidth,trim=0 0 0 0,clip]{vehicle_co_avoid_sub2.eps}}
\caption{Collision avoidance between two robots in different situations}
\label{fig_co_avoid_org}
\end{figure}
Based on the abovementioned analysis, the key point of collision avoidance is how to define the directions of the vectors $\bm{\Gamma}_{ri}^{\perp}$ and $\bm{\Gamma}_{rj}^{\perp}$ so as to make them both clockwise or both anticlockwise. To solve this problem, we still consider two robots located at $\bm{p}_i$ and $\bm{p}_j$, and suppose that they have entered each other's sensing range, i.e., $d_{ij}=\|\bm{p}_i-\bm{p}_j\|<R_s$. In addition, we assume that the distance threshold for starting collision avoidance, denoted by $R_c$, satisfies $R_c<R_s$. That is to say, as shown in Fig.~\ref{fig_vehicle_co_avoid}, when $d_{ij}\leq 2R_c$, the two robots are considered to be at risk of collision, and the vector fields should switch from the mode of target navigation to that of collision avoidance.
Following the idea of virtual obstacle, we define a circular obstacle with radius $r_{ij}$, whose position vector is given by
\begin{equation}\label{eq_tilde_p_o}
\tilde{\bm{p}}_o=\frac{1}{2}(\bm{p}_i+\bm{p}_j).
\end{equation}
Note that $2r_{ij}$ can be regarded as the minimum safe distance for collision-free motion. In other words, two robots will collide with each other if $d_{ij}\leq2r_{ij}$. Thus, the collision avoidance vector field will lie in the following area
\begin{equation*}
\mathcal{D}_c=\left\{\bm{q}\in\mathbb{R}^2\ \big| \ r_{ij}<\|\bm{q}-\tilde{\bm{p}}_o\|\leq R_c,\ r_{ij}<R_c<R_s \right\}.
\end{equation*}
Let us take robot $i$ as an example. Similar to the obstacle avoidance, the repulsive vector $\bm{\Gamma}_{ri}$ can be defined by
\begin{equation}\label{eq_Gamma_r_co_avoid}
\bm{\Gamma}_{ri}=\bm{p}_i-\tilde{\bm{p}}_o.
\end{equation}
Subsequently, we define the vector $\bm{\Gamma}_{ri}^{\perp}$ perpendicular to $\bm{\Gamma}_{ri}$ as follows
\begin{align}\label{eq_Gamma_r_perp_co_avoid}
\langle \bm{\Gamma}_{ri}^{\perp},\bm{\Gamma}_{ri} \rangle = 0, \quad
\langle \bm{\Gamma}_{ri}^{\perp},\bm{R}_{\frac{\pi}{2}}\bm{v}_i \rangle > 0,
\end{align}
where $\bm{R}_{\frac{\pi}{2}}$ is the rotation matrix in (\ref{eq_rotation_matrix}) with $\theta=\frac{\pi}{2}$. Note that $\bm{R}_{\frac{\pi}{2}}$ rotates the velocity direction $\bm{v}_i$ anticlockwise by $\frac{\pi}{2}$, so the vector $\bm{R}_{\frac{\pi}{2}}\bm{v}_i$ points exactly along the $Y_{\mathcal{B}}$-axis of the body-fixed frame $\mathcal{F}_{\mathcal{B}}$. Thus, different from (\ref{eq_Gamma_r_perp_ob_avoid}) in the obstacle avoidance, $\bm{\Gamma}_{ri}^{\perp}$ in (\ref{eq_Gamma_r_perp_co_avoid}) becomes the vector that is projected positively onto the $Y_{\mathcal{B}}$-axis rather than the $X_{\mathcal{B}}$-axis (i.e., the direction of $\bm{v}_i$), as illustrated by the green vectors in Fig.~\ref{fig_vehicle_co_avoid}. Since for each robot the $Y_{\mathcal{B}}$-axis is always obtained by rotating the $X_{\mathcal{B}}$-axis anticlockwise through $\frac{\pi}{2}$, all robots will rotate anticlockwise to follow the vector field $\bm{\Gamma}_{ri}^{\perp}$ once a collision risk arises. Therefore, the robots will move along the anticlockwise rotation direction to accomplish the collision avoidance.
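For concreteness, the construction of $\bm{\Gamma}_{ri}^{\perp}$ can be sketched in a few lines of NumPy. This is a hypothetical sketch rather than the authors' implementation: the helper name and the explicit sign flip used to enforce the positive-projection condition are our own.

```python
import numpy as np

# Anticlockwise rotation by pi/2, i.e. the matrix R_{pi/2} from the text.
R90 = np.array([[0.0, -1.0],
                [1.0,  0.0]])

def collision_avoidance_vector(p_self, p_other, v_self):
    """Gamma_ri^perp: perpendicular to the repulsive vector
    Gamma_ri = p_i - p_o_tilde, with a positive projection onto
    R90 @ v_i (the body-fixed Y axis)."""
    p_o = 0.5 * (p_self + p_other)        # virtual obstacle at the midpoint
    gamma_r = p_self - p_o                # repulsive vector Gamma_ri
    perp = R90 @ gamma_r                  # one of the two perpendiculars
    if perp @ (R90 @ v_self) < 0.0:       # flip so the projection is positive
        perp = -perp
    return perp
```

Whichever sign is chosen for each robot, the difference $\bm{\Gamma}_{ri}^{\perp}-\bm{\Gamma}_{rj}^{\perp}$ stays perpendicular to $\bm{p}_i-\bm{p}_j$, which is the property used later in the collision-avoidance proof.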
\begin{figure}[htp]
\centering
\includegraphics[width=0.22\textwidth,trim=0 0 0 0,clip]{vehicle_co.eps}
\caption{Collision avoidance of two nonholonomic mobile robots by vectors $\bm{\Gamma}_{ri}^{\perp}$ and $\bm{\Gamma}_{rj}^{\perp}$}
\label{fig_vehicle_co_avoid}
\end{figure}
According to $\bm{\Gamma}_{ri}$ given in (\ref{eq_Gamma_r_co_avoid}), for robot $i$, the dynamic vector field with collision avoidance in $\mathbb{R}^2$ can be proposed as follows
\begin{equation}\label{DVF_Gamma_Qi}
\bm{\Gamma}_{Qi}=\varsigma\tilde{\bm{\Gamma}}_d+(1-\varsigma)\bm{\Gamma}_{ri}^{\perp},
\end{equation}
where the transition function $\varsigma$ is obtained from (\ref{eq_switch_func}) by replacing $R_o$ with $R_c$. The above results are summarized in the theorem below.
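The convex blend above can likewise be sketched in code. Since the transition function $\varsigma$ from (\ref{eq_switch_func}) is defined elsewhere, a linear ramp between $2r_{ij}$ and $2R_c$ is used here as a hypothetical stand-in; the function names are our own.

```python
import numpy as np

def transition(d, R_c, r_ij):
    """Hypothetical stand-in for the transition function: 1 (pure
    navigation) outside the avoidance region, 0 (pure avoidance) at
    the minimum safe distance, linear in between."""
    return float(np.clip((d - 2.0 * r_ij) / (2.0 * R_c - 2.0 * r_ij), 0.0, 1.0))

def blended_field(gamma_d, gamma_perp, d, R_c, r_ij):
    """Gamma_Qi: convex blend of the navigation field gamma_d and the
    collision-avoidance field gamma_perp, weighted by the transition."""
    s = transition(d, R_c, r_ij)
    return s * gamma_d + (1.0 - s) * gamma_perp
```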
\begin{theorem}\label{theo_DVF_co_avoid}
Let $(x_{di},y_{di},\theta_{di})$ denote an arbitrarily-specified final state for robot $i$. The dynamic vector field $\bm{\Gamma}_{Qi}$ proposed in (\ref{DVF_Gamma_Qi}) asymptotically converges to $(x,y,\theta)=(x_{di},y_{di},\theta_{di})$ and avoids collisions with the other robots.
\end{theorem}
\begin{IEEEproof}
Similar to the proof of Theorem~\ref{theo_DVF_ob_avoid}, we need to prove that $\bm{\Gamma}_{Qi}$ will not cause collisions between the robots. Define the following distance function
\begin{equation}
f_{ij}= \|\bm{p}_i-\bm{p}_j\|^2 - (2r_{ij})^2,
\end{equation}
which satisfies $f_{ij}>0$ for $\forall\bm{p}_i,\bm{p}_j\in\mathcal{D}_c$. Taking the time derivative along the vector fields $\bm{\Gamma}_{ri}^{\perp}$ and $\bm{\Gamma}_{rj}^{\perp}$, we have
\begin{align}\label{eq_dot_f_ij}
\frac{\rm d}{{\rm d}t}f_{ij} &= 2(\bm{p}_i-\bm{p}_j)^{\rm T}(\bm{\Gamma}_{ri}^{\perp}-\bm{\Gamma}_{rj}^{\perp}) \nonumber \\
&= 2(\bm{p}_i-\bm{p}_j)^{\rm T}\bm{R}_{\frac{\pi}{2}}^{\rm T}(\bm{p}_i-\tilde{\bm{p}}_o-\bm{p}_j+\tilde{\bm{p}}_o) \nonumber \\
&=0,
\end{align}
which implies that the distance between robot $i$ and robot $j$ remains constant in the area $\mathcal{D}_c$. Note that the distance function satisfies $f_{ij}>0$ at the initial time of collision avoidance, so $f_{ij}>0$ continues to hold afterwards. Hence, robot $i$ will not collide with robot $j$ during the motion.
\end{IEEEproof}
Based on the vector field $\bm{\Gamma}_{Qi}$ designed in (\ref{DVF_Gamma_Qi}), we can now design the control inputs $v_i$ and $\omega_i$. As in the preceding sections, the controller is still proposed with the aid of the vector field expressed in the body-fixed frame $\mathcal{F}_{\mathcal{B}}$.
Let $\bm{\Gamma}_{Qi}^{\mathcal{B}}$ denote $\bm{\Gamma}_{Qi}$ expressed in the body-fixed frame $\mathcal{F}_{\mathcal{B}}$, which can be obtained by
\begin{equation}\label{eq_DVF_co_avoid_in_Fb}
\bm{\Gamma}_{Qi}^{\mathcal{B}}=\bm{R}^{\rm T}\bm{\Gamma}_{Qi}\triangleq
\begin{bmatrix}
\Gamma_{Qi}^{\mathcal{B}x} \\
\Gamma_{Qi}^{\mathcal{B}y}
\end{bmatrix}.
\end{equation}
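This change of frame can be sketched directly, assuming the standard planar rotation matrix $R(\theta)$ that maps body-frame coordinates to world-frame coordinates (the function name is our own):

```python
import numpy as np

def to_body_frame(gamma, theta):
    """Express a world-frame vector in the body-fixed frame:
    Gamma^B = R(theta)^T Gamma, where theta is the robot heading."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R.T @ gamma
```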
Nonetheless, two facts are worth noting in the design of $v_{xi}$ and $\omega_i$, which differ from the controllers in the preceding sections. The first is that the linear velocities of robot $i$ and robot $j$ should be equal when the two robots move around the virtual obstacle for collision avoidance. Actually, the definitions in (\ref{eq_tilde_p_o}) and (\ref{eq_Gamma_r_perp_co_avoid}) implicitly guarantee that $\bm{\Gamma}_{ri}^{\perp}$ and $\bm{\Gamma}_{rj}^{\perp}$ have the same amplitude, so that $\frac{\rm d}{{\rm d}t}f_{ij}=0$ in (\ref{eq_dot_f_ij}) holds. Therefore, once the robots enter the collision avoidance field, we set the linear velocities to a common constant denoted by $v_c$, which can be chosen according to the speed range of real robots. The second is that the angle between the directions of $\bm{v}_i$ and $\bm{\Gamma}_{ri}^{\perp}$, similar to the angle $\delta$ defined in (\ref{eq_angle_delta}), belongs to $[-\pi,\pi]$ instead of $[-\frac{\pi}{2},\frac{\pi}{2}]$. This is because the vector $\bm{\Gamma}_{ri}^{\perp}$ is not guaranteed to project positively onto the $X_{\mathcal{B}}$-axis, but only onto the $Y_{\mathcal{B}}$-axis. Then, the angle from the direction of $\bm{v}_i$ to $\bm{\Gamma}_{ri}^{\perp}$ should be determined by
\begin{equation}\label{eq_angle_delta_2}
\delta=-{\rm atan2}(\Gamma_{Qi}^{\mathcal{B}y},\Gamma_{Qi}^{\mathcal{B}x}).
\end{equation}
Based on these two facts, the controller can be designed as
\begin{subequations}\label{eq_nonholo_controller_co_avoid}
\begin{align}
v_{xi} &= k_v\varsigma\tilde{\Gamma}_d^{\mathcal{B}x}+(1-\varsigma)v_c, \\
\omega_i &= -k_{\omega}\varsigma\theta+k_a{\rm atan2}(\Gamma_{Qi}^{\mathcal{B}y},\Gamma_{Qi}^{\mathcal{B}x}),
\end{align}
\end{subequations}
where $k_v,k_{\omega},k_a$ are all positive scalars.
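The control law (\ref{eq_nonholo_controller_co_avoid}) can be sketched as follows. The gain values are illustrative, and the function name and argument layout are our own.

```python
import numpy as np

def controller(gamma_d_bx, gamma_b, theta, s, v_c, k_v=1.0, k_w=1.0, k_a=1.0):
    """Control inputs (v, omega). gamma_d_bx is the x-component of the
    navigation field in the body frame, gamma_b the blended field in the
    body frame, s the transition value, v_c the common collision-avoidance
    speed, and k_v, k_w, k_a positive gains."""
    v = k_v * s * gamma_d_bx + (1.0 - s) * v_c
    w = -k_w * s * theta + k_a * np.arctan2(gamma_b[1], gamma_b[0])
    return v, w
```

With $\varsigma=0$ (pure avoidance) the linear speed reduces to the common constant $v_c$, and with the field already aligned with the heading the angular command vanishes.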
Having investigated the collision avoidance of two robots, it is not difficult to handle the collision avoidance problem for multiple nonholonomic mobile robots in the motion planning process. Following the same idea, a virtual obstacle can be introduced for the multi-robot collision avoidance. As seen in the two-robot analysis, the crucial problem in such an approach is how to define the position of the virtual obstacle. For two robots, the virtual obstacle is placed at the midpoint of the line segment connecting the two robot positions. This point can also be interpreted as the convex combination of the two positions with coefficients $\frac{1}{2}$. Inspired by this fact, we can employ the concept of convex combination to determine the position of the virtual obstacle.
At a certain instant, regarding robot $i$, we assume that there are $M_c$ robots ($M_c<N$) whose distance to robot $i$ is less than the collision avoidance threshold $R_c$. Let $\mathcal{P}$ denote the index set of these $M_c$ robots, so that $d_{ij}=\|\bm{p}_i-\bm{p}_j\|<R_c$ for all $j\in\mathcal{P}$. Denoting the positions of these $M_c$ robots by $\bm{p}_1,\bm{p}_2,\cdots,\bm{p}_{M_c}$, we can define the following convex combination
\begin{equation}
\tilde{\bm{p}}_o=\frac{1}{M_c}\sum_{j=1}^{M_c}\bm{p}_j.
\end{equation}
Further, we place the virtual obstacle at the convex combination $\tilde{\bm{p}}_o$ and make these robots move around the virtual obstacle, as in the two-robot case shown in Fig.~\ref{fig_vehicle_co_avoid}, so as to achieve the collision avoidance among these robots. Once the position of the virtual obstacle is determined, the collision avoidance vector field $\bm{\Gamma}_{ri}^{\perp}$ can be easily obtained from (\ref{eq_Gamma_r_co_avoid}) and (\ref{eq_Gamma_r_perp_co_avoid}). In addition, the control inputs can also be straightforwardly derived as in (\ref{eq_nonholo_controller_co_avoid}).
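The centroid rule can be sketched as follows. Whether the robot's own position enters the average is not fully pinned down by the text, so we include it here (an assumption), which recovers the two-robot midpoint (\ref{eq_tilde_p_o}); the function name is our own.

```python
import numpy as np

def virtual_obstacle(p_self, neighbours, R_c):
    """Centroid-based virtual obstacle for multi-robot collision
    avoidance: average the positions of the robots within the
    avoidance threshold R_c, together with the robot itself."""
    close = [q for q in neighbours if np.linalg.norm(p_self - q) < R_c]
    if not close:
        return None                       # no collision risk at this instant
    return np.mean(np.array([p_self] + close), axis=0)
```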
\section{Numerical Simulation Examples}\label{sec_sim}
In this section, four numerical simulation examples are provided, namely, 1) motion planning in an obstacle-free space, 2) motion planning with obstacle avoidance, 3) coordinated motion planning with collision avoidance, and 4) coordinated motion planning with both obstacle and collision avoidance.
\begin{table}[htp]
\renewcommand{\arraystretch}{1.5}
\caption{Initial and final states for motion planning in an obstacle-free space}
\label{tab_MP_Initial_Final}
\centering
\begin{tabular}{c|c|c}
\hline
Case No. & \makecell{Initial condition \\ $(x_0,y_0,\theta_0)$} & \makecell{Specified final state \\ $(x_d,y_d,\theta_d)$} \\
\hline
1 & \multirow{6}*{$(0,0,0)$} & $(0,40,0)$ \\
2 & ~ & $(40,40,\frac{\pi}{2})$ \\
3 & ~ & $(40,0,-\frac{\pi}{2})$ \\
4 & ~ & $(40,-40,0)$ \\
5 & ~ & $(-20,-40,-\frac{\pi}{2})$ \\
6 & ~ & $(-40,0,\pi)$ \\
\hline
\end{tabular}
\end{table}
\begin{example}[Motion planning in an obstacle-free space]
The dynamic vector field (\ref{eq_DVF_arbitr}) is employed in this example to verify the effectiveness of the motion planner in an obstacle-free space. The initial and final states of the nonholonomic mobile robot are given in Table~\ref{tab_MP_Initial_Final}, where we choose six different final states including positions and orientations. Note that this is an example of one robot under various requirements on the final state, rather than coordinated motion planning of multiple robots. Fig.~\ref{fig_MP_Initial_Final} depicts the initial and specified final states of the robot. It should be mentioned that the case of $(x_d,y_d,\theta_d)=(0,40,0)$ is quite challenging, because the goal lies in the lateral direction and has the same attitude as the initial state, while the robot cannot move sideways. The simulation results of the robot trajectories are provided in Fig.~\ref{fig_MP_Trajectory}, which shows that the robot reaches the specified positions as well as the desired orientations.
\end{example}
\begin{figure}[htp]
\centering
\subfigure[Initial and specified final states of the robot]{
\label{fig_MP_Initial_Final}
\includegraphics[width=0.23\textwidth,trim=60 5 70 5,clip]{MP_Initial_Final.eps}}
\subfigure[Trajectories of the robot by motion planning]{
\label{fig_MP_Trajectory}
\includegraphics[width=0.23\textwidth,trim=60 5 70 5,clip]{MP_Trajectory.eps}}
\caption{Simulation results of motion planning in an obstacle-free space}
\label{fig_MP_Sim}
\end{figure}
\begin{table}[htp]
\renewcommand{\arraystretch}{1.5}
\caption{Initial and final states for motion planning with obstacle avoidance}
\label{tab_OB_Initial_Final}
\centering
\begin{tabular}{c|c|c}
\hline
Case No. & \makecell{Initial condition \\ $(x_0,y_0,\theta_0)$} & \makecell{Specified final state \\ $(x_d,y_d,\theta_d)$} \\
\hline
1 & $(0,30,0)$ & \multirow{3}*{$(0,0,0)$} \\
2 & $(-30,30,\frac{\pi}{2})$ & ~ \\
3 & $(-35,0,\pi)$ & ~ \\
\hline
\end{tabular}
\end{table}
\begin{figure*}[htp]
\centering
\subfigure[$t=0$s]{
\label{fig_ob_Sim1}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{ob_avoid_t=0.eps}}
\subfigure[$t=5$s]{
\label{fig_ob_Sim2}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{ob_avoid_t=5.eps}}
\subfigure[$t=10$s]{
\label{fig_ob_Sim3}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{ob_avoid_t=10.eps}}
\subfigure[$t=20$s]{
\label{fig_ob_Sim4}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{ob_avoid_t=20.eps}}
\caption{Simulation results of motion planning in an obstacle environment}
\label{fig_ob_Sim}
\end{figure*}
\begin{figure*}[htp]
\centering
\subfigure[$t=0$s]{
\label{fig_co_line_Sim1}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{5_vehicles_line1.eps}}
\subfigure[$t=3$s]{
\label{fig_co_line_Sim2}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{5_vehicles_line2.eps}}
\subfigure[$t=6$s]{
\label{fig_co_line_Sim3}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{5_vehicles_line3.eps}}
\subfigure[$t=15$s]{
\label{fig_co_line_Sim4}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{5_vehicles_line4.eps}}
\caption{Simulation results of coordinated motion planning with collision avoidance (Scenario~1)}
\label{fig_co_line_Sim}
\end{figure*}
\begin{figure*}[htp]
\centering
\subfigure[$t=0$s]{
\label{fig_co_circle_Sim1}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{6_vehicles_circle1.eps}}
\subfigure[$t=3$s]{
\label{fig_co_circle_Sim2}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{6_vehicles_circle2.eps}}
\subfigure[$t=6$s]{
\label{fig_co_circle_Sim3}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{6_vehicles_circle3.eps}}
\subfigure[$t=15$s]{
\label{fig_co_circle_Sim4}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{6_vehicles_circle4.eps}}
\caption{Simulation results of coordinated motion planning with collision avoidance (Scenario~2)}
\label{fig_co_circle_Sim}
\end{figure*}
\begin{example}[Motion planning with obstacle avoidance]
This example considers motion planning in an obstacle environment, so that the dynamic vector field with obstacle avoidance should be utilized. The initial and final states of one robot are provided in Table~\ref{tab_OB_Initial_Final}, where we choose three different initial conditions. The radius of the obstacle is set to $r_o=1.5$, while the radius of the region with the obstacle-avoidance vector field is $R_o=3$. The simulation time is chosen as $T=20$s, and the trajectories of the three different scenarios are given in Fig.~\ref{fig_ob_Sim}. It can be seen that the robot arrives at the specified final states while avoiding the obstacles.
\end{example}
\begin{example}[Coordinated motion planning with collision avoidance]
In this example, we provide two scenarios of coordinated motion planning of multiple robots with collision avoidance. In the first scenario, five robots start from a line formation with parallel orientations, as shown in Fig.~\ref{fig_co_line_Sim1}. These robots are required to reach their own positions in another line formation and keep the same orientations as their initial ones. The trajectories of the robots at different time instants are illustrated in Fig.~\ref{fig_co_line_Sim2}-Fig.~\ref{fig_co_line_Sim4}, which indicates that the robots achieve the desired final states and avoid collisions with each other. The second scenario considers six robots on a circle, which are expected to exchange positions with the opposite robot while maintaining the initial orientations at the final instant. Such a scenario may cause deadlocks for artificial-potential or optimization-based methods, but the vector field proposed in this paper is applicable to this problem, as can be observed from the simulation results in Fig.~\ref{fig_co_circle_Sim}.
\end{example}
\begin{figure*}[htp]
\centering
\subfigure[$t=0$s]{
\label{fig_ob_co_Sim1}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{10_vehicles_1.eps}}
\subfigure[$t=6$s]{
\label{fig_ob_co_Sim3}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{10_vehicles_3.eps}}
\subfigure[$t=15$s]{
\label{fig_ob_co_Sim5}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{10_vehicles_5.eps}}
\subfigure[$t=25$s]{
\label{fig_ob_co_Sim6}
\includegraphics[width=0.235\textwidth,trim=60 5 70 5,clip]{10_vehicles_6.eps}}
\caption{Simulation results of coordinated motion planning with both obstacle and collision avoidance}
\label{fig_ob_co_Sim}
\end{figure*}
\begin{figure}[htp]
\centering
\includegraphics[width=0.35\textwidth,trim=20 5 20 5,clip]{distance.eps}
\caption{Minimum distance among all pairs of robots at each time instant}
\label{fig_distance}
\end{figure}
\begin{example}[Coordinated motion planning with both obstacle and collision avoidance]
The last example handles the multi-robot motion planning in an obstacle environment, so that we have to take into account the obstacle avoidance as well as the collision avoidance among the robots. In this example, the number of robots is $N=10$ and the safe distance between any pair of robots is set to $r_{ij}=5$. Besides, the radius of the obstacle is chosen to be $r_o=5$, while the radius of the region with the obstacle-avoidance vector field is $R_o=10$. The motions of all robots at different time instants are depicted in Fig.~\ref{fig_ob_co_Sim}, in which the final states of the robots form a line formation with the desired parallel orientations. It can also be seen from Fig.~\ref{fig_ob_co_Sim} that the trajectories of the robots have no overlaps with the obstacles, implying that the obstacle avoidance is realized successfully. The minimum distance among all pairs of robots at each time instant is shown in Fig.~\ref{fig_distance}, which demonstrates that the safety threshold is not violated during the motion.
\end{example}
\section{Conclusion}\label{sec_con}
This paper has studied the simultaneous position and orientation planning of multiple nonholonomic mobile robots. Such a planning problem takes the position and orientation requirements into account simultaneously, meaning that a robot can reach the goal point with a specified attitude angle. In contrast to the existing open-loop algorithms, we have proposed a novel global feedback motion planning method, namely, a dynamic vector field, under which the directions of velocity vectors over the 2-D plane are determined by both position and orientation and the nonholonomic constraint can be handled. In addition, by blending with a circular vector field, the dynamic vector field has been extended to the cases of obstacle and collision avoidance. Future work will focus on vector-field-based motion planning under more realistic conditions, such as input saturation, external disturbances, and measurement noise.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
In this section, we present an approximation algorithm for Correlation
Clustering with Asymmetric Classification Errors.
The algorithm first solves a standard LP relaxation and assigns every edge a length of $x_{uv}$ (see Section~\ref{sec:LP}).
Then, one by one it creates new clusters and removes them from the graph. The algorithm creates a cluster $C$ as follows. It picks a random vertex $p$, called a pivot, among yet unassigned vertices and a random number $R\in [0,1]$. Then, it adds the pivot $p$ and all vertices $u$ with $f(x_{pu}) \leq R$ to $C$, where $f:[0,1]\to [0,1]$ is a properly chosen function, which we define below.
We give a pseudo-code for this algorithm in Algorithm~\ref{alg:ApprAlg}.
\begin{algorithm}[tb]
\caption{Approximation Algorithm}
\label{alg:ApprAlg}
\begin{algorithmic}
\INPUT An instance of Correlation Clustering with Asymmetric Weights $G=(V,E^+,E^-, \mathbf{w}_e)$.
\STATE Initialize $t=0$ and $V_t=V$.
\WHILE{$V_t \neq \varnothing$}
\STATE Pick a random pivot $p_t\in V_t$.
\STATE Choose a radius $R$ uniformly at random in $[0,1]$.
\STATE Create a new cluster $S_t$; add the pivot $p_t$ to $S_t$.
\FORALL{$u\in V_t$}
\IF{$f(x_{p_{t}u})\leq R$}
\STATE Add $u$ to $S_t$.
\ENDIF
\ENDFOR
\STATE Let $V_{t+1} = V_t\setminus S_t$ and $t = t+1$.
\ENDWHILE
\OUTPUT clustering $\mathcal{S}=(S_0,\dots, S_{t-1})$.
\end{algorithmic}
\end{algorithm}
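A compact Python sketch of this pivot-based rounding follows. The helper names are our own, and `x(u, v)` stands in for the LP edge lengths $x_{uv}$; note that all remaining vertices are thresholded against the same random radius $R$, which is the dependent rounding discussed below.

```python
import random

def pivot_cluster(vertices, x, f, rng=None):
    """One run of the pivot-based rounding: repeatedly pick a random
    pivot p and a radius R ~ U[0, 1], and put every remaining u with
    f(x(p, u)) <= R into the pivot's cluster."""
    rng = rng or random.Random()
    remaining = list(vertices)
    clusters = []
    while remaining:
        p = rng.choice(remaining)            # random pivot among V_t
        R = rng.random()                     # radius R ~ U[0, 1]
        S = [u for u in remaining if u == p or f(x(p, u)) <= R]
        clusters.append(S)
        remaining = [u for u in remaining if u not in S]
    return clusters
```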
Our algorithm resembles the LP-based correlation clustering algorithms by
\citet{ACN08} and \citet{CMSY15}. However, a crucial difference between our
algorithm and the above-mentioned algorithms is that our algorithm uses a
``dependent'' rounding. That is, if for two edges $pv_1$ and $pv_2$, we have
$f(x_{pv_1})\leq R$ and $f(x_{pv_2})\leq R$ at some step $t$ of the algorithm
then both $v_1$ and $v_2$ are added to the new cluster $S_t$. The algorithms
by \citet{ACN08} and \citet{CMSY15} make decisions on whether to add $v_1$ to
$S_t$ and $v_2$ to $S_t$, independently. Also, the choice of the function $f$
is quite different from the functions used by~\citet{CMSY15}. In fact, it is
influenced by the paper by~\citet*{GVY96}.
\section{Analysis of the Algorithm}
The analysis of our algorithm follows the general approach proposed by~\citet*{ACN08}.
\citet{ACN08} observed that in order to get upper bounds on the approximation
factors of their algorithms, it is sufficient to consider how these algorithms
behave on triplets of vertices. Below, we present their method adapted to our
settings. Then, we will use Theorem~\ref{thm:L4} to analyze our algorithm.
\subsection{General Approach: Triple-Based Analysis}
\label{triple_Analysis}
Consider an instance of Correlation Clustering $G=(V,E^+,E^-)$ on three vertices $u$, $v$, $w$. Suppose that the edges $uv$, $vw$, and $uw$ have
signs $\sigma_{uv}, \sigma_{vw}, \sigma_{uw}\in \{\pm\}$, respectively. We shall call this instance a triangle $(u,v,w)$ and refer to
the vector of signs $\sigma =(\sigma_{vw}, \sigma_{uw}, \sigma_{uv})$ as the signature of the triangle~$(u,v,w)$.
Let us now assign arbitrary lengths $x_{uv}$, $x_{vw}$, and $x_{uw}$ satisfying the triangle inequality
to the edges $uv$, $vw$, and $uw$ and run one iteration of our algorithm on the triangle $uvw$
(see Algorithm~\ref{alg:alg-one-step}).
\begin{algorithm}[tb]
\caption{One iteration of Algorithm~\ref{alg:ApprAlg} on triangle $uvw$}
\label{alg:alg-one-step}
\begin{algorithmic}
\STATE Pick a random pivot $p\in \{u,v,w\}$.
\STATE Choose a random radius $R$ with the uniform distribution in $[0,1]$.
\STATE Create a new cluster $S$. Insert $p$ in $S$.
\FORALL{$a \in \{u,v,w\}\setminus\{p\}$}
\IF{$f_{\alpha}(x_{pa})\leq R$}
\STATE Add $a$ to $S$ .
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
We say that a positive edge $uv$ is in disagreement with $S$ if $u\in S$ and $v\notin S$ or $u\notin S$ and $v\in S$. Similarly,
a negative edge $uv$ is in disagreement with $S$ if $u,v\in S$. Let $cost(u,v\;|\; w)$ be the probability that the
edge $(u,v)$ is in disagreement with $S$ given that $w$ is the pivot.
$$cost(u,v\;|\; w) =
\begin{cases}
\mathrm{Pr}(u\in S, v\notin S \text{ or } u\notin S, v\in S\;|\; p = w),& \text{if } \sigma_{uv} = \text{``$+$''};\\
\mathrm{Pr}(u\in S, v\in S\;|\; p = w),& \text{if } \sigma_{uv} = \text{``$-$''}.
\end{cases}$$
Let $lp(u,v\;|\; w)$ be the LP contribution of the edge $(u,v)$ times the probability of it being removed, conditioned on $w$ being the pivot.
$$lp(u,v\;|\; w) =
\begin{cases}
x_{uv}\cdot \mathrm{Pr}(u\in S \text{ or } v\in S \;|\; p = w),& \text{if } \sigma_{uv} = \text{``$+$''};\\
(1-x_{uv}) \cdot \mathrm{Pr}(u\in S \text{ or } v\in S\;|\; p = w),& \text{if } \sigma_{uv} = \text{``$-$''}.
\end{cases}$$
We now define two functions $ALG^{\sigma}(x,y,z)$ and $LP^{\sigma}(x,y,z)$. To this end,
construct a triangle $(u,v,w)$ with signature $\sigma$ and edge lengths $x,y,z$ (where
$x_{vw} = x$, $x_{uw} = y$, $x_{uv} = z$).
Then,
\begin{align*}
ALG^{\sigma}(x,y,z) &= \mathbf{w}_{uv}\cdot cost(u,v\;|\; w) + \mathbf{w}_{uw}\cdot cost(u,w\;|\; v) + \mathbf{w}_{vw}\cdot cost(v,w\;|\; u);\\
LP^{\sigma}(x,y,z) &= \mathbf{w}_{uv}\cdot lp(u,v\;|\; w) + \mathbf{w}_{uw}\cdot lp(u,w\;|\; v) + \mathbf{w}_{vw}\cdot lp(v,w\;|\; u).
\end{align*}
We will use the following theorem from the paper by \citet*{CMSY15} (Lemma~4) to analyze our algorithm.
This theorem was first proved by~\citet*{ACN08} but it was not stated in this form in their paper.
\begin{theorem}[see \cite{ACN08} and \cite{CMSY15}]\label{thm:L4}
Consider a function $f_{\alpha}$ with $f_{\alpha}(0) = 0$. If for all signatures $\sigma=(\sigma_1,\sigma_2,\sigma_3)$
(where each $\sigma_i\in \{\pm\}$) and edge lengths $x$, $y$, and $z$ satisfying the triangle inequality,
we have $ALG^{\sigma}(x,y,z)\leq \rho LP^{\sigma}(x,y,z)$, then the approximation factor of the algorithm is at most $\rho$.
\end{theorem}
\subsection{Analysis of the Approximation Algorithm}
\begin{proof}[Proof of Theorem~\ref{thm:main}] Without loss of generality we
assume that the scaling parameter $\mathbf{w}$ is $1$. We use different functions for
$\alpha \leq 0.169$ and $\alpha \geq 0.169$. Let $A = 3 + 2\log_e 1/\alpha$.
For $\alpha \leq 0.169$, we define
$f_{\alpha}(x)$ as follows (see Figure~\ref{fig:plot-f}):
$$
f_{\alpha}(x)= \left\{
\begin{array}{ll}
1-e^{-Ax}, & \text{if }0\leq x<\frac{1}2-\frac{1}{2A}; \\
1, & \text{otherwise};
\end{array}
\right.
$$
and, for $\alpha \geq 0.169$, we define $f_{\alpha}(x)$ as follows:
\begin{align*}
f_{\alpha}(x)= \left\{
\begin{array}{ll}
0, &\mbox{ if } x<\frac{1}{A} \\
\frac{1-\alpha}{3}, &\mbox{ if } \frac{1}{A}\leq x<\frac{1}{2}-\frac{1}{2A} \\
1, &\mbox{ if } x\geq \frac{1}{2}-\frac{1}{2A} \\
\end{array}
\right.
\end{align*}
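Both branches of $f_{\alpha}$ can be written down directly. The sketch below follows the definitions above with $A = 3 + 2\ln(1/\alpha)$; the function name is our own.

```python
import math

def f_alpha(x, alpha):
    """Rounding function f_alpha from the proof, A = 3 + 2 ln(1/alpha).
    For alpha <= 0.169 it is 1 - exp(-A x) below the threshold
    1/2 - 1/(2A); for alpha >= 0.169 it is a two-level step function."""
    A = 3.0 + 2.0 * math.log(1.0 / alpha)
    if x >= 0.5 - 0.5 / A:
        return 1.0
    if alpha <= 0.169:
        return 1.0 - math.exp(-A * x)
    return 0.0 if x < 1.0 / A else (1.0 - alpha) / 3.0
```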
\begin{figure}[t]
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel=$x$,
ylabel=$f_{\alpha}(x)$,
grid=both,
minor grid style={gray!25},
major grid style={gray!25},
width=\linewidth,
legend pos=south east,
xtick distance=0.25,
minor tick num=1,
legend cell align={left}
]
\addplot[domain=0:0.4702,line width=0.5pt,solid,color=black] {1 - exp(-16.815*x) };
\addlegendentry{\tiny $f_{\alpha}$ for $\alpha = 0.001$};
\addplot[line width=0.5pt,dashed,color=black] %
table[x=x,y=f,col sep=comma]{data/optimal-f-plot-0.001.csv};
\addlegendentry{\tiny $f_{opt}$ for $\alpha = 0.001$};
\addplot[domain=0:0.459,line width=0.5pt,solid,color=red] {1 - exp(-12.21*x) };
\addlegendentry{\tiny $f_{\alpha}$ for $\alpha = 0.01$};
\addplot[line width=0.5pt,dashed,color=red] %
table[x=x,y=f,col sep=comma]{data/optimal-f-plot-0.01.csv};
\addlegendentry{\tiny $f_{opt}$ for $\alpha = 0.01$};
\addplot[domain=0:0.434,line width=0.5pt,solid,color=blue] {1 - exp(-7.605*x) };
\addlegendentry{\tiny $f_{\alpha}$ for $\alpha = 0.1$};
\addplot[line width=0.5pt,dashed,color=blue] %
table[x=x,y=f,col sep=comma]{data/optimal-f-plot-0.1.csv};
\addlegendentry{\tiny $f_{opt}$ for $\alpha = 0.1$};
\addplot[domain=0.4702:1,line width=0.6pt,color=black,dotted,forget plot] {1};
\addplot[domain=0.459:1,line width=0.6pt,color=orange,dotted,forget plot] {1};
\addplot[domain=0.434:1,line width=0.6pt,color=red,dotted,forget plot] {1};
\end{axis}
\node [color=black, rotate=85] at (0.55,2) {\tiny $0.001$};
\node [color=red, rotate=57] at (1.344,4.46) {\tiny $0.01$};
\node [color=blue, rotate=73] at (1.087, 3.23) {\tiny $0.1$};
\node [color=black, rotate=82] at (0.75,1) {\tiny $0.001$};
\node [color=red, rotate=67] at (1.7,3.46) {\tiny $0.01$};
\node [color=blue, rotate=75] at (1.55, 2.2) {\tiny $0.1$};
\end{tikzpicture}
\caption{This plot shows functions $f_{\alpha}(x)$ used in the proof of Theorem~\ref{thm:main} for $\alpha \in\{0.001, 0.01, 0.1\}$. Additionally, it shows optimal functions $f_{opt}(x)$
(see Section~\ref{sec:optimal} for details).
Note that every function $f_{\alpha}(x)$, including $f_{opt}(x)$, has a discontinuity at point $\tau = \nicefrac{1}{2} - \nicefrac{1}{2A}$; for $x \geq \tau$, $f_{\alpha}(x) = 1$.}
\label{fig:plot-f}
\end{figure}
Our analysis of the algorithm relies on Theorem~\ref{thm:L4}. We will show that for every triangle
$(u_1,u_2,u_3)$ with edge lengths $(x_1,x_2,x_3)$ (satisfying the triangle inequality) and signature $\sigma = (\sigma_1,\sigma_2, \sigma_3)$,
we have
\begin{equation}\label{eq:analysis-main}
ALG^{\sigma}(x_1,x_2,x_3)\leq A\cdot LP^{\sigma}(x_1,x_2,x_3).
\end{equation}
Therefore, by Theorem~\ref{thm:L4}, our algorithm gives an $A$-approximation.
Without loss of generality, we assume that $x_1\leq x_2\leq x_3$. When $i\in\{1,2,3\}$ is fixed, we will denote the other two elements of $\{1,2,3\}$ by $k$ and $j$, so that $j < k$.
For $i\in\{1,2,3\}$, let $e_i = (u_j,u_k)$ (the edge opposite to $u_i$), $w_i = \mathbf{w}_{e_i}$, $x_i = x_{u_ju_k}$, $y_i = f_{\alpha}(x_i)$, and
$$t_i = A\cdot lp(u_j, u_k | u_i)-cost(u_j, u_k | u_i).$$
Observe that (\ref{eq:analysis-main}) is equivalent to
the inequality $w_1t_1 + w_2 t_2 + w_3 t_3 \geq 0$. We now prove that this inequality
always holds.
\begin{lemma}\label{lem:wr-geq-0} We have
\begin{equation}\label{eq:analysis-main-alt}
w_1t_1 + w_2 t_2 + w_3 t_3 \geq 0
\end{equation}
\end{lemma}
We express each $t_i$ in terms of $x_i$'s and $y_i$'s.
\begin{claim}\label{cl:what_t_i_looks_like} For every $i\in\{1,2,3\}$, we have
$$
t_i =
\begin{cases}
A(1 - y_j) x_i-(y_k - y_j), & \text{if } \sigma_i = \text{``$+$''}\\
A(1-y_j)(1-x_i)-(1 - y_k) , & \text{if } \sigma_i = \text{``$-$''}
\end{cases}
$$
\end{claim}
\begin{proof}
If $\sigma_i = \text{``$+$''}$, then
\begin{align*}
t_i &= A\cdot lp(u_j, u_k | u_i)-cost(u_j, u_k | u_i)\\
&=A x_{u_ju_k}\cdot \mathrm{Pr}(u_j\in S \text{ or } u_k\in S \;|\; p = u_i) -\mathrm{Pr}(u_j\in S, u_k\notin S \text{ or } u_j\notin S, u_k\in S \;|\; p = u_i)\\
&= A x_i \cdot \mathrm{Pr}(f_{\alpha}(x_k) \leq R \text{ or } f_{\alpha}(x_j) \leq R ) - \mathrm{Pr}(f_{\alpha}(x_k) \leq R < f_{\alpha}(x_j) \text{ or } f_{\alpha}(x_j) \leq R < f_{\alpha}(x_k) )\\
&= A x_i (1 - y_j) - (y_k - y_j),
\end{align*}
where we used that $y_k = f_{\alpha}(x_k) \geq f_{\alpha}(x_j) = y_j$ (since $x_k \geq x_j$ and $f_{\alpha}(x)$ is non-decreasing).
If $\sigma_i = \text{``$-$''}$, then similarly to the previous case, we have
\begin{align*}
t_i &= A\cdot lp(u_j, u_k | u_i)-cost(u_j, u_k | u_i)\\
&= A (1-x_{u_ju_k}) \cdot \mathrm{Pr}(u_j\in S \text{ or } u_k\in S \;|\; p = u_i)- \mathrm{Pr}(u_j\in S, u_k\in S\;|\; p = u_i) \\
&= A (1-x_i) \cdot \mathrm{Pr}(f_{\alpha}(x_k)\leq R \text{ or } f_{\alpha}(x_j) \leq R) -
\mathrm{Pr}(f_{\alpha}(x_k) \leq R, f_{\alpha}(x_j)\leq R) \\
&= A (1-x_i) \cdot (1 - y_j) - (1 - y_k).
\end{align*}
\end{proof}
We say that edge $e_i$ \textit{pays for itself} if $t_i \geq 0$. Note
that if all edges $e_1, e_2, e_3$ pay for themselves then the desired inequality
$(\ref{eq:analysis-main-alt})$ holds. First, we show that all negative edges pay
for themselves.
\begin{claim}\label{cl:negative_pays}
If $\sigma_i = \text{``$-$''}$, then $t_i\geq 0$.
\end{claim}
\begin{proof}
By Claim~\ref{cl:what_t_i_looks_like}, $t_i = A(1-y_j)(1-x_i)-(1-y_k)$.
Thus, we need to show that $A(1-y_j)(1-x_i)\geq 1-y_k$.
If $x_k\geq \frac{1}{2} - \frac{1}{2A}$ then $y_k = 1$, and the inequality trivially holds.
If $x_k< \frac12 - \frac{1}{2A}$, then using $x_j \leq x_k$, we get
$$A >\frac{1}{1-2x_k} \geq \frac{1}{1-x_k - x_j} \geq
\frac{1}{1-x_i},$$
where we used the triangle inequality $x_k + x_j \geq x_i$. Thus
$$
A(1-y_j)(1-x_i)\geq A(1-y_k)(1-x_i)\geq 1-y_k.
$$
\end{proof}
We now show that for short edges $e_i$, it is sufficient to consider only the case
when $\sigma_i = \text{``$+$''}$. Specifically, we prove the following claim.
\begin{claim}\label{positive_is_enough}
Suppose that $x_i<\frac{1}{2}-\frac{1}{2A}$. If (\ref{eq:analysis-main-alt}) holds for $\sigma$ with $\sigma_i = \text{``$+$''}$, then (\ref{eq:analysis-main-alt}) also holds for $\sigma'$ obtained from $\sigma$ by changing the sign of $\sigma_i$ to \text{``$-$''}.
\end{claim}
\begin{proof}
To prove the claim, we show that the value of $t_i$ is greater for $\sigma'$ than for $\sigma$. That is,
$$A(1-y_j)x_i -(y_k-y_j)< A(1-y_j)(1-x_i)-(1-y_k).$$
Note that the values of $t_j$ and $t_k$ do not depend on $\sigma_i$ and
thus do not change if we replace $\sigma$ with $\sigma'$.
Since $f_{\alpha}$ is non-decreasing and $x_j\leq x_k$, we have $y_j\leq y_k$.
Hence,
$$
x_i< \frac{1}{2}-\frac{1}{2A}=\frac{1}{2}+\frac{1}{2A}-\frac{1}{A}\leq \frac{1}{2}+\frac{1}{2A}-\frac{(1-y_k)}{A(1-y_j)}.
$$
Thus,
$$ 2A(1-y_j)x_i<A(1-y_j)+1-y_j-2(1-y_k).$$
Therefore, $$A(1-y_j)x_i -(y_k-y_j)< A(1-y_j)(1-x_i)-(1-y_k),$$ as required.
\end{proof}
Unlike negative edges, positive edges do not necessarily pay for themselves.
We now prove that positive edges of length at least $1/A$ pay for themselves.
\begin{claim}\label{cl:positive_pays}
If $\sigma_i = \text{``$+$''}$ and $x_i \geq 1/A$, then $t_i\geq 0$.
\end{claim}
\begin{proof}
We have,
$$t_i = A(1 - y_j) x_i-(y_k - y_j) \geq (1 - y_j)-(y_k - y_j) = 1 - y_k \geq 0.$$
\end{proof}
We now separately consider two cases $\alpha \leq 0.169$ and $\alpha \geq 0.169$.
\let\qed\relax
\end{proof}
\subsection{Analysis of the Approximation Algorithm for \texorpdfstring{$\alpha \leq 0.169$}{α ≤ 0.169}}
First, we consider the case of $\alpha \leq 0.169$.
\begin{proof}[Proof of Lemma~\ref{lem:wr-geq-0} for $\alpha \leq 0.169$]
We first show that if $x_3<\frac{1}{2}-\frac{1}{2A}$, then all three edges $e_1$, $e_2$, and $e_3$
pay for themselves.
\begin{claim}\label{cl:all_pay} If $x_3<\frac{1}{2}-\frac{1}{2A}$, then
$t_i \geq 0$ for every $i$.
\end{claim}
\begin{proof}
Since $x_3 < \frac{1}{2}-\frac{1}{2A}$, for every $i\in\{1,2,3\}$ we have $x_i
< \frac{1}{2}-\frac{1}{2A}$ and thus $y_i \equiv f_{\alpha}(x_i) = 1- e^{-Ax_i}$.
We show that $t_i \geq 0$ for all $i$. Fix $i$. If $\sigma_i = \text{``$-$''}$,
then, by Claim~\ref{cl:negative_pays}, $t_i \geq 0$. If $\sigma_i = \text{``$+$''}$, then
\begin{multline*}
y_k-y_j= e^{-Ax_j}-e^{-Ax_k}=e^{-Ax_j}\left(1-e^{-A\left(x_k-x_j\right)}\right)
\leq \\
\leq
e^{-Ax_j}A(x_k-x_j)\leq e^{-Ax_j}Ax_i =A(1-y_j)x_i,
\end{multline*}
where the first inequality follows from the inequality $1-e^{-x}\leq x$, and
the second inequality follows from the triangle inequality. Thus, $t_i =
A(1 - y_j) x_i-(y_k - y_j) \geq 0$.
\end{proof}
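As a quick numeric sanity check of this claim (not part of the formal proof), one can sample random triangles in this regime, set $y_i = 1-e^{-Ax_i}$ with $A = 3 + 2\log_e 1/\alpha$, and verify that every $t_i$ is non-negative for both signs; the helper names below are ours.

```python
import math
import random

def t_value(sign, xi, yj, yk, A):
    # t_i from Claim cl:what_t_i_looks_like:
    #   "+": A(1 - y_j) x_i - (y_k - y_j),   "-": A(1 - y_j)(1 - x_i) - (1 - y_k),
    # where e_j, e_k are the other two edges of the triangle with x_j <= x_k.
    if sign == "+":
        return A * (1 - yj) * xi - (yk - yj)
    return A * (1 - yj) * (1 - xi) - (1 - yk)

def all_pay_holds(alpha, trials=2000, seed=0):
    # Randomized check: sample triangles with x_1 <= x_2 <= x_3 < 1/2 - 1/(2A)
    # satisfying the triangle inequality, set y_i = 1 - exp(-A x_i),
    # and verify t_i >= 0 for every edge and both signs.
    A = 3 + 2 * math.log(1 / alpha)
    hi = 0.5 - 1 / (2 * A)
    rng = random.Random(seed)
    for _ in range(trials):
        x3 = rng.uniform(0, hi)
        x2 = rng.uniform(x3 / 2, x3)      # guarantees x3 - x2 <= x2
        x1 = rng.uniform(x3 - x2, x2)     # triangle inequality: x1 + x2 >= x3
        y1, y2, y3 = (1 - math.exp(-A * x) for x in (x1, x2, x3))
        # for e_1 the other edges are (e_2, e_3); for e_2: (e_1, e_3); for e_3: (e_1, e_2)
        for xi, yj, yk in ((x1, y2, y3), (x2, y1, y3), (x3, y1, y2)):
            for sign in ("+", "-"):
                if t_value(sign, xi, yj, yk, A) < -1e-9:
                    return False
    return True
```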
We conclude that if $x_3< \frac{1}{2} -\frac{1}{2A}$, then
(\ref{eq:analysis-main-alt}) holds. The case $x_3< \frac{1}{2} -\frac{1}{2A}$ is the most interesting
case in the analysis; the rest of the proof is more technical. As a side note,
let us point out that Theorem~\ref{thm:main} has dependence $A = 3 + 2\log_e 1/\alpha$
because (i) $f_{\alpha}(x)$ must be equal to $C - e^{-Ax}$ or a slower growing function so that
Claim~\ref{cl:all_pay} holds; (ii) Theorem~\ref{thm:L4} requires that $f_{\alpha}(0) = 0$;
and finally (iii) we will need below that
$1-f\left(\frac{1}{2}-\frac{3}{2A}\right) \leq \alpha$.
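As a numeric aside, requirement (iii) in fact holds with equality on the exponential branch $f_{\alpha}(x) = 1-e^{-Ax}$, since $e^{-A(\frac12-\frac{3}{2A})} = e^{-\frac{A}{2}+\frac32} = \alpha$. A small check (the helper name is ours):

```python
import math

def one_minus_f_at_boundary(alpha):
    # Evaluates 1 - f_alpha(1/2 - 3/(2A)) on the exponential branch
    # f_alpha(x) = 1 - exp(-A x), with A = 3 + 2 ln(1/alpha).
    # Algebraically, exp(-A(1/2 - 3/(2A))) = exp(-A/2 + 3/2) = alpha,
    # so the requirement "<= alpha" holds with equality.
    A = 3 + 2 * math.log(1 / alpha)
    x = 0.5 - 3 / (2 * A)
    return math.exp(-A * x)
```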
\medskip
From now on, we assume that $x_3\geq \frac12 -\frac{1}{2A}$ and, consequently,
$y_3 = f(x_3)=1$. Observe that if $x_1\geq \frac{1}{A}$, then all $x_i \geq \frac{1}{A}$ and
thus, by Claims~\ref{cl:negative_pays} and~\ref{cl:positive_pays}, all $t_i \geq 0$
and we are done. Similarly, if $x_2\geq \frac{1}{2}-\frac{1}{2A}$, then
$x_2 \geq \frac{1}{A}$ (since $A \geq 3$). Hence, $t_2\geq 0$ and $t_3 \geq 0$;
additionally, $y_2=y_3=1$. Thus $t_1 = 0$ and inequality~(\ref{eq:analysis-main-alt}) holds.
Therefore, it remains to show that inequality~(\ref{eq:analysis-main-alt}) holds
when
$$x_1 < \frac1A,\quad x_2 < \frac{1}{2}-\frac{1}{2A},
\text{ and } x_3 \geq \frac{1}{2}-\frac{1}{2A}.$$
By Claim \ref{positive_is_enough}, we
may also assume that $\sigma_1 = \text{``$+$''}$ and $\sigma_2 = \text{``$+$''}$.
Since $\alpha \leq 0.169$, we have $A > 5$ and
$$x_2 \geq x_3 - x_1 \geq \bigg(\frac12 - \frac{1}{2A}\bigg) - \frac1A >
\frac1A\text{ and }x_3 \geq \frac{1}{2}-\frac{1}{2A} > \frac1A.$$
Thus, by Claims~\ref{cl:negative_pays} and~\ref{cl:positive_pays}, $t_2 \geq 0$ and $t_3
\geq 0$. Hence, $w_2 t_2 + w_3 t_3 \geq \alpha(t_2+t_3)$.
Also, recall that $e_1$ is a positive edge and thus $w_1 \leq 1$.
Therefore, it is sufficient to show that
\begin{equation}\label{eq:analysis-main-alt-reweighted}
t_1 \geq -\alpha(t_2 + t_3).
\end{equation}
Now we separately consider two possible signatures $\sigma = (\text{``$+$''},\text{``$+$''},\text{``$+$''})$
and $\sigma = (\text{``$+$''},\text{``$+$''},\text{``$-$''})$.
\medskip
\noindent\textbf{First, assume that $\sigma =
(\text{``$+$''},\text{``$+$''},\text{``$+$''})$.} We need to show that
$$
A(1 - y_2) x_1 - (1-y_2)\geq \alpha \bigg(
(1 - y_1) + (y_2 - y_1) - A(1 - y_1) x_2 - A(1 - y_1) x_3
\bigg).
$$
Here, we used that $y_3=1$.
Note that $x_2\geq x_3-x_1\geq \frac{1}{2}-\frac{1}{2A}-\frac{1}{A}=\frac{1}{2}-\frac{3}{2A}$. Therefore,
\begin{align*}
1-y_2\leq 1-\left(1-e^{-A\left(\frac{1}2-\frac{3}{2A}\right)}\right)&=e^{-\frac{3}{2}-\log_e\frac{1}{\alpha}+\frac{3}{2}}=e^{-\log_e\frac{1}{\alpha}}=\alpha.
\end{align*}
Thus, $(1-y_2)+\alpha(1 - y_1) + \alpha(y_2 - y_1)\leq \alpha y_2 +2\alpha(1-y_1)$. To finish the analysis of the case $\sigma = (\text{``$+$''},\text{``$+$''},\text{``$+$''})$, it
is sufficient to show that
\begin{align*}
\alpha y_2 +2\alpha(1-y_1)\leq& A(1 - y_2) x_1+\alpha A(1 - y_1) x_2+ \alpha A(1 - y_1) x_3.
\end{align*}
This inequality immediately follows from the following claim (we simply need to add up (\ref{refined_x_2}) and (\ref{refined_x_3}) and multiply the result by $\alpha$).
\begin{claim}
For $c =0.224$, we have
\begin{align}
(2-c)(1-y_1) &\leq A(1-y_1) x_2 \label{refined_x_2};\text{ and}\\
y_2 + c(1-y_1)&\leq A(1-y_1) x_3 \label{refined_x_3}.
\end{align}
\end{claim}
\begin{proof}
Since $c\geq 2-\log_e\frac{1}{0.169} \geq 2-\log_e\frac{1}{\alpha}$ (recall that $\alpha \leq 0.169$), we have
$$2-c\leq \log_e\frac{1}{\alpha}= \frac{A}{2}-\frac{3}{2}\leq Ax_2.$$
Therefore, (\ref{refined_x_2}) holds.
We also have,
$$c\leq 0.169 + \log_e \frac{1}{0.169}+1-e \leq \alpha+\log_e\frac{1}{\alpha}+1-e.$$
Thus,
$e-\alpha\leq\frac{A}{2}-\frac{1}{2}-c \leq Ax_3 - c$.
Therefore,
\begin{align}\label{intermediate}
e^{-1}\left(Ax_3-c\right)&\geq 1-\alpha e^{-1}= 1-e^{-A\left(\frac{1}2-\frac{1}{2A}\right)}\geq y_2,
\end{align}
where we used that $x_2< \frac{1}{2}-\frac{1}{2A}$ and $y_2 = f_{\alpha}(x_2) = 1 - e^{-Ax_2}$.
Observe that from inequalities (\ref{intermediate}) and $x_1< \frac{1}{A}$ it follows that
$$y_2\leq \left(1-f\Big(\frac{1}{A}\Big)\right)(Ax_3-c)\leq (1-y_1)(Ax_3-c),$$
which implies (\ref{refined_x_3}).
\end{proof}
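The two bounds on the constant $c$ used in the proof above are also easy to confirm numerically; both are tightest at $\alpha = 0.169$, where they read $0.2221\ldots \leq 0.224 \leq 0.2286\ldots$ (a sanity check, not part of the proof; the helper name is ours).

```python
import math

def c_bounds_hold(alpha, c=0.224):
    # Checks the two bounds on c used in the claim, for alpha <= 0.169:
    #   2 - ln(1/alpha) <= c   and   c <= alpha + ln(1/alpha) + 1 - e.
    # The lower bound decreases and the upper bound increases as alpha decreases,
    # so both are tightest at alpha = 0.169.
    lower = 2 - math.log(1 / alpha)
    upper = alpha + math.log(1 / alpha) + 1 - math.e
    return lower <= c <= upper
```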
\medskip
\noindent\textbf{Now, assume that $\sigma = (\text{``$+$''},\text{``$+$''},\text{``$-$''})$.} We need to prove the following inequality,
\begin{equation}\label{e_3_negative}
(1-y_2)+\alpha (1 - y_1+1-y_2)\leq A(1 - y_2) x_1+\alpha A(1 - y_1) (x_2+1-x_3).
\end{equation}
As before,
\begin{equation}\label{e_3_negative_1}
(1-y_2)+\alpha (1 - y_1+1-y_2)\leq \alpha + \alpha (1 - y_1+1-y_2)\leq \alpha +2\alpha(1 - y_1).
\end{equation}
On the other hand,
\begin{align}\label{e_3_negative_2}
A(1 - y_2) x_1+\alpha A(1 - y_1)(x_2+1-x_3)&\geq \alpha A (1 - y_1) (1-x_1+x_1+x_2-x_3)\nonumber\\
&\geq \alpha A (1 - y_1) (1-x_1)\nonumber\\
&\geq \alpha A (1 - y_1) \left(1-\frac{1}{A}\right)\nonumber\\
&=\alpha (1 - y_1)(A-1)
\end{align}
where the second inequality is due to the triangle inequality, and the third inequality is due to $x_1<\frac{1}{A}$.
Finally, observe that $1\leq 2e^{-1}\log_e\frac{1}{\alpha}=e^{-1}(A-3)\leq (1-y_1)(A-3)$.
We get,
\begin{equation}\label{e_3_negative_3}
\alpha (1 - y_1)(A-1)\geq \alpha +2\alpha(1 - y_1).
\end{equation}
Combining~(\ref{e_3_negative_1}), (\ref{e_3_negative_2}), and (\ref{e_3_negative_3}), we get~(\ref{e_3_negative}).
This concludes the case analysis and the proof of Theorem~\ref{thm:main} for the regime $\alpha\leq 0.169$.
\let\qed\relax
\end{proof}
\subsection{Analysis of the Approximation Algorithm for \texorpdfstring{$\alpha\geq 0.169$}{α ≥ 0.169}}
We now consider the case when $\alpha \geq 0.169.$ Observe that for $\alpha \geq 0.169$
\begin{equation}\label{large_alpha_ineq1}
A=3+2\log_e(1/\alpha) \geq \frac{6\alpha +3-(1-\alpha)^2}{3\alpha}
\end{equation}
and
\begin{equation}\label{large_alpha_ineq2}
\frac{1-\alpha}{3}\leq \frac{2\alpha}{1+\alpha}.
\end{equation}
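Both elementary inequalities can be verified numerically over the whole range $\alpha \in [0.169, 1]$ (a sanity check, not part of the proof; the helper name is ours).

```python
import math

def large_alpha_inequalities_hold(alpha):
    # Checks the two inequalities stated above for a given alpha:
    #   (1)  A = 3 + 2 ln(1/alpha) >= (6 alpha + 3 - (1 - alpha)^2) / (3 alpha)
    #   (2)  (1 - alpha) / 3 <= 2 alpha / (1 + alpha)
    A = 3 + 2 * math.log(1 / alpha)
    ineq1 = A >= (6 * alpha + 3 - (1 - alpha) ** 2) / (3 * alpha)
    ineq2 = (1 - alpha) / 3 <= 2 * alpha / (1 + alpha)
    return ineq1 and ineq2

# Both inequalities hold on the whole range [0.169, 1]; inequality (1)
# is nearly tight at alpha = 0.169 and holds with equality at alpha = 1.
assert all(large_alpha_inequalities_hold(0.169 + i * (1.0 - 0.169) / 200)
           for i in range(200))
assert large_alpha_inequalities_hold(1.0)
```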
\begin{proof}[Proof of Lemma~\ref{lem:wr-geq-0} for $\alpha \geq 0.169$]
Observe that if $x_1\geq \frac{1}{A}$, then all $x_i \geq 1/A$ and thus, by Claims~\ref{cl:negative_pays}
and~\ref{cl:positive_pays}, all $t_i \geq 0$ and we are done. Moreover, if $x_3 < \frac{1}{A}$, then
all $x_i < 1/A$, implying $y_i=0$ and thus $t_i\geq 0$ whenever $\sigma_i=\text{``$+$''}$.
Combined with Claim~\ref{cl:negative_pays}, this implies that all $t_i\geq 0$, and we are done.
Similarly, if $x_2\geq \frac{1}{2}-\frac{1}{2A}$, then $x_2\geq 1/A$ (since $A \geq 3$).
Hence, $t_2\geq 0$ and $t_3 \geq 0$; additionally, we have $y_2=y_3=1$.
Thus, $t_1 = 0$ and we are done.
Therefore, we will assume below that
\begin{align*}
x_1 < \frac{1}{A},\;\;x_2 < \frac{1}2-\frac{1}{2A},\;\; x_3\geq \frac{1}{A}.
\end{align*}
Furthermore, by Claim \ref{positive_is_enough}, we may assume $\sigma_1 = \text{``$+$''}$ and $\sigma_2 = \text{``$+$''}$. We consider four cases: (i) $x_2\geq \nicefrac{1}{A},\;x_3\geq \nicefrac{1}{2}-\nicefrac{1}{(2A)}$, (ii) $x_2< \nicefrac{1}{A},\;x_3\geq \nicefrac{1}{2}-\nicefrac{1}{(2A)}$, (iii) $x_2\geq \nicefrac{1}{A},\;x_3< \nicefrac{1}{2}-\nicefrac{1}{(2A)}$, and (iv) $x_2< \nicefrac{1}{A},\;x_3< \nicefrac{1}{2}-\nicefrac{1}{(2A)}$.
\medskip
\noindent\textbf{Consider the case $x_2\geq \frac{1}{A},\;x_3\geq \frac{1}{2}-\frac{1}{2A}.$} Then $y_1=0,\;y_2=\nicefrac{(1-\alpha)}{3},\;y_3=1.$ By Claims~\ref{cl:negative_pays} and~\ref{cl:positive_pays}, $t_2,t_3\geq 0$, and $e_2,e_3$ pay for themselves. If $t_1\geq 0$, we are done.
So we will assume below that $t_1<0$. Then,
\begin{equation}\label{weightAssumption1}
w_1t_1+w_2t_2+w_3t_3\geq 1\cdot t_1+\alpha t_2+\alpha t_3
\end{equation}
(recall that we assume that $e_1$ is a positive edge and thus $w_1\leq 1$).
Now we separately consider two possible signatures $\sigma = (\text{``$+$''},\text{``$+$''},\text{``$+$''})$ and
$\sigma = (\text{``$+$''},\text{``$+$''},\text{``$-$''})$.
\medskip
\noindent\textbf{First, assume that $\sigma = (\text{``$+$''},\text{``$+$''},\text{``$+$''})$.} Because of~(\ref{weightAssumption1}),
to prove~(\ref{eq:analysis-main-alt}) it is sufficient to show
\begin{equation}\label{+++}
(1-y_2)+\alpha +\alpha y_2
\leq A(1-y_2)x_1+\alpha Ax_2 +\alpha Ax_3
\end{equation}
From~(\ref{large_alpha_ineq1}) it follows that
$$
1+\alpha\leq\frac{(1-\alpha)^2}{3}+\alpha (A-1)
$$
which implies~(\ref{helper1}) due to $x_3\geq \frac{1}{2}-\frac{1}{2A}$
\begin{equation}\label{helper1}
1+\alpha\leq \frac{(1-\alpha)^2}{3}+2\alpha Ax_3
\end{equation}
Observe that from~(\ref{helper1}) together with triangle inequality and $y_2=\frac{1-\alpha}{3}\leq 1-\alpha$ it follows that
\begin{equation*}
1+\alpha\leq (1-\alpha)y_2+A(1-y_2)x_1-\alpha Ax_1
+\alpha Ax_1+\alpha Ax_2+\alpha Ax_3
\end{equation*}
which is equivalent to~(\ref{+++}).
\medskip
\noindent\textbf{Now, assume that $\sigma = (\text{``$+$''},\text{``$+$''},\text{``$-$''})$.} Because of~(\ref{weightAssumption1}), to prove~(\ref{eq:analysis-main-alt}) it is sufficient to show
\begin{equation}\label{++-}
(1-y_2)+\alpha+\alpha (1-y_2)\leq A(1-y_2)x_1+\alpha Ax_2+\alpha A(1-x_3)
\end{equation}
From~(\ref{large_alpha_ineq1}) and $y_2=\frac{1-\alpha}{3}$ it follows that
\begin{equation*}
1+2\alpha\leq\frac{(1-\alpha)^2}{3}+\alpha A \leq y_2(1 + \alpha) + \alpha A
\end{equation*}
Since $y_2 \leq 1-\alpha$,
\begin{equation*}
(1 + 2\alpha) \leq (1+\alpha)y_2+A(1-y_2)x_1-\alpha Ax_1 +\alpha A,
\end{equation*}
Hence, using the triangle inequality,
\begin{equation*}
1+2\alpha\leq (1+\alpha)y_2+A(1-y_2)x_1-\alpha Ax_1+\alpha A
+\alpha Ax_1+\alpha Ax_2-\alpha Ax_3,
\end{equation*}
which is equivalent to~(\ref{++-}).
\medskip
\noindent\textbf{Consider the case $x_2< \frac{1}{A},\;x_3\geq \frac{1}{2}-\frac{1}{2A}.$} Then $y_1=y_2=0,\;y_3=1.$ Observe that $t_3\geq 0$ and $t_1,t_2<0$. Then,
\begin{equation}\label{weightAssumption2}
w_1t_1+w_2t_2+w_3t_3 \geq 1\cdot t_1+1\cdot t_2+\alpha t_3.
\end{equation}
(recall that we assume that $e_1,e_2$ are positive edges and thus $w_1,w_2\leq 1$).
Furthermore, since $x_3\geq \frac{1}{2}-\frac{1}{2A}$ we have
\begin{equation}\label{helper3}
Ax_3\geq A(1-x_3)-1.
\end{equation}
From~(\ref{helper3}), we get that if (\ref{eq:analysis-main-alt}) holds for $\sigma$ with $\sigma_3 = \text{``$-$''}$, then (\ref{eq:analysis-main-alt}) also holds for $\sigma'$ obtained from $\sigma$ by changing the sign of $\sigma_3$ to $\text{``$+$''}.$ Thus without loss of generality $\sigma_3=\text{``$-$''}$ and we only need to consider $\sigma = (\text{``$+$''},\text{``$+$''},\text{``$-$''}).$ Then, because of~(\ref{weightAssumption2}), to prove~(\ref{eq:analysis-main-alt}) it is sufficient to show
\begin{equation}\label{++-2}
1+1+\alpha \leq Ax_1+Ax_2+\alpha A(1-x_3).
\end{equation}
From~(\ref{large_alpha_ineq1}) it follows that
$$
A\geq\frac{5+\alpha}{\alpha+1}
$$
which is equivalent to
\begin{equation}\label{helper4}
2+\alpha\leq \alpha A +(1-\alpha)(\frac{A}{2}-\frac{1}{2}).
\end{equation}
Observe that from~(\ref{helper4}) together with triangle inequality and $x_3\geq \frac{1}{2}-\frac{1}{2A}$ it follows that
\begin{equation*}
2+\alpha \leq \alpha A+(1-\alpha)Ax_3= Ax_3+\alpha A(1-x_3)
\leq Ax_1+Ax_2+\alpha A(1-x_3).
\end{equation*}
\medskip
\noindent\textbf{Consider the case $x_2\geq \frac{1}{A},\;x_3< \frac{1}{2}-\frac{1}{2A}$.}
Then $y_1=0$ and $y_2=y_3=\nicefrac{(1-\alpha)}{3}$. By Claim~\ref{positive_is_enough} we only need
to consider $\sigma=(\text{``$+$''},\text{``$+$''},\text{``$+$''}).$ Then
by Claim~\ref{cl:positive_pays}, $t_2,t_3\geq 0$. Thus, if $t_1\geq 0$ then
$w_1t_1+w_2t_2+w_3t_3\geq 0$. Let us assume that $t_1<0$.
Since $e_1$ is a positive edge, we have $w_1\leq 1$. Thus,
\begin{equation*}
w_1 t_1+w_2 t_2+w_3 t_3\geq 1\cdot t_1+\alpha t_2+\alpha t_3
\end{equation*}
We need to show that the right hand side in the above inequality is non-negative. Replace $t_1$, $t_2$, and $t_3$ with the expressions from Claim~\ref{cl:what_t_i_looks_like}. Now to obtain~(\ref{eq:analysis-main-alt}), it is sufficient to prove that
\begin{equation}\label{+++2}
y_3-y_2+\alpha y_3+\alpha y_2
\leq A(1-y_2)x_1+\alpha Ax_2 +\alpha Ax_3
\end{equation}
Observe that since $x_3\geq \frac{1}{A}$ we have
\begin{equation}\label{helper5}
2\alpha\leq (1-\alpha)y_2+2\alpha Ax_3.
\end{equation}
Inequalities~(\ref{helper5}) and~(\ref{large_alpha_ineq2}) imply
\begin{equation}\label{helper6}
(1+\alpha)y_3\leq (1-\alpha)y_2+2\alpha Ax_3.
\end{equation}
Observe that from~(\ref{helper6}) together with triangle inequality and $ y_2\leq 1-\alpha$ it follows that
\begin{equation*}
(1+\alpha)y_3\leq (1-\alpha)y_2+A(1-y_2)x_1-\alpha Ax_1
+\alpha Ax_1+\alpha Ax_2+\alpha Ax_3
\end{equation*}
which is equivalent to~(\ref{+++2}).
\medskip
\noindent\textbf{Consider the case $x_2< \frac{1}{A},\;x_3< \frac{1}{2}-\frac{1}{2A}.$} Then $y_1=y_2=0$ and, since $x_3\geq \nicefrac{1}{A}$, $y_3=\nicefrac{(1-\alpha)}{3}.$ By Claim~\ref{positive_is_enough} we only need to consider $\sigma=(\text{``$+$''},\text{``$+$''},\text{``$+$''}).$ Then by Claim~\ref{cl:positive_pays}, $t_3\geq 0$.
If $x_1\geq \nicefrac{y_3}{A}$ then $t_1,t_2\geq 0$ and we are done. Thus we assume $x_1< \nicefrac{y_3}{A}$ which implies $t_1<0$. We consider two
different regimes: (i) $x_2\geq \nicefrac{y_3}{A}$ and (ii) $x_2<\nicefrac{y_3}{A}.$
\medskip
\noindent\textbf{First, assume that $x_2\geq \nicefrac{y_3}{A}$} which implies $t_2\geq 0.$ Then,
\begin{equation}\label{weightAssumption4}
w_1t_1+w_2t_2+w_3t_3\geq 1\cdot t_1+\alpha t_2+\alpha t_3
\end{equation}
(recall that we assume that $e_1$ is a positive edge and thus $w_1\leq 1$).
Because of~(\ref{weightAssumption4}), to prove~(\ref{eq:analysis-main-alt}) it is sufficient to show
\begin{equation}\label{x_2_greater_than_y_3/A}
y_3+\alpha y_3\leq Ax_1+\alpha Ax_2+\alpha Ax_3
\end{equation}
Observe that by~(\ref{large_alpha_ineq2}) and $y_3=\nicefrac{(1-\alpha)}{3}$ we have
\begin{equation*}
(1+\alpha)y_3\leq 2\alpha\leq 2\alpha Ax_3\leq \alpha Ax_3 +\alpha Ax_1+\alpha Ax_2
\leq Ax_1+\alpha Ax_2+\alpha Ax_3
\end{equation*}
where the second inequality follows from $x_3\geq \frac{1}{A}$ and the third inequality follows from triangle inequality.
\medskip
\noindent\textbf{Now, assume that $x_2<\nicefrac{y_3}{A}$} which implies $t_2< 0.$ Then,
\begin{equation}\label{weightAssumption5}
w_1t_1+w_2t_2+w_3t_3\geq 1\cdot t_1+1\cdot t_2+\alpha t_3
\end{equation}
(recall that we assume that $e_1,e_2$ are positive edges and thus $w_1,w_2\leq 1$).
Because of~(\ref{weightAssumption5}), to prove~(\ref{eq:analysis-main-alt}) it is sufficient to show
\begin{equation}\label{x_2_less_than_y_3/A}
2y_3\leq Ax_1+Ax_2+\alpha Ax_3
\end{equation}
Observe that by~(\ref{large_alpha_ineq2}) and $x_3\geq \frac{1}{A}$
\begin{equation*}
2y_3\leq \frac{4\alpha}{1+\alpha}\leq 1+\alpha\leq(1+\alpha)Ax_3\leq Ax_1+Ax_2+\alpha Ax_3
\end{equation*}
where the last inequality follows from triangle inequality.
This concludes the case analysis and the proof of Theorem~\ref{thm:main} for the regime $\alpha\geq 0.169$.
\end{proof}
\section{Analysis of the Algorithm for Complete Bipartite Graphs}
\begin{proof}[Proof of Theorem~\ref{thm:mainBipartite}]
The proof is similar to the proof of Theorem~\ref{thm:main}. Without loss of generality we
assume that the scaling parameter $\mathbf{w}$ is $1$. Define $f(x)$ as follows
$$
f(x)= \left\{
\begin{array}{ll}
1-e^{-Ax}, & \text{if }0\leq x<\frac{1}2-\frac{1}{2A} \\
1, & \text{otherwise}
\end{array}
\right.
$$
where $A = 5 + 2\log_e 1/\alpha$. Our analysis of the algorithm relies on Theorem~\ref{thm:L4}. Since in the proof of Theorem~\ref{thm:L4}, we assumed that
all edges are present, let us add missing edges (edges inside parts) to the bipartite graph and assign them weight $0$; to be specific, we assume that they are positive edges. (It is important to note that Theorem~\ref{thm:L4} is true even when edges have zero weights). We will still refer to these edges as `missing edges'.
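As a small sanity check of the definition of $f$ above (not part of the proof), one can sample $f$ and confirm that it is non-decreasing and satisfies $f(0)=0$, the two properties our claims rely on; the helper names below are ours.

```python
import math

def f_bipartite(x, alpha):
    # The rounding function for complete bipartite graphs:
    # f(x) = 1 - exp(-A x) for 0 <= x < 1/2 - 1/(2A), and f(x) = 1 otherwise,
    # where A = 5 + 2 ln(1/alpha).
    A = 5 + 2 * math.log(1 / alpha)
    if x < 0.5 - 1 / (2 * A):
        return 1 - math.exp(-A * x)
    return 1.0

def f_is_non_decreasing(alpha, steps=1000):
    # Sample f on a grid over [0, 1] and check monotonicity --
    # the property that the claims from the previous analysis rely on.
    ys = [f_bipartite(i / steps, alpha) for i in range(steps + 1)]
    return all(a <= b for a, b in zip(ys, ys[1:]))
```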
We will show that for every triangle $(u_1,u_2,u_3)$ with edge lengths $(x_1,x_2,x_3)$ (satisfying the triangle inequality) and signature $\sigma = (\sigma_1,\sigma_2, \sigma_3)$, we have
\begin{equation}\label{eq:analysis-main-Bipartite}
ALG^{\sigma}(x_1,x_2,x_3)\leq A\cdot LP^{\sigma}(x_1,x_2,x_3)
\end{equation}
Therefore, by Theorem~\ref{thm:L4}, our algorithm gives an $A$-approximation. In addition to Theorem~\ref{thm:L4} we use Claims~\ref{cl:what_t_i_looks_like}, \ref{cl:negative_pays}, \ref{positive_is_enough}, \ref{cl:positive_pays} and~\ref{cl:all_pay}. Recall that the proofs of these claims rely on $f$ being non-decreasing, which is satisfied by the above choice. Observe that~(\ref{eq:analysis-main-Bipartite}) is equivalent to
\begin{equation}\label{eq:analysis-main-alt-Bipartite}
\sum_{i=1}^3 w_i t_i \geq 0.
\end{equation}
Observe that if $x_1\geq \frac{1}{A}$, then all $x_i \geq \frac{1}{A}$ and
thus, by Claims~\ref{cl:negative_pays} and~\ref{cl:positive_pays}, all $t_i \geq 0$
and we are done. Similarly, if $x_2\geq \frac{1}2-\frac{1}{2A} \geq
\frac{1}{A}$ (since $A > 3$), then $t_2\geq 0$ and $t_3 \geq 0$;
additionally, $y_2=y_3=1$, thus $t_1 = 0$ and we are done. Furthermore, if $x_3<\frac{1}2-\frac{1}{2A}$ then all $x_i<\frac{1}2-\frac{1}{2A}$ and thus, by Claim~\ref{cl:all_pay}, all $t_i \geq 0$
and we are done. Therefore, we will assume below that $x_1 < \frac1A$, $x_2 < \frac{1}2-\frac{1}{2A}$, and $x_3 \geq \frac{1}2-\frac{1}{2A}$.
Further, by the triangle inequality $x_2\geq x_3 - x_1 \geq \frac{A-1}{2A} - x_1 \geq \frac{A-3}{2A}$. We have (here we use that $A \geq 5$),
$$ x_1 \leq \frac{1}{A} \leq \frac{A-3}{2A} \leq \frac{A-1}{2A} - x_1 \leq x_2 < \frac{A-1}{2A} \leq x_3 \leq x_1 + x_2.$$
We will use below that
$$e^{A(x_2 - x_1)} \geq e^{A(\frac{A-1}{2A} - 2x_1)} = e^{2 + \log_e \frac{1}{\alpha} - 2Ax_1} = e^{2(1-Ax_1)}/\alpha \geq 1/\alpha.$$
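This bound is easy to confirm numerically at the extreme point $x_2 = \frac{A-1}{2A} - x_1$, where the exponent equals $\log_e\frac{1}{\alpha}$ exactly when $x_1 = \frac1A$ (a sanity check, not part of the proof; the helper name is ours).

```python
import math

def exp_gap_bound_holds(alpha, x1):
    # Checks e^{A(x2 - x1)} >= 1/alpha at the extreme point
    # x2 = (A-1)/(2A) - x1, for 0 <= x1 <= 1/A and A = 5 + 2 ln(1/alpha).
    # Algebraically, A(x2 - x1) = (A-1)/2 - 2A x1 = 2 + ln(1/alpha) - 2A x1,
    # which is at least ln(1/alpha) since A x1 <= 1 (equality at x1 = 1/A).
    A = 5 + 2 * math.log(1 / alpha)
    x2 = (A - 1) / (2 * A) - x1
    return math.exp(A * (x2 - x1)) >= 1 / alpha - 1e-9
```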
By Claim \ref{positive_is_enough}, we may also assume that $\sigma_1 = \text{``$+$''}$ and $\sigma_2 = \text{``$+$''}$ (recall that missing edges are assumed to be positive). By Claims~\ref{cl:negative_pays} and~\ref{cl:positive_pays}, $t_2 \geq 0$ and $t_3\geq 0$ (edges $e_2$ and $e_3$ pay for themselves). If $t_1 \geq 0$, we are done. So we will assume below that $t_1 < 0$. Since $G$ is a complete bipartite graph, a triangle $(u_1,u_2,u_3)$ contains either (i) no edges or (ii) two edges of $G$. In case (i), we have $w_1=w_2=w_3=0$, and (\ref{eq:analysis-main-alt-Bipartite}) holds trivially. In case (ii), if $e_1$ is the missing edge, then $w_1=0$ and, since $t_2,t_3\geq 0$, (\ref{eq:analysis-main-alt-Bipartite}) holds trivially.
It remains to consider three signatures $\sigma = (\text{``$+$''},\text{``$+$''}, \text{``$\circ$''})$, $\sigma = (\text{``$+$''},\text{``$\circ$''},\text{``$+$''})$
and $\sigma = (\text{``$+$''},\text{``$\circ$''},\text{``$-$''})$ where $\text{``$\circ$''}$ denotes a missing edge (which by our assumption above is a positive edge).
\paragraph{First, assume that $\sigma =(\text{``$+$''},\text{``$+$''}, \text{``$\circ$''})$.}
By Claim~\ref{cl:what_t_i_looks_like}, $t_1 = A(1-y_2)x_1 - (1 - y_2) = - e^{-Ax_2}(1 - Ax_1)$ and $t_2 = A(1-y_1)x_2 - (1 - y_1) = e^{-Ax_1}(Ax_2 - 1)$.
Since $e_3$ is missing, $w_3 = 0$. We have $w_1 t_1 + w_2 t_2 + w_3 t_3 \geq t_1 + \alpha t_2$ (here we used that $t_1 \leq 0$ and $t_2 \geq 0$). So it suffices to prove that $t_1 + \alpha t_2 \geq 0$ or, equivalently, $e^{Ax_2}(\alpha t_2 + t_1) \geq 0$. Using that $e^{A(x_2 - x_1)} \geq 1/\alpha$ and $x_2 \geq \frac{A-1}{2A} - x_1$, we get
$$
e^{Ax_2}(\alpha t_2 + t_1) = \alpha e^{A(x_2 - x_1)}(Ax_2 - 1) - (1 - Ax_1) \geq \alpha \cdot \frac{1}{\alpha} \cdot \Bigl(A \bigl(\frac{A-1}{2A} - x_1\bigr) - 1\Bigr) + Ax_1 - 1
= \frac{A-5}{2} > 0,$$
as required.
\paragraph{Now, assume that $\sigma =(\text{``$+$''},\text{``$\circ$''},\text{``$+$''})$.}
Now we have $t_1 = - e^{-Ax_2}(1 - Ax_1)$ (as before) and
$$t_3 = A(1 - y_1) x_3 - (y_2 - y_1) = A e^{-Ax_1} x_3 - (e^{-Ax_1} - e^{-Ax_2})= e^{-Ax_1} (Ax_3 -1) + e^{-Ax_2}.$$
We prove that $t_1 + \alpha t_3 \geq 0$ or, equivalently, $e^{Ax_2}(\alpha t_3 + t_1) \geq 0$.
Using that $e^{A(x_2 - x_1)} \geq 1/\alpha$ and $x_3 \geq \frac{A-1}{2A}$, we get
\begin{align*}
e^{Ax_2}(\alpha t_3 + t_1) &= \alpha \bigl(e^{A(x_2 - x_1)} (Ax_3 -1) + 1\bigr) - (1 - Ax_1) \\
&\geq (Ax_3 -1) + \alpha - (1 - Ax_1) > Ax_3 - 2 \geq \frac{A-1}{2} - 2 \geq 0,
\end{align*}
as required.
\paragraph{Finally, assume that $\sigma =(\text{``$+$''},\text{``$\circ$''},\text{``$-$''})$.}
Now we have $t_1 = - e^{-Ax_2}(1 - Ax_1)$ (as before) and $t_3 = A(1 - y_1) (1 - x_3) - (1 - y_2) = A e^{-Ax_1} (1 - x_3) - e^{-Ax_2}$.
As in the previous case, we prove that $e^{Ax_2}(\alpha t_3 + t_1) \geq 0$. We have,
$$e^{Ax_2}(\alpha t_3 + t_1) = \alpha\bigl(A e^{A(x_2-x_1)} (1 - x_3) - 1\bigr) - (1 - Ax_1)\geq
\underbrace{\alpha\bigl(A e^{A(x_2-x_1)} (1 - x_1 - x_2) - 1\bigr) - (1 - Ax_1)}_{F(x_1,x_2)}.$$
Denote the expression on the right by $F(x_1, x_2)$. We now show that for a fixed $x_1$, $F(x_1, x_2)$ is an increasing function of $x_2$ when $x_2 \in [\frac{A-1}{2A} - x_1, \frac{A-1}{2A})$. Indeed, we have
\begin{align*}
\frac{\partial F(x_1, x_2)}{\partial x_2} &= \alpha A e^{A(x_2 - x_1)} \bigl(A(1 - x_1 - x_2) - 1\bigr)
\geq \alpha A e^{A(x_2 - x_1)} \Bigl(A\Bigl(1 - \frac{1}{A} - \frac{A-1}{2A}\Bigr) - 1\Bigr) \\
&= \alpha A e^{A(x_2 - x_1)} \cdot \frac{A - 3}{2} > 0.
\end{align*}
We conclude that
\begin{align*}
F(x_1, x_2) &\geq F\left(x_1, \frac{A-1}{2A} - x_1\right) = \left.\left(\alpha\bigl(A e^{A(\tilde x_2-x_1)} (1 - x_1-\tilde x_2) - 1\bigr) - (1 - Ax_1)\right)\right|_{\tilde x_2 = \frac{A-1}{2A} - x_1} \\
&\geq \alpha \cdot A\cdot \frac{1}{\alpha}
\cdot \left(1 - \frac{A-1}{2A}\right) - \alpha - (1-Ax_1) = \frac{A + 1}{2} - \alpha -1 +Ax_1 \geq \frac{A + 1}{2} - 2 > 0.
\end{align*}
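The positivity of $F$ on the relevant region can also be confirmed numerically on a grid (a sanity check, not part of the proof; the helper names are ours).

```python
import math

def F(x1, x2, alpha):
    # F(x1, x2) = alpha * (A e^{A(x2 - x1)} (1 - x1 - x2) - 1) - (1 - A x1),
    # the lower bound on e^{A x2}(alpha t_3 + t_1) derived above,
    # with A = 5 + 2 ln(1/alpha).
    A = 5 + 2 * math.log(1 / alpha)
    return alpha * (A * math.exp(A * (x2 - x1)) * (1 - x1 - x2) - 1) - (1 - A * x1)

def F_positive_on_grid(alpha, steps=40):
    # Sample F over the region x1 in [0, 1/A],
    # x2 in [(A-1)/(2A) - x1, (A-1)/(2A)), and check positivity.
    A = 5 + 2 * math.log(1 / alpha)
    top = (A - 1) / (2 * A)
    for i in range(steps + 1):
        x1 = (i / steps) / A
        for j in range(steps):
            x2 = (top - x1) + (j / steps) * x1   # sweeps [top - x1, top)
            if F(x1, x2, alpha) <= 0:
                return False
    return True
```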
This concludes the case analysis and the proof of Theorem~\ref{thm:mainBipartite}.
\end{proof}
\section{Integrality Gap}
In this section, we give a $\Theta(\log 1/\alpha)$ integrality gap example for the LP relaxation presented in Section~\ref{sec:LP}. Notice that in the example each positive edge has a weight of $\mathbf{w}^+$ and each negative edge has a weight of $\mathbf{w}^-$ with $\mathbf{w}^+ \geq \mathbf{w}^-$.
\begin{proof}[Proof of Theorem~\ref{thm:intGap}]
Consider a $3$-regular expander $G = (V, E)$ on $n=\Theta((\alpha^2\log^2 \alpha)^{-1})$ vertices.
We say that two vertices $u$ and $v$ are similar if $(u, v) \in E$;
otherwise $u$ and $v$ are dissimilar. That is, the set of positive
edges $E^+$ is $E$ and the set of negative edges $E^-$ is $V\times V \setminus E$. Let $\mathbf{w}^+=1$ and $\mathbf{w}^-=\alpha.$
\begin{lemma}\label{lemma:gap-computation}
The integrality gap of the Correlation Clustering instance $G_{cc} = (V,E^+, E^-)$ described
above is $\Theta(\log \nicefrac{1}{\alpha})$.
\end{lemma}
\begin{proof}
Let $d(u,v)$ be the shortest path distance in $G$. Let $\varepsilon = 2/\log_3 n$. We define a feasible metric LP solution as follows:
$x_{uv} = \min (\varepsilon d(u,v), 1).$
Let $LP^+$ be the $LP$ cost of positive edges, and $LP^-$ be the LP cost of negative edges.
The LP cost of every positive edge is $\varepsilon$ since $d(u,v) = 1$ for $(u,v)\in E$.
There are $\nicefrac{3n}{2}$ positive edges in $G_{cc}$. Thus, $LP^+ \leq 3n/\log_3 n$. We now estimate $LP^-$. For every
vertex $u$, the number of vertices $v$ at distance less than $t$ is upper bounded by $3^t$ because $G$ is a 3-regular graph. Thus,
the number of vertices $v$ at distance less than $\nicefrac{1}{2} \log_3 n$ is upper bounded
by $\sqrt{n}$. Observe that the $LP$ cost of a negative edge $(u,v)$ (which is equal to $\alpha(1-x_{uv})$) is positive if and
only if $d(u,v) < \nicefrac{1}{2} \log_3 n$. Therefore, the number of negative edges with a positive $LP$ cost
incident on any vertex $u$ is at most $\sqrt{n}$. Consequently, the LP cost of all negative
edges is upper bounded by $\alpha n^{\frac{3}{2}}=\Theta(n/\log\nicefrac{1}{\alpha})$. Hence,
$$LP \leq \Theta(n/\log\nicefrac{1}{\alpha}) + 3n/\log_3 n = \Theta(n/\log\nicefrac{1}{\alpha}).$$
Here, we used that $\log n = \Theta(\log\nicefrac{1}{\alpha})$.
We now lower bound the cost of the optimal (integral) solution. Consider an optimal solution. There are two possible cases.
\begin{enumerate}
\item No cluster contains 90\% of the vertices. Then a constant fraction of positive edges in the expander $G$ are cut and, therefore, the cost of the
optimal clustering is at least $\Omega(n)$.
\item One of the clusters contains at least 90\% of all vertices. Then all negative edges in that cluster are in disagreement with the clustering.
There are at least $\binom{0.9n}{2} - m = \Theta(n^2)$ such edges, where $m=\nicefrac{3n}{2}$ is the number of positive edges. Their cost is at least $\Omega(\alpha n^2)$.
\end{enumerate}
We conclude that the cost of the optimal solution is at least $\Omega(n)$ and, thus, the integrality gap is $\Theta(\log (1/\alpha))$.
\end{proof}
We note that in this example $\log (1/\alpha) = \Theta(\log n)$. However, it is easy to construct an integrality gap example where $\log (1/\alpha) \ll \log n$. To do so, we pick the integrality gap example constructed above and create $k\gg n$ disjoint copies of it. To make the graph complete, we add negative edges with (fractional) LP value equal to $1$ to connect each copy to every other copy of the graph. The new graph has $kn \gg n$ vertices. However, the integrality gap remains the same, $\Theta(\log \nicefrac{1}{\alpha})$.
\end{proof}
Now we give a $\Theta(\log 1/\alpha)$ integrality gap example when $G$ is a complete bipartite graph.
\begin{proof}[Proof of Theorem~\ref{thm:intGapBipartite}]
The proof is very similar to that of Theorem~\ref{thm:intGap}. We start with a 3-regular \textit{bipartite} expander $G = (L, R, E)$ on $n=\Theta((\alpha^2\log^2 \alpha)^{-1})$ vertices (e.g., we can use a 3-regular bipartite Ramanujan expander constructed by~\citet*{MSS13}). Then we
define a correlation clustering instance as follows: $G_{cc}=(L,R,E^+,E^-)$ where $E^+=E$ and $E^-=(L\times R) \setminus E$; let $\mathbf{w}^+=1$ and $\mathbf{w}^-=\alpha$.
The proof of Lemma~\ref{lemma:gap-computation} can be applied to $G_{cc}$; we only need to note that if
a cluster contains at least 90\% of the vertices, then there are at least $\Theta(n^2)$ edges of $G_{cc}$ between vertices in the cluster.
It follows that the integrality gap is $\Omega(\log(1/\alpha))$.
\end{proof}
\section{Introduction}
In the Correlation Clustering problem, we are given a set of objects with pairwise similarity
information. Our aim is to partition these objects into clusters that match this information
as closely as possible. The pairwise information is represented as a weighted graph $G$ whose
edges are labelled as ``positive/similar'' and ``negative/dissimilar'' by a noisy binary classifier.
The goal is to find a clustering $\mathcal{C}$ that minimizes the weight of edges disagreeing
with this clustering: a positive edge is in disagreement with $\mathcal{C}$ if its endpoints belong to distinct clusters;
and a negative edge is in disagreement with $\mathcal{C}$ if its endpoints belong to the same cluster. We call this objective the MinDisagree objective.
The MinDisagree objective has been
extensively studied in the literature since it was introduced by~\citet*{BBC04}~(see e.g., \cite{CGW03, DEFI06, ACN08, pan2015, CMSY15}).
There are currently two standard models for Correlation Clustering which we will refer to as (1) Correlation Clustering on Complete Graphs
and (2) Correlation Clustering with Noisy Partial Information. In the former model, we assume that graph
$G$ is complete and all edge weights are the same i.e., $G$ is unweighted. In the latter model, we do not make any assumptions
on the graph $G$. Thus, edges can have arbitrary weights and some edges may be missing. These models are quite different
from the computational perspective. For the first model, \citet*{ACN08} gave a 2.5-approximation algorithm. This approximation
factor was later improved to 2.06 by \citet*{CMSY15}. For the second model,
\citet*{CGW03} and \citet*{DEFI06} gave an $O(\log n)$ approximation algorithm; they
also showed that Correlation Clustering with Noisy Partial Information is as hard as
the Multicut problem and, hence, $O(\log n)$ is likely to be the best possible approximation for this problem.
In this paper, we show how to interpolate between these two models for Correlation Clustering.
We study the Correlation Clustering problem on complete graphs
with edge weights. In our model, the weights on the
edges are constrained such that the ratio of the lightest edge in the graph
to the heaviest positive edge is at least $\alpha$, for some $\alpha \leq 1$. Thus, if $\mathbf{w}$ is the
weight of the heaviest positive edge in the graph, then each positive edge
has weight in $[\alpha \mathbf{w}, \mathbf{w}]$ and each negative edge has weight greater
than or equal to $\alpha \mathbf{w}$.
We argue that this model -- which we call Correlation Clustering with Asymmetric Classification Errors -- is better suited to capturing the subtleties of real world instances than the two standard models. Indeed, the assumptions made by the Correlation Clustering on Complete Graphs model are too strong, since real world instances rarely have equal edge weights. In contrast, in the Correlation Clustering with Noisy Partial Information model, edge weights can be arbitrarily small or large, an assumption which is too weak.
In many real world instances, the edge weights lie in some range $[a, b]$ with $a,b > 0$. Our model captures a larger family of instances.
Furthermore, the nature of classification errors for objects that are
similar and objects that are dissimilar is quite different. In many cases, a
\textit{positive} edge $uv$ indicates that the classifier found some actual
evidence that $u$ and $v$ are similar; while a negative edge simply means
that the classifier could not find any such proof that $u$ and $v$ are
similar, it does not mean that the objects $u$ and $v$ are necessarily
dissimilar. In some other cases, a \textit{negative} edge $uv$ indicates that
the classifier found some evidence that $u$ and $v$ are dissimilar; while a
positive edge simply means that the classifier could not find any such proof.
We discuss several examples below. Note that in the former case, a positive
edge gives a substantially stronger signal than a negative edge and should
have a higher weight; in the latter, it is the other way around: a negative
edge gives a stronger signal than a positive edge and should have a higher
weight. We make this statement more precise in Section~\ref{sec:LML}.
The following examples show how the Correlation Clustering with Asymmetric
Classification Errors model can help in capturing real world instances.
Consider an example from the paper on Correlation Clustering
by~\citet*{pan2015}. In their experiments, \citet{pan2015} used several data
sets including \emph{dblp-2011} and \emph{ENWiki-2013}\footnote{These data
sets are published by~\cite{dataset1, dataset2, dataset3, dataset4}}. In the
graph~\emph{dblp-2011}, \emph{each vertex represents a scientist and two
vertices are connected with an edge if the corresponding authors have
co-authored an article}. Thus, a positive edge with weight $\mathbf{w}^+$ between
Alice and Bob in the Correlation Clustering instance indicates that Alice and
Bob are coauthors, which strongly suggests that Alice and Bob work in similar
areas of Computer Science. However, it is not true that all researchers
working in some area of computer science have co-authored papers with each
other. Thus, the negative edge that connects two scientists who do not have
an article together does not deserve to have the same weight as a positive
edge, and thus can be modeled as a negative edge with weight $\mathbf{w}^- <
\mathbf{w}^+$.
Similarly, the vertices of the graph \emph{ENWiki-2013} are Wikipedia pages.
Two pages are connected with an edge if there is a link from one page to
another. A link from one page to the other is a strong suggestion that the
two pages are related and hence can be connected with a positive edge of
weight $\mathbf{w}^+$, while it is not true that two similar Wikipedia pages
necessarily should have a link from one to the other. Thus, it would be
better to join such pages with a negative edge of weight $\mathbf{w}^- < \mathbf{w}^+$.
Consider now the multi-person tracking problem. The problem is modelled as a
Correlation Clustering or closely related Lifted Multicut
Problem~\cite{tang2016multi,tang2017multiple} on a graph, whose vertices are
people detections in video sequences. Two detections are connected with a
positive or negative edge depending on whether the detected people have
similar or dissimilar appearance (as well as some other information). In this
case, a negative edge $(u,v)$ is more informative since it signals that the
classifier has identified body parts that do not match in detections $u$ and
$v$ and thus the detected people are likely to be different (a positive edge
$(u, v)$ simply indicates that the classifier was not able to find
non-matching body parts).
The Correlation Clustering with Asymmetric Classification Errors model
captures the examples we discussed above. It is instructive to consider an
important special case where all positive edges have weight $\mathbf{w}^+$ and all
negative edges have weight $\mathbf{w}^-$ with $\mathbf{w}^+ \neq \mathbf{w}^-$. If we were to
use the state of the art algorithm for Correlation Clustering on Complete
Graphs on our instance for Correlation Clustering with Asymmetric
Classification Errors (by completely ignoring edge weights and looking at the
instance as an unweighted complete graph), we would get a
$\Theta(\max(\nicefrac{\mathbf{w}^+}{\mathbf{w}^-}, \nicefrac{\mathbf{w}^-}{\mathbf{w}^+}))$
approximation to the MinDisagree objective. If, instead, we were to use the
state-of-the-art algorithms for Correlation Clustering with Noisy Partial
Information on our instance, we would get an $O(\log n)$ approximation to the
MinDisagree objective.
\noindent \textbf{Our Contributions.} In this paper, we present an
approximation algorithm for Correlation Clustering with Asymmetric
Classification Errors. Our algorithm gives an approximation factor of $A = 3 + 2\log_e
\nicefrac{1}{\alpha}$. Consider the scenario discussed above where all
positive edges have weight $\mathbf{w}^+$ and all negative edges have weight
$\mathbf{w}^-$. If $\mathbf{w}^+ \geq \mathbf{w}^-$, our algorithm gets a $(3 + 2\log_e
\mathbf{w}^+/\mathbf{w}^-)$ approximation; if $\mathbf{w}^+ \leq \mathbf{w}^-$, our algorithm gets a
$3$-approximation.
\begin{definition}
Correlation Clustering with Asymmetric Classification Errors is a variant of
Correlation Clustering on a Complete Graph. We assume that the weight
$\mathbf{w}_e$ of each positive edge lies in $[\alpha \mathbf{w}, \mathbf{w}]$ and the weight
$\mathbf{w}_e$ of each negative edge lies in $[\alpha \mathbf{w}, \infty)$, where $\alpha
\in (0,1]$ and $\mathbf{w} > 0$.
\end{definition}
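For intuition, given an instance with arbitrary positive and negative weights, the largest $\alpha$ for which it fits the definition is obtained by taking $\mathbf{w}$ to be the heaviest positive weight (the sketch below is a hypothetical helper for illustration, not part of the paper's algorithms):

```python
def implied_alpha(pos_weights, neg_weights):
    """Largest alpha for which the instance fits the definition,
    taking w to be the heaviest positive weight (hypothetical helper,
    not part of the paper's algorithms)."""
    w = max(pos_weights)                       # heaviest positive edge
    return min(min(pos_weights), min(neg_weights)) / w

# positives must lie in [alpha*w, w]; negatives only need to be >= alpha*w
print(implied_alpha([2.0, 5.0, 10.0], [4.0, 100.0]))  # -> 0.2
```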
We note here that the assumption that the weight of positive edges is bounded from above is crucial. Without this assumption (even if we require that negative weights are bounded from above and below), the LP gap is unbounded for every fixed $\alpha$ (this follows from the integrality gap example we present in Theorem~\ref{thm:intGap}).
The following is our main theorem.
\begin{theorem}\label{thm:main}
There exists a polynomial time $A = 3 + 2\log_e 1/\alpha$ approximation
algorithm for Correlation Clustering with Asymmetric Classification Errors.
\end{theorem}
We also study a natural extension of our model to the case of complete bipartite graphs. That is, the positive edges across the bipartition have a weight in $[\alpha \mathbf{w}, \mathbf{w}]$ and the negative edges across the bipartition have a weight of at least $\alpha \mathbf{w}$. Note that the state-of-the-art approximation algorithm for Correlation Clustering on Unweighted Complete Bipartite Graphs has an approximation factor of $3$ (see~\citet{CMSY15}).
\begin{theorem}\label{thm:mainBipartite}
There exists a polynomial time $A = 5 + 2\log_e 1/\alpha$ approximation
algorithm for Correlation Clustering with Asymmetric Classification Errors on complete bipartite graphs.
\end{theorem}
Our next result shows that this approximation ratio is likely optimal, up to constant factors, for LP-based algorithms.
We show this by exhibiting an instance of Correlation Clustering with
Asymmetric Classification Errors such that integrality gap for the natural LP
for Correlation Clustering on this instance is $\Omega(\log
\nicefrac{1}{\alpha})$.
\begin{theorem}\label{thm:intGap}
The natural Linear Programming relaxation for Correlation Clustering has an
integrality gap of $\Omega(\log \nicefrac{1}{\alpha})$ for instances of
Correlation Clustering with Asymmetric Classification Errors.
\end{theorem}
Moreover, we can show that if there is an $o(\log(1/\alpha))$-approximation algorithm whose running time is polynomial in both $n$ and $1/\alpha$, then there is an $o(\log n)$-approximation algorithm for the general weighted case
\footnote{The reduction to the general case works as follows. Consider an instance of Correlation Clustering with arbitrary weights. \emph{Guess} the heaviest edge $e$ that is in disagreement with the optimal clustering. Let $\mathbf{w}_e$ be its weight, and set
$\mathbf{w} = n^2 \mathbf{w}_e$, and $\alpha=1/n^4$. Then, assign new weights to all pairs of vertices in the graph. Keep the weights of all edges with weight in the range $[\alpha \mathbf{w}, \mathbf{w}]$. Set the weights of all edges with weight greater than $\mathbf{w}$ to $\mathbf{w}$ and the weights of all edges with weight less than $\alpha \mathbf{w}$ (including missing edges) to $\alpha \mathbf{w}$.}
(and also for the MultiCut problem). However, we do not know if there is an $o(\log(1/\alpha))$-approximation algorithm for the problem whose running time is polynomial in $n$ and \emph{exponential} in $1/\alpha$. The existence of such an algorithm does not imply that there is an $o(\log n)$-approximation algorithm for the general weighted case (as far as we know).
We show a similar integrality gap result for Correlation Clustering with Asymmetric Classification Errors on complete bipartite graphs.
\begin{theorem}\label{thm:intGapBipartite}
The natural Linear Programming relaxation for Correlation Clustering has an
integrality gap of $\Omega(\log \nicefrac{1}{\alpha})$ for instances of
Correlation Clustering with Asymmetric Classification Errors on complete bipartite graphs.
\end{theorem}
Throughout the paper, we denote the set of positive edges by $E^+$ and the
set of negative edges by $E^-$. We denote an instance of the Correlation
Clustering problem by $G=(V, E^+,E^-)$. We denote the weight of edge $e$ by
$\mathbf{w}_e$.
\subsection{Linear Programming Relaxation}\label{sec:LP}
In this section, we describe a standard linear programming (LP) relaxation for Correlation
Clustering which was introduced by~\citet*{CGW03}. We first give an integer programming formulation
of the Correlation Clustering problem. For every pair of vertices $u$ and $v$, the integer program (IP)
has a variable $x_{uv}\in \{0,1\}$, which indicates whether $u$ and $v$ belong to the same cluster:
\begin{itemize}
\item $x_{uv}=0$, if $u$ and $v$ belong to the same cluster; and
\item $x_{uv}=1$, otherwise.
\end{itemize}
We require that $x_{uv}=x_{vu}$, $x_{uu}=0$
and all $x_{uv}$ satisfy the triangle inequality. That is, $x_{uv} + x_{vw}\geq x_{uw}$.
Every feasible IP solution $x$ defines a partitioning $\mathcal{S}=(S_1,\dots,S_T)$ in which
two vertices $u$ and $v$ belong to the same cluster if and only if $x_{uv} = 0$.
A positive edge $uv$ is in disagreement with this partitioning if and only if $x_{uv} = 1$;
a negative edge $uv$ is in disagreement with this partitioning if and only if $x_{uv} = 0$.
Thus, the cost of the partitioning is given by the following linear function:
$$\sum_{uv\in E^+} \mathbf{w}_{uv} x_{uv} + \sum_{uv\in E^-} \mathbf{w}_{uv} (1 - x_{uv}).$$
We now replace all integrality constraints $x_{uv}\in \{0,1\}$ in the integer program
with linear constraints $x_{uv}\in [0,1]$. The obtained linear program is given in Figure~\ref{fig:LP}.
In the paper, we refer to each variable $x_{uv}$ as the length of the edge $uv$.
\begin{figure}
\hrule height 0.8pt\rule{0pt}{1pt} %\hrule height 0.4pt\rule{0pt}{1pt}
$$\min \sum_{uv\in E^+} \mathbf{w}_{uv} x_{uv} + \sum_{uv\in E^-} \mathbf{w}_{uv} (1 - x_{uv}).$$
\noindent\textbf{subject to}
\begin{align*}
x_{uw}&\leq x_{uv}+x_{vw}&\text{for all } u,v,w\in V\\
x_{uv}&=x_{vu}&\text{for all } u,v\in V\\
x_{uu}&=0&\text{for all } u\in V\\
x_{uv}&\in [0,1]&\text{for all } u,v\in V
\end{align*}
\rule{0pt}{1pt}\hrule height 0.4pt\rule{0pt}{1pt} %\hrule height 0.8pt\rule{0pt}{12pt}
\caption{LP relaxation}\label{fig:LP}
\end{figure}
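To make the relaxation concrete, here is a small illustrative sketch (not from the paper). It evaluates the LP objective on the smallest inconsistent instance (positive edges $(0,1)$ and $(1,2)$, a negative edge $(0,2)$, all of weight $1$) at an integral and a fractional feasible point, and brute-forces the IP optimum; for this triangle the triangle inequality forces the LP value to equal the IP optimum of $1$.

```python
from itertools import product

POS = [(0, 1), (1, 2)]   # positive edges, weight 1
NEG = [(0, 2)]           # negative edge, weight 1
V = range(3)

def lp_cost(x):
    # sum_{uv in E+} w_uv x_uv + sum_{uv in E-} w_uv (1 - x_uv)
    return sum(x[e] for e in POS) + sum(1 - x[e] for e in NEG)

def feasible(x):
    # triangle inequality over all triples (x is symmetric, x_uu = 0)
    d = lambda u, v: 0 if u == v else x[(min(u, v), max(u, v))]
    return all(d(a, c) <= d(a, b) + d(b, c)
               for a in V for b in V for c in V)

integral = {(0, 1): 0, (0, 2): 0, (1, 2): 0}           # one big cluster
fractional = {(0, 1): 0.5, (0, 2): 1.0, (1, 2): 0.5}   # half-integral point
assert feasible(integral) and feasible(fractional)

def ip_cost(labels):   # disagreements of a clustering (label assignment)
    return (sum(1 for u, v in POS if labels[u] != labels[v])
            + sum(1 for u, v in NEG if labels[u] == labels[v]))

opt = min(ip_cost(l) for l in product(range(3), repeat=3))
print(lp_cost(integral), lp_cost(fractional), opt)  # -> 1 1.0 1
```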
\subsection{Ground Truth Model} \label{sec:LML}
In this section, we formalize the connection between asymmetric classification errors and asymmetric edge weights. For simplicity, we assume that each positive edge has a weight of $\mathbf{w}^+$ and each negative edge has a weight of $\mathbf{w}^-$.
Consider
a probabilistic model in which edge labels are assigned by a noisy classifier. Let
$\mathcal{C}^*=(C^*_1,\dots C^*_T)$ be the ground truth clustering of the vertex set $V$.
The classifier labels each edge within a cluster as a $\text{``$+$''}$ edge
with probability $p^+$ and as a $\text{``$-$''}$ edge with probability $1-p^+$;
it labels each edge with endpoints in distinct clusters as a $\text{``$-$''}$ edge
with probability $q^-$ and as a $\text{``$+$''}$ edge with probability $1-q^-$. Thus,
$(1-p^+)$ and $(1-q^-)$ are the classification error probabilities.
We assume that all classification errors are independent.
We note that
similar models have been previously studied by \cite{BBC04, EW2009, MW10,
ACX13, MMV15-CC} and others. However, the standard assumption in such models
was that the error probabilities, $(1-p^+)$ and $(1-q^-)$, are less than a
half; that is, $p^+ > \nicefrac{1}{2}$ and $q^->\nicefrac{1}{2}$. Here, we
investigate two cases (i) when $p^+ < \nicefrac{1}{2} < q^-$ and (ii) when
$q^- < \nicefrac{1}{2} < p^+$. We assume that $p^{+}+q^{-}>1$, which means
that the classifier is more likely to connect similar objects with a
$\text{``$+$''}$ than dissimilar objects or, equivalently, that the classifier is
more likely to connect dissimilar objects with a $\text{``$-$''}$ than similar
objects. For instance, consider a classifier that looks for evidence that the
objects are similar: if it finds some evidence, it adds a positive edge;
otherwise, it adds a negative edge (as described in our examples
\emph{dblp-2011} and \emph{ENWiki-2013} in the Introduction). Say, the
classifier detects a similarity between two objects in the same ground truth
cluster with a probability of only $30\%$ and incorrectly detects similarity
between two objects in different ground truth clusters with a probability of
$10\%$. Then, it will add a \emph{negative} edge between two similar objects
with probability $70\%$! While this scenario is not captured by the standard
assumption, it is captured by case (i) (here, $p^+ = 0.3 < \nicefrac{1}{2} <
q^- = 0.9$ and $p^+ + q^- > 1$).
Consider a clustering $\mathcal{C}$ of the vertices.
Denote the sets of positive edges and negative edges with
both endpoints in the same cluster by $\text{In}^+(\mathcal{C})$ and $\text{In}^-(\mathcal{C})$, respectively, and
the sets of positive edges and negative edges with endpoints in different clusters by
$\text{Out}^+(\mathcal{C})$ and $\text{Out}^-(\mathcal{C})$, respectively.
Then, the log-likelihood function of the clustering $\mathcal{C}$ is,
\begin{align*}
\ell(G; \mathcal{C}) &=
\log\Big(
{\prod_{(u,v)\in \text{\text{In}}^+(\mathcal{C})}}p^+ \times
\smashoperator{\prod_{(u,v)\in \text{In}^-(\mathcal{C})}} (1-p^+)
\times\smashoperator{\prod_{(u,v)\in \text{Out}^+(\mathcal{C})}}(1-q^-) \times
{\prod_{(u,v)\in \text{Out}^-(\mathcal{C})}}q^-
\Big)\\
&= \log\Big((p^+)^{|\text{In}^+(\mathcal{C})|}(1-p^+)^{|\text{In}^-(\mathcal{C})|}
\cdot (1-q^-)^{|\text{Out}^+(\mathcal{C})|} (q^-)^{|\text{Out}^-(\mathcal{C})|}\Big)\\
&= |\text{In}^+(\mathcal{C})| \log p^+ + |\text{In}^-(\mathcal{C})| \log (1-p^+)
+ |\text{Out}^+(\mathcal{C})| \log (1-q^-) + |\text{Out}^-(\mathcal{C})|\log q^-\\
&= \underbrace{\Big(|E^+| \log p^+ + |E^-| \log q^-\Big)}_{\text{constant expression}}
-
\underbrace{\Big(|\text{Out}^+(\mathcal{C})| \log\frac{p^+}{1-q^-} + |\text{In}^-(\mathcal{C})|
\log \frac{q^-}{1-p^+}\Big)}_{\text{MinDisagree objective}}.
\end{align*}
Let $\mathbf{w}^+ = \log\frac{p^+}{1-q^-}$ and
$\mathbf{w}^- = \log \frac{q^-}{1-p^+}$. Then, the negative term --
$\Big(|\text{Out}^+(\mathcal{C})| \log\frac{p^+}{1-q^-} + |\text{In}^-(\mathcal{C})| \log \frac{q^-}{1-p^+}\Big)$
-- equals $\mathbf{w}^+ |\text{Out}^+(\mathcal{C})| + \mathbf{w}^- |\text{In}^-(\mathcal{C})|$.
Note that
$|\text{Out}^+(\mathcal{C})|$ is the number of positive edges disagreeing with $\mathcal{C}$ and
$|\text{In}^-(\mathcal{C})|$ is the number of negative edges disagreeing with $\mathcal{C}$.
Now observe that the first term in the expression above -- $\Big(|E^+| \log p^+ + |E^-| \log q^-\Big)$ --
does not depend on $\mathcal{C}$. It only depends on the instance $G=(V, E^+, E^-)$.
Thus,
maximizing the log-likelihood function over $\mathcal{C}$ is equivalent to minimizing
the following objective
$$\mathbf{w}^+ ({\text{\# disagreeing \text{``$+$''} edges}}) + \mathbf{w}^- ({\text{\# disagreeing \text{``$-$''} edges}}).$$
Note that we have $\mathbf{w}^+ > \mathbf{w}^-$ when $p^+ < \nicefrac{1}{2} < q^-$ (case
(i) above); in this case, a $\text{``$+$''}$ edge gives a stronger signal than a
$\text{``$-$''}$ edge. Similarly, we have $\mathbf{w}^- > \mathbf{w}^+$ when $q^- <
\nicefrac{1}{2} < p^+$ (case (ii) above); in this case, a $\text{``$-$''}$ edge
gives a stronger signal than a $\text{``$+$''}$ edge.
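This equivalence can be checked numerically. The sketch below (with illustrative values $p^+ = 0.3$, $q^- = 0.9$ and a small hand-picked labeled graph) verifies that $\ell(G;\mathcal{C})$ equals the constant term minus the weighted disagreement cost for every clustering $\mathcal{C}$, and that $\mathbf{w}^+ > \mathbf{w}^-$ in case (i).

```python
import math
from itertools import product

p_plus, q_minus = 0.3, 0.9                    # case (i): p+ < 1/2 < q-
w_plus = math.log(p_plus / (1 - q_minus))     # log 3  ~ 1.099
w_minus = math.log(q_minus / (1 - p_plus))    # log(9/7) ~ 0.251

E_pos = [(0, 1), (2, 3)]                      # illustrative labeled graph
E_neg = [(0, 2), (0, 3), (1, 2), (1, 3)]
n = 4
const = len(E_pos) * math.log(p_plus) + len(E_neg) * math.log(q_minus)

def log_likelihood(labels):
    ll = 0.0
    for u, v in E_pos:   # "+" inside a cluster: p+, "+" across: 1 - q-
        ll += math.log(p_plus if labels[u] == labels[v] else 1 - q_minus)
    for u, v in E_neg:   # "-" across clusters: q-, "-" inside: 1 - p+
        ll += math.log(q_minus if labels[u] != labels[v] else 1 - p_plus)
    return ll

def cost(labels):        # MinDisagree objective with weights w+, w-
    return (w_plus * sum(labels[u] != labels[v] for u, v in E_pos)
            + w_minus * sum(labels[u] == labels[v] for u, v in E_neg))

# the identity holds for every clustering, so argmax ll == argmin cost
gap = max(abs(log_likelihood(l) - (const - cost(l)))
          for l in product(range(n), repeat=n))
print(f"w+ = {w_plus:.3f}, w- = {w_minus:.3f}, max identity error = {gap:.1e}")
```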
\section{Better approximation for values of \texorpdfstring{$\alpha$}{alpha} appearing in practice}\label{sec:optimal}
We note that the choice of function $f(x)$ in Theorem~\ref{thm:main} is somewhat suboptimal.
The best function $f_{opt}(x)$ for our analysis of Algorithm~\ref{alg:ApprAlg} can be computed using linear programming (with high precision). Using this function $f_{opt}$,
we can achieve an approximation factor $A_{opt}$ better than the approximation factor
$A_{thm} = 3 + 2\log_e 1/\alpha$ guaranteed by Theorem~\ref{thm:main} (for $\alpha \neq 1$).\footnote{It is also
possible to slightly modify Algorithm~\ref{alg:ApprAlg} so that it gets approximation $A_{opt}$
without explicitly computing $f$. We omit the details here.}
While asymptotically $A_{thm}/A_{opt} \to 1$ as $\alpha \to 0$,
$A_{opt}$ is noticeably better than $A_{thm}$ for many values of $\alpha$ that are likely
to appear in practice (say, for $\alpha \in (10^{-8}, 0.1)$).
We list approximation factors $A_{thm}$ and $A_{opt}$ for several values of $\alpha$ in
Table~\ref{fig:table-A}; we also plot the dependence of $A_{thm}$ and
$A_{opt}$ on $\alpha$ in Figure~\ref{fig:plot-A}.
\begin{table}[t]
\caption{Approximation factors $A_{thm}$ and $A_{opt}$ for different $\alpha$-s.}
\label{fig:table-A}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{rrrr}
\toprule
$\log_e \nicefrac{1}{\alpha}$ & $\nicefrac{1}{\alpha}$ & $A_{thm}$ & $A_{opt}$ \\
\midrule
0 & 1 & 3 & 3\\
1.61 & 5 & 6.22 & 4.32 \\
2.30 & 10 & 7.61 & 4.63\\
3.91 & 50 & 10.82 & 6.07\\
4.61 & 100 & 12.21 & 6.78\\
6.21 & 500 & 15.43 & 8.69\\
6.91 &1000 & 16.82 & 9.62\\
8.52 & 5\,000 & 20.03 & 11.9\\
10 & 22\,026.5 & 23 & 14.2\\
15 & $3.3 \times 10^6$ & 33 & 22.6\\
20 & $4.9\times 10^8$ & 43 & 31.3\\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
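The $A_{thm}$ column of Table~\ref{fig:table-A} can be reproduced directly from the formula $A_{thm} = 3 + 2\log_e \nicefrac{1}{\alpha}$ (checking only the rows reported to two decimal places):

```python
import math

# (1/alpha, reported A_thm) pairs from the two-decimal rows of Table 1
rows = {5: 6.22, 10: 7.61, 50: 10.82, 100: 12.21,
        500: 15.43, 1000: 16.82, 5000: 20.03}
for inv_alpha, a_thm in rows.items():
    assert round(3 + 2 * math.log(inv_alpha), 2) == a_thm
print("A_thm column verified")
```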
\begin{figure}[t]
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel=$\log_e \nicefrac{1}{\alpha}$,
ylabel=$A$,
grid=both,
minor grid style={gray!25},
major grid style={gray!25},
width=0.85\linewidth,
legend pos=north west,
xtick distance=10,
minor tick num=1
]
\addplot[domain=0:20,line width=0.75pt,solid,color=blue] {3 + 2*x};
\addlegendentry{\tiny $A_{thm}$};
\addplot[line width=0.75pt,dotted,color=red,smooth] %
table[x=logalpha,y=A,col sep=comma]{data/optimal-A-plot-sparse.csv};
\addlegendentry{\tiny $A_{opt}$};
\end{axis}
\end{tikzpicture}
\caption{Plots of approximation factors $A_{thm}$ and $A_{opt}$.}
\label{fig:plot-A}
\end{figure}
\section{Proof of Theorem~\ref{thm:L4}}
For the sake of completeness we include the proof of Theorem~\ref{thm:L4} (see \cite{ACN08} and \cite{CMSY15}).
\begin{proof}[Proof of Theorem~\ref{thm:L4}] Our first task is to express the cost of violations made by Algorithm~\ref{alg:ApprAlg} and the LP weight in terms of $ALG^\sigma(\cdot)$ and $LP^\sigma(\cdot)$, respectively. In order to do this, we consider the cost of violations made by the algorithm at each step.
Consider step $t$ of the algorithm. Let $V_t$ denote the set of active (yet unclustered) vertices at the start of step $t$. Let $w\in V_t$ denote the pivot chosen at step $t$.
The algorithm chooses a set $S_t \subseteq V_t$ as a cluster and removes it from the graph. Notice that
for each $u \in S_t$, the constraint imposed by each edge $(u,v) \in E^+ \cup E^-$ is satisfied or violated right after step $t$. Specifically, if $(u,v)$ is a positive edge, then the constraint $(u,v)$ is violated if exactly one of the vertices $u,v$ is in $S_t$. If $(u,v)$ is a negative edge, then the constraint is violated if both $u,v$ are in $S_t$. Denote the weight of violated constraints at step $t$ by $ALG_t$. Thus,
\begin{align*}
ALG_t=&\sum\limits_{\substack{(u,v) \in E^+\\u,v \in V_t}} \mathbf{w}_{uv}\cdot\mathds{1} \left(u \in S_t, v \not\in S_t \mbox{ or }u \not\in S_t, v \in S_t\right)+\sum\limits_{\substack{(u,v) \in E^-\\u,v \in V_t}} \mathbf{w}_{uv}\cdot\mathds{1} \left(u \in S_t, v\in S_t\right).
\end{align*}
Similarly, we can quantify the LP weight removed by the algorithm at step $t$, which we denote by $LP_t$.
We count the contribution of all edges $(u,v) \in E^+ \cup E^-$ such that $u \in S_t$ or $v \in S_t$. Thus,
\begin{align*}
LP_t =& \sum_{\substack{(u,v) \in E^+\\u,v \in V_t}} \mathbf{w}_{uv} x_{uv} \cdot \mathds{1}(u \in S_t \text{ or } v \in S_t)+ \sum_{\substack{(u,v) \in E^-\\u,v \in V_t}} \mathbf{w}_{uv} (1 - x_{uv}) \cdot \mathds{1}(u \in S_t \text{ or } v \in S_t)
\end{align*}
Note that the cost of the solution produced by the algorithm is the sum of the violations across all steps, that is $ALG = \sum_t ALG_t$. Moreover, as every edge is removed exactly once from the graph, we can see that $LP = \sum_t LP_t$. We will charge the cost of the violations of the algorithm at step $t$, $ALG_t$, to the LP weight removed at step $t$, $LP_t$. Hence, if we show that $\mathbb{E}[ALG_t] \leq \rho \mathbb{E}[LP_t]$ for every step $t$, then we can conclude that the approximation factor of the algorithm is at most $\rho$, since
$$
\mathbb{E}[ALG] = \mathbb{E}\bigg[\sum_t ALG_t \bigg] \leq \rho \cdot \mathbb{E}\bigg[\sum_t LP_t \bigg] = \rho \cdot LP.
$$
We now express $ALG_t$ and $LP_t$ in terms of $cost(\cdot)$ and $lp(\cdot)$ which are defined in Section~\ref{triple_Analysis}. This will allow us to group together the terms for each triplet $u,v,w$ in the set of active vertices and thus write $ALG_t$ and $LP_t$ in terms of $ALG^\sigma(\cdot)$ and $LP^\sigma(\cdot)$, respectively.
For analysis, we assume that for each vertex $u \in V$, there is a positive (similar) self-loop, and thus we can define $cost(u,u\;|\; w)$ and $lp(u,u \;|\; w)$ formally as follows:
$cost(u,u \;|\; w) = \mathrm{Pr}(u \in S, u \not\in S \;|\; p = w)
= 0$ and $lp(u,u \;|\; w) = x_{uu} \cdot \mathrm{Pr}(u \in S\;|\; p = w) = 0$ (recall that $x_{uu}=0$).
\begin{align}\label{ALGcost}
\mathbb{E}[ALG_t \;|\; V_t&] = \sum_{\substack{(u,v) \in E\\u,v \in V_t}} \bigg( \frac{1}{|V_t|} \sum_{w \in V_t} \mathbf{w}_{uv} \cdot cost(u,v\;|\; w)\bigg)= \frac{1}{2|V_t|}\sum_{\substack{u,v,w \in V_t \\ u \neq v}} \mathbf{w}_{uv}\cdot cost(u,v\;|\; w)
\end{align}
\begin{align}\label{LPcost}
\mathbb{E}[LP_t \;|\; V_t] &= \sum_{\substack{(u,v) \in E\\u,v \in V_t}} \bigg( \frac{1}{|V_t|} \sum_{w \in V_t} \mathbf{w}_{uv} \cdot lp(u,v\;|\; w)\bigg)= \frac{1}{2|V_t|}\sum_{\substack{u,v,w \in V_t \\ u \neq v}} \mathbf{w}_{uv} \cdot lp(u,v\;|\; w)
\end{align}
We divide the expressions on the right hand side by $2$ because the terms $cost(u,v \;|\; w)$ and $lp(u,v \;|\; w)$ are counted twice. Now adding the contribution of terms $cost(u,u \;|\; w)$ and $lp(u,u \;|\; w)$ (both equal to $0$) to~(\ref{ALGcost}) and~(\ref{LPcost}), respectively and grouping the terms containing $u,v$ and $w$ together, we get,
\begin{align*}
\mathbb{E}[ALG_t \;|\; V_t] =& \frac{1}{6|V_t|}\sum_{u,v,w \in V_t}\bigg( \mathbf{w}_{uv}\cdot cost(u,v\;|\; w)+ \mathbf{w}_{uw}\cdot cost(u,w\;|\; v) + \mathbf{w}_{wv}\cdot cost(w,v\;|\; u)\bigg)\\
=&\frac{1}{6|V_t|}\sum_{u,v,w \in V_t} ALG^\sigma(x,y,z)
\end{align*}
and
\begin{align*}
\mathbb{E}[LP_t \;|\; V_t] =& \frac{1}{6|V_t|}\sum_{u,v,w \in V_t}\bigg(\mathbf{w}_{uv}\cdot lp(u,v\;|\; w)+ \mathbf{w}_{uw}\cdot lp(u,w\;|\; v) + \mathbf{w}_{wv}\cdot lp(w,v\;|\; u)\bigg)\\
=&\frac{1}{6|V_t|}\sum_{u,v,w \in V_t}LP^\sigma(x,y,z)
\end{align*}
Thus, if $ALG^\sigma(x,y,z) \leq \rho LP^\sigma(x,y,z)$ for all signatures and edge lengths $x,y,z$ satisfying the triangle inequality, then $\mathbb{E}[ALG_t \;|\; V_t] \leq \rho \cdot \mathbb{E}[LP_t \;|\; V_t]$, and, hence, $\mathbb{E}[ALG] \leq \rho \cdot \mathbb{E}[LP]$ which finishes the proof.
\end{proof}
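For intuition about the pivoting scheme analyzed above, the sketch below simulates the classic unweighted pivot algorithm of \citet{ACN08} (a simplification: the pivot is clustered with all of its active positive neighbors, without the LP-based rounding function $f$ of Algorithm~\ref{alg:ApprAlg}), and compares its empirical average cost against the brute-forced optimum. In expectation this algorithm is a $3$-approximation on unweighted complete graphs.

```python
import random
from itertools import product

n = 6
inst_rng = random.Random(42)
pairs = [(u, v) for u in range(n) for v in range(u + 1, n)]
pos = {e for e in pairs if inst_rng.random() < 0.5}   # random edge signs

def cost(labels):
    # weight-1 disagreements: split "+" edges and joined "-" edges
    return sum(((u, v) in pos) == (labels[u] != labels[v])
               for u, v in pairs)

def pivot_once(rng):
    active, labels, c = set(range(n)), {}, 0
    while active:
        p = rng.choice(sorted(active))                # uniform random pivot
        cluster = {p} | {u for u in active
                         if u != p and (min(u, p), max(u, p)) in pos}
        for u in cluster:
            labels[u] = c
        active -= cluster
        c += 1
    return labels

opt = min(cost(l) for l in product(range(n), repeat=n))  # brute-force OPT
rng = random.Random(0)
avg = sum(cost(pivot_once(rng)) for _ in range(400)) / 400
print(f"avg pivot cost {avg:.2f} vs 3*OPT = {3 * opt}")
```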
\section{\label{sec:1} Introduction}
Basis function learning for sound speed fields (SSFs) has played a vital role in a wide range of acoustic signal processing tasks, such as ocean acoustic tomography \cite{munk2009ocean, zhu2020}, underwater target localization/tracking \cite{li2011time, michalopoulou2021matched}, and underwater communications\cite{qu2014two}. {\color{black} Effective basis functions} can significantly reduce the number of unknown parameters to be estimated, thereby making the originally under-determined SSF inversion problem much more manageable. The underlying rationale is that sound speeds are correlated across spatial and temporal domains\cite{munk2009ocean, zhu2020, bianco2016compressive, huang2014method}, making it viable to accurately represent SSFs by a set of basis functions. {\color{black} These basis functions are expected to} have high expressive power, such that only a few of them suffice for an accurate SSF representation.
Recently, this goal {\color{black} was noticed to coincide} with the aim of unsupervised representation learning\cite{bianco2018machine}, and thus has triggered the surging development of machine learning \cite{theodoridis2020machine} for ocean {\color{black} acoustics} \cite{bianco2019machine, niu2021mode, ozanich2020feedforward}. Specifically, the classical empirical orthogonal functions (EOFs)\cite{leblanc1980underwater} can be interpreted as the basis vectors derived from principal component analysis (PCA)\cite{wold1987principal}, which suggests that the nonlinear variants of PCA, e.g., kernel PCA \cite{scholkopf1997kernel}, can potentially give nonlinear SSF representations. Furthermore, popular dictionary learning (DL) methods\cite{tovsic2011dictionary}, e.g., K-SVD\cite{aharon2006k}, which have shown remarkable performance in image and video de-noising, were introduced to learn a {\color{black} reduced-order} representation of SSFs \cite{bianco2017dictionary}, showing improved generalization performance in SSF reconstruction. The success of EOFs and the K-SVD-based approach lies in the fact that the basis functions are directly learnt from the training SSF data\cite{elad2010sparse}, {\color{black} rather than} generated {\it ad hoc} from a standard set of functions such as Fourier basis functions \cite{bracewell1986fourier} or wavelets \cite{antonini1992image}, {\color{black} which} exemplifies the effectiveness of the data-driven approach in representation learning.
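The PCA view of EOFs can be made concrete with a short numpy sketch (the synthetic profile model below is invented purely for illustration): stack mean-removed sound speed profiles as columns of a matrix and take its leading left singular vectors as the EOFs.

```python
import numpy as np

rng = np.random.default_rng(0)
depth_pts, n_profiles = 40, 200
z = np.linspace(0.0, 1.0, depth_pts)
mean_true = 1500 + 20 * np.exp(-3 * z)                        # background profile
modes = np.stack([np.sin(np.pi * z), np.sin(2 * np.pi * z)])  # two synthetic modes
coef = rng.normal(0.0, 3.0, (2, n_profiles))                  # random amplitudes
X = (mean_true[:, None] + modes.T @ coef
     + rng.normal(0.0, 0.1, (depth_pts, n_profiles)))         # (depth x samples)

mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
K = 2
eofs = U[:, :K]                       # leading K EOFs (orthonormal columns)
coeffs = eofs.T @ (X - mean)          # per-profile projection coefficients
X_hat = mean + eofs @ coeffs          # rank-K reconstruction
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X - mean)
print(f"relative reconstruction error with {K} EOFs: {err:.3f}")
```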
\begin{figure*}[t]
\includegraphics[width= 6in]{Figure1.pdf}
\caption{\label{fig:FIG1}{Illustration of three-dimensional (3D) sound speed field (SSF) data and its matrix unfolding operation.}}
\raggedright
\hrule
\end{figure*}
For 3D SSF data, as illustrated in Fig.~\ref{fig:FIG1}, {\color{black} sound speeds} are correlated across 3D coordinates, {\color{black} since the processes driving the ocean sound speed profiles are inherently continuous in space and time.} On the other hand, both EOFs and the K-SVD algorithm are designed for two-dimensional (2D) data, e.g., an SSF matrix, and thus do not take the multi-dimensional correlations among sound speeds into account. Therefore, to use EOFs or the K-SVD method, we should first unfold the 3D SSF data into an SSF matrix (see Fig.~\ref{fig:FIG1}). However, the matrix unfolding operation (also called matricization) {\color{black}breaks} the original 3D structure of the data and thus induces information loss\cite{sidiropoulos2017tensor, panagakis2021tensor}. This drawback has been theoretically proved in the multi-dimensional harmonic retrieval task \cite{roemer2014analytical}, and reported in various signal processing applications, including direction-of-arrival (DOA) estimation \cite{cheng2015subspace}, blind source separation \cite{cheng2020learning} and image completion\cite{zhao2015bayesian}. This difficulty leads to an immediate question: {\it how to avoid the information loss caused by matricization in a principled manner?}
This question invites the framework of {\it tensor decomposition} and the associated {\it multi-linear algebra} \cite{kolda2009tensor}, which are much richer than their matrix-based counterparts. Over the past two decades, tensor decomposition has become a primary tool for understanding and representing multi-dimensional data, and has achieved great success in various machine learning and signal processing applications\cite{sidiropoulos2017tensor, panagakis2021tensor}.
In this paper, we not only show the state-of-the-art (SOTA) performance of tensor-based basis function learning for 3D SSFs via extensive numerical results, but also theoretically prove that the classical basis functions (using EOFs and/or Fourier basis functions) \cite{leblanc1980underwater, cornuelle1989ocean, morawitz1996three} are special cases of the proposed tensor-based learning framework. The latter insight justifies the effectiveness of the tensor-based approach from another angle, and further paves the way for future investigation of better basis functions through the lens of a unified tensor perspective.
The remainder of this paper is organized as follows. In Section~\ref{sec:2}, we briefly review classical basis functions as well as the recent ones for SSF representation. In Section~\ref{sec:3},
representation learning using tensor decomposition is introduced. In Section~\ref{sec:4}, we propose a tensor-based basis function learning framework for 3D SSF, reveal the connections between the classical basis functions and their tensor-based counterparts, and introduce learning algorithms for a single 3D SSF and for multiple 3D SSFs. Extensive numerical results are reported in Section~\ref{sec:5}. Finally, conclusions and future research directions are discussed in Section~\ref{sec:6}.
{\it Notations:} Lower- and upper-case bold letters are used to denote vectors and matrices, respectively. Higher-order tensors are denoted by upper-case bold calligraphic letters. For a tensor $\bc{X}$, $\mb{X}_{(p)}$ stands for its mode-$p$ unfolding matrix. $\bc X \times_p \mb B$ denotes the $p$-mode product between tensor $\bc X$ and matrix $\mb B$. The Kronecker product is denoted by $\otimes$. The superscripts $^{\mathrm{T}}$ and $^{\text{H}}$ stand for transposition and Hermitian transposition, respectively. $^\dagger$ denotes the Moore-Penrose pseudo inverse. $\mathrm{diag}(\mb{x})$ denotes the diagonal matrix with $\mb{x}$ on its main diagonal. The identity matrix of order $N$ is denoted by $\mb{I}_N$. $\|\cdot\|_{\mathrm{F}}$ stands for the Frobenius norm. $\mathbb{R}$ and $\mathbb{C}$ denote the fields of real numbers and complex numbers, respectively.
\section{\label{sec:2} Basis Functions for Sound Speed Fields}
\subsection{\label{Sec II-A} 2D SSF: PCA and EOFs}
Consider a 2D SSF matrix $\mb Y = [\mb y_1, \cdots, \mb y_J] \in \mathbb R^{I \times J}$ {\color{black} containing} $J$ 1D temporal/spatial sound speed profiles (SSPs), where $I$ {\color{black} usually denotes} the number of discrete points in depth.
In order to extract the basis functions that capture the most variances of data, PCA\cite{wold1987principal} is performed on the SSF matrix $\mb Y$. Particularly, $\mb Y$ is first centered by subtracting a mean matrix $\mb M = [\mb m, \cdots, \mb m] \in \mathbb R^{I \times J}$ with $\mb m = \frac{1}{J} \sum_{j=1}^J \mb y_{j} \in \mathbb R^{I\times 1}$, giving a zero-mean SSF matrix $\mb X = [\mb x_1, \cdots, \mb x_J] = \mb Y - \mb M \in \mathbb R^{I \times J}$, in which each element is known as SSF perturbation \cite{munk2009ocean, zhu2020}. Then, the eigenvalue decomposition (EVD) of the correlation matrix $\mb X \mb X^\mathrm{T}$ finds the EOFs \cite{leblanc1980underwater} as follows:
\begin{align}
\mb X \mb X^\mathrm{T} = \mb E \mb \Lambda \mb E^\mathrm{T},
\end{align}
where $\mb E = [\mb e_1, \cdots, \mb e_I]$ contains the EOFs $\{\mb e_i\}_i$ (eigenvectors) and $\mb \Lambda = \text{diag}([\lambda_1, \cdots, \lambda_I])$ contains the eigenvalues, where $\lambda_i$ is associated with the $i$-th EOF $\mb e_i$, for $i=1,\cdots, I$. Without loss of generality, it is assumed that $\lambda_1 \geq \cdots \geq \lambda_I$.
Typically, we only retain $K$ $(K \leq I)$ leading-order EOFs to represent the SSF matrix for dimensionality reduction. Given the EOF matrix $\mb E_K = [\mb e_1, \cdots, \mb e_K] \in \mathbb R^{I \times K}$, the zero-mean SSF matrix can be approximately represented by \cite{leblanc1980underwater}
\begin{align}
\mb X \approx \mb E_K \mb W,
\end{align}
where $\mb W \in \mathbb R^{K \times J}$ is the representation coefficient matrix. Since the EOF matrix $\mb E_K$ is orthonormal (i.e., $\mb E_K^{\text{T}}\mb E_K = \mb I_K$), the least-squares (LS) estimates of the coefficient matrix can be efficiently computed by \cite{leblanc1980underwater}
\begin{align}
\hat{\mb W} = \mb E_K^{\text{T}} \mb X.
\end{align}
For an unseen zero-mean SSF sample $\mb X^* \in \mathbb R^{I \times J^\prime}$, given EOF matrix $\mb E_K$, {\color{black} the} coefficient matrix $\mb W^* \in \mathbb R^{K \times J^\prime}$ used for SSF representation is computed by \cite{leblanc1980underwater}
\begin{align}
\mb W^* = \mb E_K^{\text{T}} \mb X^*.
\end{align}
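For concreteness, the EOF pipeline above (centering, eigendecomposition, and least-squares coefficients) can be sketched in a few lines of NumPy. The data below are random placeholders standing in for an SSF matrix, and the sizes $I$, $J$, $K$ are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K = 30, 100, 5                 # depth points, profiles, retained EOFs (toy sizes)

Y = rng.standard_normal((I, J))      # stand-in for a 2D SSF matrix
m = Y.mean(axis=1, keepdims=True)    # mean profile m
X = Y - m                            # zero-mean SSF perturbations

# The EOFs are the eigenvectors of X X^T, i.e., the left singular vectors of X;
# computing them via the SVD of X avoids forming X X^T explicitly.
E, s, _ = np.linalg.svd(X, full_matrices=False)
E_K = E[:, :K]                       # K leading-order EOFs

W = E_K.T @ X                        # LS coefficients, W_hat = E_K^T X
X_hat = E_K @ W                      # rank-K approximation of X
```

Since $\mb E_K$ is orthonormal, the coefficient fit is a single matrix product, which is what makes the EOF representation computationally cheap at test time.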
\subsection{2D SSF: K-SVD and Over-complete Dictionary}
To seek a more effective reduced-order representation of SSF, dictionary learning methods \cite{tovsic2011dictionary, aharon2006k}, which were originally designed for image/video de-noising, have recently been introduced into ocean signal processing \cite{bianco2017dictionary}. The key idea is to jointly optimize an over-complete dictionary matrix $\mb Q \in \mathbb R^{I \times Z} ( Z \geq I)$ and the associated sparse coefficient matrix $\mb V$ such that the reconstruction error is minimized \cite{aharon2006k, bianco2017dictionary}:
\begin{align}
&\min_{\mb Q \in \mathbb R^{I \times Z}} \left \{ \min_{\mb V \in \mathbb R^{Z \times J}} ||\mb X - \mb Q \mb V ||_\mathrm{F}^2 \right\},\nonumber \\
& \text{s.t.}~~ ||\mb V_{:,j} ||_0 \leq T, ~~ j = 1,\cdots J,
\label{opt:dl}
\end{align}
where $T$ is a pre-defined upper bound value for the number of non-zero elements in each column $\mb V_{:,j}, \forall j$.
To solve the problem in Eq.~\eqref{opt:dl} in a computationally efficient manner, K-SVD \cite{aharon2006k} was proposed to alternately update the dictionary matrix $\mb Q$ (the dictionary update step) and the coefficient matrix $\mb V$ (the sparse coding step). More concretely, in the $t$-th iteration, given the dictionary matrix $\mb Q^{t-1}$ and denoting the $j$-th column of matrix $\mb V$ by $\mb v_j$, the sparse coding step consists of $J$ subproblems \cite{aharon2006k}:
\begin{align}
&\min_{\mb v_j} \| \mb x_j - \mb Q^{t-1} \mb v_j \|_2^2 \nonumber \\
& \mathrm{s.t.}~~ ||\mb v_j ||_0 \leq T,~~j = 1, \cdots, J,
\label{opt:sc}
\end{align}
each of which can be efficiently solved by ``off-the-shelf'' sparsity-aware optimization algorithms \cite{theodoridis2020machine}, including orthogonal matching pursuit (OMP) \cite{tropp2007signal}, approximate message passing (AMP) \cite{donoho2010message}, and so forth. Then, given the learnt coefficient matrix $\mb V = [\mb v_1, \cdots, \mb v_J]$, the K-SVD algorithm, which generalizes the K-means method for vector quantization (VQ) codebook design, updates the dictionary atoms to give an updated dictionary matrix $\mb Q^{t}$. The iterative K-SVD algorithm was shown to converge to a local minimum of the problem in Eq.~\eqref{opt:dl} \cite{aharon2006k}.
After the convergence of the K-SVD algorithm, the learnt basis functions for SSF representation are the columns of the dictionary matrix $\mb Q$, which demands a large memory to store $I\times Z$ dictionary entries. For an unseen zero-mean SSF sample $\mb X^* \in \mathbb R^{I \times J^\prime}$, the coefficient matrix $\mb V^* = [\mb v_1^*, \cdots, \mb v_{J^\prime}^*] \in \mathbb R^{Z \times J^\prime}$ has no closed-form solution. Instead, an iterative algorithm, e.g., OMP \cite{tropp2007signal}, must be invoked to solve a sparse coding problem (see the problem in Eq.~\eqref{opt:sc}) for each column $\mb v_{j^\prime}^*, \forall j^\prime$, which costs more computational resources than the EOF-based counterpart. Nevertheless, numerical results have demonstrated that the basis functions learnt by the K-SVD algorithm can improve the SSF reconstruction performance compared to EOFs \cite{bianco2017dictionary}.
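As a concrete illustration of the sparse coding step, the following NumPy sketch implements a minimal greedy OMP routine. It is an illustrative variant written for this discussion, not the exact implementation of \cite{tropp2007signal} or the code used in \cite{bianco2017dictionary}:

```python
import numpy as np

def omp(Q, x, T):
    """Greedy sparse coding: approximately solve
    min_v ||x - Q v||_2  s.t.  ||v||_0 <= T."""
    support = []
    coef = np.zeros(0)
    r = x.copy()                                  # residual
    for _ in range(T):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(Q.T @ r))))
        cols = Q[:, support]
        # re-fit all selected atoms jointly by least squares
        coef, *_ = np.linalg.lstsq(cols, x, rcond=None)
        r = x - cols @ coef                       # residual is orthogonal to span(cols)
    v = np.zeros(Q.shape[1])
    v[support] = coef
    return v
```

With an orthonormal dictionary, OMP recovers a $T$-sparse signal exactly; with a learnt over-complete $\mb Q$, it returns a $T$-sparse approximation, which is precisely the role it plays in the sparse coding step above.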
\subsection{3D SSF: 2D Fourier Basis Functions and 1D EOFs}
\label{sec:ii-c}
For a 3D SSF, as illustrated in Fig.~\ref{fig:FIG1}, one could first unfold it into a 2D SSF matrix and then apply the EOF-based \cite{leblanc1980underwater} or K-SVD-based \cite{aharon2006k, bianco2017dictionary} algorithm. The unfolding step, however, breaks the inherent 3D structure of the SSF data, thereby leading to performance degradation. A classical method for 3D SSF representation instead relies on 2D Fourier basis functions and 1D EOFs \cite{cornuelle1989ocean, morawitz1996three}. The key idea is to use EOFs to capture the variations of the SSF across depth and use 2D Fourier basis functions to describe the horizontal slices of the SSF. Specifically,
each de-meaned SSF element in $\{ c(x,y,z)\}_{x=1,y=1,z=1}^{M,N,I}$ is assumed to have the following expression \cite{cornuelle1989ocean, morawitz1996three}:
\begin{align}
&c (x,y,z) = \sum_{f_1=1}^{N_{F_1}}\sum_{f_2=1}^{N_{F_2}} \sum_{k=1}^{K_F} w_{f_1,f_2,k} \underbrace{[\mb E_{K_F}]_{z,k}}_{\text{1D EOF}} \nonumber\\
& \times \underbrace{ \exp \left(2 \pi j \left[ \frac{(x-1)(f_1 -1)}{L_x} \right] \right) \exp \left(2 \pi j \left[ \frac{(y-1)(f_2 -1)}{L_y} \right] \right)}_{\text{2D Fourier basis function}},
\label{fourier}
\end{align}
where $[\mb E_{K_F}]_{z,k}$ denotes the $(z,k)$-th element of EOF matrix $\mb E_{K_F}$, which has $K_F$ leading-order EOFs.
{\color{black}$w_{f_1,f_2,k}$ is the corresponding coefficient. $M$ and $N$ denote the two horizontal dimensions (i.e., length and width) of 3D SSF data.} $N_{F_1}$ and $N_{F_2}$ denote the number of Fourier basis functions for the two horizontal axes. $L_x$ and $L_y$ describe the periodicity of the associated Fourier basis function.
The EOF matrix $\mb E_{K_F}$ is obtained by firstly unfolding the 3D SSF along the vertical axis (as illustrated in Fig.~\ref{fig:FIG1}) and then {\color{black} performing} the EVD on the resulting SSF matrix $\mb X^{\text{u}} \in \mathbb R^{I \times MN}$ (as introduced in Section \ref{Sec II-A}). Note that the matrix $\mb X^{\text{u}}$ describes the depth-range characteristics of 3D SSF.
The 2D Fourier basis functions can be generated according to {\color{black}Eq.~\eqref{fourier}}. For the brevity of notation, we define the 2D Fourier matrix $\mb F \in \mathbb C^{MN \times N_{F_1} N_{F_2}}$ as follows:
\begin{align}
\mb F = \mb F_2 \otimes \mb F_1,
\end{align}
where the 1D Fourier matrices $\mb F_1$ and $\mb F_2$ are defined as follows:
\begin{align}
&\mb F_1 = \begin{bmatrix}
1 & 1 & \cdots & 1 \\
1 & \exp \left(\frac{2 \pi j }{L_x} \right) & \cdots & \exp \left(2 \pi j \left[ \frac{N_{F_1} -1}{L_x} \right] \right) \\
1& \exp \left( \frac{4 \pi j }{L_x} \right) & \cdots & \exp \left(2 \pi j \left[ \frac{2(N_{F_1} -1)}{L_x} \right] \right)\\
\vdots & \vdots & \vdots & \vdots \\
1 & \exp \left( \frac{2(M-1)\pi j}{L_x} \right) & \cdots & \exp \left(2 \pi j \left[ \frac{(M-1)(N_{F_1} -1)}{L_x} \right] \right)
\end{bmatrix} \nonumber \\
&~~~~~~~~~ \in \mathbb C^{M \times N_{F_1}},
\end{align}
\begin{align}
&\mb F_2 = \begin{bmatrix}
1 & 1 & \cdots & 1 \\
1 & \exp \left(\frac{2 \pi j }{L_y} \right) & \cdots & \exp \left(2 \pi j \left[ \frac{N_{F_2} -1}{L_y} \right] \right) \\
1& \exp \left( \frac{4 \pi j }{L_y} \right) & \cdots & \exp \left(2 \pi j \left[ \frac{2(N_{F_2} -1)}{L_y} \right] \right)\\
\vdots & \vdots & \vdots & \vdots \\
1 & \exp \left( \frac{2(N-1)\pi j}{L_y} \right) & \cdots & \exp \left(2 \pi j \left[ \frac{(N-1)(N_{F_2} -1)}{L_y} \right] \right)
\end{bmatrix} \nonumber \\
&~~~~~~~~~ \in \mathbb C^{N \times N_{F_2}}.
\end{align}
Each column in $\mb F$ represents a 2D Fourier basis function.
Given the 1D EOFs and 2D Fourier basis functions, following Eq.~\eqref{fourier}, any unseen 3D SSF data $\{ c^*(x,y,z)\}_{x=1,y=1,z=1}^{M,N,I}$ can be represented by the 3D coefficients $\{ w^*_{f_1,f_2,k} \}_{f_1=1,f_2=1,k=1}^{N_{F_1},N_{F_2},K_F}$. To efficiently compute these coefficients, the 3D SSF data and the coefficients are unfolded into the 2D matrices $\mb X^{*, \text{u}} \in \mathbb R^{I \times MN}$ and $\mb W^* \in \mathbb C^{K_F \times N_{F_1} N_{F_2}}$, respectively (see Fig.~\ref{fig:FIG1}). The coefficients can then be computed by
\begin{align}
\mb W^* = \mb E_{K_F}^\mathrm{T} \mb X^{*, \text{u}} (\mb F^\mathrm{T})^{\dagger} {\color{black}.}
\end{align}
Note that under this scheme, the number of coefficients required for SSF representation is $N_{F_1} N_{F_2} K_F $.
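The construction of the 2D Fourier matrix and the coefficient fit above can be sketched in NumPy as follows. All sizes, the periodicity parameters, and the data are illustrative placeholders (the EOF matrix here is a random orthonormal matrix, not one learnt from ocean data):

```python
import numpy as np

def fourier_matrix(M, NF, L):
    """1D Fourier matrix whose (m, f)-th entry is exp(2*pi*j*(m-1)*(f-1)/L)."""
    m = np.arange(M)[:, None]       # m - 1 = 0, ..., M-1
    f = np.arange(NF)[None, :]      # f - 1 = 0, ..., NF-1
    return np.exp(2j * np.pi * m * f / L)

M, N, I = 8, 6, 10                  # toy horizontal/vertical sizes
NF1, NF2, KF = 3, 2, 4
F1 = fourier_matrix(M, NF1, L=M)    # in C^{M x NF1}
F2 = fourier_matrix(N, NF2, L=N)    # in C^{N x NF2}
F = np.kron(F2, F1)                 # 2D Fourier matrix F = F2 (Kronecker) F1

# Coefficient fit for an (unfolded) zero-mean SSF sample X_u in R^{I x MN},
# given an orthonormal stand-in EOF matrix E_KF in R^{I x KF}:
rng = np.random.default_rng(0)
X_u = rng.standard_normal((I, M * N))
E_KF = np.linalg.qr(rng.standard_normal((I, KF)))[0]
W = E_KF.T @ X_u @ np.linalg.pinv(F.T)   # K_F x (NF1*NF2) coefficients
```

Note that the pseudo-inverse of $\mb F^\mathrm{T}$ can be precomputed once, so representing each new SSF sample costs only two matrix products.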
{\it \textbf{Remark 1:}} If the considered spatial area is large and the associated sound speeds vary significantly, larger values of $N_{F_1}$ and $N_{F_2}$ should be chosen; otherwise, smaller values suffice. If historical data are available, a trial-and-error method is viable for selecting these two values \cite{theodoridis2020machine}. In recent machine learning, Bayesian approaches have been leveraged to achieve automatic model order selection \cite{theodoridis2020machine, cheng2020learning, zhao2015bayesian,xule21}. Namely, hyper-parameters (e.g., $N_{F_1}$ and $N_{F_2}$) might be learnt directly from training data, which is an interesting future research direction.
\section{Representation Learning Via Tensors}
\label{sec:3}
Before moving to the exploration of more effective 3D SSF representations, we first review some preliminaries of tensors \cite{kolda2009tensor}, including terminologies, tensor operations, and tensor decomposition formats. Then, we interpret tensor decomposition in the context of representation learning \cite{panagakis2021tensor, sidiropoulos2017tensor}, showing its paramount role in modern data science and ocean signal processing.
\subsection{Scalar, Vector, Matrix, and Tensor}
\begin{figure}[h]
\includegraphics[width=1\reprintcolumnwidth]{Figure2.pdf}
\caption{\label{fig:FIG2}{ Illustration of a scalar, a vector, a matrix, and a tensor.}}
\end{figure}
In multilinear algebra, the term {\it order} measures the number of indices used to access each data element (in scalar form) \cite{kolda2009tensor}. Specifically, a vector $\mb a \in \mathbb C^{J_1}$ is a $1$-st order tensor since its element $\mb a_{j_1}$ can be accessed via only one index. A matrix $\mb A \in \mathbb C^{J_1 \times J_2}$ is a $2$-nd order tensor, because two indices are enough to traverse all of its elements $\mb A_{j_1,j_2}$. As a generalization, tensors are of order three or higher. A $P$-th order tensor $\bc A \in \mathbb C^{J_1 \times \cdots \times J_P}$ utilizes $P$ indices to address its elements $\bc A_{j_1, \cdots, j_P}$. For illustration, we depict a scalar, a vector, a matrix, and a tensor in Fig.~\ref{fig:FIG2}.
For a $P$-th order tensor $\bc A$, each index corresponds to a {\it mode} \cite{kolda2009tensor}, which generalizes the concepts of rows and columns of matrices to tensors. For example, for a third-order tensor $\bc A \in \mathbb C^{J_1 \times J_2 \times J_3}$, given indices $j_2$ and $j_3$, the vectors $\bc A_{:,j_2, j_3}$ are termed {\it mode-1 fibers}.
{\it \textbf{Remark 2:}} The 3D SSF data $\{c(x,y,z)\}_{x=1,y=1,z=1}^{M,N,I}$ can be naturally represented by a third-order tensor $\bc X \in \mathbb R^{M \times N \times I}$, with each element $\bc X_{x,y,z}$ being $c(x,y,z)$.
\subsection{Tensor Unfolding}
Tensor unfolding aims to re-organize the fibers in one mode into a matrix. For a $P$-th order tensor $\bc A \in \mathbb C^{J_1 \times \cdots \times J_P}$, since it has $P$ modes, there are $P$ types of unfolding, each termed as {\it mode-$p$ unfolding}. {\color{black} It is formally defined as follows \cite{kolda2009tensor} and illustrated} in Fig.~\ref{fig:FIG3}.
\begin{figure}[t]
\includegraphics[width=1\reprintcolumnwidth]{Figure3.pdf}
\caption{\label{fig:FIG3}{ Illustration of tensor unfolding.}}
\end{figure}
\begin{tcolorbox}
\textbf{Definition 1 (Mode-$p$ Unfolding)} Given a tensor $\bc A \in \mathbb C^{J_1 \times \cdots \times J_P}$, its mode-$p$ unfolding gives a matrix $\mb A_{(p)} \in \mathbb C^{J_p \times \prod_{k=1, k \neq p}^P J_k}$. Each tensor element $\bc A_{j_1,\cdots,j_P}$ is mapped to the matrix element $ \left [\mb A_{(p)}\right]_{j_p,q}$, where $q = 1 + \sum_{k=1, k\neq p}^P (j_k -1) I_k$ with $I_k = \prod_{m = 1, m\neq p }^{k-1} J_m$.
\end{tcolorbox}
Tensor unfolding is one of the most important operations in tensor-based machine learning and signal processing \cite{panagakis2021tensor, sidiropoulos2017tensor}, since it provides a ``matrix'' view of tensor data, such that fruitful results in linear algebra can be leveraged. Indeed, tensor-based algorithms are mostly developed upon the matrices provided by unfolding operations.
Then, the $p$-mode product between a tensor and a matrix is introduced as follows \cite{kolda2009tensor}.
\begin{tcolorbox}
\textbf{Definition 2 ($p$-mode Product)} The $p$-mode product between a tensor $\bc A \in \mathbb C^{J_1 \times \cdots \times J_P}$ and a matrix $\mb M \in \mathbb C^{R \times J_p}$ results in a tensor $(\bc A \times_{p} \mb M)$ $\in \mathbb C^{J_1 \times \cdots \times J_{p-1} \times R \times J_{p+1} \times \cdots \times J_P}$, with each element being
\begin{align}
&(\bc A \times_{p} \mb M )_{j_1,\cdots, j_{p-1},r,j_{p+1},\cdots, j_P} \nonumber \\
& = \sum_{j_p =1}^{J_p} m_{r,j_p} \bc A_{j_1,\cdots, j_P},
\end{align}
where $m_{r,j_p}$ denotes the $(r,j_p)$-th element of $\mb M$.
\end{tcolorbox}
{\it \textbf{Remark 3:}} The unfolding rule introduced in Fig.~\ref{fig:FIG1} is essentially the mode-$3$ unfolding of a 3D tensor. That is, the unfolding matrix $\mb X^{\text{u}}$ in Section~\ref{sec:ii-c} corresponds to $\mb X_{(3)}$ introduced in Definition 1.
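Definitions 1 and 2 can be realized compactly in NumPy, as sketched below. One caveat: the column ordering produced by a row-major reshape differs from the ordering in Definition 1, but the $p$-mode product computed through unfolding and folding is unaffected by that ordering:

```python
import numpy as np

def unfold(A, p):
    """Mode-p unfolding: mode-p fibers of A become the columns of A_(p)."""
    return np.moveaxis(A, p, 0).reshape(A.shape[p], -1)

def fold(M_, p, shape):
    """Inverse of unfold: rebuild a tensor of the given shape from A_(p)."""
    moved = [shape[p]] + [s for q, s in enumerate(shape) if q != p]
    return np.moveaxis(M_.reshape(moved), 0, p)

def mode_product(A, M_, p):
    """p-mode product A x_p M, computed as fold(M @ A_(p))."""
    shape = list(A.shape)
    shape[p] = M_.shape[0]
    return fold(M_ @ unfold(A, p), p, shape)
```

For a third-order tensor, `mode_product(A, M_, 1)` reproduces the element-wise definition $(\bc A \times_2 \mb M)_{i,r,k} = \sum_j m_{r,j} \bc A_{i,j,k}$.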
\subsection{Tensor Decomposition for Representation Learning}
\label{sec:iii-c}
To extract low-dimensional yet informative parameters (in terms of smaller tensors, matrices and vectors) from multi-dimensional data, tensor decomposition, which generalizes matrix decomposition to tackle higher-order tensors, has come up as the major tool in recent machine learning and signal processing studies \cite{panagakis2021tensor, sidiropoulos2017tensor}. {\color{black} The extracted parameters are expected to preserve the structures endowed by physical sciences and have clear interpretations.} To achieve this goal, various tensor decomposition formats \cite{kolda2009tensor} have been proposed, in which canonical polyadic decomposition (CPD) and Tucker decomposition are the most well-known and widely adopted.
In this paper, we focus on tensor Tucker decomposition, {\color{black} which} includes CPD as a special case. {\color{black} The definition of Tucker decomposition} is given as follows \cite{kolda2009tensor}.
\begin{tcolorbox}
\textbf{Definition 3 (Tucker Decomposition)}
For a $P$-th order tensor $\bc A \in \mathbb C^{J_1 \times \cdots \times J_P}$, tensor Tucker decomposition is defined as
\begin{align}
\bc A = \bc G \times_1 \mb U^{(1)} \times_2 \mb U^{(2)} \times_3 \cdots \times_P \mb U^{(P)},
\label{tuckerd}
\end{align}
where each factor matrix satisfies $\mb U^{(p)} \in \mathbb C^{J_p \times R_p}, \forall p = 1,2, \cdots, P$, and is usually orthonormal. The core tensor is $\bc G \in \mathbb C^{R_1 \times R_2 \times \cdots \times R_P}$. The tuple $(R_1, \cdots, R_P)$ is known as the multi-linear rank.
\end{tcolorbox}
Note that the definition above utilizes the $p$-mode product (see Definition 2). An illustration of tensor Tucker decomposition is provided in Fig.~\ref{fig:FIG4}. Usually, we have $R_p \ll J_p, \forall p$. Note that when the core tensor $\bc G$ is super-diagonal and $R_1 = \cdots = R_P$, Tucker decomposition reduces to CPD \cite{kolda2009tensor}.
More insights can be drawn after interpreting Tucker decomposition \eqref{tuckerd} in the context of representation learning. In particular, the factor matrices $\{\mb U^{(p)} \}_{p=1}^P$ can be treated as the dictionary matrices, thereby providing a common set of basis functions for data representation. On the other hand, the core tensor $\bc G$, as seen in \eqref{tuckerd}, acts as the weighting coefficients that encode the information of
tensor data. In other words, relying on Tucker decomposition, the essence of tensor-based representation learning is to acquire the factor matrices $\{\mb U^{(p)} \}_{p=1}^P$ (i.e., basis functions) from training data $\bc A$, based on which any unseen/test data $\bc A^*$ can be represented by its core tensor $\bc G^*$. We make this interpretation concrete using the 3D SSF data in the next section.
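A quick numerical check of this interpretation is given below: building a tensor from a random core and random orthonormal factor matrices (a hypothetical Tucker model, not SSF data), the Frobenius norm of the data equals that of its core, so the core indeed carries all of the encoded ``energy'':

```python
import numpy as np

def mode_product(A, M_, p):
    """p-mode product A x_p M via mode-p unfolding."""
    shape = list(A.shape)
    shape[p] = M_.shape[0]
    Ap = np.moveaxis(A, p, 0).reshape(A.shape[p], -1)
    moved = [shape[p]] + [s for q, s in enumerate(shape) if q != p]
    return np.moveaxis((M_ @ Ap).reshape(moved), 0, p)

rng = np.random.default_rng(1)
R, J = (2, 3, 2), (5, 6, 4)            # multi-linear rank and data sizes (toy)
G = rng.standard_normal(R)             # core tensor (weighting coefficients)
# orthonormal factor matrices via QR of random matrices
U = [np.linalg.qr(rng.standard_normal((j, r)))[0] for j, r in zip(J, R)]

A = G
for p in range(3):                     # A = G x_1 U1 x_2 U2 x_3 U3
    A = mode_product(A, U[p], p)
```

The norm equality holds because each orthonormal $p$-mode product preserves the Frobenius norm, which is also why the orthonormality constraints adopted later simplify the coefficient computation.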
\begin{figure}
\includegraphics[width=1\reprintcolumnwidth]{Figure4.pdf}
\caption{\label{fig:FIG4}{ Illustration of tensor Tucker decomposition.}}
\end{figure}
\section{Tensor-based Basis Function Learning}
\label{sec:4}
In this section, 3D SSF representation {\color{black} is re-visited} under the lens of tensor decomposition. In contrast to Section~\ref{sec:ii-c} that designs the basis functions in an empirical manner, we view the 3D SSF data as a third-order tensor $\bc X \in \mathbb R^{M \times N \times I}$(see Remark 2), and propose to learn the basis functions via a data-driven approach. Then, theoretical insights are given that interpret the classical basis functions (using EOFs and/or Fourier basis functions)\cite{leblanc1980underwater, cornuelle1989ocean, morawitz1996three} as the special cases of the proposed tensor-based learning framework. {\color{black} Finally, if multiple 3D SSFs $\{\bc X_t \in \mathbb R^{M \times N \times I}\}_{t=1}^T$ (e.g., from different seasons) are available as training data, we extend the proposed tensor-based basis function learning to jointly process these 3D SSFs.}
\subsection{Tensor-based Basis Function Learning Framework}
\label{sec:HOOI}
{\color{black} In this subsection, a tensor-based basis function learning framework relying on Tucker decomposition is introduced, under which the higher-order orthogonal iteration (HOOI) algorithm is presented to learn the basis functions from one 3D SSF $\bc X \in \mathbb R^{M \times N \times I}$. As introduced in Section~\ref{sec:iii-c}, basis functions are provided by the three factor matrices in Eq.~\eqref{tuckerd}. In the context of representing 3D SSF $\bc X$, they are denoted by $ \mb B^{(1)} \in \mathbb R^{M \times L_1} $, $ \mb B^{(2)} \in \mathbb R^{N \times L_2}$ and $ \mb B^{(3)} \in \mathbb R^{I \times L_3}$, respectively. The core tensor, which contains the coefficients for SSF representation, is denoted by $\bc S \in \mathbb R^{L_1 \times L_2 \times L_3}$. Consequently, we propose the tensor-based basis function learning framework as follows:}
\begin{align}
&\min_{\bc S, \mb B^{(1)}, \mb B^{(2)},\mb B^{(3)}} \left \| \bc X - \bc S \times_1 \mb B^{(1)} \times_2 \mb B^{(2)} \times_3 \mb B^{(3)} \right\|_\mathrm{F}^2, \nonumber\\
& \text{s.t.}~~ \bc S \in \mathbb R^{L_1 \times L_2 \times L_3}, \nonumber \\
&~~~~~ f(\bc S) \geq \mb 0, ~ \Bar{f} (\bc S) = \mb 0, \nonumber \\
& ~~~~~ \mb B^{(1)} \in \mathbb R^{M \times L_1}, ~ \mb B^{(2)} \in \mathbb R^{N \times L_2}, ~ \mb B^{(3)} \in \mathbb R^{I \times L_3}, \nonumber \\
&~~~~~ g_p(\mb B^{(p)}) \geq \mb 0, ~ \Bar{g}_p(\mb B^{(p)}) = \mb 0, ~ p = 1,2,3,
\label{eq14}
\end{align}
where $f(\cdot)$ and $g_p(\cdot)$ denote the inequality constraints of the argument; and $\Bar{f}(\cdot)$ and $\Bar{g}_p(\cdot)$ represent the equality constraints of the argument. We can devise these constraints by incorporating the prior knowledge of SSF, or for the saving of computational resources. {\color{black} For example, practitioners can devise these constraint functions to embed the structures (e.g., non-negativeness, orthogonality, smoothness) into the basis function learning \cite{sidiropoulos2017tensor, panagakis2021tensor, cheng2020learning}. }
\begin{table}[!t]
\begin{tcolorbox}
{\textbf{\color{black} Algorithm 1: (The HOOI Algorithm)}}
\\
\centering
\begin{ruledtabular}
\begin{tabular}{c}
\leftline{\textbf {Input:} $\bc X \in \mathbb R^{M \times N \times I}$, multi-linear rank $(L_1, L_2, L_3)$.}\\
\leftline{\textbf{Initialize:} $\mb B^{(1),0}, \mb B^{(2),0}, \mb B^{(3),0}$.} \\
\leftline{\textbf{For} $t = 1,2,3,\cdots$} \\
\leftline{ ~~~~$\mb C_{(1)}^{t} = \mb X_{(1)} ( \mb B^{(3),t-1} \otimes \mb B^{(2),t-1})$,} \\
\leftline{ ~~~~$\mb B^{(1),t} \leftarrow$ $L_1$ leading left singular vectors of $\mb C_{(1)}^{t}$,} \\
\leftline{ ~~~~$\mb C_{(2)}^{t} = \mb X_{(2)} ( \mb B^{(3),t-1} \otimes \mb B^{(1),t})$,} \\
\leftline{ ~~~~$\mb B^{(2),t} \leftarrow$ $L_2$ leading left singular vectors of $\mb C_{(2)}^{t}$,} \\
\leftline{ ~~~~$\mb C_{(3)}^{t} = \mb X_{(3)} ( \mb B^{(2),t} \otimes \mb B^{(1),t})$,} \\
\leftline{ ~~~~$\mb B^{(3),t} \leftarrow$ $L_3$ leading left singular vectors of $\mb C_{(3)}^{t}$,} \\
\leftline{\textbf{Until Convergence}} \\
\leftline{ $\bc S^{t} = \bc X \times_1 [\mb B^{(1),t}]^\mathrm{T} \times_2 [\mb B^{(2),t}]^\mathrm{T} \times_3 [\mb B^{(3),t}]^\mathrm{T}$.} \\
\leftline{\textbf{Return} $\bc S^{t}, \mb B^{(1),t}, \mb B^{(2),t}, \mb B^{(3),t}.$
}
\end{tabular}
\end{ruledtabular}
\end{tcolorbox}
\label{tab1}
\end{table}
To reduce the number of matrix inversions, which are computationally demanding, orthonormality constraints are usually imposed on the factor matrices. More specifically, $\bar{g}_p(\mb B^{(p)}) = \mb 0$ is set to be $\left[ \mb B^{(p)} \right]^\mathrm{T} \mb B^{(p)} - \mb I = \mb 0$, i.e., $\left[ \mb B^{(p)} \right]^\mathrm{T} \mb B^{(p)} = \mb I$, where $\mb I$ is the identity matrix with matching dimensions, and no other constraints are imposed. The problem in Eq.~\eqref{eq14} then reduces to the following problem:
\begin{align}
&\min_{\bc S, \mb B^{(1)}, \mb B^{(2)},\mb B^{(3)}} \left \| \bc X - \bc S \times_1 \mb B^{(1)} \times_2 \mb B^{(2)} \times_3 \mb B^{(3)} \right \|_{\mathrm{F}}^2, \nonumber\\
& \text{s.t.} ~~ \bc S \in \mathbb R^{L_1 \times L_2 \times L_3}, \nonumber \\
& ~~~~~ \mb B^{(1)} \in \mathbb R^{M \times L_1}, ~[\mb B^{(1)}]^\mathrm{T} \mb B^{(1)} = \mb I_{L_1} \nonumber \\
& ~~~~~ \mb B^{(2)} \in \mathbb R^{N \times L_2}, ~[\mb B^{(2)}]^\mathrm{T} \mb B^{(2)} = \mb I_{L_2} \nonumber \\
& ~~~~~ \mb B^{(3)} \in \mathbb R^{I \times L_3}, ~[\mb B^{(3)}]^\mathrm{T} \mb B^{(3)} = \mb I_{L_3}.
\label{eq15}
\end{align}
If the factor matrices $\{\mb B^{(p)}\}_{p=1}^3$ are given, the optimal core tensor $\bc S$ is \cite{kolda2009tensor}
\begin{align}
\bc S = \bc X \times_1 [\mb B^{(1)}]^\mathrm{T} \times_2 [\mb B^{(2)}]^\mathrm{T} \times_3 [\mb B^{(3)}]^\mathrm{T}.
\label{eq17}
\end{align}
After substituting Eq.~\eqref{eq17} into Eq.~\eqref{eq15}, expanding the Frobenius norm, and utilizing the orthonormality of the factor matrices, the problem in Eq.~\eqref{eq15} is equivalent to the following problem:
\begin{align}
& \max_{\{\mb B^{(p)}\}_{p=1}^3} \left \| \bc X \times_1 [\mb B^{(1)}]^\mathrm{T} \times_2 [\mb B^{(2)}]^\mathrm{T} \times_3 [\mb B^{(3)}]^\mathrm{T} \right \|_{\mathrm{F}}^2, \nonumber \\
&\text{s.t.} ~~ \mb B^{(1)} \in \mathbb R^{M \times L_1}, ~[\mb B^{(1)}]^\mathrm{T} \mb B^{(1)} = \mb I_{L_1} \nonumber \\
& ~~~~~~ \mb B^{(2)} \in \mathbb R^{N \times L_2}, ~[\mb B^{(2)}]^\mathrm{T} \mb B^{(2)} = \mb I_{L_2} \nonumber \\
& ~~~~~~ \mb B^{(3)} \in \mathbb R^{I \times L_3}, ~[\mb B^{(3)}]^\mathrm{T} \mb B^{(3)} = \mb I_{L_3}.
\label{eq18}
\end{align}
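For completeness, the equivalence rests on the following identity, which holds once Eq.~\eqref{eq17} is substituted and the orthonormality of the factor matrices is used (a standard expansion, stated here for the reader's convenience):
\begin{align}
&\left\| \bc X - \bc S \times_1 \mb B^{(1)} \times_2 \mb B^{(2)} \times_3 \mb B^{(3)} \right\|_\mathrm{F}^2 \nonumber \\
&= \left\| \bc X \right\|_\mathrm{F}^2 - \left\| \bc X \times_1 [\mb B^{(1)}]^\mathrm{T} \times_2 [\mb B^{(2)}]^\mathrm{T} \times_3 [\mb B^{(3)}]^\mathrm{T} \right\|_\mathrm{F}^2,
\end{align}
so minimizing the reconstruction error over $\{\mb B^{(p)}\}_{p=1}^3$ amounts to maximizing the second term on the right-hand side.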
Problem \eqref{eq18} can be solved via alternating least-squares (ALS) method. In the $t$-th iteration, after fixing factor matrices $\{ \mb B^{(p),t-1}\}_{p = 2}^3$, problem \eqref{eq18} reduces to
\begin{align}
& \max_{\mb B^{(1)} \in \mathbb R^{M \times L_1}} \left \| \left[\mb B^{(1)}\right]^\mathrm{T} \mb C_{(1)}^{t} \right \|_\mathrm{F}^2, \nonumber \\
& \text{s.t.} ~~ [\mb B^{(1)}]^\mathrm{T} \mb B^{(1)} = \mb I_{L_1},
\label{eq181}
\end{align}
where
\begin{align}
\mb C_{(1)}^{t} = \mb X_{(1)} ( \mb B^{(3),t-1} \otimes \mb B^{(2),t-1}).
\end{align}
Note that $\mb X_{(1)}$ is the mode-$1$ unfolding matrix of tensor data $\bc X$ (see Definition 1). {\color{black} The solution to the problem in Eq.~\eqref{eq181}} can be acquired via the singular value decomposition (SVD) of matrix $\mb C_{(1)}^{t}$, giving the following update step:
\begin{align}
\mb B^{(1),t} = \left[ \mb u^{t}_1, \mb u^{t}_2, \cdots, \mb u^{t}_{L_1}\right],
\end{align}
where $\{ \mb u^{t}_l \}_{l=1}^{L_1}$ are the $L_1$ leading left singular vectors of $\mb C_{(1)}^{t}$. Similar update steps can be derived for the other two factor matrices $\{ \mb B^{(p),t}\}_{p = 2}^3$. Using these results, the algorithm that solves the problem in Eq.~\eqref{eq15} is summarized in Algorithm 1, which is known as the {\it higher-order orthogonal iteration} (HOOI) algorithm \cite{kroonenberg1980principal}. The HOOI algorithm was proven to converge \cite{kroonenberg1980principal}.
Using the HOOI algorithm, basis functions (i.e., $\{\mb B^{(p)} \}_{p=1}^3$) can be learnt from training SSF data $\bc X$. For representing any unseen/test 3D SSF data $\bc X^*$, the core tensor $\bc S^*$, which has $L_1 L_2 L_3$ parameters, can be learnt via {\color{black} Eq.~\eqref{eq17}}.
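The iterations of Algorithm 1 translate directly into NumPy. The sketch below is a compact re-implementation for illustration, not the authors' code: it is initialized with the leading left singular vectors of each unfolding (i.e., an HOSVD initialization), and the mode-wise projections it uses are equivalent to the Kronecker-product formulation of Algorithm 1 up to a column permutation of each $\mb C_{(p)}^{t}$, which leaves the left singular vectors unchanged:

```python
import numpy as np

def unfold(A, p):
    """Mode-p unfolding of tensor A."""
    return np.moveaxis(A, p, 0).reshape(A.shape[p], -1)

def mode_product(A, M_, p):
    """p-mode product A x_p M via mode-p unfolding."""
    shape = list(A.shape)
    shape[p] = M_.shape[0]
    moved = [shape[p]] + [s for q, s in enumerate(shape) if q != p]
    return np.moveaxis((M_ @ unfold(A, p)).reshape(moved), 0, p)

def hooi(X, ranks, n_iter=20):
    """Higher-order orthogonal iteration for a 3rd-order tensor X.
    Returns the core tensor S and the orthonormal factor matrices B."""
    # HOSVD initialization: leading left singular vectors of each unfolding
    B = [np.linalg.svd(unfold(X, p), full_matrices=False)[0][:, :ranks[p]]
         for p in range(3)]
    for _ in range(n_iter):
        for p in range(3):
            # project X onto the current subspaces of the other two modes ...
            Y = X
            for q in range(3):
                if q != p:
                    Y = mode_product(Y, B[q].T, q)
            # ... and update B[p] from the leading left singular vectors
            B[p] = np.linalg.svd(unfold(Y, p), full_matrices=False)[0][:, :ranks[p]]
    # core tensor: S = X x_1 B1^T x_2 B2^T x_3 B3^T
    S = X
    for p in range(3):
        S = mode_product(S, B[p].T, p)
    return S, B
```

On a tensor of exact multi-linear rank $(L_1, L_2, L_3)$, this recovers the factor subspaces and the reconstruction $\bc S \times_1 \mb B^{(1)} \times_2 \mb B^{(2)} \times_3 \mb B^{(3)}$ is exact up to numerical precision; on real SSF data, it returns the best found rank-$(L_1, L_2, L_3)$ approximation.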
\subsection{Theoretical Insights}
\label{sec:iv-b}
A close connection exists between the proposed tensor-based basis function learning framework in Eq.~\eqref{eq14} and the classical basis functions using EOFs and Fourier basis functions \cite{leblanc1980underwater, cornuelle1989ocean, morawitz1996three} (as introduced in Section~\ref{Sec II-A} and Section~\ref{sec:ii-c}). The connection is revealed in the following two propositions; the proofs are given in the Appendices.
\begin{table}[!t]
\begin{ruledtabular}
\caption{\label{tab:prop}Differences among different basis functions under the unified tensor perspective.}
\begin{tabular}{|c|c|c|c|}
Basis functions &
\begin{tabular}[c]{@{}c@{}}Factor-1\\ $\mb B^{(1)}$ \end{tabular} &
\begin{tabular}[c]{@{}c@{}}Factor-2\\ $\mb B^{(2)}$ \end{tabular} &
\begin{tabular}[c]{@{}c@{}}Factor-3\\ $\mb B^{(3)}$ \end{tabular} \\ \hline
EOFs &
\begin{tabular}[c]{@{}c@{}}Identity \\ matrix\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Identity\\ matrix\end{tabular} &
\begin{tabular}[c]{@{}c@{}}{\bf Learnt} \\ {\bf from data}\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}2D Fourier \\ basis functions\\ + 1D EOFs \end{tabular} &
\begin{tabular}[c]{@{}c@{}}Fourier \\ matrix\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Fourier\\ matrix\end{tabular} &
\begin{tabular}[c]{@{}c@{}}{\bf Learnt}\\ {\bf from data}\end{tabular} \\ \hline
HOOI-based &
\begin{tabular}[c]{@{}c@{}}{\bf Learnt} \\ {\bf from data}\end{tabular} &
\begin{tabular}[c]{@{}c@{}}{\bf Learnt} \\ {\bf from data}\end{tabular} &
\begin{tabular}[c]{@{}c@{}}{\bf Learnt} \\ {\bf from data}\end{tabular} \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{tcolorbox}
\textbf{Proposition 1.} The classical basis functions for 2D SSF, expressed by the EOF matrix $\mb E_{K} \in \mathbb R^{I \times K}$, are the optimal solution of the following problem:
\begin{align}
&\min_{\bc S, \mb B^{(1)}, \mb B^{(2)},\mb B^{(3)}} \left \Vert \bc X - \bc S \times_1 \mb B^{(1)} \times_2 \mb B^{(2)} \times_3 \mb B^{(3)} \right \Vert_\mathrm{F}^2, \nonumber\\
& \text{s.t.} ~~ \bc S \in \mathbb R^{M \times N \times K}, \nonumber \\
& ~~~~~ \mb B^{(1)} = \mb I_{M} \in \mathbb R^{M \times M}, \nonumber \\
& ~~~~~ \mb B^{(2)} = \mb I_{N} \in \mathbb R^{N \times N}, \nonumber \\
& ~~~~~ \mb B^{(3)} \in \mathbb R^{I \times K}, ~[\mb B^{(3)}]^\mathrm{T} \mb B^{(3)} = \mb I_{K},
\label{eq221}
\end{align}
which is a special case of the proposed tensor-based basis function learning framework \eqref{eq14}. \\
\noindent {\it Proof: See Appendix~\ref{appendix-a}.}
\end{tcolorbox}
\begin{tcolorbox}
\textbf{Proposition 2.} The classical basis functions for 3D SSF, expressed by the EOF matrix $\mb E_{K_F} \in \mathbb C^{I \times K_F}$ and the two Fourier matrices $\mb F_1 \in \mathbb C^{M \times N_{F_1}}, \mb F_2 \in \mathbb C^{N \times N_{F_2}}$, are the optimal solution of the following problem:
\begin{align}
&\min_{\bc S, \mb B^{(1)}, \mb B^{(2)},\mb B^{(3)}} \left \Vert \bc X - \bc S \times_1 \mb B^{(1)} \times_2 \mb B^{(2)} \times_3 \mb B^{(3)} \right \Vert_\mathrm{F}^2, \nonumber\\
& \text{s.t.} ~~ \bc S \in \mathbb C^{N_{F_1} \times N_{F_2} \times K_F}, \nonumber \\
& ~~~~~ \mb B^{(1)} = \mb F_1 \in \mathbb C^{M \times N_{F_1}}, \nonumber \\
& ~~~~~ \mb B^{(2)} = \mb F_2 \in \mathbb C^{N \times N_{F_2}}, \nonumber \\
& ~~~~~ \mb B^{(3)} \in \mathbb R^{I \times K_F}, ~[\mb B^{(3)}]^\mathrm{T} \mb B^{(3)} = \mb I_{K_F},
\label{eq22}
\end{align}
which is a special case of the proposed tensor-based basis function learning framework \eqref{eq14}. \\
\noindent {\it Proof: See Appendix~\ref{appendix-b}.}
\end{tcolorbox}
Proposition 1 and Proposition 2 point out that the classical basis functions \cite{leblanc1980underwater, cornuelle1989ocean, morawitz1996three} and those learnt using the HOOI algorithm are all special cases of the proposed tensor-based basis function learning framework \eqref{eq14}. Through this unified perspective, the differences among these three types of basis functions become evident, as shown in Table~\ref{tab:prop}. In particular, in problem \eqref{eq221}, two of the factor matrices are restricted to be identity matrices, which are too rigid to allow effective SSF representation. In problem \eqref{eq22}, these two factor matrices are instead set to the Fourier basis matrices, and thus have a higher representation capability than identity matrices. Both the identity matrices and the Fourier basis matrices are manually designed. On the contrary, in problem \eqref{eq15}, all three factor matrices are learnt from the data. Therefore, problem \eqref{eq15} and the associated HOOI algorithm endow the basis functions (expressed by the learnt factor matrices) with a higher flexibility, making them a promising candidate for more effective 3D SSF representation, as corroborated in Section~\ref{sec:5}.
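Proposition 1 can also be checked numerically: with the first two factors fixed to identities, the optimal mode-3 factor is given by the leading left singular vectors of the mode-3 unfolding, i.e., the EOFs, and the resulting model error matches the truncated-SVD optimum. A minimal NumPy sketch (sizes and data are hypothetical stand-ins for an SSF):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, I, K = 4, 5, 30, 3                 # hypothetical grid and order
X = rng.standard_normal((M, N, I))       # stand-in for a 3D SSF

# Mode-3 unfolding: each column is one vertical sound speed profile
X3 = X.reshape(M * N, I).T               # shape (I, M*N)

# EOFs: the K leading left singular vectors of the unfolding
U, s, Vt = np.linalg.svd(X3, full_matrices=False)
E_K = U[:, :K]                           # I x K, orthonormal columns

# With B1 = I_M and B2 = I_N fixed, the optimal core is the projection
# S = X x_3 E_K^T, and the model error is the truncated-SVD error
X3_hat = E_K @ (E_K.T @ X3)
err_tucker = np.linalg.norm(X3 - X3_hat)
err_svd = np.sqrt(np.sum(s[K:] ** 2))    # Eckart-Young optimum
print(np.isclose(err_tucker, err_svd))   # True
```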
\subsection{Learning Basis Functions from Multiple 3D SSFs}
In Sections~\ref{sec:HOOI} and \ref{sec:iv-b}, tensor-based basis function learning using one 3D SSF was introduced. Owing to the relatively high correlations of 3D SSFs over several tens of days (e.g., one month)\cite{munk2009ocean, zhu2020}, the learnt basis functions are capable of reduced-order yet accurate 3D SSF representation over such a period. Therefore, using only one 3D SSF $\bc X \in \mathbb R^{M \times N \times I}$, the proposed approach is useful for underwater applications that require short-term SSF forecasting/inversion, e.g., geoacoustic inversion\cite{jiang2008short}, shallow-water sound speed profile inversion\cite{zhang2015inversion}, and internal wave reconstruction with acoustic propagation calculation\cite{casagrande2011novel}.
On the other hand, if multiple 3D SSFs $\{\bc X_t \in \mathbb R^{M \times N \times I}\}_{t=1}^T$ (e.g., from different seasons) are available, jointly learning the basis functions has the potential to realize long-term effective representation\cite{lu2004spatial, long2021variations}, since more SSF variations (e.g., from different seasons) are taken into account. To achieve this goal, we extend the tensor-based basis function learning problem in Eq.~\eqref{eq15} from one 3D SSF to multiple 3D SSFs:
\begin{align}
&\min_{\bc S_t, \mb B^{(1)}, \mb B^{(2)},\mb B^{(3)}} \sum_{t=1}^T \left \| \bc X_t - \bc S_t \times_1 \mb B^{(1)} \times_2 \mb B^{(2)} \times_3 \mb B^{(3)} \right \|_{\mathrm{F}}^2, \nonumber\\
& \text{s.t.} ~~ \bc S_t \in \mathbb R^{L_1 \times L_2 \times L_3}, \nonumber \\
& ~~~~~ \mb B^{(1)} \in \mathbb R^{M \times L_1}, ~[\mb B^{(1)}]^\mathrm{T} \mb B^{(1)} = \mb I_{L_1} \nonumber \\
& ~~~~~ \mb B^{(2)} \in \mathbb R^{N \times L_2}, ~[\mb B^{(2)}]^\mathrm{T} \mb B^{(2)} = \mb I_{L_2} \nonumber \\
& ~~~~~ \mb B^{(3)} \in \mathbb R^{I \times L_3}, ~[\mb B^{(3)}]^\mathrm{T} \mb B^{(3)} = \mb I_{L_3}.
\label{eq23}
\end{align}
Note that factor matrices $\{\mb B^{(p)} \}_{p=1}^3$ contain common basis functions for multiple 3D SSFs $\{ \bc X_t\}_{t=1}^T$, and the core tensor $\bc S_t$ is associated with the 3D SSF $\bc X_t$, for $t = 1, \cdots, T$.
Using tensor algebra\cite{kolda2009tensor}, it can be shown that the problem in Eq.~\eqref{eq23} is equivalent to
\begin{align}
&\min_{ \tilde{\bc S}, \mb B^{(1)}, \mb B^{(2)},\mb B^{(3)}} \left \| \tilde{\bc X} - \tilde{\bc S} \times_1 \mb B^{(1)} \times_2 \mb B^{(2)} \times_3 \mb B^{(3)} \times_4 \mb I_{T} \right \|_{\mathrm{F}}^2, \nonumber\\
& \text{s.t.} ~~ \tilde{\bc S} \in \mathbb R^{L_1 \times L_2 \times L_3 \times T}, \nonumber \\
& ~~~~~ \mb B^{(1)} \in \mathbb R^{M \times L_1}, ~[\mb B^{(1)}]^\mathrm{T} \mb B^{(1)} = \mb I_{L_1} \nonumber \\
& ~~~~~ \mb B^{(2)} \in \mathbb R^{N \times L_2}, ~[\mb B^{(2)}]^\mathrm{T} \mb B^{(2)} = \mb I_{L_2} \nonumber \\
& ~~~~~ \mb B^{(3)} \in \mathbb R^{I \times L_3}, ~[\mb B^{(3)}]^\mathrm{T} \mb B^{(3)} = \mb I_{L_3},
\label{eq24}
\end{align}
where $\tilde{\bc X} \in \mathbb R^{M\times N \times I \times T}$ and $\tilde{\bc S} \in \mathbb R^{L_1 \times L_2 \times L_3 \times T}$ are obtained by stacking $\{\bc X_t \in \mathbb R^{M \times N \times I}\}_{t=1}^T$ and $\{\bc S_t \in \mathbb R^{L_1\times L_2 \times L_3}\}_{t=1}^T$ along their fourth modes, respectively.
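The equivalence between the summed objective in Eq.~\eqref{eq23} and the stacked 4-way objective in Eq.~\eqref{eq24} can be verified numerically; below is a minimal NumPy sketch with hypothetical sizes and random orthonormal factors (not learnt from data):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, I, T = 4, 4, 6, 3                  # hypothetical sizes
Xs = [rng.standard_normal((M, N, I)) for _ in range(T)]

# Shared orthonormal factors of multi-linear rank (L1, L2, L3)
L1, L2, L3 = 2, 2, 3
B1 = np.linalg.qr(rng.standard_normal((M, L1)))[0]
B2 = np.linalg.qr(rng.standard_normal((N, L2)))[0]
B3 = np.linalg.qr(rng.standard_normal((I, L3)))[0]

# Per-tensor optimal cores S_t = X_t x_1 B1^T x_2 B2^T x_3 B3^T
Ss = [np.einsum('ijk,ia,jb,kc->abc', X, B1, B2, B3) for X in Xs]
sum_sq = sum(np.linalg.norm(X - np.einsum('abc,ia,jb,kc->ijk', S, B1, B2, B3)) ** 2
             for X, S in zip(Xs, Ss))

# Stacking along the fourth mode gives the identical Frobenius objective
X4 = np.stack(Xs, axis=3)
S4 = np.stack(Ss, axis=3)
X4_hat = np.einsum('abct,ia,jb,kc->ijkt', S4, B1, B2, B3)
print(np.isclose(sum_sq, np.linalg.norm(X4 - X4_hat) ** 2))  # True
```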
According to Eq.~\eqref{tuckerd} in Definition 1, the problem in Eq.~\eqref{eq24} is a variant of the Tucker decomposition problem with one factor matrix being the identity matrix $\mb I_T$. Therefore, a modified HOOI algorithm (labeled as M-HOOI)\cite{kolda2009tensor}, summarized in Algorithm 2, can be applied to solve the problem in Eq.~\eqref{eq24}. The derivation of the M-HOOI algorithm is similar to that presented in Section~\ref{sec:HOOI}.
Using M-HOOI algorithm, basis functions (i.e., $\{\mb B^{(p)} \}_{p=1}^3$) can be jointly learnt from multiple 3D SSFs $\{\bc X_t \}_{t=1}^T$. For representing an unseen 3D SSF $\bc X^*$, the coefficients in $\bc S^*$ can be acquired via Eq.~\eqref{eq17}.
\begin{table}[t]
\begin{tcolorbox}
{\textbf{\color{black} Algorithm 2: (The M-HOOI Algorithm)}}
\\
\centering
\begin{ruledtabular}
\begin{tabular}{c}
\leftline{\textbf {Input:} $\tilde {\bc X} \in \mathbb R^{M \times N \times I \times T}$, multi-linear rank $(L_1, L_2, L_3)$.}\\
\leftline{\textbf{Initialize:} $\mb B^{(1),0}, \mb B^{(2),0}, \mb B^{(3),0}$.} \\
\leftline{\textbf{For} $t = 1,2,3,\cdots$} \\
\leftline{ ~~~~$\mb C_{(1)}^{t} = \tilde{\mb X}_{(1)} ( \mb I_T \otimes \mb B^{(3),t-1} \otimes \mb B^{(2),t-1})$,} \\
\leftline{ ~~~~$\mb B^{(1),t} \leftarrow$ $L_1$ leading left singular vectors of $\mb C_{(1)}^{t}$,} \\
\leftline{ ~~~~$\mb C_{(2)}^{t} = \tilde{\mb X}_{(2)} ( \mb I_T \otimes \mb B^{(3),t-1} \otimes \mb B^{(1),t})$,} \\
\leftline{ ~~~~$\mb B^{(2),t} \leftarrow$ $L_2$ leading left singular vectors of $\mb C_{(2)}^{t}$,} \\
\leftline{ ~~~~$\mb C_{(3)}^{t} = \tilde{\mb X}_{(3)} (\mb I_T \otimes \mb B^{(2),t} \otimes \mb B^{(1),t})$,} \\
\leftline{ ~~~~$\mb B^{(3),t} \leftarrow$ $L_3$ leading left singular vectors of $\mb C_{(3)}^{t}$,} \\
\leftline{\textbf{Until Convergence}} \\
\leftline{ $\tilde{\bc S}^{t} = \tilde{\bc X} \times_1 [\mb B^{(1),t}]^\mathrm{T} \times_2 [\mb B^{(2),t}]^\mathrm{T} \times_3 [\mb B^{(3),t}]^\mathrm{T} \times_4 \mb I_T $.} \\
\leftline{\textbf{Return} $\tilde{\bc S}^{t}, \mb B^{(1),t}, \mb B^{(2),t}, \mb B^{(3),t}.$
}
\end{tabular}
\end{ruledtabular}
\end{tcolorbox}
\label{tab1}
\end{table}
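The M-HOOI iteration of Algorithm 2 can be sketched compactly in NumPy as follows; the variable names, the HOSVD-style initialization, and the fixed iteration count are our own choices for illustration, not prescribed by Algorithm 2:

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: the given mode becomes rows, the rest columns."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def m_hooi(X4, ranks, n_iter=10):
    """M-HOOI sketch for a stacked 4-way tensor X4 of shape (M, N, I, T).

    The fourth factor is fixed to the identity I_T; only the three spatial
    factors B1, B2, B3 are learnt. Initialization follows HOSVD.
    """
    Bs = [np.linalg.svd(unfold(X4, p), full_matrices=False)[0][:, :r]
          for p, r in enumerate(ranks)]
    for _ in range(n_iter):
        for p in range(3):
            Y = X4
            for q in range(3):
                if q != p:
                    # Contract mode q with B^(q)^T, then restore axis order
                    Y = np.moveaxis(np.tensordot(Y, Bs[q], axes=([q], [0])), -1, q)
            Bs[p] = np.linalg.svd(unfold(Y, p), full_matrices=False)[0][:, :ranks[p]]
    # Core tensor: project all three spatial modes, mode 4 untouched
    S = X4
    for q in range(3):
        S = np.moveaxis(np.tensordot(S, Bs[q], axes=([q], [0])), -1, q)
    return S, Bs

# Sanity check on an exactly low-multi-linear-rank stack of T tensors
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2, 3, 4))                      # cores, T = 4
A = [rng.standard_normal(s) for s in [(6, 2), (5, 2), (8, 3)]]
X4 = np.einsum('abct,ia,jb,kc->ijkt', G, *A)
S, Bs = m_hooi(X4, (2, 2, 3))
X4_hat = np.einsum('abct,ia,jb,kc->ijkt', S, *Bs)
print(np.linalg.norm(X4 - X4_hat) / np.linalg.norm(X4) < 1e-8)  # True
```

The final projection step corresponds to the closed-form core computation at the end of Algorithm 2, with the fourth mode left untouched (i.e., multiplied by $\mb I_T$).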
{\it \textbf{Remark 4:}} For the basis functions learnt via data-driven approaches, including EOF, K-SVD, and the tensor-based methods, the performance will degrade when the test data become less correlated with the training data. Consequently, updating the basis functions with new training data is required to maintain good performance. Fortunately, since the processes driving ocean SSFs are inherently continuous in space and time, the update (or re-training) of basis functions need not be very frequent in most cases. There is a trade-off between the sustainability of the learnt basis functions and the cost of training data over a span of time. If multiple 3D SSFs across a long span of time (e.g., different seasons) are used as the training data, the learnt basis functions are more likely to realize long-term effective SSF representation, as will be demonstrated in the next section.
{\it \textbf{Remark 5:}} The tensor-based basis function learning leverages low-rank tensor decomposition models to exploit multi-dimensional correlations inside 3D SSFs. It has advantages over classical matrix-based methods when the considered 3D SSFs can be more accurately represented by low-rank tensor models. Under this perspective, the requirements of 3D SSFs are introduced in Appendix~\ref{appendix-d}.
\section{Numerical results}
\label{sec:5}
In this section, numerical results are presented to showcase the excellent performance of the tensor-based basis function learning algorithms (i.e., HOOI algorithm and M-HOOI algorithm) for 3D SSF data representation.
\subsection{Learning Basis Functions from One 3D SSF}
\label{sec:v-a}
In this subsection, the performance of tensor-based basis functions learnt from one 3D SSF is evaluated. The training and test data, baseline algorithms and performance metrics adopted in this subsection are introduced as follows.
\textbf{3D SSF Data:} The 30-day 3D South China Sea (SCS) SSF data $\{\bc X_t \in \mathbb R^{20 \times 20 \times 300 }\}_{t=1}^{30}$, from Dec. 21, 2011 to Jan. 19, 2012, is analyzed in this paper. The data was derived from the 3D conductivity, temperature and depth (CTD) data across the area shown in Fig.~\ref{fig:FIGN2}, and was provided by the Institute of Oceanology, Chinese Academy of Sciences, using a data-assimilative hybrid coordinate ocean model (HYCOM). We consider the 3D spatial area $152 \text{km} \times 152 \text{km} \times 2990 \text{m}$; that is, the horizontal resolution is $8 \text{km}$ and the vertical resolution is $10 \text{m}$. For illustration, three horizontal slices of the $1$-st day SSF data $\bc X_1$, corresponding to depths of $40$m, $240$m, and $2490$m, are shown in Fig.~\ref{fig:FIGN2}. In this area, a mesoscale eddy can be observed, which plays an important role in changing the ocean dynamics of a semi-closed ocean system\cite{zhu2020}.
\textbf{Training Data and Test Data:} The $1$-st day 3D SSF data, $\bc X_1$, is used as the training data, from which the basis functions are learnt via the tensor-based HOOI algorithm and the benchmarking algorithms. A visualization of the learnt tensor-based basis functions is provided in Fig.~\ref{fig:FIGN1}, in which the first $5$ columns of the three factor matrices $\{\mb B^{(p)} \}_{p=1}^3$ are plotted. From the mode-3 basis functions (expressed by the columns of $\mb B^{(3)}$), it can be seen that the sound speeds vary much more significantly in the shallow ocean (depths smaller than $1000 \text{m}$), while changing only slightly in the deep ocean (depths larger than $2500 \text{m}$). The basis functions expressed by the columns of $\mb B^{(1)}$ and $\mb B^{(2)}$ characterize the sound speed variations over the horizontal domain, which are not provided by classical matrix-based methods (e.g., EOFs). The remaining 3D SSF data, $\{\bc X_t\}_{t=2}^{30}$, are used as the test data to assess the representation capability of different basis functions. The data partition scheme follows the convention in ocean signal processing \cite{munk2009ocean, zhu2020}: the $1$-st day SSF data is treated as the history record and thus serves as the training/reference data. Note that the 3D spatial SSFs of all 30 days are used to evaluate the performance of the algorithms, in order to see whether the basis functions learnt on a particular day are informative enough to represent the 3D SSFs of the following several tens of days (e.g., 29 days). In this regard, the time resolution for the data partition is 1 day. The underlying assumption is that the ocean sound speed variations can be well represented by the learnt basis functions over at least a 1-day period, which is corroborated in the numerical study of this section.
\begin{figure*}[!t]
\centering
\includegraphics[width=1.5 \reprintcolumnwidth]{Figure5.pdf}
\caption{Illustration of Training 3D SSF Data.}
\label{fig:FIGN2}
\hrule
\end{figure*}
\textbf{Baselines:} The benchmarking algorithms include the EOF-based method (labeled as EOF)\cite{leblanc1980underwater}, the K-SVD-based method (labeled as K-SVD)\cite{aharon2006k, bianco2017dictionary}, and the classical basis functions using 2D Fourier basis functions and 1D EOFs (labeled as 2D Fourier + 1D EOF)\cite{cornuelle1989ocean, morawitz1996three}. In this paper, the K-SVD algorithm was implemented by the KSVD-Box v13 (\url{http://www.cs.technion.ac.il/~ronrubin/software.html}), where the OMP algorithm implemented by OMP-Box v10 (\url{http://www.cs.technion.ac.il/~ronrubin/software.html}) was utilized for sparse coding.
\textbf{Performance Metrics:} The representation capabilities of different basis functions are assessed by the root mean square error (RMSE) of SSF reconstruction per horizontal slice, defined by
\begin{align}
\text{RMSE} = \frac{1}{I} \left \| \bc X - \hat{\bc X} \right \|_\mathrm{F},
\end{align}
where $\hat{\bc X}$ is the reconstructed 3D SSF data; $\bc X$ is the ground-truth 3D SSF data; and $I = 300$ is the number of horizontal slices. We also compare the running times in both the training and test processes. All the experiments were conducted in Matlab R2019b with a 2.2 GHz 6-Core Intel Core i7 CPU.
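For concreteness, the per-slice RMSE metric defined above is straightforward to compute; a minimal sketch with a hypothetical uniform-error example (the function name is our own):

```python
import numpy as np

def rmse(X, X_hat):
    """RMSE per horizontal slice as defined above: (1/I) * ||X - X_hat||_F,
    with I the number of horizontal slices (third tensor dimension)."""
    return np.linalg.norm(X - X_hat) / X.shape[2]

# Hypothetical check: a uniform 1 m/s error on a 20 x 20 x 300 SSF
X = np.zeros((20, 20, 300))
X_hat = np.ones((20, 20, 300))
print(round(rmse(X, X_hat), 3))  # 1.155
```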
\begin{figure*}[!t]
\baselineskip=12pt
\fig{Fig6a.pdf}{2\reprintcolumnwidth}{(a)} \label{fig:FIGN1a}
\fig{Fig6b.pdf}{2\reprintcolumnwidth}{(b)} \label{fig:FIGN1b}
\fig{Fig6c.pdf}{2\reprintcolumnwidth}{(c)} \label{fig:FIGN1c}
\caption{Illustration of tensor-based basis functions: (a) mode-1 basis functions, (b) mode-2 basis functions, and (c) mode-3 basis functions. }
\label{fig:FIGN1}
\hrule
\end{figure*}
\subsubsection{Reconstruction Error under a Similar Number of Representation Coefficients}
First, we assess the representation capability of different basis functions in terms of the reconstruction RMSEs under a similar number of representation coefficients. For the training data $\bc X_1 \in \mathbb R^{20 \times 20 \times 300}$, the tensor-based HOOI algorithm and the benchmarking algorithms (EOF, K-SVD, 2D Fourier + 1D EOF) were run to learn the corresponding basis functions, in which the EOF-based and K-SVD-based algorithms were performed on the unfolded 2D SSF matrix, as illustrated in Fig.~\ref{fig:FIG1}. The hyper-parameters of the different algorithms were set so that the resulting numbers of representation coefficients are comparable, as seen in Table~\ref{tab:table2}. Note that the considered 3D SSF data has $MN = 20\times 20 = 400$ vertical profiles. Therefore, the numbers of coefficients for EOF and K-SVD are multiples of $400$, and cannot be arbitrary. In Case I and Case II, two coefficient numbers (i.e., $800$ and $1200$) are considered for the EOF and K-SVD schemes. We then vary the hyper-parameters of the other two algorithms to make their coefficient numbers close to $800$ and $1200$. In addition, the reconstruction performance of the different algorithms under a wide range of coefficient numbers is shown in Fig.~\ref{fig:FIGN10} and Fig.~\ref{fig:FIG7}.
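The coefficient counts of Case I follow from simple products of the hyper-parameters; the per-method storage formulas below are our reading of the schemes (an assumption, stated in the comments):

```python
# Coefficient counts behind Case I, under our reading of each scheme's
# storage: HOOI keeps an L1 x L2 x L3 core; EOF keeps K coefficients per
# vertical profile (K*M*N in total); K-SVD keeps T nonzero entries per
# profile (T*M*N); 2D Fourier + 1D EOF keeps an N_F1 x N_F2 x K_F cube.
M = N = 20                           # horizontal grid of the SSF

hooi = 8 * 8 * 10                    # L1 * L2 * L3
eof = 2 * M * N                      # K = 2
ksvd = 2 * M * N                     # T = 2
fourier_eof = 8 * 8 * 10             # N_F1 * N_F2 * K_F
print(hooi, eof, ksvd, fourier_eof)  # 640 800 800 640
```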
\begin{table*}
\caption{\label{tab:table2} The hyper-parameters and the number of representation coefficients for different algorithms. In each case, the number of representation coefficients for different algorithms is comparable.}
\begin{ruledtabular}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
Cases & \multicolumn{6}{c|}{Case I} & \multicolumn{6}{c|}{Case II} \\ \hline
Algorithms &
HOOI &
\multicolumn{2}{c|}{EOF} &
\multicolumn{2}{c|}{K-SVD} &
\begin{tabular}[c]{@{}c@{}}2D Fourier \\ + 1D EOF\end{tabular} &
HOOI &
\multicolumn{2}{c|}{EOF} &
\multicolumn{2}{c|}{K-SVD} &
\begin{tabular}[c]{@{}c@{}}2D Fourier\\ + 1D EOF\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Hyper- \\ parameters \end{tabular} &
\begin{tabular}[c]{@{}c@{}}$L_1 =8$\\$L_2 =8$\\$L_3 = 10$\end{tabular} &
$K=2$ &
$K=3$ &
\begin{tabular}[c]{@{}c@{}}$Z=320$\\ $T =2$ \end{tabular} &
\begin{tabular}[c]{@{}c@{}}$Z=320$\\ $T =3$ \end{tabular} &
\begin{tabular}[c]{@{}c@{}}$N_{F_1} =8$\\ $N_{F_2} =8$\\ $K_F= 10$\end{tabular} &
\begin{tabular}[c]{@{}c@{}}$L_1 =8$\\ $L_2 =10$\\ $L_3 = 10$\end{tabular} &
$K = 2$ &
$K=3$ &
\begin{tabular}[c]{@{}c@{}}$Z=320$\\ $T =2$ \end{tabular} &
\begin{tabular}[c]{@{}c@{}}$Z=320$\\ $T =3$ \end{tabular} &
\begin{tabular}[c]{@{}c@{}}$N_{F_1} =8$\\ $N_{F_2} =10$\\ $K_F= 10$\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}The Number of \\ Representation\\ Coefficients\end{tabular} & 640 & 800 & 1200 & 800 &1200 & 640 & 800 & 800 &1200& 800 &1200 &800 \\
\end{tabular}
\end{ruledtabular}
\hrule
\end{table*}
\begin{figure*}[!t]
\baselineskip=12pt
\figline{
\fig{Fig7a-eps-converted-to.pdf}{\reprintcolumnwidth}{(a)} \label{fig:FIG5a}
\fig{Fig7b-eps-converted-to.pdf}{\reprintcolumnwidth}{(b)} \label{fig:FIG5b}
}
\caption{The RMSEs of different algorithms versus the training data and the test data under Case I (a) and Case II (b).}
\hrule
\end{figure*}
\begin{figure*}[t]
\includegraphics[width= 6in]{Figure8.pdf}
\caption{Visual effects of the 3D SSF reconstruction for horizontal slices at depths of $40$m, $240$m and $2490$m. The test data is the $30$-th day 3D SSF data, and the training data is the $1$-st day 3D SSF data. The hyper-parameters of the algorithms follow those of Case I in Table~\ref{tab:table2}. Particularly, the EOF-based method uses $K =3$, and the K-SVD-based method uses $T =3$. The tensor-based basis functions learnt via the HOOI algorithm give the best reconstruction performance.}
\label{fig:FIG6}
\raggedright
\hrule
\end{figure*}
\begin{figure}[!t]
\centering
\includegraphics[width= 1\reprintcolumnwidth]{Figure9-eps-converted-to.pdf}
\caption{The running time of different algorithms in basis function learning phase and SSF reconstruction phase.}
\label{fig:FIGN9}
\end{figure}
In Fig.~\ref{fig:FIG5a} and Fig.~\ref{fig:FIG5b}, we present the RMSEs on the training data and the test data under the two cases (see Table \ref{tab:table2}), respectively. Although the tensor-based basis functions (learnt via the HOOI algorithm) use the smallest number of representation coefficients, the associated test RMSEs are always smaller than those of the benchmarking algorithms. In the training phase, the basis functions from the K-SVD algorithm give the lowest RMSE, while the RMSE of the tensor-based basis functions is the second lowest. However, in the test phase, the RMSEs of the K-SVD-based basis functions quickly increase and are always larger than those of the HOOI-based basis functions. This observation indicates that the K-SVD method overfits the training data, while the HOOI algorithm exhibits much better generalization performance on unseen data. Finally, although the classical basis functions (2D Fourier + 1D EOF) use the same number of representation coefficients as the tensor-based counterpart, the resulting RMSEs are higher in both the training and test phases. The EOF-based method shows the worst performance in 3D SSF representation. Note that a large reduction in the RMSEs of the EOF-based method occurs on days 20-24, since the sound speed variations (in the vertical domain) of the $23$-rd day are more similar to those of the $1$-st day than those of other nearby days. These results show the effectiveness of the data-driven approach to representation learning, since the three factor matrices (that contain the basis functions) are all learnt from data when adopting the HOOI algorithm (see Table~\ref{tab:prop}).
In Fig.~\ref{fig:FIG6}, we present the reconstructed SSF horizontal slices at different depths for the $30$-th day test data, with the $1$-st day training data serving as the reference. First, although the test data is very different from the training data, all the basis functions, which are learnt from the training data, can represent the test data with different degrees of accuracy. Second, the tensor-based basis functions learnt via the HOOI algorithm give the best SSF reconstruction. Finally, the overfitting issue of the K-SVD method can also be observed.
\subsubsection{Running Time}
In Fig.~\ref{fig:FIGN9}, we present the running time of the different algorithms in the basis function learning phase (training phase) and the SSF reconstruction phase (test phase). The hyper-parameters of the algorithms follow those of Case I in Table~\ref{tab:table2}. Particularly, the EOF-based method uses $K = 3$, and the K-SVD-based method uses $T = 3$. Note that on the first day the training algorithms of the different methods are performed, which cost much more time than the test process. Thus, the running time of the test process (corresponding to days 2-30) is much less than that of the first day, which explains the decrease in running time. For days 2-30 (corresponding to the test process), the fluctuation of the running time is mostly on the order of $10^{-3}$ sec, which can be viewed as negligible systematic bias of the computer hardware.
The K-SVD-based method costs the most time in both phases, since both the basis function learning and the SSF reconstruction demand iterative algorithms. The classical basis functions (i.e., EOF and 2D Fourier + 1D EOF) admit closed-form expressions in both phases, thus costing much less time. On the other hand, the HOOI algorithm for basis function learning needs iterative updates, thus costing the second most time. However, the reconstruction using the tensor-based basis functions has a closed-form expression (see Eq.~\eqref{eq17}) and thus is very fast. In the test phase, note that the running time using the 2D Fourier + 1D EOF basis functions is slightly higher than that using the HOOI-based basis functions. The reason is that the Fourier basis functions introduce complex-number computations for reconstruction, while the HOOI-based reconstruction only involves real numbers.
\subsubsection{The Number of Coefficients Required for Accurate SSF Reconstruction}
In this subsection, we evaluate the representation capabilities of different basis functions in terms of their hyper-parameters (or, equivalently, the number of representation coefficients). The mapping between the values of the hyper-parameters and the number of representation coefficients is shown in Table~\ref{tab:table3} of Appendix~\ref{appendix-c}.
\begin{figure*}[t]
\baselineskip=12pt
\figline{
\fig{Figure10a-eps-converted-to.pdf}{\reprintcolumnwidth}{(a)} \label{fig:FIG6a}
\fig{Figure10b-eps-converted-to.pdf}{\reprintcolumnwidth}{(b)} \label{fig:FIG6b}
}
\caption{\label{fig:FIGN10}The average test RMSEs versus different number of representation coefficients: (a) varying the value of $L_3$ or $K_F$; (b) changing the values of $\{L_1, L_2 \}$ or $\{N_{F_1}, N_{F_2}\}$.}
\hrule
\end{figure*}
\begin{figure}[!t]
\includegraphics[width=\reprintcolumnwidth]{Figure11-eps-converted-to.pdf}
\caption{\label{fig:FIG7}{ The average test RMSEs versus different number of representation coefficients for EOFs and K-SVD-based basis functions. }}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=\reprintcolumnwidth]{Figure12-eps-converted-to.pdf}
\caption{\label{fig:FIG8}{The number of representation coefficients required for different basis functions that give the average test $\text{RMSE} < 0.25 $(m/s).}}
\end{figure}
In Fig.~\ref{fig:FIG6a}, by varying the value of $L_3$, which determines the vertical resolution of the 3D SSF representation, the average test RMSEs (over the $29$ test days) of the tensor-based basis functions (HOOI) are presented. Meanwhile, by changing the value of $K_F$, the average test RMSEs (over the $29$ test days) of the classical basis functions (2D Fourier + 1D EOF) are provided. On the other hand, changing the values of $\{L_1, L_2 \}$ or $\{N_{F_1}, N_{F_2}\}$ affects the horizontal resolution of the 3D SSF representation; the average test RMSEs of the two types of basis functions are shown in Fig.~\ref{fig:FIG6b}. The tensor-based basis functions give much better reconstruction accuracies than the classical basis functions, showing the superiority of tensor tools in 3D ocean signal processing. In addition, increasing the horizontal resolution yields a larger total reduction in RMSE, whereas a high vertical resolution achieves a lower overall RMSE value. For comparison, in Fig.~\ref{fig:FIG7}, we present the average test RMSEs of the EOF-based and K-SVD-based basis functions for different values of their hyper-parameters. Note that in Fig.~\ref{fig:FIGN10} and Fig.~\ref{fig:FIG7}, we plot the RMSEs versus the number of representation coefficients (see the mapping in Table~\ref{tab:table3} of Appendix~\ref{appendix-c}), to show the effectiveness of the different basis functions more straightforwardly.
In Fig.~\ref{fig:FIG8}, we show the number of representation coefficients required for the different basis functions to achieve an average test $\text{RMSE} < 0.25$ (m/s). The tensor-based method requires the smallest number of representation coefficients, showing its high expressive power in representing 3D SSFs. The other basis functions require at least twice as many coefficients as the tensor-based counterpart.
{\color{black}
\subsection{Learning Basis Functions From Multiple 3D SSFs Across Different Seasons}
In this subsection, we assess the performance of tensor-based basis functions learnt from multiple 3D SSFs across different seasons in one year.
\textbf{3D SSF Data:} The 3D South China Sea (SCS) SSFs $\{\bc X_t \in \mathbb R^{13 \times 13 \times 37 }\}$ of the year 2020 are analyzed in this subsection. The data was provided by the National Marine Data Center (\url{http://mds.nmdis.org.cn/}). The considered spatial area ($152 \text{km} \times 152 \text{km} \times 2 \text{km}$) covers the same longitudes and latitudes as in Section~\ref{sec:v-a}, but with lower horizontal and vertical resolutions.
\textbf{Training Data and Test Data:} The 3D SSFs used for training are from four months in the year 2020, namely, February, May, August, and November. Note that these four months correspond to four seasons in one year. In each month, the 3D SSFs of the first three days are selected for the training purpose. Consequently, twelve 3D SSFs across different seasons give the training data. To evaluate the representation performance across four seasons, one-week 3D SSFs in each of these four months are employed as the test data. As a result, there are twenty-eight unseen 3D SSFs used for testing. Note that the selected test 3D SSFs do not contain the training 3D SSFs.
\textbf{Baselines and Performance Metric:} Following those in Section~\ref{sec:v-a}.
\subsubsection{Reconstruction Error under a Similar Number of Representation Coefficients}
We compare the RMSEs of the different algorithms given a similar number of representation coefficients in Fig.~\ref{fig:FIGM1}. The hyper-parameters of these algorithms were set so that the numbers of representation coefficients are comparable, as shown in Table~\ref{tab:tablenew}. From Fig.~\ref{fig:FIGM1}, the RMSEs of the tensor-based basis functions (labeled as M-HOOI), whose coefficient number is the smallest, are lower than those of the benchmarking algorithms across different seasons in most cases. These results show that the tensor-based basis functions jointly learnt from multiple 3D SSFs are capable of accurate yet reduced-order representation of 3D SSFs over a long span of time.
\begin{table}[!t]
\caption{ \label{tab:tablenew}The hyper-parameters and the number of representation coefficients for different algorithms.}
\begin{ruledtabular}
\begin{tabular}{|c|c|c|c|c|}
Algorithms &
M-HOOI &
EOF &
K-SVD &
\begin{tabular}[c]{@{}c@{}}2D Fourier \\ + 1D EOF\end{tabular} \\ \hline
Hyper-parameters &
\begin{tabular}[c]{@{}c@{}}$L_1 = 6,$\\ $L_2 = 6,$\\ $L_3 = 8$\end{tabular} &
$K = 2$ &
\begin{tabular}[c]{@{}c@{}}$T = 2,$\\ $Z = 40$\end{tabular} &
\begin{tabular}[c]{@{}c@{}}$N_{F_1 }=6, $\\ $N_{F_2 }=6,$\\ $K_F = 8$\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}The Number of \\ Representation\\ Coefficients\end{tabular} &
$288$ &
$338$ &
$338$ &
$288$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width= \reprintcolumnwidth]{Figure13-eps-converted-to.pdf}
\caption{The RMSEs of different algorithms versus the test data across different months.}
\label{fig:FIGM1}
\end{figure}
\begin{figure*}[t]
\baselineskip=12pt
\figline{
\fig{Fig14a-eps-converted-to.pdf}{\reprintcolumnwidth}{(a)} \label{fig:FIGM2a}
\fig{Fig14b-eps-converted-to.pdf}{\reprintcolumnwidth}{(b)} \label{fig:FIGM2b}
}
\caption{\label{fig:FIGM2} The RMSEs of tensor-based basis function learning schemes using one 3D SSF and multiple 3D SSFs.}
\hrule
\end{figure*}
\subsubsection{Reconstruction Error using One 3D SSF or Multiple 3D SSFs}
In Fig.~\ref{fig:FIGM2}, we compare the performance of the basis functions learnt from one 3D SSF (labeled as HOOI + one 3D SSF (Month)) and those learnt from multiple 3D SSFs (labeled as M-HOOI + Twelve 3D SSFs Across Four Months). Note that the basis functions compared in this subsection have the same multi-linear rank, i.e., $L_1 = 6, L_2 =6 , L_3 = 8$. Particularly, in Fig.~\ref{fig:FIGM2a} and Fig.~\ref{fig:FIGM2b}, the $1$-st day 3D SSFs of May 2020 and of Aug. 2020 are used as the training data for the HOOI algorithm, respectively. In contrast, the M-HOOI algorithm learns the basis functions from twelve 3D SSFs across four months in 2020 (as introduced at the beginning of this subsection).
Fig.~\ref{fig:FIGM2} shows that the basis functions learnt from one 3D SSF of a particular month can well represent the unseen 3D SSFs in one week of that month. However, for the 3D SSFs of other months, their performance degrades. On the other hand, the basis functions jointly learnt from twelve 3D SSFs across four seasons give much lower RMSEs in most test cases. These results show that, using more training 3D SSFs across different seasons, the learnt tensor-based basis functions can realize long-term effective 3D SSF representation. Note that this advantage comes at the cost of more historical training data from different months/seasons in the observed sea area, which might not be available in some applications.}
\section{Conclusions and Future Directions}
\label{sec:6}
In this paper, by treating the 3D SSF data as a third-order tensor, a tensor-based basis function learning framework was introduced. Under this framework, the classical basis functions using EOFs and Fourier basis functions can be treated as special cases. Relying on the Tucker tensor decomposition format, the HOOI and M-HOOI algorithms were introduced to learn effective basis functions from one 3D SSF and from multiple 3D SSFs in a data-driven fashion. Numerical results using SCS 3D SSF data have showcased the excellent performance of the tensor-based basis functions in terms of both reconstruction accuracy and running time.
The HOOI and M-HOOI algorithms exemplify the use of tensor tools (e.g., the proposed tensor-based basis function learning framework) in multi-dimensional ocean signal processing. In future research, it is possible to obtain better basis functions by imposing informative equality/inequality constraints that incorporate more prior information about the 3D SSF, e.g., the fact that sound speeds vary much more significantly in the shallow ocean than in the deep ocean. {\color{black} The integration} of {\it physical science} and {\it data science} will bring us closer to the {\it Universal Representation (UR)} of ocean signals.
\section{Acknowledgement}
The authors would like to thank the Institute of Oceanology, Chinese Academy of Sciences, and National Marine Data Center for providing the Ocean 3D SSF data for analysis. This work was supported in part by the National Natural Science Foundation of China under Grant 62001309 and Grant 62071429, and in part by Shanghai Aerospace Science and Technology Innovation Foundation (Grant No. SAST2020-034).
\section{Using \protect REV\TeX\ }
The file \file{README} has retrieval and installation information.
User documentation is presented separately in \file{auguide.tex}.
The file \file{template.aps} is a boilerplate file.
\changes{4.0a}{1998/01/16}{Initial version}
\changes{4.0a}{1998/01/31}{Move after process options, so \cs{clearpage} not in scope of twocolumn}
\changes{4.0a}{1998/01/31}{Rearrange the ordering so numerical ones come first. AO: David, what does this mean?}
\changes{4.0a}{1998/01/31}{use font-dependent spacing}
\changes{4.0a}{1998/01/31}{4.0d had twoside option setting twoside switch to false}
\changes{4.0a}{1998/01/31}{Move after process options, so the following test works}
\changes{4.0a}{1998/01/31}{print homepage}
\changes{4.0a}{1998/01/31}{protect against hyperref revtex kludges which are not needed now}
\changes{4.0a}{1998/06/10}{multiple preprint commands}
\changes{4.0a}{1998/06/10}{comma not space between email and homepage}
\changes{4.0a}{1998/06/10}{single space footnotes}
\changes{4.0b}{1999/06/20}{First modifications by Arthur Ogawa (mailto:arthur\_ogawa at sbcglobal dot net)}
\changes{4.0b}{1999/06/20}{Added localization of \cs{figuresname}}
\changes{4.0b}{1999/06/20}{Added localization of \cs{tablesname}}
\changes{4.0b}{1999/06/20}{AO: all code for \protect\classoption{10pt} is in this module.}
\changes{4.0b}{1999/06/20}{AO: all code for \protect\classoption{11pt} is in this module.}
\changes{4.0b}{1999/06/20}{AO: all code for \protect\classoption{12pt} is in this module.}
\changes{4.0b}{1999/06/20}{AO: made aps.rtx part of revtex4.dtx}
\changes{4.0b}{1999/06/20}{AO: remove duplicates}
\changes{4.0b}{1999/06/20}{call \cs{print@floats}}
\changes{4.0b}{1999/06/20}{Defer assignment until \cs{AtBeginDocument} time.}
\changes{4.0b}{1999/06/20}{Defer decision until \cs{AtBeginDocument} time}
\changes{4.0b}{1999/06/20}{Define three separate environments, defer assignment to \cs{AtBeginDocument} time.}
\changes{4.0b}{1999/06/20}{Frank Mittelbach, has stated in \protect\classname{multicol}: ``The kernel command \cs{@footnotetext} should not be modified.'' Thus, I have removed David Carlisle's redefinition of that command. Note, however, that later versions of \protect\classname{multicol} do not require this workaround. Belt and suspenders.}%
\changes{4.0b}{1999/06/20}{Move this ``complex'' option to the front, where it can be overridden by ``simple'' options.}
\changes{4.0b}{1999/06/20}{New option}
\changes{4.0b}{1999/06/20}{One-line caption sets flush left.}
\changes{4.0b}{1999/06/20}{only execute if appropriate}
\changes{4.0b}{1999/06/20}{Processing delayed to \cs{AtBeginDocument} time}
\changes{4.0b}{1999/06/20}{Removed invocation of nonexistent class option \protect\classoption{groupauthors} and all other class options that should only be invoked by the document. (Otherwise precedence of class options does not work.)}
\changes{4.0b}{1999/06/20}{Restore all media size class option of \protect\file{classes.dtx}}
\changes{4.0b}{1999/06/20}{Stack \cs{preprint} args flush right at right margin.}
\changes{4.0c}{1999/11/13}{(AO, 115) If three or more preprints specified, set on single line, with commas.}
\changes{4.0c}{1999/11/13}{(AO, 129) section* within appendix was producing appendixname}
\changes{4.0c}{1999/11/13}{*-form mandates pagebreak}
\changes{4.0c}{1999/11/13}{also spelled ``acknowledgements''.}
\changes{4.0c}{1999/11/13}{Do not put by REVTeX in every page foot}
\changes{4.0c}{1999/11/13}{grid changes via ltxgrid procedures}
\changes{4.0c}{1999/11/13}{grid changes with ltxgrid}
\changes{4.0c}{1999/11/13}{Insert procedure \cs{checkindate}}
\changes{4.0c}{1999/11/13}{Lose compatibility mode.}
\changes{4.0c}{1999/11/13}{New ltxgrid-based code, other bug fixes}
\changes{4.0c}{1999/11/13}{New option ``checkin''}
\changes{4.0c}{1999/11/13}{Prevent an inner footnote from performing twice}
\changes{4.0d}{2000/04/10}{Also alter how lists get indented.}
\changes{4.0d}{2000/04/10}{eprint takes an optional argument, syntactical only in this case.}
\changes{4.0d}{2000/04/10}{New option}
\changes{4.0d}{2000/05/10}{More features and bug fixes: compatibility with longtable and array packages. Now certainly incompatible with multicol.}
\changes{4.0d}{2000/05/17}{make longtable trigger the head, too}
\changes{4.0d}{2000/05/18}{But alternative spelling is deprecated.}
\changes{4.0e}{2000/09/20}{New option showkeys}
\changes{4.0e}{2000/11/14}{Bug fixes and minor new features: title block affiliations can have ancillary data, just like authors; clearpage processing revamped, with floats staying in order; widetext ornaments.}
\changes{4.0e}{2000/11/21}{adornments above and below.}
\changes{4.0f}{2001/02/13}{Last bug fixes before release.}
\changes{4.0rc1}{2001/06/17}{Running headers always as if two-sided}
\changes{4.0rc1}{2001/06/18}{grid changes with push and pop}
\changes{4.0rc1}{2001/06/18}{grid changes with push and pop}
\changes{4.0rc4}{2001/07/23}{hyperref is no longer loaded via class option: use a usepackage statement instead}
\changes{4.1a}{2008/01/18}{(AO, 457) Endnotes to be sorted in with numerical citations.}%
\changes{4.1a}{2008/01/18}{(AO, 451) ``Cannot have more than 256 cites in a document''}%
\changes{4.1a}{2008/01/18}{(AO, 457) Endnotes to be sorted in with numerical citations.}%
\changes{4.1a}{2008/01/18}{(AO, 460) ``Proper style is "FIG. 1. ..." (no colon)''}%
\changes{4.1a}{2008/01/18}{(AO, 478) \cs{ds@letterpaper}, so that ``letterpaper really is the default''}%
\changes{4.1a}{2008/01/18}{(AO, 488) Change processing of options to allow an unused option to specify society and journal}%
\changes{4.1a}{2008/01/19}{(AO, 461) Change the csname revtex uses from @dotsep to ltxu@dotsep. The former is understood in mu. (What we wanted was a dimension.)}%
\changes{4.1a}{2008/01/19}{For natbib versions before 8.21, \cs{NAT@sort} was consulted only as natbib was being read in. Now it is fully dynamic.}
\changes{4.1b}{2008/05/29}{The csname substyle@ext is now defined without a dot (.), to be compatible with \LaTeX usage (see @clsextension and @pkgextension).}
\changes{4.1b}{2008/06/01}{(AO) Implement bibnotes through \cs{frontmatter@footnote@produce} instead of \cs{bibnotes@sw}}%
\changes{4.1b}{2008/06/01}{Add option reprint, opposite of preprint, and preferred alternative to twocolumn}
\changes{4.1b}{2008/06/29}{(AO, 455) Be nice to a list within the abstract (assign \cs{@totalleftmargin}).}
\changes{4.1b}{2008/06/30}{(AO) Structure the Abstract using the \texttt{bibliography} environment}
\changes{4.1b}{2008/07/01}{(AO) coordinate \cs{if@twoside} with \cs{twoside@sw}}
\changes{4.1b}{2008/07/01}{(AO) make settings at class time instead of deferring them to later.}
\changes{4.1b}{2008/07/01}{(AO) No longer need to test \cs{chapter} as of \texttt{natbib} version 8.2}
\changes{4.1b}{2008/07/01}{(AO) No longer use \cs{secnumarabic@sw}, instead use \cs{setup@secnums}}
\changes{4.1b}{2008/07/01}{(AO) Provide more diagnostics when \cs{@society} is assigned.}
\changes{4.1b}{2008/07/01}{(AO) provide option longbibliography}
\changes{4.1b}{2008/07/01}{Add \cs{@hangfroms@section}}
\changes{4.1b}{2008/07/01}{Break out \cs{@caption@fignum@sep}}
\changes{4.1b}{2008/07/01}{Class option galley sets \cs{preprintsty@sw} to false}
\changes{4.1b}{2008/07/01}{Code relating to new syntax for frontmatter has been placed in \file{ltxfront.dtx}}
\changes{4.1b}{2008/07/01}{Package textcase is now simply a required package}
\changes{4.1b}{2008/07/01}{Procedures \cs{@parse@class@options@society} and \cs{@parse@class@options@journal} and friends}
\changes{4.1b}{2008/07/01}{Read in all required packages together}
\changes{4.1b}{2008/07/01}{Remove options newabstract and oldabstract}
\changes{4.1b}{2008/08/01}{Section numbering via procedures \cs{secnums@rtx} and \cs{secnums@arabic}.}
\changes{4.1b}{2008/08/04}{As with author formatting, rag the right more, and assign \cs{@totalleftmargin}. Also neutralize \cs{def@after@address}.}%
\changes{4.1b}{2008/08/04}{Rag the right even more: .8\cs{hsize}. Also, assign \cs{@totalleftmargin}.}%
\changes{4.1b}{2008/08/04}{The \texttt{rmp} journal substyle selects \texttt{groupedaddress} by default.}%
\changes{4.1b}{2008/08/04}{Use \cs{setup@hook} to initialize all.}
\changes{4.1c}{2008/08/15}{Document class option longbibliography via \cs{substyle@post}}
\changes{4.1d}{2009/03/27}{Definition of \cs{ @fnsymbol} follows fixltx2e.sty}
\changes{4.1e}{2008/06/29}{(AO, 455) be nice to a list within the abstract}
\changes{4.1f}{2009/07/07}{(AO, 513) Add class option linenumbers: number the lines a la \classname{lineno}}
\changes{4.1f}{2009/07/07}{(AO, 516) Merged references are separated with a semicolon}
\changes{4.1f}{2009/07/10}{(AO, 520) Automatically produce \cs{bibliography} command when needed}%
\changes{4.1f}{2009/07/11}{(AO, 521) Lonely bibliography head}%
\changes{4.1f}{2009/07/11}{(AO, 522) Warn if software is expired}%
\changes{4.1f}{2009/07/15}{(AO, 523) Add class option nomerge, to turn off new natbib 8.3 syntax}
\changes{4.1f}{2009/07/20}{(AO, 524) Makes no sense if citations are superscript numbers and so are footnotes}
\changes{4.1f}{2009/10/05}{(AO, 530) \cs{@fnsymbol}: Failed to import fixltx2e.sty technology. Return to LaTeX core.}
\changes{4.1g}{2009/10/07}{(AO, 525) Remove phantom paragraph above display math that is given in vertical mode}%
\changes{4.1g}{2009/10/07}{(AO, 538) \cs{MakeTextUppercase} inappropriately expands the double backslash}
\changes{4.1h}{2009/10/09}{(AO) Remove expiry code in the release software}%
\changes{4.1i}{2009/10/23}{(AO, 541) Defer assignment of \cs{cite} until after natbib loads}
\changes{4.1j}{2009/10/24}{(AO, 549) Repairing natbib's \cs{BibitemShut} and \cs{bibAnnote}}
\changes{4.1j}{2009/10/25}{(AO, 545) hypertext capabilities off by default; enable with \classoption{hypertext}}
\changes{4.1j}{2009/10/25}{(AO, 552) Repair spacing in \cs{onlinecite}}
\changes{4.1k}{2009/11/06}{(AO, 554) give the \cs{newlabel} command syntax appropriate to the hyperref package}
\changes{4.1n}{2009/11/06}{(AO, 565) restore 4.0 behavior: invoking class option preprint implies class option preprintnumbers}
\changes{4.1n}{2009/11/30}{(AO, 566) restore 4.0 behavior: flush column bottoms}
\changes{4.1n}{2009/12/05}{(AO, 569) Use of \classname{hyperref} interferes with column balancing of last page}%
\changes{4.1n}{2009/12/09}{(AO, 569) execute the after-last-shipout procedures from within the safety of the output routine}%
\changes{4.1n}{2010/01/02}{(AO, 571) Interface \cs{set@footnotewidth} for determining the set width of footnotes}%
\changes{4.1n}{2010/01/02}{(AO, 572) Independent footnote counter for title block. Abstract footnote counter shared with body.}%
\changes{4.1n}{2009/12/13}{(AO, 573) arrange to load \classname{lineno} after any other packages.}%
\changes{4.1n}{2010/01/04}{(AO, 575) the default for journal prstper is longbibliography}%
\changes{4.1n}{2010/01/04}{(AO, 576) In .bst files, remove support for the annote field}%
\changes{4.1n}{2010/01/02}{(AO) fine-tune spacing above and below widetext}%
\changes{4.1n}{2010/01/02}{(AO, 571) class file must set \cs{splittopskip}; fine tune \cs{skip}\cs{footins}; \cs{footnoterule} defined in terms of \cs{skip}\cs{footins}}%
\changes{4.1n}{2010/01/02}{(AO, 572) \cs{@makefntext} and \cs{frontmatter@makefntext} must be defined harmoniously}%
\changes{4.1o}{2010/02/02}{(AO, 575) Automatically incorporate the (Bib\TeX-generated) .bbl into an explicit \env{thebibliography}}%
\changes{4.1o}{2010/02/05}{(AO, 549) Remove patch to natbib, which is now at version 8.31a}
\changes{4.1o}{2010/02/07}{(AO, 578) accommodate the possible space character preceding \cs{BibitemShut}.}
\changes{4.1o}{2010/02/05}{(AO, 579) Endnote shall comprise their own Bib\TeX\ entry type: @FOOTNOTE.}
\changes{4.1o}{2010/02/10}{(AO, 580) Provide a document class option to turn off production of eprint field in bibliography.}
\changes{4.1o}{2010/02/12}{(AO, 580) Control .bst at run time.}%
\changes{4.1o}{2010/02/09}{(AO, 581) Handle case: merged references, with first ending in a stop character.}
\changes{4.1p}{2010/02/24}{(AO, 583) Provide interface to \classname{ltxgrid} \cs{onecolumn@grid@setup} and \cs{twocolumn@grid@setup}}
\changes{4.1p}{2010/02/24}{(AO, 584) Per MD, remove trailing space character from each journal abbreviation: it had caused an extraneous space in the .bbl}
\changes{4.1q}{2010/04/01}{(AO, 586) When .bbl is pasted into the document, prevent automatic bibliography inclusion.}%
\changes{4.1q}{2010/04/13}{(AO, 588) Only write REV\TeX\ -specific BibTeX .bib data if the .bst style is set by REVTeX.}%
\changes{4.1r}{2010/06/22}{(AO, 595) Provide \cs{lovname} along with other List of Videos definitions.}%
\changes{4.2a}{2014/12/31}{(Aptara, MD) Added initial support for SOR and AAPM journals, additional journals for APS, and additional journals and proceedings for AIP, unreleased.}%
\changes{4.2a}{2014/12/31}{(Aptara) Make prb style to follow other Phys. Rev. journals.}%
\changes{4.2a}{2014/12/31}{(Aptara) Corrected indentation for tableofcontents appearing along with listoffigure/listoftable.}%
\changes{4.2a}{2017/11/21}{(MD) Make long bibliography style the default now.}%
\changes{4.2a}{2017/11/28}{(MD) Add call to normalsize to be a good citizen and allow booktabs.sty to work properly.}%
\changes{4.2b}{2018/12/26}{(MD) Make titles in bibliography default, prb style to follow other Phys. Rev. journals, add a unified physrev option as well as prx and prapplied options. Corrected indentation for tableofcontents appearing along with listoffigure/listoftable.}%
\changes{4.2b}{2017/11/21}{(MD) Update options for new titles without "Special Topics" and make prper match style of other journal options}%
\changes{4.2b}{2017/11/21}{(MD) Add options for new APS journals and a generic physrev option for future-proofing}%
\changes{4.2b}{2017/11/22}{(MD) Change default to not use a title page - it seems antiquated}%
\changes{4.2b}{2017/11/22}{(MD) MD - not sure why these parameters were different previously. Made them match except for title.}%
\changes{4.2b}{2017/11/22}{(MD) PACS are obsolete altogether now}%
\changes{4.2b}{2018/12/26}{(MD) Improve control over display of e-print ids in bibliography.}%
\iffalse ltxdoc klootch
This file has version number 4.2e, last revised 2020/10/03.\fi
\section{Introduction}
This is the author's guide to REV\TeX\ ~4.2, the preferred submission
format for all APS and AIP journals. This guide is intended to be a concise
introduction to REV\TeX\ ~4.2. The documentation has been separated out
into smaller units to make it easier to locate essential
information.
The following documentation is also part of the REV\TeX\ ~4.2
distribution. Updated versions of these will be maintained at
the REV\TeX\ ~4.2 homepage located at \url{http://journals.aps.org/revtex/}.
\begin{itemize}
\item \textit{APS Author Guide for REV\TeX\ ~4.2}
\item \textit{Author's Guide to AIP Substyles for REV\TeX\ ~4.2}
\item \textit{REV\TeX\ ~4.2 Command and Options Summary}
\end{itemize}
This guide assumes a working REV\TeX\ ~4.2
installation. Please see the installation instructions included with the
distribution.
\subsection{Changes in REV\TeX\ ~4.2}
The REV\TeX\ \ system for \LaTeX\ began its development in 1986 and has gone through three major revisions since then. REV\TeX\ ~4 was released in August, 2001. Since that time, many user requests for new features were received. These requests were addressed in REV\TeX\ ~4.1, which was released in August, 2010. REV\TeX\ ~4.2 is the current release.
REV\TeX\ ~4.2 incorporates the following changes:
\begin{itemize}
\item \textbf{Added support for additional APS journals, \textit{Physical Review X}, \textit{Physical Review Accelerators and Beams}, \textit{Physical Review Applied}, \textit{Physical Review Fluids}, \textit{Physical Review Materials}, and \textit{Physical Review Physics Education Research}.} There are new options \texttt{prx}, \texttt{prab}, \texttt{prapplied}, \texttt{prfluids}, \texttt{prmaterials}, and \texttt{prper}.
\item \textbf{Added a unified \texttt{physrev} option for \textit{Physical Review} journal style (the \textit{Phys. Rev.} journals have no or few variations).}
\item \textbf{The \texttt{prb} option now conforms with \textit{Physical Review B}'s updated style that uses the same non-superscripted citations as other APS journals}.
\item \textbf{Added support for additional AIP journals, \textit{AIP Advances}, \textit{Applied Physics Letters Materials}, and \textit{Structural Dynamics} as well as \textit{AIP Conference Proceedings}. There are new options \texttt{adv}, \texttt{apm}, \texttt{sd}, and \texttt{cp}.}
\item \textbf{Added support for the Society of Rheology (\texttt{sor}) and its journal, \textit{Journal of Rheology} (\texttt{jor}).}
\item \textbf{The \texttt{reprint} style for AIP's journal JMP was changed to one-column formatting.}
\item \textbf{For all APS journal options, complete article titles are now displayed in bibliography entries citing journal articles when using Bib\TeX\ by default.}
\item \textbf{In the \textit{Phys. Rev.} Bib\TeX\ style file, article titles in the bibliography are set in roman font}.
\item \textbf{The behavior of the \texttt{noeprints} option has been improved}.
\item \textbf{Support has been added for citing data sets in the Bib\TeX\ styles}.
\item \textbf{Support for citing journals that use a DOI instead of pages or article identifiers has been improved (for APS \textit{Phys. Rev.} Bib\TeX\ style)}.
\item \textbf{The indentation of tables of contents has been improved}.
\item \textbf{The \texttt{onecolumn} option no longer defaults to creating a separate title page}.
\item \textbf{The \texttt{showpacs} option is completely ignored now}.
\item \textbf{A bug when using \texttt{booktabs.sty} has been fixed}.
\item \textbf{The formatting of references for some commonly cited journals has been improved in the \textit{Phys. Rev.} Bib\TeX\ style}.
\item \textbf{URLs generated for DOIs now use \texttt{https://doi.org/} as the base in the Bib\TeX\ styles.}
\item \textbf{URLs generated for arXiv.org e-print identifiers now use \texttt{https://arXiv.org/abs/} as the base in the Bib\TeX\ styles.}
\end{itemize}
\subsection{REV\TeX\ ~4 Backwards Compatibility}
Documents prepared under REV\TeX\ ~4 and REV\TeX\ ~4.1 should process correctly under REV\TeX\ ~4.2. However, the formatting of the pages and, if using Bib\TeX, the references may change. Under 4.2, articles using the \texttt{prb} option will have their \texttt{cite} commands typeset differently, and adjacent punctuation may need to be moved accordingly. The default behaviors of some other options may also have changed, as described above.
\subsection{Submitting to APS Journals}
Authors using REV\TeX\ ~4.2 to prepare a manuscript for submission to
\textit{Physical Review Letters}, \textit{Physical Review}, \textit{Reviews of Modern Physics},
or other APS journals must also read the companion document \textit{APS Author Guide for REV\TeX\ ~4.2}
distributed with REV\TeX\ \ and follow the guidelines detailed there.
The REV\TeX\ ~4.2 distribution includes both a template
(\file{apstemplate.tex}) and a sample document (\file{apssamp.tex}).
The template is a good starting point for a manuscript. In the
following sections are instructions that should be sufficient for
creating a paper using REV\TeX\ ~4.2.
Further information about submissions to the American
Physical Society may be found at \url{http://journals.aps.org/revtex/}.
\subsection{Submitting to AIP Journals}
REV\TeX\ ~4.2 includes support for the journals of the American Institute of Physics.
The style files and authoring guides for these journals are distributed as part of the
REV\TeX\ ~4.2 distribution, which includes both a template
(\file{aiptemplate.tex}) and a sample document (\file{aipsamp.tex}).
The template is a good starting point for a manuscript. In the
following sections are instructions that should be sufficient for
creating a paper using REV\TeX\ ~4.2.
More information may be found at
\url{http://publishing.aip.org/authors/preparing-your-manuscript}. Please consult the \textit{Author's Guide to AIP Substyles for REV\TeX\ ~4.2} for more information about submissions to AIP journals, AIP styles files, and other AIP-specific information.
\subsection{Contact Information}\label{sec:aipresources}%
Any bugs, problems, or inconsistencies with REV\TeX\ \ or the APS journal style files should be reported to
REV\TeX\ \ support at \verb+[email protected]+. Reports should include information on the error and a \textit{small}
sample document that manifests the problem if possible (please don't send large files!). Issues related to the AIP journal styles should be sent directly to \verb+[email protected]+.
\section{Some \LaTeXe\ Basics}
REV\TeX\ ~4.2 must sometimes patch the underlying
\LaTeX\ kernel. This means that REV\TeX\ ~4.2 requires a fairly recent version of
\LaTeXe. Versions prior to 2005/12/01 may not work
correctly. REV\TeX\ ~4.2 will be maintained to be compatible with future
versions of \LaTeXe.
\subsection{Useful \LaTeXe\ Markup}
\LaTeXe\ markup is the preferred way to accomplish many basic tasks.
\subsubsection{Fonts}
Because REV\TeX\ ~4.2 is based upon \LaTeXe, it inherits all of the
macros used for controlling fonts. Of particular importance are the
\LaTeXe\ macros \cmd{\textit}, \cmd{\textbf}, \cmd{\texttt} for changing to
an italic, bold, or typewriter font respectively. One should always
use these macros rather than the lower-level \TeX\ macros \cmd{\it},
\cmd{\bf}, and \cmd{\tt}. The \LaTeXe\ macros offer
improvements such as better italic correction and scaling in super-
and subscripts for example. Table~\ref{tab:fonts}
summarizes the font selection commands in \LaTeXe.
\begin{table}
\caption{\label{tab:fonts}\LaTeXe\ font commands}
\begin{ruledtabular}
\begin{tabular}{ll}
\multicolumn{2}{c}{\textbf{Text Fonts}}\\
\textbf{Font command} & \textbf{Explanation} \\
\cmd\textit\marg{text} & Italics\\
\cmd\textbf\marg{text} & Boldface\\
\cmd\texttt\marg{text} & Typewriter\\
\cmd\textrm\marg{text} & Roman\\
\cmd\textsl\marg{text} & Slanted\\
\cmd\textsf\marg{text} & Sans Serif\\
\cmd\textsc\marg{text} & Small Caps\\
\cmd\textmd\marg{text} & Medium Series\\
\cmd\textnormal\marg{text} & Normal Series\\
\cmd\textup\marg{text} & Upright Series\\
&\\
\multicolumn{2}{c}{\textbf{Math Fonts}}\\
\cmd\mathit\marg{text} & Math Italics\\
\cmd\mathbf\marg{text} & Math Boldface\\
\cmd\mathtt\marg{text} & Math Typewriter\\
\cmd\mathsf\marg{text} & Math Sans Serif\\
\cmd\mathcal\marg{text} & Calligraphic\\
\cmd\mathnormal\marg{text} & Math Normal\\
\cmd\bm\marg{text}& Bold math for Greek letters\\
& and other symbols\\
\cmd\mathfrak\marg{text}\footnotemark[1] & Fraktur\\
\cmd\mathbb\marg{text}\footnotemark[1] & Blackboard Bold\\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Requires \classname{amsfonts} or \classname{amssymb} class option}
\end{table}
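As a quick illustration, these font commands can be mixed freely in running text (a generic example, not tied to any particular journal style):
\begin{verbatim}
The \textbf{main} result holds for
\textit{all} admissible parameters;
see the \texttt{revtex4-2} documentation.
\end{verbatim}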
\subsubsection{User-defined macros}
\LaTeXe\ provides several macros that enable users to easily create new
macros for use in their manuscripts:
\begin{itemize}
\footnotesize
\item \cmd\newcommand\marg{\\command}\oarg{narg}\oarg{opt}\marg{def}
\item \cmd\newcommand\verb+*+\marg{\\command}\oarg{narg}\oarg{opt}\marg{def}
\item \cmd\renewcommand\marg{\\command}\oarg{narg}\oarg{opt}\marg{def}
\item \cmd\renewcommand\verb+*+\marg{\\command}\oarg{narg}\oarg{opt}\marg{def}
\item \cmd\providecommand\marg{\\command}\oarg{narg}\oarg{opt}\marg{def}
\item \cmd\providecommand\verb+*+\marg{\\command}\oarg{narg}\oarg{opt}\marg{def}
\end{itemize}
Here \meta{\\command} is the name of the macro being defined,
\meta{narg} is the number of arguments the macro takes,
\meta{opt} are optional default values for the arguments, and
\meta{def} is the actual macro definition. \cmd\newcommand\ creates a
new macro, \cmd\renewcommand\ redefines a previously defined macro,
and \cmd\providecommand\ will define a macro only if it hasn't
been defined previously. The *-ed versions are an optimization that
indicates that the macro arguments will always be ``short'' arguments. This is
almost always the case, so the *-ed versions should be used whenever
possible.
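For example, the following defines a macro with one mandatory argument and another with an optional argument that defaults to \texttt{2} (both macro names are made up purely for illustration):
\begin{verbatim}
\newcommand*{\ket}[1]{\left|#1\right\rangle}
\newcommand*{\order}[1][2]{O(N^{#1})}
\end{verbatim}
In the document body, \verb+$\ket{\psi}$+ then produces $\left|\psi\right\rangle$, while \verb+$\order$+ and \verb+$\order[3]$+ produce $O(N^2)$ and $O(N^3)$, respectively.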
The use of these macros is preferred over using plain \TeX's low-level
macros such as
\cmd\def{},\cmd\edef{}, and \cmd\gdef{}. APS authors must follow the
\textit{APS Author Guide for REV\TeX\ ~4.2} when defining macros.
\subsubsection{Symbols}
\LaTeXe\ has added some convenient commands for some special symbols
and effects. These are summarized in Table~\ref{tab:special}. See
\cite{Guide} for details.
\begin{table}
\caption{\label{tab:special}\LaTeXe\ commands for special symbols and effects}
\begin{ruledtabular}
\begin{tabular}{lc}
Command & Symbol/Effect\\
\cmd\textemdash & \textemdash\\
\cmd\textendash & \textendash\\
\cmd\textexclamdown & \textexclamdown\\
\cmd\textquestiondown & \textquestiondown\\
\cmd\textquotedblleft & \textquotedblleft\\
\cmd\textquotedblright & \textquotedblright\\
\cmd\textquoteleft & \textquoteleft\\
\cmd\textquoteright & \textquoteright\\
\cmd\textbullet & \textbullet\\
\cmd\textperiodcentered & \textperiodcentered\\
\cmd\textvisiblespace & \textvisiblespace\\
\cmd\textcompwordmark & Break a ligature\\
\cmd\textcircled\marg{char} & Circle a character\\
\end{tabular}
\end{ruledtabular}
\end{table}
\LaTeXe\ provides additional symbols in a
separate package called \classname{latexsym}. To use these symbols, include
the package using:
\begin{verbatim}
\usepackage{latexsym}
\end{verbatim}
\subsection{Using \LaTeXe\ packages with REV\TeX\ }\label{sec:usepackage}%
Many \LaTeXe\ packages are available, for instance, on CTAN at
\url{http://www.ctan.org/tex-archive/macros/latex/required/}
and at
\url{http://www.ctan.org/tex-archive/macros/latex/contrib/}. Full \TeX\ distributions
such as \TeX\
Live \url{http://www.tug.org/texlive/} provide an excellent and complete installation of \TeX\ that is
easy to maintain. Some of these packages
are automatically loaded by REV\TeX\ ~4.2 when certain class options are
invoked and are, thus, ``required.'' They will either be distributed
with REV\TeX\ \ or are already included with a standard \LaTeXe\
distribution.
Required packages are automatically loaded by REV\TeX\ \ on an as-needed
basis. Other packages should be loaded using the
\cmd\usepackage\ command. To load the
\classname{hyperref} package, the document preamble might look like:
\begin{verbatim}
\documentclass{revtex4-2}
\usepackage{hyperref}
\end{verbatim}
Some common (and very useful) \LaTeXe\ packages are \textit{a priori}
important enough that REV\TeX\ ~4.2 has been designed to be specifically
compatible with them.
A bug stemming from the use of one of these packages in
conjunction with any of the APS journals may be reported by contacting
REV\TeX\ \ support.
\begin{description}
\item[\textbf{AMS packages}] REV\TeX\ ~4.2 is compatible with and depends
upon the AMS packages
\classname{amsfonts},
\classname{amssymb}, and
\classname{amsmath}. In fact, REV\TeX\ ~4.2 requires use of these packages
to accomplish some common tasks. See Section~\ref{sec:math} for more.
REV\TeX\ ~4.2 requires version 2.0 or higher of the AMS-\LaTeX\ package.
\item[\textbf{array and dcolumn}]
The \classname{array} and \classname{dcolumn} packages are part of
\LaTeX's required suite of packages. \classname{dcolumn} is required
to align table columns on decimal points (and it in turn depends upon
the \classname{array} package).
\item[\textbf{longtable}]
\file{longtable.sty} may be used for large tables that will span more than one
page. REV\TeX\ ~4.2 dynamically applies patches to longtable.sty so that
it will work in two-column mode.
\item[\textbf{hyperref}] \file{hyperref.sty} is a package that is
used for putting hypertext links into \LaTeXe\ documents.
REV\TeX\ ~4.2 has hooks to allow e-mail addresses and URL's to become
hyperlinks if \classname{hyperref} is loaded.
\item[\textbf{booktabs}] REV\TeX\ ~4.2 improves compatibility with \classname{booktabs.sty}.
\end{description}
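As an illustration of the \classname{dcolumn} mechanism mentioned above, a decimal-aligned column type can be defined and used as follows (a generic sketch, not specific to any journal style):
\begin{verbatim}
\usepackage{dcolumn}
\newcolumntype{d}[1]{D{.}{.}{#1}}
...
\begin{tabular}{ld{3.2}}
quantity & \multicolumn{1}{c}{value}\\
length   & 123.45\\
width    & 6.7\\
\end{tabular}
\end{verbatim}
Here \verb+d{3.2}+ expands to \verb+D{.}{.}{3.2}+, which aligns entries on the decimal point with room for three digits before it and two after.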
Other packages will conflict with REV\TeX\ ~4.2 and should be
avoided. Usually such a conflict arises because the package adds
enhancements that REV\TeX\ ~4.2 already includes. Here are some common
packages that clash with REV\TeX\ ~4.2:
\begin{description}
\item[\textbf{multicol}] \file{multicol.sty} is a package by Frank Mittelbach
that adds support for multiple columns. In fact, early versions of
REV\TeX\ ~4 used \file{multicol.sty} for precisely this. REV\TeX\ \
incorporates its own support for multiple-column typesetting.
\item[\textbf{cite}] Donald Arseneau's \file{cite.sty} is often used to provide
support for sorting a \cmd\cite\ command's arguments into numerical
order and to collapse consecutive runs of reference numbers. REV\TeX\ ~4.2
has this functionality built-in already via the \classname{natbib} package.
\item[\textbf{mcite}] REV\TeX\ ~4.2 already contains a lot of this
functionality through its updated syntax for the \cmd\cite\ command and
the latest \classname{natbib} package.
\item[\textbf{endfloat}] The same functionality can be accomplished
using the \classoption{endfloats} class option.
\item[\textbf{float}] \texttt{float.sty} provides a mechanism for creating new float classes with just a few commands. REV\TeX\ ~4.2 has limited compatibility with \texttt{float.sty}. If attempting to use this package, be sure to put any \cmd\newfloat\ commands after the \verb+\begin{document}+ line.
\end{description}
\section{The Document Preamble}
The preamble of a \LaTeX\ document is the set of commands that precede
the \envb{document} line. It contains a
\cmd\documentclass\ line to load the REV\TeX\ ~4.2 class (\textit{i.e.},
all of the REV\TeX\ ~4.2 macro definitions), \cmd\usepackage\ macros to
load other macro packages, and other macro definitions.
\subsection{The \emph{documentclass} line}
The basic formatting of the manuscript is controlled by setting
\emph{class options} using
\cmd\documentclass\oarg{options}\aarg{\classname{revtex4-2}}.
The optional arguments that appear in the square brackets control the layout of the
document. At this point, one only needs to choose:
\begin{itemize}
\item Either the \classoption{aps} (default) or \classoption{aip} society option
\item One of the chosen society's journal styles such as \classoption{prl} or \classoption{apl}
\item A layout option such as \classoption{preprint} (single-column formatting), \classoption{reprint} (an approximation
to the selected journal's actual layout which may be one- or two-column depending on the journal), or \classoption{twocolumn}
\end{itemize}
Usually, one would want to use \classoption{preprint} for draft papers. Paper size
options are also available; in particular, \classoption{a4paper} and the rest of the
standard \LaTeX\ paper sizes are supported. A
full list of class options is given in the \textit{REV\TeX\ ~4.2 Command
and Options Summary}.
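For example, a draft aimed at \textit{Physical Review Letters} might begin with (just one possible combination of the options described above):
\begin{verbatim}
\documentclass[aps,prl,preprint]{revtex4-2}
\end{verbatim}
Switching \classoption{preprint} to \classoption{reprint} then approximates the journal's final two-column layout.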
\subsection{Loading other packages}
Other packages may be loaded into a REV\TeX\ ~4.2 document by using the
standard \LaTeXe\ \cmd\usepackage\ command. For instance, to load
the \classoption{graphics} package, one would use
\verb+\usepackage{graphics}+.
\section{The Front Matter}\label{sec:front}
After choosing the basic look and feel of the document by selecting
the appropriate class options and loading in whatever other macros are
needed, one is ready to move on to creating a new manuscript. After
the preamble, be sure to put in a \envb{document} line (and put
in an \enve{document} as well). This section describes the macros
REV\TeX\ ~4.2 provides for formatting the front matter of the
article. The behavior and usage of these macros can be quite
different from those provided in the \LaTeXe\ \classname{article} class.
\subsection{Setting the title}
The title of the manuscript is simply specified by using the
\cmd\title\aarg{title} macro. A \verb+\\+ may be used to put a line
break in a long title.
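For example:
\begin{verbatim}
\title{First Observation of a Placeholder Effect\\
in a Hypothetical System}
\end{verbatim}
(The title text itself is, of course, only an illustration.)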
\subsection{Specifying a date}%
The \cmd\date\marg{date} command outputs the date on the
manuscript. Using \cmd\today\ will cause \LaTeX{} to insert the
current date whenever the file is run:
\begin{verbatim}
\date{\today}
\end{verbatim}
\subsection{Specifying authors and affiliations}
The REV\TeX\ ~4.2 macros for specifying authors and their affiliations are designed
to save labor for authors and during production. Authors and affiliations are
arranged into groupings called, appropriately enough, \emph{author
groups}. Each author group is a set of authors who share the same set
of affiliations. Author names are specified with the \cmd\author\
macro while affiliations (or addresses) are specified with the
\cmd\affiliation\ macro. Author groups are specified by sequences of
\cmd\author\ macros followed by \cmd\affiliation\ macros. An
\cmd\affiliation\ macro applies to all previously specified
\cmd\author\ macros which don't already have an affiliation supplied.
For example, if Bugs Bunny and Roger Rabbit are both at Looney Tune
Studios, while Mickey Mouse is at Disney World, the markup would be:
\begin{verbatim}
\author{Bugs Bunny}
\author{Roger Rabbit}
\affiliation{Looney Tune Studios}
\author{Mickey Mouse}
\affiliation{Disney World}
\end{verbatim}
The default is to display this as
\begin{center}
Bugs Bunny and Roger Rabbit\\
\emph{Looney Tune Studios}\\
Mickey Mouse\\
\emph{Disney World}\\
\end{center}
This layout style for displaying authors and their affiliations is
chosen by selecting the class option
\classoption{groupedaddress}. Journal styles usually default to this option,
so it need not be specified explicitly. The other major way of displaying this
information is to use superscripts on the authors and
affiliations. This can be accomplished by selecting the class option
\classoption{superscriptaddress}. To achieve the display
\begin{center}
Bugs Bunny,$^{1}$ Roger Rabbit,$^{1,2}$ and Mickey Mouse$^{2}$\\
\emph{$^{1}$Looney Tune Studios}\\
\emph{$^{2}$Disney World}\\
\end{center}
one would use the markup
\begin{verbatim}
\author{Bugs Bunny}
\affiliation{Looney Tune Studios}
\author{Roger Rabbit}
\affiliation{Looney Tune Studios}
\affiliation{Disney World}
\author{Mickey Mouse}
\affiliation{Disney World}
\end{verbatim}
Note that REV\TeX\ ~4.2 takes care of any commas and \emph{and}'s that join
the author names together and font selection, as well as any
superscript numbering. Only the author names and affiliations should
be given within their respective macros. See below for further information
regarding the proper way to add footnotes to author names and affiliations.
There is a third class option, \classoption{unsortedaddress}, for
controlling author/affiliation display. The default
\classoption{groupedaddress} will actually sort authors into the
appropriate author groups if one chooses to specify an affiliation for
each author. The markup:
\begin{verbatim}
\author{Bugs Bunny}
\affiliation{Looney Tune Studios}
\author{Mickey Mouse}
\affiliation{Disney World}
\author{Roger Rabbit}
\affiliation{Looney Tune Studios}
\end{verbatim}
will result in the same display as for the first case given
above even though Roger Rabbit is specified after Mickey Mouse. To
avoid Roger Rabbit being moved into the same author group as Bugs
Bunny, use the
\classoption{unsortedaddress} option instead. In general, it is safest
to list authors in the order they should appear and specify
affiliations for multiple authors rather than one at a time. This will
afford the most independence for choosing the display option. Finally,
it should be mentioned that the affiliations for the
\classoption{superscriptaddress} are presented and numbered
in the order that they are encountered. This means that the order
will usually follow the order of the authors. An alternative ordering
can be forced by including a list of \cmd\affiliation\ commands before
the first \cmd{\author} in the desired order. Then use the exact same
text for each affiliation when specifying them for each author.
If an author doesn't have an affiliation, the \cmd\noaffiliation\
macro may be used in the place of an \cmd\affiliation\ macro.
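For example, if Daffy Duck (a made-up author, in keeping with the examples
above) has no affiliation to report:
\begin{verbatim}
\author{Bugs Bunny}
\affiliation{Looney Tune Studios}
\author{Daffy Duck}
\noaffiliation
\end{verbatim}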
\subsubsection{Collaborations}
A collaboration name can be specified with the \cmd\collaboration\
command. This is very similar to the \cmd\author\ command. In REV\TeX\ ~4.2, it can
be used with both the \classoption{superscriptaddress} and \classoption{groupedaddress} class options. The
\cmd\collaboration\ command should appear at the end of the list of
authors. The collaboration name will appear centered in parentheses
between the list of authors and the list of
affiliations. Because collaborations
don't normally have affiliations, one needs to follow the
\cmd\collaboration\ with \cmd\noaffiliation.
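For example, with an invented collaboration name:
\begin{verbatim}
\author{Bugs Bunny}
\affiliation{Looney Tune Studios}
\author{Mickey Mouse}
\affiliation{Disney World}
\collaboration{The Toon Collaboration}
\noaffiliation
\end{verbatim}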
\subsubsection{Footnotes for authors, collaborations, affiliations or title}\label{sec:footau}
Often one wants to specify additional information associated with an
author, collaboration, or affiliation such as an e-mail address, an
alternate affiliation, or some other ancillary information.
REV\TeX\ ~4.2 introduces several new macros just for this purpose. They
are:
\begin{itemize}
\item\cmd\email\oarg{optional text}\aarg{e-mail address}
\item\cmd\homepage\oarg{optional text}\aarg{URL}
\item\cmd\altaffiliation\oarg{optional text}\aarg{affiliation}
\item\cmd\thanks\aarg{miscellaneous text}
\end{itemize}
In the first three, the \emph{optional text} will be prepended before the
actual information specified in the required argument. In the APS journal style files, \cmd\email\ and \cmd\homepage\ no longer have a default value. However, in the AIP styles, each have a default text for their optional arguments
(`Electronic address:' and `URL:' respectively). The \cmd\thanks\
macro should only be used if one of the other three do not apply. Any
author name can have multiple occurrences of these four macros. Note
that unlike the
\cmd\affiliation\ macro, these macros apply only to the \cmd\author\
that directly precedes them. Any \cmd\affiliation\ \emph{must} follow
the other author-specific macros. A typical usage might be as follows:
\begin{verbatim}
\author{Bugs Bunny}
\email[E-mail me at: ]{[email protected]}
\homepage[Visit: ]{http://looney.com/}
\altaffiliation[Permanent address: ]
{Warner Brothers}
\affiliation{Looney Tunes}
\end{verbatim}
This would result in the footnote ``E-mail me at: \texttt{[email protected]},
Visit: \texttt{http://looney.com/}, Permanent address: Warner
Brothers'' being attached to Bugs Bunny. Note that:
\begin{itemize}
\item Only an e-mail address, URL, or affiliation should go in the
required argument in the curly braces.
\item The font is automatically taken care of.
\item An explicit space is needed at the end of the optional text if one is
desired in the output.
\item Use the optional arguments to provide customized
text only if there is a good reason to.
\end{itemize}
The \cmd\collaboration\ , \cmd\affiliation\ , or even \cmd\title\ can
also have footnotes attached via these commands. If any ancillary data
(\cmd\thanks, \cmd\email, \cmd\homepage, or
\cmd\altaffiliation) are given in the wrong context (e.g., before any
\cmd\title, \cmd\author, \cmd\collaboration, or \cmd\affiliation\
command has been given), then a warning is given in the \TeX\ log, and
the command is ignored.
Duplicate sets of ancillary data are merged, giving rise to a single
shared footnote. However, this only applies if the ancillary data are
identical: even the order of the commands specifying the data must be
identical. Thus, for example, two authors can share a single footnote
indicating a group e-mail address.
Duplicate \cmd\affiliation\ commands may be given in the course of the
front matter, without the danger of producing extraneous affiliations
on the title page. However, ancillary data should be specified for
only the first instance of any particular institution's
\cmd\affiliation\ command; a later instance with different ancillary
data will result in a warning in the \TeX\ log.
It is preferable to arrange authors into
sets. Within each set all the authors share the same group of
affiliations. For each author, give the \cmd\author\ (and appropriate
ancillary data), then follow this author group with the needed group
of \cmd\affiliation\ commands.
If affiliations have been listed before the first
\cmd\author\ macro to ensure a particular ordering, be sure
that any later \cmd\affiliation\ command for the given institution is
an exact copy of the first, and also ensure that no ancillary data is
given in these later instances.
Each journal class option has a default behavior for the placement of these
ancillary information footnotes. For instance, the \classoption{prb} option puts all
such footnotes at the start of the bibliography while the \classoption{prl}
journal style displays them on the first page. One can override a
journal style's default behavior by specifying explicitly the class
option
\classoption{bibnotes} (puts the footnotes at the start of the
bibliography) or \classoption{nobibnotes} (puts them on the first page).
Please consult the documentation for the various journal style files for further information.
\subsubsection{Specifying first names and surnames}
Many authors have names in which either the surname appears first
or in which the surname is made up of more than one name. To ensure
that such names are accurately captured for indexing and other
purposes, the \cmd\surname\ macro should be used to indicate which portion
of a name is the surname. Similarly, there is a \cmd\firstname\ macro
as well, although usage of \cmd\surname\ should be sufficient. If an
author's surname is a single name and written last, it is not
necessary to use these macros. These macros do nothing but indicate
how a name should be indexed. Here are some examples:
\begin{verbatim}
\author{Andrew \surname{Lloyd Weber}}
\author{\surname{Mao} Tse-Tung}
\end{verbatim}
\subsection{The abstract}
An abstract for a paper is specified by using the \env{abstract}
environment:
\begin{verbatim}
\begin{abstract}
Text of abstract
\end{abstract}
\end{verbatim}
Note that in REV\TeX\ ~4.2 the abstract must be specified before the
\cmd\maketitle\ command and there is no need to embed it in an explicit
minipage environment.
\subsubsection{Structured abstracts}
A new feature in REV\TeX\ ~4.2 is support for \textit{structured abstracts}. A ``structured" abstract is an abstract divided into labeled sections. For instance, \textit{Physical Review C} would like authors to provide abstracts with sections summarizing the paper's \textbf{Background}, \textbf{Purpose}, \textbf{Method}, \textbf{Results}, and \textbf{Conclusions}. This can be accomplished by using the \texttt{description} environment within the \texttt{abstract} environment. For example:
\begin{verbatim}
\begin{abstract}
\begin{description}
\item[Background] This part would describe the
context needed to understand what the paper
is about.
\item[Purpose] This part would state the purpose
of the present paper.
\item[Method] This part would describe the methods
used in the paper.
\item[Results] This part would summarize the
results.
\item[Conclusions] This part would state the
conclusions of the paper.
\end{description}
\end{abstract}
\end{verbatim}
\subsection{PACS codes}
PACS codes are obsolete. The \classoption{showpacs} option does nothing, but is present so that older documents may still be processed under REV\TeX\ ~4.2.
\subsection{Keywords}
A \cmd\keywords\ macro may also be used to indicate keywords for the
article.
\begin{verbatim}
\keywords{nuclear form; yrast level}
\end{verbatim}
This will be displayed below the abstract and PACS (if supplied). Like
PACS codes, the actual display of the keywords is controlled by
two classoptions: \classoption{showkeys} and
\classoption{noshowkeys}. An explicit \classoption{showkeys} must be
included in the \cmd\documentclass\ line to display the keywords.
\subsection{Institutional report numbers}
Institutional report numbers can be specified using the \cmd\preprint\
macro. If the \classoption{preprintnumbers} class option is specified, these will be displayed in the upper right corner of the first page. Multiple \cmd\preprint\ macros may be supplied (space is
limited though, so only three or less may actually fit). Please note that the \classoption{preprint} class option does not automatically invoke \classoption{preprintnumbers}.
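For example, assuming a made-up report number:
\begin{verbatim}
\documentclass[aps,prb,preprint,preprintnumbers]
              {revtex4-2}
\preprint{LOONEY-2024-001}
\end{verbatim}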
\subsection{maketitle}
After specifying the title, authors, affiliations, abstract, PACS
codes, and report numbers, the final step for formatting the front
matter of the manuscript is to execute the \cmd\maketitle\ macro by
simply including it:
\begin{verbatim}
\maketitle
\end{verbatim}
The \cmd\maketitle\ macro must follow all of the macros listed
above. The macro will format the front matter in accordance with the various
class options that were specified in the
\cmd\documentclass\ line (either implicitly through defaults or
explicitly).
\section{The body of the paper}
For typesetting the body of a paper, REV\TeX\ ~4.2 relies heavily on
standard \LaTeXe\ and other packages (particularly those that are part
of AMS-\LaTeX). Users unfamiliar with these packages should read the
following sections carefully.
\subsection{Section headings}
Section headings are input as in \LaTeX.
The output is similar, with a few extra features.
Four levels of headings are available in REV\TeX\ {}:
\begin{quote}
\cmd\section\marg{title text}\\
\cmd\subsection\marg{title text}\\
\cmd\subsubsection\marg{title text}\\
\cmd\paragraph\marg{title text}
\end{quote}
Use the starred form of the command to suppress the automatic numbering; e.g.,
\begin{verbatim}
\section*{Introduction}
\end{verbatim}
To label a section heading for cross referencing, best practice is to
place the \cmd\label\marg{key} within the argument specifying the heading:
\begin{verbatim}
\section{\label{sec:intro}Introduction}
\end{verbatim}
In some journal substyles, such as those of the APS,
all text in the \cmd\section\ command is automatically set uppercase.
If a lowercase letter is needed, use \cmd\lowercase\aarg{x}.
For example, to use ``He'' for helium in a \cmd\section\marg{title text} command, type
\verb+H+\cmd\lowercase\aarg{e} in \marg{title text}.
Use \cmd\protect\verb+\\+ to force a line break in a section heading.
(Fragile commands must be protected in section headings, captions, and
footnotes and \verb+\\+ is a fragile command.)
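For example:
\begin{verbatim}
\section{Scattering of H\lowercase{e} Atoms}
\section{A Long Heading Broken\protect\\ by Hand}
\end{verbatim}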
\subsection{Paragraphs and General Text}
Paragraphs always end with a blank input line. Because \TeX\
automatically calculates linebreaks and word hyphenation in a
paragraph, it is not necessary to force linebreaks or hyphenation. Of
course, compound words should still be explicitly hyphenated, e.g.,
``author-prepared copy.''
Use directional quotes for quotation marks around quoted text
(\texttt{``xxx''}), not straight double quotes (\texttt{"xxx"}).
For opening quotes, use one or two backquotes; for closing quotes,
use one or two forward quotes (apostrophes).
\subsection{One-column vs. two-column layouts}\label{sec:widetext}
One of the hallmarks of \textit{Physical Review} and many of the AIP journals is their two-column
formatting. REV\TeX\ ~4.2 provides the \classoption{reprint} class option that provides for each
journal class option a close approximation to the journal's actual production formatting. Note that
the \classoption{reprint} option will give either one or two-column formatting as appropriate for the particular journal.
For most APS and AIP journals, the \classoption{reprint} option will take care of formatting the front matter
(including the abstract) as a single column and will typeset the body in two columns. REV\TeX\ ~4.2 has its own
built-in two-column formatting macros to provide well-balanced columns as well as reasonable control over the placement of floats in either
one- or two-column modes. When drafting papers, it is common to use a one-column format. This is best achieved by using the
\classoption{preprint} class option. Authors may override a particular journal's formatting by using the lower level options \classoption{onecolumn} and \classoption{twocolumn}, but best practice is to stick with the \classoption{preprint} and \classoption{reprint} options.
Please note that the \classoption{reprint} class option is only an \textit{approximation} of a journal's final layout. Because of font differences, figure rescaling, and other factors, authors should not expect the \classoption{reprint} option to give fully accurate estimates of an article's ultimate length after being typeset for the journal.
Occasionally it is necessary to change the formatting from two-column to
one-column to better accommodate very long equations that are more
easily read when typeset to the full width of the page. This is
accomplished using the \env{widetext} environment:
\begin{verbatim}
\begin{widetext}
long equation goes here
\end{widetext}
\end{verbatim}
In two-column mode, this will temporarily return to one-column mode,
balancing the text before the environment into two short columns, and
returning to two-column mode after the environment has
finished. REV\TeX\ ~4.2 will also add horizontal rules to guide the
reader's eye through what may otherwise be a confusing break in the
flow of text. The
\env{widetext} environment has no effect on the output under the
\classoption{preprint} class option because this already uses
one-column formatting.
Use of the \env{widetext} environment should be restricted to the bare
minimum of text that needs to be typeset this way. However, short pieces
of paragraph text and/or math between nearly contiguous wide equations
should be incorporated into the surrounding wide sections.
Low-level control over the column grid can be accomplished with the
\cmd\onecolumngrid\ and \cmd\twocolumngrid\ commands. Using these, one
can avoid the horizontal rules added by \env{widetext}. These commands
should only be used if absolutely necessary. Wide figures and tables
should be accommodated using the proper \verb+*+ environments.
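For example, the column grid can be switched by hand as follows:
\begin{verbatim}
\onecolumngrid
Full-width material goes here.
\twocolumngrid
\end{verbatim}
This produces the same column switch as \env{widetext} but without the
horizontal rules.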
\subsection{Cross-referencing}\label{sec:xrefs}
REV\TeX\ {} inherits the \LaTeXe\ features for labeling and cross-referencing
section headings, equations, tables, and figures. This section
contains a simplified explanation of these cross-referencing features.
The proper usage in the context of section headings, equations,
tables, and figures is discussed in the appropriate sections.
Cross-referencing depends upon the use of ``tags,'' which are defined by
the user. The \cmd\label\marg{key} command is used to identify tags for
REV\TeX\ . Tags are strings of characters that serve to label section
headings, equations, tables, and figures that replace explicit,
by-hand numbering.
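For example, a tag defined on an equation can later be recalled with \cmd\ref:
\begin{verbatim}
\begin{equation}
E = mc^2
\label{eq:einstein}
\end{equation}
As Eq.~(\ref{eq:einstein}) shows, mass and
energy are related.
\end{verbatim}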
Files that use cross-referencing (and almost all manuscripts do)
need to be processed through REV\TeX\ \ at least twice to
ensure that the tags have been properly linked to appropriate numbers.
If any tags are added in subsequent editing sessions,
\LaTeX{} will display a warning message in the log file that ends with
\texttt{... Rerun to get cross-references right}.
Running the file through REV\TeX\ \ again (possibly more than once) will
resolve the cross-references. If the error message persists, check
the labels; the same \marg{key} may have been used to label more than one
object.
Another \LaTeX\ warning is \texttt{There were undefined references},
which indicates the use of a key in a \cmd\ref\ without ever
using it in a \cmd\label\ statement.
REV\TeX\ {} performs autonumbering exactly as in standard \LaTeX.
When the file is processed for the first time,
\LaTeX\ creates an auxiliary file (with the \file{.aux} extension) that
records the value of each \meta{key}. Each subsequent run retrieves
the proper number from the auxiliary file and updates the auxiliary
file. At the end of each run, any change in the value of a \meta{key}
produces a \LaTeX\ warning message.
Note that with footnotes appearing in the bibliography, extra passes
of \LaTeX\ may be needed to resolve all cross-references. For
instance, putting a \cmd\cite\ inside a \cmd\footnote\ will require at
least three passes.
Using the \classname{hyperref} package to create hyperlinked PDF files
will cause reference ranges to be expanded to list every
reference in the range. This behavior can be avoided by using the
\classname{hypernat} package available from \url{www.ctan.org}.
\subsection{Acknowledgments}
Use the \env{acknowledgments} environment for an acknowledgments
section. Depending on the journal substyle, this element may be
formatted as an unnumbered section titled \textit{Acknowledgments} or
simply as a paragraph. Please note the spelling of
``acknowledgments.''
\begin{verbatim}
\begin{acknowledgments}
The authors would like to thank...
\end{acknowledgments}
\end{verbatim}
\subsection{Appendices}
The \cmd\appendix\ command may be used to mark the beginning of the
appendices: all \cmd\section\ commands that follow it create appendix
headings rather than ordinary sections. If there is only one appendix,
use \verb+\appendix*+ instead.
\section{Introduction}
Articles published in American Physical Society journals are converted to
an XML file during final journal production. Other formats such
as PDF are derived directly from the XML, which constitutes the version of record.
Even before journal production, the APS editorial process can make use
of the information in a properly prepared manuscript. Information such
as title, authors, affiliations, etc., can be automatically
extracted and used to populate our manuscript database. References can
also be culled, cross-checked for accuracy, and used to create a
linked version for referees and editors. Moreover, time can be saved
as referrals can be made electronically rather than by conventional
mail. Thus, a well-prepared electronic manuscript can enhance the
entire peer review process from author to reader while making the
whole process less expensive. To this end, authors should follow the
guidelines in this document when preparing their submissions to \textit{Physical Review Letters},
\textit{Reviews of Modern Physics}, \textit{Physical Review A-E, Physical Review X, Physical Review Applied, Physical Review Fluids, Physical Review Materials, Physical Review Accelerators and Beams,} and \textit{Physical Review Physics Education Research}.
Updated versions of this document will be made available at \url{https://journals.aps.org/revtex/}. For more complete
descriptions of how to use the REV\TeX\ \ 4.2 macros, please see the
\textit{REV\TeX\ ~4.2 Author's Guide} included with the REV\TeX\ ~4.2
distribution. Questions about REV\TeX\ \ 4.2 and using it to submit to APS journals may be
emailed to \texttt{[email protected]}.
\section{Formatting}
\subsection{Preprint, reprint, and twocolumn options}
REV\TeX\ ~4.2 offers a \classoption{reprint} class option to typeset a manuscript
in a format that is a close approximation to the actual journal's appearance. It should
be emphasized that this is only an \textit{approximation}; a manuscript may be substantially different
in length or appearance after it goes through our production process. This is mostly due to the choice
of fonts and the scaling of figures.
REV\TeX\ \ 4.2 is designed to
make it straightforward to switch between two-column and single-column
formatting just by changing the class option. Authors may submit with
either the \classoption{reprint} or the \classoption{twocolumn} class options.
The \classoption{preprint} primarily does three things: It increases
the font size to 12pt, increases the line spacing, and changes the
formatting to single column.
\subsection{Paper size}
Manuscripts should be submitted to APS formatted for letter size
paper. Papers are sent electronically to referees who may
want to print them out. Letter size formatting ensures that this will
be trouble free for all referees.
\section{Marking up front matter}
Perhaps the most important macros are those
pertaining to the markup of the front matter (title, authors,
affiliations, abstract, etc.). Note that proper
use of the REV\TeX\ \ 4.2 macros means that explicit centering environments
in the front matter are not needed and should not be used.
\subsection{Title}
The title of the manuscript should be specified using the \m{title} macro. A
double backslash {\textbackslash\textbackslash} may be used to force a line break in a long
title.
\subsection{Authors, affiliations, and collaborations}
\label{sec:authors}
REV\TeX\ \ 4.2 makes it straightforward to input author names and link them up properly with affiliations. Authors should let REV\TeX\ \ 4.2 do the work of grouping authors and affiliations and, if using the superscript style, numbering affiliations. Please follow these guidelines:
\begin{itemize}
\item Use a single \m{author} macro for each author's name. REV\TeX\ \ 4.2 automatically puts in all commas and the word `and.'
\item Use the \m{surname} macro to explicitly indicate if an author's family name consists of more than one name or if the family name is not the author's last name.
\item The \m{email} macro may be used to specify an author's e-mail
address. The \m{thanks} macro must not be used for this. Only the
e-mail address itself may appear in the macro's required argument.
\item The \m{homepage} macro may be used to specify a URL associated
with an author. The \m{thanks} macro must not be used for this. Only the
URL may appear in the macro's required argument.
\item The \m{altaffiliation} macro may be used to specify an alternate
affiliation or temporary address for an author. The \m{thanks} macro
must not be used for this. Only the affiliation
may appear in the macro's required argument.
\item The \m{thanks} macro may be used only if one of the more
specific macros listed above does not apply.
\item Use a single \m{affiliation} for each affiliation.
\item Superscripts linking authors to affiliations must be
accomplished using the \classoption{superscriptaddress} class option
rather than putting in explicit superscripts by hand.
\item A collaboration may be specified by using the \m{collaboration}
macro. The \m{author} macro must not be used for collaborations.
\end{itemize}
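Following these guidelines, a minimal front matter block (with
placeholder names and addresses) might read:
\begin{verbatim}
\author{Bugs Bunny}
\email{[email protected]}
\affiliation{Looney Tune Studios}
\author{Mickey Mouse}
\affiliation{Disney World}
\collaboration{The Toon Collaboration}
\noaffiliation
\end{verbatim}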
\subsection{Abstract}
The abstract must be specified using the \env{abstract}
environment. Note that in REV\TeX\ \ 4.2, the abstract must come before
the \m{maketitle} command. REV\TeX\ \ 4.2 now allows the use of the \env{description}
environment within the abstract to provide \textit{structured abstracts}. For instance, \textit{Physical Review C} would like authors to provide abstracts with sections summarizing the paper's \textbf{Background}, \textbf{Purpose}, \textbf{Method}, \textbf{Results}, and \textbf{Conclusions}. This can be accomplished in the following manner:
\begin{verbatim}
\begin{abstract}
\begin{description}
\item[Background] This part would describe the
context needed to understand what the paper
is about.
\item[Purpose] This part would state the purpose
of the present paper.
\item[Method] This part would describe the methods
used in the paper.
\item[Results] This part would summarize the
results.
\item[Conclusions] This part would state the
conclusions of the paper.
\end{description}
\end{abstract}
\end{verbatim}
\section{References and footnotes}
Authors are strongly encouraged
to use Bib\TeX\ when preparing their bibliographies. If Bib\TeX\ is used, current production processes
require that the \texttt{.bbl} file be included directly into the
manuscript's main \texttt{.tex} file. REV\TeX\ \ 4.2 comes with two Bib\TeX\ style files for formatting
references, one for the \textit{Physical Review} journals and one
for \textit{Review of Modern Physics}. In 4.2, the Bib\TeX\ styles have been modified to display journal article titles in the bibliography.
The following apply whether
Bib\TeX\ is used or not.
\begin{itemize}
\item Authors should use the \m{cite} and \m{bibitem} commands to create
bibliographies and to refer to items in the bibliography. ``By hand"
numbering of references should be avoided.
\item REV\TeX\ ~4.2 provides new syntax for combining multiple citations into a single entry in the bibliography and for putting extra text before and after a reference. Please refer to \textit{REV\TeX\ ~4.2 Author's Guide} included with the REV\TeX\ ~4.2 distribution for full details.
\item Footnotes must be specified using the \m{footnote}
macro. REV\TeX\ \ 4.2 will place the footnotes in
the bibliography for the \textit{Physical Review}
journals. Please note that even if you don't use Bib\TeX, you may have to run Bib\TeX\ to get the footnotes to appear. Footnotes giving additional information about authors (such
as e-mail addresses) must not be specified using the \m{footnote}
macro (see Section~\ref{sec:authors}).
\item Avoid custom footnotes using \m{footnotemark} and \m{footnotetext} [except in the context of tables (see
Section~\ref{sec:tablenotes})].
\item References should be formatted and specified according to the
\textit{Physical Review Style Guide}. Note that using Bib\TeX\ automatically ensures this.
\item URLs should be specified using the \m{url} macro. Bib\TeX\ will automatically take
care of this if the \texttt{url} field is used.
\item E-print identifiers should be included using the \m{eprint} macro. Bib\TeX\ will automatically take care of this if the \texttt{eprint} field is used.
\end{itemize}
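As a sketch, a Bib\TeX\ entry supplying the \texttt{url} and
\texttt{eprint} fields might look like the following (all bibliographic
data here are invented):
\begin{verbatim}
@article{bunny2024,
  author  = {B. Bunny and M. Mouse},
  title   = {Dynamics of animated systems},
  journal = {Phys. Rev. D},
  volume  = {100},
  pages   = {123456},
  year    = {2024},
  eprint  = {2401.00001},
  url     = {https://example.org/preprint}
}
\end{verbatim}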
Please see the REV\TeX\ ~4.2 Author's Guide for new features in REV\TeX\ ~4.2's APS Bib\TeX\ styles, including support for citing data sets, journals that use DOIs in place of page numbers, and journals that use year and issue instead of volume to uniquely identify articles.
\section{Body of the paper}
\subsection{Sectioning and cross-referencing}
For sectioning a manuscript, the basic rule is to use the appropriate
sectioning commands (\m{section}, \m{subsection}, \m{subsubsection},
\textit{etc.}). Cross-referencing a section must be done by using the
proper \m{label} and \m{ref} commands. Cross-referencing by hand is
not allowed. \m{part}, \m{chapter}, and \m{subparagraph} should not be
used.
\subsection{Appendices}
Appendices should be specified using the \m{appendix} command which
specifies that all following sections created with the \m{section}
commands are appendices. If there is only one appendix, then the
\m{appendix*} command should be used instead.
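For example:
\begin{verbatim}
\appendix
\section{Details of the calculation}
\end{verbatim}
Each subsequent \m{section} then starts a further appendix.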
\subsection{Acknowledgments}
Any acknowledgments should be included by using the
\env{acknowledgments} environment. Note that in REV\TeX\ ~4.2, this is
an environment and not a command.
\subsection{Counters}
No counters may be created and the standard ones may not be
altered. If an exceptional label is needed for an equation, the \m{tag}
command (requires the \classoption{amsmath} class option) should be used. Please
note that the use of the \m{tag} command may conflict with the use of the \classoption{hyperref} package
due to an incompatibility between \classoption{amsmath} and \classoption{hyperref}.
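For example, to give an equation the exceptional label $(*)$:
\begin{verbatim}
\begin{equation}
a = b
\tag{$*$}
\end{equation}
\end{verbatim}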
\subsection{Fonts}
It is preferable to avoid the older \TeX\ and \LaTeX\ 2.09 macros for
controlling fonts such as \m{rm}, \m{it}, \textit{etc.} Rather, it is
better to use the macros introduced in \LaTeXe. If the older font
commands are used (they really should be avoided!), be sure to use
curly braces to properly limit the extent of the font
change. \verb+{\bf ...}+ is the correct method.
Commands for controlling text and math font changes are summarized in
Table~\ref{tab:fonts}.
\begin{table}
\caption{\label{tab:fonts}\LaTeXe\ and AMS-\LaTeX\ font summary.}
\begin{ruledtabular}
\begin{tabular}{lp{2in}}
\m{textit} & Italics. Replaces \m{it}\\
\m{textbf} & Bold face. Replaces \m{bf}\\
\m{textrm} & Roman. Replaces \m{rm}\\
\m{textsl} & Slanted. Replaces \m{sl}\\
\m{textsc} & Small caps. Replaces \m{sc}\\
\m{textsf} & Sans serif. Replaces \m{sf}\\
\m{texttt} & Typewriter. Replaces \m{tt}\\
\m{textmd} & Medium series\\
\m{textnormal} & Normal\\
\m{textup} & Upright\\
\m{mathbf} & Bold face\\
\m{mathcal} & Replaces \m{cal}\\
\m{mathit} & Italics\\
\m{mathnormal} & Replaces \m{mit}\\
\m{mathsf} & Sans serif\\
\m{mathtt} & Typewriter\\
\m{mathfrak} & Fraktur: Requires \classoption{amsfonts} or \classoption{amssymb} class option\\
\m{mathbb} & Bold blackboard: Requires \classoption{amsfonts} or \classoption{amssymb} class option\\
\m{bm} & Bold Greek and other math symbols: Requires
\verb+\usepackage{bm}+ and may require the \classoption{amsfonts} class
option
\end{tabular}
\end{ruledtabular}
\end{table}
Bold Greek letters and other bold math symbols should be accomplished
with the use of \texttt{bm.sty} which is distributed as a required
tool with the latest versions of \LaTeXe\ and should be loaded via
\verb+\usepackage{bm}+. This package introduces the \m{bm}
macro. Some bold characters may require using the
\classoption{amsfonts} class option.
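A minimal sketch of bold math with \texttt{bm} (the symbols are illustrative):
\begin{verbatim}
\usepackage{bm}   % in the preamble
...
$\bm{\alpha} \cdot \bm{\Sigma}$
\end{verbatim}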
New fonts may not be declared with \m{newfont}. Font attribute
commands for selecting a font family, shape, and series are all
disallowed; the standard \LaTeXe\ font selection macros listed above
should be used instead.
Finally, the \m{symbol} macro is also not allowed.
\subsection{Environments}
\subsubsection{Lists}
The standard list environments \texttt{itemize}, \texttt{enumerate},
and \texttt{description} are allowed. The \m{item} macro with or without
the optional argument is also allowed. Customization of the list environments
(with macros such as \m{labelstyle}, \m{labelitemi}, \m{labelenumi},
\m{itemsep}, etc.) is allowed but may be ignored in production.
Generalized lists (\m{begin\{list\}}) and trivial lists
(\m{begin\{trivlist\}}) are not allowed.
\subsubsection{Other Environments}
Creating generalized new environments with \m{newenvironment} is not
allowed. Creating a new theorem environment with \m{newtheorem} is
allowed though.
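For instance, a theorem environment might be declared and used as follows (the environment name and statement are illustrative):
\begin{verbatim}
\newtheorem{theorem}{Theorem}
...
\begin{theorem}
A continuous function on a closed interval is bounded.
\end{theorem}
\end{verbatim}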
The tabbing environment and the macros \m{=}, \m{$>$}, \m{`}, and
\m{'} are allowed but may be ignored in production. Conversion
programs used in production should recognize the escapes \m{a=},
\m{a'}, and \m{a`} for using the corresponding accents within a
tabbing environment though.
The \env{verbatim} environment is allowed.
\subsection{Boxes}
Most boxes and macros to manipulate them are not allowed. These
include \m{raisebox}, \m{parbox}, \m{minipage}, \m{rulebox},
\m{framebox}, \m{mbox}, \m{fbox}, \m{savebox}, \m{newsavebox},
\m{sbox}, \m{usebox}, and the environment \m{begin\{lrbox\}}. Rules
produced with \m{rule} are not allowed.
\subsubsection{Margin Notes}
Margin notes created with \m{marginpar} are not allowed, as are the
associated style parameters \m{marginparwidth}, \m{marginparsep}, and
\m{marginparpush}.
\section{Math Markup}
In general, all math markup and the standard math environments from
\LaTeXe\ are allowed. These include \m{begin\{math\}},
\m{begin\{displaymath\}}, \m{begin\{equation\}},
\m{begin\{eqnarray\}}, and \m{begin\{eqnarray*\}}. The shortcuts \$,
\$\$, \m{[}, and \m{]} are allowed. In addition, authors may use
almost all of the additional markup introduced by AMS-\LaTeX\ by using
the \classoption{amsmath} class option. The explicit exceptions are
\m{genfrac}, \m{boxed}, and \m{smash}. The markup contained in
\texttt{amsextra} and \texttt{amsthm} may not be used
though. Commutative diagrams created with the \texttt{amscd} package
are acceptable.
\section{Figures}
\subsection{Figure inclusions}
Figures should be included into a REV\TeX\ ~4.2 manuscript by using the
standard \LaTeXe\ macros. \LaTeXe\ includes
several powerful packages for including the files in various
formats. The two main packages are \texttt{graphics} and
\texttt{graphicx}. Both offer a macro called
\m{includegraphics};
they mainly differ in how arguments for
controlling figure placement (\textit{e.g.}, scaling and rotation)
are passed to the \m{includegraphics}.
The \env{figure} environment should be used to add a caption to the
figure and to allow \LaTeX\ to number and place the figures where they
fit best. If a figure needs to be referred to in the text,
rather than manually numbering the figures a \m{label} should be added
to the figure environment (best practice is to put the label within
the argument of the \m{caption} command) and the \m{ref} macro should be used to
reference this label. Figures that span the page should use the
\m{figure*} environment. The \env{picture} environment must not be
used directly (one can include an Encapsulated PostScript figure that
was produced using the \env{picture} environment of course).
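A minimal figure inclusion might therefore look like this (the file name and caption are illustrative):
\begin{verbatim}
\usepackage{graphicx}  % in the preamble
...
\begin{figure}
\includegraphics[width=\columnwidth]{myfigure}
\caption{\label{fig:sample}An illustrative caption.}
\end{figure}
\end{verbatim}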
\subsection{\label{sec:figplace}Figure placement}
Figures should be placed as close as possible to the point where they are first
referenced. There is no need to place all figures
separately at the end of the manuscript and it is preferred that
authors leave the figures in their natural locations. Authors may
also find useful the REV\TeX\ ~4.2 \classoption{floatfix} class option
which adds emergency float placement processing to avoid ``stuck''
floats which would otherwise be deferred to the end of the job (and
can lead to the fatal \texttt{``Too many unprocessed floats''}
message).
\section{Tables}
\label{sec:tables}
The standard \LaTeXe\ table formatting environments are supported as is
the use of the \texttt{longtable} package. Tables may be reformatted
during production to meet APS style guidelines.
Here are some helpful hints for trying to get tables formatted correctly:
\begin{itemize}
\item Use the \texttt{longtable} package to get tables to break
across pages.
\item The macro \m{squeezetable} will reduce the font size of the
table. This macro must occur within a group outside the table
environment. The proper markup is:
\begin{verbatim}
\begingroup
\squeezetable
\begin{table}
...
\end{table}
\endgroup
\end{verbatim}
\item Try using the float placement option \texttt{H} which will
enable \LaTeX\ to break a float across pages. Long tables are more
attractively set with \env{longtable} however.
\begin{verbatim}
\begin{table}[H]
\begin{ruledtabular}
\begin{tabular}
...
\end{tabular}
\end{ruledtabular}
\end{table}
\end{verbatim}
\end{itemize}
\subsection{Doubled rules and table formatting}
REV\TeX~4.2 provides the \env{ruledtabular} environment which
automatically puts the scotch rules (double lines) around tables and
formats all enclosed \env{tabular} environments to the full width of
the tables and improves inter-column spacing. This environment should
be used whenever possible.
\subsection{Wide tables}
When typesetting using \classoption{twocolumn}, tables can either span
a single column or both columns. Using the '\verb+*+'-ed version of
the \env{table} or \env{longtable} environments produces wide tables
that span the columns.
Tables that are very wide and that may be better typeset in a
landscape orientation (rotated 90 degrees) should be enclosed in a
\env{turnpage} environment. This will place the rotated table on its own
page. Note that some dvi previewers may not be able to show the table
properly, but \texttt{dvips} and \texttt{pdflatex} work correctly.
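The markup follows this pattern (table contents elided):
\begin{verbatim}
\begin{turnpage}
\begin{table*}
\caption{\label{tab:rotated}A very wide table.}
\begin{ruledtabular}
\begin{tabular}{...}
...
\end{tabular}
\end{ruledtabular}
\end{table*}
\end{turnpage}
\end{verbatim}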
\subsection{Table placement}
Tables should be placed as close as possible to the point where they
are first referenced. There is no need to place all tables separately
at the end of the manuscript and this is not desirable for APS
purposes. The class option \classoption{floatfix} may be helpful for
table placement as well as figure placement (see Section~\ref{sec:figplace}).
\subsection{Aligning columns on a decimal point}
The standard \LaTeXe\ macro package \classoption{dcolumn} should be
used to accomplish this.
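In a REV\TeX\ document the \texttt{d} column specifier becomes available once \texttt{dcolumn} is loaded; a minimal sketch (the entries are illustrative):
\begin{verbatim}
\usepackage{dcolumn}   % in the preamble
...
\begin{tabular}{ld}
Sample A & 1.234  \\
Sample B & 12.5   \\
\end{tabular}
\end{verbatim}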
\subsection{Tablenotes}
\label{sec:tablenotes}
Footnotes in tables (tablenotes) should use the \m{footnote}
macro. However, if more than one reference to the same footnote is
needed, authors may use \m{footnotetext} and \m{footnotemark}. This
will produce notes (labeled by lower-case roman letters) inserted
below the table rather than in the reference section or at the bottom
of the page.
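A sketch of a tablenote shared by several entries (the entries are illustrative):
\begin{verbatim}
\begin{table}
\caption{\label{tab:shared}Entries sharing one note.}
\begin{ruledtabular}
\begin{tabular}{cc}
A\footnotemark[1] & B\footnotemark[1] \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{A note shared by both entries.}
\end{table}
\end{verbatim}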
\section{Author-defined macros}
Authors may define convenience macros to save keystrokes; however,
such macros may not invoke \TeX\ macros such as \m{if} or other
context-dependent commands. \LaTeXe\ provides three macros for
declaring new commands: \m{providecommand}, \m{newcommand}, and
\m{renewcommand} (as well as their `\verb+*+'-ed versions). These
should be used. Authors may not use \TeX\relax's low-level commands
\m{def}, \m{edef}, and \m{gdef}.
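For example, a convenience macro might be declared with \verb+\newcommand+ (the macro name is illustrative):
\begin{verbatim}
\newcommand{\msun}{M_{\odot}}
...
a mass of $2\,\msun$
\end{verbatim}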
\section{Summary}
To ensure the best use of \TeX\ manuscripts, authors need to follow
the guidelines specified here. Use of low-level formatting commands to
finely control horizontal and vertical spacing may be ignored during
production or, even worse, may make it impossible to convert the
manuscript to XML. Authors should keep
things as simple as possible and correctly use the proper REV\TeX\ ~4.2
or \LaTeXe\ macros. Any questions about usage may be directed to
\texttt{[email protected]}.
\end{document}
\section{\label{sec:level1}First-level heading:\protect\\ The line
break was forced \lowercase{via} \textbackslash\textbackslash}
This sample document demonstrates proper use of REV\TeX~4.2 (and
\LaTeXe) in manuscripts prepared for submission to AAPM
journals. Further information can be found in the documentation included in the distribution or available at
\url{http://www.aapm.org} and in the documentation for
REV\TeX~4.2 itself.
When commands are referred to in this example file, they are always
shown with their required arguments, using normal \TeX{} format. In
this format, \verb+#1+, \verb+#2+, etc. stand for required
author-supplied arguments to commands. For example, in
\verb+\section{#1}+ the \verb+#1+ stands for the title text of the
author's section heading, and in \verb+\title{#1}+ the \verb+#1+
stands for the title text of the paper.
Line breaks in section headings at all levels can be introduced using
\textbackslash\textbackslash. A blank input line tells \TeX\ that the
paragraph has ended.
\subsection{\label{sec:level2}Second-level heading: Formatting}
This file may be formatted in both the \texttt{preprint} (the default) and
\texttt{reprint} styles; the latter format may be used to
mimic final journal output. In addition, there is another
option available, \texttt{lengthcheck}, which formats the document as closely
as possible to an actual journal article, to facilitate the author's
performance of a length check. Either format may be used for submission
purposes; however, for peer review and production, AAPM will format the
article using the \texttt{preprint} class option. Hence, it is
essential that authors check that their manuscripts format acceptably
under \texttt{preprint}. Manuscripts submitted to AAPM that do not
format correctly under the \texttt{preprint} option may be delayed in
both the editorial and production processes.
The \texttt{widetext} environment will make the text the width of the
full page, as on page~\pageref{eq:wideeq}. (Note the use of the
\verb+\pageref{#1}+ command to get the page number right automatically.) The
width-changing commands only take effect in \texttt{twocolumn}
formatting. It has no effect if \texttt{preprint} formatting is chosen
instead.
\subsubsection{\label{sec:level3}Third-level heading: Citations and Footnotes}
Citations in text refer to entries in the Bibliography;
they use the commands \verb+\cite{#1}+ or \verb+\onlinecite{#1}+.
Because REV\TeX\ uses the \verb+natbib+ package of Patrick Daly,
its entire repertoire of commands is available in your document;
see the \verb+natbib+ documentation for further details.
The argument of \verb+\cite+ is a comma-separated list of \emph{keys};
a key may consist of letters and numerals.
By default, AAPM citations are numerical.\cite{feyn54}
To give a textual citation, use \verb+\onlinecite{#1}+ (Refs.~\onlinecite{witten2001,epr,Bire82}).
REV\TeX\ ``collapses'' lists of consecutive numerical citations when appropriate.
To illustrate, we cite several together \cite{feyn54,witten2001,epr,Berman1983},
and once again (Refs.~\onlinecite{epr,feyn54,Bire82,Berman1983}).
Note that, when numerical citations are used, references are sorted into the same order in which they appear in the bibliography.
A reference within the bibliography is specified with a \verb+\bibitem{#1}+ command,
where the argument is the citation key mentioned above.
\verb+\bibitem{#1}+ commands may be crafted by hand or, preferably,
generated by using Bib\TeX.
The AAPM styles for REV\TeX~4 include Bib\TeX\ style file
\verb+aapmrev4-2.bst+, appropriate for
numbered bibliography.
REV\TeX~4 will automatically choose the style appropriate for
the document's selected class options: the default is numerical.
This sample file demonstrates a simple use of Bib\TeX\
via a \verb+\bibliography+ command referencing the \verb+aapmsamp.bib+ file.
Running Bib\TeX\ (in this case \texttt{bibtex
aapmsamp}) after the first pass of \LaTeX\ produces the file
\verb+aapmsamp.bbl+ which contains the automatically formatted
\verb+\bibitem+ commands (including extra markup information via
\verb+\bibinfo+ commands). If not using Bib\TeX, the
\verb+thebibliography+ environment should be used instead.
\paragraph{Fourth-level heading is run in.}%
Footnotes are produced using the \verb+\footnote{#1}+ command.
Numerical style citations put footnotes into the
bibliography\footnote{Automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography.}.
Note: due to the method used to place footnotes in the bibliography, \emph{you
must re-run BibTeX every time you change any of your document's
footnotes}.
\section{Math and Equations}
Inline math may be typeset using the \verb+$+ delimiters. Bold math
symbols may be achieved using the \verb+bm+ package and the
\verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can
be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and
Blackboard (or open face or double struck) characters should be
typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands
respectively. Both are supplied by the \texttt{amssymb} package. For
example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and
\verb+$\mathfrak{G}$+ gives $\mathfrak{G}$.
In \LaTeX\ there are many different ways to display equations, and a
few preferred ways are noted below. Displayed math will flush left by
default.
Below we have numbered single-line equations, the most common kind:
\begin{eqnarray}
\chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2}
\left(
\begin{array}{c}
|{\bf p}|+p_z\\
p_x+ip_y
\end{array}\right)\;,
\\
\left\{%
\openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}%
\label{eq:one}.
\end{eqnarray}
Note the open one in Eq.~(\ref{eq:one}).
Not all numbered equations will fit within a narrow column this
way. The equation number will move down automatically if it cannot fit
on the same line with a one-line equation:
\begin{equation}
\left\{
ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}.
\end{equation}
When the \verb+\label{#1}+ command is used [cf. input for
Eq.~(\ref{eq:one})], the equation can be referred to in text without
knowing the equation number that \TeX\ will assign to it. Just
use \verb+\ref{#1}+, where \verb+#1+ is the same name that used in
the \verb+\label{#1}+ command.
Unnumbered single-line equations can be typeset
using the \verb+\[+, \verb+\]+ format:
\[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \]
\subsection{Multiline equations}
Multiline equations are obtained by using the \verb+eqnarray+
environment. Use the \verb+\nonumber+ command at the end of each line
to avoid assigning a number:
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
\delta_{\sigma_1,-\sigma_2}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1),
\end{eqnarray}
\begin{eqnarray}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\nonumber \\
& &\times \left( \sum_{i<j}\right)
\sum_{\text{perm}}
\frac{1}{S_{12}}
\frac{1}{S_{12}}
\sum_\tau c^f_\tau~.
\end{eqnarray}
\textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline
equation if \verb+\nonumber+ is also used on that line. Incorrect
cross-referencing will result. Notice the use of \verb+\text{#1}+ for
using a Roman font within a math environment.
To set a multiline equation without \emph{any} equation
numbers, use the \verb+\begin{eqnarray*}+,
\verb+\end{eqnarray*}+ format:
\begin{eqnarray*}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\\
& &\times \left( \sum_{i<j}\right)
\left(
\sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}}
\right)
\frac{1}{S_{12}}~.
\end{eqnarray*}
To obtain numbers not normally produced by the automatic numbering,
use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired
equation number. For example, to get an equation number of
(\ref{eq:mynum}),
\begin{equation}
g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum}
\end{equation}
A few notes on \verb=\tag{#1}=. \verb+\tag{#1}+ requires
\texttt{amsmath}. The \verb+\tag{#1}+ must come before the
\verb+\label{#1}+, if any. The numbering set with \verb+\tag{#1}+ is
\textit{transparent} to the automatic numbering in REV\TeX{};
therefore, the number must be known ahead of time, and it must be
manually adjusted if other equations are added. \verb+\tag{#1}+ works
with both single-line and multiline equations. \verb+\tag{#1}+ should
only be used in exceptional cases; do not use it to number all
equations in a paper.
Note the equation number gets reset again:
\begin{equation}
g^+g^+g^+ \rightarrow g^+g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~.
\end{equation}
Enclosing single-line and multiline equations in
\verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce
a set of equations that are ``numbered'' with letters, as shown in
Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below:
\begin{subequations}
\label{eq:whole}
\begin{equation}
\left\{
abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}
\right\},\label{subeq:1}
\end{equation}
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2}
\end{eqnarray}
\end{subequations}
Putting a \verb+\label{#1}+ command right after the
\verb+\begin{subequations}+, allows one to
reference all the equations in a subequations environment. For
example, the equations in the preceding subequations environment were
Eqs.~(\ref{eq:whole}).
\subsubsection{Wide equations}
The equation that follows is set in a wide format, i.e., it spans
across the full page. The wide format is reserved for long equations
that cannot be easily broken into four lines or less:
\begin{widetext}
\begin{equation}
{\cal R}^{(\text{d})}=
g_{\sigma_2}^e
\left(
\frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)
+ x_WQ_e
\left(
\frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)\;. \label{eq:wideeq}
\end{equation}
\end{widetext}
This is typed to show the output is in wide format.
(Since there is no input line between \verb+\end{equation}+ and
this paragraph, there is no paragraph indent for this paragraph.)
\section{Cross-referencing}
REV\TeX{} will automatically number sections, equations, figure
captions, and tables. In order to reference them in text, use the
\verb+\label{#1}+ and \verb+\ref{#1}+ commands. To reference a
particular page, use the \verb+\pageref{#1}+ command.
The \verb+\label{#1}+ should appear in a section heading, within an
equation, or in a table or figure caption. The \verb+\ref{#1}+ command
is used in the text where the citation is to be displayed. Some
examples: Section~\ref{sec:level1} on page~\pageref{sec:level1},
Table~\ref{tab:table1},%
\begin{table}
\caption{\label{tab:table1}This is a narrow table which fits into a
text column when using \texttt{twocolumn} formatting. Note that
REV\TeX~4 adjusts the intercolumn spacing so that the table fills the
entire width of the column. Table captions are numbered
automatically. This table illustrates left-aligned, centered, and
right-aligned columns. }
\begin{ruledtabular}
\begin{tabular}{lcr}
Left\footnote{Note a.}&Centered\footnote{Note b.}&Right\\
\hline
1 & 2 & 3\\
10 & 20 & 30\\
100 & 200 & 300\\
\end{tabular}
\end{ruledtabular}
\end{table}
and Fig.~\ref{fig:epsart}.
\section{Figures and Tables}
Figures and tables are typically ``floats''; \LaTeX\ determines their
final position via placement rules.
\LaTeX\ isn't always successful in automatically placing floats where you wish them.
Figures are marked up with the \texttt{figure} environment, the content of which
imports the image (\verb+\includegraphics+) followed by the figure caption (\verb+\caption+).
The argument of the latter command should itself contain a \verb+\label+ command if you
wish to refer to your figure with \verb+\ref+.
Import your image using either the \texttt{graphics} or
\texttt{graphicx} packages. These packages both define the
\verb+\includegraphics{#1}+ command, but they differ in the optional
arguments for specifying the orientation, scaling, and translation of the figure.
Fig.~\ref{fig:epsart}%
\begin{figure}
\includegraphics{fig_1}%
\caption{\label{fig:epsart} A figure caption. The figure captions are
automatically numbered.}
\end{figure}
is small enough to fit in a single column, while
Fig.~\ref{fig:wide}%
\begin{figure*}
\includegraphics{fig_2}%
\caption{\label{fig:wide}Use the \texttt{figure*} environment to get a wide
figure, spanning the page in \texttt{twocolumn} formatting.}
\end{figure*}
is too wide for a single column,
so instead the \texttt{figure*} environment has been used.
The analog of the \texttt{figure} environment is \texttt{table}, which uses
the same \verb+\caption+ command.
However, you should type your caption command first within the \texttt{table},
instead of last as you did for \texttt{figure}.
The heart of any table is the \texttt{tabular} environment,
which represents the table content as a (vertical) sequence of table rows,
each containing a (horizontal) sequence of table cells.
Cells are separated by the \verb+&+ character;
the row terminates with \verb+\\+.
The required argument for the \texttt{tabular} environment
specifies how data are displayed in each of the columns.
For instance, a column
may be centered (\verb+c+), left-justified (\verb+l+), right-justified (\verb+r+),
or aligned on a decimal point (\verb+d+).
(Table~\ref{tab:table4}%
\begin{table}
\caption{\label{tab:table4}Numbers in columns Three--Five have been
aligned by using the ``d'' column specifier (requires the
\texttt{dcolumn} package).
Non-numeric entries (those entries without
a ``.'') in a ``d'' column are aligned on the decimal point.
Use the
``D'' specifier for more complex layouts. }
\begin{ruledtabular}
\begin{tabular}{ccddd}
One&Two&\mbox{Three}&\mbox{Four}&\mbox{Five}\\
\hline
one&two&\mbox{three}&\mbox{four}&\mbox{five}\\
He&2& 2.77234 & 45672. & 0.69 \\
C\footnote{Some tables require footnotes.}
&C\footnote{Some tables need more than one footnote.}
& 12537.64 & 37.66345 & 86.37 \\
\end{tabular}
\end{ruledtabular}
\end{table}
illustrates the use of decimal column alignment.)
Extra column-spacing may be specified as well, although
REV\TeX~4 sets this spacing so that the columns fill the width of the
table.
Horizontal rules are typeset using the \verb+\hline+
command.
The doubled (or Scotch) rules that appear at the top and
bottom of a table can be achieved by enclosing the \texttt{tabular}
environment within a \texttt{ruledtabular} environment.
Rows whose columns span multiple columns can be typeset using \LaTeX's
\verb+\multicolumn{#1}{#2}{#3}+ command
(for example, see the first row of Table~\ref{tab:table3}).%
\begin{table*}
\caption{\label{tab:table3}This is a wide table that spans the page
width in \texttt{twocolumn} mode. It is formatted using the
\texttt{table*} environment. It also demonstrates the use of
\textbackslash\texttt{multicolumn} in rows with entries that span
more than one column.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
&\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\
Ion&1st alternative&2nd alternative&1st alternative
&2nd alternative\\ \hline
K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\
Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.}
&$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\
Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnote{This is a footnote in a table that spans the full page
width in \texttt{twocolumn} mode. It is supposed to set on the full width of the page, just as the caption does. }
&$(4e)^{\text{a}}$\\
He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\
Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
The tables in this document illustrate various effects.
Tables that fit in a narrow column are contained in a \texttt{table}
environment.
Table~\ref{tab:table3} is a wide table, therefore set with the
\texttt{table*} environment.
Lengthy tables may need to break across pages.
A simple way to allow this is to specify
the \verb+[H]+ float placement on the \texttt{table} or
\texttt{table*} environment.
Alternatively, using the standard \LaTeXe\ package \texttt{longtable}
gives more control over how tables break and allows headers and footers
to be specified for each page of the table.
An example of the use of \texttt{longtable} can be found
in the file \texttt{summary.tex} that is included with the REV\TeX~4
distribution.
There are two methods for setting footnotes within a table (these
footnotes will be displayed directly below the table rather than at
the bottom of the page or in the bibliography).
The easiest
and preferred method is just to use the \verb+\footnote{#1}+
command. This will automatically enumerate the footnotes with
lowercase roman letters.
However, it is sometimes necessary to have
multiple entries in the table share the same footnote.
In this case,
create the footnotes using
\verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+.
\texttt{\#1} is a numeric value.
Each time the same value for \texttt{\#1} is used,
the same mark is produced in the table.
The \verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular}
environment.
Examine the \LaTeX\ source and output for Tables~\ref{tab:table1} and
\ref{tab:table2}%
\begin{table}
\caption{\label{tab:table2}A table with more columns still fits
properly in a column. Note that several entries share the same
footnote. Inspect the \LaTeX\ input for this table to see
exactly how it is done.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
&$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$&
&$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\
\hline
Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1]
& 0.680 & 1.870 & 3.700 \\
Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2]
& 0.450 & 1.930 & 3.760 \\
Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3]
& 0.750 & 2.170 & 3.560 \\
Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4]
& 0.900 & 2.370 & 3.720 \\
Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2]
& 0.380 & 1.730 & 2.830 \\
Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5]
& 0.760 & 2.110 & 3.120 \\
Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5]
& 1.120 & 2.620 & 3.480 \\
Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3]
& 1.330 & 2.800 & 3.590 \\
Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4]
& 1.420 & 3.030 & 3.740 \\
In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5]
& 0.960 & 2.460 & 3.780 \\
Tl& 0.480 & 18.90 & 3.550 & & & & \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.}
\footnotetext[2]{Here's the second.}
\footnotetext[3]{Here's the third.}
\footnotetext[4]{Here's the fourth.}
\footnotetext[5]{And etc.}
\end{table}
for an illustration.
All AAPM journals require that the initial citation of
figures or tables be in numerical order.
\LaTeX's automatic numbering of floats is your friend here:
just put each \texttt{figure} environment immediately following
its first reference (\verb+\ref+), as we have done in this example file.
\begin{acknowledgments}
We wish to acknowledge the support of the author community in using
REV\TeX{}, offering suggestions and encouragement, testing new versions,
\dots.
\end{acknowledgments}
\section{\label{sec:level1}First-level heading:\protect\\ The line
break was forced \lowercase{via} \textbackslash\textbackslash}
This sample document demonstrates proper use of REV\TeX~4.2 (and
\LaTeXe) in manuscripts prepared for submission to APS
journals. Further information can be found in the REV\TeX~4.2
documentation included in the distribution or available at
\url{http://journals.aps.org/revtex/}.
When commands are referred to in this example file, they are always
shown with their required arguments, using normal \TeX{} format. In
this format, \verb+#1+, \verb+#2+, etc. stand for required
author-supplied arguments to commands. For example, in
\verb+\section{#1}+ the \verb+#1+ stands for the title text of the
author's section heading, and in \verb+\title{#1}+ the \verb+#1+
stands for the title text of the paper.
Line breaks in section headings at all levels can be introduced using
\textbackslash\textbackslash. A blank input line tells \TeX\ that the
paragraph has ended. Note that top-level section headings are
automatically uppercased. If a specific letter or word should appear in
lowercase instead, you must escape it using \verb+\lowercase{#1}+ as
in the word ``via'' above.
\subsection{\label{sec:level2}Second-level heading: Formatting}
This file may be formatted in either the \texttt{preprint} or
\texttt{reprint} style. \texttt{reprint} format mimics final journal output.
Either format may be used for submission purposes. \texttt{letter} sized paper should
be used when submitting to APS journals.
\subsubsection{Wide text (A level-3 head)}
The \texttt{widetext} environment will make the text the width of the
full page, as on page~\pageref{eq:wideeq}. (Note the use of the
\verb+\pageref{#1}+ command to refer to the page number.)
\paragraph{Note (Fourth-level head is run in)}
The width-changing commands only take effect in two-column formatting.
There is no effect if text is in a single column.
\subsection{\label{sec:citeref}Citations and References}
A citation in text uses the command \verb+\cite{#1}+ or
\verb+\onlinecite{#1}+ and refers to an entry in the bibliography.
An entry in the bibliography is a reference to another document.
\subsubsection{Citations}
Because REV\TeX\ uses the \verb+natbib+ package of Patrick Daly,
the entire repertoire of commands in that package is available for your document;
see the \verb+natbib+ documentation for further details. Please note that
REV\TeX\ requires version 8.31a or later of \verb+natbib+.
\paragraph{Syntax}
The argument of \verb+\cite+ may be a single \emph{key},
or may consist of a comma-separated list of keys.
The citation \emph{key} may contain
letters, numbers, the dash (-) character, or the period (.) character.
New with natbib 8.3 is an extension to the syntax that allows for
a star (*) form and two optional arguments on the citation key itself.
The syntax of the \verb+\cite+ command is thus (informally stated)
\begin{quotation}\flushleft\leftskip1em
\verb+\cite+ \verb+{+ \emph{key} \verb+}+, or\\
\verb+\cite+ \verb+{+ \emph{optarg+key} \verb+}+, or\\
\verb+\cite+ \verb+{+ \emph{optarg+key} \verb+,+ \emph{optarg+key}\ldots \verb+}+,
\end{quotation}\noindent
where \emph{optarg+key} signifies
\begin{quotation}\flushleft\leftskip1em
\emph{key}, or\\
\texttt{*}\emph{key}, or\\
\texttt{[}\emph{pre}\texttt{]}\emph{key}, or\\
\texttt{[}\emph{pre}\texttt{]}\texttt{[}\emph{post}\texttt{]}\emph{key}, or even\\
\texttt{*}\texttt{[}\emph{pre}\texttt{]}\texttt{[}\emph{post}\texttt{]}\emph{key}.
\end{quotation}\noindent
where \emph{pre} and \emph{post} is whatever text you wish to place
at the beginning and end, respectively, of the bibliographic reference
(see Ref.~[\onlinecite{witten2001}] and the two under Ref.~[\onlinecite{feyn54}]).
(Keep in mind that no automatic space or punctuation is applied.)
It is highly recommended that you put the entire \emph{pre} or \emph{post} portion
within its own set of braces, for example:
\verb+\cite+ \verb+{+ \texttt{[} \verb+{+\emph{text}\verb+}+\texttt{]}\emph{key}\verb+}+.
The extra set of braces will keep \LaTeX\ out of trouble if your \emph{text} contains the comma (,) character.
The star (*) modifier to the \emph{key} signifies that the reference is to be
merged with the previous reference into a single bibliographic entry,
a common idiom in APS and AIP articles (see below, Ref.~[\onlinecite{epr}]).
When references are merged in this way, they are separated by a semicolon instead of
the period (full stop) that would otherwise appear.
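As a brief illustration (using citation keys that already appear in this sample; the pre/post text is arbitrary), the starred and optional-argument forms might be written as:

```latex
% Merge this reference into the previous one's bibliography entry:
\cite{feyn54,*epr}

% Add text before and after the bibliography entry itself; the extra
% braces protect the text in case it contains a comma:
\cite{[{See also }][{, and references therein}]witten2001}
```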
\paragraph{Eliding repeated information}
When a reference is merged, some of its fields may be elided: for example,
when the author matches that of the previous reference, it is omitted.
If both author and journal match, both are omitted.
If the journal matches, but the author does not, the journal is replaced by \emph{ibid.},
as exemplified by Ref.~[\onlinecite{epr}].
These rules embody common editorial practice in APS and AIP journals and will only
be in effect if the markup features of the APS and AIP Bib\TeX\ styles are employed.
\paragraph{The options of the cite command itself}
Please note that optional arguments to the \emph{key} change the reference in the bibliography,
not the citation in the body of the document.
For the latter, use the optional arguments of the \verb+\cite+ command itself:
\verb+\cite+ \texttt{*}\allowbreak
\texttt{[}\emph{pre-cite}\texttt{]}\allowbreak
\texttt{[}\emph{post-cite}\texttt{]}\allowbreak
\verb+{+\emph{key-list}\verb+}+.
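For comparison, a hedged sketch of these command-level options, which change only the in-text citation (the bracketed text here is arbitrary):

```latex
% "see" appears before the citation mark and "Sec.~II" after it;
% the bibliography entry for the key is unchanged:
\cite[see ][, Sec.~II]{witten2001}
```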
\subsubsection{Example citations}
By default, citations are numerical\cite{Beutler1994}.
Author-year citations are used when the journal is RMP.
To give a textual citation, use \verb+\onlinecite{#1}+:
Refs.~\onlinecite{[][{, and references therein}]witten2001,Bire82}.
By default, the \texttt{natbib} package automatically sorts your citations into numerical order and ``compresses'' runs of three or more consecutive numerical citations.
REV\TeX\ provides the ability to automatically change the punctuation when switching between journal styles that provide citations in square brackets and those that use a superscript style instead. This is done through the \texttt{citeautoscript} option. For instance, the journal style \texttt{prb} automatically invokes this option because \textit{Physical
Review B} uses superscript-style citations. The effect is to move the punctuation, which normally comes after a citation in square brackets, to its proper position before the superscript.
To illustrate, we cite several together
\cite{[See the explanation of time travel in ]feyn54,*[The classical relativistic treatment of ][ is a relative classic]epr,witten2001,Berman1983,Davies1998,Bire82},
and once again in different order (Refs.~\cite{epr,feyn54,Bire82,Berman1983,witten2001,Davies1998}).
Note that the citations were both compressed and sorted. Furthermore, running this sample file under the \texttt{prb} option will move the punctuation to the correct place.
When the \verb+prb+ class option is used, the \verb+\cite{#1}+ command
displays the reference's number as a superscript rather than in
square brackets. Note that the location of the \verb+\cite{#1}+
command should be adjusted for the reference style: the superscript
references in \verb+prb+ style must appear after punctuation;
otherwise the reference must appear before any punctuation. This
sample was written for the regular (non-\texttt{prb}) citation style.
The command \verb+\onlinecite{#1}+ in the \texttt{prb} style also
displays the reference on the baseline.
\subsubsection{References}
A reference in the bibliography is specified by a \verb+\bibitem{#1}+ command
with the same argument as the \verb+\cite{#1}+ command.
\verb+\bibitem{#1}+ commands may be crafted by hand or, preferably,
generated by Bib\TeX.
REV\TeX~4.2 includes Bib\TeX\ style files
\verb+apsrev4-2.bst+, \verb+apsrmp4-2.bst+ appropriate for
\textit{Physical Review} and \textit{Reviews of Modern Physics},
respectively.
\subsubsection{Example references}
This sample file employs the \verb+\bibliography+ command,
which formats the \texttt{\jobname .bbl} file
and specifies which bibliographic databases are to be used by Bib\TeX\
(one of these should be by arXiv convention \texttt{\jobname .bib}).
Running Bib\TeX\ (via \texttt{bibtex \jobname})
after the first pass of \LaTeX\ produces the file
\texttt{\jobname .bbl} which contains the automatically formatted
\verb+\bibitem+ commands (including extra markup information via
\verb+\bibinfo+ and \verb+\bibfield+ commands).
If not using Bib\TeX, you will have to create the \verb+thebibliography+ environment
and its \verb+\bibitem+ commands by hand.
Numerous examples of the use of the APS bibliographic entry types appear in the bibliography of this sample document.
You can refer to the \texttt{\jobname .bib} file,
and compare its information to the formatted bibliography itself.
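If Bib\TeX\ is not used, the environment can be written by hand; the following is a schematic sketch in which the key \texttt{mykey} and the entry data are placeholders, not real references:

```latex
\begin{thebibliography}{99}
% The argument of \bibitem is the key used in \cite{mykey}:
\bibitem{mykey}
A.~Author and B.~Author, Journal Name \textbf{1}, 100 (2024).
\end{thebibliography}
```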
\subsection{Footnotes}%
Footnotes, produced using the \verb+\footnote{#1}+ command,
are usually integrated into the bibliography alongside the other entries.
Numerical citation styles do this%
\footnote{Automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography.};
author-year citation styles place the footnote at the bottom of the text column.
Note: due to the method used to place footnotes in the bibliography,
\emph{you must re-run Bib\TeX\ every time you change any of your document's footnotes}.
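A minimal sketch of a footnote in running text (the \% before the command suppresses a spurious space at the line break):

```latex
This claim needs a caveat%
\footnote{With numerical citation styles, this text is placed in the
bibliography; remember to re-run Bib\TeX\ after editing it.}
that does not interrupt the sentence.
```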
\section{Math and Equations}
Inline math may be typeset using the \verb+$+ delimiters. Bold math
symbols may be achieved using the \verb+bm+ package and the
\verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can
be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and
Blackboard (or open face or double struck) characters should be
typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands
respectively. Both are supplied by the \texttt{amssymb} package. For
example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and
\verb+$\mathfrak{G}$+ gives $\mathfrak{G}$.
In \LaTeX\ there are many different ways to display equations, and a
few preferred ways are noted below. Displayed math will center by
default. Use the class option \verb+fleqn+ to flush equations left.
Below we have numbered single-line equations; this is the most common
type of equation in \textit{Physical Review}:
\begin{eqnarray}
\chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2}
\left(
\begin{array}{c}
|{\bf p}|+p_z\\
p_x+ip_y
\end{array}\right)\;,
\\
\left\{%
\openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}%
\label{eq:one}.
\end{eqnarray}
Note the open one in Eq.~(\ref{eq:one}).
Not all numbered equations will fit within a narrow column this
way. The equation number will move down automatically if it cannot fit
on the same line with a one-line equation:
\begin{equation}
\left\{
ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}.
\end{equation}
When the \verb+\label{#1}+ command is used [cf. input for
Eq.~(\ref{eq:one})], the equation can be referred to in text without
knowing the equation number that \TeX\ will assign to it. Just
use \verb+\ref{#1}+, where \verb+#1+ is the same name that was used in
the \verb+\label{#1}+ command.
Unnumbered single-line equations can be typeset
using the \verb+\[+, \verb+\]+ format:
\[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \]
\subsection{Multiline equations}
Multiline equations are obtained by using the \verb+eqnarray+
environment. Use the \verb+\nonumber+ command at the end of each line
to avoid assigning a number:
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
\delta_{\sigma_1,-\sigma_2}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1),
\end{eqnarray}
\begin{eqnarray}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\nonumber \\
& &\times \left( \sum_{i<j}\right)
\sum_{\text{perm}}
\frac{1}{S_{12}}
\frac{1}{S_{12}}
\sum_\tau c^f_\tau~.
\end{eqnarray}
\textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline
equation if \verb+\nonumber+ is also used on that line. Incorrect
cross-referencing will result. Notice the use of \verb+\text{#1}+ for
using a Roman font within a math environment.
To set a multiline equation without \emph{any} equation
numbers, use the \verb+\begin{eqnarray*}+,
\verb+\end{eqnarray*}+ format:
\begin{eqnarray*}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\\
& &\times \left( \sum_{i<j}\right)
\left(
\sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}}
\right)
\frac{1}{S_{12}}~.
\end{eqnarray*}
To obtain numbers not normally produced by the automatic numbering,
use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired
equation number. For example, to get an equation number of
(\ref{eq:mynum}),
\begin{equation}
g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum}
\end{equation}
\paragraph{A few notes on \texttt{tag}s}
\verb+\tag{#1}+ requires the \texttt{amsmath} package.
Place the \verb+\tag{#1}+ command before the \verb+\label{#1}+, if any.
The numbering produced by \verb+\tag{#1}+ \textit{does not affect}
the automatic numbering in REV\TeX;
therefore, the number must be known ahead of time,
and it must be manually adjusted if other equations are added.
\verb+\tag{#1}+ works with both single-line and multiline equations.
\verb+\tag{#1}+ should only be used in exceptional cases---%
do not use it to number many equations in your paper.
Please note that this feature of the \texttt{amsmath} package
is \emph{not} compatible with the \texttt{hyperref} (6.77u) package.
Enclosing display math within
\verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce
a set of equations that are labeled with letters, as shown in
Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below.
You may include any number of single-line and multiline equations,
although it is probably not a good idea to follow one display math
directly after another.
\begin{subequations}
\label{eq:whole}
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2}
\end{eqnarray}
\begin{equation}
\left\{
abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}
\right\},\label{subeq:1}
\end{equation}
\end{subequations}
Giving a \verb+\label{#1}+ command directly after the \verb+\begin{subequations}+,
allows you to reference all the equations in the \texttt{subequations} environment.
For example, the equations in the preceding subequations environment were
Eqs.~(\ref{eq:whole}).
\subsubsection{Wide equations}
The equation that follows is set in a wide format, i.e., it spans the full page.
The wide format is reserved for long equations
that cannot easily be set in a single column:
\begin{widetext}
\begin{equation}
{\cal R}^{(\text{d})}=
g_{\sigma_2}^e
\left(
\frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)
+ x_WQ_e
\left(
\frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)\;.
\label{eq:wideeq}
\end{equation}
\end{widetext}
This is typed to show how the output appears in wide format.
(Incidentally, since there is no blank line between the \texttt{equation} environment above
and the start of this paragraph, this paragraph is not indented.)
\section{Cross-referencing}
REV\TeX{} will automatically number such things as
sections, footnotes, equations, figure captions, and table captions.
In order to reference them in text, use the
\verb+\label{#1}+ and \verb+\ref{#1}+ commands.
To reference a particular page, use the \verb+\pageref{#1}+ command.
The \verb+\label{#1}+ should appear
within the section heading,
within the footnote text,
within the equation, or
within the table or figure caption.
The \verb+\ref{#1}+ command
is used in text at the point where the reference is to be displayed.
Some examples: Section~\ref{sec:level1} on page~\pageref{sec:level1},
Table~\ref{tab:table1},%
\begin{table}[b]
\caption{\label{tab:table1}%
A table that fits into a single column of a two-column layout.
Note that REV\TeX~4 adjusts the intercolumn spacing so that the table fills the
entire width of the column. Table captions are numbered
automatically.
This table illustrates left-, center-, decimal- and right-aligned columns,
along with the use of the \texttt{ruledtabular} environment which sets the
Scotch (double) rules above and below the alignment, per APS style.
}
\begin{ruledtabular}
\begin{tabular}{lcdr}
\textrm{Left\footnote{Note a.}}&
\textrm{Centered\footnote{Note b.}}&
\multicolumn{1}{c}{\textrm{Decimal}}&
\textrm{Right}\\
\colrule
1 & 2 & 3.001 & 4\\
10 & 20 & 30 & 40\\
100 & 200 & 300.0 & 400\\
\end{tabular}
\end{ruledtabular}
\end{table}
and Fig.~\ref{fig:epsart}.%
\begin{figure}[b]
\includegraphics{fig_1}
\caption{\label{fig:epsart} A figure caption. The figure captions are
automatically numbered.}
\end{figure}
\section{Floats: Figures, Tables, Videos, etc.}
Figures and tables are usually allowed to ``float'', which means that their
placement is determined by \LaTeX\ while the document is being typeset.
Use the \texttt{figure} environment for a figure, the \texttt{table} environment for a table.
In each case, use the \verb+\caption+ command within to give the text of the
figure or table caption along with the \verb+\label+ command to provide
a key for referring to this figure or table.
The typical content of a figure is an image of some kind;
that of a table is an alignment.%
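
A minimal skeleton of a single-column figure float, assuming a placeholder graphics file \texttt{myfig}:

```latex
\begin{figure}
\includegraphics{myfig}% "myfig" is a placeholder file name
\caption{\label{fig:myfig}A caption, numbered automatically;
refer to the figure with Fig.~\ref{fig:myfig}.}
\end{figure}
```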
\begin{figure*}
\includegraphics{fig_2}
\caption{\label{fig:wide}Use the figure* environment to get a wide
figure that spans the page in \texttt{twocolumn} formatting.}
\end{figure*}
\begin{table*}
\caption{\label{tab:table3}This is a wide table that spans the full page
width in a two-column layout. It is formatted using the
\texttt{table*} environment. It also demonstrates the use of
\textbackslash\texttt{multicolumn} in rows with entries that span
more than one column.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
&\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\
Ion&1st alternative&2nd alternative&1st alternative
&2nd alternative\\ \hline
K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\
Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.}
&$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\
Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnotemark[1]
&$(4e)^{\text{a}}$\\
He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\
Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
Insert an image using either the \texttt{graphics} or
\texttt{graphicx} packages, which define the \verb+\includegraphics{#1}+ command.
(The two packages differ in respect of the optional arguments
used to specify the orientation, scaling, and translation of the image.)
To create an alignment, use the \texttt{tabular} environment.
The best place to locate the \texttt{figure} or \texttt{table} environment
is immediately following its first reference in text; this sample document
illustrates this practice for Fig.~\ref{fig:epsart}, which
shows a figure that is small enough to fit in a single column.
In exceptional cases, you will need to move the float earlier in the document, as was done
with Table~\ref{tab:table3}: \LaTeX's float placement algorithms need to know
about a full-page-width float earlier.
Fig.~\ref{fig:wide}
has content that is too wide for a single column,
so the \texttt{figure*} environment has been used.%
\begin{table}[b]
\caption{\label{tab:table4}%
Numbers in columns Three--Five are aligned with the ``d'' column specifier
(requires the \texttt{dcolumn} package).
Non-numeric entries (those entries without a ``.'') in a ``d'' column are aligned on the decimal point.
Use the ``D'' specifier for more complex layouts. }
\begin{ruledtabular}
\begin{tabular}{ccddd}
One&Two&
\multicolumn{1}{c}{\textrm{Three}}&
\multicolumn{1}{c}{\textrm{Four}}&
\multicolumn{1}{c}{\textrm{Five}}\\
\hline
one&two&\mbox{three}&\mbox{four}&\mbox{five}\\
He&2& 2.77234 & 45672. & 0.69 \\
C\footnote{Some tables require footnotes.}
&C\footnote{Some tables need more than one footnote.}
& 12537.64 & 37.66345 & 86.37 \\
\end{tabular}
\end{ruledtabular}
\end{table}
The content of a table is typically a \texttt{tabular} environment,
giving rows of type in aligned columns.
Column entries are separated by \verb+&+'s, and
each row ends with \textbackslash\textbackslash.
The required argument for the \texttt{tabular} environment
specifies how data are aligned in the columns.
For instance, entries may be centered, left-justified, right-justified, aligned on a decimal
point.
Extra column-spacing may be specified as well,
although REV\TeX~4 sets this spacing so that the columns fill the width of the
table. Horizontal rules are typeset using the \verb+\hline+
command. The doubled (or Scotch) rules that appear at the top and
bottom of a table can be achieved enclosing the \texttt{tabular}
environment within a \texttt{ruledtabular} environment. Rows whose
columns span multiple columns can be typeset using the
\verb+\multicolumn{#1}{#2}{#3}+ command (for example, see the first
row of Table~\ref{tab:table3}).%
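
A schematic sketch combining these table elements (the data are placeholders):

```latex
\begin{ruledtabular}
\begin{tabular}{lcc}
% \multicolumn{2}{c}{...} spans the second and third columns:
 & \multicolumn{2}{c}{Grouped heading}\\
Label & A & B\\
\colrule
x & 1 & 2\\
y & 3 & 4\\
\end{tabular}
\end{ruledtabular}
```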
Tables~\ref{tab:table1}, \ref{tab:table3}, \ref{tab:table4}, and \ref{tab:table2}%
\begin{table}[b]
\caption{\label{tab:table2}
A table with numerous columns that still fits into a single column.
Here, several entries share the same footnote.
Inspect the \LaTeX\ input for this table to see exactly how it is done.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
&$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$&
&$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\
\hline
Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1]
& 0.680 & 1.870 & 3.700 \\
Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2]
& 0.450 & 1.930 & 3.760 \\
Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3]
& 0.750 & 2.170 & 3.560 \\
Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4]
& 0.900 & 2.370 & 3.720 \\
Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2]
& 0.380 & 1.730 & 2.830 \\
Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5]
& 0.760 & 2.110 & 3.120 \\
Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5]
& 1.120 & 2.620 & 3.480 \\
Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3]
& 1.330 & 2.800 & 3.590 \\
Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4]
& 1.420 & 3.030 & 3.740 \\
In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5]
& 0.960 & 2.460 & 3.780 \\
Tl& 0.480 & 18.90 & 3.550 & & & & \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.}
\footnotetext[2]{Here's the second.}
\footnotetext[3]{Here's the third.}
\footnotetext[4]{Here's the fourth.}
\footnotetext[5]{And etc.}
\end{table}
show various effects.
A table that fits in a single column employs the \texttt{table}
environment.
Table~\ref{tab:table3} is a wide table, set with the \texttt{table*} environment.
Long tables may need to break across pages.
The most straightforward way to accomplish this is to specify
the \verb+[H]+ float placement on the \texttt{table} or
\texttt{table*} environment.
However, the \LaTeXe\ package \texttt{longtable} allows headers and footers to be specified for each page of the table.
A simple example of the use of \texttt{longtable} can be found
in the file \texttt{summary.tex} that is included with the REV\TeX~4
distribution.
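As a hedged sketch (the column contents are placeholders), a \texttt{longtable} repeats its header on every page via \verb+\endfirsthead+ and \verb+\endhead+:

```latex
% Requires \usepackage{longtable} in the preamble.
\begin{longtable}{lc}
\caption{A table that may break across pages.}\\
\hline
Quantity & Value\\ \hline
\endfirsthead
\hline
Quantity & Value (continued)\\ \hline
\endhead
item one & 1\\
item two & 2\\ \hline
\end{longtable}
```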
There are two methods for setting footnotes within a table (these
footnotes will be displayed directly below the table rather than at
the bottom of the page or in the bibliography). The easiest
and preferred method is just to use the \verb+\footnote{#1}+
command. This will automatically enumerate the footnotes with
lowercase roman letters. However, it is sometimes necessary to have
multiple entries in the table share the same footnote. In this case,
there is no choice but to manually create the footnotes using
\verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+.
\texttt{\#1} is a numeric value. Each time the same value for
\texttt{\#1} is used, the same mark is produced in the table. The
\verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular}
environment. Examine the \LaTeX\ source and output for
Tables~\ref{tab:table1} and \ref{tab:table2}
for examples.
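A schematic sketch of the shared-footnote idiom (the table content is a placeholder):

```latex
\begin{table}
\caption{Two entries sharing one footnote.}
\begin{ruledtabular}
\begin{tabular}{cc}
A\footnotemark[1] & B\footnotemark[1]\\
\end{tabular}
\end{ruledtabular}
% \footnotetext goes after the tabular environment:
\footnotetext[1]{Both entries carry the same mark.}
\end{table}
```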
Video~\ref{vid:PRSTPER.4.010101}
illustrates several features new with REV\TeX~4.2,
starting with the \texttt{video} environment, which is in the same category with
\texttt{figure} and \texttt{table}.%
\begin{video}
\href{http://prst-per.aps.org/multimedia/PRSTPER/v4/i1/e010101/e010101_vid1a.mpg}{\includegraphics{vid_1a}}%
\quad
\href{http://prst-per.aps.org/multimedia/PRSTPER/v4/i1/e010101/e010101_vid1b.mpg}{\includegraphics{vid_1b}}
\setfloatlink{http://link.aps.org/multimedia/PRSTPER/v4/i1/e010101}%
\caption{\label{vid:PRSTPER.4.010101}%
Students explain their initial idea about Newton's third law to a teaching assistant.
Clip (a): same force.
Clip (b): move backwards.
}%
\end{video}
The \verb+\setfloatlink+ command causes the title of the video to be a hyperlink to the
indicated URL; it may be used with any environment that takes the \verb+\caption+
command.
The \verb+\href+ command has the same significance as it does in the context of
the \texttt{hyperref} package: the second argument is a piece of text to be
typeset in your document; the first is its hyperlink, a URL.
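A schematic video float combining the two commands; the URLs and graphics file below are placeholders, not real resources:

```latex
\begin{video}
\href{http://example.org/clip.mpg}{\includegraphics{frame}}%
\setfloatlink{http://example.org/landing-page}%
\caption{\label{vid:demo}A schematic video float; the title links to
the landing-page URL, and the frame image links to the clip.}
\end{video}
```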
\textit{Physical Review} style requires that the initial citation of
figures or tables be in numerical order in text, so don't cite
Fig.~\ref{fig:wide} until Fig.~\ref{fig:epsart} has been cited.
\begin{acknowledgments}
We wish to acknowledge the support of the author community in using
REV\TeX{}, offering suggestions and encouragement, testing new versions,
\dots.
\end{acknowledgments}
\section{\label{sec:level1}First-level heading:\protect\\ The line
break was forced \lowercase{via} \textbackslash\textbackslash}
This sample document demonstrates proper use of REV\TeX~4.2 (and
\LaTeXe) in manuscripts prepared for submission to AIP
journals. Further information can be found in the documentation included in the distribution or available at
\url{http://authors.aip.org} and in the documentation for
REV\TeX~4.2 itself.
When commands are referred to in this example file, they are always
shown with their required arguments, using normal \TeX{} format. In
this format, \verb+#1+, \verb+#2+, etc. stand for required
author-supplied arguments to commands. For example, in
\verb+\section{#1}+ the \verb+#1+ stands for the title text of the
author's section heading, and in \verb+\title{#1}+ the \verb+#1+
stands for the title text of the paper.
Line breaks in section headings at all levels can be introduced using
\textbackslash\textbackslash. A blank input line tells \TeX\ that the
paragraph has ended.
\subsection{\label{sec:level2}Second-level heading: Formatting}
This file may be formatted in both the \texttt{preprint} (the default) and
\texttt{reprint} styles; the latter format may be used to
mimic final journal output. Either format may be used for submission
purposes; however, for peer review and production, AIP will format the
article using the \texttt{preprint} class option. Hence, it is
essential that authors check that their manuscripts format acceptably
under \texttt{preprint}. Manuscripts submitted to AIP that do not
format correctly under the \texttt{preprint} option may be delayed in
both the editorial and production processes.
The \texttt{widetext} environment will make the text the width of the
full page, as on page~\pageref{eq:wideeq}. (Note the use of the
\verb+\pageref{#1}+ command to get the page number right automatically.) The
width-changing commands only take effect in \texttt{twocolumn}
formatting. They have no effect if \texttt{preprint} formatting is chosen
instead.
\subsubsection{\label{sec:level3}Third-level heading: Citations and Footnotes}
Citations in text refer to entries in the Bibliography;
they use the commands \verb+\cite{#1}+ or \verb+\onlinecite{#1}+.
Because REV\TeX\ uses the \verb+natbib+ package of Patrick Daly,
its entire repertoire of commands is available in your document;
see the \verb+natbib+ documentation for further details.
The argument of \verb+\cite+ is a comma-separated list of \emph{keys};
a key may consist of letters and numerals.
By default, citations are numerical;\cite{feyn54} author-year citations are an option.
To give a textual citation, use \verb+\onlinecite{#1}+: (Refs.~\onlinecite{witten2001,epr,Bire82}).
REV\TeX\ ``collapses'' lists of consecutive numerical citations when appropriate.
REV\TeX\ provides the ability to properly punctuate textual citations in author-year style;
this facility works correctly with numerical citations only with \texttt{natbib}'s compress option turned off.
To illustrate, we cite several together \cite{feyn54,witten2001,epr,Berman1983},
and once again (Refs.~\onlinecite{epr,feyn54,Bire82,Berman1983}).
Note that, when numerical citations are used, the references were sorted into the same order they appear in the bibliography.
A reference within the bibliography is specified with a \verb+\bibitem{#1}+ command,
where the argument is the citation key mentioned above.
\verb+\bibitem{#1}+ commands may be crafted by hand or, preferably,
generated by using Bib\TeX.
The AIP styles for REV\TeX~4 include Bib\TeX\ style files
\verb+aipnum.bst+ and \verb+aipauth.bst+, appropriate for
numbered and author-year bibliographies,
respectively.
REV\TeX~4 will automatically choose the style appropriate for
the document's selected class options: the default is numerical, and
you obtain the author-year style by specifying a class option of \verb+author-year+.
This sample file demonstrates a simple use of Bib\TeX\
via a \verb+\bibliography+ command referencing the \verb+sorsamp.bib+ file.
Running Bib\TeX\ (in this case \texttt{bibtex
sorsamp}) after the first pass of \LaTeX\ produces the file
\verb+sorsamp.bbl+ which contains the automatically formatted
\verb+\bibitem+ commands (including extra markup information via
\verb+\bibinfo+ commands). If not using Bib\TeX, the
\verb+thebibliography+ environment should be used instead.
\paragraph{Fourth-level heading is run in.}%
Footnotes are produced using the \verb+\footnote{#1}+ command.
Numerical style citations put footnotes into the
bibliography\footnote{Automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography.}.
Author-year and numerical author-year citation styles (each for its own reason) cannot use this method.
Note: due to the method used to place footnotes in the bibliography, \emph{you
must re-run BibTeX every time you change any of your document's
footnotes}.
\section{Math and Equations}
Inline math may be typeset using the \verb+$+ delimiters. Bold math
symbols may be achieved using the \verb+bm+ package and the
\verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can
be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and
Blackboard (or open face or double struck) characters should be
typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands
respectively. Both are supplied by the \texttt{amssymb} package. For
example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and
\verb+$\mathfrak{G}$+ gives $\mathfrak{G}$.
In \LaTeX\ there are many different ways to display equations, and a
few preferred ways are noted below. Displayed math will center by
default. Use the class option \verb+fleqn+ to flush equations left.
Below we have numbered single-line equations, the most common kind:
\begin{eqnarray}
\chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2}
\left(
\begin{array}{c}
|{\bf p}|+p_z\\
p_x+ip_y
\end{array}\right)\;,
\\
\left\{%
\openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}%
\label{eq:one}.
\end{eqnarray}
Note the open one in Eq.~(\ref{eq:one}).
Not all numbered equations will fit within a narrow column this
way. The equation number will move down automatically if it cannot fit
on the same line with a one-line equation:
\begin{equation}
\left\{
ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}.
\end{equation}
When the \verb+\label{#1}+ command is used [cf. input for
Eq.~(\ref{eq:one})], the equation can be referred to in text without
knowing the equation number that \TeX\ will assign to it. Just
use \verb+\ref{#1}+, where \verb+#1+ is the same name that was used in
the \verb+\label{#1}+ command.
Unnumbered single-line equations can be typeset
using the \verb+\[+, \verb+\]+ format:
\[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \]
\subsection{Multiline equations}
Multiline equations are obtained by using the \verb+eqnarray+
environment. Use the \verb+\nonumber+ command at the end of each line
to avoid assigning a number:
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
\delta_{\sigma_1,-\sigma_2}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1),
\end{eqnarray}
\begin{eqnarray}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\nonumber \\
& &\times \left( \sum_{i<j}\right)
\sum_{\text{perm}}
\frac{1}{S_{12}}
\frac{1}{S_{12}}
\sum_\tau c^f_\tau~.
\end{eqnarray}
\textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline
equation if \verb+\nonumber+ is also used on that line. Incorrect
cross-referencing will result. Notice the use of \verb+\text{#1}+ for
using a Roman font within a math environment.
To set a multiline equation without \emph{any} equation
numbers, use the \verb+\begin{eqnarray*}+,
\verb+\end{eqnarray*}+ format:
\begin{eqnarray*}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\\
& &\times \left( \sum_{i<j}\right)
\left(
\sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}}
\right)
\frac{1}{S_{12}}~.
\end{eqnarray*}
To obtain numbers not normally produced by the automatic numbering,
use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired
equation number. For example, to get an equation number of
(\ref{eq:mynum}),
\begin{equation}
g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum}
\end{equation}
A few notes on \verb=\tag{#1}=. \verb+\tag{#1}+ requires
\texttt{amsmath}. The \verb+\tag{#1}+ must come before the
\verb+\label{#1}+, if any. The numbering set with \verb+\tag{#1}+ is
\textit{transparent} to the automatic numbering in REV\TeX{};
therefore, the number must be known ahead of time, and it must be
manually adjusted if other equations are added. \verb+\tag{#1}+ works
with both single-line and multiline equations. \verb+\tag{#1}+ should
only be used in exceptional case - do not use it to number all
equations in a paper.
Enclosing single-line and multiline equations in
\verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce
a set of equations that are ``numbered'' with letters, as shown in
Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below:
\begin{subequations}
\label{eq:whole}
\begin{equation}
\left\{
abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}
\right\},\label{subeq:1}
\end{equation}
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2}
\end{eqnarray}
\end{subequations}
Putting a \verb+\label{#1}+ command right after the
\verb+\begin{subequations}+, allows one to
reference all the equations in a subequations environment. For
example, the equations in the preceding subequations environment were
Eqs.~(\ref{eq:whole}).
\subsubsection{Wide equations}
The equation that follows is set in a wide format, i.e., it spans
across the full page. The wide format is reserved for long equations
that cannot be easily broken into four lines or less:
\begin{widetext}
\begin{equation}
{\cal R}^{(\text{d})}=
g_{\sigma_2}^e
\left(
\frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)
+ x_WQ_e
\left(
\frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)\;. \label{eq:wideeq}
\end{equation}
\end{widetext}
This is typed to show the output is in wide format.
(Since there is no input line between \verb+\equation+ and
this paragraph, there is no paragraph indent for this paragraph.)
\section{Cross-referencing}
REV\TeX{} will automatically number sections, equations, figure
captions, and tables. In order to reference them in text, use the
\verb+\label{#1}+ and \verb+\ref{#1}+ commands. To reference a
particular page, use the \verb+\pageref{#1}+ command.
The \verb+\label{#1}+ should appear in a section heading, within an
equation, or in a table or figure caption. The \verb+\ref{#1}+ command
is used in the text where the citation is to be displayed. Some
examples: Section~\ref{sec:level1} on page~\pageref{sec:level1},
Table~\ref{tab:table1},%
\begin{table}
\caption{\label{tab:table1}This is a narrow table which fits into a
text column when using \texttt{twocolumn} formatting. Note that
REV\TeX~4 adjusts the intercolumn spacing so that the table fills the
entire width of the column. Table captions are numbered
automatically. This table illustrates left-aligned, centered, and
right-aligned columns. }
\begin{ruledtabular}
\begin{tabular}{lcr}
Left\footnote{Note a.}&Centered\footnote{Note b.}&Right\\
\hline
1 & 2 & 3\\
10 & 20 & 30\\
100 & 200 & 300\\
\end{tabular}
\end{ruledtabular}
\end{table}
and Fig.~\ref{fig:epsart}.
\section{Figures and Tables}
Figures and tables are typically ``floats''; \LaTeX\ determines their
final position via placement rules.
\LaTeX\ isn't always successful in automatically placing floats where you wish them.
Figures are marked up with the \texttt{figure} environment, the content of which
imports the image (\verb+\includegraphics+) followed by the figure caption (\verb+\caption+).
The argument of the latter command should itself contain a \verb+\label+ command if you
wish to refer to your figure with \verb+\ref+.
Import your image using either the \texttt{graphics} or
\texttt{graphix} packages. These packages both define the
\verb+\includegraphics{#1}+ command, but they differ in the optional
arguments for specifying the orientation, scaling, and translation of the figure.
Fig.~\ref{fig:epsart}%
\begin{figure}
\includegraphics{fig_1
\caption{\label{fig:epsart} A figure caption. The figure captions are
automatically numbered.}
\end{figure}
is small enough to fit in a single column, while
Fig.~\ref{fig:wide}%
\begin{figure*}
\includegraphics{fig_2
\caption{\label{fig:wide}Use the \texttt{figure*} environment to get a wide
figure, spanning the page in \texttt{twocolumn} formatting.}
\end{figure*}
is too wide for a single column,
so instead the \texttt{figure*} environment has been used.
The analog of the \texttt{figure} environment is \texttt{table}, which uses
the same \verb+\caption+ command.
However, you should type your caption command first within the \texttt{table},
instead of last as you did for \texttt{figure}.
The heart of any table is the \texttt{tabular} environment,
which represents the table content as a (vertical) sequence of table rows,
each containing a (horizontal) sequence of table cells.
Cells are separated by the \verb+&+ character;
the row terminates with \verb+\\+.
The required argument for the \texttt{tabular} environment
specifies how data are displayed in each of the columns.
For instance, a column
may be centered (\verb+c+), left-justified (\verb+l+), right-justified (\verb+r+),
or aligned on a decimal point (\verb+d+).
(Table~\ref{tab:table4}%
\begin{table}
\caption{\label{tab:table4}Numbers in columns Three--Five have been
aligned by using the ``d'' column specifier (requires the
\texttt{dcolumn} package).
Non-numeric entries (those entries without
a ``.'') in a ``d'' column are aligned on the decimal point.
Use the
``D'' specifier for more complex layouts. }
\begin{ruledtabular}
\begin{tabular}{ccddd}
One&Two&\mbox{Three}&\mbox{Four}&\mbox{Five}\\
\hline
one&two&\mbox{three}&\mbox{four}&\mbox{five}\\
He&2& 2.77234 & 45672. & 0.69 \\
C\footnote{Some tables require footnotes.}
&C\footnote{Some tables need more than one footnote.}
& 12537.64 & 37.66345 & 86.37 \\
\end{tabular}
\end{ruledtabular}
\end{table}
illustrates the use of decimal column alignment.)
Extra column-spacing may be be specified as well, although
REV\TeX~4 sets this spacing so that the columns fill the width of the
table.
Horizontal rules are typeset using the \verb+\hline+
command.
The doubled (or Scotch) rules that appear at the top and
bottom of a table can be achieved by enclosing the \texttt{tabular}
environment within a \texttt{ruledtabular} environment.
Rows whose columns span multiple columns can be typeset using \LaTeX's
\verb+\multicolumn{#1}{#2}{#3}+ command
(for example, see the first row of Table~\ref{tab:table3}).%
\begin{table*}
\caption{\label{tab:table3}This is a wide table that spans the page
width in \texttt{twocolumn} mode. It is formatted using the
\texttt{table*} environment. It also demonstrates the use of
\textbackslash\texttt{multicolumn} in rows with entries that span
more than one column.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
&\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\
Ion&1st alternative&2nd alternative&lst alternative
&2nd alternative\\ \hline
K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\
Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.}
&$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\
Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnote{This is a footnote in a table that spans the full page
width in \texttt{twocolumn} mode. It is supposed to set on the full width of the page, just as the caption does. }
&$(4e)^{\text{a}}$\\
He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\
Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
The tables in this document illustrate various effects.
Tables that fit in a narrow column are contained in a \texttt{table}
environment.
Table~\ref{tab:table3} is a wide table, therefore set with the
\texttt{table*} environment.
Lengthy tables may need to break across pages.
A simple way to allow this is to specify
the \verb+[H]+ float placement on the \texttt{table} or
\texttt{table*} environment.
Alternatively, using the standard \LaTeXe\ package \texttt{longtable}
gives more control over how tables break and allows headers and footers
to be specified for each page of the table.
An example of the use of \texttt{longtable} can be found
in the file \texttt{summary.tex} that is included with the REV\TeX~4
distribution.
There are two methods for setting footnotes within a table (these
footnotes will be displayed directly below the table rather than at
the bottom of the page or in the bibliography).
The easiest
and preferred method is just to use the \verb+\footnote{#1}+
command. This will automatically enumerate the footnotes with
lowercase roman letters.
However, it is sometimes necessary to have
multiple entries in the table share the same footnote.
In this case,
create the footnotes using
\verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+.
\texttt{\#1} is a numeric value.
Each time the same value for \texttt{\#1} is used,
the same mark is produced in the table.
The \verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular}
environment.
Examine the \LaTeX\ source and output for Tables~\ref{tab:table1} and
\ref{tab:table2}%
\begin{table}
\caption{\label{tab:table2}A table with more columns still fits
properly in a column. Note that several entries share the same
footnote. Inspect the \LaTeX\ input for this table to see
exactly how it is done.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
&$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$&
&$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\
\hline
Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1]
& 0.680 & 1.870 & 3.700 \\
Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2]
& 0.450 & 1.930 & 3.760 \\
Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3]
& 0.750 & 2.170 & 3.560 \\
Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4]
& 0.900 & 2.370 & 3.720 \\
Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2]
& 0.380 & 1.730 & 2.830 \\
Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5]
& 0.760 & 2.110 & 3.120 \\
Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5]
& 1.120 & 2.620 & 3.480 \\
Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3]
& 1.330 & 2.800 & 3.590 \\
Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4]
& 1.420 & 3.030 & 3.740 \\
In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5]
& 0.960 & 2.460 & 3.780 \\
Tl& 0.480 & 18.90 & 3.550 & & & & \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.}
\footnotetext[2]{Here's the second.}
\footnotetext[3]{Here's the third.}
\footnotetext[4]{Here's the fourth.}
\footnotetext[5]{And etc.}
\end{table}
for an illustration.
All AIP journals require that the initial citation of
figures or tables be in numerical order.
\LaTeX's automatic numbering of floats is your friend here:
just put each \texttt{figure} environment immediately following
its first reference (\verb+\ref+), as we have done in this example file.
\begin{acknowledgments}
We wish to acknowledge the support of the author community in using
REV\TeX{}, offering suggestions and encouragement, testing new versions,
\dots.
\end{acknowledgments}
\section{\label{sec:level1}First-level heading:\protect\\ The line
break was forced \lowercase{via} \textbackslash\textbackslash}
This sample document demonstrates proper use of REV\TeX~4.2 (and
\LaTeXe) in manuscripts prepared for submission to AIP
journals. Further information can be found in the documentation included in the distribution or available at
\url{http://authors.aip.org} and in the documentation for
REV\TeX~4.2 itself.
When commands are referred to in this example file, they are always
shown with their required arguments, using normal \TeX{} format. In
this format, \verb+#1+, \verb+#2+, etc. stand for required
author-supplied arguments to commands. For example, in
\verb+\section{#1}+ the \verb+#1+ stands for the title text of the
author's section heading, and in \verb+\title{#1}+ the \verb+#1+
stands for the title text of the paper.
Line breaks in section headings at all levels can be introduced using
\textbackslash\textbackslash. A blank input line tells \TeX\ that the
paragraph has ended.
\subsection{\label{sec:level2}Second-level heading: Formatting}
This file may be formatted in both the \texttt{preprint} (the default) and
\texttt{reprint} styles; the latter format may be used to
mimic final journal output. Either format may be used for submission
purposes; however, for peer review and production, AIP will format the
article using the \texttt{preprint} class option. Hence, it is
essential that authors check that their manuscripts format acceptably
under \texttt{preprint}. Manuscripts submitted to AIP that do not
format correctly under the \texttt{preprint} option may be delayed in
both the editorial and production processes.
The \texttt{widetext} environment will make the text the width of the
full page, as on page~\pageref{eq:wideeq}. (Note the use of
\verb+\pageref{#1}+ to get the page number right automatically.) The
width-changing commands only take effect in \texttt{twocolumn}
formatting; they have no effect if \texttt{preprint} formatting is chosen
instead.
\subsubsection{\label{sec:level3}Third-level heading: Citations and Footnotes}
Citations in text refer to entries in the Bibliography;
they use the commands \verb+\cite{#1}+ or \verb+\onlinecite{#1}+.
Because REV\TeX\ uses the \verb+natbib+ package of Patrick Daly,
its entire repertoire of commands is available in your document;
see the \verb+natbib+ documentation for further details.
The argument of \verb+\cite+ is a comma-separated list of \emph{keys};
a key may consist of letters and numerals.
By default, citations are numerical,\cite{feyn54} but author-year citations are an option.
To give a textual citation, use \verb+\onlinecite{#1}+: (Refs.~\onlinecite{witten2001,epr,Bire82}).
REV\TeX\ ``collapses'' lists of consecutive numerical citations when appropriate.
REV\TeX\ provides the ability to properly punctuate textual citations in author-year style;
this facility works correctly with numerical citations only when \texttt{natbib}'s \texttt{compress} option is turned off.
To illustrate, we cite several together \cite{feyn54,witten2001,epr,Berman1983},
and once again (Refs.~\onlinecite{epr,feyn54,Bire82,Berman1983}).
Note that, when numerical citations are used, the references are sorted into the same order they appear in the bibliography.
A reference within the bibliography is specified with a \verb+\bibitem{#1}+ command,
where the argument is the citation key mentioned above.
\verb+\bibitem{#1}+ commands may be crafted by hand or, preferably,
generated by using Bib\TeX.
The AIP styles for REV\TeX~4 include Bib\TeX\ style files
\verb+aipnum.bst+ and \verb+aipauth.bst+, appropriate for
numbered and author-year bibliographies,
respectively.
REV\TeX~4 will automatically choose the style appropriate for
the document's selected class options: the default is numerical, and
you obtain the author-year style by specifying a class option of \verb+author-year+.
This sample file demonstrates a simple use of Bib\TeX\
via a \verb+\bibliography+ command referencing the \verb+aipsamp.bib+ file.
Running Bib\TeX\ (in this case \texttt{bibtex
aipsamp}) after the first pass of \LaTeX\ produces the file
\verb+aipsamp.bbl+ which contains the automatically formatted
\verb+\bibitem+ commands (including extra markup information via
\verb+\bibinfo+ commands). If not using Bib\TeX, the
\verb+thebibliography+ environment should be used instead.
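As a sketch of a typical compile sequence (using the file name \texttt{aipsamp} from the example above), \LaTeX\ and Bib\TeX\ runs are interleaved so that citations resolve:
\begin{verbatim}
latex aipsamp
bibtex aipsamp
latex aipsamp
latex aipsamp
\end{verbatim}
The later \LaTeX\ passes incorporate the generated \verb+aipsamp.bbl+ and fix up cross-references.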
\paragraph{Fourth-level heading is run in.}%
Footnotes are produced using the \verb+\footnote{#1}+ command.
Numerical style citations put footnotes into the
bibliography\footnote{Automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography.}.
Author-year and numerical author-year citation styles (each for its own reason) cannot use this method.
Note: due to the method used to place footnotes in the bibliography, \emph{you
must re-run BibTeX every time you change any of your document's
footnotes}.
\section{Math and Equations}
Inline math may be typeset using the \verb+$+ delimiters. Bold math
symbols may be achieved using the \verb+bm+ package and the
\verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can
be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and
Blackboard (or open face or double struck) characters should be
typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands
respectively. Both are supplied by the \texttt{amssymb} package. For
example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and
\verb+$\mathfrak{G}$+ gives $\mathfrak{G}$.
In \LaTeX\ there are many different ways to display equations, and a
few preferred ways are noted below. Displayed math will center by
default. Use the class option \verb+fleqn+ to flush equations left.
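For instance (a sketch; the other class options shown here are illustrative), \texttt{fleqn} is passed as a class option:
\begin{verbatim}
\documentclass[aip,reprint,fleqn]{revtex4-2}
\end{verbatim}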
Below we have numbered single-line equations, the most common kind:
\begin{eqnarray}
\chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2}
\left(
\begin{array}{c}
|{\bf p}|+p_z\\
p_x+ip_y
\end{array}\right)\;,
\\
\left\{%
\openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}%
\label{eq:one}.
\end{eqnarray}
Note the open one in Eq.~(\ref{eq:one}).
Not all numbered equations will fit within a narrow column this
way. The equation number will move down automatically if it cannot fit
on the same line with a one-line equation:
\begin{equation}
\left\{
ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}.
\end{equation}
When the \verb+\label{#1}+ command is used [cf. input for
Eq.~(\ref{eq:one})], the equation can be referred to in text without
knowing the equation number that \TeX\ will assign to it. Just
use \verb+\ref{#1}+, where \verb+#1+ is the same name that used in
the \verb+\label{#1}+ command.
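A minimal sketch (the label name and equation content are illustrative):
\begin{verbatim}
\begin{equation}
a^2+b^2=c^2 \label{eq:pyth}
\end{equation}
Equation~(\ref{eq:pyth}) then refers to it by number.
\end{verbatim}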
Unnumbered single-line equations can be typeset
using the \verb+\[+, \verb+\]+ format:
\[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \]
\subsection{Multiline equations}
Multiline equations are obtained by using the \verb+eqnarray+
environment. Use the \verb+\nonumber+ command at the end of each line
to avoid assigning a number:
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
\delta_{\sigma_1,-\sigma_2}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1),
\end{eqnarray}
\begin{eqnarray}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\nonumber \\
& &\times \left( \sum_{i<j}\right)
\sum_{\text{perm}}
\frac{1}{S_{12}}
\frac{1}{S_{12}}
\sum_\tau c^f_\tau~.
\end{eqnarray}
\textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline
equation if \verb+\nonumber+ is also used on that line. Incorrect
cross-referencing will result. Notice the use of \verb+\text{#1}+ for
using a Roman font within a math environment.
To set a multiline equation without \emph{any} equation
numbers, use the \verb+\begin{eqnarray*}+,
\verb+\end{eqnarray*}+ format:
\begin{eqnarray*}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\\
& &\times \left( \sum_{i<j}\right)
\left(
\sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}}
\right)
\frac{1}{S_{12}}~.
\end{eqnarray*}
To obtain numbers not normally produced by the automatic numbering,
use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired
equation number. For example, to get an equation number of
(\ref{eq:mynum}),
\begin{equation}
g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum}
\end{equation}
A few notes on \verb=\tag{#1}=. \verb+\tag{#1}+ requires
\texttt{amsmath}. The \verb+\tag{#1}+ must come before the
\verb+\label{#1}+, if any. The numbering set with \verb+\tag{#1}+ is
\textit{transparent} to the automatic numbering in REV\TeX{};
therefore, the number must be known ahead of time, and it must be
manually adjusted if other equations are added. \verb+\tag{#1}+ works
with both single-line and multiline equations. \verb+\tag{#1}+ should
only be used in exceptional cases; do not use it to number all
equations in a paper.
Enclosing single-line and multiline equations in
\verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce
a set of equations that are ``numbered'' with letters, as shown in
Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below:
\begin{subequations}
\label{eq:whole}
\begin{equation}
\left\{
abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}
\right\},\label{subeq:1}
\end{equation}
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2}
\end{eqnarray}
\end{subequations}
Putting a \verb+\label{#1}+ command right after the
\verb+\begin{subequations}+ allows one to
reference all the equations in a subequations environment. For
example, the equations in the preceding subequations environment were
Eqs.~(\ref{eq:whole}).
\subsubsection{Wide equations}
The equation that follows is set in a wide format, i.e., it spans
across the full page. The wide format is reserved for long equations
that cannot be easily broken into four lines or less:
\begin{widetext}
\begin{equation}
{\cal R}^{(\text{d})}=
g_{\sigma_2}^e
\left(
\frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)
+ x_WQ_e
\left(
\frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)\;. \label{eq:wideeq}
\end{equation}
\end{widetext}
This is typed to show the output is in wide format.
(Since there is no input line between \verb+\end{equation}+ and
this paragraph, there is no paragraph indent for this paragraph.)
\section{Cross-referencing}
REV\TeX{} will automatically number sections, equations, figure
captions, and tables. In order to reference them in text, use the
\verb+\label{#1}+ and \verb+\ref{#1}+ commands. To reference a
particular page, use the \verb+\pageref{#1}+ command.
The \verb+\label{#1}+ should appear in a section heading, within an
equation, or in a table or figure caption. The \verb+\ref{#1}+ command
is used in the text where the citation is to be displayed. Some
examples: Section~\ref{sec:level1} on page~\pageref{sec:level1},
Table~\ref{tab:table1},%
\begin{table}
\caption{\label{tab:table1}This is a narrow table which fits into a
text column when using \texttt{twocolumn} formatting. Note that
REV\TeX~4 adjusts the intercolumn spacing so that the table fills the
entire width of the column. Table captions are numbered
automatically. This table illustrates left-aligned, centered, and
right-aligned columns. }
\begin{ruledtabular}
\begin{tabular}{lcr}
Left\footnote{Note a.}&Centered\footnote{Note b.}&Right\\
\hline
1 & 2 & 3\\
10 & 20 & 30\\
100 & 200 & 300\\
\end{tabular}
\end{ruledtabular}
\end{table}
and Fig.~\ref{fig:epsart}.
\section{Figures and Tables}
Figures and tables are typically ``floats''; \LaTeX\ determines their
final position via placement rules.
\LaTeX\ isn't always successful in automatically placing floats where you wish them.
Figures are marked up with the \texttt{figure} environment, the content of which
imports the image (\verb+\includegraphics+) followed by the figure caption (\verb+\caption+).
The argument of the latter command should itself contain a \verb+\label+ command if you
wish to refer to your figure with \verb+\ref+.
Import your image using either the \texttt{graphics} or
\texttt{graphicx} packages. These packages both define the
\verb+\includegraphics{#1}+ command, but they differ in the optional
arguments for specifying the orientation, scaling, and translation of the figure.
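With \texttt{graphicx}'s key--value syntax, for example (the options shown are illustrative):
\begin{verbatim}
\includegraphics[width=\columnwidth]{fig_1}
\includegraphics[scale=0.5,angle=90]{fig_2}
\end{verbatim}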
Fig.~\ref{fig:epsart}%
\begin{figure}
\includegraphics{fig_1}%
\caption{\label{fig:epsart} A figure caption. The figure captions are
automatically numbered.}
\end{figure}
is small enough to fit in a single column, while
Fig.~\ref{fig:wide}%
\begin{figure*}
\includegraphics{fig_2}%
\caption{\label{fig:wide}Use the \texttt{figure*} environment to get a wide
figure, spanning the page in \texttt{twocolumn} formatting.}
\end{figure*}
is too wide for a single column,
so instead the \texttt{figure*} environment has been used.
The analog of the \texttt{figure} environment is \texttt{table}, which uses
the same \verb+\caption+ command.
However, you should type your caption command first within the \texttt{table},
instead of last as you did for \texttt{figure}.
The heart of any table is the \texttt{tabular} environment,
which represents the table content as a (vertical) sequence of table rows,
each containing a (horizontal) sequence of table cells.
Cells are separated by the \verb+&+ character;
the row terminates with \verb+\\+.
The required argument for the \texttt{tabular} environment
specifies how data are displayed in each of the columns.
For instance, a column
may be centered (\verb+c+), left-justified (\verb+l+), right-justified (\verb+r+),
or aligned on a decimal point (\verb+d+).
(Table~\ref{tab:table4}%
\begin{table}
\caption{\label{tab:table4}Numbers in columns Three--Five have been
aligned by using the ``d'' column specifier (requires the
\texttt{dcolumn} package).
Non-numeric entries (those entries without
a ``.'') in a ``d'' column are aligned on the decimal point.
Use the
``D'' specifier for more complex layouts. }
\begin{ruledtabular}
\begin{tabular}{ccddd}
One&Two&\mbox{Three}&\mbox{Four}&\mbox{Five}\\
\hline
one&two&\mbox{three}&\mbox{four}&\mbox{five}\\
He&2& 2.77234 & 45672. & 0.69 \\
C\footnote{Some tables require footnotes.}
&C\footnote{Some tables need more than one footnote.}
& 12537.64 & 37.66345 & 86.37 \\
\end{tabular}
\end{ruledtabular}
\end{table}
illustrates the use of decimal column alignment.)
Extra column-spacing may be specified as well, although
REV\TeX~4 sets this spacing so that the columns fill the width of the
table.
Horizontal rules are typeset using the \verb+\hline+
command.
The doubled (or Scotch) rules that appear at the top and
bottom of a table can be achieved by enclosing the \texttt{tabular}
environment within a \texttt{ruledtabular} environment.
Rows whose columns span multiple columns can be typeset using \LaTeX's
\verb+\multicolumn{#1}{#2}{#3}+ command
(for example, see the first row of Table~\ref{tab:table3}).%
\begin{table*}
\caption{\label{tab:table3}This is a wide table that spans the page
width in \texttt{twocolumn} mode. It is formatted using the
\texttt{table*} environment. It also demonstrates the use of
\textbackslash\texttt{multicolumn} in rows with entries that span
more than one column.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
&\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\
Ion&1st alternative&2nd alternative&1st alternative
&2nd alternative\\ \hline
K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\
Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.}
&$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\
Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnote{This is a footnote in a table that spans the full page
width in \texttt{twocolumn} mode. It is supposed to be set on the full width of the page, just as the caption does. }
&$(4e)^{\text{a}}$\\
He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\
Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
The tables in this document illustrate various effects.
Tables that fit in a narrow column are contained in a \texttt{table}
environment.
Table~\ref{tab:table3} is a wide table, therefore set with the
\texttt{table*} environment.
Lengthy tables may need to break across pages.
A simple way to allow this is to specify
the \verb+[H]+ float placement on the \texttt{table} or
\texttt{table*} environment.
Alternatively, using the standard \LaTeXe\ package \texttt{longtable}
gives more control over how tables break and allows headers and footers
to be specified for each page of the table.
An example of the use of \texttt{longtable} can be found
in the file \texttt{summary.tex} that is included with the REV\TeX~4
distribution.
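A minimal sketch of a \texttt{longtable} (the column contents here are illustrative, and \verb+\usepackage{longtable}+ is required in the preamble):
\begin{verbatim}
\begin{longtable}{lr}
\caption{A table that may break across pages.}\\
\hline
Item & Value \\ \hline
\endfirsthead
\hline
Item & Value \\ \hline
\endhead
A & 1 \\
B & 2 \\ \hline
\end{longtable}
\end{verbatim}
The material before \verb+\endfirsthead+ heads the first page of the table; the material before \verb+\endhead+ is repeated at the top of each subsequent page.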
There are two methods for setting footnotes within a table (these
footnotes will be displayed directly below the table rather than at
the bottom of the page or in the bibliography).
The easiest
and preferred method is just to use the \verb+\footnote{#1}+
command. This will automatically enumerate the footnotes with
lowercase roman letters.
However, it is sometimes necessary to have
multiple entries in the table share the same footnote.
In this case,
create the footnotes using
\verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+.
\texttt{\#1} is a numeric value.
Each time the same value for \texttt{\#1} is used,
the same mark is produced in the table.
The \verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular}
environment.
Examine the \LaTeX\ source and output for Tables~\ref{tab:table1} and
\ref{tab:table2}%
\begin{table}
\caption{\label{tab:table2}A table with more columns still fits
properly in a column. Note that several entries share the same
footnote. Inspect the \LaTeX\ input for this table to see
exactly how it is done.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
&$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$&
&$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\
\hline
Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1]
& 0.680 & 1.870 & 3.700 \\
Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2]
& 0.450 & 1.930 & 3.760 \\
Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3]
& 0.750 & 2.170 & 3.560 \\
Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4]
& 0.900 & 2.370 & 3.720 \\
Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2]
& 0.380 & 1.730 & 2.830 \\
Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5]
& 0.760 & 2.110 & 3.120 \\
Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5]
& 1.120 & 2.620 & 3.480 \\
Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3]
& 1.330 & 2.800 & 3.590 \\
Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4]
& 1.420 & 3.030 & 3.740 \\
In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5]
& 0.960 & 2.460 & 3.780 \\
Tl& 0.480 & 18.90 & 3.550 & & & & \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.}
\footnotetext[2]{Here's the second.}
\footnotetext[3]{Here's the third.}
\footnotetext[4]{Here's the fourth.}
\footnotetext[5]{Here's the fifth.}
\end{table}
for an illustration.
All AIP journals require that the initial citation of
figures or tables be in numerical order.
\LaTeX's automatic numbering of floats is your friend here:
just put each \texttt{figure} environment immediately following
its first reference (\verb+\ref+), as we have done in this example file.
\begin{acknowledgments}
We wish to acknowledge the support of the author community in using
REV\TeX{}, offering suggestions and encouragement, testing new versions,
\dots.
\end{acknowledgments}
\section{Introduction}
This is the author's guide to the AIP substyles for REV\TeX~4.2,
providing a useful formatting tool
for \LaTeX\ users submitting papers to journals
published by the American Institute of Physics.
This guide is intended as an adjunct to the documentation for REV\TeX\ itself
(published by the American Physical Society),
so information contained therein is not repeated here,
except as it bears on the specific features of the AIP substyles.
\subsection{Prerequisite Documentation}
The following documents should be considered your first source of information
on how to prepare your document for use with this format;
they are to be found within the APS REV\TeX~4.2 distribution.
Updated versions of these are maintained at
the REV\TeX~4.2 homepage located at \url{http://journals.aps.org/revtex/},
are also available at the Comprehensive \TeX\ Archive Network (CTAN, see \url{http://www.ctan.org/}),
and form part of the \TeX\ Live distribution of \TeX.
\begin{itemize}
\item \textit{Author's Guide to REV\TeX~4.2}
\item \textit{REV\TeX~4.2 Command and Options Summary}
\end{itemize}
The present guide builds upon these documents, with which you should already be familiar.
The AIP substyles distribution for REV\TeX~4.2 includes
a sample document (\file{aipsamp.tex}),
a good starting point for
the manuscript you are preparing for submission to an AIP journal.
By using REV\TeX's \textit{Author's Guide to REV\TeX~4.2}, you can develop your
document until it contains all of the content you desire.
This guide informs you on document class options, commands, and
markup guidelines specific to AIP journals.
\subsection{Software Requirements}
This guide assumes a working REV\TeX\ ~4.2 installation including the AIP substyles.
Please see the installation guide included with the distribution.\cite{Note1}
Please note that the AIP substyles work {\it only} with REV\TeX\ ~4.2:
the original REV\TeX\ ~4.0 release does {\it not} make the AIP substyles available, nor is it compatible with them.
For your computer to run REV\TeX\ ~4.2 with the AIP substyles, the following are required:
\begin{itemize}
\item
a working installation of \LaTeX,
\item
REV\TeX\ ~4.2 and all packages it requires,
\item
the AIP substyles for REV\TeX\ ~4.2, and
\item
any further \LaTeX\ packages used in your document.
\end{itemize}
The easiest way to obtain all of the needed software is to install an up-to-date distribution of \TeX,
like \TeX Live, available on CTAN.
To obtain the most up-to-date version of this software, please see \url{http://publishing.aip.org/authors/preparing-your-manuscript}.
\subsection{Submitting to AIP Journals}
Authors preparing a manuscript for submission to
AIP journals should consult the Information for Contributors for the applicable journal,
available through links at \url{http://scitation.aip.org/authors}.
These requirements are not covered systematically in this author's guide;
you are responsible for understanding the requirements of the particular journal to which
you will submit your article.
For further information about journal requirements, contact the Editorial
Office of the appropriate journal. (Follow links at \url{http://scitation.aip.org/authors}.)
\subsection{Contact Information}\label{sec:resources}%
Any bugs, problems, or inconsistencies concerning the AIP journal substyles
should be reported to AIP support at \href{mailto:[email protected]}{[email protected]}.
Reports should include information on the error and a
\textit{small} sample document that manifests the problem, if possible.
(Please don't send large files!)
Feedback concerning REV\TeX\ ~4.2 itself should be sent, as usual,
to the American Physical Society at\\ \href{mailto:[email protected]}{[email protected]}.
To determine if the problem you are experiencing belongs to REV\TeX\ or is specific to the
AIP substyles, simply remove \texttt{aip} from your document class options and rerun
your document. If the problem goes away, you may assume that it is due to the AIP substyles;
if not, it belongs to REV\TeX.
\section{Sample \LaTeXe\ Document}
As the REV\TeX\ documentation makes clear, your document employs a \LaTeXe\ document class
(specifically \texttt{revtex4-2.cls}), so you should use
the \LaTeXe\ commands and environments familiar to you with, say, the
standard article class \texttt{article.cls}, and you will be able to
employ many of the packages you are used to using with \LaTeXe.
Using \texttt{aipsamp.tex} as an example,
your document will start with the usual REV\TeX\ \cmd\documentclass\ statement, but with
a particular document class option \texttt{aip} that specifies the AIP substyle:
\begin{verbatim}
\documentclass[aip]{revtex4-2}
\end{verbatim}
You will then invoke the \LaTeXe-compatible packages your document requires, say:
\begin{verbatim}
\usepackage{graphicx}%
\usepackage{dcolumn}%
\usepackage{bm}%
\end{verbatim}
follow up with your document content:
\begin{verbatim}
\begin{document}
...
\end{verbatim}
and finish with a statement specifying your Bib\TeX\ database:
\begin{verbatim}
\bibliography{...}
\end{verbatim}
Statistical mechanics gives a general prescription for extracting equilibrium properties from a system Hamiltonian. To this aim, the system partition function should be determined in any of a number of equivalent descriptions (ensembles), a task requiring the evaluation of a complicated multidimensional integral. In practice, this can only be accomplished in a few (mainly trivial) cases, which thus obliges one to go over to partial calculations and approximate theories, often of limited scope. With the advent of computer simulation, the analytically intractable program of statistical mechanics could eventually be attacked and its solution finally came within reach, at least for finite, not too large systems.
Occasionally, however, numerical simulation may lead to misconceptions. An example is the loop found in the pressure equation of state constructed by simulation within the coexistence region of liquid and vapor. Similar loops are found in the van der Waals theory of condensation, which uses the double-tangent construction to make the chemical potential everywhere concave as a function of pressure at constant temperature. As a result, the so-called metastable and unstable branches of the original equation of state are thrown out as unphysical. It is so customary to associate condensation with van der Waals theory that it would be tempting to interpret the loops canonical-ensemble simulations always produce in isotherms of intensive variables as van der Waals loops and, therefore, to read them as a sign of the entrance of vapor into a metastable regime. However, as originally remarked in Refs.\,\cite{Mayer,Binder1}, the non-concave regions observed in the equation of state of a finite system are fully equilibrium features arising from the use of periodic boundary conditions in the simulation. In particular, the first inversion of concavity encountered corresponds to the first occurrence of liquid-vapor separation. We have recently shown that, despite their spurious character, one can take advantage of these pressure loops to extract, by plain thermodynamic integration, the right liquid-vapor coexistence parameters~\cite{Abramo1,Abramo2}.
In a series of papers~\cite{Binder1,MacDowell1,MacDowell2,Binder2}, Binder and coworkers carried out extensive grand-canonical Monte Carlo (MC) simulations of the Lennard-Jones (LJ) fluid in a periodic cubic box, biasing the sampling in such a way as to constrain the system to stay in the two-phase region. They found a whole sequence of so-called {\em shape transitions} between various ``phases'' differing in the shape of the liquid droplet coexisting with vapor. Specifically, upon increasing the system density $\rho$ the liquid drop changed from spherical (``sph'') to cylindrical (``cyl'') to slab-like (``slab''). Upon increasing $\rho$ further, the reversed sequence of transitions was observed, with interchanged roles between liquid and vapor. At each rearrangement of the liquid-vapor interface, the chemical potential undergoes a sharp drop, followed by a density interval where it stays nearly constant (this region will be referred to in the following as a ``plateau''). MacDowell {\it et al.} have successfully analysed these shape transitions by means of a capillary-drop theory~\cite{MacDowell1,MacDowell2}. In a further series of papers~\cite{Schrader,Block,Troester}, Binder and his group exploited their extremely accurate simulation results to extract information about the interface free energy of curved liquid-vapor interfaces, and thus gain access to the Tolman length~\cite{Tolman}, a key parameter in nucleation theory~\cite{Kashchiev,Koga,Prestipino1} (the only caution here would be that in Refs.\,\cite{Schrader,Block,Troester} the interfaces were taken at full equilibrium rather than under the metastable conditions typical of nucleation experiments).
We hereby consider the phenomenon of condensation in canonical-ensemble simulations in the light of a yet different theory which, at variance with van der Waals theory, right from the outset takes into account the periodic repetition of the system in space. The present theory originates from the few-line calculation presented in Sect.\,II-A of Ref.\,\cite{MacDowell1}, but now putting the emphasis on the pressure -- rather than the chemical potential -- equation of state, since it is the pressure that is directly accessible in a canonical-ensemble simulation. The theory is further refined by an extension to elongated-cubic simulation boxes. As a matter of principle, geometric shapes other than spherical, cylindrical, and tetragonal may also occur for the liquid drop in equilibrium with vapor, and this possibility will be examined closely within our theoretical framework.
The remainder of the paper is structured as follows. In Sec. II, we expose the details of our theory. In Sec. III, we compare simulation results for the cut-and-shifted Lennard-Jones potential with theoretical predictions, discussing the relative stability of spherical, cylindrical, and slab-like drops as a function of the box aspect ratio. The stability of other interface shapes, like those observed in simulation runs performed independently near the cylinder-slab transition density~\cite{Abramo2}, is also studied. Finally, we give our conclusions in Sec. IV.
\section{Theory}
\setcounter{equation}{0}
\renewcommand{\theequation}{2.\arabic{equation}}
In order to determine the pressure behavior of a fluid just above the vapor coexistence density, the idea originally put forward in Ref.~\cite{MacDowell1} is to compare the Helmholtz free energy $F$ of the homogeneous vapor with that of various competing heterogeneous ``phases'', differing in the shape of the liquid drop in thermal equilibrium with vapor. Since the thermodynamically stable phase for fixed temperature $T$, volume $V$, and particle number $N$ is the one with minimum $F$, the drop adopts the shape which makes the total free energy as small as possible, consistent with the amount of liquid present at the given density $\rho=N/V$.
If the focus is on the pressure equation of state, rather than on the chemical potential, it is convenient to express the free energy in terms of the specific volume $v=1/\rho$. This choice offers a number of practical advantages, as will be made clear below. Let us first estimate the free energy of the {\em homogeneous vapor} as a function of $v$ at fixed $T$. Denoting by $f(T,v)=F/N$ the free energy per particle, a general relation valid up to second-order terms in the deviations $\Delta T=T-T_0$ and $\Delta v=v-v_0$ from a given state point ${\cal S}_0=(T_0,v_0)$ is:
\begin{equation}
f=f_0-\frac{S}{N}\Delta T-P\Delta v-\frac{c_V}{2T}\Delta T^2-\frac{\alpha_P}{K_T}\Delta v\Delta T+\frac{1}{2vK_T}\Delta v^2\,,
\label{2-1}
\end{equation}
with $f_0=f(T_0,v_0)$. In the latter equation $S$ is the entropy, $c_V$ is the constant-volume specific heat, $\alpha_P$ is the isobaric expansion coefficient, and $K_T$ is the isothermal compressibility, all computed at ${\cal S}_0$. Choosing ${\cal S}_0$ to be the condensation point at temperature $T$, for $\Delta T=0$ and $\Delta v=v-v_{\rm v}$ ($v_{\rm v}\equiv 1/\rho_{\rm v}$ being the vapor specific volume at coexistence) it follows from Eq.\,(\ref{2-1}) that
\begin{equation}
\Delta F_{\rm hom}\equiv F-F_{\rm v}=P_{\rm v}(v_{\rm v}-v)N+\frac{1}{2v_{\rm v}K_{\rm v}}(v-v_{\rm v})^2N\,,
\label{2-2}
\end{equation}
where $P_{\rm v}$ and $K_{\rm v}$ denote the condensation pressure and the isothermal compressibility of the bulk vapor at $\rho_{\rm v}$ (in the following, we use ``v'' and ``l'' subscripts to denote bulk properties of the coexisting vapor and liquid).
For a vapor system in equilibrium with a {\em spherical} drop of liquid hosting $N_{\rm l}$ atoms, in the capillarity approximation the cost of droplet formation is
\begin{eqnarray}
\Delta F_{\rm sph}\equiv F_{{\rm v}+{\rm l}}-F_{\rm v}&=&(N-N_{\rm l})f_{\rm v}+N_{\rm l}f_{\rm l}+(36\pi)^{1/3}\gamma(N_{\rm l}v_{\rm l})^{2/3}-Nf_{\rm v}
\nonumber \\
&=&N_{\rm l}(f_{\rm l}-f_{\rm v})+(36\pi)^{1/3}\gamma(N_{\rm l}v_{\rm l})^{2/3}\,,
\label{2-3}
\end{eqnarray}
where $\gamma$ is the surface tension at temperature $T$ ({\it i.e.}, the free energy of the planar liquid-vapor interface). In deriving Eq.\,(\ref{2-3}), curvature corrections to surface tension~\cite{Prestipino1} as well as anisotropy effects~\cite{Prestipino2,Prestipino3} have been neglected. Assuming for the vapor and liquid fractions the same chemical potential and pressure as in the bulk limit, the difference $f_{\rm l}-f_{\rm v}$ becomes $P_{\rm v}(v_{\rm v}-v_{\rm l})$. Moreover $Nv=N_{\rm l}v_{\rm l}+(N-N_{\rm l})v_{\rm v}$, as no volume is attached to the interface (this is to be contrasted with the lever-rule estimate of $V_{\rm l}$ made in Ref.\,\cite{MacDowell1}, which implicitly assumed zero adsorption for the interface). In conclusion, we get:
\begin{equation}
\Delta F_{\rm sph}=P_{\rm v}(v_{\rm v}-v)N+(36\pi)^{1/3}\gamma v_{\rm l}^{2/3}\left(\frac{v_{\rm v}-v}{v_{\rm v}-v_{\rm l}}\right)^{2/3}N^{2/3}\,.
\label{2-4}
\end{equation}
From Eqs.\,(\ref{2-2}) and (\ref{2-4}), we readily derive the pressures of the two ``phases'':
\begin{equation}
P_{\rm hom}=P_{\rm v}+\frac{\rho-\rho_{\rm v}}{K_{\rm v}\rho}\,\,\,\,\,\,{\rm and}\,\,\,\,\,\,P_{\rm sph}=P_{\rm v}+\frac{2}{3}(36\pi)^{1/3}\frac{\gamma\rho_{\rm v}}{\rho_{\rm l}-\rho_{\rm v}}\left(\frac{\rho(\rho_{\rm l}-\rho_{\rm v})}{\rho-\rho_{\rm v}}\right)^{1/3}N^{-1/3}\,.
\label{2-5}
\end{equation}
At variance with Ref.\,\cite{MacDowell1}, $P_{\rm hom}$ is a non-linear function of $\rho$, which makes it better suited to reproduce the true pressure behavior close to $\rho_{\rm v}$ (see Fig.\,1 below). Note that the $(\rho-\rho_{\rm v})^2$ term in the expansion of $P_{\rm hom}$ may not be the exact one, since Eq.\,(\ref{2-2}) is only a second-order truncated expansion.
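As an illustrative numerical sketch (not part of the original analysis), the two formulas of Eq.\,(\ref{2-5}) are straightforward to evaluate; the parameter values below are the $T=0.90$ coexistence data and fitted surface tension ($\beta\gamma=0.19$) quoted in Sect.\,III for the cut-and-shifted LJ fluid.

```python
import math

# Illustrative parameters: T = 0.90 coexistence data and fitted surface
# tension (beta*gamma = 0.19) from Sect. III, in reduced LJ units.
P_v, rho_v, rho_l, K_v = 0.03146, 0.0451, 0.6649, 45.04
gamma = 0.19 * 0.90

def P_hom(rho):
    """Pressure of the homogeneous vapor, first formula of Eq. (2.5)."""
    return P_v + (rho - rho_v) / (K_v * rho)

def P_sph(rho, N):
    """Pressure of the vapor + spherical-drop 'phase', second formula of Eq. (2.5)."""
    pref = (2.0 / 3.0) * (36.0 * math.pi) ** (1.0 / 3.0) * gamma * rho_v / (rho_l - rho_v)
    return P_v + pref * (rho * (rho_l - rho_v) / (rho - rho_v)) ** (1.0 / 3.0) / N ** (1.0 / 3.0)
```

Note that $P_{\rm sph}$ exceeds $P_{\rm v}$ by an amount vanishing as $N^{-1/3}$, so for a finite box the spherical-drop branch lies above the coexistence pressure.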
When the liquid drop is a {\em cylinder} extending along the shorter edge of a cuboidal box, hence of length $L_x=L_y=L_z/a=(V/a)^{1/3}$ (where $a>1$ is the box aspect ratio), the free-energy excess over the vapor at $\rho_{\rm v}$ becomes:
\begin{equation}
\Delta F_{\rm cyl}=N_{\rm l}(f_{\rm l}-f_{\rm v})+2\pi\gamma r_{\rm cyl}L_x\,,
\label{2-6}
\end{equation}
where the radius $r_{\rm cyl}$ follows from
\begin{equation}
\pi r_{\rm cyl}^2L_x=N_{\rm l}v_{\rm l}\,\,\,\,\,\,{\rm with}\,\,\,\,\,\,N_{\rm l}=\frac{v_{\rm v}-v}{v_{\rm v}-v_{\rm l}}N\,.
\label{2-7}
\end{equation}
Upon inserting Eq.\,(\ref{2-7}) into (\ref{2-6}), we obtain:
\begin{equation}
\Delta F_{\rm cyl}=P_{\rm v}(v_{\rm v}-v)N+2\pi^{1/2}\gamma a^{-1/6}\rho^{-2/3}\left(\frac{\rho-\rho_{\rm v}}{\rho_{\rm l}-\rho_{\rm v}}\right)^{1/2}N^{2/3}\,,
\label{2-8}
\end{equation}
thus yielding
\begin{equation}
P_{\rm cyl}=P_{\rm v}-\frac{\pi^{1/2}\gamma v_{\rm l}^{1/2}}{3a^{1/6}v^{5/6}}\frac{v_{\rm v}-4v}{v_{\rm v}-v_{\rm l}}\left(\frac{v_{\rm v}-v_{\rm l}}{v_{\rm v}-v}\right)^{1/2}N^{-1/3}\,.
\label{2-9}
\end{equation}
Finally, when the liquid fills a {\em slab} lying perpendicularly to the longer box edge ($L_z$), the free-energy excess becomes:
\begin{equation}
\Delta F_{\rm slab}=N_{\rm l}(f_{\rm l}-f_{\rm v})+2\gamma L_x^2=P_{\rm v}(v_{\rm v}-v)N+\frac{2\gamma}{a^{2/3}}v^{2/3}N^{2/3}\,,
\label{2-10}
\end{equation}
whence a pressure of
\begin{equation}
P_{\rm slab}=P_{\rm v}-\frac{4\gamma}{3a^{2/3}}\rho^{1/3}N^{-1/3}\,.
\label{2-11}
\end{equation}
In the present analysis, a crossing of free energies as a function of $\rho$ entails a change in the relative stability of two drop shapes. Far from being a first-order transition, which can only occur in the thermodynamic limit and is not accompanied by any pressure jump, this shape transition is an equilibrium finite-size effect promoted by periodic boundary conditions. In fact, all shape transitions become rounded crossovers when thermal fluctuations are taken into account (see Refs.\,\cite{MacDowell1,MacDowell2}). Upon equating the free energies two at a time, and provided that the sequence of ``phases'' is the same as found in Ref.\,\cite{MacDowell2}, we obtain the following formulae for the ``transition'' densities:
\begin{eqnarray}
\rho_{{\rm hom}-{\rm sph}}&=&\rho_{\rm v}\left(1-(36\pi)^{1/4}\frac{(2K_{\rm v}\rho_{\rm v}\gamma)^{3/4}}{(\rho_{\rm l}-\rho_{\rm v})^{1/2}}N^{-1/4}\right)^{-1}\,;
\nonumber \\
\rho_{{\rm sph}-{\rm cyl}}&=&\rho_{\rm v}+\frac{4\pi}{81a}(\rho_{\rm l}-\rho_{\rm v})\,;
\nonumber \\
\rho_{{\rm cyl}-{\rm slab}}&=&\rho_{\rm v}+\frac{1}{\pi a}(\rho_{\rm l}-\rho_{\rm v})\,.
\label{2-12}
\end{eqnarray}
In particular, we note that the density range beyond $\rho_{\rm v}$ where the homogeneous vapor is {\em thoroughly stable} vanishes in the thermodynamic limit as $N^{-1/4}$. This $\rho$ interval should not be confused with the vapor metastability region, which instead is a kinetic concept more appropriate to bulk systems. The metastability region of vapor, which extends past the $T$-$P$ coexistence locus inside the liquid region, comprises all nominally-liquid states where a quenched vapor system can be maintained gaseous for a long time (that is, much longer than the typical observation times) before nucleation of the liquid occurs.
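As a quick numerical sketch (again using, for illustration, the $T=0.90$ parameters quoted in Sect.\,III), Eq.\,(\ref{2-12}) can be coded directly; for $a=1$ it places the sphere-cylinder and cylinder-slab thresholds at $\rho\approx 0.14$ and $\rho\approx 0.24$, respectively.

```python
import math

# Illustrative parameters: T = 0.90 coexistence data and beta*gamma = 0.19 (Sect. III).
rho_v, rho_l, K_v = 0.0451, 0.6649, 45.04
gamma = 0.19 * 0.90

def rho_hom_sph(N):
    """End of the strictly stable vapor range, first formula of Eq. (2.12)."""
    shift = ((36.0 * math.pi) ** 0.25 * (2.0 * K_v * rho_v * gamma) ** 0.75
             / math.sqrt(rho_l - rho_v) / N ** 0.25)
    return rho_v / (1.0 - shift)

def rho_sph_cyl(a=1.0):
    """Sphere-to-cylinder threshold, second formula of Eq. (2.12)."""
    return rho_v + 4.0 * math.pi / (81.0 * a) * (rho_l - rho_v)

def rho_cyl_slab(a=1.0):
    """Cylinder-to-slab threshold, third formula of Eq. (2.12)."""
    return rho_v + (rho_l - rho_v) / (math.pi * a)
```

Only the first threshold depends on $N$, consistently with the observation that the strictly stable vapor range shrinks as $N^{-1/4}$.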
With a further little effort, one may also derive the size of the liquid droplet in its various conformations, obtaining:
\begin{eqnarray}
r_{\rm sph}&=&\left(\frac{3}{4\pi}\frac{\rho-\rho_{\rm v}}{\rho_{\rm l}-\rho_{\rm v}}\right)^{1/3}\left(\frac{N}{\rho}\right)^{1/3}\,;
\nonumber \\
r_{\rm cyl}&=&\left(\frac{a^{1/3}}{\pi}\frac{\rho-\rho_{\rm v}}{\rho_{\rm l}-\rho_{\rm v}}\right)^{1/2}\left(\frac{N}{\rho}\right)^{1/3}\,;
\nonumber \\
d_{\rm slab}&=&a^{2/3}\frac{\rho-\rho_{\rm v}}{\rho_{\rm l}-\rho_{\rm v}}\left(\frac{N}{\rho}\right)^{1/3}\,,
\label{2-13}
\end{eqnarray}
where the latter quantity represents the thickness of the liquid slab.
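The drop sizes of Eq.\,(\ref{2-13}) admit an equally compact sketch (illustrative coexistence densities as above; note that for $a=1$ the slab thickness reaches the full box edge when $\rho=\rho_{\rm l}$, as it must):

```python
import math

rho_v, rho_l = 0.0451, 0.6649   # illustrative: T = 0.90 coexistence densities (Sect. III)

def drop_size(rho, N, a=1.0, shape="sph"):
    """Characteristic drop size from Eq. (2.13): radius for 'sph' and 'cyl',
    slab thickness for 'slab'."""
    x = (rho - rho_v) / (rho_l - rho_v)   # liquid volume fraction V_l/V
    box = (N / rho) ** (1.0 / 3.0)        # (N/rho)^(1/3) = V^(1/3)
    if shape == "sph":
        return (3.0 * x / (4.0 * math.pi)) ** (1.0 / 3.0) * box
    if shape == "cyl":
        return math.sqrt(a ** (1.0 / 3.0) * x / math.pi) * box
    if shape == "slab":
        return a ** (2.0 / 3.0) * x * box
    raise ValueError(shape)
```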
As a final comment, we underline that the above theory would also apply with no modifications to the solid-liquid transition~\cite{Statt}. However, a testing of the theory against simulation data would actually be impossible in this case, because of the tendency of both phases to go deeply metastable (see, {\it e.g.}, Ref.~\cite{Abramo2}), a fact that generally prevents one from observing any shape transition. A notable exception is the numerical evidence reported in Fig.\,2 of Ref.\,\cite{Bernard}, where, thanks to a smart simulation method, the low system dimensionality, and very long runs, it was possible to construct a pressure equation of state showing a plateau in the density range corresponding to the slab formation.
\section{Assessment of the theory: Lennard-Jones model}
\setcounter{equation}{0}
\renewcommand{\theequation}{3.\arabic{equation}}
As originally shown in Ref.\,\cite{MacDowell2}, the sequence of shape transitions in a large periodic cubic box is expected to be hom $\rightarrow$ sph $\rightarrow$ cyl $\rightarrow$ slab, then followed by the reversed transition sequence with the role of vapor and liquid interchanged. This finding is confirmed by the results of our simulations for the cut-and-shifted LJ model and gratifyingly reproduced by our theory for $a=1$ (see Sect.\,III-A below). The possibility of other stable shapes, which as far as we know was never examined before, is discussed in the next Sect.\,III-B.
\subsection{Results from sequential simulations}
In a recent paper~\cite{Abramo2}, we performed extensive Metropolis MC simulations of the {\em cut-and-shifted LJ model} in the $NVT$ ensemble. This model is characterized by the following interaction potential:
\begin{eqnarray}
u(r)=\left\{
\begin{array}{rl}
4\epsilon\left[(\sigma/r)^{12}-(\sigma/r)^6\right]-c\,, & \,\,\,{\rm for}\,\,r<r_{\rm cut}\\
0\,, & \,\,\,{\rm for}\,\,r>r_{\rm cut}
\end{array}
\right.
\label{3-1}
\end{eqnarray}
with $r_{\rm cut}=2.5\sigma$, where the constant $c$ is chosen in such a way as to make $u(r)$ everywhere continuous (from here onward, all quantities will be expressed in the units set by $k_{\rm B},\epsilon$, and $\sigma$, where $k_{\rm B}$ is Boltzmann's constant). The critical temperature of this system is slightly less than 1.10~\cite{Smit}. In particular, in Ref.\,\cite{Abramo2} we considered the behavior of the system along the isotherm $T=0.90$, to see whether in the liquid-vapor region equilibrium can be reached by traditional simulation methods with local moves. To this end we performed simulation runs in a sequence, starting at each density from the last system configuration produced in the previous run at a slightly smaller density. We thus showed that, anywhere within the coexistence region, heterogeneous equilibrium can be established in an affordable time. This is proved by the fact that we were able to obtain the known liquid-vapor coexistence densities by integrating the pressure equation of state across the two-phase region. At least this was the case for a system of $N=1372$ particles or smaller, while a larger system of 4000 particles was found to be plagued by metastability ({\it i.e.}, the data collected along the forward and backward trajectories were not the same, see Fig.\,1 below).
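In reduced units ($\epsilon=\sigma=1$), the pair potential of Eq.\,(\ref{3-1}) amounts to the following minimal sketch; the shift $c$ is fixed by continuity at $r_{\rm cut}$.

```python
def lj_cut_shifted(r, eps=1.0, sigma=1.0, r_cut=2.5):
    """Cut-and-shifted LJ pair potential, Eq. (3.1); subtracting lj(r_cut)
    implements the constant c that makes u vanish continuously at r_cut."""
    lj = lambda s: 4.0 * eps * ((sigma / s) ** 12 - (sigma / s) ** 6)
    return lj(r) - lj(r_cut) if r < r_cut else 0.0
```

Because of the shift, the well depth is slightly shallower than $\epsilon$ (about $0.984\epsilon$ for $r_{\rm cut}=2.5\sigma$), which is one reason why the critical parameters differ from those of the full LJ potential.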
Having accurate simulation results available in the liquid-vapor region gives the opportunity to make a critical appraisal of the theory presented in Sect.\,II. For $T=0.90$, the liquid-vapor coexistence pressure and densities are $P_{\rm v}=0.03146,\rho_{\rm v}=0.0451$, and $\rho_{\rm l}=0.6649$, respectively~\cite{Abramo2}. In order to compute $K_{\rm v}$, we carried out a long $NPT$ simulation of the vapor at $P=P_{\rm v}$, eventually finding an isothermal compressibility of 45.04 (we computed $K_{\rm v}$ from the formula $K_T=\beta(\langle V^2\rangle-\langle V\rangle^2)/\langle V\rangle$, where $\beta=(k_{\rm B}T)^{-1}$ and $\langle\cdots\rangle$ is the isothermal-isobaric average). The only theory parameter left to set is $\gamma$, and we decided to choose it in such a way that $\rho_{{\rm hom}-{\rm sph}}$ roughly coincides with the location of the first pressure drop in the MC data for $N=4000$~\cite{Abramo2} (we thus obtained $\beta\gamma=0.19$). We report in Fig.\,1 the results from theory, and compare them against MC data. In the top panel, we show $\Delta F/N-P_{\rm v}(v_{\rm v}-v)$ for the homogeneous vapor (cf. Eq.\,(\ref{2-2})) and the various heterogeneous phases (cf. Eqs.\,(\ref{2-4}), (\ref{2-8}), and (\ref{2-10})). The sequence of stable conformations as a function of density is as expected, that is hom $\rightarrow$ sph $\rightarrow$ cyl $\rightarrow$ slab. In the middle panel of Fig.\,1 we compare the theoretical pressure (red line) with MC data from both forward- and backward-travelled paths~\cite{Abramo2}: we see an overall good agreement, especially in the slope of the pressure plateaus; the location of the shape transitions is also well reproduced by the theory, considering that the data from the backward trajectory are probably the closest to equilibrium (indeed, it is much easier for the system to preserve its structure during overcompression than when the density is reduced below the transition threshold).
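The fluctuation estimator quoted above for $K_T$ can be sketched in a few lines; as an illustration (an assumption, not the LJ system of this work) it is checked on the ideal gas in the $NPT$ ensemble, whose volume distribution is $\Gamma(N+1,k_{\rm B}T/P)$ so that $K_T=1/P$ exactly.

```python
import random

def compressibility(volumes, beta):
    """Fluctuation estimator K_T = beta*(<V^2> - <V>^2)/<V> from NPT volume samples."""
    n = len(volumes)
    mean = sum(volumes) / n
    var = sum((v - mean) ** 2 for v in volumes) / n
    return beta * var / mean

# Illustrative check on the ideal gas (assumption: not the LJ system of the paper),
# for which V is Gamma(N+1, k_B*T/P)-distributed in NPT and K_T = 1/P.
random.seed(0)
T, P, N = 0.90, 0.03146, 500
beta = 1.0 / T
samples = [random.gammavariate(N + 1, T / P) for _ in range(20000)]
K_T = compressibility(samples, beta)
```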
Finally, the bottom panel in Fig.\,1 reports the characteristic size of the liquid drop in each ``phase''.
For high enough densities, other drop conformations become available to the system, as seen in the complete MC pressure equation of state (Fig.\,2), where each inflection point marks a different shape transition. The theory for these further transitions can be formulated following the same steps as before, by everywhere exchanging vapor with liquid, and the results are shown in Fig.\,3 for the same parameters used to draw Fig.\,1. From a glance at Fig.\,3 we see that the accuracy of the theory is now poorer, though it still qualitatively reproduces the MC data. A worse agreement with Monte Carlo should anyway be expected, simply because equilibration is more difficult at high density and the spatial definition of the liquid-vapor interface is also poorer. In summary, we reached a good agreement between simulation and theory by adjusting one single parameter.
In Ref.\,\cite{Abramo2} we also reported simulation data obtained from sequential runs of $N=1500$ Lennard-Jones particles in a periodic cuboidal box with edges in the ratio of 1:1:3, for $T=0.90$. These results revealed that the spherical ``phase'' is apparently never stable whereas the cylindrical ``phase'' is strongly reduced in extent. We tested this conclusion against the theory of Sect.\,II and we indeed found that, under compression, the homogeneous vapor first gives way to a cylindrical drop of liquid, which then changes to a slab upon increasing the density further (see Fig.\,4). Again, we see a clear correlation between the location of the inflection points in the pressure data and the transition thresholds derived from theory. As a last note we observe that, according to the same theory, the stability of the cylindrical drop progressively reduces upon increasing $a$, until the cylindrical phase disappears altogether (for $a\approx 10$) and the homogeneous vapor is thereafter directly transformed into the slab phase. The use of an elongated box is instrumental in obtaining a wider density interval for the study of the planar liquid-vapor interface, which could be useful for example in the determination of the surface tension as a function of temperature.
\subsection{Other drop shapes}
Our next point concerns the evidence, originally found in LJ-model simulations for $T=0.75$~\cite{Abramo2}, of an additional pressure ``plateau'' (in fact, a slightly inclined flat region) lying between the ``cylinder'' and the ``slab'' plateaus. The trick to obtain this result was to start the simulation from scratch at every density. We commented in Ref.\,\cite{Abramo2} that, in the interval of densities corresponding to the extra plateau, the liquid drop has the shape of a slab with a hole.
We carried out $NVT$ MC simulations of the cut-and-shifted LJ fluid for $T=0.72$ in a periodic cubic box, for a number of densities close to $\rho_{\rm cyl-slab}\approx 0.25$ (with the values of $\rho_{\rm v}$ and $\rho_{\rm l}$ taken from Ref.\,\cite{Trokhymchuk}), always assuming a face-centered-cubic structure for the initial configuration of the run. Clearly, this may not be the best choice to study heterogeneous equilibria by simulation, since relaxation times would be much longer than those encountered when performing simulation runs sequentially. However, the point is that by this method we may find out long-lived metastable states that would not be reached by sequential simulations. For each density we first equilibrated the system for 5 million cycles, then gathered equilibrium statistics for just as many cycles. The pressure data obtained for $N=4000$ particles are plotted in the middle panel of Fig.\,5. In the same figure we also report the outcome of the theory. We see that the data points are rather clearly located over four distinct pressure plateaus, each being representative of a different long-lived drop conformation. While three of these plateaus have already been identified as representative of spherical, cylindrical, and slab-like drops, in the fourth plateau centered at $\rho=0.25$ a visual inspection of the system configuration reveals that the shape is novel: the liquid drop resembles either a {\em double cylinder} (DC, see Fig.\,6a) or a {\em punched slab} (PS, see Fig.\,6b). Owing to the periodicity of the simulation box, the DC and PS shapes would actually be similar: it suffices to move the DC center to a box vertex (and possibly rotate the structure just to improve visibility) to realize that a DC is in fact not dissimilar from a PS (Fig.\,7a). Similarly, by moving the hole center to a box vertex the PS of Fig.\,6b ends up looking like a DC (Fig.\,7b).
However, while the boundary of the hole associated with a perfect DC is a square, the hole of the PS seen in Fig.\,6b is roughly circular; hence, it would be wrong to conclude that the two geometries are exactly the same up to a folding operation.
Let us first attempt to model the liquid drop as a perfect slab with a cylindrical hole (we call this geometric shape ``type-1 PS''). Let $d$ and $r<L_x/2$ be the slab thickness and hole radius, respectively. The interface area equals
\begin{equation}
A=2(L_x^2-\pi r^2)+2\pi rd
\label{3-2}
\end{equation}
with
\begin{equation}
V=(L_x^2-\pi r^2)d=N_{\rm l}v_{\rm l}\equiv V_{\rm l}\,.
\label{3-3}
\end{equation}
Upon eliminating $r$ in favor of $d$, we obtain:
\begin{equation}
A(d)=\frac{2V_{\rm l}}{d}+2\sqrt{\pi(L_x^2d^2-V_{\rm l}d)}\,.
\label{3-4}
\end{equation}
The previous equation is correct only provided $0<r<L_x/2$, or
\begin{equation}
\frac{V_{\rm l}}{L_x^2}<d<\frac{4}{4-\pi}\frac{V_{\rm l}}{L_x^2}\,.
\label{3-5}
\end{equation}
Upon taking $x=L_x^2d/V_{\rm l}-1=\pi r^2d/V_{\rm l}$ and $y=A/(2L_x^2)-1$, the problem is reduced to finding the absolute minimum of
\begin{equation}
y=-\frac{x}{1+x}+\frac{V_{\rm l}}{L_x^3}\sqrt{\pi(x+x^2)}
\label{3-6}
\end{equation}
in the interval $0<x<\pi/(4-\pi)$. The hole only forms if, besides the local minimum at $x=0$, a deeper (negative) $y$ minimum also occurs (at a certain $x_{\rm min}$ corresponding to $d=d_{\rm min}\propto N^{1/3}$). This requires $\rho$ to be less than a threshold density ($\simeq 0.175$, see Fig.\,8 below; beyond this density, the free energy becomes identical to $\Delta F_{\rm slab}$). With the $A$ so determined, the free energy of the PS is given by
\begin{equation}
\Delta F=P_{\rm v}(v_{\rm v}-v)N+\gamma A\,.
\label{3-7}
\end{equation}
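The minimization of Eq.\,(\ref{3-6}) lends itself to a compact numerical sketch. Since for a cubic box $V_{\rm l}/L_x^3$ is just the liquid volume fraction $(\rho-\rho_{\rm v})/(\rho_{\rm l}-\rho_{\rm v})$, the whole problem depends on a single parameter. The coexistence densities used below are the $T=0.90$ values of Sect.\,III-A (an assumption: the paper's $T=0.72$ threshold of about 0.175 was obtained with other inputs).

```python
import math

def min_reduced_area(c):
    """Grid-scan minimum of y(x) in Eq. (3.6) on 0 < x < pi/(4 - pi), where
    (for a cubic box) c = V_l/L_x^3 is the liquid volume fraction."""
    xmax = math.pi / (4.0 - math.pi)
    xs = (xmax * k / 10000.0 for k in range(1, 10001))
    return min(-x / (1.0 + x) + c * math.sqrt(math.pi * (x + x * x)) for x in xs)

def hole_forms(rho, rho_v=0.0451, rho_l=0.6649):
    """True when the type-1 PS has a deeper (negative) minimum than the plain slab."""
    return min_reduced_area((rho - rho_v) / (rho_l - rho_v)) < 0.0
```

With these inputs the hole disappears slightly below $\rho\approx 0.18$, of the same order as the threshold quoted in the text.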
However, representing the slab hole as a perfect cylinder is too rough an approximation, since the physical drop would certainly manage to avoid any sharp edges. A more realistic model (say, ``type-2 PS'') entails a smooth and curved hole boundary, and the most natural solution would be the surface of the innermost half of a torus (IHT, namely the part of the torus which lies inside a cylinder having the same symmetry axis as the torus and radius equal to the torus major radius -- {\it i.e.}, to the distance $r$ from the center of the hole to the center of the tube). While the minor radius of the torus (that is, the radius of the tube section) has to be half of the slab thickness ($d/2$), the major radius must obey $d/2<r<L_x/2$ (when $r=d/2$ the hole closes and the torus becomes a horn torus).
We used Pappus' centroid theorem to compute the area and volume of the IHT of radii $R_<=d/2$ and $R_>=r$. From the general formulas,
\begin{equation}
A_{\rm IHT}=2\pi^2R_<R_>-4\pi R_<^2\,\,\,\,\,\,{\rm and}\,\,\,\,\,\,V_{\rm IHT}=\pi^2R_<^2R_>-\frac{4}{3}\pi R_<^3\,,
\label{3-8}
\end{equation}
it follows that the $A$ and $V$ in Eqs.\,(\ref{3-2}) and (\ref{3-3}) should be replaced with
\begin{equation}
A=2(L_x^2-\pi r^2)+\pi^2dr-\pi d^2\,\,\,\,\,\,{\rm and}\,\,\,\,\,\,V=(L_x^2-\pi r^2)d+\frac{1}{4}\pi^2d^2r-\frac{\pi}{6}d^3\,.
\label{3-9}
\end{equation}
Using the equation $V=V_{\rm l}$ to simplify the $A$ expression, we eventually obtain:
\begin{equation}
A=\frac{2V_{\rm l}}{d}+\frac{1}{2}\pi^2dr-\frac{2}{3}\pi d^2\,,
\label{3-10}
\end{equation}
where $r$ is the solution to
\begin{equation}
r^2-\frac{1}{4}\pi dr+\frac{V_{\rm l}}{\pi d}+\frac{1}{6}d^2-\frac{L_x^2}{\pi}=0\,.
\label{3-11}
\end{equation}
If both solutions of Eq.\,(\ref{3-11}) are valid, the one providing the smaller $A$ value must be chosen. Then, the free energy follows from the general Eq.\,(\ref{3-7}).
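The root selection just described can be sketched as follows (illustrative state point, assumed: the $T=0.90$ coexistence densities of Sect.\,III-A with $N=4000$ and $\rho=0.20$ in a cubic box); by construction, the selected $r$ reproduces the prescribed liquid volume through Eq.\,(\ref{3-9}).

```python
import math

# Illustrative state point (assumed): T = 0.90 coexistence densities,
# N = 4000 particles at overall density rho = 0.20, cubic box of edge L.
rho_v, rho_l, N, rho = 0.0451, 0.6649, 4000, 0.20
V = N / rho
L = V ** (1.0 / 3.0)
V_l = V * (rho - rho_v) / (rho_l - rho_v)   # lever rule, zero interface volume

def type2_ps(d):
    """Hole radius r and interface area A of a type-2 PS of thickness d,
    Eqs. (3.10)-(3.11); returns None when no root satisfies d/2 < r < L/2."""
    b = math.pi * d / 4.0                                  # r^2 - b*r + c0 = 0
    c0 = V_l / (math.pi * d) + d * d / 6.0 - L * L / math.pi
    disc = b * b - 4.0 * c0
    if disc < 0.0:
        return None
    roots = [(b + s * math.sqrt(disc)) / 2.0 for s in (1.0, -1.0)]
    valid = [r for r in roots if d / 2.0 < r < L / 2.0]
    if not valid:
        return None
    area = lambda r: 2.0 * V_l / d + 0.5 * math.pi ** 2 * d * r - (2.0 / 3.0) * math.pi * d * d
    r = min(valid, key=area)   # if both roots are admissible, keep the smaller area
    return r, area(r)
```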
Finally, we considered a drop having the shape of two equal cylinders crossing each other at right angles. We call this shape a DC (the cylinder axes must also intersect each other). While each cylinder axis has length $L_x$, the cylinder radius $r$ should be consistent with the $V=V_{\rm l}$ condition. In order to compute the area and volume of a DC, it is useful to consider the solid body which the two cylinders have in common, which goes under the name of {\em bicylinder} (or Steinmetz solid). Its area and volume can easily be obtained by multiple integration, and the results are $16r^2$ and $(16/3)r^3$. Clearly, the area and volume of a DC then read:
\begin{equation}
A=4\pi rL_x-16r^2\,\,\,\,\,\,{\rm and}\,\,\,\,\,\,V=2\pi r^2L_x-\frac{16}{3}r^3\,.
\label{3-12}
\end{equation}
From $V=V_{\rm l}$ a third-order algebraic equation is obtained for $r$, which turned out to have only one solution satisfying $0<r<L_x/2$. Upon plugging this $r$ in $A$, we again derive $\Delta F$ from Eq.\,(\ref{3-7}).
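The cubic root-finding for the DC can be sketched in the same spirit (our illustration, with arbitrary test values):

```python
import numpy as np

def dc_radius_area(V_l, L_x):
    """Radius r and area A of the double-cylinder (DC) drop, Eq. (3-12),
    with r fixed by the condition V = V_l."""
    # (16/3)*r^3 - 2*pi*L_x*r^2 + V_l = 0; the left-hand side is
    # decreasing on (0, L_x/2), hence at most one root is physical
    roots = np.roots([16.0/3.0, -2.0*np.pi*L_x, 0.0, V_l])
    valid = [z.real for z in roots
             if abs(z.imag) < 1e-10 and 0.0 < z.real < L_x/2.0]
    if not valid:
        return None, None
    r = valid[0]
    return r, 4.0*np.pi*r*L_x - 16.0*r**2

# arbitrary test values
r, A = dc_radius_area(90.0, 10.0)
```

The uniqueness of the physical root follows from the monotonicity of the cubic on $(0,L_x/2)$, consistently with what is stated in the text.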
For $T=0.72$, the free-energy curves for the three shapes just considered are plotted as a function of the density in the top panel of Fig.\,8. In the same picture, the lower envelope of the free energies for the common shapes is reported for comparison (red line). We see that neither the PS nor the DC ever provides the shape of the stable drop. However, a type-2 PS with $r=L_x/2$ is nearly stable close to $\rho_{\rm cyl-slab}\simeq 0.25$, which may explain why this drop shape was observed in our simulations from scratch (as seen in Fig.\,8, suitable type-1 PS and DC also exist that offer reasonably good solutions near $\rho_{\rm cyl-slab}$). Beyond $\rho=0.24$, the optimal type-2 PS has $r=d/2$, meaning that the hole eventually closes and the torus becomes a horn torus (however, this degenerate type-2 PS is obviously no longer reliable as a drop shape). Finally, notice that changing $T=0.72$ to $T=0.90$ does not modify the theoretical picture in any respect. The conclusion is that, according to the present theory, the three novel shapes considered in this section are metastable. Hence, the sequence of stable drops for $T=0.72$ remains the same as that identified in Sect.\,III-A for $T=0.90$ and $a=1$.
\section{Conclusions}
Below the critical temperature, the structure of a heterogeneous fluid simulated under periodic boundary conditions is sensitive to the imposed density. Each change of conformation observed within the binodal line goes along with a characteristic fall in the pressure. In this paper, the by-now classical evidence of spherical, cylindrical, and slab-like liquid drops was re-examined from the viewpoint of a theory having its roots in a proof-of-concept calculation found in Ref.~\cite{MacDowell1}. In spite of its simplicity, this theory reproduced well the behavior of a Lennard-Jones fluid along the liquid-vapor region, for both a cubic and an elongated box.
Further drop shapes emerged near the crossover region from cylindrical to slab-like when simulations, rather than being performed in a sequence as is common practice, were started for each density from a face-centered-cubic configuration. In this case, the liquid drop occasionally exhibited the shape of a punched slab with a roughly circular hole. Employing our theory, we found that this peculiar drop shape is actually only metastable, {\it i.e.}, a long-lived conformation that is nonetheless doomed to decay.
\section{Introduction}
The effect of a tilted potential in periodic systems has been studied for a long time. This is because the linear potential corresponds to a static electric field and the electric-field effects in solids are an important issue in condensed matter physics from both fundamental and application viewpoints. In this direction, various interesting and significant phenomena, such as the Bloch oscillation~\cite{Bloch1929}, the Zener tunneling~\cite{Zener1934}, and the Wannier-Stark ladder~\cite{Wannier1960, Gluck2002}, have been found. While the understanding of these phenomena has been largely advanced, these phenomena in strongly interacting systems have not been fully understood yet and have been an intriguing topic~\cite{Taguchi2000, Yamakawa2017, Oka2003, Eckstein2010, Meisner2010, Oka2012, Aron2012, Takasan2019breakdown}.
In recent years, a tilted potential is realized in atomic-molecular-optical (AMO) systems such as ultracold atoms and it is used as a convenient tool to induce or control various quantum many-body phenomena. For instance, the bosonic version of the Bloch oscillation~\cite{Kolovsky2004, Mahmud2014, Meinert2014, Geiger2018} and the Zener tunneling~\cite{Tomadin2008, Chen2011, Kolovsky2016LZ} have been studied. Quantum phase transitions induced by a tilt have also been widely investigated~\cite{Sachdev2002, Pielawa2011, Simon2011, Kolodrubetz2012, Kolovsky2016, Buyskikh2019}.
Very recently, a tilted potential has been gathering renewed attention in the context of thermalization problems in quantum systems~\cite{Schulz2019, vanNieuwenburg2019, Sala2020, Khemani2020, Desaules2021, GuardadoSanchez2020, Scherg2021, Kohlert2021, Morong2021, Guo2020}. Remarkably, it has been found that interacting fermions in a tilted lattice show similar behavior to the many-body localization in disordered systems~\cite{Schulz2019, vanNieuwenburg2019}. This phenomenon is called Stark many-body localization and has been studied extensively. In similar setups, new mechanisms preventing thermalization, called Hilbert space fragmentation~\cite{Sala2020} and Hilbert space shattering~\cite{Khemani2020}, have been proposed. Quantum many-body scars in a tilted Hubbard model have also been studied~\cite{Desaules2021}. Furthermore, related experiments in ultracold atoms~\cite{GuardadoSanchez2020, Scherg2021, Kohlert2021}, trapped ions~\cite{Morong2021}, and superconducting qubits~\cite{Guo2020} have been conducted. Tilted-potential systems have become an important platform for investigating quantum many-body phenomena.
\begin{figure}
\includegraphics[width=7.5cm]{fig1_v7.pdf}
\caption{
(a, b) Schematic picture of (a) the Mott insulating regime ($E \ll U$) and (b) the Wannier-Stark localized regime ($U \ll E$). (c) Time average of the double occupancy per site $\overline{\mathcal{N}}_\mathrm{double}$ [Eq.~\eqref{eq:ave_double_occ}], starting from the singly occupied state $\ket{\uparrow\downarrow\uparrow\downarrow\cdots}$ for $L=10$. The time step $\Delta t$ is set to $1/800$. (d) Effective interaction $J_\mathrm{eff}(E)/J_\mathrm{eff}(0)$ defined in Eq.~\eqref{eq:J_eff}. $t_h$ ($t_h^{-1}$) is used as the unit of energy (time).
}
\label{fig:fig1}
\end{figure}
In this paper, we investigate the magnetism, which is one of the most important quantum many-body phenomena, in tilted fermionic Mott insulators. This is experimentally relevant to both Mott insulator materials under a static electric field and AMO systems with a linear potential. One of the authors has studied the magnetism of fermionic Mott insulators with a small tilt, schematically shown in Fig.~\ref{fig:fig1}(a)~\cite{Takasan2019}. In Ref.~\cite{Takasan2019}, it was demonstrated that the antiferromagnetic coupling is enhanced with a tilt, which was used for controlling various magnetic phases with electric fields. Recently, this idea has been shown to be applicable to more generic setups and useful for controlling other types of magnetic interactions~\cite{Furuya2021a, Furuya2021b}. In this paper, we address a broader parameter range of tilt including a much larger one. With a large tilt, it is naively expected that the Mott-insulating state is broken through the many-body Zener breakdown~\cite{Taguchi2000, Yamakawa2017, Oka2003, Eckstein2010, Meisner2010, Oka2012, Aron2012, Takasan2019breakdown}. This is true when the tilt per site is of the same order as the on-site interaction. However, with a much larger tilt, the fermions can remain localized even in the presence of the tilt. This is induced by the Wannier-Stark localization~\cite{Wannier1960, Gluck2002, Schulz2019, vanNieuwenburg2019}, which freezes the charge degree of freedom. Thus, the system is still described as localized spins under a large tilt. Our question is what kind of magnetism emerges in this localized spin system and how the large tilt regime is connected to the small tilt one.
To tackle this issue, we study the one-dimensional Hubbard model with a tilt. One approach is perturbation theory: we derive the effective spin model for a generic size of the tilt and find that a ferromagnetic interaction appears in the large tilt regime. The other approach is to solve the many-body Schr\"odinger equation numerically to track the spin dynamics. The numerical result is consistent with the perturbation theory. The dynamics under a tilt is itself also interesting because we can control the speed and time direction by changing the size of the tilt. We mention the application of this dynamics to the experimental measurement of out-of-time ordered correlators~\cite{Larkin1969, Maldacena2016}. Finally, we discuss platforms for experimentally observing the signature of the ferromagnetism.
\section{Model}
We study the one-dimensional fermionic Hubbard model with a linear potential~\cite{FN3}. The Hamiltonian is given by
\begin{align}
H &= - t_h \sum_{j=1}^{L-1} \sum_{\sigma=\uparrow, \downarrow} ( c^\dagger_{j+1 \sigma} c_{j \sigma} + \mathrm{h.c.}) \nonumber \\
& \qquad + U \sum_{j=1}^L n_{j \uparrow} n_{j \downarrow} +\sum_{j=1}^L \sum_{\sigma=\uparrow,\downarrow} jE n_{j \sigma}, \label{eq:model_lgauge}
\end{align}
where $c_{j \sigma}$ ($c_{j \sigma}^\dagger$) is the annihilation (creation) operator of a fermion at the $j$-th site with the spin $\sigma (=\uparrow, \downarrow)$ and $n_{j \sigma}=c_{j \sigma}^\dagger c_{j \sigma}$. Here, we choose the open boundary condition, which corresponds to the realistic setup in the AMO systems such as ultracold atoms in an optical lattice~\cite{GuardadoSanchez2020, Scherg2021, Kohlert2021}. $t_h$ and $U$ represent the hopping amplitude and the on-site interaction energy, respectively. Throughout this paper, we use $t_h$ ($t_h^{-1}$) as the unit of energy (time). The size of the tilt is denoted by $E$, which is related to the physical electric field $\mathcal{E}$ in electronic systems as $E=|e|a\mathcal{E}/\hbar$, where $e$ and $a$ are the elementary charge and the lattice constant. To study the properties as a localized spin system, we focus on the half-filled and repulsive ($U>0$) case throughout this paper. For later convenience, we introduce the other gauge choice. With a time-dependent gauge transformation $U(t)=\exp[-iEt\sum_{j, \sigma} j n_{j, \sigma}]$, the Hamiltonian is transformed into $\tilde{H}(t)=U^\dagger H U - i U^\dagger \partial_t U$, which is calculated as
\begin{align}
\tilde{H}(t) \!
&= \! - t_h \sum_{j=1}^{L-1} \sum_{\sigma=\uparrow, \downarrow}( e^{iEt} c^\dagger_{j+1 \sigma} c_{j \sigma} + \mathrm{h.c.}) + U \sum_{j=1}^{L} n_{j \uparrow} n_{j \downarrow}. \label{eq:model_vgauge}
\end{align}
This gauge choice is called velocity gauge, whereas the one in Eq.~\eqref{eq:model_lgauge} is called length gauge~\cite{FN4}.
\begin{figure}
\includegraphics[width=7.5cm]{L10N5E0-60U50-dbl-sglcmn-v4.pdf}
\caption{Time evolution of the doublon number per site $\mathcal{N}_\mathrm{double}(t)$ [Eq.~\eqref{eq:double_occ}], where the electric field $E$ is switched on at $t=50$. The data for $E\leq (\geq)~50$ are shown in the left (right) panel. We set $U=50$, $L=10$, and $\Delta t=1/3200$, and choose the singly occupied state $\vert\uparrow\downarrow\uparrow\downarrow\cdots\uparrow\downarrow\rangle$ as the initial state. $t_h$ ($t_h^{-1}$) is used as the unit of energy (time).
}
\label{fig:doubleocc}
\end{figure}
\section{Localization of the charge degree of freedom}
\label{sec:charge_dynamics}
Let us start by examining the charge dynamics. We numerically solve the many-body Schr\"odinger equation of the model \eqref{eq:model_vgauge} directly with the fourth-order Runge--Kutta method, which can treat only small system sizes but provides information independent of approximations, as long as we choose the time step $\Delta t$ small enough that the effect of time discretization is negligible. Note that we use the velocity-gauge Hamiltonian \eqref{eq:model_vgauge} instead of the length-gauge Hamiltonian \eqref{eq:model_lgauge} because of the numerical efficiency~\cite{FN2}. We choose the N\'eel state $\ket{\uparrow \downarrow \uparrow \downarrow \cdots}$ as the initial state and follow the time evolution of the doublon number per site,
\begin{gather}
\mathcal{N}_\mathrm{double}(t)=\frac{1}{L}\sum_{j=1}^L \langle\Psi(t)\vert n_{j\uparrow}n_{j\downarrow} \vert\Psi(t)\rangle, \label{eq:double_occ}
\end{gather}
where $\vert\Psi(t)\rangle$ is the many-body wavefunction at time $t$. The field-strength dependence of the doublon dynamics is shown in Fig.~\ref{fig:doubleocc}. For small fields of $E < U$, the doublon number is small under a tilt because the Mott insulating state is still preserved. At the resonant points $nE = U$ ($n=1, 2, \cdots$), the number becomes very large just after applying the electric field, which breaks the Mott insulators. In contrast, for larger values of $E>U$, the double occupancy takes smaller values, of the same order of magnitude as in the Mott insulating regime ($E<U$). This supports the realization of a localized spin system. To obtain the whole picture, we calculate the time average of the doublon number
\begin{gather}
\overline{\mathcal{N}}_\mathrm{double}=\frac{1}{t_1-t_0} \int_{t_0}^{t_1}dt~\mathcal{N}_\mathrm{double}(t), \label{eq:ave_double_occ}
\end{gather}
with $t_0=10$ and $t_1=100$ when we turn on the tilt at $t=0$~\cite{FN7}. The averaged doublon numbers for different points of $(E, U)$ are summarized in Fig.~\ref{fig:fig1}~(c). This figure shows that there is a well-defined Wannier-Stark localized regime, which extends over a broad range of parameters. Below, we study the magnetism in the Mott-insulating regime and the Wannier-Stark localized regime, where the fermions behave as localized spins.
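As a minimal illustration of the two ingredients used in this section, namely fourth-order Runge--Kutta integration of the Schr\"odinger equation and Wannier--Stark localization, the following single-particle sketch (our own, not the many-body calculation performed in this paper) evolves a particle on a tilted chain and compares its spread with and without the tilt; all parameter values are illustrative:

```python
import numpy as np

def rk4_evolve(H, psi0, dt, nsteps):
    """Integrate i d|psi>/dt = H|psi> with the fourth-order Runge-Kutta method."""
    rhs = lambda p: -1j*(H @ p)
    psi = psi0.astype(complex)
    for _ in range(nsteps):
        k1 = rhs(psi)
        k2 = rhs(psi + 0.5*dt*k1)
        k3 = rhs(psi + 0.5*dt*k2)
        k4 = rhs(psi + dt*k3)
        psi = psi + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)
    return psi

def rms_spread(psi, j0):
    """Root-mean-square distance of the wavefunction from site j0."""
    p = np.abs(psi)**2
    j = np.arange(len(psi))
    return np.sqrt(np.sum(p*(j - j0)**2))

# illustrative parameters (single particle on an open chain)
L, j0, t_h, dt, t_final = 81, 40, 1.0, 0.01, 8.0
hop = -t_h*(np.eye(L, k=1) + np.eye(L, k=-1))
psi0 = np.zeros(L)
psi0[j0] = 1.0
spreads = {}
for E in (0.0, 4.0):
    # tilt measured from the initial site, to keep the spectrum centered
    H = hop + np.diag(E*(np.arange(L) - j0))
    psi = rk4_evolve(H, psi0, dt, int(round(t_final/dt)))
    spreads[E] = rms_spread(psi, j0)
# E = 0: ballistic spreading; E >> t_h: the particle stays pinned within a
# few sites of j0 (single-particle Wannier-Stark localization)
```

Without the tilt the wave packet spreads ballistically, while for $E\gg t_h$ it remains confined to a few sites, mirroring the frozen charge dynamics discussed above.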
\section{Effective spin Hamiltonian}
\label{sec:effective_model}
To study the magnetism in the Mott and Wannier-Stark regimes, we start with the effective spin model based on the perturbation theory. We consider the strong coupling regime $U \gg t_h$ and treat the hopping term [the first term in Eq.~\eqref{eq:model_lgauge}] as the perturbation.
Let us begin with the small tilt case $|E|<U$. Here, we assume that the value of $E$ is away from the resonant condition $U=n|E|$ ($n=1, 2, 3, \cdots$), where the double occupancy becomes very large as shown in Fig.~\ref{fig:fig1}~(c) and thus the localized states are broken. In contrast, except for the resonant points, the Mott-insulating state survives even under a small tilt. Thus, the system can be described as localized spins [Fig.~\ref{fig:fig1}~(a)]. The effective model for the spins is the Heisenberg chain with a field-dependent coupling,
\begin{gather}
H_\mathrm{eff}=\sum_{j=1}^L J_\mathrm{eff}(E)\bm S_j \cdot \bm S_{j+1}, \label{eq:H_eff_static} \\
J_\mathrm{eff}(E)=\frac{J_0}{1-\left(\frac{E}{U}\right)^2}, \label{eq:J_eff}
\end{gather}
with $J_0=4t_h^2/U$~\cite{Takasan2019, FN5}. Here, $\bm S_j = (S^x_j, S^y_j, S^z_j)$ denotes a spin operator at the $j$-th site. Eq.~\eqref{eq:J_eff} shows that the antiferromagnetic exchange coupling is enhanced by adding the tilt~\cite{Takasan2019}. To clarify the physical meaning of Eq.~\eqref{eq:J_eff}, it is useful to decompose it into the following form,
\begin{gather}
J_\mathrm{eff}(E)=J_{+}(E)+J_{-}(E), \label{eq:J_eff_decomposition} \\
J_{\pm}(E)=\frac{1}{2} \frac{4t_h^2}{U\pm E}. \label{eq:Jpm}
\end{gather}
The contributions $J_+$ and $J_-$ in Eq.~\eqref{eq:J_eff_decomposition} correspond to the perturbation processes (i) and (ii) shown in Fig.~\ref{fig:perturbation}~(a), respectively. These two contributions become inequivalent under the field. For simplicity, we focus on $E>0$. In this regime, as shown in Fig.~\ref{fig:perturbation}~(b), the dominant contribution is $J_{-}$, whose denominator is reduced by $E$. This means that the energy cost becomes smaller due to the tilted potential and the antiparallel spin configuration becomes more favored.
Let us move to the large tilt case $|E|>U$. The most important point is that the derivation for the small tilt is directly applicable to this case. This is because the derivation formally relies only on the second-order perturbation starting from the \textit{singly occupied state}, and the physical origin of this state is irrelevant to the derivation~\cite{FN5}. As shown in Sec.~\ref{sec:charge_dynamics}, the doublon number in the Wannier-Stark regime has the same order of magnitude as in the Mott regime and thus we can apply the perturbation theory. Therefore, the effective spin interaction in the large tilt case is also given by Eq.~\eqref{eq:J_eff}.
In the large tilt case, the denominator of Eq.~\eqref{eq:J_eff} changes sign and the interaction becomes ferromagnetic. For $E>U$, the dominant contribution is $J_-$ and the corresponding energy cost $U-E$ becomes negative. Thus, the antiparallel spin configuration is energetically unfavorable. This is the origin of the ferromagnetism.
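As a quick numerical consistency check (ours, with arbitrary parameter values) of the decomposition $J_\mathrm{eff}=J_{+}+J_{-}$, of the sign change at $E=U$, and of the large-tilt Floquet limit $-J_0(U/E)^2$:

```python
import numpy as np

t_h, U = 1.0, 50.0            # arbitrary units: t_h = 1, U = 50 as in the text
J0 = 4.0*t_h**2/U

E = np.linspace(0.0, 2.0*U, 1001)
E = E[np.abs(E - U) > 1e-6]   # exclude the resonance E = U

J_eff = J0/(1.0 - (E/U)**2)   # closed form J_eff(E)
J_p = 0.5*4.0*t_h**2/(U + E)  # process (i)
J_m = 0.5*4.0*t_h**2/(U - E)  # process (ii)

# the two exchange processes sum exactly to the closed form
assert np.allclose(J_p + J_m, J_eff)
# ferromagnetic (negative) coupling throughout the Wannier-Stark regime E > U
assert np.all(J_eff[E > U] < 0.0)
# large-tilt limit reproduces the Floquet coupling -J0*(U/E)^2
E_big = 1.0e4*U
assert np.isclose(J0/(1.0 - (E_big/U)**2), -J0*(U/E_big)**2, rtol=1e-6)
```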
\begin{figure}
\includegraphics[height=5.625cm]{perturbation_v1.pdf}
\caption{(a) Perturbation processes relevant to the kinetic exchange. The two processes (i) and (ii) are inequivalent only when an electric field is applied. (b) $E$-dependence of $J_\mathrm{\mathrm{eff}}(E)$ [Eq.~\eqref{eq:J_eff}], $J_+(E)$, and $J_-(E)$ [Eq.~\eqref{eq:Jpm}]. The plotted value is normalized by $J_0/2=2t_h^2/U$.}
\label{fig:perturbation}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=15cm]{L10N5E_U50imblO5-v7.pdf}
\caption{Time evolution of the local spin imbalance $\mathcal{I}_{L/2}$ [Eq.~\eqref{eq:imbalance}] for $L=10$ before and after the field $E$ is switched on at $t_\mathrm{on} = 50$. We set $U=50$ and $\Delta t = 1/3200$, and choose the singly occupied state $\vert\uparrow\downarrow\uparrow\downarrow\cdots\uparrow\downarrow\rangle$ as the initial state. We vary $E$ for $0\leq E\leq 2U$, and the data for $E \leq U$ and $E \geq U$ are shown in panels (a) and (b), respectively. In panel (c), we show all the data with the rescaled time defined in Eq.~\eqref{eq:rescaled_time}. $t_h$ ($t_h^{-1}$) is used as the unit of energy (time).
}
\label{fig:imbalance}
\end{figure*}
We also obtain a consistent effective model using the Floquet theory. Applying the Floquet theory to the velocity-gauge Hamiltonian \eqref{eq:model_vgauge}, we employ the high-frequency expansion~\cite{EckardtRMP2017} and obtain the effective Hamiltonian up to the $1/E^2$ order as,
\begin{align}
H_{\mathrm{eff}}^\mathrm{Floquet}\! &=\! -J_0 \! \left( \frac{U}{E} \right)^{\!\!2}\! \sum_{i=1}^{L-1}\! \left(\bm S_i \cdot \bm S_{i+1}\! -\! \frac{n_i n_{i+1}}{4}\right) \!+\! H^\prime \label{eq:Heff_Floquet},
\end{align}
where $H^\prime=H_\mathrm{Hub}+H_\mathrm{pair}+H_b$, $H_\mathrm{Hub}=U(1-4 t_h^2/E^2)\sum_{i=1}^L n_{i \uparrow} n_{i \downarrow}$, $H_\mathrm{pair}= t_h^2 U/E^2 \sum_{i=2}^{L-1} \sum_{\sigma=\uparrow,\downarrow} (c^\dagger_{i \sigma} c^\dagger_{i \bar{\sigma}} c_{i+1 \sigma} c_{i-1 \bar{\sigma}}+ \mathrm{h.c.})$, and $H_b = t_h^2/E \sum_{\sigma=\uparrow,\downarrow} (n_{N \sigma} - n_{1\sigma})$ with $n_i = \sum_{\sigma=\uparrow,\downarrow} n_{i \sigma}$ and $J_0 = 4t^2_h/U$. Here, we use the notation $\bar{\uparrow}=\downarrow (\bar{\downarrow}=\uparrow)$. Since the frequency is $E$, the high-frequency limit corresponds to the large tilt limit and thus the effective Hamiltonian~\eqref{eq:Heff_Floquet} becomes valid in the Wannier-Stark regime. Indeed, the hopping term vanishes in Eq.~\eqref{eq:Heff_Floquet}, which reflects the localization. While the Hamiltonian~\eqref{eq:Heff_Floquet} contains various interactions, namely the spin interaction $H_\mathrm{FM}$ [the first term of Eq.~\eqref{eq:Heff_Floquet}], the Hubbard interaction $H_\mathrm{Hub}$, and the pair hopping $H_\mathrm{pair}$, the most important point for us is that $H_\mathrm{FM}$ is ferromagnetic. Note that the coupling $-J_0 (U/E)^2$ is consistent with Eq.~\eqref{eq:J_eff} in the strong field limit $E/U \gg 1$.
Finally, we comment on previous works which have studied similar effective Hamiltonians. First, effective spin interactions similar to Eq.~\eqref{eq:J_eff} have been discussed in our papers~\cite{Takasan2019, Furuya2021a} and other papers~\cite{Katsura2009, Wang2014, Eckstein2017}, although these are limited to the Mott-insulating regime. In recent studies, an effective model similar to Eq.~\eqref{eq:Heff_Floquet} has been derived with the Schrieffer-Wolff transformation for a tilted fermionic Hubbard model~\cite{Scherg2021, Kohlert2021}, though the effective Hamiltonian is not written in terms of spins there and the appearance of ferromagnetism is not clear. Similar effective models in the Bose-Hubbard model have also been studied, and it was already pointed out that a ferromagnetic interaction is realized with a large tilt~\cite{Trotzky2008, Dimitrova2020}.
\section{Real-time spin dynamics}
In the previous section, we derived the effective model under a tilt and pointed out the emergence of ferromagnetism. However, the discussion has been based on the perturbation theory and it is still unclear whether the result is robust beyond this approximation. Also, it is not yet clear how to find the signature of ferromagnetism in observable quantities. Naively, the magnetization induced by a tilt might be regarded as clear evidence, but it is difficult to observe when we start from the untilted Mott insulator. This is because the time evolution with Eq.~\eqref{eq:model_lgauge} (equivalently, Eq.~\eqref{eq:model_vgauge}) conserves the total magnetization and the original Mott insulator has zero magnetization in the low-temperature state~\cite{FN8}.
To clarify these points, we study the real-time spin dynamics and show how to extract the information on the effective interaction. As in Sec.~\ref{sec:charge_dynamics}, we solve the many-body Schr\"odinger equation with the Hamiltonian Eq.~\eqref{eq:model_vgauge} using the fourth-order Runge--Kutta method and obtain the time evolution starting from the N\'eel state $\ket{\uparrow \downarrow \uparrow \downarrow \cdots}$~\cite{FN2}. To see the spin dynamics, we study the local spin imbalance
\begin{gather}
\mathcal{I}_j = \langle\Psi(t)\vert n_{ j \uparrow} - n_{j \downarrow} \vert\Psi(t)\rangle. \label{eq:imbalance}
\end{gather}
To suppress the boundary effect, we focus on $j=\lfloor L/2 \rfloor$ in our numerical calculation~\cite{FN10}. In order to see the sharp contrast between $E=0$ and $E\neq0$, we first consider the time evolution without a tilt from $t=0$ to $t=t_\mathrm{on}(=50)$ and then apply the electric field after $t=t_\mathrm{on}$.
To clarify the effect of the exchange interaction on the real-time dynamics, we consider the time-evolution operator with the effective Hamiltonian \eqref{eq:H_eff_static} $U_\mathrm{eff}(t)=e^{-i H_\mathrm{eff} t}=\exp[-i f(E)t \sum_{j=1}^L J_0 \bm S_j \cdot \bm S_{j+1}]$ where $J_\mathrm{eff}(E)=f(E)J_0$ and $f(E)=[1-(E/U)^2]^{-1}$. Under the strong coupling condition $U \gg t_h$, the dynamics is expected to be governed by this operator~\cite{FN9}. The remarkable feature is that the $E$-dependence only appears as $f(E)t$ and $f(E)$ works as the scale factor in the time direction. This means that the time evolution is accelerated in the Mott-insulating regime where $J_\mathrm{eff}$ is enhanced. In contrast, the time evolution is reversed in the Wannier-Stark regime since $J_\mathrm{eff}$ becomes negative. Indeed, these features are seen in the time evolution of the local spin imbalance shown in Fig.~\ref{fig:imbalance}~(a) and (b).
To clearly see whether the dynamics follows the effective Hamiltonian, we show these data with a rescaled time $t_\mathrm{rescaled}$, defined as
\begin{align}
t_\mathrm{rescaled} =
\begin{cases}
t & (t < t_\mathrm{on}), \\
t_\mathrm{on} + |f(E)| (t-t_\mathrm{on}) & (t > t_\mathrm{on}),
\end{cases} \label{eq:rescaled_time}
\end{align}
in Fig.~\ref{fig:imbalance}~(c). As seen in this figure, the data collapse into two curves except for the resonant regime $U \simeq E$. This means that the spin dynamics is well-described with the effective Hamiltonian~\eqref{eq:H_eff_static} in both the Mott-insulating and the Wannier-Stark regime. It shows that the exchange coupling under a tilt is given by Eq.~\eqref{eq:J_eff} and thus ferromagnetism is realized with a large tilt. We emphasize that these results are obtained only from the Hubbard model and do not depend on any specific approximation such as the perturbation theory. This is the most important result in this paper. Note that a similar time-evolution of spins in the Hubbard model under oscillating electric fields was already studied in Ref.~\cite{Mentink2015}.
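The rescaling used in panel (c) is simple to implement. The sketch below (our own illustration) encodes the scale factor $f(E)$ and the rescaled time of Eq.~\eqref{eq:rescaled_time}, making explicit that $f>1$ in the Mott regime (sped-up dynamics), $f<0$ in the Wannier-Stark regime (reversed dynamics), and $f(\sqrt{2}U)=-1$ (exact time reversal):

```python
import numpy as np

def f(E, U):
    """Time scale factor J_eff(E)/J_0 = 1/(1 - (E/U)^2)."""
    return 1.0/(1.0 - (E/U)**2)

def t_rescaled(t, E, U, t_on):
    """Rescaled time of Eq. (rescaled_time): ordinary time before the
    switch-on at t_on, stretched by |f(E)| afterwards."""
    t = np.asarray(t, dtype=float)
    return np.where(t < t_on, t, t_on + np.abs(f(E, U))*(t - t_on))

U, t_on = 50.0, 50.0  # values used in the figures
# Mott regime (E < U): f > 1, the spin dynamics is sped up
# Wannier-Stark regime (E > U): f < 0, the dynamics runs backwards;
# at E = sqrt(2)*U one has f = -1, i.e., exactly time-reversed evolution
```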
\begin{figure}
\centering
\includegraphics[width=7cm]{L10N5E70_71U50onoff20_imblO5-v5.pdf}
\caption{Time evolution of the local spin imbalance $\mathcal{I}_{L/2}$ [Eq.~\eqref{eq:imbalance}] for $L=10$ when the field $E = 70.71 \sim \sqrt{2}U$ is turned on at $t_\mathrm{on}$ and switched off at $t_\mathrm{off} = t_\mathrm{on} + \delta t$ with $\delta t = 20$. We set $U=50$ and $\Delta t = 1/3200$, and choose the singly occupied state $\vert\uparrow\downarrow\uparrow\downarrow\cdots\uparrow\downarrow\rangle$ as the initial state.
We vary $t_\mathrm{on}$ over $t_\mathrm{on}=50, 75, 100$, shown with different line types. The time intervals during which $E$ is finite are indicated by the corresponding shaded regions. The dynamics without $E$ is shown as a gray dashed curve. $t_h$ ($t_h^{-1}$) is used as the unit of energy (time).
}
\label{fig:on-off}
\end{figure}
An interesting application of this dynamics is the realization of time reversal. Since $f(E)$ takes the value $-1$ at $E=\sqrt{2}U$, the time evolution is exactly reversed. In Fig.~\ref{fig:on-off}, we plot the time evolution when the field $E~(=70.71\simeq \sqrt{2}U)$ is turned on at various $t_\mathrm{on}~(=50, 75, 100)$ and switched off at $t_\mathrm{off} = t_\mathrm{on} + \delta t$ with $\delta t = 20$. The spin dynamics is clearly reversed for the duration $\delta t$ in Fig.~\ref{fig:on-off}. Also, all the data almost coincide after $t=120$ in Fig.~\ref{fig:on-off}. This demonstrates the accuracy of the time reversal. The time-reversal dynamics is known to be useful for experimentally measuring out-of-time ordered correlators (OTOC), which are used as diagnostics for chaos in quantum systems~\cite{Larkin1969, Maldacena2016}. While there have been many efforts to realize time reversal~\cite{Li2017, Garttner2017}, it is still not an easy task. Our finding suggests that time reversal in the strongly correlated Hubbard model is achieved just by adding a tilt. Since control of the tilt has already been achieved in various AMO systems~\cite{Simon2011, GuardadoSanchez2020, Guo2020, Dimitrova2020, Scherg2021, Kohlert2021, Morong2021}, our protocol can be useful for experimentally measuring the OTOC.
\section{Discussion and Summary}
Finally, we discuss the experimental platforms to observe the ferromagnetism in tilted Mott insulators. For this purpose, AMO systems are promising because many-body quantum phenomena induced by a large tilted potential have already been studied experimentally in various AMO systems, such as ultracold atoms~\cite{Meinert2014, Geiger2018, Chen2011, Simon2011, GuardadoSanchez2020, Scherg2021, Kohlert2021, Trotzky2008, Dimitrova2020}, trapped ions~\cite{Morong2021}, and superconducting qubits~\cite{Guo2020}. In particular, ultracold fermionic atoms in an optical lattice~\cite{GuardadoSanchez2020, Scherg2021, Kohlert2021} provide an ideal platform for the fermionic Hubbard model \eqref{eq:model_lgauge}, and thus they are the most promising platform for our study. The spin-resolved dynamics can be obtained in this setup, and thus the imbalance dynamics shown in Fig.~\ref{fig:imbalance} will be directly observable. In solid-state electronic systems, it is difficult to realize an electric field large enough to reach the Wannier-Stark regime~\cite{Schmidt2018}. To avoid this difficulty, synthetic structured systems, such as semiconductor superlattices~\cite{Mendez1988, Voisin1988}, have been used. The ferromagnetism in the Wannier-Stark regime can be observed in such systems. For this purpose, an array of quantum dots is a good candidate because the fermionic Hubbard model~\cite{Hensgens2017} and the effective Heisenberg spin chain~\cite{vanDiepen2021} can now be simulated in this setup thanks to recent developments in experimental techniques. A more challenging direction is the observation in bulk solids. Recently, a transient signature of the Wannier-Stark ladder was observed in a bulk semiconductor~\cite{Schmidt2018}, and thus similar pump-probe type measurements in strongly correlated materials may enable us to observe the ferromagnetic signature in Mott insulator materials.
In this paper, we have studied the spin interaction in fermionic Mott insulators with a tilt. Using the perturbation theory and the direct calculation of the real-time evolution, we have revealed the effective interaction in the small and large tilt regimes and found the appearance of ferromagnetism. This appears as a change of the speed and time direction of the real-time dynamics, which can be observed in various experimental platforms.
\begin{acknowledgments}
We are thankful to Ehud Altman, Marin Bukov, Takashi Oka, Hosho Katsura, Norio Kawakami, Kensuke Kobayashi, Kaoru Mizuta, Joel E. Moore, and Masafumi Udagawa for valuable discussions. K.T. thanks Masahiro Sato for the previous collaborations closely related to this project. K.T. was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under Contract No. AC02-05CH11231 within the Ultrafast Materials Science Program (KC2203). M.T. was supported by JSPS KAKENHI (KAKENHI Grants No. JP17K17822, JP20K03787, JP20H05270, and JP21H05185).
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction}
The study of strongly interacting matter in the high density regime, i.e., regions whose densities attain several times the nuclear matter saturation density ($\rho_0$), can be performed through the analysis of astrophysical compact objects such as neutron stars (NSs). In recent years, a huge amount of data related to these systems has been provided by the LIGO/Virgo Collaboration (LVC) since their first detection of gravitational waves (GW), a phenomenon predicted by Albert Einstein in 1916 after the formulation of General Relativity~\cite{einstein1,einstein2}. LVC published in Ref.~\cite{bholes1} their results regarding the GW produced by two colliding black holes, detected in 2015 in an event called GW150914. In 2017 the event GW170817, also detected by LVC~\cite{ligo17}, was confirmed to have produced GW from the merger of two NSs in a binary system. This latter event gave rise to constraints on the tidal deformabilities of each companion star, namely, an effect analogous to the tides observed on our planet due to the presence of the Moon.
Neutron stars can also be used as a source of study of dark matter (DM)~\cite{dmrev,zwicky,oort,lensing}. Although the interaction between dark particles and luminous matter is extremely weak (otherwise DM would be easily detected), the gravitational force can bind this exotic matter to the ordinary one present in massive astrophysical systems. In that direction, many investigations were performed in which DM is coupled to hadronic relativistic mean field (RMF) models, see Refs.~\cite{rmfdm1,rmfdm2,rmfdm3,rmfdm4,rmfdm5,rmfdm6,rmfdm7,rmfdm8,rmfdm9,rmfdm10,rmfdm11,rmfdm12} for instance. In most of these studies, the lightest neutralino, belonging to a class of weakly interacting massive particles (WIMP)~\cite{cand1,cand2}, is used as the dark particle candidate, but there are other ones, namely, gravitinos, axinos, axions, sterile neutrinos, WIMPzillas, supersymmetric Q-balls, and mirror matter~\cite{cand1,cand2}.
In Ref.~\cite{dmnosso} the lightest neutralino was implemented as the dark particle in a RMF model with short-range correlations (SRC)~\cite{nature,hen2017,Duer2019,rev3,cai,baoanlicai} included. This kind of correlation is observed in electron-induced quasielastic proton knockout experiments, in which nucleons are found to correlate with each other in pairs with high relative momentum. Probes of this phenomenon were performed in experiments at the Thomas Jefferson National Accelerator Facility (JLab), where it was found that most of the emerging pairs are deuteron-like~\cite{orhen}, around $90\%$ in measurements of the \mbox{$^{12}$C nucleus}~\cite{subedi2008}, for instance. Based on this \mbox{RMF-SRC} hadronic model, it was shown in Ref.~\cite{dmnosso} that it is possible to describe NSs with DM content presenting masses within the limits given in Ref.~\cite{cromartie}, namely, $M=2.14^{+0.10}_{-0.09}M_{\odot}$ ($68.3\%$ credible level), and simultaneously in agreement with the recent observational data provided by NASA's Neutron star Interior Composition Explorer (NICER) mission, which provided constraints on the mass-radius profiles~\cite{nicer1,nicer2,nicer3,nicer4}. The ``best'' parametrizations of this \mbox{RMF-SRC-DM} model were constructed by taking into account the variation of the symmetry energy slope in a range compatible with the results reported by the updated lead radius experiment~(PREX-2)~\cite{piekaprex2,prex2}, and also overlapping with the boundaries obtained from the analysis of charged pion spectra~\cite{pions}. The results provided in Ref.~\cite{dmnosso} are also compatible with more recent data given in Ref.~\cite{cromartie-apj} regarding the most massive neutron star known, namely, $M=2.08^{+0.07}_{-0.07}M_{\odot}$ ($68.3\%$ credibility).
In this work, we investigate whether it is also possible to describe the constraints related to the dimensionless tidal deformabilities of the GW170817 event by using the \mbox{RMF-SRC-DM} model of Ref.~\cite{dmnosso}. In particular, we verify that the system with DM content is able to reproduce the limits of $\Lambda_{1.4}=190^{+390}_{-120}$~\cite{ligo18} (dimensionless tidal deformability of a $1.4M_\odot$ NS), the range of $\tilde{\Lambda}=300^{+420}_{-230}$~\cite{ligo19} (quantity related to the dimensionless deformabilities of the binary system stars, $\Lambda_1$ and $\Lambda_2$), and the $\Lambda_1\times\Lambda_2$ regions. Furthermore, we also show that the \mbox{$I$-Love} relation is preserved even in the system with DM. Moreover, we find that the model also satisfies the indirect observational data related to the dimensionless moment of inertia, namely, $\bar{I}(M_\star)=11.10^{+3.68}_{-2.28}$, with $M_\star=1.338M_\odot$. Regarding this last quantity, we remark that the quoted range does not come from a directly measured observable; it is a derived quantity under certain assumptions, as we make clear later on. We organize all these findings as follows. In Sec.~\ref{form}, we address the main equations of the \mbox{RMF-SRC-DM} model. The predictions of the model concerning the GW170817 constraints on the tidal deformabilities and the moment of inertia are given in Sec.~\ref{stellar}. Our summary and concluding remarks are presented in Sec.~\ref{summ}.
\section{Hadronic model with SRC and DM }
\label{form}
We start by presenting the model that describes the hadronic matter considered here, defined by its Lagrangian density. It reads
\begin{align}
&\mathcal{L}_{\mbox{\tiny HAD}} = \overline{\psi}(i\gamma^\mu\partial_\mu - M_{\mbox{\tiny nuc}})\psi
+ g_\sigma\sigma\overline{\psi}\psi
- g_\omega\overline{\psi}\gamma^\mu\omega_\mu\psi
\nonumber \\
&- \frac{g_\rho}{2}\overline{\psi}\gamma^\mu\vec{\rho}_\mu\vec{\tau}\psi
+\frac{1}{2}(\partial^\mu \sigma \partial_\mu \sigma - m^2_\sigma\sigma^2)
- \frac{A}{3}\sigma^3 - \frac{B}{4}\sigma^4
\nonumber\\
&-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}
+ \frac{1}{2}m^2_\omega\omega_\mu\omega^\mu
+ \frac{c}{4}(g_\omega^2\omega_\mu\omega^\mu)^2 -\frac{1}{4}\vec{B}^{\mu\nu}\vec{B}_{\mu\nu}
\nonumber \\
&+ \frac{1}{2}\alpha'_3g_\omega^2 g_\rho^2\omega_\mu\omega^\mu
\vec{\rho}_\mu\vec{\rho}^\mu + \frac{1}{2}m^2_\rho\vec{\rho}_\mu\vec{\rho}^\mu.
\label{dlag}
\end{align}
In this expression $\psi$ represents the nucleon field, and $\sigma$, $\omega^\mu$, and $\vec{\rho}_\mu$ are the scalar, vector, and isovector-vector fields related to the mesons $\sigma$, $\omega$, and $\rho$, respectively, with tensors $F_{\mu\nu}$ and $\vec{B}_{\mu\nu}$ given by $F_{\mu\nu}=\partial_\nu\omega_\mu-\partial_\mu\omega_\nu$ and $\vec{B}_{\mu\nu}=\partial_\nu\vec{\rho}_\mu-\partial_\mu\vec{\rho}_\nu$. The meson masses are $m_\sigma$, $m_\omega$, and $m_\rho$, and $M_{\mbox{\tiny nuc}}$ is the nucleon rest mass. Regarding the inclusion of the dark matter content, we proceed as in Ref.~\cite{dmnosso} and consider a dark fermion (mass $M_\chi$, Dirac field $\chi$) interacting with nucleons through the exchange of the Higgs boson (mass $m_h$, scalar field $h$). In this perspective, the Lagrangian density of the total system becomes
\begin{align}
\mathcal{L} &= \overline{\chi}(i\gamma^\mu\partial_\mu - M_\chi)\chi
+ \xi h\overline{\chi}\chi +\frac{1}{2}(\partial^\mu h \partial_\mu h - m^2_h h^2)
\nonumber\\
&+ f\frac{M_{\mbox{\tiny nuc}}}{v}h\overline{\psi}\psi + \mathcal{L}_{\mbox{\tiny HAD}},
\label{dlagtotal}
\end{align}
with $fM_{\mbox{\tiny nuc}}/v$ being the Higgs-nucleon coupling ($v=246$~GeV is the Higgs vacuum expectation value). The constant $\xi$ is the strength of the Higgs-dark particle coupling.
By using the mean-field approximation to the fields, one has $\sigma\rightarrow \left<\sigma\right>\equiv\sigma$, $\omega_\mu\rightarrow \left<\omega_\mu\right>\equiv\omega_0$, $
\vec{\rho}_\mu\rightarrow \left<\vec{\rho}_\mu\right>\equiv \bar{\rho}_{0(3)}$, $h\rightarrow \left<h\right>\equiv h$, that leads to the following field equations
\begin{align}
m^2_\sigma\,\sigma &= g_\sigma\rho_s - A\sigma^2 - B\sigma^3
\\
m_\omega^2\,\omega_0 &= g_\omega\rho - Cg_\omega(g_\omega \omega_0)^3
- \alpha_3'g_\omega^2 g_\rho^2\bar{\rho}_{0(3)}^2\omega_0,
\\
m_\rho^2\,\bar{\rho}_{0(3)} &= \frac{g_\rho}{2}\rho_3
-\alpha_3'g_\omega^2 g_\rho^2\bar{\rho}_{0(3)}\omega_0^2,
\\
[\gamma^\mu (&i\partial_\mu - g_\omega\omega_0 - g_\rho\bar{\rho}_{0(3)}\tau_3/2) - M^*]\psi = 0,
\\
m^2_h\,h &= \xi\rho_s^{\mbox{\tiny DM}} + f\frac{M_{\mbox{\tiny nuc}}}{v}\rho_s
\\
(\gamma^\mu &i\partial_\mu - M_\chi^*)\chi = 0,
\end{align}
with $\tau_3=1$ for protons and $\tau_3=-1$ for neutrons. The effective nucleon and dark effective masses are $M^* = M_{\mbox{\tiny nuc}} - g_\sigma\sigma - f\frac{M_{\mbox{\tiny nuc}}}{v}h$ and $M^*_\chi = M_\chi - \xi h$, respectively. Here we use $\xi=0.01$ and $M_{\chi}=200$~GeV (lightest neutralino). Concerning $f$, we use the central value obtained in Ref.~\cite{cline}, namely, $f=0.3$. Such a combination of values gives a spin-independent scattering cross-section around $10^{-47}$~cm$^2$~\cite{rmfdm4} compatible with experimental data from PandaX-II~\cite{pandaxII}, LUX~\cite{lux}, and DarkSide~\cite{darkside} collaborations. Furthermore, the densities are given by $\rho_s=\left<\overline{\psi}\psi\right>={\rho_s}_p+{\rho_s}_n$, $\rho=\left<\overline{\psi}\gamma^0\psi\right> = \rho_p + \rho_n$,
$\rho_3=\left<\overline{\psi}\gamma^0{\tau}_3\psi\right> = \rho_p - \rho_n=(2y_p-1)\rho$, and
$\rho_s^{\mbox{\tiny DM}} = \left<\overline{\chi}\chi\right>$,
where
\begin{eqnarray}
\rho_s^{\mbox{\tiny DM}} &=&
\frac{\gamma M^*_\chi}{2\pi^2}\int_0^{k_F^{\mbox{\tiny DM}}} \hspace{-0.5cm}\frac{k^2dk}{(k^2+M^{*2}_\chi)^{1/2}}.
\end{eqnarray}
Here the indices $p,n$ denote protons and neutrons, and $\gamma=2$ is the degeneracy factor. The proton fraction is given by $y_p=\rho_p/\rho$, with proton/neutron densities given by $\rho_{p,n}=\gamma{k_F^3}_{p,n}/(6\pi^2)$. ${k_F}_{p,n}$ and $k_F^{\mbox{\tiny DM}}$ are the Fermi momenta of protons/neutrons and of the dark particle, respectively.
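As a concrete illustration of how such Fermi integrals are evaluated in practice, the sketch below (our own helper functions, not from the original work) computes $\rho_s^{\mbox{\tiny DM}}$ both by direct numerical quadrature and from the closed-form antiderivative of the integrand, in natural units where momenta and masses share one unit.

```python
import math

def rho_s_dm(kf, m_eff, gamma=2, n=2000):
    """rho_s^DM by composite Simpson integration of
    k^2 / sqrt(k^2 + M*^2) over [0, kf] (natural units)."""
    if kf <= 0.0:
        return 0.0
    h = kf / n
    s = 0.0
    for i in range(n + 1):
        k = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * k * k / math.sqrt(k * k + m_eff * m_eff)
    return gamma * m_eff / (2.0 * math.pi ** 2) * (s * h / 3.0)

def rho_s_dm_closed(kf, m_eff, gamma=2):
    """Same quantity from the closed-form antiderivative."""
    ef = math.sqrt(kf * kf + m_eff * m_eff)
    integral = 0.5 * (kf * ef - m_eff ** 2 * math.log((kf + ef) / m_eff))
    return gamma * m_eff / (2.0 * math.pi ** 2) * integral
```

For $k_F^{\mbox{\tiny DM}}\ll M_\chi$ the dark fermion is deeply nonrelativistic and the scalar density approaches the vector density $\gamma (k_F^{\mbox{\tiny DM}})^3/(6\pi^2)$.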
The thermodynamics of the entire system composed of hadrons and dark matter is determined from the energy density and the pressure, both obtained through the energy-momentum tensor $T^{\mu\nu}$ as $\mathcal{E}=\left<T_{00}\right>$ and $P=\left<T_{ii}\right>/3$. In our case these quantities are given by
\begin{align}
&\mathcal{E} = \frac{m_{\sigma}^{2} \sigma^{2}}{2} +\frac{A\sigma^{3}}{3} +\frac{B\sigma^{4}}{4}
-\frac{m_{\omega}^{2} \omega_{0}^{2}}{2} - \frac{Cg_{\omega}^4\omega_{0}^4}{4}
- \frac{m_{\rho}^{2} \bar{\rho}_{0(3)}^{2}}{2}
\nonumber\\
&+ g_{\omega} \omega_{0} \rho + \frac{g_{\rho}}{2}
\bar{\rho}_{0(3)} \rho_{3} -\frac{1}{2} \alpha'_3 g_{\omega}^{2} g_{\rho}^{2} \omega_{0}^{2}
\bar{\rho}_{0(3)}^{2} + \mathcal{E}_{\mathrm{kin}}^{p} + \mathcal{E}_{\mathrm{kin}}^{n}
\nonumber\\
&+ \frac{m_h^2h^2}{2} + \mathcal{E}_{\mathrm{kin}}^{\mbox{\tiny DM}},
\label{eden}
\end{align}
and
\begin{align}
&P = -\frac{m_{\sigma}^{2} \sigma^{2}}{2} - \frac{A\sigma^{3}}{3} - \frac{B\sigma^{4}}{4}
+ \frac{m_{\omega}^{2} \omega_{0}^{2}}{2} + \frac{Cg_{\omega}^4\omega_0^4}{4}
\nonumber\\
&+ \frac{m_{\rho}^{2} \bar{\rho}_{0(3)}^{2}}{2} + \frac{1}{2} \alpha'_3 g_{\omega}^{2}
g_{\rho}^{2} \omega_{0}^{2} \bar{\rho}_{0(3)}^{2} + P_{\mathrm{kin}}^{p} + P_{\mathrm{kin}}^{n}
- \frac{m_h^2h^2}{2}
\nonumber\\
&+ P_{\mathrm{kin}}^{\mbox{\tiny DM}},
\label{press}
\end{align}
with the kinetic terms of the dark particle written as
\begin{eqnarray}
\mathcal{E}_{\mbox{\tiny kin}}^{\mbox{\tiny DM}} &=& \frac{\gamma}{2\pi^2}\int_0^{k_F^{\mbox{\tiny DM}}}\hspace{-0.3cm}k^2(k^2+M^{*2}_\chi)^{1/2}dk,
\label{ekindm}
\\
P_{\mbox{\tiny kin}}^{\mbox{\tiny DM}} &=&
\frac{\gamma}{6\pi^2}\int_0^{{k_F^{\mbox{\tiny DM}}}}\hspace{-0.5cm}\frac{k^4dk}{(k^2+M^{*2}_\chi)^{1/2}}.
\label{pkindm}
\end{eqnarray}
In the hadronic sector of the system, the implementation of the SRC implies the replacement of the usual step functions in the kinetic terms by a distribution including a high-momentum tail~\cite{cai,lucas}, namely, $n_{n,p}(k) = \Delta_{n,p}$ for $0<k<k_{F\,{n,p}}$ and $n_{n,p}(k) = C_{n,p}\,(k_{F\,{n,p}}/k)^4$ for $k_{F\,{n,p}}<k<\phi_{n,p} \,k_{F\,{n,p}}$, in which $\Delta_{n,p}=1 - 3C_{n,p}(1-1/\phi_{n,p})$, $C_p=C_0[1 - C_1(1-2y_p)]$, $C_n=C_0[1 + C_1(1-2y_p)]$, $\phi_p=\phi_0[1 - \phi_1(1-2y_p)]$, and $\phi_n=\phi_0[1 + \phi_1(1-2y_p)]$. Here we use $C_0=0.161$, $C_1=-0.25$, $\phi_0 = 2.38$, and $\phi_1=-0.56$~\cite{cai,lucas}. Such a change leads to modified expressions for the kinetic terms, namely,
\begin{eqnarray}
\mathcal{E}_{\text {kin }}^{n,p} &=& \frac{\gamma \Delta_{n,p}}{2\pi^2} \int_0^{{k_{F\,{n,p}}}}
k^2dk({k^{2}+M^{* 2}})^{1/2}
\nonumber\\
&+& \frac{\gamma C_{n,p}}{2\pi^2} \int_{k_{F\,{n,p}}}^{\phi_{n,p}\, {k_{F\,{n,p}}}}
\frac{{k_F}_{n,p}^4}{k^2}\, dk({k^{2}+M^{* 2}})^{1/2},
\nonumber \\
P_{\text {kin }}^{n,p} &=&
\frac{\gamma \Delta_{n,p}}{6\pi^2} \int_0^{k_{F\,{n,p}}}
\frac{k^4dk}{\left({k^{2}+M^{*2}}\right)^{1/2}}
\nonumber\\
&+& \frac{\gamma C_{n,p}}{6\pi^2} \int_{k_{F\,{n,p}}}^{\phi_{n,p}\, {k_{F\,{n,p}}}}
\frac{{k_F}_{n,p}^4dk}{\left({k^{2}+M^{*2}}\right)^{1/2}},
\end{eqnarray}
and
\begin{align}
&{\rho_s}_{n,p} =
\frac{\gamma M^*\Delta_{n,p}}{2\pi^2} \int_0^{k_{F\,{n,p}}}
\frac{k^2dk}{\left({k^{2}+M^{*2}}\right)^{1/2}}
\nonumber\\
&+ \frac{\gamma M^*C_{n,p}}{2\pi^2} \int_{k_{F\,{n,p}}}^{\phi_{n,p}\, {k_{F\,{n,p}}}}
\frac{{k_F}_{n,p}^4}{k^2} \frac{dk}{\left({k^{2}+M^{*2}}\right)^{1/2}}.
\end{align}
This last quantity is the scalar density of protons and neutrons.
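The momentum distribution with the high-momentum tail can be written down directly from the parameters quoted above. The sketch below (our own helper functions) also verifies that the tail parametrization conserves particle number: since $\Delta_{n,p}=1-3C_{n,p}(1-1/\phi_{n,p})$, the integral of $n_{n,p}(k)\,k^2$ over $[0,\phi_{n,p}k_{F\,n,p}]$ reduces to $k_{F\,n,p}^3/3$, exactly as for the plain step function.

```python
C0, C1, PHI0, PHI1 = 0.161, -0.25, 2.38, -0.56  # values quoted above

def src_parameters(yp, nucleon):
    """Return (C_j, phi_j, Delta_j) for nucleon j in {'p', 'n'}
    at proton fraction yp."""
    sign = -1.0 if nucleon == 'p' else 1.0
    c = C0 * (1.0 + sign * C1 * (1.0 - 2.0 * yp))
    phi = PHI0 * (1.0 + sign * PHI1 * (1.0 - 2.0 * yp))
    delta = 1.0 - 3.0 * c * (1.0 - 1.0 / phi)
    return c, phi, delta

def n_of_k(k, kf, yp, nucleon):
    """Momentum distribution: depleted step plus (kf/k)^4 tail."""
    c, phi, delta = src_parameters(yp, nucleon)
    if 0.0 <= k <= kf:
        return delta
    if kf < k <= phi * kf:
        return c * (kf / k) ** 4
    return 0.0

def occupied_integral(kf, yp, nucleon):
    """Closed-form integral of n(k) k^2 over [0, phi*kf], piecewise."""
    c, phi, delta = src_parameters(yp, nucleon)
    return delta * kf ** 3 / 3.0 + c * kf ** 4 * (1.0 / kf - 1.0 / (phi * kf))
```

At $y_p=1/2$ the proton and neutron distributions coincide; away from symmetric matter the minority species develops the larger tail fraction, as encoded in the signs of $C_1$ and $\phi_1$.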
\section{Stellar matter: analysis of the GW170817 constraints}
\label{stellar}
For the description of an NS of mass~$M$ one needs to solve the well-known TOV equations~\cite{tov39,tov39a}, given by $dp(r)/dr=-[\epsilon(r) + p(r)][m(r)
+ 4\pi r^3p(r)]/r^2g(r)$ and $dm(r)/dr=4\pi r^2\epsilon(r)$, where $g(r)=1-2m(r)/r$, whose solution
is constrained to $p(0)=p_c$ (central pressure) and $m(0) = 0$. The conditions $p(R) = 0$ and $m(R)=M$ are satisfied at the star's surface, with $R$ defining the NS radius. For the equation of state (EoS) of the matter in the NS core we use the hadronic model with SRC and DM content included. For the NS crust we consider two regions, namely, the outer and the inner crust. For the former we use the EoS proposed by Baym, Pethick and Sutherland (BPS)~\cite{bps} in the density region of $6.3\times10^{-12}\,\mbox{fm}^{-3}\leqslant\rho_{\mbox{\tiny outer}}\leqslant2.5\times10^{-4}\,\mbox{fm}^{-3}$. For the latter, we follow previous literature~\cite{poly0,poly1,poly2,gogny2,cc2,gogny1,kubis04} and use a polytropic EoS of the form $p(\epsilon)=\mathcal{A}+\mathcal{B}\epsilon^{4/3}$ from $2.5\times10^{-4}\,\mbox{fm}^{-3}$ to the transition density. The constants $\mathcal{A}$ and $\mathcal{B}$ are found by matching this polytropic formula to the BPS EoS at the interface between the outer and the inner crust, and to the EoS of the homogeneous core at the core-crust transition determined through the thermodynamical method~\cite{gogny1,cc2,kubis04,gonzalez19}.
In the case of binary NS systems, tidal forces originating from the gravitational field take place, inducing tidal deformations in each companion object. The deformations due to the quadrupole moment produce gravitational waves (GW) whose phase depends on the tidal deformability~\cite{tanj10,read,pozzo}. The first GW measurement from a binary NS system, the so-called GW170817 event, is due to the LIGO/Virgo Collaboration (LVC)~\cite{ligo17}. Based on the study of this new data, the LVC established constraints on the dimensionless tidal deformabilities $\Lambda_1$ and $\Lambda_2$ of each companion star of the binary system, as well as on the tidal deformability of a star of $M=1.4 M_\odot$ ($\Lambda_{1.4}$). An updated version of these constraints was published in Refs.~\cite{ligo18,ligo19}. Here we test the capability of the hadronic model with SRC and DM included to satisfy these constraints provided by the LVC. In order to do that, we calculate the dimensionless tidal deformability as $\Lambda =
2k_2/(3C^5)$, with $C=M/R$ (compactness). The second Love number is given by
\begin{eqnarray}
&k_2& = \frac{8C^5}{5}(1-2C)^2[2+2C(y_R-1)-y_R]\nonumber\\
&\times&\Big\{2C [6-3y_R+3C(5y_R-8)] \nonumber\\
&+& 4C^3[13-11y_R+C(3y_R-2) + 2C^2(1+y_R)]\nonumber\\
&+& 3(1-2C)^2[2-y_R+2C(y_R-1)]{\rm ln}(1-2C)\Big\}^{-1},\qquad
\label{k2}
\end{eqnarray}
with $y_R\equiv y(R)$. $y(r)$ is obtained through the solution of $r(dy/dr) + y^2 + yF(r)
+ r^2Q(r)=0$, solved as part of a coupled system also containing the TOV equations. The quantities $F(r)$ and $Q(r)$ read
\begin{eqnarray}
F(r) &=& \frac{1 - 4\pi r^2[\epsilon(r) - p(r)]}{g(r)} ,
\\
Q(r)&=&\frac{4\pi}{g(r)}\left[5\epsilon(r) + 9p(r) +
\frac{\epsilon(r)+p(r)}{v_s^2(r)}- \frac{6}{4\pi r^2}\right]
\nonumber\\
&-& 4\left[ \frac{m(r)+4\pi r^3 p(r)}{r^2g(r)} \right]^2,
\label{qr}
\end{eqnarray}
where the squared sound velocity is $v_s^2(r)=\partial p(r)/\partial\epsilon(r)$. Detailed derivations can be found in Refs.~\cite{tanj10,new,hind08,damour,tayl09}.
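Once $C$ and $y_R$ are known from the coupled integration, Eq.~(\ref{k2}) is a direct transcription. The snippet below does exactly that; the sample values of $C$ and $y_R$ used in the check are illustrative, not outputs of the \mbox{RMF-SRC-DM} model.

```python
import math

def love_k2(C, yR):
    """Second tidal Love number k2(C, y_R), transcribed from Eq. (k2)."""
    num = (8.0 * C ** 5 / 5.0) * (1.0 - 2.0 * C) ** 2 \
        * (2.0 + 2.0 * C * (yR - 1.0) - yR)
    den = (2.0 * C * (6.0 - 3.0 * yR + 3.0 * C * (5.0 * yR - 8.0))
           + 4.0 * C ** 3 * (13.0 - 11.0 * yR + C * (3.0 * yR - 2.0)
                             + 2.0 * C ** 2 * (1.0 + yR))
           + 3.0 * (1.0 - 2.0 * C) ** 2
           * (2.0 - yR + 2.0 * C * (yR - 1.0)) * math.log(1.0 - 2.0 * C))
    return num / den

def tidal_lambda(C, yR):
    """Dimensionless tidal deformability Lambda = 2 k2 / (3 C^5)."""
    return 2.0 * love_k2(C, yR) / (3.0 * C ** 5)
```

Note the steep $C^{-5}$ dependence: a modest decrease in compactness raises $\Lambda$ sharply, which is why the DM-induced shrinking of the star lowers the deformability so effectively.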
The input for the TOV equations coupled to the equation for $y(r)$ is the total equation of state of a system under charge neutrality and $\beta$-equilibrium. In our case, we consider a system composed of protons, neutrons, electrons, muons, and dark matter. The total energy density and pressure are given by $\epsilon=\mathcal{E}+\sum_l\epsilon_l$ and $p=P + \sum_lp_l$, with $\mathcal{E}$ and $P$ given in Eqs.~(\ref{eden}) and~(\ref{press}), respectively. The index $l$ refers to the leptons (electrons and muons). The equations are solved by taking into account the following conditions: $\mu_n - \mu_p = \mu_e=\mu_\mu$ and $\rho_p - \rho_e = \rho_\mu$, where \mbox{$\rho_l=[(\mu_l^2 - m_l^2)^{3/2}]/(3\pi^2)$} for $l=e, \mu$ (we use $m_e=0$ and $m_\mu=105.7$~MeV). The chemical potentials of protons, neutrons, electrons, and muons are given, respectively, by $\mu_p$, $\mu_n$, $\mu_e$, and $\mu_\mu$. Electron and muon densities are $\rho_e$ and $\rho_\mu$. In the case of the hadronic model with SRC included, $\mu_p$ and $\mu_n$ are given by
\begin{eqnarray}
&\mu_{p,n}& = 3 C_{p,n} \left[ \mu^{p,n}_{\mathrm{kin}}
- \frac{\left({\phi_{p,n}^2 {k^2_F}_{p,n} + M^{*2}}\right)^{1/2}}{\phi_{p,n}} \right]
\nonumber\\
&+& {4}C_{p,n} {k_F}_{p,n} \ln\left[\frac{\phi_{p,n} {k_F}_{p,n} +
\left(\phi_{p,n}^2{k_F^2}_{p,n}+M^{*2}\right)^{1/2} }{ {k_F}_{p,n} + \left( {k^2_F}_{p,n} + M^{*2}\right)^{1/2}}\right]
\nonumber\\
&+& \Delta_{p,n}\mu^{p,n}_{\mathrm{kin}} + g_{\omega} \omega_{0} \pm \frac{g_\rho}{2}\bar{\rho}_{0_{(3)}},
\end{eqnarray}
with $\mu^{p,n}_{\mathrm{kin}}=({k^2_F}_{p,n}+M^{*2})^{1/2}$, where we have used the definitions $\mu_{p,n}=\partial\mathcal{E}/\partial\rho_{p,n}$.
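For a given proton density, the lepton side of the charge-neutrality and $\beta$-equilibrium conditions can be solved on its own, since each $\rho_l$ depends only on $\mu_l$ and $\mu_\mu=\mu_e$. A minimal sketch (our own routine; chemical potentials and masses in MeV, densities in fm$^{-3}$):

```python
import math

HBARC = 197.327  # MeV fm

def lepton_density(mu, mass):
    """rho_l = (mu^2 - m_l^2)^{3/2} / (3 pi^2); mu, mass in MeV -> fm^-3."""
    if mu <= mass:
        return 0.0
    return (mu * mu - mass * mass) ** 1.5 / (3.0 * math.pi ** 2 * HBARC ** 3)

def mu_electron(rho_p, m_e=0.0, m_mu=105.7):
    """Bisection for mu_e such that rho_e + rho_mu = rho_p
    (charge neutrality, with mu_mu = mu_e from beta equilibrium)."""
    lo, hi = 0.0, 2000.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lepton_density(mid, m_e) + lepton_density(mid, m_mu) < rho_p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Muons appear only once $\mu_e$ exceeds $m_\mu=105.7$~MeV, so at low proton densities the charge is balanced by electrons alone.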
As in Ref.~\cite{dmnosso}, we use in the hadronic side of the model the updated version of the parametrization FSU2R~\cite{fsu2r-new}, with the following bulk parameters at the saturation density of symmetric nuclear matter: $\rho_0=0.15$~fm$^{-3}$, $B_0=-16.27$~MeV (binding energy), $M^*_0=556.8$~MeV (effective nucleon mass at $\rho_0$), and $K_0=237.7$~MeV (incompressibility at $\rho_0$). We also use $C=0.004$, $M_{\mbox{\tiny nuc}}=939$~MeV, $m_\sigma=497.479$~MeV, $m_\omega=782.5$~MeV, and $m_\rho=763$~MeV. In Ref.~\cite{dmnosso} the authors have also considered uncertainties in $M_0^*$, $K_0$ and $L_0$ (symmetry energy slope at $\rho_0$). It was verified that changes in $L_0$ produce parametrizations that give mass-radius profiles in agreement with astrophysical observations, such as the boundaries of $M=2.14^{+0.10}_{-0.09}M_{\odot}$~\cite{cromartie}, simultaneously with recent data obtained by the NICER mission~\cite{nicer1,nicer2,nicer3,nicer4}. Here we focus on the variation of this specific isovector quantity. In particular we use~\cite{piekaprex2}
\begin{eqnarray}
L_0=(106\pm37)~\mbox{MeV},
\label{sloperange}
\end{eqnarray}
a range compatible with the updated results provided by the \mbox{PREX-2} collaboration concerning neutron skin thickness measurements of $^{208}\rm Pb$~\cite{prex2}, and also overlapping with the limits determined from the analysis of charged pion spectra~\cite{pions}. For each value of $L_0$ chosen in this variation, we fix the symmetry energy at $\rho=2\rho_0/3$ to $\tilde{J}=25.68$~MeV (FSU2R parametrization). This value is consistent with the findings presented in Refs.~\cite{piekaprex2,pieka2001}. Through this procedure, we impose on the hadronic part of the model a linear correlation between $L_0$ and the symmetry energy at the saturation density, $J$. This is a particular relationship verified in the literature, see for instance Refs.~\cite{drischler,baoanli,bianca,wei}.
We start by showing in Fig.~\ref{def} the dimensionless tidal deformability generated by the \mbox{RMF-SRC} model with DM included.
\begin{figure}[!htb]
\centering
\vspace{-1cm}
\includegraphics[width=0.54\textwidth]{def.png}
\vspace{-1cm}
\caption{(a) $\Lambda$ as a function of $M/M_\odot$. Full circle: result of $\Lambda_{1.4}=190^{+390}_{-120}$ obtained in Ref.~\cite{ligo18}. (b) Dimensionless tidal deformabilities for the case of high-mass ($\Lambda_1$) and low-mass ($\Lambda_2$) components of the GW170817 event. Confidence lines, namely, 90\% and 50\%, also taken from Ref.~\cite{ligo18}. For both panels, the dark matter content is characterized by $k_F^{\rm DM}=0$ (no DM: gray bands), $0.02$~GeV (red bands) and $0.03$~GeV (blue bands).}
\label{def}
\end{figure}
In Fig.~\ref{def}{\color{blue}a} we present $\Lambda$ as a function of the NS mass in units of $M_\odot$. Each band represents the set of parametrizations generated by the variation of $L_0$ given in Eq.~(\ref{sloperange}). The content of dark matter is defined by the dark Fermi momentum, taken here as~$0$, $0.02$~GeV, and $0.03$~GeV. As shown in Ref.~\cite{dmnosso}, $k_F^{\rm DM}=0$ represents the system without dark matter. It is clear that in this case (gray band) the parametrizations obtained by using Eq.~(\ref{sloperange}) do not satisfy the constraint of $\Lambda_{1.4}=190^{+390}_{-120}$~\cite{ligo18}. However, the inclusion of DM helps the system become compatible with the limit provided by the LVC. In particular, for $k_F^{\rm DM}=0.03$~GeV (blue band) all parametrizations constructed through Eq.~(\ref{sloperange}) lie completely inside the range of $\Lambda_{1.4}$. This value of $k_F^{\rm DM}$ was shown in Ref.~\cite{dmnosso} to produce NS's in agreement with the recent observational data regarding the mass-radius diagram. Here we confirm that the system with this amount of DM is also consistent with the LVC constraint on $\Lambda_{1.4}$.
In Fig.~\ref{def}{\color{blue}b} we show the tidal deformabilities $\Lambda_1$ and $\Lambda_2$ of the binary NS system related to the GW170817 event, with component masses $M_1$, in the range of $1.37\leqslant M_1/M_\odot \leqslant 1.60$~\cite{ligo17}, and $M_2<M_1$. The diagonal dotted line corresponds to the $\Lambda_1=\Lambda_2$ case, in which $M_1=M_2$. The mass of the companion star is calculated through the relationship between $M_1$, $M_2$, and the chirp mass $\mathcal{M} = (M_1M_2)^{3/5}/(M_1+M_2)^{1/5}=1.188M_\odot$~\cite{ligo17}, i.e., $1.17 \leqslant M_2/M_\odot \leqslant 1.36$~\cite{ligo17,ligo18}. The upper and lower orange lines of the figure correspond to the 90\% and 50\% confidence limits, respectively, also obtained from the GW170817 event~\cite{ligo18}. From this figure, we also verify that the inclusion of DM in the system moves the bands toward satisfying the LVC constraints. Notice that the system with $k_F^{\rm DM}=0.03$~GeV is totally compatible with the 90\% region for all values chosen for $L_0$ in the range of Eq.~(\ref{sloperange}).
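The companion mass follows from inverting the chirp-mass relation, which is monotonic in $M_2$ at fixed $M_1$, so a simple bisection suffices. The sketch below (our own routine, masses in $M_\odot$) reproduces the quoted $M_2$ interval from the $M_1$ range.

```python
def chirp_mass(m1, m2):
    """M_chirp = (m1 m2)^(3/5) / (m1 + m2)^(1/5), masses in M_sun."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def companion_mass(m1, mchirp=1.188):
    """Bisection for the lighter companion m2 <= m1 at fixed chirp mass."""
    lo, hi = 0.1, m1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if chirp_mass(m1, mid) < mchirp:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With $\mathcal{M}=1.188M_\odot$, $M_1=1.60$ yields $M_2\approx1.17$ and $M_1=1.37$ yields $M_2\approx1.36$, matching the interval quoted above.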
These general features presented in Fig.~\ref{def} are also observed in Refs.~\cite{rmfdm3,eftdm1}, where other RMF-DM models are used (without SRC), including a chiral effective hadronic model. Therefore, our results point to a particular pattern regarding RMF models with DM included, at least concerning the tidal deformability. However, it is important to mention that increasing the amount of DM can also enhance~$\Lambda$. This is the case for some models in which a dark matter halo~\cite{nelson,ellis,sagun} is generated. In Ref.~\cite{nelson}, for instance, an increase of $\Lambda$ is verified for a total dark matter TOV mass~($M_{\mbox{\tiny DM}}$) exceeding $10^{-5}M_\odot$. In the analysis performed in Ref.~\cite{sagun}, bosonic self-interacting dark matter is coupled to a hadronic model through a two-fluid formalism (different from that used in this work). It is shown that for DM particle masses smaller than~$\sim 300$~MeV, $\Lambda$ increases with the DM fraction (here we fix the fermionic DM particle mass at $200$~GeV). Finally, in Ref.~\cite{ellis} the authors show that in the case of formation of a dark matter halo, the tidal deformability increases with~$M_{\mbox{\tiny DM}}$. The opposite situation is verified in the case of a neutron star with a DM core.
For the neutron stars described in the aforementioned works, a more sophisticated treatment of the dark matter contribution is performed, namely, the DM Fermi momentum is not taken as constant along the star radius. In our work, we implement the simpler case of fixing $k_F^{\mbox{\tiny DM}}$, as also done in Refs.~\cite{rmfdm3,eftdm1}, for instance. However, this latter treatment is completely appropriate for the purposes of the present study, namely, the investigation of tidal deformabilities and their relation with the moment of inertia.
We also performed an additional analysis by taking into account those \mbox{RMF-SRC-DM} parametrizations with a different range for the symmetry energy slope, namely, $40\mbox{ MeV}\leqslant L_0 \leqslant 60\mbox{ MeV}$, values often predicted by some hadronic models. We verified that these specific parametrizations are also compatible with the LIGO/Virgo predictions presented in Fig.~\ref{def}.
In order to identify, in another perspective, the effect on $\Lambda_{1.4}$ of the DM content of the parametrizations generated from Eq.~(\ref{sloperange}), we show in Fig.~\ref{def14} how $\Lambda_{1.4}$ correlates with the isovector quantities $L_0$ and $J$ by taking into account different values of the dark particle Fermi momentum.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.34]{def14.pdf}
\caption{$\Lambda_{1.4}$ as a function of (a) $L_0$ and (b) $J$ for different values of $k_F^{\rm DM}$. The range of $L_0$ is defined in Eq.~(\ref{sloperange}). Green dotted horizontal lines: boundaries of $\Lambda_{1.4}=190^{+390}_{-120}$~\cite{ligo18}.}
\label{def14}
\end{figure}
In Fig.~\ref{def14}{\color{blue}a} we see that $\Lambda_{1.4}$ decreases as $k_F^{\rm DM}$ increases, regardless of the value of $L_0$. The same occurs in Fig.~\ref{def14}{\color{blue}b}, now with respect to the symmetry energy at the saturation density. Notice that the dependence of $\Lambda_{1.4}$ on $L_0$ and $J$ reinforces the existence of a linear correlation between these two isovector quantities. Concerning $\Lambda_{1.4}\times L_0$, we remark that this pattern is also observed in Ref.~\cite{lucas}, in which a hadronic model with SRC but without DM was analyzed. However, notice that the inclusion of DM content reduces the growth of $\Lambda_{1.4}$ as a function of $L_0$, since we have $\Delta\Lambda_{1.4}\equiv \Lambda_{1.4}(143)-\Lambda_{1.4}(69)$ given by $272$, $216$, and $138$, respectively, for $k_F^{\rm DM}=0$, $0.02$~GeV, and $0.03$~GeV. We also remark that $\Lambda_{1.4}$ being an increasing function of $J$ in our analysis is totally different from the correlation exhibited in Ref.~\cite{lucas}. In that study, the authors considered independent variations of $J$ and $L_0$ and observed a decrease of $\Lambda_{1.4}$ with increasing $J$. Here, the opposite behavior is verified due to the linear correlation between $L_0$ and $J$. This relationship emerges since we are forcing a crossing point in the density dependence of the symmetry energy. As mentioned above, we impose a value of $25.68$~MeV for the symmetry energy at $\rho\simeq 0.1$~fm$^{-3}$. We address the reader to Ref.~\cite{bianca} for a detailed study concerning crossing points and linear correlations of nuclear matter bulk parameters. Moreover, we emphasize that the relationship between $J$, $L_0$, and tidal deformabilities has been the subject of investigation in many other works, such as those pointed out in Refs.~\cite{baoanli,baoanli2,baoanli3,malik19,cpc,fanji,sinha,ptep,liu,angli}.
A quantity directly related to the tidal deformabilities of a binary NS system is the coefficient $\tilde{\Lambda}$ defined as
\begin{eqnarray}
{\tilde{\Lambda}} = {16\over{13}}{{(M_{1}+12M_{2})M_{1}^{4}\Lambda_{1} + (M_{2}+12M_{1}) M_{2}^{4}\Lambda_{2}} \over {(M_{1}+M_{2})^{5}}},\qquad
\label{tilde}
\end{eqnarray}
where $\Lambda_{1}$ and $\Lambda_{2}$ are the dimensionless tidal deformabilities of each star. In the final inspiral phase of a coalescing binary NS system, periodic gravitational waves are emitted. The phase of these waves can be expressed in terms of a post-Newtonian expansion, yielding a term proportional to $\tilde{\Lambda}$ at lowest order~\cite{FH}. This quantity, extracted directly from the observed waveform, is used to investigate the response of the stellar system to the tidal field. In Fig.~\ref{deftilde} we show the plots of $\tilde{\Lambda}\times L_0$ and $\tilde{\Lambda}\times J$ generated through the \mbox{RMF-SRC} model with different DM contents.
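Equation~(\ref{tilde}) is symmetric under exchange of the two stars and reduces to $\tilde{\Lambda}=\Lambda_1=\Lambda_2$ in the equal-mass, equal-deformability limit (the prefactors combine to $16\cdot 26/(13\cdot 32)=1$), which makes for a convenient sanity check of any implementation. A sketch:

```python
def lambda_tilde(m1, m2, l1, l2):
    """Mass-weighted binary tidal parameter of Eq. (tilde)."""
    num = (m1 + 12.0 * m2) * m1 ** 4 * l1 + (m2 + 12.0 * m1) * m2 ** 4 * l2
    return (16.0 / 13.0) * num / (m1 + m2) ** 5
```

The mass weights mean that $\tilde{\Lambda}$ is dominated by the more massive, less deformable star, which is why the $\tilde{\Lambda}$ bands in Fig.~\ref{deftilde} track $\Lambda_{1.4}$ closely.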
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.34]{deftilde.pdf}
\caption{$\tilde{\Lambda}$ as a function of (a) $L_0$ and (b) $J$ for different values of $k_F^{\rm DM}$. The range of $L_0$ is defined in Eq.~(\ref{sloperange}). Dashed lines: range of $\tilde{\Lambda}=300^{+420}_{-230}$ determined by LVC~\cite{ligo19}.}
\label{deftilde}
\end{figure}
In this figure, $\tilde{\Lambda}$ is calculated as a function of the mass of one of the stars of the binary system, i.e., $\tilde{\Lambda}=\tilde{\Lambda}(M_1)$, or $\tilde{\Lambda}=\tilde{\Lambda}(M_2)$. As $M_1$, or $M_2$, is defined within a particular range according to the GW170817 event, each parametrization presenting a specific value of $L_0$, or $J$, produces a range for $\tilde{\Lambda}$. We compare the results obtained for the model with $k_F^{\rm DM}=0$, $0.02$~GeV, and $0.03$~GeV with the constraint $\tilde{\Lambda}=300^{+420}_{-230}$ provided by the LVC~\cite{ligo19}. Once again, we notice that the inclusion of DM in the system favors agreement with the observational data from the GW170817 event. The decrease of $\tilde{\Lambda}$ as a function of $k_F^{\rm DM}$ is also observed. Furthermore, as with the behavior between $\Lambda_{1.4}$ and $L_0$ depicted in Fig.~\ref{def14}, there is also a strong correlation between $\tilde{\Lambda}$ and $L_0$. The same is true for the relationship between $\tilde{\Lambda}$ and $J$.
As a last result, we show in Fig.~\ref{inertia} the dimensionless moment of inertia, $\bar{I}= I/M^3$, calculated from the \mbox{RMF-SRC} model for different values of $k_F^{\rm DM}$. This quantity is determined from the solution of Hartle's slow-rotation equation~\cite{land18,hartle,yagi13}, a differential equation for one of the metric decomposition functions~\cite{yagi13}, $\omega(r)$, coupled to the TOV equations. The moment of inertia is defined in terms of $\omega_R\equiv \omega(R)$ as $I=R^3(1-\omega_R)/2$, where $\omega_R$ is the frame-dragging function evaluated at the star's surface~\cite{land18}.
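Once $\omega_R$ is known from the slow-rotation equation, the remaining step is pure algebra. The sketch below evaluates $I$ and $\bar{I}$ in geometric units ($G=c=1$, lengths in km, $M_\odot \simeq 1.4766$~km); the sample values of $R$ and $\omega_R$ used in the check are purely illustrative, not outputs of the model.

```python
MSUN_KM = 1.4766  # one solar mass in km (G = c = 1)

def moment_of_inertia(R_km, omega_R):
    """I = R^3 (1 - omega_R) / 2 in km^3, with omega_R = omega(R)."""
    return R_km ** 3 * (1.0 - omega_R) / 2.0

def dimensionless_inertia(R_km, omega_R, M_solar):
    """Ibar = I / M^3, with the mass converted to geometric (km) units."""
    return moment_of_inertia(R_km, omega_R) / (M_solar * MSUN_KM) ** 3
```

For instance, $R=12$~km with an illustrative $\omega_R=0.88$ gives $\bar{I}\approx 13$ for a $1.338M_\odot$ star, the same order as the $\bar{I}_\star=11.10^{+3.68}_{-2.28}$ range discussed in this section.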
\begin{figure}[!htb]
\centering
\vspace{-1.2cm}
\includegraphics[width=0.54\textwidth]{inertia.png}
\vspace{-1cm}
\caption{Dimensionless moment of inertia as a function of (a)~dimensionless tidal deformability, and (b)~the ratio $M/M_\odot$. Dashed curve: fitting curve obtained in Ref.~\cite{land18}. The circle with error bars represents an indirect prediction of $\bar{I}(M=1.338M_\odot)$ made in Ref.~\cite{land18} by considering the observational data of dimensionless tidal deformability (see text for more details).}
\label{inertia}
\end{figure}
The authors of Refs.~\cite{science,yagi13} showed that the relation between $\bar{I}$ and $\Lambda$ is independent of the neutron/quark star structure in the case of slowly rotating stars. In Ref.~\cite{land18} the same result was obtained for a set of~$53$ Skyrme and RMF parametrizations. In Fig.~\ref{inertia}{\color{blue}a} it is clear that the parametrizations generated by the variation in Eq.~(\ref{sloperange}) are indistinguishable regardless of the value of $k_F^{\rm DM}$. Therefore, we can conclude that the universal relation between $\bar{I}$ and $\Lambda$, called the \mbox{$I$-Love} relation, is preserved even with the inclusion of dark matter in the system. The dashed line in Fig.~\ref{inertia}{\color{blue}a} represents the fitting curve determined in Ref.~\cite{land18}. We see that the model with DM studied here is compatible with this fit.
The authors of Ref.~\cite{land18} also determined a range for $\bar{I}$ related to the primary component of the pulsar \mbox{PSR J0737-3039}, namely, $\bar{I}_\star\equiv\bar{I}(M_\star)=11.10^{+3.68}_{-2.28}$, with $M_\star=1.338M_\odot$. This range was determined by using Skyrme and RMF parametrizations. First, a relation between $\Lambda_\star$ ($\Lambda$ related to $M_\star$) and $\Lambda_{1.4}$ was verified (\mbox{binary-Love} relation). Then, a fit of the $\Lambda_\star\times \Lambda_{1.4}$ curve was used together with the \mbox{$I$-Love} relation in order to determine $\bar{I}_\star$ as a function of~$\Lambda_\star$. Lastly, the observational range $\Lambda_{1.4}=190^{+390}_{-120}$ from the LVC was used to establish the limits for $\Lambda_\star$, and consequently, the range $\bar{I}_\star=11.10^{+3.68}_{-2.28}$. In Fig.~\ref{inertia}{\color{blue}b} we verify that the increase of $k_F^{\rm DM}$ produces a decrease of $\bar{I}$. We also find that the system with $k_F^{\rm DM}=0.03$~GeV is completely inside the limits for the moment of inertia of the pulsar \mbox{PSR J0737-3039A} predicted in Ref.~\cite{land18}. Furthermore, we verify that parametrizations generated by the \mbox{RMF-SRC} model, i.e., with no dark matter, are in agreement with the mass-radius diagrams obtained from chiral effective theory calculations performed in Refs.~\cite{hebeler,kruger,drischler} for $R\lesssim 14$~km. Full compatibility, on the other hand, is attained with the inclusion of dark matter content, specifically for $k_F^{\rm DM}=0.03$~GeV. In summary, this specific content of dark matter, implemented in the RMF model with short-range correlations, is compatible with all constraints derived from the GW170817 event concerning tidal deformabilities and the moment of inertia.
\section{Summary and concluding remarks}
\label{summ}
In this work, we investigate the capability of a hadronic relativistic model, with short-range correlations and dark matter content included~\cite{dmnosso}, to reproduce the observational data provided by the LIGO and Virgo Collaboration regarding the binary neutron star system of the GW170817 event, i.e., the one in which gravitational waves emitted from a neutron star merger were detected. We use the lightest neutralino, interacting with nucleons through the exchange of the Higgs boson, as the dark particle. In Ref.~\cite{dmnosso} it was already shown that this model also reproduces the recent observational data obtained by the NICER mission~\cite{nicer1,nicer2,nicer3,nicer4}.
We show that the dimensionless tidal deformability~$\Lambda$ decreases as the Fermi momentum of the dark particle increases. In particular, this feature helps the model satisfy the constraints of $\Lambda_{1.4}=190^{+390}_{-120}$ and $\tilde{\Lambda}=300^{+420}_{-230}$. Furthermore, a clear correlation between $\Lambda_{1.4}$ and the symmetry energy slope, $L_0$, and between $\tilde{\Lambda}$ and $L_0$, is verified for different values of $k_F^{\rm DM}$. Specifically, we use the variation of $L_0=(106\pm37)$~MeV~\cite{piekaprex2}, compatible with the updated results provided by the \mbox{PREX-2} collaboration concerning neutron skin thickness measurements of $^{208}\rm Pb$~\cite{prex2}, and also overlapping with the range found from the analysis of charged pion spectra~\cite{pions}. We also show that the $\Lambda_1\times\Lambda_2$ curves are moved toward the GW170817 observational data.
Finally, we verify that the \mbox{$I$-Love} relation, namely, the relationship between $\Lambda$ and the dimensionless moment of inertia $\bar{I}$, is preserved even with the inclusion of dark matter in the system. The constraint of $\bar{I}_\star\equiv\bar{I}(M_\star)=11.10^{+3.68}_{-2.28}$, with $M_\star=1.338M_\odot$, is also satisfied for the system with $k_F^{\rm DM}=0.03$~GeV.
\section*{ACKNOWLEDGMENTS}
This work is a part of the project INCT-FNA proc. No. 464898/2014-5. It is also supported by Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (CNPq) under Grants No. 312410/2020-4 (O.L.), No. 308528/2021-2 (M.D.), and 308486/2015-3 (T.F.). We also acknowledge Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP) under Thematic Project 2017/05660-0 and Grant No. 2020/05238-9 (O.L., C.H.L, M.D.).
\section{Introduction}
Fact checking has become an increasingly important tool to combat misinformation. The study of automated fact checking in NLP \cite{vlachos-riedel-2014-fact}, in particular, has yielded a number of valuable insights in recent years. These include task formulations such as matching for discovering already fact-checked claims \cite{shaar-etal-2020-known}, identifying neural fake news \cite{zellers2020defending}, fact verification in scientific \cite{wadden-etal-2020-fact} and public health \cite{kotonya-toni-2020-explainable-automated} domains, and end-to-end fact verification \cite{thorne-etal-2018-fever}, which is the subject of the FEVEROUS benchmark dataset \cite{aly2021feverous}.
A majority of automated fact checking studies only consider text as evidence for verifying claims. Recently, there have been a number of works which look at fact-checking
with
structured and semi-structured data, mainly in the form of tables and knowledge bases \cite{2019TabFactA} ---
but fact-checking from both structured and unstructured data has been largely unexplored.
Given the sophistication in the presentation of fake news, it is important to develop fact checking tools for assessing evidence from a wide array of evidence sources in order to reach a more accurate verdict regarding the veracity of
claims.
In this work, we propose a graph-based representation that supports both textual and tabular evidence, thus addressing some of the key limitations of past architectures.
This approach allows us to capture relations between evidence items as well as claim-evidence
pairs, borrowing from the argumentation and argument mining literature \citep{argmining-2020-argument,vecchi-etal-2021-towards}, as well as argument modeling for fact verification \cite{alhindi-etal-2018-evidence}.
We experiment with two formulations for graph learning. For the first, we employ a multi-task learning paradigm to jointly train a graph attention network \cite{velivckovic2017graph} for both the task of evidence extraction --- which we model as a node selection task --- and a graph-level veracity prediction task. In the second, we explicitly separate the verification and extraction tasks, where standard semantic search is used for evidence extraction, and veracity prediction is treated as a graph-level classification problem.
For veracity prediction we predict a label for each claim, one of \textsc{Supports}, \textsc{Refutes}, or \textsc{Not-Enough-Info} (NEI), which is conditioned on all relevant evidence, hence the intuition to frame veracity prediction as a graph-level prediction task.
In both formulations, we employ context-aware table linearization templates to produce per-cell sequence representations of tabular evidence and thus construct evidence reasoning graphs where nodes have heterogeneous evidence types (i.e., representing sentences and tables on the same evidence reasoning graph).
\paragraph{Contributions.}
The three main contributions of the paper are summarized below:
\begin{enumerate}
\item Provide \textbf{insightful empirical analysis} of the new FEVEROUS benchmark dataset.
\item Propose a novel framework for interpretable fact extraction using templates to derive \textbf{context-aware per-cell linearizations}.
\item Present a \textbf{graph reasoning model} for fact verification that supports both structured and unstructured evidence data.
\end{enumerate}
Both the joint model and separately trained models exhibit a significant improvement over the FEVEROUS baseline, as well as significant improvements for label accuracy and evidence recall. Our separated approach to fact extraction and verification achieves a FEVEROUS score of 0.23 and label accuracy of 53\% on the blind test data.
\section{Related Work}\label{sec:related-work}
\paragraph{Graph Reasoning for Fact Verification.} Several works explore graph neural networks (GNN) for fact extraction and verification, both for fine-grained evidence modelling \cite{liu-etal-2020-fine,zhong-etal-2020-reasoning} and evidence aggregation for veracity prediction \cite{zhou-etal-2019-gear}. Furthermore, graph learning has also been leveraged to build fake news detection models which learn from evidence from different contexts; e.g., user-based and content-based data \cite{liu-etal-2020-fine,lu-li-2020-gcan}. There are also non-neural approaches to fake news detection with graphs \cite{AhmadiLPS19,Kotonya-toni-2019-gradual}. However, to the best of our knowledge, this work is the first to employ a graph structure to jointly reason over both text and tabular evidence data in both single task learning (STL) and multi-task learning (MTL) settings.
\paragraph{Table Linearization.} A number of approaches have been adopted in NLP for table linearization. For example, \citet{gupta-etal-2020-infotabs} study natural language inference in the context of table linearizations, in particular they are interested to see if language models can infer entailment relations from table linearizations. The linearization approach employed by \citet{Schlichtkrull-etal-2021-joint}
is also used for automated fact verification. However,
they linearize tables row- and column-wise, whereas we focus on cells, as evidence items in the FEVEROUS dataset are annotated at table-cell level.
\section{Data Analysis}\label{sec:data}
Further to the FEVEROUS dataset statistics discussed by the task description paper \cite{aly2021feverous}, we perform our own data exploration. We present insights from our data analysis of the FEVEROUS dataset, which we use to inform system design choices.
\paragraph{Table types.} Wikipedia tables can be categorized into one of two classes: infoboxes and general tables. Infoboxes are fixed format tables which typically appear in the top right-hand corner of a Wikipedia article. General tables can convey a wider breadth of information (e.g., election results, sports match scores, the chronology of an event)
and typically have more complex structures (e.g., multiple headers).
List items can also be considered as a special subclass of tables, where the number of items is analogous to the number of columns
and the nests of the list signify table rows.
\paragraph{Evidence types.} The first observation we make is that, similar to the FEVER dataset \cite{thorne-etal-2018-fever}, a sizeable portion of the training instances rely on evidence items which are extracted from the first few sentences of a Wikipedia article. The most common evidence items are the first and second sentences in a Wikipedia article, which appear in 36\% and 18\% of evidence sets, respectively. The four most frequent evidence cells all come from the first table, with 49\% of first tables listed as evidence in the train and dev data being infoboxes. Further, the vast majority of cell evidence items are non-header cells; header cells account for only approximately 5.1\% of tabular evidence in the train and dev datasets. A summary of these findings is provided in Table \ref{tab:table-type} for the most common evidence types in the training data.
\begin{table}[ht]
\centering
\begin{tabular}{lr}
\toprule
\textbf{ Evidence type} & \textbf{\% Evidence sets} \\
\midrule
List items & 1.6\% \\
Sentences & 67.7\% \\
All tables & 58.2\%\\
\hspace*{3mm} Infoboxes & 26.5\%\\
\hspace*{3mm} General tables & 33.9\%\\
\bottomrule
\end{tabular}
\caption{Prevalence of evidence types in the training data by number of evidence sets in which they appear.}
\label{tab:table-type}
\end{table}
\paragraph{Evidence item co-occurrences.} We investigate the most common evidence pairs, both in individual evidence sets and also in the union of all evidence sets relating to a claim. The most common evidence pair in the training data is (\textsc{sentence\_0}, \textsc{sentence\_1}), which accounts for 3.2\% of evidence co-occurrences. The most common sentence-table cell co-occurrence is (\textsc{cell\_0\_2\_1}, \textsc{sentence\_0}). The most common table cell pair is (\textsc{cell\_0\_2\_0}, \textsc{cell\_0\_2\_1}). All of the ten most common co-occurrences either contain one of the first four sentences in an article or evidence from one of the first two tables.
\paragraph{\textsc{NEI} label.} Lastly, we choose to explore instances of the NEI class. We sample 100 instances of NEI claims from the training data and note their qualitative attributes. We
pay particular attention to this label as it is the least represented in the data. Unlike the FEVER score, the FEVEROUS metric requires the correct evidence, as well as the label, to be supplied for an NEI instance for credit to be awarded. Our analysis is summarized in Table \ref{tab:NEI-categories}. We categorize mutations, using the FEVEROUS annotation scheme, as one of three types: entity substitution, including more facts than available in the provided evidence (i.e., including additional propositions), and paraphrasing or generalizing. We use \emph{Other} to categorize claims with a mutation not captured by one of these three categories.
\begin{table}[ht]
\centering
\begin{tabular}{p{5cm}r}
\toprule
\textbf{Mutation Type} & \textbf{\% Sample} \\
\midrule
Entity Substitution & 21\% \\
More facts than in evidence & 42\% \\
Paraphrasing or generalizing & 36\% \\
Other & 1\% \\
\bottomrule
\end{tabular}
\caption{We sample 100 NEI instances and categorize them according to the type of lexical mutation which results in the claim being unverifiable.}
\label{tab:NEI-categories}
\end{table}
We note that a number of NEI examples are mutations of \textsc{Supports} or \textsc{Refutes} examples. For example the claim in Table \ref{tab:NEI} is a mutation of a \textsc{Supports} instance where entity substitution (humans $\rightarrow$ reptiles) has been used to make the first clause unverifiable, hence changing the label to NEI.
\begin{table}[ht]
\begin{tabular}{p{7.2cm}}
\hline
\begin{center}
\textbf{Claim} \\
\sethlcolor{beaublue}
\hl{Nucleoporin 153, a protein which in \textbf{reptiles} is encoded by the NUP153 gene},\\
\sethlcolor{bisque}
\hl{is an essential component of the basket of nuclear pore complexes (NPCs) in vertebrates, and required for the anchoring of NPCs.}\\[1em]
\end{center}
\\[1em]\hline
\begin{center}
\textbf{Evidence} \\
\sethlcolor{beaublue} \hl{
Nucleoporin 153 (Nup153) is a protein which in \textbf{humans} is encoded by the NUP153 gene.}\\
\sethlcolor{bisque}
\hl{
It is an essential component of the basket of nuclear pore complexes (NPCs) in vertebrates, and required for the anchoring of NPCs.}
\end{center}
\\[1em]\hline
\end{tabular}
\caption{NEI example where the evidence is highlighted according to the part of the claim to which it refers. The text in \textbf{bold} is the substitution which resulted in the label changing from \textsc{Supports} to \text{NEI}.}
\label{tab:NEI}
\end{table}
\section{Methods}
Our proposed method for fact verification is an end-to-end system comprising
three modules:
\begin{enumerate}[label=(\arabic*)]
\item A robust document retrieval procedure (see Section~\ref{sec:doc_retreival}).
\item An evidence graph construction and intermediate evidence filtering process (see Section~\ref{sec:graph_construction}).
\item A joint veracity label prediction and evidence selection layer that reasons over the evidence graph (see Section~\ref{sec:graph_reasoning}).
\end{enumerate}
An illustration of the complete pipeline is provided in Figure~\ref{fig:system}, and details of each processing stage are
provided in the following sections.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{images/system_diagram.pdf}
\caption{Our fact verification pipeline. We employ two graph reasoning approaches: STL, where evidence extraction and veracity prediction are modelled separately, and MTL, where further evidence filtering is performed jointly with veracity prediction by the Graph Reasoner.}
\label{fig:system}
\end{figure*}
\subsection{Document Retrieval}\label{sec:doc_retreival}
For document retrieval, we employ an entity linking and API search approach
similar to that of \citet{hanselowski-etal-2018-ukp}. The {WikiMedia API}\footnote{\url{https://www.mediawiki.org/wiki/API}} is used to query Wikipedia for articles related to the claim, using named entities and noun phrases from the claim as search terms. These retrieved Wikipedia page titles form our candidate document set.
Named entities that are not retrieved by the API are then extracted from the claim
as a handful of these identify pages which are present in the Wikipedia dump (e.g., \textbf{/wiki/Lars\_Hjorth} is present in the provided Wikipedia evidence dump, but is not returned by the WikiMedia API). In the same vein, we discard titles which are returned by the API, but are not in the Wikipedia dump. TF-IDF and cosine similarity are employed to score and rerank the retrieved Wikipedia articles with respect to their similarity to the claim.
As in the approach of \citet{hanselowski-etal-2018-ukp}, the seven highest ranked pages are chosen at test time.
For completeness, we also experiment with approaches to document retrieval which select pages based on a threshold score \citep{nie2019revealing}. Ultimately, we find these methods yield lower precision.
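As a concrete illustration, the TF-IDF reranking step described above can be sketched in plain Python. The tokenizer and exact weighting scheme used in our pipeline are not specified here, so whitespace tokenization and a smoothed log-idf are assumptions of this sketch.

```python
import math
from collections import Counter

def tfidf_rank(claim, titles_to_text, top_k=7):
    """Rerank candidate Wikipedia pages by TF-IDF cosine similarity
    to the claim and keep the top_k pages (seven at test time)."""
    docs = {title: text.lower().split() for title, text in titles_to_text.items()}
    n_docs = len(docs) + 1  # +1 smoothing keeps idf finite for unseen terms
    df = Counter(w for tokens in docs.values() for w in set(tokens))

    def idf(w):
        return math.log(n_docs / (1 + df[w])) + 1.0

    def vec(tokens):
        tf = Counter(tokens)
        return {w: tf[w] * idf(w) for w in tf}

    def cosine(a, b):
        dot = sum(v * b.get(w, 0.0) for w, v in a.items())
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(claim.lower().split())
    ranked = sorted(docs, key=lambda t: cosine(q, vec(docs[t])), reverse=True)
    return ranked[:top_k]
```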
\subsection{Evidence Reasoning Graph}\label{sec:graph_construction}
Similar to other fact verification systems \cite{augenstein-etal-2019-multifc,hidey-etal-2020-deseption}, we jointly train our model for both the evidence selection and veracity prediction tasks. In contrast to these approaches, however, we employ a graph reasoning module for the joint learning of the two tasks.
We choose this approach to exploit the permutation invariance of evidence with respect to a claim, as there is no canonical ordering of evidence. Our graph formulation differs from previous graph-based fact verification systems in that
we construct a \emph{heterogeneous graph to model both tabular and sequence evidence data}.
In the following sections we will describe two specific approaches that are taken for the fact verification task:
\begin{enumerate*}[label=(\arabic*)]
\item where we condition the graph model to learn both node-level, fine-grained evidence selection and graph-level veracity label prediction simultaneously, and
\item where we only learn graph-level veracity prediction.
\end{enumerate*}
\sethlcolor{beaublue}
\begin{table*}[ht]
\centering
\scalebox{0.9}{
\begin{tabular}{cp{5cm}p{7.5cm}}
\toprule
\textbf{Evidence Type} & \textbf{Linearization} & \textbf{Example from FEVEROUS dataset}
\\\midrule
\textbf{Infoboxes} & & \\
\cmidrule{1-1}
Headers & \texttt{TABLE} \textbf{has} \tcbox{\texttt{CELL\_I\_J}} \newline [in \texttt{SUBHEADER}] & Brewster Productions has \tcbox{Genres}. \newline [\textbf{/wiki/Brewster\_Productions}] \\[0.2cm]
Non-headers & \texttt{CELL\_I\_0} \textbf{of} \texttt{TABLE} \newline [in \texttt{SUBHEADER}] \textbf{is} \tcbox{\texttt{CELL\_I\_J}} & Current ranking of Barbora Krejčíková in Singles is \tcbox{No. 65 (16 November 2020)}. [\textbf{/wiki/Barbora\_Krejcikova}]\\
\midrule
\textbf{General tables} & &\\
\cmidrule{1-1}
Headers & \texttt{TABLE} \textbf{has} \tcbox{\texttt{CELL\_I\_J}} \newline [in \texttt{SUBHEADER}] & The 1964 United States Senate election in Maine has \tcbox{Party}.\newline [\textbf{/wiki/1908\_Clemson\_Tigers\_football\_team}]\\[0.2cm]
Non-headers & \texttt{TABLE/PAGE} \textbf{has} \newline \texttt{SUBHEADER\_0} \texttt{CELL\_I\_0}\newline \textbf{in} \texttt{SUBHEADER\_J} \newline\textbf{of} \tcbox{\texttt{CELL\_I\_J}} & 2014 Ladies European Tour has Rank 9 in Player of \tcbox{Florentyna Parker}.\newline [\textbf{/wiki/2014\_Ladies\_European\_Tour}]\\
\midrule
\textbf{List items} & & \\
\cmidrule{1-1}
Without subheaders & \texttt{TITLE} \textbf{includes} \tcbox{\texttt{ITEM\_I\_J}} & Site includes \tcbox{Location, a point or an area on the} \tcbox{Earth's surface or elsewhere.} \newline[\textbf{/wiki/Site}] \\[0.2cm]
With subheaders & \texttt{SUBHEADERS} \textbf{for} \texttt{TITLE} \newline \textbf{includes} \tcbox{\texttt{ITEM\_I\_J}} & The Player Honours for Park Sang-in includes \tcbox{K-League Best XI: 1985} \newline [\textbf{/wiki/Park\_Sang-in}] \\
\bottomrule
\end{tabular}}
\caption{Templates for encoding tabular evidence. \texttt{CELL\_I\_0}, \texttt{SUBHEADER\_0}, \texttt{SUBHEADER\_J}, \texttt{SUBHEADERS}, \texttt{TABLE}, \texttt{TITLE} and \texttt{PAGE} are all context elements. The content of the evidence item is \tcbox{highlighted}. In each case \texttt{ITEM\_I\_J} denotes list item content and \texttt{CELL\_I\_J} denotes table cell content.}
\label{tab:linearization}
\end{table*}
\paragraph{Linearizing Tabular Data.}
We linearize both table and list evidence data and generate from these linearizations a contextualized sequence representation which captures information about each cell as well as its surrounding page elements.
This is accomplished using templates that distinguish explicitly between infoboxes and general tables.
For the latter, we engineer the templates to handle two particular complexities that are present only in general tables:
\begin{enumerate*}[label=(\arabic*)]
\item nested headers, and
\item table cells which span multiple rows and multiple columns (see Figure \ref{fig:complex-tables}).
\end{enumerate*}
Furthermore, we also employ templates for producing context-rich representations of item lists (see Table \ref{tab:linearization} for more details).
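To make the per-cell linearization concrete, two of the infobox templates from Table \ref{tab:linearization} are sketched below. Only the surface templates come from the table; the function names and argument handling are illustrative.

```python
def linearize_header_cell(table_title, cell_value, subheader=None):
    """Render a header cell with the template
    'TABLE has CELL_I_J [in SUBHEADER]'."""
    ctx = f" in {subheader}" if subheader else ""
    return f"{table_title} has {cell_value}{ctx}."

def linearize_infobox_cell(table_title, row_header, cell_value, subheader=None):
    """Render a non-header infobox cell with the template
    'CELL_I_0 of TABLE [in SUBHEADER] is CELL_I_J'."""
    ctx = f" in {subheader}" if subheader else ""
    return f"{row_header} of {table_title}{ctx} is {cell_value}."
```

For example, the non-header template reproduces the tennis-ranking linearization shown in Table \ref{tab:linearization}.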
\sethlcolor{pink}
\begin{figure}[ht]
\centering
\scalebox{0.79}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{\tcbox{Club}} & \multirow{2}{*}{\tcbox{Season}} & \multicolumn{3}{|c|}{\tcbox{League}} \\ \cline{3-5}
& & Division & Apps & Goals \\ \hline
Santa Cruz & 2019 & Série C & 7 & 1 \\ \hline
\multirow{3}{*}{\tcbox{Athletico Paranaense}} & 2020 & \multirow{2}{*}{\tcbox{Serie A}} & 0 & 0 \\ \cline{2-2}\cline{4-5}
& 2021 & & 0 & 0 \\ \cline{2-5}
& \multicolumn{2}{|c|}{\tcbox{Total}} & 0 & 0 \\ \hline
Guarani (loan) & 2020 & Série B & 5 & 0 \\ \hline
\end{tabular}}
\caption{Example of a complex general table taken from \textbf{/wiki/Elias\_Carioca}. This table contains both multi-row cells and multi-column cells, some of which are headers. They are shown \tcbox{highlighted}.}
\label{fig:complex-tables}
\end{figure}
\paragraph{Graph Structure.} We construct a fully connected graph $G = (V, E)$, where each node $n_i \in V$ represents a claim-evidence pair, similar to previous evidence graphs for automated fact checking \cite{zhao2019transformer,zhou-etal-2019-gear}.
Self-loops are also included in $G$ for each node in order to improve evidence reasoning, so the set of edges for the graph is $ E = \{(n_i, n_j) \text{ }\vert\text{ } n_i, n_j \in V \}$.
At test time, we take
the Wikipedia pages output by the document retrieval module, segment each Wikipedia page into its constituent page items (i.e., sentences, table cells, table captions and list items), and refer to these as evidence items.
These evidence items are then filtered. Using an ensemble of pre-trained S-BERT sentence embeddings \cite{reimers-gurevych-2019-sentence},
we perform semantic search with the claim as our query. Cosine similarity is then used to rank the evidence items. For the joint and single training approaches, we select a different number of evidence nodes; in particular, a larger graph is used with the former.
For training, we select nodes to occupy the graph according to the following rule-set:
\begin{enumerate}[label=(\arabic*)]
\item If gold evidence, include as a node.
\item For claims that require a single evidence item, include the top four candidates returned using our semantic search approach as nodes.
\item For claims with more than one gold evidence item, retrieve the same number of candidates as gold items.
\end{enumerate}
The union of these sets forms the collection of nodes, $V$, that occupy the evidence graph $G$.
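The rule-set above can be sketched as follows; how overlaps between gold evidence and retrieved candidates are resolved is an assumption of this sketch, not specified by the rules.

```python
def select_training_nodes(gold_evidence, ranked_candidates):
    """Assemble the node set V for one training claim: gold evidence is
    always included; for single-item gold sets the top four retrieved
    candidates are added, otherwise as many candidates as gold items.
    Candidates already present as gold evidence are skipped (assumption)."""
    k = 4 if len(gold_evidence) == 1 else len(gold_evidence)
    nodes = list(gold_evidence)
    for cand in ranked_candidates:
        if len(nodes) >= len(gold_evidence) + k:
            break
        if cand not in nodes:
            nodes.append(cand)
    return nodes
```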
\paragraph{Node Representations.}
For the initial node representations, similar to \citet{liu-etal-2020-fine} and \citet{zhao2019transformer}, we represent evidence nodes with the claim to which they refer as context.
The claim is concatenated with a constructed context-rich evidence sequence $e_i$. When constructing the sequences, $e_i$, we consider the unstructured evidence items (i.e., sentences and table captions) and the structured table and list items separately.
For sentences and table captions the evidence sequence is generated by concatenating the evidence item with the page title which serves as context. For table cells and list items we perform a per cell linearization, where this linearization forms the evidence sequence for table and list item evidence items (see Table \ref{tab:linearization} for the templates used). For each evidence item, we feed this claim-evidence sequence pair to a RoBERTa encoder \cite{liu2019roberta}, and each node $\mathbf{n}_i \in V$ in an evidence graph has the pooled output of the last hidden state of the [CLS] token, $\mathbf{h}_i^0$ as its initial state:
\begin{equation}
\mathbf{n}_i = \mathbf{h}_i^0 = \text{RoBERTa}_{\text{CLS}}(c, e_i).
\label{eq:rep1}
\end{equation}
\subsection{Evidence Selection and Veracity Prediction}\label{sec:graph_reasoning}
\paragraph{Training graphs.} We train two graph networks, one for joint veracity prediction and evidence extraction, and the second solely for the veracity prediction task.
\paragraph{Oversampling NEI Instances.}
As discussed in Section~\ref{sec:data}, the FEVEROUS dataset suffers from a significant class imbalance with respect to the NEI instances.
Similar to the baseline approach, we employ
techniques for generating new NEI instances in order to
address this issue.
Concretely, we use two data augmentation strategies in order to increase the number of NEI at train time:
\begin{enumerate*}[label=(\arabic*)]
\item evidence set reduction, and
\item claim mutation.
\end{enumerate*}
For the first case, we randomly sample \textsc{Supports} and \textsc{Refutes} instances
and drop evidence. Given the distribution of entity substituted and non-entity substituted mutations --- as discovered in our data analysis (see Section \ref{sec:data}) --- we make the choice to include in the training data: 15,000 constructed NEI examples made using the first approach, and 5,946 NEI examples constructed using the second. This means that a total of 92,237 NEI examples were used for model training.
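A minimal sketch of the evidence-set reduction strategy, assuming a simple dictionary schema for training instances (the field names are hypothetical, not the FEVEROUS loader's):

```python
import random

def nei_by_evidence_reduction(example, rng=None):
    """Augmentation strategy (1): turn a sampled SUPPORTS/REFUTES
    instance into an NEI one by dropping part of its evidence set,
    so the remaining evidence no longer fully verifies the claim."""
    rng = rng or random.Random(0)
    evidence = list(example["evidence"])
    if len(evidence) < 2:
        return None  # dropping would empty the evidence set entirely
    kept = rng.sample(evidence, len(evidence) - 1)
    return {"claim": example["claim"],
            "evidence": kept,
            "label": "NOT ENOUGH INFO"}
```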
\paragraph{STL: Separate Verification and Extraction.}
For the first model, we perform the tasks of evidence selection and veracity prediction separately. We make use of an ensemble semantic search method for extracting top evidence items for claims. We employ S-BERT\footnote{We use the `msmarco-distilbert-base-v4' and `paraphrase-mpnet-base-v2' pretrained models.} to encode the claim and the evidence items separately. We then compute cosine similarity for each claim-evidence pair.
The 25 highest-ranking tabular evidence items and the top-scoring 5 sentences (and captions) for each claim are selected as the nodes of our evidence reasoning graph at test time, in line with the evidence limit imposed by the FEVEROUS metric.
When constructing the evidence graph at test time, we choose to exclude header cells and list items evidence types as nodes as they account for a very small portion of evidence items (see Section \ref{sec:data}), and experimentation shows that the evidence extraction model has a bias to favour these evidence elements over sentences. We use two GAT layers in our graph reasoning model, with: a hidden layer size of 128, embeddings size of 1024, and a global attention layer for node aggregation. The logits generated by the model are fed directly to a categorical cross entropy loss function, and the veracity label output probability distribution \text{\textbf{p}$_i$}, for each evidence graph $G_i \in \mathcal{G} $, is computed using the relation
\begin{equation}
\mathbf{p}_i = \mathrm{softmax}(\text{MLP}(\mathbf{W} \mathbf{o}_i + \mathbf{b})),
\label{eq:veracity-prob}
\end{equation}
where
\begin{equation}
\mathbf{o}_i = \sum_{n \in V} \mathrm{softmax} \left(
h_{\mathrm{gate}} ( \mathbf{x}_n ) \right) \odot
h_{\mathbf{\Theta}} ( \mathbf{x}_n ).
\label{eq:global-attention}
\end{equation}
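The readout in Eq.~(\ref{eq:global-attention}) is a softmax-gated global attention pooling over graph nodes. A small pure-Python sketch, with $h_{\mathrm{gate}}$ and $h_{\mathbf{\Theta}}$ modelled as linear maps with illustrative weights, makes the computation explicit.

```python
import math

def global_attention_readout(node_feats, gate_w, theta_w):
    """Softmax-gated readout: o = sum_n softmax(h_gate(x_n)) * h_theta(x_n),
    where h_gate is a scalar-valued and h_theta a vector-valued linear map.
    node_feats: list of node feature vectors (lists of floats)."""
    gates = [sum(g * x for g, x in zip(gate_w, xn)) for xn in node_feats]
    m = max(gates)  # subtract the max for numerical stability
    weights = [math.exp(g - m) for g in gates]
    z = sum(weights)
    alphas = [w / z for w in weights]  # softmax over the graph's nodes
    out = [0.0] * len(theta_w)
    for a, xn in zip(alphas, node_feats):
        hx = [sum(w * x for w, x in zip(row, xn)) for row in theta_w]
        out = [o + a * h for o, h in zip(out, hx)]
    return out
```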
\paragraph{MTL: Joint Verification and Extraction.}
We also experiment with a joint training or multi-task learning (MTL) approach in order to explore if simultaneously learning the veracity label and evidence items can lead to improvements in the label accuracy metric and also evidence prediction recall and precision. For this approach, we construct larger evidence graphs at test time, including the thirty-five highest ranked evidence items according to the S-BERT evidence extraction module. The intention is for the graph network to learn a binary classification for each claim-evidence pair in the network.
For the multi-task learning model, we increase the dimensions of our graph network by feeding our initial input graphs to two separate GAT components (in order to increase the model's capacity for
learning the more complex multi-task objective), the outputs of which, $\mathbf{h}_a$ and $\mathbf{h}_b$, are concatenated to form representation $\mathbf{h}$ over which we compute global attention,
where the combined representation takes the form:\footnote{We denote the concatenation of vectors $\mathbf{x}$ and $\mathbf{y}$, by $[\mathbf{x}; \mathbf{y}]$.}
\begin{equation}
\mathbf{h} = [ \mathbf{h}_a ; \mathbf{h}_b ].
\end{equation}
The binary cross entropy loss is then used for the node-level evidence selection task, and, as with the separated model, we use categorical cross entropy to compute the graph-level veracity prediction, as shown in (\ref{eq:veracity-prob}) and (\ref{eq:global-attention}).
The resulting joint graph neural network is then trained with the linear-additive objective
\begin{equation}
\mathcal{L}_{\text{joint}} = \lambda \mathcal{L}_{\text{evidence}} + \mathcal{L}_{\text{label}},
\end{equation}
taking the form of a Lagrangian with multiplier $\lambda \geq 0$, where $\mathcal{L}_{\text{evidence}}$ is the binary cross entropy computed over the node-level evidence probabilities
\begin{equation}
\mathbf{p}_{\text{evidence}} = \mathrm{sigmoid}(\text{MLP}(\mathbf{W}_i \mathbf{h} + \mathbf{b})).
\end{equation}
As with the previous approach, we feed the model logits to our loss functions and use an Adam optimizer to train the network, and set $\lambda = 0.5$.
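Numerically, the joint objective $\mathcal{L}_{\text{joint}}$ behaves as expected on toy inputs. This sketch operates on probabilities for brevity, whereas the model itself feeds logits to the loss functions.

```python
import math

def bce(p, y, eps=1e-9):
    """Binary cross entropy for one node probability p and binary label y."""
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def joint_loss(node_probs, node_labels, label_probs, gold_label, lam=0.5):
    """L_joint = lam * L_evidence + L_label: BCE averaged over evidence
    nodes plus categorical cross entropy on the graph-level label."""
    l_evidence = sum(bce(p, y) for p, y in zip(node_probs, node_labels)) / len(node_probs)
    l_label = -math.log(label_probs[gold_label] + 1e-9)
    return lam * l_evidence + l_label
```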
\subsection{Hyper-parameter Settings}
For all models, we make use of a \textsc{RoBERTa-large} model which is pre-trained on a number of NLI datasets including \textsc{NLI-FEVER} \citep{nie-etal-2020-adversarial}. We use a maximum sequence length of 512 for encoding all claim-evidence concatenated pairs. We experiment with the following learning rates [\underline{1e-5}, 5e-5, 1e-4], ultimately choosing the learning rate underlined. Training was performed using a batch size of 64. We train the single-objective model for 20k steps,
choosing the weights with the minimum veracity prediction label loss, and train the joint model for 20k
steps, taking the model with highest recall for evidence extraction. The Adam optimizer is used in training for both approaches.
\section{Results}
We report the results of the entire
fact extraction and verification
pipeline, as well as the evaluation of the pipeline's performance for
intermediate stages of the fact verification system, e.g., document retrieval and evidence selection.
\paragraph{Document retrieval.} Our document retrieval method shows significant improvement over the TF-IDF+DrQA approach used by the baseline. In particular, we find that our document retrieval module sees gains from querying the Wikipedia dump for pages related to entities which are not retrieved by the WikiMedia API. However, we do note that our approach struggles to retrieve Wikipedia pages in cases relating to specific events which can only be inferred through reasoning over the claim.
For example, consider the following claim from the development dataset: \textit{``2014 Sky Blue FC season number 18 Lindsi Cutshall (born October 18, 1990) played the FW position.''}. In this case, the document selection process returns \textit{``Sky Blue FC''}, \textit{``Lindsi Cutshall''}, and \textit{``2015 Sky Blue FC season''}, but does not return the gold evidence page \textit{``2014 Sky Blue FC season''} which is required for verification of the claim.
We report recall@$k$ for $k =\{3,5,7\}$ where $k$ is the number of Wikipedia page documents retrieved by the module. Our approach shows significant improvements over the baseline (see Table \ref{tab:document_retrieval}).
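For reference, the Recall@$k$ reported here can be computed as below; whether full or partial per-claim coverage is credited is an assumption of this sketch (the fractional variant is used).

```python
def recall_at_k(gold_sets, ranked_lists, k):
    """Document retrieval Recall@k averaged over claims: per claim,
    the fraction of gold pages found in the top-k retrieved documents."""
    scores = []
    for gold, ranked in zip(gold_sets, ranked_lists):
        top = set(ranked[:k])
        scores.append(len(set(gold) & top) / len(gold) if gold else 1.0)
    return sum(scores) / len(scores)
```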
\begin{table}[ht]
\centering
\begin{tabular}{lccc}
\toprule
\textbf{Method} & \textbf{Rec@3} & \textbf{Rec@5} & \textbf{Rec@7}\\
\midrule
Baseline & 0.58 & 0.69 & -- \\
Ours & 0.65 & 0.73 & \textbf{0.80}\\
\hline
\end{tabular}
\caption{Document retrieval results measured by Recall@$k$, where $k$ is the number of documents retrieved. Results reported for the dev set.}
\label{tab:document_retrieval}
\end{table}
\paragraph{Evidence selection and veracity prediction.} For evidence selection and veracity prediction, we observe that the approach trained for the single objective of veracity prediction marginally outperforms the jointly trained module (see Table \ref{tab:evidence_coverage}). We hypothesize that the difficulty of learning to select the correct evidence nodes along with predicting veracity might be the cause of this. It is possible that performance of the joint model could be improved with better evidence representation or through the use of a different graph structure, e.g., by incorporating edge attributes.
\begin{table}[H]
\centering
\begin{tabular}{lcc}
\toprule
\textbf{Method} & \textbf{Recall} & \textbf{LA} \\
\midrule
Baseline & 29.51 & 53.22 \\
STL & \textbf{37.20} & \textbf{62.89}\\
MTL & 36.25 & 62.21\\
\bottomrule
\end{tabular}
\caption{System performance on the dev set for evidence recall and label accuracy.}
\label{tab:evidence_coverage}
\end{table}
Finally, we submitted our blind test results for STL, which is our best performing method, to the after-competition FEVEROUS leaderboard. Our system outperforms the baseline significantly on both the FEVEROUS metric and also label accuracy as reported in Table \ref{tab:final_results}. Furthermore, our results on the blind test data show almost no degradation from development to test set with respect to the evidence recall which remains at 37\%. So the cause of our reduced FEVEROUS score between the development and test data is mainly due to a decrease in label accuracy from 63\% on the development data to 53\% for the test data. We are confident that this could be improved with better label accuracy for the NEI class.
\begin{table}[ht]
\centering
\begin{tabular}{ccccc}
\hline
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c}{\textbf{Dev}} & \multicolumn{2}{c}{\textbf{Test}} \\
\cmidrule{2-5}
& \textbf{LA} & \textbf{FS} & \textbf{LA} & \textbf{FS} \\
\hline
Baseline & 53.22 & 19.28 & 47.60 & 17.70 \\
Ours & \textbf{62.81} & \textbf{25.71} & \textbf{53.12} & \textbf{22.51} \\
\bottomrule
\end{tabular}
\caption{Results for label accuracy (\textbf{LA}) and FEVEROUS score (\textbf{FS}) for the full pipeline on both the development and blind test datasets.}
\label{tab:final_results}
\end{table}
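A simplified sketch of how the FEVEROUS score couples evidence retrieval and label accuracy: a claim counts only when the predicted label is correct \emph{and} the retrieved evidence fully covers at least one gold evidence set. This omits details of the official metric, such as the cap on the number of retrieved items.

```python
def feverous_score(instances):
    """Percentage of claims with a correct label AND full coverage of at
    least one gold evidence set by the retrieved evidence (simplified)."""
    correct = 0
    for inst in instances:
        label_ok = inst["pred_label"] == inst["gold_label"]
        evidence_ok = any(set(gold) <= set(inst["retrieved"])
                          for gold in inst["gold_evidence_sets"])
        if label_ok and evidence_ok:
            correct += 1
    return 100.0 * correct / len(instances)
```

This coupling explains why the score in Table \ref{tab:final_results} is well below the label accuracy: claims with a correct label but incomplete evidence contribute nothing.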
\subsection{Case Study and System Interpretability}
We present an example of a claim from the development dataset, which requires both tabular and textual evidence to be verified. We show how it is labelled by our pipeline (see Table \ref{tab:case_study}). For this example, our evidence selection module correctly identifies all three evidence items required to fact-check the claim. Furthermore, two of the three evidence items receive the highest relevance scores from our evidence selection module. Of the irrelevant evidence items retrieved for this claim, eleven out of twenty-two come from an unrelated Wikipedia page (``Scomadi Turismo Leggera''). The correct label of \textsc{Supports} is also predicted for this instance.
In order to explore the interpretability of system predictions, for this same instance we analyse the node attention weights of the first GAT layer; they are shown in parentheses for each predicted evidence item in Table \ref{tab:case_study}. We can see that the two evidence nodes with the highest values both correspond to items in the gold evidence set. However, the third gold evidence item, \textsc{Scomadi\_sentence\_15}, has a much lower weight than a number of items which are not in the gold evidence set.
\begin{table}[ht]
\centering
\begin{tabular}{p{7.1cm}}
\toprule
\textbf{Claim}
``In 2019, Scomadi, a private limited company with limited liability, was bought by a British owner which changed Scomadi's management structure.''\\
\midrule
\textbf{Evidence} \\
Scomadi\_cell\_0\_0\_1,\newline Scomadi\_sentence\_14, Scomadi\_sentence\_15.\\
\textbf{Predicted Evidence}\\
(1) \tcbox{Scomadi\_cell\_0\_0\_1}\hfill (\textit{0.1794}),\newline
(2) \tcbox{ Scomadi\_sentence\_14}\hfill (\textit{0.1203}),\newline
(3) Scomadi\_table\_caption\_0\hfill (\textit{0.0871}), \newline
(4) Scomadi\_cell\_0\_3\_1 \hfill (\textit{0.0685}), \newline
(5) Scomadi\_cell\_0\_7\_1\hfill (\textit{0.0561}), \newline
(6) Scomadi\_cell\_0\_2\_1\hfill (\textit{0.0472}), \newline
(7) Scomadi\_cell\_0\_8\_1\hfill (\textit{0.0405}), \newline
(8) \tcbox{Scomadi\_sentence\_15}\hfill (\textit{0.0360}), \newline
(9) Scomadi\_sentence\_11\hfill (\textit{0.0324}), \newline
(10) Scomadi\_sentence\_0\hfill (\textit{0.0292}), \newline
(11) Scomadi\_cell\_0\_6\_1\hfill (\textit{0.0266}), \newline
(12) Scomadi\_cell\_0\_5\_1\hfill (\textit{0.0243}), \newline
(13) Scomadi\_cell\_0\_1\_1\hfill (\textit{0.0224}), \newline
(14) Scomadi\_cell\_0\_4\_1\hfill (\textit{0.0208}).\\
\midrule
\textbf{Label} \hfill \textsc{Supports}\\
\textbf{Predicted Label} \hfill \textsc{Supports}\\
\bottomrule
\end{tabular}
\caption{Example claim from the development dataset which requires extracting both tabular and textual evidence in order for it to be verified. For brevity we only show the top fourteen (out of twenty-five) extracted evidence items, correctly predicted evidence is \tcbox{highlighted}.}
\label{tab:case_study}
\end{table}
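The attention-based ranking shown in Table \ref{tab:case_study} can be reproduced schematically: raw first-layer attention logits over candidate evidence nodes are softmax-normalized and sorted, so the parenthesized weights sum to one. The node names and scores below are hypothetical.

```python
import math

def rank_evidence_by_attention(scores):
    """Softmax-normalize raw attention logits and rank evidence nodes."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {node: math.exp(s - m) for node, s in scores.items()}
    z = sum(exps.values())
    weights = {node: e / z for node, e in exps.items()}
    return sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
```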
\section{Conclusion and Future Work}
In this work, we have demonstrated two novel approaches for fact extraction and verification that support both structured and unstructured evidence. These architectures were motivated by literature in argumentation, and also by the empirical analysis presented in Section~\ref{sec:data}. Our results show significant improvement over the shared task baseline for both the joint and separated models, with the latter generating a marginal improvement on the FEVEROUS metric compared with the former. Overall, we conclude that the use of graph-based reasoning in fact verification systems could hold great promise for future lines of work.
We hypothesize that exploring varied task formulations could yield strong improvements in model performance, for example: constructing reasoning graphs on an evidence set level, using the FEVER dataset to augment the NEI claims used during training, or further fine-tuning sentence embeddings on the FEVEROUS dataset. Furthermore, we believe further insights could be gained by evaluating our table linearization approach on other datasets related to fact verification over tabular data. In addition to this, we hope to conduct further experiments with our graph-based approach using structured and unstructured evidence independently, to further investigate which aspect of our approach led to the improvement on the FEVEROUS score.
Incorporating prior knowledge or constraints into the training procedure would also be an interesting direction.
Finally, we
believe that our graph-based approach lends itself well to the extraction of veracity prediction \emph{explanations} \cite{kotonya-toni-2020-explainable}, obtained from evidence extracted from our underpinning graphs as justifications for claims. The ability to provide evidence for a claim, and to justify this, would better enable the integration of these techniques in practical systems.
\paragraph{Disclaimer}
This paper was prepared for informational purposes by the Artificial
Intelligence Research group of JPMorgan Chase \& Co and its affiliates (``J.P.\
Morgan''), and is not a product of the Research Department of J.P.\ Morgan.
J.P.\ Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information
contained herein. This document is not intended as investment research or
investment advice, or a recommendation, offer or solicitation for the purchase
or sale of any security, financial instrument, financial product or service, or
to be used in any way for evaluating the merits of participating in any
transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful. \copyright{} 2021 JPMorgan Chase \& Co. All rights reserved.
\section{Introduction}
Thanks to the pervasive use of technology, we collect volumes of data that are doubling every two years. {\em Big Data analytics} aims at extracting hidden knowledge from the data and taking valuable actions, often with the support of {\em artificial intelligence} (AI)~\cite{VENIERIS18}. For example, precision medicine can create custom medical practices after analyzing patients' data that are continuously collected. Data sources are inherently distributed and heterogeneous, while accurate predictions and decisions often lead to demanding processing requirements. High-Performance Big Data Analytics (HPDA) applications expose a high degree of parallelism and can benefit from hardware acceleration, but the limited bandwidth and the excessive power consumption of data movements put high pressure on communication and storage. HPDA applications thus require (1) efficient hardware acceleration for data processing~\cite{RDF15,FPGA17}, (2) communication cost reduction, by moving the computation closer to the data sources~\cite{KAMBATLA20142561,MINUTOLI16}, and (3) data protection from unauthorized accesses during all application phases~\cite{jin2020security}.
To accelerate data processing, HPDA systems will combine heterogeneous nodes~\cite{9256819}, with general-purpose processors and FPGA devices, across different technologies (e.g., HPC, cloud computing, and edge devices).
Optimizing communication and storage requires matching the characteristics of the target system (e.g., distribution of the nodes and communication infrastructure, size of on-chip and off-chip memories, and number of memory channels) and of the applications (e.g., data distribution and access patterns). AI methods and security threats may impose additional application and architectural constraints, especially when the data and the computation are geographically distributed.
Since a one-size-fits-all solution is impossible, future HPDA systems will be {\em data-driven}, with application-specific optimizations to match the application requirements, the nature and locality of the data, and the hardware characteristics~\cite{KAMBATLA20142561}. Programming such systems necessitates the use of complex data management techniques and domain-specific annotations, which are not well supported in current design frameworks. This leaves most of the effort to the application developers. Solutions to these programmability issues demand methods to represent functional and non-functional properties, drive the hardware-software compilation, and dynamically manage the underlying distributed hardware in order to obtain fast, scalable, and secure HPDA systems.
The EU project EVEREST (dEsign enVironmEnt foR Extreme-Scale big data analyTics on heterogeneous platforms - \url{http://www.everest-h2020.eu}) proposes a design environment for HPDA applications on distributed and heterogeneous systems. The EVEREST target system seamlessly combines nodes with IBM POWER9 CPUs and coherent FPGA accelerators (for cloud computing), and disaggregated FPGA devices~\cite{8071053} (for edge computing). The EVEREST design environment complements state-of-the-art programming models (e.g., OpenCL, SYCL, OpenMP) with domain-specific extensions to (1) provide extra characteristics of the algorithms and data, (2) exploit the available hardware resources with alternative code/hardware variants, (3) promote the use of high-level synthesis (HLS)~\cite{TCAD16} for generating AI accelerators, and (4) improve the dynamic control of the distributed execution~\cite{cima2018hyperloom,Gadioli19Margot}.
\section{EVEREST Approach}
Our {\bf EVEREST System Development Kit (SDK)} is a design environment to ease the description, optimization and execution of Big Data applications with heterogeneous data sources onto FPGA-based architectures, operating at design and run time.
At {\em design time}, we focus on (1) the application description along with non-functional requirements, (2) the generation of several hardware and software variants, and (3) the customization of the distributed memory architecture. We aim at developing a data-driven hardware/software compilation framework that takes as input an application description using a combination of workflow libraries, AI libraries and frameworks, and domain-specific extensions. The compilation engine explores code variants and uses HLS for generating hardware accelerators. We represent the resulting application with mainstream parallel programming models (like SYCL). Flexible memory managers will enable co-optimization of computation, communication, and storage, moving the computation closer to the data, and implementing hardware-assisted data protection.
At {\em runtime}, we build a virtualized environment to dynamically select the code variant to execute for each task, based on the workload and data conditions. The virtualized environment will abstract hardware characteristics of the EVEREST nodes (based on different CPU architectures e.g., x86 on the cloud and ARM/RISC-V on the edge) to present an integrated execution environment for the applications. This combined solution allows designers to match the data requests with the underlying hardware to optimize the data transfers, exploit the spatial parallelism with the hardware accelerators, and react to changes in the workload conditions.
\section{Data-driven Compilation Framework}
\subsection{Application specification and definition of requirements}\label{sec:data}
The EVEREST design framework receives as input the application description (i.e., a workflow pipeline where each node can be specified in C/C++ or with proper AI libraries). Industry-grade applications often encompass end-to-end data processing workflows composed of a large number of interconnected computational tasks of various granularity. EVEREST will feature a scalable platform based on HyperLoom~\cite{cima2018hyperloom} for describing and executing complex workflows in large-scale distributed environments with various virtualized heterogeneous resources. The envisioned platform aims to improve resource utilization and reduce the overall workflow processing time.
Application experts are offered {\bf embedded domain-specific languages} (DSLs) to express the semantics and security requirements of computational tasks to enable high-level code optimizations.
DSL extensions have been successfully demonstrated in many domains, such as computational fluid dynamics~\cite{rink_rwdsl18}, hybrid particle-mesh simulations~\cite{karol_toms18}, tensor expression optimizations~\cite{rink_gpce18,rink_array19,chen2018tvm}, and dataflow languages~\cite{ertel_cc18}.
EVEREST proposes a data-centric approach for security, dealing with confidentiality, authentication and integrity of the data handled by the system with {\bf hardware-assisted data protection} applied to both edge devices and data center nodes.
EVEREST will propose a comprehensive library of optimized accelerators for memory and near memory encryption, fitting the area, energy and performance constraints of the platforms. We will include information flow tracking, monitoring, and protection against malicious uses, including side-channel and buffer-overflow attacks~\cite{8356053}.
EVEREST aims at developing a unified MLIR representation for the transparent support of several high-level ML frameworks (e.g., TensorFlow or PyTorch) and high-level optimizers (e.g., XLA, Glow, TVM)~\cite{SZE17}.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{EVEREST_compiler-flow.pdf}
\caption{\vspace{-6pt}Overview of the data-driven compilation flow.}\vspace{-6pt}
\label{fig:comp-flow}
\end{figure}
\subsection{Generation of software and hardware variants}
DSLs will be used to concisely express performance-critical functionality and annotate data characteristics and requirements. Tensors and particles are two examples of EVEREST data-centric programming abstractions that will enable optimization of data communications and generation of custom memory subsystems. DSLs for expression languages will enable highly-optimized kernel generation either in software or hardware to enlarge the optimization space~\cite{rink_gpce18}, while allowing more control for provably safe execution~\cite{rink_array19}.
For a higher-level coordination of the workflow kernels, EVEREST will look at functional abstractions to implicitly express the application dataflow~\cite{ertel_cc18,ertel_haskell19} and its integration on HyperLoom~\cite{cima2018hyperloom}.
The compiler front-end unifies the orchestration and the kernel specifications into a single MLIR as shown in Fig.~\ref{fig:comp-flow}.
We will extend the LLVM compilation framework~\cite{lattner07} with dedicated MLIR dialects~\cite{mlir} for domain-specific kernels. The tool chain will support standard exchange formats used in machine learning (e.g., NNEF or ONNX).
The middle-end of the compilation flow will rely on high-level architecture models~\cite[Chapter 6]{castrillon14_springer}\cite{ieee-2804-2019} and simulators~\cite{lowe-power_gem5_2020,menard_samos17} to explore the design space and create {\bf multiple hardware and software variants}.
These variants are performance/energy trade-offs that are exposed to the runtime system.
For instance, a software-only implementation could explore layouts of particles as array-of-structures or structure-of-arrays, or could tile complex tensor expressions to fit the memory hierarchy while allowing different threading implementations for the runtime.
Hardware variants could implement a chain of tensor operations directly on the FPGA logic before writing back to main memory.
Hardware/software partitioning will be driven by annotations and the two parts will be co-optimized, including hardware estimations for code-snippets (cf. Fig.~\ref{fig:comp-flow}).
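As a minimal illustration of the software-variant space mentioned above, the same particle data can be laid out as an array-of-structures or a structure-of-arrays; the field names are hypothetical, and the conversion below is only a sketch of what the compiler would generate.

```python
# Array-of-structures: one record per particle (friendly to per-particle access).
aos = [{"x": 1.0, "y": 2.0, "m": 0.5},
       {"x": 3.0, "y": 4.0, "m": 1.5}]

# Structure-of-arrays: one array per field (friendly to SIMD / streaming access).
soa = {"x": [1.0, 3.0], "y": [2.0, 4.0], "m": [0.5, 1.5]}

def aos_to_soa(particles):
    """Transpose a list of records into per-field arrays."""
    fields = particles[0].keys()
    return {f: [p[f] for p in particles] for f in fields}
```

A compiler exploring both layouts would expose each as a variant with its own performance/energy profile, to be selected at runtime.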
EVEREST will leverage FPGA resources to create {\bf hardware accelerators with high-level synthesis}, especially for data-intensive and AI tasks. In EVEREST, we use Bambu, an open-source HLS tool based on both GCC and LLVM~\cite{6645550}. Bambu will optimize execution and memory bandwidth of accelerators. Data distribution introduces additional challenges in terms of variable read/write latency and energy based on the location of the data. Since the memory behavior of an application ranges from statically predictable patterns~\cite{FPGA17} to irregular memory accesses~\cite{RDF15}, we will use a {\bf fully automated and transparent memory management} at both compile time and runtime with a combination of polyhedral-based transformations~\cite{WANG14}, multi-port memories~\cite{PILATO17} and dedicated micro-architectures to schedule the memory accesses~\cite{MINUTOLI16}, interleave the memory requests and hide the communication latency with the distributed memories~\cite{PILATO11}. We will generate and optimize such accelerators based on the information extracted from the DSL annotations.
EVEREST will extend high-level synthesis for the automatic integration of security features, like application-specific dynamic information flow tracking~\cite{8114281,8356053}. We will also develop and use a library of cryptographic functions, to ensure data integrity, confidentiality, and authentication. Such cryptographic routines will match application requirements and dynamic behaviors. Dedicated hardware monitors will detect anomalies with respect to the expected data behaviors (timing patterns, access patterns, typical sizes and ranges), activating proper dynamic adaptation in the form of ``auto-protection''.
Given the set of variants, the backend will generate software implementations relying on state-of-the-art programming models (e.g., SYCL) to enable seamless integration in the tooling infrastructure. Meta-information about the variants will be provided to the runtime system to support dynamic selection.
Finally, standard toolchains will be used to generate binaries and bitstreams for the target devices.
\section{Virtualization-based Runtime Optimization}
EVEREST features a {\bf distributed runtime support} to manage and coordinate the computation across the different system nodes. Tasks are defined in a way that allows runtime migration of both data and computations.
FPGA-accelerated applications and the runtime framework will be designed with a {\bf virtualized environment} to abstract the hardware resources. This approach improves efficiency and security. It also allows us to seamlessly move the computation between edge nodes, as well as between the edge and the cloud.
The runtime layer optimizes the use of heterogeneous and distributed resources by parallel application instances running in different virtual machines (VMs). The EVEREST virtualized runtime environment automatically manages the code to run and configures the hardware based on the workload conditions and the data distribution. Virtualization techniques will abstract hardware characteristics of the heterogeneous target nodes to present an integrated execution environment for the applications. As described in Section~\ref{sec:target}, the nodes may feature different CPUs (i.e., x86 in the cloud and ARM/RISC-V in the edge~\cite{sechkova2019cloud}) and accelerators (e.g., GPUs and FPGAs~\cite{ChiotakisFPGA}). Fig.~\ref{fig:virt_env} shows an overview of the EVEREST virtualized runtime environment. Its implementation includes both hypervisor and guest OS extensions to manage, optimize, and monitor the access to hardware from guest applications. These extensions provide:
\begin{enumerate}[leftmargin=1.5em]
\item {\bf Data protection layer.} The system monitors the execution to identify malicious attacks (see Section~\ref{sec:data}) and reacts by using the security mechanisms added by the compiler.
\item {\bf Dynamic hardware-software adaptation strategy.} We propose an intelligent policy to select the code variant or hardware configuration to execute, among the ones pre-generated at compile time, based on the system status.
\item {\bf Virtualization support and hypervisor extensions.} Hardware configurable parameters, including accelerator APIs, are exposed directly to the applications inside the VMs, requiring also guest OS enhancements (e.g., drivers).
\end{enumerate}
\begin{figure}[t]
\centering
\includegraphics[width=0.90\columnwidth]{EVEREST_Virt_env_v2.pdf}
\vspace{-8pt}\caption{Virtualized runtime environment overview.}\vspace{-12pt}
\label{fig:virt_env}
\end{figure}
The EVEREST virtualization environment will interact with the underlying hardware to select the variants to execute.
EVEREST provides {\bf dynamic application auto-tuning capabilities} based on mARGOt~\cite{Gadioli19Margot}, a dynamic decision-maker that performs an automatic selection of the variant to execute for each critical kernel identified at compile time. For example, a variant that makes heavy access to unavailable hardware resources can be replaced by a variant that fits better with the system status. This selection is based on (1) the dynamic characteristics of the target system (e.g., available resources) \cite{Paone14OCL}, (2) the optimization goal set for execution (e.g., performance or energy consumption) \cite{Gadioli18Socrates,khasanov_date20}, (3) the additional dynamic requirements (e.g., security monitoring, data features \cite{vitali19}), and (4) the available techniques for data management (e.g., data representations and distributed allocation). The selection will generalize the concept of affinity between the code variants and the available system configurations and requirements. Hardware monitors will collect the information to make the selection.
Guest programs will configure the underlying hardware or make specific requests based on workload conditions, environment changes, and the availability of specific hardware resources (e.g., communication channels, remote nodes). API remoting techniques will improve data exchanges.
The distributed runtime also leverages the configuration of pre-defined hardware resources for deep learning, like reconfigurable AI networks.
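The dynamic variant selection described in this section can be sketched as a simple affinity-based policy: given per-variant metadata produced at compile time and the current system status, the runtime picks the feasible variant that best matches the optimization goal. All variant names and metrics below are illustrative and are not part of the mARGOt API.

```python
def select_variant(variants, status, goal="latency"):
    """Pick the best feasible variant for the current system status."""
    feasible = [v for v in variants
                if not v["needs_fpga"] or status["fpga_available"]]
    # Fall back to software-only variants if no accelerated variant fits.
    if not feasible:
        feasible = [v for v in variants if not v["needs_fpga"]]
    # Lower is better for both example goals (latency, energy).
    return min(feasible, key=lambda v: v[goal])

variants = [
    {"name": "sw_tiled",   "needs_fpga": False, "latency": 8.0, "energy": 3.0},
    {"name": "fpga_chain", "needs_fpga": True,  "latency": 2.0, "energy": 1.0},
]
```

In the full system the status would come from hardware monitors, and the goal from the application-level requirements.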
\section{EVEREST Target System}\label{sec:target}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{everest_concept_v2.png}
\vspace{-18pt}\caption{EVEREST ecosystem overview.}\vspace{-10pt}
\label{fig:ecosystem}
\end{figure}
In EVEREST, we envision a hierarchy of processing environments, as shown in Fig.~\ref{fig:ecosystem}. The outermost layer ({\em End-Point Devices}) receives the stream of data and performs initial processing under strict latency constraints and with the limited performance available in end-point devices. These requirements dictate very fast data pre-processing, inference and perhaps only limited training. Depending on the application, these edge nodes can be complemented by an inner-edge environment that does more extensive processing, training and data analysis. The inner-edge environment features more powerful hardware and less stringent requirements for real-time processing. The results of this layer are then forwarded to the core cloud services (public, private or hybrid), where more extensive analysis and model building is performed on heterogeneous hardware.
Today's edge nodes are typically scaled versions of cloud servers, which primarily combine CPUs with tightly-coupled co-processors (e.g., GPUs). However, CPUs and GPUs are optimized towards batch processing of in-memory data and can hardly provide deterministic performance for the processing of streaming data coming from the I/O channels of end-point devices. Future edge servers call for a new heterogeneous computing node tailored to the processing of streaming data at low power consumption and high energy efficiency. To support this vision, the EVEREST project targets distributed architectures composed of industry-established computing nodes, with CPUs and GPUs, as well as experimental heterogeneous nodes with FPGAs. Each experimental node may feature one or more FPGA devices for hardware acceleration and one or more physical memories (either local or external to the FPGA), as shown in Fig.~\ref{fig:arch}. Such systems will run Linux as Operating System (OS) and a hypervisor to manage the hardware resources. Note that the EVEREST approach is not limited to these architectures. In fact, specifying the workflow pipelines at a higher level of abstraction, within the specifications of the EVEREST SDK and virtualization technology, as discussed previously, will enable the porting of the applications to architectures with heterogeneous GPU-based nodes and end-user embedded devices.
We aim at developing a {\bf small multi-node demonstrator} based on the technology and the components available during the project's timeline. To develop the EVEREST SDK for heterogeneous systems, we focus on two state-of-the-art FPGA-based research platforms: a CPU-managed system that relies on tightly-coupled bus-attached FPGAs \cite{did2019} and an FPGA-disaggregated system that relies on loosely-coupled network-attached FPGAs~\cite{8071053}. The final EVEREST demonstrator will feature both nodes to examine how the different architectural configurations can accommodate big data workloads at the edge and on the cloud.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{archfig.pdf}
\vspace{-6pt}\caption{EVEREST featured system as a combination of heterogeneous nodes with OpenCAPI cache coherent and TCP/UDP protocols.}\vspace{-12pt}
\label{fig:arch}
\end{figure}
In EVEREST, the POWER9 node with tightly-coupled bus-attached FPGAs forms the basic platform to research these challenges. In addition, the system is augmented with loosely-coupled network-attached FPGAs to increase the parallel processing capability from multiple I/O streams. The latter platform will also be evaluated as an edge node, where processing closer to the multiple I/O streams can offer great performance due to the low latency and high energy efficiency of the FPGAs. Both bus-attached FPGAs and network-attached FPGAs are envisioned as a scale-up opportunity of the POWER9 node, while multiple such nodes will be extended across the data center, as a scale-out configuration.
The demonstrator based on loosely-coupled network-attached FPGAs will rely on the cloudFPGA platform~\cite{8071053}. CloudFPGA is a research platform that disaggregates the FPGA accelerator from the server, turning it into a stand-alone computing resource. Such network-attached FPGAs can be deployed at large scale and independently of the number of CPU servers in the data center (DC). The network attachment allows them to seamlessly connect with each other as well as with one or more CPUs. The resulting disaggregated heterogeneous computing infrastructure can dynamically adapt to the scale of any workload. Meanwhile, large-scale applications ranging from business analytics to scientific simulations and AI have started to scale out using distributed frameworks such as Hadoop, Spark, HyperLoom, and TensorFlow.
The cloudFPGA platform enables a user to acquire, distribute, configure and operate stand-alone network-attached FPGAs at large scale in DC infrastructures. The use of a shell-role architecture combined with partial reconfiguration provides for isolation of the system management functions from the user logic within the network-attached FPGA. This approach protects the integrity of the DC network by creating a separation between privileged and non-privileged user logic functions.
\section{EVEREST Use Cases}
We drive our research with three industrial applications: (1) a weather analysis-based {\bf prediction model for the energy trading market}, (2) an application for {\bf air-quality monitoring} of industrial sites, and (3) a {\bf traffic modeling framework for intelligent transportation} in smart cities.
The applications are representatives of future HPDA applications: they have large and heterogeneous data sets ({\em Volume} and {\em Variety}), including historical and real-time data ({\em Velocity}) with important security concerns during communication and storage ({\em Veracity}). They are also aligned with the United Nations Sustainable Development Goals (no. 7, 9, 11 and 13).
\subsection{Weather-based predictions for renewable energy production}
In 2017, for the first time, the European Union generated more electricity from wind, solar and biomass than from coal according to new analysis from \textit{Sandbag} and \textit{Agora Energiewende}.
The European energy market shows a strong interplay between the different energy sources: for example, a drought period can affect the hydroelectric production, demanding gas generation to counterbalance. Also, the prediction of energy production from renewable sources (in particular wind) is uncertain. Renewable energy production forecasting systems currently rely on an ensemble of meteorological predictions provided by global circulation models with grid spacing between 15 and 25 km and hourly temporal resolution. This ensemble predicts variables such as 2m temperature, near-surface wind speed, incoming solar radiation, and rainfall depth, which become the input of a subsequent deep-learning model that tries to characterize the complex input/output relationship of the given power plant under consideration. Even using ensemble approaches, large uncertainties still exist when operating forecasting systems based on predictions of meteorological variables at resolutions of tens of kilometres, especially when dealing with sudden local changes in cloud cover and wind intensity.
In EVEREST, we aim at reducing the cost of imbalance in case of severe meteorological ramp-up/down events. The application will forecast the energy produced by a wind farm in the next day with a 24-hour prediction on an hourly basis.
Thanks to transparent hardware acceleration, we will be able to increase the resolution of weather forecast ensembles to better predict highly localized meteorological variations at hourly scale~\cite{lagasio2019predictive,lagasio2019synergistic}. Thanks to AI tools, we will combine the resulting weather models with historical data.
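As a schematic of how an ensemble of wind-speed predictions can be turned into an hourly power forecast, each member's speed is passed through an idealized turbine power curve and the members are averaged. The cubic ramp between cut-in and rated speed is a standard simplification; the turbine parameters below are purely illustrative.

```python
def power_curve(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2.0):
    """Idealized turbine curve (MW): cubic ramp between cut-in and rated speed."""
    if v < cut_in or v >= cut_out:
        return 0.0  # turbine idle below cut-in, shut down above cut-out
    if v >= rated_v:
        return rated_p  # output capped at rated power
    return rated_p * ((v - cut_in) / (rated_v - cut_in)) ** 3

def ensemble_forecast(members):
    """Average power over ensemble members, one entry per forecast hour."""
    return [sum(power_curve(v) for v in hour) / len(hour) for hour in members]
```

In the real system the power curve would be replaced by the learned plant-specific model mentioned above, but the ensemble-averaging structure is the same.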
\subsection{Air-quality monitoring in industrial sites}
Every year new publications show the impact of air quality on public health. The latest figures from the World Health Organization (WHO) show that air pollution kills an estimated seven million people worldwide every year. WHO data also shows that 9 out of 10 people breathe air containing high levels of pollutants, and that the economic impact of this pollution on health is estimated at 5.7 trillion dollars per year. Industry contributes to this impact and must adapt, on the one hand, to increasing regulatory constraints and, on the other hand, to citizen pressure, in particular due to the development of low-cost air-quality sensors providing massive amounts of (low-quality) spatial information.
To support manufacturers, NUMTECH offers Plum'air, a service that allows an industrial site to collect real-time information about the monitoring and control of the pollution. In forecast mode, it can be used as a decision tool for an industrial site to adapt its activity in order to reduce its impact, especially in the transition phase before the implementation of heavy investments in terms of emission treatment or reduction systems. In this mode, Plum'air aims at forecasting the environmental impacts due to atmospheric releases of an industrial site at local scale (within 10 km from emission sources).
In EVEREST, we forecast the environmental impacts of chemical pollutants by combining high-resolution weather ensembles with local data. Together with hardware acceleration, we will be able to obtain accurate information about the environment, so that the industrial site can promptly delay production activities that may have an impact (e.g., an increase of atmospheric releases) or activate emission-reduction treatments.
\subsection{Traffic modeling for intelligent transportation}
Traffic modelling and prediction is a critical component for Smart Cities to build their intelligent traffic management systems (ITS). Our approach for designing such a component is to create a traffic modelling ecosystem comprised of tightly coupled processing elements: reading big sensory data, both in real time and from long-history records; a traffic simulator that boosts the raw sensory dataset into rich training sequences; a traffic prediction model that learns from the training data set; and route calculation as a service exploiting the traffic prediction model. As the main data input into the system we will use a provisioned origin-destination (O/D) matrix and a large historical data set of floating car data (FCD). FCD is represented by the geo-position and speed of a vehicle, sensed approximately every 5 seconds from navigation devices, that is, from millions of devices every day over a period of several years. However, our model will operate on selected cities (like Vienna), counting thousands of vehicles daily.
The traffic simulator simulates individual clients driving around the smart city by combining macroscopic and microscopic approaches to optimize the traffic flow~\cite{Golasowski20,ptosek18,vitali19}. The simulator computes the traffic model in near-real time, requiring access to historical records and to streamed long data chunks for ML predictions. It updates the traffic model for various conditions and can also generate training sequences for traffic prediction.
In EVEREST, we will improve the key processing components of the traffic modeling eco-system. The use of efficient AI methods will allow the edge nodes to collect and process more data, while addressing all privacy and security concerns. Also in this case, the EVEREST SDK will allow non-expert designers to easily express the application requirements for the compilation framework and the runtime system.
\subsection{Why using the EVEREST SDK?}
The applications will demonstrate how the EVEREST SDK can unleash novel market opportunities for the respective companies. The EVEREST SDK will provide the following benefits:
\begin{itemize}[leftmargin=1.5em]
\item {\bf Quality of predictions}: the possibility of integrating real-time and historical data by means of AI will allow more accurate predictions. This aspect is crucial for all applications as their commercial value lies in the precise and timely knowledge extracted from the data.
\item {\bf Performance and energy efficiency}: the efficient use of heterogeneous resources and, in particular, hardware acceleration will reduce the time and energy spent obtaining the results, yielding a significant competitive advantage. For example, intra-day renewable energy prediction will open new market opportunities. Similarly, industrial sites require fast and efficient systems for air-quality monitoring.
\item {\bf Dynamic adaptation}: due to the distributed and heterogeneous nature of the data (e.g., traffic data), the combination of code and hardware variants, dynamic autotuning, and virtualization will enable a transparent use of the hardware resources even in case of changes to the configurations.
\item {\bf Design productivity}: non-expert programmers will use domain-specific extensions to express the semantics of the application and the security requirements of the data. The EVEREST SDK will automatically carry out the related optimizations, broadening the customer base that can be reached with complex heterogeneous platforms.
\item {\bf Programmability support}: the EVEREST SDK will hide the platform details to the application, enabling the porting across target platforms with different characteristics.
\end{itemize}
\section{Concluding Remarks}
EVEREST provides a data-driven design framework for extreme-scale Big Data applications on distributed FPGA-based architectures. The EVEREST SDK combines multiple domain-specific languages, compiler optimizations, and HLS to generate multiple code variants that are dynamically selected by matching characteristics of the application and the available hardware. Our major goals are not only to accelerate the application execution, but also to ease the design of complex AI-enabled applications by non-expert programmers, hiding most of the details of the underlying hardware system.
\balance
\section*{Acknowledgements}
This project has received funding from the EU Horizon 2020 Programme under grant agreement No 957269.
\bibliographystyle{IEEEtran}
\section{Introduction\label{S:intro}}
The basic building blocks of the corona of the Sun are coronal loops covering a wide range of temperatures. Their lengths span a vast range, from only a few Mm to a sizable fraction of the solar radius. Loops have been revealed as early as the 1940s in coronagraphic observations \citep[][Sect.\ 1.4]{Bray+al:1991} and then in X-rays \citep[e.g.][]{Poletto+al:1975}, showing the close connection of the hot coronal plasma of several $10^6$\,K to the magnetic field. When investigating cooler plasma at around $10^6$\,K, e.g. in the spectral bands around 171\,\AA\ and 193\,\AA\ dominated by emission from \ion{Fe}{9} and \ion{Fe}{12}
formed at 0.8\,MK and 1.5\,MK, the loops show up at high contrast.
Small loops related to the chromospheric network are seen at transition region temperatures of about 0.1\,MK, e.g. in \ion{C}{3} or \ion{O}{6} \citep[e.g.][]{Peter:2001:sec,Feldman+al:2003}. In all these observations, the sub-resolution spatial and thermal structure of the loops remains unknown.
The highest resolution data from the corona we currently get on a regular basis are from the Atmospheric Imaging Assembly \cite[AIA;][]{Lemen+al:2012}
on the Solar Dynamics Observatory \citep[SDO;][]{Pesnell+al:2012}. With its spatial scale of 0.6\arcsec\ per pixel and a spatial resolution slightly worse than 1\arcsec\ at least part of the 1\,MK loops show a smooth cross section and seem to be resolved \citep{Aschwanden+Boerner:2011}. These results hint at the loops having a narrow distribution of temperatures, like other previous spectroscopic studies revealed \citep{DelZanna+Mason:2003}, even though they might not be truly isothermal. Therefore some spatial substructure should be expected \citep[e.g.][]{Warren+al:2008}. Recently in their analysis of AIA data \cite{Brooks+al:2012} placed a limit on the diameter of strands composing a loop of some 200\,km and more. Likewise \cite{Antolin+Rouppe:2012} argued that coronal loops should have substructures of 300\,km or smaller, which is based on observations of coronal rain. A detailed discussion of observations and modeling of multi-stranded loops can be found in the review of \cite{Reale:2010}.
To model the (sub)structure of loops one can assume that a single loop is composed of a number of individual strands. In the first of such models \cite{Cargill:1994} used 500 strands, each of which was heated impulsively by nanoflares following the concept of \cite{Parker:1983,Parker:1988}. Many such models have been investigated since, modifying various parameters with the final goal of empirically understanding the appearance of large loops \citep[for a recent approach see e.g.][]{LopezFuentes+Klimchuk:2010}. In 3D MHD models one can directly study the structure of coronal loops, in particular the relation of the coronal emission to the magnetic field. These show that the cross section of the magnetic structure hosting the loop is non-circular and changing along the loop \citep{Gudiksen+Nordlund:2005b,Mok+al:2008}
and that the resulting loop seen in coronal emission appears to have a constant cross section \citep{Peter+Bingert:2012}. Both \cite{Mok+al:2008} and \cite{Peter+Bingert:2012} emphasized that it is not \emph{only} the magnetic structure that defines the loop visible in coronal emission: one has to consider carefully also the thermal structure that forms along and across the magnetic structure. The spatial resolution of these 3D models is (as of yet) not sufficient to study the substructure of loops, i.e. to see if strands form within a loop, where the diameter of these strands is smaller than current observations allow to resolve.
Because the nature of the internal structure of loops bears directly on the heating mechanism, it is important to investigate whether the loops are monolithic or multi-stranded --- and if they are multi-stranded, to place limits on the diameter of the strands. Likewise, it is of importance to identify and investigate the smallest structures radiating at coronal temperatures. Is there a lower limit for the length of a coronal loop, or are there short structures hidden below the resolution limit of current instrumentation? To address these questions we investigate observations from the High-resolution Coronal Imager \citep[Hi-C;][]{Cirtain+al:2013}. Even though it was flown on a sub-orbital rocket and provided only a few minutes' worth of data, its spatial resolution is almost six times better than that of AIA. This allows us to place new (upper) limits on the strand diameter and to identify miniature coronal loops which are significantly smaller than observed before (at least by a factor of 10) --- smaller even than the tiny cool loops related to the chromospheric network in the transition region.
\begin{figure}
\includegraphics{hic_f01}
\caption{Full field-of-view of the Hi-C observations. This image is
taken in a wavelength band around 193\,{\AA} that under active
region conditions is dominated by emission from \ion{Fe}{12} formed
at around 1.5\,MK. The core of the active region with several
sunspots is located in the top half of the image (cf.\
\fig{F:chromo}). Here we concentrate on the loop
system in the bottom right at the periphery of the active region.
The regions indicated by the large solid square and the dashed
rectangle are shown in \figs{F:roi} and \ref{F:model}. The small
square shows the plage areas zoomed-in in \fig{F:plage}. Here as in
the following figures the count rate is plotted normalized to the
median value in the respective field-of-view. North is top.
\label{F:full}}
\end{figure}
The paper is organized as follows. In \sect{S:strategy} we give an introduction to the observations with Hi-C and their relation to the data from SDO in the Hi-C field of view. The miniature loops are discussed in \sect{S:plage} before we investigate the substructure of larger loops (\sect{S:loops}) and the upper limit of the strands (\sect{S:strands}), should loops not be monolithic. We then briefly turn to a comparison of the structures seen in Hi-C to those found in a 3D MHD model in \sect{S:model}, before we conclude our study.
\section{Observations: Hi-C and AIA\label{S:strategy}}
Images of the corona with unprecedented spatial resolution have been obtained during a rocket flight by the High-resolution Coronal Imager (Hi-C). The instrument and first results have been described by \cite{Kobayashi+al:2013} and \cite{Cirtain+al:2013}. The Hi-C experiment recorded data in a wavelength band around 193\,\AA\ with 2\,s exposure time. Under active region conditions this is dominated by emission from \ion{Fe}{12} (193\,\AA) originating at about 1.5\,MK. The spatial scale is about 0.1\arcsec\ per pixel corresponding to 73 km/pixel. This is about a factor of 5.8 better than what is achieved by AIA/SDO, which is the current workhorse for solar coronal extreme UV imaging studies. The temperature responses of the 193\,\AA\ channels of Hi-C and AIA are very similar.
The effective area of Hi-C is approximately 5.3 times larger than that of AIA, though each Hi-C pixel covers a roughly 36 times smaller area on the Sun.
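As a rough consistency check, these two numbers can be combined to estimate the relative photon rate per pixel. The following sketch uses only the values quoted above and treats the throughputs of the two 193\,\AA\ channels as equal, which is an approximation.

```python
# Rough per-pixel photon-rate comparison between Hi-C and AIA in the
# 193 A band, using only the numbers quoted in the text (illustrative).
area_ratio = 5.3             # Hi-C effective area relative to AIA
pixel_scale_ratio = 5.81     # AIA pixel size / Hi-C pixel size (linear)
per_pixel = area_ratio / pixel_scale_ratio**2
# per_pixel ~ 0.16: each Hi-C pixel collects roughly 6 times fewer
# photons than an AIA pixel for the same exposure time.
print(round(per_pixel, 2))   # 0.16
```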
In \fig{F:full} we show the full field-of-view of the Hi-C
observation. This frame, which we will investigate in this study,
has been taken at around 18:54:16 UT on 11 July 2012. In this study
we will concentrate on the clear loop-structures in the lower right
part of that image. This is in the periphery of the active region,
away from the sunspots that are found in the upper half of the image
(cf.\ \fig{F:chromo}).
\begin{figure}
\includegraphics{hic_f02}
\caption{Image of the chromosphere co-spatial and co-temporal with
\fig{F:full} taken by AIA in the 1600\,\AA\ channel. As in
\fig{F:full} the large and small squares indicate the field-of-view
displayed in \fig{F:roi} and the zoom of the plage region in
\fig{F:plage}.
\label{F:chromo}}
\end{figure}
\begin{figure*}
\includegraphics{hic_f03}
\caption{Loop system at the periphery of the active region. This shows the 103\arcsec$\times$103\arcsec\ region indicated by the square in \fig{F:full}. The left panel shows the Hi-C observation (1000$\times$1000 pixels), the right displays the data in the same wavelength channel (193\,\AA) recorded by AIA (173$\times$173 pixels). The AIA image is spatially aligned with the Hi-C image and was taken at roughly the same time. The two squares here indicate regions that are magnified in \figs{F:plage} and \ref{F:loop} and that are located in areas dominated by a plage region and a loop system.
\label{F:roi}}
\end{figure*}
The goal of this study is to investigate coronal features that are
not resolvable with AIA. We thus compare the Hi-C image to an AIA
image taken in the same 193\,\AA\ band only seconds after the Hi-C
image. To align the images we had to compensate for a rotation of
1.9$^\circ$ and found the (linear) pixel scaling from Hi-C to AIA to
be a factor of 5.81. For our analysis we will also use images from
the other AIA channels, all of which have been taken between 3\,s
before and 6\,s after the Hi-C image. These we spatially scaled and
aligned to match the AIA\,193\,\AA\ image, and then applied the same
rotation as for the 193\,\AA\ image to have a set of co-spatial
images from Hi-C and AIA. We also make use of the magnetogram taken
by the Helioseismic and Magnetic Imager
\citep[HMI/SDO;][]{Scherrer+al:2012} at 18:53:56, just 20\,s before the
Hi-C image. We scale, rotate and align the magnetogram to match the
AIA 1600\,\AA\ image, so that it is also co-spatial with the Hi-C
image.
\section{Coronal structures\label{S:structures}}
A rough inspection of the region-of-interest for our study as shown in \fig{F:roi} already reveals that parts of the Hi-C image look much more crisp than the AIA 193\,\AA\ image, a clear effect of the improved spatial resolution of Hi-C. However, other parts of the image look quite alike, which is particularly true for the large loops in the middle of the region-of-interest.
\subsection{Miniature coronal loops in plage region\label{S:plage}}
To highlight that Hi-C shows miniature coronal loop-like structures almost down to its resolution limit we first investigate a small region in the upper left of the region-of-interest, labeled ``plage'' in \fig{F:roi}. A zoom of this area is shown in the top panels of \fig{F:plage}. Here the increased spatial resolution of Hi-C is clearly evident. To emphasize this, panel (c) of \fig{F:plage} shows diagonal cuts through the Hi-C and AIA images in panels (a) and (b). Prominent substructures are seen in the Hi-C image that are some seven to ten Hi-C pixels wide, corresponding to below 1\arcsec\ (e.g.\ the one indicated by the arrow) -- this corresponds to 1.5 AIA pixels and is therefore not resolved by AIA. Besides these, also various smaller intensity peaks are visible, which are only 2 pixels wide, with the intensity enhancement being clearly above the error level\footnote{%
To estimate the error we assumed Poisson statistics of the photon counting, and propagated this error to the count rate (0.244 photons per DN) accounting for a read-out noise of 18.5 DN.
}.
This confirms that Hi-C can detect small structures down to its resolution limit.
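The error estimate described in the footnote can be written down explicitly. This is a minimal sketch assuming the quoted gain of 0.244 photons per DN and a read-out noise of 18.5\,DN, with the two noise terms added in quadrature; the exact Hi-C calibration pipeline is not reproduced here.

```python
import math

def count_error_dn(counts_dn, gain=0.244, readout_dn=18.5):
    """Error of a count rate in DN: Poisson photon statistics plus
    read-out noise (numerical values as quoted in the text)."""
    photons = counts_dn * gain                        # expected photon number
    poisson_dn = math.sqrt(max(photons, 0.0)) / gain  # Poisson error, back in DN
    return math.sqrt(poisson_dn**2 + readout_dn**2)   # quadrature sum

# For zero signal the error reduces to the read-out noise:
print(round(count_error_dn(0.0), 1))   # 18.5
```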
\begin{figure}
\includegraphics{hic_f04}
\caption{Zoom of the \emph{plage region} indicated in \fig{F:roi} by a square. The top two panels show the Hi-C image (150$\times$150 pixel) and the AIA image (26$\times$26 pixel) in that 15\arcsec$\times$15\arcsec\ region. The individual pixels are clearly identifiable in the AIA image. The bottom panel shows the variation of the count rate across the structures along the diagonal indicated by the dashed lines in the top panels. The pixels for the AIA data are indicated by diamonds. The bars indicate the individual pixels of Hi-C, the height of the bars represent the errors. For better comparison the AIA count rate is scaled by the factor given in the plot. The arrows in panels (a) and (c) indicate the position of a miniature coronal loop.
See \sect{S:plage}
\label{F:plage}}
\end{figure}
The nature of the small-scale intensity enhancements in the Hi-C
193\,\AA\ band needs some investigation. In \fig{F:context.plage} we
plot the context of this small plage region in various AIA channels
to investigate the connection throughout the atmosphere. The region
of the strong brightening in 193\,\AA\ coincides with enhanced
emission from the chromosphere as seen in the AIA 1600\,\AA\ channel
(dominated by the \ion{Si}{1} continuum). Even though this is not a
pixel-to-pixel correlation, it is clear that the enhanced coronal
emission occurs in a region with enhanced chromospheric activity.
The emission in the 304\,\AA\ band dominated by the \ion{He}{2} line
shows plasma below 10$^5$\,K and is still closely related to the
chromospheric network. The emission in the 131\,\AA, 171\,\AA, and 193\,\AA\ channels, which is dominated by plasma at 0.5\,MK
(\ion{Fe}{8}), 0.8\,MK (\ion{Fe}{9}), and 1.5\,MK (\ion{Fe}{12}), is
very much concentrated above one edge of the plage region. The image in the 211\,\AA\ channel (2.0\,MK; \ion{Fe}{14}) is almost identical to the 193\,\AA\ channel, which is why we do not include it here.
\begin{figure}
\includegraphics{hic_f05}
\caption{Context of the plage region shown in \fig{F:plage}. The AIA images in 7 channels show a 101\arcsec${\times}$101\arcsec\ region and are taken within seconds of the 193\,\AA\ Hi-C image. The squares indicate the field-of-view shown in \fig{F:plage} and are co-spatial with the small squares in \fig{F:roi} labeled ``plage''. In \fig{F:plage} the 193\,\AA\ images of Hi-C and AIA zooming into the small square are shown. Only there can the miniature loop be identified in the Hi-C image.
The top left panel shows the co-spatial line-of-sight magnetogram in the photosphere as seen by HMI, taken within seconds of the other images.
See \sect{S:plage}.
\label{F:context.plage}}
\end{figure}
This enhanced coronal emission in the plage area could either be due
to small coronal loops within that region or it could originate from
footpoints of long hot loops rooted in that patch of enhanced
magnetic field. However, the 335\,\AA\ and
especially the 94\,\AA\ channels that should show hotter plasma -- if it were
present -- do not show a signature of a long hot loop rising from
the patch in the center of the region. The 94\,\AA\
channel has two main contributions, one around
1.2\,MK (\ion{Fe}{10}) and another one at 7.5\,MK (\ion{Fe}{18}).
However, the emission pattern in the 94\,\AA\ channel is very similar to that in the 171\,\AA\ channel, which indicates that there is relatively little plasma reaching temperatures of $\approx$7.5\,MK along the line of sight. Likewise the 335\,\AA\ channel has main contributions around 0.1\,MK, 1\,MK, and 3\,MK, but lacks a signature of a long hot loop and instead shows more similarity to the 171\,\AA\ channel (albeit much noisier).
For a detailed discussion of the temperature response of the AIA
channels see \cite{Boerner+al:2012}.\footnote{One could also
speculate that the AIA short wavelength channels might show mainly
cool transition region emission; in particular the 193\,\AA\ and
211\,\AA\ channels have a significant contribution from temperatures
around 0.2\,MK (\ion{O}{5}) in quiet Sun conditions. However, then
these two channels should share some characteristics with the
304\,\AA\ channel, which they do not. In fact, the brightening in
193\,\AA\ and 211\,\AA\ is not overlapping at all with the pattern
in 304\,\AA. Thus we can conclude that the 193\,\AA\ and 211\,\AA\
channels really show plasma primarily at 1.5\,MK to 2\,MK.}
Unfortunately there is no X-ray image taken at the same time and location. However, an inspection of an image from the Hinode X-ray telescope \citep[XRT,][]{Golub+al:2007} taken about an hour before does not clearly resolve the issue. Therefore we assume that in the field of view of \fig{F:context.plage} the 193\,\AA\ emission in the center square is not a moss-type emission from a large hot loop.
Based also on Hi-C data, \cite{Testa+al:2013} see moss-type emission at the base of a hot ($>$5\,MK) loop, and find that it shows a high temporal variability. However, they look at a different part of the Hi-C field-of-view which is near a footpoint of a hot loop. In contrast, the brightening we discuss here is not related to a hot loop, as outlined above.
Since we can rule out that in our case the emission we see above the
plage patch comes from the footpoints of long hot loops, it has to come from
small compact hot loops that close within the network patch. Such
small loops are clearly not resolved by the AIA images. However, in
the zoom to the Hi-C image of the plage area in \fig{F:plage}a we
can identify at least one structure that could be interpreted as a
miniature hot loop that reaches temperatures of (at least) about 1.5\,MK because we distinctively see it in the 193\,\AA\ channel of Hi-C (see arrows in panels a and c). This loop would
have a length (footpoint distance) of about only 1.5\arcsec\
corresponding to 1 Mm. For the width only an upper limit of 0.2 to
0.3\arcsec\ (150\,km to 200\,km) can be given, because the cross
section of this structure is barely resolved.
The HMI magnetogram in \fig{F:context.plage} shows only one polarity in the plage area, which would argue against a miniature coronal loop (following the magnetic field lines). However, it could well be that small-scale patches of the opposite polarity are found in this plage area, too. These small-scale opposite polarities would then cancel out in the HMI observations with its limited spatial resolution. At least high-resolution observations \citep[e.g.][]{Wiegelmann+al:2010} as well as numerical simulations of photospheric convection \citep[e.g.][]{Voegler+al:2005} show small-scale opposite polarities in otherwise largely monopolar regions with footpoint distances of the order of 1\,Mm or below. These small bipoles are the root of small flux tubes that have been found by \cite{Ishikawa+al:2010} by inverting high-resolution spectro-polarimetry data. They found this flux tube with a footpoint distance of about 1 Mm to rise through the photosphere and then higher up into the atmosphere.
These miniature loops could be related to the short transition region loops that are found to reside within the chromospheric network \citep{Peter:2001:sec}.
From this discussion we can suggest that the small elongated structures we see in Hi-C above network and plage regions are in fact tiny loops reaching temperatures of 1.5\,MK or more. These miniature coronal loops would have lengths (of their coronal part) of only around 1\,Mm. If this is indeed the case, they would be interesting objects: they would be so short that they would barely stick out of the chromosphere!
Measuring from the photosphere, these loops would be only 5\,Mm long, with a 2\,Mm photosphere and chromosphere at each end.
A 5\,Mm long semi-circular loop has a footpoint distance of about 3\,Mm. Thus such miniature loops would span across just a single granule, connecting the small-scale magnetic concentrations in the intergranular lanes.
These miniature loops might be a smaller version of X-ray bright points, which are enhancements of the X-ray emission related to small bipolar structures \citep{Kotoku+al:2007}. Because of limitations of the spatial resolution of the X-ray observations (above 1\arcsec) no X-ray bright points have been observed that are as small as the miniature loop reported here. Short loops have been studied theoretically \citep[e.g.][]{Mueller+al:2003,Sasso+al:2012} indicating that such short hot structures are sensible, even though these short model loops show peak temperatures of well below 1\,MK.
\cite{Klimchuk+al:1987} found that hot short loops with heights below 1000\,km are thermally unstable and evolve into cool loops with temperatures around 10$^5$\,K. This would apply to the miniature loops proposed here, which would not be stable, anyway, because they can be expected to be disturbed rapidly by the convective motions of the granulation.
Future observational and modeling studies will have to show if this interpretation of miniature loops is correct, or the small-scale brightening in the plage region is better understood by the emission from the footpoint of a hot ({\small$\begin{array}{@{}c@{}}>\\[-1.5ex]{\sim}\end{array}$}3\,MK) loop. In particular the upcoming Interface Region Imaging Spectrograph \citep[IRIS;][]{Wuelser+al:2012}\footnote{See also http://iris.lmsal.com.} with its high spatial resolution (0.3\arcsec) covering emission originating from chromospheric to flare temperature to be launched in summer 2013 will be well suited for investigating this from the observational side.
\subsection{Substructure in large coronal loops\label{S:loops}}
We now turn to the discussion of the substructure of larger coronal loops. In \fig{F:roi} a loop system is visible that connects the periphery of the active region to the surrounding network. In the following we will concentrate on the small box labeled ``loops'' in \fig{F:roi}, but our results apply also to the other large loops in this figure. These loops show up also in AIA 171\,\AA\ images but are faint in the AIA 211\,\AA\ band, which hints at a temperature of the loops in the range of 1\,MK to 1.5\,MK.
We choose this particular area of the Hi-C field-of-view because it is the area containing the most structures that can be easily identified as long coronal loops, having arc lengths of longer than about 50\arcsec\ (corresponding to 36\,Mm). The Northern part of the Hi-C field-of-view is dominated by plage and moss-type emission around the sunspots, as is clear from \figs{F:full} and \ref{F:chromo}. This upper part is thus dominated by more compact structures. In the Southern part of the image the region we selected shows the clearest loops. As mentioned also by \cite{Testa+al:2013}, the brightest emission seen by Hi-C is originating from the plage and moss areas. The longer loops presumably reaching higher altitudes might have a lower density and thus a smaller emission when compared to the moss that originates from the footpoints of hotter loops.
\begin{figure}
\includegraphics{hic_f06}
\caption{Zoom of the \emph{loop region} indicated in \fig{F:roi} by a square. Otherwise this is the same as \fig{F:plage}. In addition, here the green line shows the cross-sectional cut averaged over 3\arcsec\ along the loops, i.e., it shows the average variation in the band defined by the two green lines in panel (a).
See \sect{S:loops}.
\label{F:loop}}
\end{figure}
In panels (a) and (b) of \fig{F:loop} we show a zoom into this loop region, where the loops are seen passing roughly along the diagonal. In the AIA image (panel b) the individual pixels are clearly identifiable; this field of view consists of only 26${\times}$26 pixels. In contrast, the Hi-C image (150${\times}$150 pixels; panel a) shows a larger degree of noise. This is due to the lower count rate per pixel, a consequence of the higher spatial resolution, i.e.\ the smaller pixel size, and of the higher noise level of the Hi-C camera as compared to AIA. Still, from a comparison of the two images in panels (a) and (b) it is clear that the Hi-C image does not show a coherent substructure of the loops that is aligned with the loops.
The absence of substructure in the loops in \fig{F:loop} becomes evident when investigating the cut perpendicular to the loops shown in panel (c) of \fig{F:loop}. If one were to subtract the background, the loops in AIA would have a width of some 4 pixels, i.e.\ they would be very close to the resolution limit of AIA. This size corresponds to some 1.8\arcsec\ to 2.4\arcsec\ or 1.3\,Mm to 1.7\,Mm. The cut of the Hi-C image confirms this width, which is very clear for the loops at spatial positions 5\arcsec\ and 10\arcsec. Looking at AIA alone, one might have missed the small structure at 18\arcsec, which sits at the side of a bigger one. These results are consistent quantitatively and qualitatively with \cite{Brooks+al:2013}, who looked at short segments of a larger number of (long and short) loops.
Most importantly, the Hi-C data in panel (c) of \fig{F:loop} do not
show an indication of a substructure of the loops that is sticking
out of the noise (the Poisson errors are plotted as bars). When
plotting the cross section of the loops not as a single-pixel cut,
but averaged along the loops, the noise disappears and the loop
cross-sections are smooth. The green line shows the variation in the
3\arcsec\ wide band defined by the green dashed lines in panel (a).
This averaging over 21 Hi-C pixels reduces the (Poisson) noise by a
factor of about 4.5, giving a reduced error of about 20 counts.\footnote{Alternatively one could have averaged in time or in both time and space. For the case at hand this averaging along the loops seems appropriate because the loops in \fig{F:loop} show a nice smooth variation along the loop.} This
just corresponds to the variability still seen in the averaged
cross-sectional cut. In Appendix \ref{S:fft} we give some further details on the size of the loops seen in Hi-C.
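The quoted noise reduction follows directly from Poisson statistics: averaging $N$ independent pixels reduces the relative noise by $\sqrt{N}$. A one-line check with the 21 pixels of the 3\arcsec\ band:

```python
import math

n_pix = 21                    # 3 arcsec band = 21 Hi-C pixels along the loop
reduction = math.sqrt(n_pix)  # Poisson noise scales as 1/sqrt(N) when averaging
print(round(reduction, 2))    # 4.58, the "factor of about 4.5" quoted above
# With a reduced error of ~20 counts this implies a single-pixel error of
# roughly 20 * 4.58 ~ 90 counts (back-of-envelope, illustrative only).
```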
From the above we can conclude, that within the instrumental capabilities of Hi-C no plausible substructure of the long coronal loops can be seen, but the long loops are smooth and resolved. The fact that we see structures in the Hi-C image in the plage area (\sect{S:plage}) that are much smaller than the loop cross-sections reassures us that the spatial resolution of Hi-C is sufficient to see a substructure, if it both existed and were bright; and that the averaging process would reveal the structure if it were parallel to the arcsec-scale strands in the loops.
Of course, this observation does not rule out the possibility that the loops might have a substructure on scales smaller than observable by Hi-C. Still, with these Hi-C observations we can set an upper limit for the diameter of individual \emph{strands} that might compose single loops.
\subsection{Upper limit for the diameter of strands in loops\label{S:strands}}
In the following we will estimate an upper limit for the thickness of individual strands composing a coronal loop. For this we will use the argument that the emission seen across the loop, i.e. the cross-sectional cut, should be smooth, just as found in the observations. We will assume that all strands are circular in cross section and run in parallel. This is the simplest model possible, and certainly reality will be much more complicated. However, for a rough estimation these assumptions should suffice.
We start with a loop with diameter $D$ that is composed of a total number $N_t$ of strands, each of which has the same diameter $d$ (see \fig{F:strands}).
Of all the strands only a fraction $f_b$ is bright, so that the number of bright strands is
\begin{equation}\label{E:num.bright}
N_b = N_{t}~f_b ~.
\end{equation}
This fraction $f_b$ is equivalent to the fraction of time each
individual strand with time-dependent heating and temperature and
density structure will be visible in a specific EUV line or channel.
Based on multi-stranded loop models this fraction can be estimated
to have an upper limit of $f_b\approx0.1$ \citep[e.g.][]{Warren+al:2003,Warren+al:2008,Viall+Klimchuk:2011}.
The fraction $f_b$ is also equivalent to the volume filling factor of the (bright) plasma in the corona. In observations of bright points \cite{Dere:2009} found values in the range of 0.003 to 0.3 with a median value of 0.04, which support our choice for an upper limit. One should remember, that the filling factor might be much smaller, in particular when considering cooler plasma. For the transition region \cite{Dere+al:1987} found filling factors ranging from 0.01 down to $10^{-5}$.
\begin{figure}
\includegraphics{hic_f07}
\caption{Cartoon of the multi-stranded loop. The loop with diameter
$D$ is composed of many individual strands with diameter $d$. When
observing the loop, each spatial resolution element of size $R$ of
the instrument corresponds to a column of the cross section. The
hashed area represents one such column. Strands that are bright in a
particular EUV channel (or line) are indicated by a black dot, empty
circles represent strands not radiating in this channel (at this
instant in time). This example consists of $N_t{=}79$ strands in
total, of which $N_b{=}25$ are bright, corresponding to a fraction
$f_b{=}0.3$. For the loops on the Sun we estimate that they consist
of $N_t{\approx}2500$ strands.
See \sect{S:strands}.
\label{F:strands}}
\end{figure}
When observing with an instrument with a spatial resolution $R$, each resolution element will represent a column of a cut through the loop (cf. \fig{F:strands}). In this \emph{column} there are $N_c$ bright strands. In order to have a smooth cross-sectional profile we require that neighboring resolution elements contain a similar number of bright strands (each of the same brightness). The difference in the number of strands in neighboring resolution elements should then follow Poisson statistics, $\mathit{\Delta}N_c\approx\sqrt{N_c}$. The relative difference in the number of strands in neighboring resolution elements directly gives the brightness variation, and for a smooth profile we require this to be smaller than $\varepsilon$,
\begin{equation}\label{E:num.column}
\frac{\mathit{\Delta} N_c}{N_c} < \varepsilon
\quad\to\quad
N_c > \frac{1}{\varepsilon^2} ~.
\end{equation}
Typically one would require $\varepsilon{\approx}0.1$, i.e., a pixel-to-pixel variation across the loop of less than 10\% for a smooth profile.
From the number of bright strands in one column, $N_c$, we can estimate the number of bright strands in the whole loop, $N_b$, by the ratio of the cross section of the whole loop to that of a single column (e.g., the hashed column in \fig{F:strands}), which together with \eqn{E:num.column}
yields
\begin{equation}\label{E:num.bright.ii}
N_{b} = \frac{\pi\,(D/2)^2}{R\,D}~N_c
\quad\to\quad
N_b > \frac{\pi}{4}~\frac{D}{R}~\frac{1}{\varepsilon^2} ~.
\end{equation}
Assuming that the strands in the loop are packed as densely as possible, the cross sections of the loop as a whole and of the $N_t$ strands are related by
\begin{equation}\label{E:densest.packing}
\pi\left(\frac{D}{2}\right)^{\!\!2} = \frac{\pi}{\sqrt{12}} ~ N_t ~ \pi\left(\frac{d}{2}\right)^{\!\!2} ~,
\end{equation}
where the factor of $\pi/\sqrt{12}\approx0.9$ stems from the densest packing of circles in a plane.
Using \eqn{E:num.bright} and (\ref{E:num.bright.ii}) this now gives the upper limit for the diameter of individual strands,
\begin{equation}\label{E:strand.limit}
d ~~ < ~~ \frac{2\sqrt{2}\sqrt[4~]{3}}{\pi} ~~ \varepsilon ~ \sqrt{f_b} ~~ \sqrt{R\,D} ~~~ \approx ~~~ 1.2 ~ \sqrt{f_b} ~ \varepsilon ~\sqrt{R\,D} ~,
\end{equation}
and the lower limit for the total number of strands in the loop,
\begin{equation}\label{E:strand.N}
N_t ~~ > ~~ \frac{\pi}{~4~} ~ \frac{D}{~R~} ~ \frac{1}{~f_b~\varepsilon^2~} ~.
\end{equation}
For the loops observed here with a diameter of $D{\approx}2$\arcsec\ and the resolution of Hi-C of $R{\approx}0.2$\arcsec, assuming a fraction of bright strands of $f_b{\approx}0.1$ and a pixel-to-pixel variation of $\varepsilon{\approx}0.1$ in the observation, we find an upper limit for the diameter of an individual strand of $d{\approx}0.025$\arcsec\ corresponding to about
\begin{equation}\label{E:strand.upper.limit}
d < 15\,{\rm{km}}~.
\end{equation}
This strand diameter of only 15\,km is small compared to the loop diameter --- this loop would have to host at least about $N_t{\approx}7500$ strands of which $N_b{\approx}750$ are bright in the given EUV channel at any given time. In Appendix \ref{S:num} we discuss a simple numerical experiment confirming this conclusion. If one were to adopt the lower value of 0.003 for the filling factor derived by \cite{Dere:2009}, one would end up with a quarter of a million strands with diameters of only 3\,km.
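These numbers follow directly from \eqn{E:strand.limit} and \eqn{E:strand.N}. As a cross-check, a short Python sketch; the conversion of roughly 725\,km per arcsec is our assumed scale on the solar surface, so the exact kilometer value depends on this choice and on rounding:

```python
import math

# Parameters from the text (angles in arcsec)
D, R = 2.0, 0.2            # loop diameter and Hi-C resolution
f_b, eps = 0.1, 0.1        # fraction of bright strands, allowed variation
KM_PER_ARCSEC = 725.0      # assumed scale on the solar surface

# Upper limit on the strand diameter, Eq. (E:strand.limit)
coeff = 2.0 * math.sqrt(2.0) * 3.0 ** 0.25 / math.pi   # ~ 1.185
d_max = coeff * eps * math.sqrt(f_b) * math.sqrt(R * D)

# Lower limit on the total number of strands, Eq. (E:strand.N)
N_t_min = (math.pi / 4.0) * (D / R) / (f_b * eps ** 2)

print(f"d < {d_max:.3f} arcsec (~{d_max * KM_PER_ARCSEC:.0f} km)")
print(f"N_t > {N_t_min:.0f}")
```

The limits scale as $d_{\max}\propto\varepsilon\sqrt{f_b}$ and $N_t^{\min}\propto(f_b\,\varepsilon^2)^{-1}$, so a smaller filling factor tightens both constraints quickly.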
Strictly speaking, this discussion applies only to the 193\,\AA\ channel. Other bandpasses respond differently to different heating scenarios in loop models \citep[e.g., see the review of][]{Reale:2010} and thus might show different filling factors. Still, our results can be expected to apply roughly to emission originating from coronal plasma at 1\,MK to 2\,MK.
Together with the discussion in \sect{S:loops} we conclude that either the loops are monolithic structures, or the diameter of the individual strands has to be smaller than 15\,km, or else the strands must be implausibly well organized. This new upper limit is more than a factor of 10 smaller than that derived from previous studies \citep[e.g.][]{Brooks+al:2012}, which became possible through the enhanced spatial resolution of Hi-C. The multi-stranded loop scenario can only be valid if the upper limit on the strand diameter set by observations is larger than the lower limit set by basic physical processes such as reconnection, gyration, heat conduction or turbulence. At this point we think that this is the case; however, more detailed studies, in particular of MHD turbulence, would be needed for a final conclusion. We discuss these issues briefly in \app{S:lower.limit}.
\section{Loop morphology and comparison to a 3D model\label{S:model}}
As a first attempt at a morphological comparison between the Hi-C observations and a 3D coronal model, we will simply highlight some common features of observation and model on a qualitative basis. For final conclusions a more detailed quantitative comparison is needed, and in particular a more in-depth analysis of the model is required to better understand how the various structures form.
The coronal loops in the field-of-view of the Hi-C observations discussed in this manuscript can be classified (by eye) into three categories (cf. arrows in \fig{F:model}, top panel):
\begin{itemize}
\item[(a)] Expanding envelope that
consists of several non-expanding loops.
\item[(b)] Thin ({\small$\begin{array}{@{}c@{}}<\\[-1.5ex]{\sim}\end{array}$}3\arcsec) non-expanding threads.
\item[(c)] Broad ({\small$\begin{array}{@{}c@{}}>\\[-1.5ex]{\sim}\end{array}$}5\arcsec) loop-like structures with approximately constant cross
section.
\end{itemize}
The individual loops in the expanding structure (a) and the thin
threads (b) seem to show no (significant) expansion. We are
investigating this quantitatively with the Hi-C data and will
present our results in a separate paper. The tendency for
constant cross-section was reported more than a decade ago for loops observed
in the EUV and X-rays \citep{Klimchuk:2000,Watko+Klimchuk:2000,LopezFuentes+al:2006}.
Recently, based on a 3D MHD model, it has been shown that the constant cross
section could be due to the temperature and density structure within
the expanding magnetic structure, in interplay with the formation
of the coronal emission lines \citep{Peter+Bingert:2012}. This would
work for EUV observations, but it still has to be investigated whether
this would also work at X-ray wavelengths, which originate from a
much broader range of high temperatures.
Recently, \cite{Malanushenko+Schrijver:2013} have suggested that the constant cross section result may be
an artifact of the observing geometry and the likelihood that the
shape of the cross section varies along the loop. The cross
sectional area could expand, but if it does so preferentially along
the line-of-sight, then the loop thickness in the plane-of-the-sky
(i.e., the image) will be constant. This could certainly explain
many loops. However, there should be many other cases where a
different observing geometry reveals a very strong expansion. Such
cases need to be verified before we can accept this explanation for
the constant cross section loops. Also, the thermal structure along and perpendicular to the loops has to be considered, and not the magnetic structure alone, as pointed out by \cite{Mok+al:2008} and \cite{Peter+Bingert:2012}.
\begin{figure}
\includegraphics{hic_f08}
\caption{Morphological comparison of observation and model.
The top panel shows the actual observation of Hi-C (193\,\AA\ band). The field of view (124\arcsec$\times$62\arcsec)
is outlined in \fig{F:full} by the dashed rectangle. The bottom panel shows the coronal emission as synthesized from a 3D MHD model for this channel ($165{\times}109$\,Mm). The arrows point to features that can be found in both model and observations: (a) expanding envelope that consists of several non-expanding loops, (b) thin non-expanding threads, and (c) rather broad loop-like structures with approximately constant cross section.
See \sect{S:model}.
\label{F:model}}
\end{figure}
Because the 3D MHD model successfully provided a match to the constant cross-section loops, we compared a snapshot of a 3D MHD model to the Hi-C observation to see if we find the three categories of loop structures also in the emission synthesized from the model. In the 3D MHD model we solve the mass, momentum and energy balance in a box spanning 167${\times}$167\,Mm$^2$ horizontally and 80\,Mm in the vertical direction (512$^3$ equidistant grid). Horizontal motions (of the granulation) drive the magnetic field in the photosphere and lead to braiding of magnetic field lines as originally suggested by \cite{Parker:1972}. This process induces currents that are dissipated in the upper atmosphere and thereby heat the corona. The details of this model have been described by \cite{Bingert+Peter:2011,Bingert+Peter:2013}.
The numerical experiment we use here differs from the \cite{Bingert+Peter:2011,Bingert+Peter:2013} model by the magnetic field at the lower boundary, an increased size of the computational domain and a higher spatial resolution. As described by \cite{Peter+al:2006} we interpolate the MHD quantities to avoid aliasing effects and compute the emission as it would be seen by AIA using the temperature response functions \citep{Boerner+al:2012,Peter+Bingert:2012}. In \fig{F:model} we show the synthesized emission in the 193\,\AA\ channel\footnote{Because of a slightly too high density in the model transition region we reduce the density (by up to a factor of 2) there in order to avoid small-scale artifacts due to the contribution of plasma at low temperatures.} integrated along the vertical axis, i.e. when looking from straight above.
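The synthesis step, i.e. integrating the optically thin emission (proportional to $n_e^2$ weighted by the temperature response of the channel) along the line of sight, can be sketched as follows. The Gaussian response and the stratification used here are purely illustrative stand-ins, not the tabulated response functions of \cite{Boerner+al:2012} nor the actual model atmosphere:

```python
import numpy as np

# Hypothetical temperature response of an EUV channel: a Gaussian in log T
# around ~1.5 MK, a stand-in for a tabulated 193 A response curve.
def response(T, logT0=6.2, width=0.15):
    return np.exp(-((np.log10(T) - logT0) / width) ** 2)

# Purely illustrative stratification along a vertical line of sight (z in Mm)
z = np.linspace(0.0, 80.0, 512)
T = 1.0e4 + 1.5e6 * np.tanh(z / 5.0)   # chromosphere rising to a ~1.5 MK corona
n_e = 1.0e10 * np.exp(-z / 30.0)       # decreasing electron density (cm^-3)

# Optically thin emission: integrate n_e^2 * response(T) along the column
emissivity = n_e ** 2 * response(T)
intensity = float(np.sum(0.5 * (emissivity[1:] + emissivity[:-1]) * np.diff(z)))
```

The integrand peaks where the density is still high and the plasma has already reached the channel's formation temperature, which is why the synthesized images trace the 1\,MK to 2\,MK structures.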
In the image synthesized from the model we find the same three classes of structures as in the actual observation; the labeled arrows point at such structures. While the thin constant-cross-section loops (b) have been discussed before \citep{Peter+Bingert:2012}, here we also find the non-expanding loops in an expanding envelope (a) and the broad loop-like structures (c). It is beyond the scope of this (mainly observational) study to perform a detailed analysis of the 3D MHD model to investigate what exactly the nature of these three categories is. Here we simply state that these categories are also found in numerical experiments, so that there is some hope of understanding how they come about in future studies.
\section{Conclusions\label{S:conclusions}}
In this study we presented results on the structure of coronal loops based on new observations with the Hi-C rocket telescope providing unprecedented spatial resolution in the EUV down to 0.2\arcsec\ at a spatial scale of 0.1\arcsec\ per pixel.
We have found miniature loops hosting plasma at 1.5\,MK with a length of only about 1\,Mm and a thickness of below 200\,km (\sect{S:plage}). With other current instrumentation, such as AIA, these would cover just two spatial pixels.
These miniature loops are consistent with small magnetic flux tubes that have been observed to rise through the photosphere into the upper atmosphere. However, it will be a challenge to understand these miniature loops in terms of a (traditional) one-dimensional model. From the observational side, this clearly shows the need for future high-resolution Hi-C-type observations together with high-resolution spectro-polarimetric observations of the photosphere and chromosphere.
In the case of the longer, more typical coronal loops we found that the Hi-C observations do not show indications of a sub-structure in these loops (\sect{S:loops}). Therefore these loops with diameters of typically 2\arcsec\ to 3\arcsec\ are either real monolithic entities or they would have to be composed of many strands with diameters well below the resolution limit of Hi-C. Based on some simple assumptions we found that the strands would have to have a diameter of at most 15\,km, which would imply that a loop with 2\arcsec\ diameter would have to be composed of at least 7500 individual strands (\sect{S:strands}). This would compare to a 1\,cm diameter wire rope consisting of wire strands of only 0.1\,mm diameter. No matter whether the loops are monolithic or multi-stranded in nature, it still remains puzzling what determines the width of the loop of typically 2\arcsec\ to 3\arcsec, which is found consistently in the Hi-C as well as the AIA data.
The observational time and field-of-view of the Hi-C rocket experiment were limited, so this discussion cannot be generalized. This highlights the need for such high-resolution observations of the corona in future space missions.
Numerical experiments show similar (large) loop structures as found in the Hi-C observations: non-expanding loops in expanding envelopes, thin threads and thick constant cross-section loops. It still needs to be determined how these are produced in the numerical experiments, and whether the processes in the model can be realistically applied to the real Sun. In any case, the 3D numerical experiments provide us with a tool to investigate this in future studies and thus learn more about the nature of loops in the corona, miniature and large.
{
\acknowledgements
We acknowledge the High resolution Coronal Imager instrument grant funded by the
NASA's Low Cost Access to Space program. MSFC/NASA led the mission and partners
include the Smithsonian Astrophysical Observatory in Cambridge, Mass.; Lockheed Martin's
Solar Astrophysical Laboratory in Palo Alto, Calif.; the University of Central Lancashire in Lancashire, England; and the Lebedev Physical Institute of the Russian Academy of Sciences in Moscow.
The AIA and HMI data
used are provided courtesy of NASA/SDO and the AIA and HMI science
teams. The AIA and HMI data have been retrieved using the German
Data Center for SDO. The numerical simulation was conducted at the
High Performance Computing Center Stuttgart (HLRS). This work was partially funded by the Max-Planck/Princeton Center for Plasma Physics.
The work of J.A.K. was supported by the NASA
Supporting Research and Technology and Guest Investigator Programs.
H.P. acknowledges stimulating discussions with Robert Cameron and Aaron Birch.
We thank the anonymous referee for constructive comments.
}
\input{hic.bbl}
\vspace{2cm}
\section{Introduction}
In this paper we discuss a new algorithm for estimating and improving error terms in the asymptotic solution of linear differential systems. We consider systems of the form
\begin{equation}
Z^\prime(x) = \rho(x) \{ D +R(x) \} Z(x) \;\;\; (a \leq x < \infty),\label{eq:1.1}
\end{equation}
where $Z$ is an $n$-component vector, $\rho$ is a real or complex scalar factor, $D$ is a constant $n \times n$ diagonal matrix,
\begin{equation}
D= dg(d_1,...,d_n) \label{eq:1.2}
\end{equation}
with distinct $d_k$, and $R$ is also an $n \times n$ matrix whose entries tend to zero as $ x \rightarrow \infty$, that is, $R(x) \rightarrow 0$ as $ x \rightarrow \infty$.
\par
If it is the case that $\rho (x) R(x)$ is $L(a, \infty)$, the Levinson asymptotic theorem \cite[section 1.3]{MSPE89};\cite{NL48}
states that there are solutions $Z_k$ $(1 \leq k \leq n)$ of (\ref{eq:1.1}) such that
\par
\begin{equation}
Z_k(x)= \{e_k + \eta_k(x) \}\exp (d_k \int_a^x \rho( t)dt ), \label{eq:1.3}
\end{equation}
where $e_k$ is the unit coordinate vector in the $k$-direction and $\eta_k(x) \rightarrow 0 $ as $ x \rightarrow \infty$. The size of the error term $\eta_k$ is related to the size of $R(x)$ as $x \rightarrow \infty$, and therefore the accuracy of (\ref{eq:1.3}) can be improved if the perturbation $R(x)$ can be improved --- that is, made smaller in magnitude --- as $x \rightarrow \infty$. Under suitable conditions on
$\rho$ and $R$, this improvement can be effected by applying a sequence of transformations to the solution vector $Z$ in (\ref{eq:1.1}). Our algorithm is concerned with the implementation of this sequence of transformations.
\par
In order to introduce the ideas involved, we consider the transformation
\begin{equation}
Z =(I+P )W, \label{eq:1.4}
\end{equation}
where $I$ is the $ n \times n$ identity matrix, dg$P=0$, and the off-diagonal entries of $P$ are defined by
\begin{equation}
PD-DP=R-dgR. \label{eq:1.5}
\end{equation}
Thus, in terms of the $(i,j)$ entries in the matrices,
\begin{equation}
p_{ij} = r_{ij}/(d_j-d_i) \;\;\; ( i \neq j ). \label{eq:1.6}
\end{equation}
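As an illustrative cross-check (not part of the analysis), the defining relation (\ref{eq:1.5}) and the entry-wise formula (\ref{eq:1.6}) can be verified numerically for an arbitrarily chosen $D$ and $R$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
d = np.array([1.0, 2.5, -1.0, 4.0])   # distinct diagonal entries of D
D = np.diag(d)
Rm = rng.standard_normal((n, n))      # a sample perturbation matrix R

# Off-diagonal entries p_ij = r_ij / (d_j - d_i), Eq. (1.6); dg P = 0
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            P[i, j] = Rm[i, j] / (d[j] - d[i])

# Check the defining relation (1.5): PD - DP = R - dg R
assert np.allclose(P @ D - D @ P, Rm - np.diag(np.diag(Rm)))
```

Since $(PD-DP)_{ij}=p_{ij}(d_j-d_i)$ has a vanishing diagonal, (\ref{eq:1.5}) can only prescribe the off-diagonal entries of $P$, which is why ${\rm dg}P=0$ is imposed separately.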
On substituting (\ref{eq:1.4}) into ( \ref{eq:1.1}) and using ( \ref{eq:1.5}), we have
\begin{equation}
W^\prime = \rho \{ \tilde{D} +(I +P)^{-1} (RP-P {\rm dg} R - \rho ^{-1}P^\prime ) \} W, \label{eq:1.7}
\end{equation}
where
\begin{equation} \tilde{D} = D + {\rm dg} R. \label{eq:1.8}
\end{equation}
By ( \ref{eq:1.6}), $ P(x) \rightarrow 0$ as $ x \rightarrow \infty $ and therefore it is clear that there are
circumstances, to be detailed later, in which the perturbation term in the $W$-system has a smaller order of magnitude for large $x$ than the original perturbation $R$. Repetition of the process successively improves the perturbation term. It is to the final system in the process that the Levinson theorem (\ref{eq:1.3}) is applied, when a prescribed accuracy in the error term has been achieved.
\par
The transformation back from the final system to the original system (\ref{eq:1.1}) yields an improvement of (\ref{eq:1.3}) in which the factor $e_k + \eta_k(x)$ is replaced by
\begin{equation}
\{ I+P_0(x) \}\{e_k + \eta_k(x)\} \label{eq:1.8a}
\end{equation}
with a new $\eta_k$ which has the prescribed accuracy, and the matrix $P_0$ is generated explicitly by our algorithm. The terms in $P_0$ tend to zero as $x \rightarrow \infty$ and $ \eta_k = o(P_0)$. Thus (\ref{eq:1.8a})
provides explicit sub-dominant terms for the asymptotic solution of (\ref{eq:1.1}). In sections 2 and 3, we discuss the sequence of transformations and, in sections 4 and 5, we discuss the algorithm for the generation of the terms in $P_0$.
\par
In a recent paper \cite{BEEM95}, we consider a particular system of the form (\ref{eq:1.1}) which arises from the $n$-th order differential equation
\begin{equation}
y^{(n)} (x) - Q(x)y(x)=0, \label{eq:1.9}
\end{equation}
and we formulated an algorithm which implements a sequence of transformations of the type (\ref{eq:1.4})--(\ref{eq:1.6}). The main emphasis in \cite{BEEM95}, however, is on the analytic and asymptotic implications of the transformations for the solution of (\ref{eq:1.9}) and for applications to spectral theory. Here, on the other hand, we wish to develop our algorithm from the point of view of symbolic algebra in the context of the general system (\ref{eq:1.1}). We also demonstrate the versatility of our procedure by applying it to other situations than the one covered in \cite{BEEM95}.
\par
Finally in this introduction, we note that the origins of the transformation (\ref{eq:1.5})-(\ref{eq:1.6})
lie in the work of Harris and Lutz \cite{HL74} with more recent developments of these ideas by Eastham \cite{MSPE86}\cite[section 1.7]{MSPE89}. The nature of the matrix $I+P$ is that it is an explicit approximation to the matrix whose columns are eigenvectors of $D+R$, these eigenvectors in general only being explicit in terms of $D$ and $R$ when $n=2$.
\section{The sequence of transformations}
We now define a sequence of transformations
\begin{equation}
Z_m =(I+P_m)Z_{m+1} \;\;\;(m=1,2,...) \label{eq:2.1}
\end{equation}
of the type introduced in (\ref{eq:1.4}) -- (\ref{eq:1.6}), the purpose of
which is to produce differential systems for the $Z_m$, similar to
(\ref{eq:1.1}), but with the perturbation term successively improved. The definition
is almost the same as that given in \cite[section 4]{BEEM95} for the particular
system (\ref{eq:1.1}) which arises from (\ref{eq:1.9}), and so we shall
be brief in this part of the paper. A typical system in the process is
\begin{equation}
Z_m^\prime = \rho (D_m + R_m)Z_m, \label{eq:2.2}
\end{equation}
where $D_m$ is diagonal, with (\ref{eq:1.1}) being the case $m=1$.
The process ends at $m=M$ when the perturbation $R_M$ has a pre-assigned
accuracy in terms of its order of magnitude as $x \rightarrow \infty$.
\par
As already indicated by (\ref{eq:1.7}), $R_m$ will contain
terms of different orders of magnitude as $x \rightarrow \infty$,
and it is the essence of our algorithm to identify and collate
these terms according to their size. Hence we write
\begin{equation}
R_m = V_m +E_m = V_{1m} + V_{2m} +...+V_{\mu m}+E_m, \label{eq:2.3}
\end{equation}
where
\begin{equation}
V_{km}=o(V_{jm}) \;\;\;( x \rightarrow \infty, k >j) \label{eq:2.4}
\end{equation}
and
\begin{equation}
E_{m}=o(V_{\mu m}) \;\;\;(x \rightarrow \infty). \label{eq:2.5}
\end{equation}
Here $E_m $ represents terms which are already of the pre-assigned
accuracy, and they take little part in the transformation process
(\ref{eq:2.1}). The $V_{jm}$ represent terms which are not of that
accuracy, and they are successively replaced by smaller-order
terms as we go through the process. Also as indicated by (\ref{eq:1.8}),
we take any diagonal terms in $V_{1m}$ over to $D_m$ in (\ref{eq:2.2}).
Thus we arrange that
\begin{equation}
{\rm dg} V_{1m}=0 \label{eq:2.6}
\end{equation}
and we write
\begin{equation}
D_m =D+\Delta_m.
\end{equation}
\par
We substitute (\ref{eq:2.1}) into (\ref{eq:2.2}) to eliminate the
dominant term $V_{1m}$ in (\ref{eq:2.3}) and to define the
resulting terms $V_{j,m+1}$ constructively in terms of the $V_{jm}$.
Corresponding to (\ref{eq:1.5}), we define $P_m$ by
\begin{equation}
P_mD-DP_m = V_{1m} \label{eq:2.8}
\end{equation}
with dg$P_m=0$. Then it is easily checked that (\ref{eq:2.1}) and
(\ref{eq:2.2}) give
\begin{eqnarray}
Z^\prime_{m+1} = \rho \{ D_m +(I+P_m)^{-1}(&-&\rho^{-1}
P^\prime_m + T_m +V_{1m}P_m \nonumber \\
&+&(R_m-V_{1m})(I+P_m)) \}Z_{m+1}, \label{eq:2.9}
\end{eqnarray}
where
\begin{equation}
T_m=\Delta_m P_m -P_m \Delta_m. \label{eq:2.10}
\end{equation}
As in \cite[section 4]{BEEM95}, we show that (\ref{eq:2.9}) can be expressed as
\begin{equation}
Z^\prime_{m+1} = \rho ( D_{m+1} + R_{m+1} ) Z_{m+1} \label{eq:2.11}
\end{equation}
where $R_{m+1}$ has the form
\begin{equation}
R_{m+1}=V_{m+1}+E_{m+1}=V_{1,m+1}+...+V_{\mu, m+1}+E_{m+1} \label{eq:2.12}
\end{equation}
as in (\ref{eq:2.3})-(\ref{eq:2.5}), but with a different $\mu$. To do this,
we let $U$ denote any of the terms on which $(I+P_m)^{-1}$ acts
in (\ref{eq:2.9}). Then we write
\begin{equation}
(I+P_m)^{-1} = I-P_m+P^2_m- ... +(-1)^\nu P^\nu_m + (-1)^{\nu+1}(I+P_m)^{-1}
P^{\nu+1}_m \label{eq:2.13}
\end{equation}
where, for each $U$, $\nu$ is chosen so that the product
\begin{equation}
(I+P_m)^{-1}P^{\nu +1}_m U \label{eq:2.13a}
\end{equation}
has a sufficiently small order of magnitude to be included with $E_m$
and form part of $E_{m+1}$. Now we group together terms of the same order
of magnitude and denote the dominant term by $S_{m+1}$.
We then obtain (\ref{eq:2.12}) (with $S_{m+1}$ in place of $V_{1,m+1}$),
where $E_{m+1}$ has the pre-assigned accuracy and, by (\ref{eq:2.8}),
$S_{m+1}$ and the $V_{j,m+1}$ are known explicitly in terms of the $V_{jm}$.
Then, finally, we obtain (\ref{eq:2.11}) and (\ref{eq:2.12}) by defining
\begin{displaymath}
D_{m+1}=D_m + {\rm dg}S_{m+1}
\end{displaymath}
and
\begin{displaymath}
V_{1,m+1}=S_{m+1} - {\rm dg}S_{m+1}.
\end{displaymath}
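The identity (\ref{eq:2.13}) behind this truncation is exact for any $\nu$, which can be checked numerically; in the following sketch the small matrix $P_m$ and the value of $\nu$ are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, nu = 4, 3
P = 0.1 * rng.standard_normal((n, n))   # a small perturbation matrix P_m
I = np.eye(n)
A = np.linalg.inv(I + P)                # A_m = (I + P_m)^{-1}

# Truncated alternating series plus the exact remainder of Eq. (2.13)
series = sum((-1) ** r * np.linalg.matrix_power(P, r) for r in range(nu + 1))
remainder = (-1) ** (nu + 1) * A @ np.linalg.matrix_power(P, nu + 1)
assert np.allclose(A, series + remainder)
```

The remainder carries the factor $P_m^{\nu+1}$ and is therefore smaller than the retained terms by $\nu+1$ powers of $P_m$, which is exactly what allows it to be absorbed into $E_{m+1}$.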
\section{Orders of magnitude}
The transformation process (\ref{eq:2.1}) is carried out for $m=1,2,...,M-1$
and, in order to express the process in terms of an algorithm which
can be implemented in the symbolic algebra system Mathematica, it is
necessary to specify more precisely the orders of magnitude involved.
The starting point is $m=1$ and, in (\ref{eq:2.3}), we suppose that
\begin{equation}
V_1=V_{11} + V_{21} +...+V_{N1}, \label{eq:3.1}
\end{equation}
where
\begin{equation}
V_{j1}(x) =O(x^{-\theta_j}) \;\;\;(1 \leq j\leq N) \label{eq:3.2}
\end{equation}
as $x \rightarrow \infty$ and, corresponding to (\ref{eq:2.4}),
\begin{displaymath}
0< \theta_1 < \theta_2 <...< \theta_N.
\end{displaymath}
We assume that the $\theta_j$ in (\ref{eq:3.2}) are chosen to have their
minimum possible values and, in practice, (\ref{eq:3.2})
represents the exact order of magnitude of $V_{j1}$. We denote by $\sigma$
the set of positive numbers
\begin{equation}
\sigma = \{ n_1 \theta_1 + n_2 \theta_2+...+n_N \theta_N; \; n_1 \geq 1, \;
n_2 \geq 0, ..., n_N \geq 0 \} \label{eq:3.3}
\end{equation}
the $n_j$ being integers. It is possible that different values of
the $n_j$ give the same number in $\sigma$ and, allowing for this, we denote
the distinct numbers in $\sigma$ by $\sigma_1,\sigma_2,...$ in
increasing order. Let us suppose that the pre-assigned accuracy represented
by $E_m$ in (\ref{eq:2.3}) is expressed as
\begin{equation}
E_m(x)=O(x^{-K}) \label{eq:3.4}
\end{equation}
for a given $K>0$. Then we choose the integer $L$ so that
\begin{equation}
\sigma_L < K \leq \sigma_{L+1}. \label{eq:3.5}
\end{equation}
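As an illustration of this bookkeeping, the ordered set $\sigma$ and the index $L$ in (\ref{eq:3.5}) can be generated as follows; the values $\theta_1=1$, $\theta_2=3/2$ and $K=4$ are chosen purely for illustration:

```python
import itertools

def sigma_values(thetas, K):
    # All distinct sums n1*theta1 + ... + nN*thetaN with n1 >= 1, nj >= 0.
    # Every value <= K satisfies n1*theta1 <= K and nj*thetaj <= K - theta1,
    # so a bounded enumeration captures every sigma value up to K
    # (plus some values beyond K).
    ranges = [range(1, int(K / thetas[0]) + 1)]
    ranges += [range(0, int((K - thetas[0]) / th) + 1) for th in thetas[1:]]
    vals = {sum(n * th for n, th in zip(ns, thetas))
            for ns in itertools.product(*ranges)}
    return sorted(vals)

# Illustrative values: theta = (1, 3/2) and accuracy K = 4
sig = sigma_values([1.0, 1.5], 4.0)     # 1, 2, 2.5, 3, 3.5, 4, ...
L = max(i for i, s in enumerate(sig, start=1) if s < 4.0)   # sigma_L < K
```

For these values the distinct elements of $\sigma$ below $K$ are $1, 2, 5/2, 3, 7/2$, so $L=5$ and $\sigma_L = 7/2 < K \leq \sigma_{L+1} = 4$.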
\par
The definition of $\sigma$ in (\ref{eq:3.3}) allows us to
postulate orders of magnitude
\begin{equation}
V_{jm}(x)=O(x^{-\sigma_{m+j-1}}) \label{eq:3.6}
\end{equation}
where we can allow the possibility that some of the $V_{jm} $
( even $V_{1m} $) may be identically zero. To justify (\ref{eq:3.6}),
we note first that
$P_m=O(x^{-\sigma_m})$ by (\ref{eq:2.8}). Then, recalling the use of
(\ref{eq:2.13}) in ( \ref{eq:2.9}), we also have
\begin{displaymath}
P^r_m V_{jm} = O(x^{-r \sigma_m - \sigma_{m+j-1}}),
\end{displaymath}
and again $ r \sigma_m + \sigma_{m+j-1} \in \sigma$ by (\ref{eq:3.3}).
Further, since the combination $r=0$ and $j=1$ does not occur together
here, we have
\begin{displaymath}
r \sigma_m + \sigma_{m+j-1} \geq \sigma_{m+1}.
\end{displaymath}
The term $T_m$ in (\ref{eq:2.9}) is treated similarly. A simple induction
argument on $m$ now establishes (\ref{eq:3.6}) for all $j$ and $m$,
provided only that we add a suitable hypothesis on the term
$\rho^{-1}P^\prime_m$ which appears in (\ref{eq:2.9}) but is not so far included
in the argument. We therefore add the hypothesis that
\begin{equation}
\rho^{-1}P^\prime_m = W_{1m} +...+W_{lm} \label{eq:3.7}
\end{equation}
where, similarly to (\ref{eq:3.6}),
\begin{equation}
W_{jm}(x)=O(x^{-\sigma_{m+j}}) \label{eq:3.8}
\end{equation}
and again we allow the possibility that some $W_{jm}$ may be zero. Since
$P_m^\prime$ depends on $V^\prime_{1m}$ (see (\ref{eq:2.8})), which
in turn depends on the previous matrices in the process (\ref{eq:2.1}),
the nature of (\ref{eq:3.7}) and ( \ref{eq:3.8}) is that they are
conditions on the successive derivatives of the original $V_{j1}$ which
occur in $R_1$ in (\ref{eq:2.3}) and (\ref{eq:3.1}).
The exact form of these conditions on $V_{j1}$ determines classes of matrices $R_1$ to which
this theory and the consequent algorithms are applicable. Examples of such classes will be given in
section 5.
Thus (\ref{eq:3.7})
and (\ref{eq:3.8}) are consequences of these conditions on $R_1$ which
must be established ( usually by induction) in each application of
the theory. It is these $W_{jm}$ which will appear in our algorithms.
\par
We can now summarise this section by saying that, subject to (\ref{eq:3.2}),
(\ref{eq:3.7}) and ( \ref{eq:3.8}), we have established that
\begin{displaymath}
V_{jm}(x) = O(x^{-\sigma_{m+j-1}})
\end{displaymath}
in (\ref{eq:2.3})-(\ref{eq:2.5}). Also, allowing for the fact that some
$V_{jm}$ in (\ref{eq:2.5}) may be zero, we can write
$\mu=L-m+1$ by (\ref{eq:3.4}) and (\ref{eq:3.5}). The transformation
process (\ref{eq:2.1}) ends when (\ref{eq:2.3}) reduces to
\begin{equation}
R_M =E_M=O(x^{-K}), \label{eq:3.9}
\end{equation} the pre-assigned accuracy, and it follows from (\ref{eq:3.5}) that
\begin{displaymath}
M=L+1, \mu = M-m.
\end{displaymath}
\section{The algorithm }
In this section of the paper we show how the theory that has been developed in sections 1 through 3 may be used to obtain a computer code to calculate the asymptotic expansion of the solutions of (\ref{eq:1.1}) together with an explicit error bound at some point $x \geq X > 0$.
A consequence of the theory that we have exhibited is that, given sufficient computational power,
the quality of the asymptotics that we obtain allows us to take $X$
to be quite small and still maintain high accuracy in the solutions.
\par
As in the discussion in \cite{BEEM95} the algorithm is formulated and
implemented in three stages. All the symbolic algorithms that we shall discuss are implemented in the
symbolic algebra system Mathematica.
The first algorithm, which is concerned with the generation of a set of
recurrence relations to compute the matrix quantities $S_j$,
assumes only that the quantities involved satisfy
non-commutative multiplication.
We recall the comments made after (\ref{eq:3.8}) that general classes of matrices
$R_1$ to which the algorithm is applicable will be given in section 5.
In the following, we write $A_m=(I+P_m)^{-1}$ and we note that expressions such as $P_m$, $T_m$ and $W_{jm}$ appear in the algorithm by virtue of their orders
of magnitude as indicated in section 3.
\begin{Algorithm}
\newcounter{rem1}
\newcounter{rem2}
\newcounter{rem3}
\begin{list}%
{( \alph{rem1} )}{\usecounter{rem1}
\setlength{\rightmargin}{\leftmargin}
\setlength{\rightmargin}{\labelwidth}
\setlength{\leftmargin}{\labelwidth}}
\item
Define $K$ to specify the accuracy (\ref{eq:3.4}).
\item
Define $N$ and $\theta_1, ..., \theta_N$ in (\ref{eq:3.2}) and arrange the distinct numbers in the set $\sigma$ in increasing order. This defines
$\sigma_1, \sigma_2,...$ and determines $L$ in (\ref{eq:3.5}).
For a given $K$, $n_j$ in (\ref{eq:3.3}) satisfies
\begin{list}%
{( \Roman{rem2} )}{\usecounter{rem2}}
\item
$0 \leq n_j \leq [ \frac{ K-\theta_1}{\theta_j}] \;\; (j \geq 2 )$
\item
$1 \leq n_1 \leq [ \frac{ K }{\theta_1}]$.
\end{list}
\item
Start with $D_1$ and $V_{j1} \; ( 1 \leq j \leq N) $ as in (\ref{eq:3.1})--(\ref{eq:3.2}) and put $E_1=0$.
\item
For $ m=1$ to $M-1$,
\begin{list}%
{( \Roman{rem2} )}{\usecounter{rem2}}
\item
$E_{m+1}=A_mE_m(I+P_m)$.
\item
For each $U \in \{W_{jm} \; ( 1\leq j \leq l), T_m,V_{1m}P_m,V_{jm}, V_{jm}P_m \;\; (2 \leq j \leq M-m )\}$,
\begin{list}%
{( \roman{rem3} )}{\usecounter{rem3}}
\item
In (\ref{eq:2.13a}) determine $\nu$.
\item
For $r=0$ to $\nu$,
\begin{itemize}
\item
determine the order $\sigma_{m+k} =r\sigma_m+($order of $U)$ of $P_m^r U$;
\item
Update $V_{k,m+1}= V_{k,m+1} + (-1)^rP^r_m U$.
\end{itemize}
\item
Update $E_{m+1}=E_{m+1}+(-1)^\nu A_mP_m^{\nu+1} U$.
\end{list}
\item
Output $S_{m+1} = V_{1,m+1}$.
\end{list}
\end{list}
\end{Algorithm}
\par
At each stage of the algorithm, $S_{m+1}$ depends on the terms $D_k,P_k,W_{jk}$ and $V_k$ $(1 \leq k \leq m )$. However, because of part $(b)$, the algorithm requires more precise information than its counterpart, Algorithm 6.1, in \cite{BEEM95}. The set $\sigma$ in \cite{BEEM95} has a very simple form, consisting only of numbers $na$, where $n$ is a positive integer and $a \; (>0)$ is a parameter. Thus
$\sigma_n = na$ in (\ref{eq:3.3}), and Algorithm 6.1 in \cite{BEEM95} can be executed without specifying the value of $a$. We give a more general example of the same situation in Example 5.1 below. However, in the wider context of (\ref{eq:3.3}), sufficient information about the parameters $\theta_1,...,\theta_N$ must be provided to Algorithm 4.1 in order to generate all the necessary values of $\sigma_n$. We therefore defer further discussion of the output of Algorithm 4.1 to Example 5.2 in the next section, where values of the parameters are specified.
\par
\begin{Algorithm}
Starting with the precise form of the matrices $D_1$ and $V_1$ and with
$E_1=0$, the expressions $S_2,...,S_{M-1}$ generated by Algorithm 4.1
are evaluated in order. These are then used to evaluate the matrices
$D_{m+1}$ and $V_{1,m+1}$.
\end{Algorithm}
The structure of the algorithm is similar to that of Algorithm 6.2 of \cite{BEEM95}.
As noted in that paper, in order to reduce the computation time a detailed assessment
of the mathematical issues involved at each simplification of an expression must first be made.
Thus the judicious use of the {\it Together, Apart} commands instead of the {\it Simplify}
command can result in a dramatic decrease in the time needed to perform the computation.
At this stage it is necessary to keep the expressions in symbolic form since the $W_{jm}$
must be obtained explicitly. These are computed in terms of $P^{'}_m$, which in turn is obtained from $S_j$ by differentiation.
\par
The final algorithm needed in the symbolic part of the computation
obtains an upper bound for the norm $\parallel E_m \parallel$
of the error term $E_m$. The norm used is the sup.\ norm, and the bound is obtained
by applying the triangle and Cauchy inequalities
\begin{displaymath}
\parallel AB \parallel \leq n \parallel A \parallel\parallel B \parallel , \;
\parallel A+B \parallel \leq \parallel A \parallel+\parallel B \parallel
\end{displaymath}
for $ n \times n $ matrices. As in \cite{BEEM95}
a bound for the norm of the inverse matrix $A_m= (I+P_m)^{-1}$ is given by
\begin{equation}
\parallel A_m \parallel \leq 1 + \parallel P_m \parallel/(1-n
\parallel P_m \parallel ) \label{eq:4.4a}
\end{equation}
provided $ \parallel P_m \parallel <1/ n $.
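A quick numerical sanity check of (\ref{eq:4.4a}) and of the two inequalities above (Python sketch; here the sup.\ norm is taken to be the largest absolute entry, and the test matrices are random):

```python
import numpy as np

def sup_norm(A):
    # the "sup." norm used here: the largest absolute entry of the matrix
    return np.max(np.abs(A))

n = 4
rng = np.random.default_rng(0)
P = rng.uniform(-1.0, 1.0, (n, n))
P *= 0.05 / sup_norm(P)                       # enforce sup_norm(P) = 0.05 < 1/n

B = rng.uniform(-1.0, 1.0, (n, n))
assert sup_norm(P @ B) <= n * sup_norm(P) * sup_norm(B)    # Cauchy inequality
assert sup_norm(P + B) <= sup_norm(P) + sup_norm(B)        # triangle inequality

A = np.linalg.inv(np.eye(n) + P)              # A_m = (I + P_m)^{-1}
bound = 1 + sup_norm(P) / (1 - n * sup_norm(P))
assert sup_norm(A) <= bound                   # the bound (4.4a)
```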
\begin{Algorithm} \label{alg:6.3}
Compute the ${\it sup.}$ norm of each matrix in $E_m$, using (\ref{eq:4.4a})
for the inverse matrices.
Next apply the triangle and Cauchy inequalities to obtain an upper bound for
the ${\it sup.}$ norm of $E_m$ itself.
\end{Algorithm}
We note that, since Algorithm 4.1 expresses $E_m$ in terms of matrices arising at earlier stages of
(\ref{eq:2.2}) -- and therefore ultimately in terms of the $V_{j1}$ in (\ref{eq:1.1}) --
so also Algorithm 4.3 ultimately expresses the norm of $E_m$ in terms of norms derived from the original system (\ref{eq:1.1}).
\par
Before moving on to examples of the implementation of the algorithms, we add some detail to (\ref{eq:1.8a}) concerning the generation of the sub-dominant terms in the asymptotic solution of (\ref{eq:1.1}). By (\ref{eq:3.9}) and (\ref{eq:3.5}), the final system in the sequence (\ref{eq:2.2}) is
\begin{equation}
Z'_M = \rho (D_M+E_M)Z_M, \label{eq:4.2}
\end{equation}
where
\begin{equation}
\parallel E_M \parallel \leq c_Mx^{- \sigma_M} \label{eq:4.3}
\end{equation}
and $c_M$ is a constant. We recall that the numbers $\sigma_m$ cover all orders of magnitude which occur. Algorithm 4.3 provides a definite value for $c_M$ in any particular example. As in (\ref{eq:1.3}), the asymptotic solution of (\ref{eq:4.2}) has the form
\par
\begin{equation}
\{ e_k + \eta_k(x) \} \exp ( \int_a^x d_{kM}(t) \rho(t) dt ), \label{eq:4.4}
\end{equation}
where the $d_{kM}$ are the diagonal entries in $D_M$ and the size of $\eta_k$ can be expressed in terms of $c_M$ and $\sigma_M$ as in \cite[(3.15)]{BEEM95}. What we wish to emphasise here is the transformation back from (\ref{eq:4.2}) to the original system (\ref{eq:1.1}).
As indicated in (\ref{eq:1.8a}), this adds to (\ref{eq:4.4}) the extra factor
\begin{equation}
I+P_0(x) = \prod_{m=1}^{M-1} \{ I + P_m(x)\}.
\label{eq:4.5}
\end{equation}
Now the definition of $P_m$ in (\ref{eq:2.8}) is in terms of $V_{1m}$, which is provided by Algorithms 4.1 and 4.2. Further, by (\ref{eq:2.8}) and (\ref{eq:3.6}), $P_m(x) = O(x^{-\sigma_m}) \; ( 1 \leq m \leq M-1 ).$
Thus, in terms of (\ref{eq:4.5}), our algorithms provide sub-dominant terms up to
$O(x^{-\sigma_{M-1}})$
in the asymptotic solution of (\ref{eq:1.1}).
\par
We mention one further point concerning the transformation process which leads from (\ref{eq:2.2}) to (\ref{eq:2.9}). Since the derivative $P'_m$ appears in (\ref{eq:2.9}) and since $P_m$ ultimately depends on $V_1$ and $\rho$, each step in the process requires the existence of a further derivative of $V_1$ and $\rho$. Thus the sub-dominant terms in (\ref{eq:4.5}) require the existence of $M-1$ derivatives of $V_1$ and $\rho$. If $V_1$ and $\rho$ are infinitely differentiable then, subject to convergence considerations, (\ref{eq:4.5}) would yield a full asymptotic expansion. It is hoped to deal with this matter in a future paper.
\section{Examples}
\subsection{Example 1}
Let $\rho(x)=x^\gamma$ and $ R(x)=x^{-(1+\gamma)}C$, where $\gamma > 0$ and $C$ is a constant matrix.
Here we have just $N=1$ in (\ref{eq:3.1}) and
\begin{displaymath}
\sigma_m=m(1+\gamma) \;\; ( m=1,2,...).
\end{displaymath}
This example is basically the case considered in \cite[section 5]{BEEM95} with a special choice of $C$ and, as in \cite{BEEM95}, the condition (\ref{eq:3.7})
is easily verified by induction on $m$. The present code has been tested on this example
and the results from Algorithm 4.1 are, up to notational differences, identical with those reported on in \cite{BEEM95}. Further, Algorithms 4.2 and 4.3 return values of the solutions
computed with $4$ iterations that, at $X=40$, are within $10^{-11}$ of those reported on in \cite{BEEM95}.
\subsection{Example 2}
A significantly different example is obtained when $\rho(x)$ and $R(x)$
in (\ref{eq:1.1}) contain periodic factors.
Let $\rho(x)=x^\gamma \phi(x^\beta)$
and
\begin{equation}
R(x)=x^{-(1+\gamma-\beta)}F_1(x^\beta) + x^{-(1+\gamma)}F_2(x^\beta),
\label{eq:5.1}
\end{equation}
where
\begin{equation}
0< \beta < 1+\gamma \label{eq:5.2}
\end{equation}
and $\phi(t),F_1(t)$ and $F_2(t)$ have the same period $\omega$ in $t$, with $\phi$ nowhere zero.
Here we have $N=2$ in (\ref{eq:3.1}) and
\begin{equation}
\theta_1=1+\gamma-\beta,\;\;\; \theta_2=1+\gamma \label{eq:5.3}
\end{equation}
in (\ref{eq:3.1})-(\ref{eq:3.2}).
Corresponding to (\ref{eq:3.6}), we make the induction hypothesis
\begin{displaymath}
V_{jm}=x^{- \sigma_{m+j-1}}U_{jm}(x^\beta)
\end{displaymath}
where $U_{jm}(t)$ has period $\omega$ in $t$. Then, by (\ref{eq:2.8}),
\begin{displaymath}
P_m(x)=x^{-\sigma_m} \Pi_m ( x^\beta)
\end{displaymath}
where $\Pi_m(t)$ has period $\omega$ and the entries $\pi_{ijm}$ in $\Pi_m$ are obtained from those in $U_{1m}$ by the formula
\begin{displaymath}
\pi_{ijm}=u_{ij1m}/(d_j-d_i) \;\;\;( i \neq j ).
\end{displaymath}
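This is the usual Harris--Lutz construction: with zero diagonal and off-diagonal entries $\pi_{ij}=u_{ij}/(d_j-d_i)$, the matrix $P$ satisfies the commutator equation $PD-DP=U-{\rm dg}\,U$ exactly. A numerical illustration (Python sketch; the diagonal $d$ and matrix $U$ are arbitrary test data, and the sign convention follows the formula above):

```python
import numpy as np

def harris_lutz_P(U, d):
    # off-diagonal entries pi_ij = u_ij / (d_j - d_i), zero diagonal
    n = len(d)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                P[i, j] = U[i, j] / (d[j] - d[i])
    return P

d = np.array([1.0, -1.0, 2.0, 0.5])           # distinct diagonal entries
rng = np.random.default_rng(2)
U = rng.standard_normal((4, 4))
P = harris_lutz_P(U, d)
D = np.diag(d)
assert np.allclose(np.diag(P), 0.0)           # P has zero diagonal
assert np.allclose(P @ D - D @ P, U - np.diag(np.diag(U)))
```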
Then considering (\ref{eq:3.7}), we have
\begin{eqnarray}
\rho^{-1}P_m^{'} &=& x^{ -( \sigma_m + \gamma +1 -\beta)}(\Pi^{'}_m/\phi)(x^\beta)
-\sigma_m x^{ -( \sigma_m + \gamma +1 )}(\Pi_m/\phi)(x^\beta) \nonumber \\
&=& x^{ -( \sigma_m + \gamma +1 -\beta)} W_1( x^\beta) + x^{ -( \sigma_m + \gamma +1 )}W_2(x^\beta) \label{eq:5.4}
\end{eqnarray}
and hence (\ref{eq:3.7}) holds with $l=2$.
\par
We note that the upper bound (\ref{eq:5.2}) placed on $\beta$ is a restriction on the frequency of oscillations of $\rho$ and $R$ in this example. The same type of condition was imposed in \cite[Example 2.4.1]{MSPE89} in connection with the method of repeated diagonalization. When $\beta > 1 + \gamma$, the asymptotic solution of (\ref{eq:1.1}) requires transformations of an entirely different nature from those based on (\ref{eq:2.1}) and (\ref{eq:2.8}) \cite{MSPE92a}, \cite{YTS}.
\par
In order to discuss the output of Algorithm 4.1 for this example, we have to choose specific values of $\beta$ and $\gamma$, so that part (b) can generate the list of values $\sigma_m$. We make the simple choice $\beta=\gamma=1$, so that $\theta_1=1$ and $\theta_2=2$ in (\ref{eq:5.3}). Then, by (\ref{eq:3.3}), $ \sigma_m =m$.
Also, by (\ref{eq:5.1}) and (\ref{eq:5.4}), we have
\begin{eqnarray*}
V_{11}(x) = x^{-1}F_1(x), && V_{21}(x) = x^{-2}F_2(x), \\
W_{1m}(x)= x^{-(m+1)}W_1(x), && W_{2m}(x) = x^{-(m+2)}W_2(x)
\end{eqnarray*}
in the notation of section 3.
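With the choice $\theta_1=1$, $\theta_2=2$ made above, the ordered values $\sigma_m$ required by part (b) of Algorithm 4.1 can be generated by brute-force enumeration of the sums $n_1\theta_1+n_2\theta_2$ (Python sketch; the enumeration cutoff is ours):

```python
import itertools

def sigma_values(thetas, count):
    # increasing enumeration of the sums n_1*theta_1 + ... + n_N*theta_N
    # with integers n_i >= 0, not all zero, as in (3.3)
    limit = count * max(thetas)
    reach = int(limit / min(thetas)) + 1
    vals = set()
    for combo in itertools.product(range(reach + 1), repeat=len(thetas)):
        if any(combo):
            s = sum(n * t for n, t in zip(combo, thetas))
            if s <= limit:
                vals.add(s)
    return sorted(vals)[:count]

assert sigma_values([1, 2], 6) == [1, 2, 3, 4, 5, 6]   # sigma_m = m, as in the text
```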
\par
We have noted in section 4 that $S_{m+1}$ depends on $D_k, \; P_k, W_{jk}$ and $V_k \; (1 \leq k \leq m).$ However, the formulae for the $S_{m+1}$
can often be simplified by expressing them in terms of the earlier $S_k$. This reduces the number of terms in the formulae with a consequent reduction in the computational effort required. We now give the output for $S_2,S_3$ and $S_4$:
\begin{eqnarray*}
S_2 &=& V_{11} P_1 + T_1 + V_{12} - W_{11} \\ \nonumber
S_3 &=& -P_1 S_2 + V_{12} P_1 + T_2 - W_{12} - W_{21} \\ \nonumber
S_4 &=& T_3 - W_{13} \\ \nonumber
\end{eqnarray*}
We note that, at this stage, these expressions appear no more complex than those computed in \cite{BEEM95}. However, increased difficulties do occur in the evaluation of the entries in $T_3$, $W_{12},W_{21}$ and $W_{31}$ at the next stage, when Algorithm 4.2 is implemented. We discuss this point further in Example 5.3. The time needed on a Sun SPARC-station 10 to compute $S_4$ is $1.6$ seconds, which is comparable with the $1.1833$ seconds reported in \cite{BEEM95}.
The similar times reflect the low number of terms that must be manipulated by the symbolic algebra system.
As we remarked previously, the output of Algorithm 4.1 at this point consists only of a set of symbolic expressions which satisfy non-commutative multiplication.
\par
The error term
\begin{eqnarray*}
E_4 &=& -A_3 W_{23} + A_4 T_4 - A_4 W_{14} - A_4 W_{24} \\
&+& A_1 P_1 ^2 \left( V_{12} P_{1} - W_{21} \right) \\
&+& A_2 \left( -P_1 \left( T_1 + V_{12} - W_{11}\right)+ V_{12} P_1 - P_1 V_{11} P_1 - W_{21} \right) P_2 \\
&+& A_3 V_{31} P_3 + A_4 V_{41} P_4
\end{eqnarray*}
is more complex than that found in \cite{BEEM95}, which consists of only $11$ additive terms. This additional complexity in the $E_4$ term is reflected in the time needed to compute norms when the third stage, Algorithm 4.3, is implemented.
\subsection{Example 3}
Algorithms 4.2 and 4.3 require the input of specific matrices $D_1$ and $V_1$ and, in this example, we introduce a special case of Example 5.2 which arises from the $n-$th order differential equation
\begin{equation}
y^{(n)}(x)-Q(x)y(x)=0. \label{eq:5.5}
\end{equation}
Again, $Q(x)$ contains a periodic factor of the form
\begin{equation}
Q(x)= x ^\alpha f(x^\beta) \label{eq:5.6}
\end{equation}
where $f(t)$ is periodic in $t$ and nowhere
zero, with $0 < \beta < 1 + \frac{\alpha}{n}$.
As in \cite[section 3]{BEEM95}, we write (\ref{eq:5.5}) in the system form
\begin{equation}
Z'=Q^{1/n}(D+Q'Q^{-1-1/n}C)Z, \label{eq:5.7}
\end{equation}
where $Z$ has a first component $y$, $D$ is the diagonal matrix formed by
the $n-$th roots of unity,
\begin{equation}
D={\rm dg}( \omega_1,...,\omega_n),
\end{equation}
and $C$ is constant with
\begin{equation}
{\rm dg} C = -(n-1)(2n)^{-1}I. \label{eq:5.8}
\end{equation}
It follows from (\ref{eq:5.6}) that
\begin{displaymath}
Q'Q^{-1-1/n}= \beta x^{-(1+\alpha/n-\beta)}( f'f^{-1-1/n})(x^\beta)
+ \alpha x^{-(1+\alpha/n)}f^{-1/n}(x^\beta).
\end{displaymath}
Hence (\ref{eq:5.7}) is the special case of (\ref{eq:1.1}) and (\ref{eq:5.1}) in which
\begin{displaymath}
\gamma = \alpha/n,\; \phi=f^{1/n},\; F_1=\beta f'f^{-1-1/n}C,\; F_2 = \alpha f^{-1/n}C.
\end{displaymath}
\par
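The expression for $Q'Q^{-1-1/n}$ above can be spot-checked numerically (Python sketch with the sample choices $\alpha=n=4$, $\beta=1$ and $f(t)=2+\sin t$ used later in this example; $Q'$ is approximated by a central difference):

```python
import numpy as np

n, alpha, beta = 4, 4, 1                       # sample values (alpha = n, beta = 1)
f = lambda t: 2.0 + np.sin(t)                  # the periodic factor, nowhere zero
fp = lambda t: np.cos(t)                       # its derivative

Q = lambda x: x**alpha * f(x**beta)

x, h = 1.3, 1e-6
dQ = (Q(x + h) - Q(x - h)) / (2 * h)           # central-difference Q'(x)
lhs = dQ * Q(x) ** (-(n + 1) / n)
rhs = (beta * x ** (-(1 + alpha / n - beta)) * fp(x**beta) * f(x**beta) ** (-(n + 1) / n)
       + alpha * x ** (-(1 + alpha / n)) * f(x**beta) ** (-1 / n))
assert abs(lhs - rhs) < 1e-6
```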
We now write (\ref{eq:5.7}) in the form (\ref{eq:2.2}) (with $m=1$), where ${\rm dg}\, V_{11}=0$ as in (\ref{eq:2.6}). Thus, taking the diagonal terms from $F_1$ over to $D$ and
using (\ref{eq:5.8}), we define
\begin{equation}
D_1=D-(n-1)(2n)^{-1} \beta x^{-(1+\alpha/n-\beta)}(f'f^{-1-1/n})(x^\beta) I = D+\frac{1}{2}(n-1)pI, \label{eq:5.9}
\end{equation}
where
\begin{displaymath}
p=x^{-\alpha/n} \{ f^{-1/n}(x^\beta) \}'
\end{displaymath}
and
\begin{equation}
R_1=V_{11}+V_{21} \;\;(=V_1), \label{eq:5.10}
\end{equation}
where
\begin{displaymath}
V_{11}=-x^{-\alpha/n}np(C-{\rm dg } C), \; V_{21}=\alpha x ^{-(1+\alpha/n)}f^{-1/n}(x^\beta)C.
\end{displaymath}
Thus (\ref{eq:5.9}) and (\ref{eq:5.10}) are our choice of $D_1$ and $V_1$.
\par
As in the discussion of Example 5.2, we choose the parameter values $\beta=\gamma=1$, that is, $\alpha=n$ and $\beta=1$ in (\ref{eq:5.6}).
Finally, we must also choose the value of $n$ in order to complete the requirements for implementing Algorithms 4.2 and 4.3.
We choose $n=4$, in which case a short calculation gives
\begin{displaymath}
C=-\frac{1}{8}\left ( \begin{array}{cccc}
-3 & 1+i & 1 & 1-i \\
1-i & -3 & 1+i & 1 \\
1 & 1-i & -3 & 1+i \\
1+i & 1 & 1-i & -3
\end{array}
\right )
\end{displaymath}
as in \cite[Algorithm 6.2]{BEEM95}.
\par
The periodic nature of the function $Q(x)$ introduces
additional matrices over the case discussed in Example 5.1 and
the theory expounded in \cite{BEEM95}. As we remarked above,
Algorithm 4.1 needs the specific values of $\beta$ and $\gamma$
to be available. A consequence of this is that all three algorithms must be run for each set of parameter values. However the main additional computational difficulties occur in Algorithms 4.2 and 4.3. In Algorithm 4.2 the extra matrices generated as a consequence of the periodic nature of $Q$ must have their entries evaluated, while in Algorithm 4.3 the norm of the error matrix, which is considerably more complex than that which occurs in Example 5.1,
must be evaluated.
\par
In order to test the performance of the set of algorithms we have chosen
to take
\begin{equation}
f(x)=2+\sin x. \label{eq:5.13}
\end{equation}
However the performance of Algorithms 4.2 and 4.3 is considerably improved if we
work with a generic function $f$ together with the simplification rule
\begin{equation}
f^{''}=2-f
\end{equation}
and the results that we report are based on this latter situation.
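The rule follows since $f''(x)=-\sin x=2-f(x)$; a numerical confirmation (Python sketch using a second central difference):

```python
import numpy as np

f = lambda x: 2.0 + np.sin(x)
x = np.linspace(0.1, 6.0, 25)
h = 1e-4
f2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2   # second central difference
assert np.allclose(f2, 2.0 - f(x), atol=1e-6)    # the simplification rule f'' = 2 - f
```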
A further consequence of the extra complexity in $Q$ is that, with the CPU power and memory that we have available, we cannot evaluate the entries in $S_5$. Thus, effectively, we can only perform $4$ iterations of Algorithm 4.1.
The time needed on a SPARC 10 workstation to compute the entries in $S_4$ is
$190$ seconds of CPU time. This compares with the $75$ CPU seconds that was needed in the work reported on in \cite{BEEM95}.
The final algorithm, Algorithm 4.3, deals with the estimation of the ${\it sup.}$ norm of the error matrix, in our case $E_4$.
This involves first applying the Cauchy and triangle inequalities to each
matrix component of $E_4$ to obtain an upper bound for
$ \parallel E_4 \parallel $ in terms of the norms of its components. An estimate for the norm of the inverse matrix $A_4$ is given by (\ref{eq:4.4a}).
\par
These norms are estimated by examining and evaluating each component of each matrix.
In doing this we encounter terms involving $p$ and its derivatives.
In order to obtain upper bounds for these terms we symbolically compute expressions for them and note that, for this example,
\begin{displaymath}
3 \geq |f(x)| \geq 1 \;\; (X<x<\infty).
\end{displaymath}
This gives the necessary bound.
Again the increase in the complexity of the expressions, resulting from
the more complex structure of the initial data means that the computation
time that is required is increased over \cite{BEEM95}.
It takes some $1110$ CPU seconds to compute the norm of $E_4$, compared with approximately $550$ CPU seconds for the corresponding computation reported in \cite{BEEM95}.
At $X=40$ the bound for the norm of this error matrix is $ 1.63099 \times 10^{-6}$.
\par
We now compute the factor
\begin{displaymath}
I+P_0(x)
\end{displaymath}
discussed in (\ref{eq:4.5}) and apply this to the asymptotic solution (\ref{eq:4.4}) to yield the asymptotic solution of (\ref{eq:5.7}).
We mention that in view of the size of the expressions that are generated in
(\ref{eq:4.5}), we have chosen to evaluate it at $x=40$ using $30$ digits of accuracy.
\section{Concluding remarks}
\subsection{The transformations of Harris and Lutz}
In the introduction, we indicated that the origins of our basic transformation (\ref{eq:1.4})-(\ref{eq:1.6}) lie in the 1974 paper of Harris and Lutz.
In a subsequent paper (\cite[2.4]{HL77}), an extension of (\ref{eq:1.5}) is also discussed. Whereas (\ref{eq:1.5}) can be described as providing a first-order approximation to the exact diagonalization of $D+R$
in (\ref{eq:1.1}), the extension provides a more accurate second-order approximation. These ideas are also discussed in \cite[pp.~26--28]{MSPE89}.
\par
The question arises whether this extension accelerates the process leading to (\ref{eq:3.9}),
and here we indicate why it does not achieve this objective. The essential feature of both (\ref{eq:1.5})
and the extension in \cite[2.4]{HL77} is that they are linear algebraic equations to determine $P$.
They do not involve $P'$.
Thus, in (\ref{eq:2.9}), both the corresponding definitions of $P_m$ yield a term $\rho^{-1}P^{'}_m$ which,
by (\ref{eq:3.7}), contributes expressions $W_{jm}$ satisfying (\ref{eq:3.8}).
Now, although we have allowed the possibility that $W_{1m}$ may be zero, there is
no reason to suppose that it is necessarily zero, and there is therefore nothing to be gained by departing from the simplest definition of $P_m$ based on (\ref{eq:1.5}) and (\ref{eq:1.6}).
\subsection{Other computational algorithms}
Here we indicate how our algorithm has a quite different purpose compared to the algorithms of \cite{DC82} and \cite{D92}, and it is convenient to refer specifically to the latter.
In \cite{D92}, the differential system is
\begin{displaymath}
Y'(x)=x^{-1}B(x)Y(x)
\end{displaymath}
where $x$ can be a complex variable,
\begin{displaymath}
B=
\left (
\begin{array}{ccccc}
0 & 1 & & & \\
& . & . & & \\
& & . & . & \\
& & & 0 & 1 \\
-b_n & . & . & -b_2 & -b_1 \\
\end{array}
\right )
\end{displaymath}
and each $b$ in the last row has a Laurent series
\begin{equation}
b(x)=({\rm const}.)x^c(1+a_1x^{-1}+...) \;\;\;(x \rightarrow \infty) \label{eq:6.1}
\end{equation}
with rational $c$. With $\infty$ as an irregular singular point, the solutions of the corresponding $n-$th
order differential equation have the asymptotic form
\begin{equation}
f(x) \{ 1 + o(1) \} \label{eq:6.2}
\end{equation}
where the dominant term $f(x)$ comprises the usual logarithmic and exponential factors.
The algorithm developed in \cite{D92} determines $f(x)$ from a knowledge of $B$. The algorithm of \cite{DC82}
also allows sub-dominant components of $f(x)$ to be computed.
In contrast, our algorithm is concerned with improvements to the $o(1)$ term in (\ref{eq:6.2}) and the construction of
sub-dominant terms as explained above in section 1 and at the end of section 4.
It is also the purpose of our paper to cover classes of coefficients which, unlike (\ref{eq:6.1}), contain periodic factors as well as powers of $x$. This was the subject of section 5.
\par
We are grateful to the referees for raising the issues which are covered in this section.
\section{Hyperbolicity Proof For HASWME}
\subsection{Axisymmetric system with radial velocity expanded}\label{ch:hyperbolicityproof(N,0)}
\begin{theorem}\label{AppendixN0TheoremMatrix}
The HASWME system matrix \(A_{HA}^{(N_r,0)} \in \mathbb{R}^{(N_r+3)\times (N_r+3)}\) is given by
\begin{equation}
A_{HA}^{(N_r,0)}=
\begin{pmatrix}
& 1 & & & & & & \\[6pt]
gh-v_{r,m}^2-\frac{1}{3}\alpha_1^2 & 2v_{r,m} & \frac{2}{3}\alpha_1 & & & & \\[6pt]
-2 v_{r,m}\alpha_1 & 2\alpha_1 & v_{r,m} & \frac{3}{5}\alpha_1 & & & & \\[6pt]
-\frac{2}{3}\alpha_1^2 & & \frac{1}{3}\alpha_1 & v_{r,m} & \ddots & & & \\[6pt]
& & & \ddots & \ddots & \frac{N_r+1}{2 N_r+1}\alpha_1 & \\[6pt]
& & & & \frac{N_r-1}{2N_r-1}\alpha_1 & v_{r,m} & \\[6pt]
-v_{r,m}v_{\theta,m} & v_{\theta,m} & & & & & v_{r,m}
\end{pmatrix},
\end{equation}
where all other entries are zero.
\end{theorem}
\begin{proof}
The matrix \(A_{HA}^{(N_r,0)}\) is obtained by computing \(A_A^{(N_r,0)}\), the full system matrix, and then setting the higher order moments to zero \cite{HSWME}. We derive the rows of the system matrix separately:
\textbf{1. Mass and radial momentum balance equations.}
The first two rows of the system matrix correspond to the mass balance and the radial momentum balance equations. It can be easily seen that the first two rows of the coefficient matrix are
\begin{align*}
\begin{pmatrix}
& 1 & & & & & \\[6pt]
gh-v_{r,m}^2-\sum_{i=1}^{N_r}\frac{1}{2i+1}\alpha_i^2 & 2v_{r,m} & \frac{2}{3} \alpha_1 & \frac{2}{5}\alpha_2 & \cdots & \frac{2}{2N_r+1}\alpha_{N_r} & 0
\end{pmatrix}.
\end{align*}
After setting the higher order moments to zero, we obtain
\begin{align*}
\begin{pmatrix}
& 1 & & & & & \\[6pt]
gh-v_{r,m}^2-\frac{1}{3}\alpha_1^2 & 2v_{r,m} & \frac{2}{3} \alpha_1 & & & &
\end{pmatrix}.
\end{align*}
\textbf{2. Angular momentum balance equation.}
The last row of the coefficient matrix corresponds to the angular momentum balance equation. Recall the flux in this equation $F_{N_r+3}:=hv_{r,m}v_{\theta,m}$. It can be easily verified that
\begin{equation*}
\frac{\partial F_{N_r+3}}{\partial h}=-v_{r,m}v_{\theta,m}, \quad
\frac{\partial F_{N_r+3}}{\partial (hv_{r,m})}=v_{\theta,m}, \quad
\frac{\partial F_{N_r+3}}{\partial (h\alpha_l)}=0, \quad
\frac{\partial F_{N_r+3}}{\partial (hv_{\theta,m})}=v_{r,m}.
\end{equation*}
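Writing the flux in the conserved variables, $F_{N_r+3}=(hv_{r,m})(hv_{\theta,m})/h$, these derivatives can be spot-checked with central finite differences (Python sketch; the state values are arbitrary):

```python
import numpy as np

# conserved variables q = (h, h*v_r, h*v_theta); flux F = h * v_r * v_theta
F = lambda q: q[1] * q[2] / q[0]

q = np.array([2.0, 3.0, -1.0])                 # sample state: h=2, v_r=1.5, v_theta=-0.5
vr, vth = q[1] / q[0], q[2] / q[0]

eps = 1e-7
grad = np.array([(F(q + eps * e) - F(q - eps * e)) / (2 * eps) for e in np.eye(3)])
assert np.allclose(grad, [-vr * vth, vth, vr], atol=1e-6)
```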
\textbf{3. The equations for \(\alpha_i,i=1,\ldots,N_r\)}.
The equation for the \(i\)th moment reads
\begin{equation*}
\frac{\partial(h\alpha_i)}{\partial t}+\frac{\partial F_i}{\partial r}=Q_i+P_i.
\end{equation*}
The system matrix \(A_{HA}^{(N_r,0)}\) is composed of a conservative and a non-conservative part.
\textbf{3.1 The conservative part \(\frac{\partial F_i}{\partial r}\).} Recall that
\begin{align*}
F_i&=h\left( 2v_{r,m}\alpha_i+\sum_{j,k=1}^{N_r}A_{ijk}\alpha_j\alpha_k \right)=h2v_{r,m}\alpha_i+h\sum_{j,k=1}^{N_r}(2i+1)\left(\int_0^1\phi_i \phi_j\phi_k d\zeta\right) \alpha_j\alpha_k.
\end{align*}
Consider the first term, \(h2v_{r,m}\alpha_i\). With \( \boldsymbol{\alpha}=(\alpha_1,\cdots,\alpha_{N_r})\), it can be easily seen (after setting the higher coefficients to zero) that this term leads to
\begin{equation*}
\frac{\partial (2hv_{r,m}\boldsymbol{\alpha})}{\partial V}=\begin{pmatrix}
-2v_{r,m}\alpha_1 & 2\alpha_1 & 2v_{r,m} & & & \\[6pt]
& & & \ddots & & \\[6pt]
& & & & 2v_{r,m} &
\end{pmatrix}.
\end{equation*}
Now, we look at the second term, with the triple Legendre integral. We call this term \(\Tilde{F_i}:=h\sum_{j,k=1}^{N_r}A_{ijk}\alpha_j\alpha_k\). In \cite{HSWME}, it is shown that if \(j=1\), the only cases that need to be considered are \(k=i-1\) (if \(i\geq 2\)) and \(k=i+1\). Analogously, if \(k=1\), the only cases we need to consider are \(j=i-1\) (if \(i\geq 2\)) and \(j=i+1\). According to these observations, we can distinguish four different situations.
\textbf{3.1.1 Case 1: \(i=1\)}.
If \(i=1\), there are two cases for the index triplet \((i,j,k)\), with \(j=1\) or \(k=1\), which lead to a non-zero term: $(1,1,2) \text{ and } (1,2,1)$.
So we can pull these cases outside of the sum and we get:
\begin{equation*}
\Tilde{F_1}=\sum_{j,k=1}^{N_r}A_{1jk}\alpha_j\alpha_k=2A_{112}h\alpha_1\alpha_2+h\sum_{j,k=2}^{N_r}A_{1jk}\alpha_j\alpha_k.
\end{equation*}
The factor \(2\) appears because both index triplets lead to the same term.
The derivative of the second term with respect to $h$ reads
\begin{equation*}
\frac{\partial \left(h\sum_{j,k=2}^{N_r}A_{1jk}\alpha_j\alpha_k\right)}{\partial h}=-\sum_{j,k=2}^{N_r}A_{1jk}\alpha_j\alpha_k.
\end{equation*}
After setting the higher order moments to zero, this term reduces to zero. The same can be observed when taking the derivative with respect to \(hv_{r,m}\), \(h\alpha_l\) (with \(l=1,\cdots,N_r\)) and \(hv_{\theta,m}\).
The derivatives of the first term read
\begin{equation*}
\frac{\partial(2A_{112}h\alpha_1\alpha_2)}{\partial(h\alpha_l)}=2A_{112}\alpha_1 \delta_{2,l}+2A_{112}\alpha_2 \delta_{1,l}.
\end{equation*}
When we set higher order moments to zero, the second term vanishes. We call the resulting entry \(c_1^F\). This entry corresponds to the derivative with respect to \(h\alpha_2\) and will be discussed later.
All other derivatives are zero, except the derivative with respect to \(h\), but we can easily see that this derivative reduces to zero when we set higher order moments to zero.
\textbf{3.1.2 Case 2: \(i=2\)}.
There are three cases for the index triplet \((i,j,k)\), with \(j=1\) or \(k=1\), which lead to a non-zero term: $(2,1,1), (2,1,3), \text{ and } (2,3,1)$.
We can pull these terms outside of the sum:
\begin{equation*}
\Tilde{F_2}=\sum_{j,k=1}^{N_r}A_{2jk}\alpha_j\alpha_k=A_{211}h\alpha_1^2+2A_{213}h\alpha_1\alpha_3+h\sum_{j,k=2}^{N_r}A_{2jk}\alpha_j\alpha_k.
\end{equation*}
Again, the derivatives of the last term all reduce to zero when setting the higher order moments to zero. Denoting the first and the second term by \(\underline{\Tilde{F}_2}:=A_{211}h\alpha_1^2+2A_{213}h\alpha_1\alpha_3\), we have:
\begin{align*}
\frac{\partial\underline{\Tilde{F}_2}}{\partial (h\alpha_l)}=2A_{211}\alpha_1\delta_{1,l}+2A_{213}\alpha_1\delta_{3,l}+2A_{213}\alpha_3\delta_{1,l}.
\end{align*}
Setting higher order moments to zero, this reduces to
\begin{equation*}
2A_{211}\alpha_1\delta_{1,l}+2A_{213}\alpha_1\delta_{3,l}.
\end{equation*}
The first term leads to an entry \(a_2^F\). The second term leads to an entry \(c_2^F\).
In the same way, one can easily see that
\begin{equation*}
\frac{\partial\underline{\Tilde{F}_2}}{\partial h}=-A_{211}\alpha_1^2-2A_{213}\alpha_1\alpha_3,\quad\frac{\partial\underline{\Tilde{F}_2}}{\partial (hv_{r,m})}=0,\quad\frac{\partial\underline{\Tilde{F}_2}}{\partial (hv_{\theta,m})}=0.
\end{equation*}
When higher order moments are set to zero, this only results in a term \(-\frac{2}{3}\alpha_1^2\) in the first column.
\vspace{3mm}
\textbf{3.1.3 Case 3: \(2<i\leq N_r-1\)}.
In this case, there are four possibilities for the index triplet \((i,j,k)\): $(i,1,i-1), (i,1,i+1), (i,i-1,1), \text{ and } (i,i+1,1)$.
We can pull these terms outside of the sum:
\begin{equation*}
\Tilde{F_i}=\sum_{j,k=1}^{N_r}A_{ijk}\alpha_j\alpha_k=2A_{i,1,i-1}h\alpha_1\alpha_{i-1}+2A_{i,1,i+1}h\alpha_1\alpha_{i+1}+h\sum_{j,k=2}^{N_r}A_{ijk}\alpha_j\alpha_k.
\end{equation*}
The derivatives of the last term all reduce to zero when setting the higher order moments to zero. Denoting the rest by \(\underline{\Tilde{F}_i}:=2A_{i,1,i-1}h\alpha_1\alpha_{i-1}+2A_{i,1,i+1}h\alpha_1\alpha_{i+1}\) yields
\begin{equation*}
\frac{\partial\underline{\Tilde{F}_i}}{\partial (h\alpha_l)}=2A_{i,1,i-1}\alpha_1\delta_{i-1,l}+2A_{i,1,i-1}\alpha_{i-1}\delta_{1,l} +2A_{i,1,i+1}\alpha_1\delta_{i+1,l}+2A_{i,1,i+1}\alpha_{i+1}\delta_{1,l}.
\end{equation*}
Setting higher order moments to zero, this reduces to
\begin{equation*}
2A_{i,1,i-1}\alpha_1\delta_{i-1,l}+2A_{i,1,i+1}\alpha_1\delta_{i+1,l}.
\end{equation*}
The first term leads to an entry \(a_i^F\). The second term leads to an entry \(c_i^F\).
For the other derivatives, it is again easy to see that
\begin{equation*}
\frac{\partial\underline{\Tilde{F}_i}}{\partial h}=-2A_{i,1,i-1}\alpha_1\alpha_{i-1}-2A_{i,1,i+1}\alpha_1\alpha_{i+1},\quad\frac{\partial\underline{\Tilde{F}_i}}{\partial (hv_{r,m})}=0,\quad\frac{\partial\underline{\Tilde{F}_i}}{\partial (hv_{\theta,m})}=0.
\end{equation*}
When higher order moments are set to zero, the derivative with respect to \(h\) reduces to zero.
\textbf{3.1.4 Case 4: \(i=N_r\)}.
In this case, there are two possibilities for the index triplet \((i,j,k)\): $(N_r,1,N_r-1) \text{ and } (N_r,N_r-1,1)$.
Analogously to the previous cases, it can be easily shown that this equation only leads to an entry \(a_{N_r}^F\) in the \((N_r-1)\)th column of the \((N_r+2)\)th row, the row corresponding to the equation for the \(N_r\)th moment.
Using orthogonality and recursion formulas (see also \cite{HSWME}), we have:
\begin{align*}
&a_i^F=\frac{2i}{2i-1}\alpha_1, \qquad i=2,\ldots,N_r,\\[8pt]
&c_i^F=\frac{2i+2}{2i+3}\alpha_1, \qquad i=1,\ldots,N_r-1.
\end{align*}
The entry \(a_i^F\) corresponds to a derivative with respect to \(h\alpha_{i-1}\) and is located on the first lower diagonal. The entry \(c_i^F\) corresponds to a derivative with respect to \(h\alpha_{i+1}\) and is located on the first upper diagonal, leading to the modified flux derivative \(\frac{\partial F}{\partial V}_{mod}\):
\begin{equation}
\frac{\partial F}{\partial V}_{mod}=
\begin{pmatrix}
& 1 & & & & & & \\[6pt]
gh-v_{r,m}^2-\frac{1}{3}\alpha_1^2 & 2v_{r,m} & \frac{2}{3}\alpha_1 & & & & \\[6pt]
-2 v_{r,m}\alpha_1 & 2\alpha_1 & 2v_{r,m} & c_1^F & & & & \\[6pt]
-\frac{2}{3}\alpha_1^2 & & a_2^F & 2v_{r,m} & \ddots & & & \\[6pt]
& & & \ddots & \ddots & c_{N_r-1}^F & \\[6pt]
& & & & a_{N_r}^F & 2v_{r,m} & \\[6pt]
-v_{r,m}v_{\theta,m} & v_{\theta,m} & & & & & v_{r,m}
\end{pmatrix}
\end{equation}
\textbf{3.2 The non-conservative part \(Q_i\)}.
Recall the non-conservative part in the equation for \(h\alpha_i\):
\begin{equation}\label{proof_Nr_nc}
Q_i=v_{r,m}\frac{\partial(h\alpha_i)}{\partial r}-\sum_{j,k=1}^{N_r}B_{ijk}\alpha_k\frac{\partial(h\alpha_j)}{\partial r}.
\end{equation}
Clearly, the first term leads to the term \(v_{r,m}\) on the diagonal of \(Q\). The second term is simplified when we set higher order moments to zero:
\begin{equation*}
-\sum_{j=1}^{N_r}(2i+1)\left(\int_0^1\phi_i'\left( \int_0^1\phi_jd\hat{\zeta}\right)\phi_1d\zeta\right)\alpha_1\frac{\partial(h\alpha_j)}{\partial r}.
\end{equation*}
It can be proved that \(B_{ij1}=0\) for \(|i-j|\neq 1\), except for the first two rows. \(Q_i\) does not depend on any derivative with respect to \(h,hv_{r,m}\) and \(hv_{\theta,m}\), so the entries of the corresponding columns will be zero. The entries of the first two rows and the last row are also all zero; the mass and momentum balances do not contain non-conservative terms. By considering three different cases (\(i=1\), \(1<i\leq N_r-1\) and \(i=N_r\)) in an analogous way as for the conservative terms, we can see that the second term in \eqref{proof_Nr_nc} only leads to non-zero entries \(a_i^Q\) on the first lower diagonal and non-zero entries \(c_i^Q\) on the first upper diagonal:
\begin{align*}
&a_i^Q=\frac{i+1}{2i-1}\alpha_1 \qquad i=2,\ldots,N_r,\\[8pt]
&c_i^Q=\frac{i}{2i+3}\alpha_1 \qquad i=1,\ldots,N_r-1.
\end{align*}
This results in the following non-conservative part \(Q_{mod}\):
\begin{equation}\label{nonconsflux}
Q_{mod}=
\begin{pmatrix}
& & & & & & & \\[6pt]
& & & & & & & \\[6pt]
& & & v_{r,m} & c_1^Q & & & \\[6pt]
& & & a_2^Q & v_{r,m} & \ddots & & \\[6pt]
& & & & \ddots & \ddots & c_{N_r-1}^Q & \\[6pt]
& & & & & a_{N_r}^Q & v_{r,m} & \\[6pt]
& & & & & & &
\end{pmatrix}
\end{equation}
The modified system matrix is given by
\begin{equation*}
A_{HA}^{(N_r,0)}=\frac{\partial F}{\partial V}_{mod}-Q_{mod}.
\end{equation*}
This completes the proof.
\end{proof}
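The cancellation used at the end of the proof, \(a_i=a_i^F-a_i^Q\) and \(c_i=c_i^F-c_i^Q\), can be confirmed with exact rational arithmetic: it reproduces the subdiagonal \(\frac{i-1}{2i-1}\alpha_1\) and superdiagonal \(\frac{i+2}{2i+3}\alpha_1\) of the system matrix (Python sketch; \(\alpha_1\) is factored out of all entries):

```python
from fractions import Fraction

# coefficients of alpha_1 in the flux Jacobian (F) and non-conservative part (Q)
aF = lambda i: Fraction(2 * i, 2 * i - 1)
cF = lambda i: Fraction(2 * i + 2, 2 * i + 3)
aQ = lambda i: Fraction(i + 1, 2 * i - 1)
cQ = lambda i: Fraction(i, 2 * i + 3)

for i in range(2, 20):
    assert aF(i) - aQ(i) == Fraction(i - 1, 2 * i - 1)   # subdiagonal of A_HA
for i in range(1, 20):
    assert cF(i) - cQ(i) == Fraction(i + 2, 2 * i + 3)   # superdiagonal of A_HA
```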
\begin{theorem}\label{Appendix20CharPol}
The HASWME system matrix \(A_{HA}^{(N_r,0)}\in \mathbb{R}^{(N_r+3)\times (N_r+3)}\) has the following characteristic polynomial:
\begin{equation}
\chi_{A_{HA}^{(N_r,0)}}(\lambda)=(\lambda-v_{r,m})\cdot\left( (\lambda -v_{r,m})^2-gh-\alpha_1^2 \right)\cdot \chi_{A_2^{(N_r,0)}}(\lambda -v_{r,m}),
\end{equation}
where \(A_2^{(N_r,0)}\in \mathbb{R}^{N_r\times N_r}\) is defined as
\begin{equation}\label{appendix20A2}
A_2^{(N_r,0)}=
\begin{pmatrix}
& c_1 & & \\[6pt]
a_2 & & \ddots & \\[6pt]
& \ddots & & c_{N_r-1} \\[6pt]
& & a_{N_r} &
\end{pmatrix},
\end{equation}
where
\begin{align}
a_i&=\frac{i-1}{2i-1}\alpha_1, \qquad i=2,\ldots,N_r,\\[5pt]
c_i&=\frac{i+2}{2i+3}\alpha_1, \qquad i=1,\ldots,N_r-1,
\end{align}
are the values below and above the diagonal, respectively.
\end{theorem}
\begin{proof}
The characteristic polynomial of the modified system matrix \(A_{HA}^{(N_r,0)}\) is by definition
\begin{equation*}
\chi_{A_{HA}^{(N_r,0)}}(\lambda)=\det(A_{HA}^{(N_r,0)}-\lambda I) :=\lvert A_{HA}^{(N_r,0)}-\lambda I \rvert.
\end{equation*}
First, we expand the determinant with respect to the last column:
\small
\begin{multline*}
\qquad\lvert A_{HA}^{(N_r,0)}-\lambda I \rvert=\\[7pt]
\begin{vmatrix}
-\lambda & 1 & & & & & & \\[6pt]
gh-v_{r,m}^2-\frac{1}{3}\alpha_1^2 & 2v_{r,m}-\lambda & \frac{2}{3}\alpha_1 & & & & \\[6pt]
-2 v_{r,m}\alpha_1 & 2\alpha_1 & v_{r,m}-\lambda & \frac{3}{5}\alpha_1 & & & & \\[6pt]
-\frac{2}{3}\alpha_1^2 & & \frac{1}{3}\alpha_1 & v_{r,m}-\lambda & \ddots & & & \\[6pt]
& & & \ddots & \ddots & \frac{N_r+1}{2 N_r+1}\alpha_1 & \\[6pt]
& & & & \frac{N_r-1}{2N_r-1}\alpha_1 & v_{r,m}-\lambda & \\[6pt]
-v_{r,m}v_{\theta,m} & v_{\theta,m} & & & & & v_{r,m}-\lambda\\
\end{vmatrix}\\[11pt]
=(v_{r,m}-\lambda)\cdot
\underbrace{
\begin{vmatrix}
-\lambda & 1 & & & & & \\[6pt]
gh-v_{r,m}^2-\frac{1}{3}\alpha_1^2 & 2v_{r,m}-\lambda & \frac{2}{3}\alpha_1 & & & \\[6pt]
-2 v_{r,m}\alpha_1 & 2\alpha_1 & v_{r,m}-\lambda & \frac{3}{5}\alpha_1 & & & \\[6pt]
-\frac{2}{3}\alpha_1^2 & & \frac{1}{3}\alpha_1 & v_{r,m}-\lambda & \ddots & & \\[6pt]
& & & \ddots & \ddots & \frac{N_r+1}{2 N_r+1}\alpha_1 \\[6pt]
& & & & \frac{N_r-1}{2N_r-1}\alpha_1 & v_{r,m}-\lambda
\end{vmatrix}.
}_{=:\lvert A_1^{(N_r,0)}-\lambda I \rvert}
\end{multline*}
\normalsize
In \cite{HSWME}, it is proved that
\begin{equation*}
\chi_{A_1^{(N_r,0)}}(\lambda)=\left( (\lambda -v_{r,m})^2-gh-\alpha_1^2 \right)\cdot \chi_{A_2^{(N_r,0)}}(\lambda -v_{r,m}),
\end{equation*}
with \(A_2^{(N_r,0)}\) as in \eqref{appendix20A2}. Note that in \cite{HSWME}, \(u_m\) is used instead of \(v_{r,m}\).
This completes the proof.
\end{proof}
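As a numerical sanity check (not part of the proof), the factorization in Theorem \ref{Appendix20CharPol} can be verified for small \(N_r\). The following sketch, which assumes numpy and uses arbitrary illustrative values for \(v_{r,m}\), \(v_{\theta,m}\), \(g\), \(h\) and \(\alpha_1\), assembles \(A_{HA}^{(N_r,0)}\) entry by entry from the determinant displayed above and compares its spectrum with the roots of the three factors.

```python
import numpy as np

def A_HA(Nr, v, vth, g, h, a1):
    # Assemble A_HA^{(Nr,0)} following the determinant displayed in the proof.
    n = Nr + 3
    A = np.zeros((n, n))
    A[0, 1] = 1.0                                   # mass balance row
    A[1, 0] = g * h - v**2 - a1**2 / 3              # radial momentum row
    A[1, 1] = 2 * v
    A[1, 2] = 2 * a1 / 3
    A[2, 0] = -2 * v * a1                           # h*alpha_1 row, first column
    A[2, 1] = 2 * a1
    if Nr >= 2:
        A[3, 0] = -2 * a1**2 / 3                    # h*alpha_2 row, first column
    for i in range(1, Nr + 1):                      # rows for h*alpha_i
        A[i + 1, i + 1] = v
        if i >= 2:                                  # a_i = (i-1)/(2i-1)*a1
            A[i + 1, i] = (i - 1) / (2 * i - 1) * a1
        if i <= Nr - 1:                             # c_i = (i+2)/(2i+3)*a1
            A[i + 1, i + 2] = (i + 2) / (2 * i + 3) * a1
    A[-1, 0] = -v * vth                             # angular momentum row
    A[-1, 1] = vth
    A[-1, -1] = v
    return A

Nr, v, vth, g, h, a1 = 4, 0.4, 0.2, 9.81, 1.3, 0.6
A = A_HA(Nr, v, vth, g, h, a1)
# A_2 block: zero diagonal, a_i below, c_i above
A2 = np.diag([(i - 1) / (2 * i - 1) * a1 for i in range(2, Nr + 1)], -1) \
   + np.diag([(i + 2) / (2 * i + 3) * a1 for i in range(1, Nr)], 1)
expected = np.concatenate(([v, v + np.sqrt(g * h + a1**2), v - np.sqrt(g * h + a1**2)],
                           v + np.linalg.eigvals(A2).real))
assert np.allclose(np.sort(np.linalg.eigvals(A).real), np.sort(expected), atol=1e-8)
assert np.allclose(np.linalg.eigvals(A).imag, 0.0, atol=1e-8)
```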
\begin{corollary}\label{Appendix20Eigenvalues1}
The eigenvalues of the one-dimensional radial system matrix are also eigenvalues of the modified axisymmetric hyperbolic system matrix \(A_{HA}^{(N_r,0)}\). Moreover, the only additional eigenvalue of \(A_{HA}^{(N_r,0)}\) is \(\lambda=v_{r,m}\).
\end{corollary}
\begin{proof}
This follows immediately from Theorem \ref{Appendix20CharPol}.
\end{proof}
\begin{corollary}\label{Appendix20Eigenvalues2}
The eigenvalues of the modified hyperbolic axisymmetric system matrix \(A_{HA}^{(N_r,0)} \in \mathbb{R}^{(N_r+3)\times (N_r+3)}\) are the real numbers
\begin{equation*}
\lambda_1=v_{r,m}, \quad \lambda_{2,3}=v_{r,m}\pm \sqrt{gh+\alpha_1^2},
\lambda_{i+3}=v_{r,m}+b_i\cdot\alpha_1, \qquad i=1,\ldots,N_r,
\end{equation*}
with \(b_i\cdot \alpha_1\) the real eigenvalues of \(A_2^{(N_r,0)}\), defined in \eqref{appendix20A2}, and where all \(b_i\) are pairwise distinct.
\end{corollary}
\begin{proof}
According to Theorem \ref{Appendix20CharPol}, the characteristic polynomial of \(A_{HA}^{(N_r,0)}\) is given by:
\begin{equation*}
\chi_{A_{HA}^{(N_r,0)}}(\lambda)=(\lambda-v_{r,m})\cdot\left( (\lambda -v_{r,m})^2-gh-\alpha_1^2 \right)\cdot \chi_{A_2^{(N_r,0)}}(\lambda -v_{r,m}).
\end{equation*}
From the first factor, we can immediately see that $\lambda_1=v_{r,m}$.
From the second factor, we easily obtain $\lambda_{2,3}=v_{r,m}\pm \sqrt{gh+\alpha_1^2}$.
In \cite{HSWME}, it is shown that the roots of \(\chi_{A_2^{(N_r,0)}}(\lambda -v_{r,m})\) have the form \(\lambda_i=v_{r,m}+b_i\cdot \alpha_1\), where all \(b_i\) are pairwise distinct. Moreover, in \cite{equilibriumStability} it is proved that the roots are real.
\end{proof}
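The realness and distinctness of the \(b_i\) can also be spot-checked numerically. The sketch below (assuming numpy; the value of \(\alpha_1\) is arbitrary) builds \(A_2^{(N_r,0)}\) from \eqref{appendix20A2} for several orders \(N_r\); since the products of corresponding off-diagonal entries are positive, the matrix is similar to a symmetric tridiagonal matrix with nonzero off-diagonals, so its eigenvalues are real and simple.

```python
import numpy as np

def A2(Nr, a1):
    # Tridiagonal matrix with zero diagonal from Eq. (appendix20A2):
    # a_i = (i-1)/(2i-1)*a1 below and c_i = (i+2)/(2i+3)*a1 above the diagonal.
    return np.diag([(i - 1) / (2 * i - 1) * a1 for i in range(2, Nr + 1)], -1) \
         + np.diag([(i + 2) / (2 * i + 3) * a1 for i in range(1, Nr)], 1)

a1 = 0.9
for Nr in range(2, 9):
    lam = np.linalg.eigvals(A2(Nr, a1))
    assert np.allclose(lam.imag, 0.0, atol=1e-8)   # real eigenvalues
    b = np.sort(lam.real / a1)
    assert np.min(np.diff(b)) > 1e-10              # pairwise distinct b_i
```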
\subsection{Axisymmetric system with full velocity expanded}\label{ch:hyperbolicityproof(N,N)}
\begin{theorem}\label{Appendix22TheoremMatrix}
The HASWME system matrix \(A_{HA}^{(N,N)} \in \mathbb{R}^{(2N+3)\times (2N+3)}\) is given by
\begin{equation}\label{appendix22systemMatrix}
A_{HA}^{(N,N)}=
\begin{pmatrix}
\mathbf{A}^{(N,N)} & \mathbf{0}^{(N,N)} \\
\mathbf{B}^{(N,N)} & \mathbf{C}^{(N,N)}
\end{pmatrix},
\end{equation}
where the blocks are given by
\begin{equation*}
\mathbf{A}^{(N,N)}=
\begin{pmatrix}
& 1 & & & & \\[6pt]
gh-v_{r,m}^2-\frac{1}{3}\alpha_1^2 & 2v_{r,m} & \frac{2}{3}\alpha_1 & & & \\[6pt]
-2 v_{r,m}\alpha_1 & 2\alpha_1 & v_{r,m} & \frac{3}{5}\alpha_1 & & & \\[6pt]
-\frac{2}{3}\alpha_1^2 & & \frac{1}{3}\alpha_1 & v_{r,m} & \ddots & & \\[6pt]
& & & \ddots & \ddots & \frac{N+1}{2 N+1}\alpha_1 \\[6pt]
& & & & \frac{N-1}{2N-1}\alpha_1 & v_{r,m}
\end{pmatrix},
\end{equation*}
\begin{align*}
\mathbf{B}^{(N,N)}&=
\begin{pmatrix}
-v_{r,m}v_{\theta,m} -\frac{\alpha_1\gamma_1}{3} & v_{\theta,m} & \frac{\gamma_1}{3} & & & \\[6pt]
-v_{r,m}\gamma_1-v_{\theta,m}\alpha_1 & \gamma_1 & & \frac{2}{5}\gamma_1 & & \\[6pt]
-\frac{2}{3}\alpha_1\gamma_1 & & -\frac{1}{3}\gamma_1 & & \ddots & \\[6pt]
& & & \ddots & \ddots & \frac{N}{2N+1}\gamma_1 \\[6pt]
& & & & -\frac{1}{2N-1}\gamma_1 &
\end{pmatrix},\\[20pt]
\mathbf{C}^{(N,N)}&=
\begin{pmatrix}
v_{r,m} & \frac{\alpha_1}{3} & & & & \\[6pt]
\alpha_1 & v_{r,m} & \frac{2}{5}\alpha_1 & & & \\[6pt]
& \frac{2}{3}\alpha_1 & v_{r,m} & \ddots & & \\[6pt]
& & \ddots & \ddots & \frac{N}{2N+1}\alpha_1 \\[6pt]
& & & \frac{N}{2N-1}\alpha_1 & v_{r,m}
\end{pmatrix},
\end{align*}
with \(\mathbf{A}^{(N,N)} \in \mathbb{R}^{(N+2)\times (N+2)}\), \(\mathbf{B}^{(N,N)}\in \mathbb{R}^{(N+1)\times (N+2)}\) and \(\mathbf{C}^{(N,N)} \in \mathbb{R}^{(N+1)\times (N+1)}\) and where all other entries are zero. \(\mathbf{0}^{(N,N)} \in \mathbb{R}^{(N+2)\times (N+1)}\) is a zero matrix.
\end{theorem}
\begin{proof}
The matrix \(A_{HA}^{(N,N)}\) is obtained by computing \(A_A^{(N,N)}\), the full system matrix, and then setting the higher order moments to zero similar to \cite{HSWME}. The computation of the matrix entries is analogous to the proof of Theorem \ref{AppendixN0TheoremMatrix}, so we proceed in the same way.
\textbf{1. First \(N+2\) equations.}
The first \(N+2\) equations correspond to the mass and the radial momentum balance equations and the equations for \(h\alpha_i\) (\(i=1,\ldots,N\)). The conservative flux and the non-conservative flux in these equations are exactly the same as for the \((N_r,0)\)th order systems, so we can use the results of Theorem \ref{AppendixN0TheoremMatrix}. Furthermore, the conservative flux and the non-conservative flux in all these equations do not depend on the angular moments \(h\gamma_i\) (\(i=1,\ldots,N\)). With these observations, it is clear that the first \(N+2\) rows of the matrix are given by
\begin{equation*}
\begin{pmatrix}
& 1 & 0 & \cdots & & & & & 0 \\[6pt]
gh-v_{r,m}^2-\frac{1}{3}\alpha_1^2 & 2v_{r,m} & \frac{2}{3}\alpha_1 & 0 & \cdots & & & & 0\\[6pt]
-2 v_{r,m}\alpha_1 & 2\alpha_1 & v_{r,m} & \frac{3}{5}\alpha_1 & 0 & \cdots & & & 0\\[6pt]
-\frac{2}{3}\alpha_1^2 & & \frac{1}{3}\alpha_1 & v_{r,m} & \ddots & 0 & \cdots & & 0\\[6pt]
& & & \ddots & \ddots & \frac{N+1}{2 N+1}\alpha_1 & 0 & \cdots & 0\\[6pt]
& & & & \frac{N-1}{2N-1}\alpha_1 & v_{r,m} & 0 & \cdots & 0
\end{pmatrix}.
\end{equation*}
\textbf{2. Angular momentum balance equation.}
The \((N+3)\)th row of the coefficient matrix corresponds to the angular momentum balance equation. Recall that the flux in this equation is
\begin{equation*}
F_{N+3}:=hv_{r,m}v_{\theta,m}+h\sum_{j=1}^{N}\frac{\alpha_j\gamma_j}{2j+1}.
\end{equation*}
It can be easily verified that
\begin{align*}
\frac{\partial F_{N+3}}{\partial h}&=-v_{r,m}v_{\theta,m}-\sum_{j=1}^{N}\frac{\alpha_j\gamma_j}{2j+1}\quad\myeq\quad -v_{r,m}v_{\theta,m}-\frac{\alpha_1\gamma_1}{3},\\[6pt]
\frac{\partial F_{N+3}}{\partial (hv_{r,m})}&=v_{\theta,m}\quad\myeq\quad v_{\theta,m},\qquad
\frac{\partial F_{N+3}}{\partial (h\alpha_l)}=\frac{\gamma_l}{2l+1}\quad\myeq\quad \frac{\gamma_l}{2l+1}\delta_{1,l},\\[6pt]
\frac{\partial F_{N+3}}{\partial (hv_{\theta,m})}&=v_{r,m}\quad\myeq\quad v_{r,m},\qquad
\frac{\partial F_{N+3}}{\partial (h\gamma_l)}=\frac{\alpha_l}{2l+1}\quad\myeq\quad \frac{\alpha_l}{2l+1}\delta_{1,l},
\end{align*}
where \(\myeq\) denotes setting the higher order moments to zero.
\textbf{3. The equations for \(\boldsymbol{\gamma_i},i=1,\ldots,N\)}.
The equation for the \(i\)th moment is given by
\begin{equation*}
\frac{\partial(h\gamma_i)}{\partial t}+\frac{\partial F_i}{\partial r}=Q_i+P_i.
\end{equation*}
The system matrix \(A_{HA}^{(N,N)}\) is composed of a conservative and a non-conservative part.
\textbf{3.1 The conservative part \(\frac{\partial F_i}{\partial r}\)}.
Recall that
\begin{align}
F_i&=h\left( v_{r,m}\gamma_i+v_{\theta,m}\alpha_i+\sum_{j,k=1}^{N}A_{ijk}\alpha_j\gamma_k \right)\\
&=hv_{r,m}\gamma_i+hv_{\theta,m}\alpha_i+h\sum_{j,k=1}^{N}(2i+1)\left(\int_0^1\phi_i \phi_j\phi_k d\zeta\right) \alpha_j\gamma_k. \label{consflux}
\end{align}
Consider the first term in the right-hand side of Equation \eqref{consflux}, \(hv_{r,m}\gamma_i\). Clearly, this term does not depend on \(\alpha_l\) (\(l=1,\ldots,N\)) or on \(v_{\theta,m}\), so all the derivatives with respect to \(h\alpha_l\) and \(hv_{\theta,m}\) will be zero. Furthermore, we have:
\begin{equation*}
\frac{\partial hv_{r,m}\gamma_i}{\partial h}=-v_{r,m}\gamma_i, \quad\frac{\partial hv_{r,m}\gamma_i}{\partial (hv_{r,m})}=\gamma_i, \quad\frac{\partial hv_{r,m}\gamma_i}{\partial (h\gamma_l)}=v_{r,m}\delta_{i,l}.
\end{equation*}
Then we consider the second term in the right-hand side of Equation \eqref{consflux}, \(hv_{\theta,m}\alpha_i\). This term does not depend on \(\gamma_l\) (\(l=1,\ldots,N\)) or on \(v_{r,m}\), so all the derivatives with respect to \(h\gamma_l\) and with respect to \(hv_{r,m}\) will be zero. Furthermore, we have:
\begin{equation*}
\frac{\partial hv_{\theta,m}\alpha_i}{\partial h}=-v_{\theta,m}\alpha_i, \quad\frac{\partial hv_{\theta,m}\alpha_i}{\partial (hv_{\theta,m})}=\alpha_i, \quad\frac{\partial hv_{\theta,m}\alpha_i}{\partial (h\alpha_l)}=v_{\theta,m}\delta_{i,l}.
\end{equation*}
Introducing the notation
\begin{equation*}
V_{\boldsymbol{\alpha}}:=(h,hv_{r,m},h\alpha_1,\cdots,h\alpha_N),\quad V_{\boldsymbol{\gamma}}:=(hv_{\theta,m},h\gamma_1,\cdots,h\gamma_N),
\end{equation*}
with \( \boldsymbol{\alpha}=(\alpha_1,\cdots,\alpha_N)\) and with \( \boldsymbol{\gamma}=(\gamma_1,\cdots,\gamma_N)\), we obtain
\begin{equation*}
\frac{\partial (hv_{r,m}\boldsymbol{\gamma}+hv_{\theta,m}\boldsymbol{\alpha})}{\partial V_{\boldsymbol{\alpha}}}\quad\myeq\quad \begin{pmatrix}
-v_{r,m}\gamma_1-v_{\theta,m}\alpha_1 & \gamma_1 & v_{\theta,m} & & \\[6pt]
& & & \ddots & \\[6pt]
& & & & v_{\theta,m}
\end{pmatrix},
\end{equation*}
\begin{equation*}
\frac{\partial (hv_{r,m}\boldsymbol{\gamma}+hv_{\theta,m}\boldsymbol{\alpha})}{\partial V_{\boldsymbol{\gamma}}}\quad\myeq\quad \begin{pmatrix}
\alpha_1 & v_{r,m} & & \\[6pt]
& & \ddots & \\[6pt]
& & & v_{r,m}
\end{pmatrix}.
\end{equation*}
We denote the last term of the right-hand side of Equation \eqref{consflux} with the triple Legendre integral as \(\Tilde{F_i}:=h\sum_{j,k=1}^{N}A_{ijk}\alpha_j\gamma_k\). In \cite{HSWME}, it is shown that if \(j=1\), the only cases that need to be considered are \(k=i-1\) (if \(i\geq 2\)) and \(k=i+1\). Analogously, if \(k=1\), the only cases we need to consider are \(j=i-1\) (if \(i\geq 2\)) and \(j=i+1\). Analogously to the corresponding section in the proof of Theorem \ref{AppendixN0TheoremMatrix}, we can consider four different situations.
\textbf{3.1.1 Case 1: \(i=1\).}
If \(i=1\), there are two cases for the index triplet \((i,j,k)\), with \(j=1\) or \(k=1\), which lead to a non-zero term: $(1,1,2) \text{ and } (1,2,1)$.
So we can pull these cases outside of the sum and we get:
\begin{equation*}
\Tilde{F_1}=h\sum_{j,k=1}^{N}A_{1jk}\alpha_j\gamma_k=A_{112}h\alpha_1\gamma_2+A_{121}h\alpha_2\gamma_1+h\sum_{j,k=2}^{N}A_{1jk}\alpha_j\gamma_k.
\end{equation*}
The derivative of the last term with respect to $h$ reads
\begin{equation*}
\frac{\partial \left(h\sum_{j,k=2}^{N}A_{1jk}\alpha_j\gamma_k\right)}{\partial h}=-\sum_{j,k=2}^{N}A_{1jk}\alpha_j\gamma_k.
\end{equation*}
After setting the higher order moments to zero, this term reduces to zero. The same can be observed when taking the derivative with respect to \(hv_{r,m}\), \(h\alpha_l\) and \(h\gamma_l\) (\(l=1,\ldots,N\)) and \(hv_{\theta,m}\).
The derivatives of the first and second term, denoted \(\underline{\Tilde{F_1}}:=A_{112}h\alpha_1\gamma_2+A_{121}h\alpha_2\gamma_1\), are
\begin{align*}
\frac{\partial\underline{\Tilde{F_1}}}{\partial(h\alpha_l)}&=A_{112}\gamma_2 \delta_{1,l}+A_{121}\gamma_1 \delta_{2,l}\quad \myeq \quad A_{121}\gamma_1 \delta_{2,l},\\[6pt]
\frac{\partial\underline{\Tilde{F_1}}}{\partial(h\gamma_l)}&=A_{112}\alpha_1 \delta_{2,l}+A_{121}\alpha_2 \delta_{1,l}\quad \myeq \quad A_{112}\alpha_1 \delta_{2,l}.
\end{align*}
We denote the resulting entries \(_{\alpha}c_1^F\) and \(_{\gamma}c_1^F\), respectively.
All other derivatives are zero, except for the derivative with respect to \(h\), but we can easily see that this derivative reduces to zero when we set higher order moments to zero.
\textbf{3.1.2 Case 2: \(i=2\)}.
Now, there are three cases for the index triplet \((i,j,k)\), with \(j=1\) or \(k=1\), which lead to a non-zero term: $(2,1,1), (2,1,3), \text{ and } (2,3,1)$.
We can pull these terms outside of the sum:
\begin{equation}
\Tilde{F_2}=\sum_{j,k=1}^{N}A_{ijk}\alpha_j\gamma_k
=A_{211}h\alpha_1\gamma_1+A_{213}h\alpha_1\gamma_3+A_{231}h\alpha_3\gamma_1+h\sum_{j,k=2}^{N}A_{ijk}\alpha_j\gamma_k.\label{consflux2}
\end{equation}
Again, the derivatives of the last term in the right-hand side of Equation \eqref{consflux2} all reduce to zero when setting the higher order moments to zero. Denoting the first three terms by \(\underline{\Tilde{F}_2}:=A_{211}h\alpha_1\gamma_1+A_{213}h\alpha_1\gamma_3+A_{231}h\alpha_3\gamma_1\), we obtain
\begin{equation*}
\frac{\partial\underline{\Tilde{F}_2}}{\partial (h\alpha_l)}\quad \myeq \quad A_{211}\gamma_1\delta_{1,l}+A_{231}\gamma_1\delta_{3,l},\quad \frac{\partial\underline{\Tilde{F}_2}}{\partial (h\gamma_l)}\quad \myeq \quad A_{211}\alpha_1\delta_{1,l}+A_{213}\alpha_1\delta_{3,l}.
\end{equation*}
Considering the left equation, the first term leads to an entry \(_{\alpha}a_2^F\) and the second term leads to an entry \(_{\alpha}c_2^F\). Considering the right equation, the first term leads to an entry \(_{\gamma}a_2^F\) and the second term leads to an entry \(_{\gamma}c_2^F\).
Regarding the other derivatives, one can easily see that
\begin{equation*}
\frac{\partial\underline{\Tilde{F}_2}}{\partial h}\quad \myeq \quad -A_{211}\alpha_1\gamma_1,~ \frac{\partial\underline{\Tilde{F}_2}}{\partial (hv_{r,m})}=0,~ \frac{\partial\underline{\Tilde{F}_2}}{\partial (hv_{\theta,m})}=0.
\end{equation*}
\textbf{3.1.3 Case 3: \(2<i\leq N-1\)}.
In this case, there are four possibilities for the index triplet \((i,j,k)\): $(i,1,i-1), (i,1,i+1), (i,i-1,1), \text{ and } (i,i+1,1)$.
We can pull these terms outside of the sum:
\begin{multline}
\Tilde{F_i}=\sum_{j,k=1}^{N}A_{ijk}\alpha_j\gamma_k\\
=A_{i,1,i-1}h\alpha_1\gamma_{i-1}+A_{i,1,i+1}h\alpha_1\gamma_{i+1}+A_{i,i-1,1}h\alpha_{i-1}\gamma_1 +A_{i,i+1,1}h\alpha_{i+1}\gamma_1+h\sum_{j,k=2}^{N}A_{ijk}\alpha_j\gamma_k. \label{consflux3}
\end{multline}
The derivatives of the last term of the right-hand side of Equation \eqref{consflux3} all reduce to zero when setting the higher order moments to zero. Denoting the other terms by \(\underline{\Tilde{F}_i}:=A_{i,1,i-1}h\alpha_1\gamma_{i-1}+A_{i,1,i+1}h\alpha_1\gamma_{i+1}+A_{i,i-1,1}h\alpha_{i-1}\gamma_1+A_{i,i+1,1}h\alpha_{i+1}\gamma_1\), we obtain
\begin{align*}
\frac{\partial\underline{\Tilde{F}_i}}{\partial (h\alpha_l)}\quad &\myeq \quad A_{i,i-1,1}\gamma_1\delta_{i-1,l}+A_{i,i+1,1}\gamma_1\delta_{i+1,l},\\[6pt]
\frac{\partial\underline{\Tilde{F}_i}}{\partial (h\gamma_l)}\quad &\myeq \quad A_{i,1,i-1}\alpha_1\delta_{i-1,l}+A_{i,1,i+1}\alpha_1\delta_{i+1,l}.
\end{align*}
Considering the first equation, the first term leads to an entry \(_{\alpha}a_i^F\), while the second term leads to an entry \(_{\alpha}c_i^F\). Analogously, the first term of the second equation leads to an entry \(_{\gamma}a_i^F\) and the second term of the second equation leads to an entry \(_{\gamma}c_i^F\).
To summarize, the entries \(_{\alpha}a_i^F\) and \(_{\alpha}c_i^F\) correspond to the derivatives of the conservative flux in the equation for \(\gamma_i\) with respect to \(h\alpha_{i-1}\) and \(h\alpha_{i+1}\), respectively. The entries \(_{\gamma}a_i^F\) and \(_{\gamma}c_i^F\) correspond to the derivatives with respect to \(h\gamma_{i-1}\) and \(h\gamma_{i+1}\), respectively.
Furthermore, we find:
\begin{equation*}
\frac{\partial\underline{\Tilde{F}_i}}{\partial h}\quad \myeq \quad 0,\quad\frac{\partial\underline{\Tilde{F}_i}}{\partial (hv_{r,m})}\quad \myeq \quad0,\quad\frac{\partial\underline{\Tilde{F}_i}}{\partial (hv_{\theta,m})}\quad \myeq \quad0.\\
\end{equation*}
\textbf{3.1.4 Case 4: \(i=N\)}.
In this case, there are two possibilities for the index triplet \((i,j,k)\): $(N,1,N-1) \text{ and } (N,N-1,1)$.
Analogously to the previous cases, it can be easily shown that this case only leads to entries \(_{\alpha}a_N^F\), \(_{\alpha}c_N^F\), \(_{\gamma}a_N^F\) and \(_{\gamma}c_N^F\).
Using orthogonality and recursion formulas, we have:
\begin{align*}
&_{\alpha}a_i^F=\frac{i}{2i-1}\gamma_1 \qquad \text{and} \qquad _{\gamma}a_i^F=\frac{i}{2i-1}\alpha_1, \qquad i=2,\ldots,N,\\[8pt]
&_{\alpha}c_i^F=\frac{i+1}{2i+3}\gamma_1 \qquad \text{and} \qquad _{\gamma}c_i^F=\frac{i+1}{2i+3}\alpha_1, \qquad i=1,\ldots,N-1.
\end{align*}
In conclusion, we have for the modified conservative coefficient matrix:
\begin{equation*}
\frac{\partial F}{\partial V_{\alpha}}_{mod}=
\begin{pmatrix}
& 1 & & & & & \\[6pt]
gh-v_{r,m}^2-\frac{1}{3}\alpha_1^2 & 2v_{r,m} & \frac{2}{3}\alpha_1 & & & \\[6pt]
-2 v_{r,m}\alpha_1 & 2\alpha_1 & 2v_{r,m} & c_1^F & & & \\[6pt]
-\frac{2}{3}\alpha_1^2 & & a_2^F & 2v_{r,m} & \ddots & & \\[6pt]
& & & \ddots & \ddots & c_{N-1}^F & \\[6pt]
& & & & a_N^F & 2v_{r,m} \\[6pt]
-v_{r,m}v_{\theta,m}-\frac{\alpha_1\gamma_1}{3} & v_{\theta,m} & \frac{\gamma_1}{3} & & & \\[6pt]
-v_{r,m}\gamma_1-v_{\theta,m}\alpha_1 & \gamma_1 & v_{\theta,m} & _{\alpha}c_1^F & & \\[6pt]
-\frac{2}{3}\alpha_1\gamma_1 & & _{\alpha}a_2^F & v_{\theta,m} & \ddots & \\[6pt]
& & & \ddots & \ddots & _{\alpha}c_{N-1}^F \\[6pt]
& & & & _{\alpha}a_N^F & v_{\theta,m}
\end{pmatrix},
\end{equation*}
\begin{equation*}
\frac{\partial F}{\partial V_{\gamma}}_{mod}=
\begin{pmatrix}
& & & & & \\[6pt]
& & & & & \\[6pt]
& & & & & \\[6pt]
& & & & & \\[6pt]
& & & & & \\[6pt]
& & & & & \\[6pt]
v_{r,m} & \frac{\alpha_1}{3} & & & \\[6pt]
\alpha_1 & v_{r,m} & _{\gamma}c_1^F & \\[6pt]
& _{\gamma}a_2^F & v_{r,m} & \ddots & \\[6pt]
& & \ddots & \ddots & _{\gamma}c_{N-1}^F \\[6pt]
& & & _{\gamma}a_N^F & v_{r,m}
\end{pmatrix}.
\end{equation*}
\textbf{3.2 The non-conservative part \(Q_i\)}.
The non-conservative part in the equations for \(h,hv_{r,m}\) and \(h\alpha_i\), \(i=1,\ldots,N\), does not contain derivatives of \(h\gamma_l\), \(l=1,\ldots,N\).
Recall the non-conservative part in the equation for \(h\gamma_i\):
\begin{equation}\label{nonConservativeFlux}
Q_i=v_{\theta,m}\frac{\partial(h\alpha_i)}{\partial r}-\sum_{j,k=1}^{N}B_{ijk}\gamma_k\frac{\partial(h\alpha_j)}{\partial r}.
\end{equation}
Thus, the equations for \(h\gamma_i\), \(i=1,\ldots,N\), do not contain derivatives of \(h\gamma_l\), \(l=1,\ldots,N\), either. From this, it follows that we can write the modified non-conservative terms (i.e., with higher order moments set to zero) as
\begin{equation*}
Q_{mod}=
\begin{pmatrix}
Q_1 & Q_2 \\
Q_3 & Q_4
\end{pmatrix},
\end{equation*}
in which \(Q_2\in\mathbb{R}^{(N+3)\times N}\) and \(Q_4\in\mathbb{R}^{N\times N}\) are zero matrices. \(Q_1\in\mathbb{R}^{(N+3)\times (N+3)}\) is the coefficient matrix of the non-conservative flux in the \((N,0)\)th order system, see Equation \eqref{nonconsflux} in the proof of Theorem \ref{AppendixN0TheoremMatrix}. The remaining entries to compute are the entries of the matrix \(Q_3 \in\mathbb{R}^{N\times (N+3)}\). This matrix corresponds to the equations for \(\gamma_i\), \(i=1,\ldots,N\), and to the derivatives with respect to \(h,hv_{r,m}\) and \(h\alpha_l\) (\(l=1,\ldots,N)\).
The first term of the non-conservative flux leads to the term \(v_{\theta,m}\) on the diagonal of \(Q_3\). The second term is simplified when we set higher order moments to zero:
\begin{equation*}
\sum_{j=1}^{N}(2i+1)\left(\int_0^1\phi_i'\left( \int_0^{\zeta}\phi_jd\hat{\zeta}\right)\phi_1d\zeta\right)\gamma_1\frac{\partial(h\alpha_j)}{\partial r}.
\end{equation*}
It can be proved that \(B_{ij1}=0\) for \(|i-j|\neq 1\), except for the first two rows. \(Q_i\) does not depend on any derivative with respect to \(h\), \(hv_{r,m}\) and \(hv_{\theta,m}\), so the entries of the corresponding columns will be zero. Analogously to the proof of Theorem \ref{AppendixN0TheoremMatrix}, we can see that the second term in \eqref{nonConservativeFlux} only leads to non-zero entries \(a_i^Q\) on the first lower diagonal and non-zero entries \(c_i^Q\) on the first upper diagonal:
\begin{align*}
&a_i^Q=\frac{i+1}{2i-1}\gamma_1, \qquad i=2,\ldots,N,\\[8pt]
&c_i^Q=\frac{i}{2i+3}\gamma_1, \qquad i=1,\ldots,N-1.
\end{align*}
So \(Q_3\) has the following form:
\begin{equation*}
Q_3=
\begin{pmatrix}
& & & v_{\theta,m} & c_1^Q & & \\[6pt]
& & & a_2^Q & v_{\theta,m} & \ddots & & \\[6pt]
& & & & \ddots & \ddots & c_{N-1}^Q & \\[6pt]
& & & & & a_{N}^Q & v_{\theta,m} &\\[6pt]
\end{pmatrix}.
\end{equation*}
From this, the matrix \(Q_{mod}\) can be constructed and the modified system matrix is
\begin{equation*}
A_{HA}^{(N,N)}=\frac{\partial F}{\partial V}_{mod}-Q_{mod}.
\end{equation*}
This completes the proof.
\end{proof}
\begin{theorem}\label{appendix22CharPol}
The HASWME system matrix \(A_{HA}^{(N,N)}\in \mathbb{R}^{(2N+3)\times (2N+3)}\) has the following characteristic polynomial:
\begin{equation}
\chi_{A_{HA}^{(N,N)}}(\lambda)=\left( (\lambda -v_{r,m})^2-gh-\alpha_1^2 \right)\cdot \chi_{A_2^{(N,N)}}(\lambda -v_{r,m})\cdot\chi_{A_3^{(N,N)}}(\lambda),
\end{equation}
where \(A_2^{(N,N)}\in \mathbb{R}^{N\times N}\) and \(A_3^{(N,N)}\in \mathbb{R}^{(N+1)\times (N+1)}\) are defined as
\begin{equation}
A_2^{(N,N)}=
\begin{pmatrix}
& c_1 & & \\[6pt]
a_2 & & \ddots & \\[6pt]
& \ddots & & c_{N-1} \\[6pt]
& & a_{N} &
\end{pmatrix}, \qquad
A_3^{(N,N)}=
\begin{pmatrix}
v_{r,m} & \frac{\alpha_1}{3} & & \\[6pt]
\alpha_1 & v_{r,m} & g_1 & & \\[6pt]
& f_2 & v_{r,m} & \ddots & \\[6pt]
& & \ddots & \ddots & g_{N-1} \\[6pt]
& & & f_N & v_{r,m}
\end{pmatrix},
\end{equation}
where the entries are given by
\begin{align*}
a_i&=\frac{i-1}{2i-1}\alpha_1 \qquad \text{and} \qquad f_i=\frac{i}{2i-1}\alpha_1, \qquad i=2,\ldots,N,\\[5pt]
c_i&=\frac{i+2}{2i+3}\alpha_1 \qquad \text{and} \qquad g_i=\frac{i+1}{2i+3}\alpha_1, \qquad i=1,\ldots,N-1.
\end{align*}
\end{theorem}
\begin{proof}
The characteristic polynomial of the modified system matrix \(A_{HA}^{(N,N)}\) is by definition
\begin{equation*}
\chi_{A_{HA}^{(N,N)}}(\lambda)=\det(A_{HA}^{(N,N)}-\lambda I) :=\lvert A_{HA}^{(N,N)}-\lambda I \rvert.
\end{equation*}
Recall that the system matrix has the following form:
\begin{equation*}
A_{HA}^{(N,N)}=
\begin{pmatrix}
\mathbf{A^{(N,N)}} & \mathbf{0} \\
\mathbf{B^{(N,N)}} & \mathbf{C^{(N,N)}}
\end{pmatrix},
\end{equation*}
where the explicit form of the blocks \(\mathbf{A}^{(N,N)}\), \(\mathbf{B}^{(N,N)}\) and \(\mathbf{C}^{(N,N)}\) is given in Theorem \ref{Appendix22TheoremMatrix}. This is a lower triangular block matrix. It is known that the determinant of a triangular block matrix is given by the product of the determinants of its diagonal blocks, see e.g. \cite{silvester_determinants_2000}. So we have:
\begin{equation*}
\lvert A_{HA}^{(N,N)}-\lambda I \rvert =\lvert A^{(N,N)}-\lambda I \rvert \cdot \lvert C^{(N,N)}-\lambda I \rvert.
\end{equation*}
According to \cite{HSWME}, the first factor yields:
\begin{equation*}
\lvert A^{(N,N)}-\lambda I \rvert=\left( (\lambda -v_{r,m})^2-gh-\alpha_1^2 \right)\cdot \chi_{A_2^{(N,N)}}(\lambda -v_{r,m}).
\end{equation*}
The observation that \(\mathbf{C}^{(N,N)}\) is exactly the matrix \(A_3^{(N,N)}\) as defined above completes the proof.
\end{proof}
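The block-triangular determinant identity used in the proof implies that the spectrum of \(A_{HA}^{(N,N)}\) is the union of the spectra of the diagonal blocks. This can be illustrated numerically for generic blocks; in the sketch below (assuming numpy), random matrices stand in for \(\mathbf{A}^{(N,N)}\) and \(\mathbf{C}^{(N,N)}\), so the dimensions and values are purely illustrative.

```python
import numpy as np

# Random blocks stand in for A^{(N,N)} (upper-left) and C^{(N,N)} (lower-right).
rng = np.random.default_rng(0)
n1, n2 = 4, 3
A = rng.standard_normal((n1, n1))
C = rng.standard_normal((n2, n2))
B = rng.standard_normal((n2, n1))
M = np.block([[A, np.zeros((n1, n2))], [B, C]])   # lower block-triangular matrix
lam_M = np.linalg.eigvals(M)
lam_B = np.concatenate([np.linalg.eigvals(A), np.linalg.eigvals(C)])
# spectrum of M = union of the spectra of the diagonal blocks
assert all(np.min(np.abs(lam_M - l)) < 1e-8 for l in lam_B)
```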
\begin{lemma}\label{AppendixLemmaDADInverse}
Let \(A\in \mathbb{R}^{n\times n}\) and \(D=\mathrm{diag}(d_1,\ldots,d_n)\in\mathbb{R}^{n\times n}\), with \(d_i\neq 0\) (\(i=1,\ldots,n\)). Then \(A\) and \(DAD^{-1}\) have the same eigenvalues.
\end{lemma}
\begin{proof}
The proof is not given here.
\end{proof}
\begin{lemma}\label{AppendixLemmaSymmetric}
Let \(A\in \mathbb{R}^{n\times n}\) be a real tridiagonal matrix such that \(a_{i,i+1}a_{i+1,i}>0\), \(i=1,\ldots,n-1\). Then there is a real diagonal matrix \(D\) such that \(DAD^{-1}\) is symmetric.
\end{lemma}
\begin{proof}
The proof is not given here.
\end{proof}
\begin{corollary}
The eigenvalues of the modified axisymmetric hyperbolic system matrix \(A_{HA}^{(N,N)} \in \mathbb{R}^{(2N+3)\times (2N+3)}\) are the real numbers
\begin{align*}
\lambda_{1,2}&=v_{r,m}\pm \sqrt{gh+\alpha_1^2},\\[5pt]
\lambda_{i+2}&=v_{r,m}+b_i\cdot\alpha_1, \qquad i=1,\ldots,N,\\[5pt]
\lambda_{i+2+N}&=v_{r,m}+s_i\cdot\alpha_1,\qquad i=1,\ldots,N+1,
\end{align*}
with \(b_i \cdot \alpha_1\) the real eigenvalues of \(A_2^{(N,N)}\) and \(v_{r,m}+s_i \cdot \alpha_1\) the real eigenvalues of \(A_3^{(N,N)}\), both from Theorem \ref{appendix22CharPol}, and where all the \(b_i\)'s and the \(s_i\)'s are pairwise distinct.
\end{corollary}
\begin{proof}
Recall that the characteristic polynomial of \(A_{HA}^{(N,N)}\) is given by
\begin{equation*}
\chi_{A_{HA}^{(N,N)}}(\lambda)=\left( \left( \lambda -v_{r,m})^2-gh-\alpha_1^2 \right) \right)\cdot \chi_{A_2^{(N,N)}}(\lambda -v_{r,m})\cdot \chi_{A_3^{(N,N)}}(\lambda).
\end{equation*}
From the first factor, we easily obtain
\begin{equation*}
\lambda_{1,2}=v_{r,m}\pm \sqrt{gh+\alpha_1^2}.
\end{equation*}
In \cite{HSWME}, it is shown that the roots of \(\chi_{A_2^{(N,N)}}(\lambda-v_{r,m})\) have the form \(\lambda_i=v_{r,m}+b_i\cdot \alpha_1\). Moreover, it is proved in \cite{equilibriumStability} that the roots are real.
It remains to prove that the eigenvalues of the matrix \(A_3^{(N,N)}\) are real. For \(\alpha_1=0\), it is obvious that all the eigenvalues are real. If \(\alpha_1\neq0\), the products of corresponding off-diagonal entries of the tridiagonal matrix \(A_3^{(N,N)}\) are positive, so it follows from Lemma \ref{AppendixLemmaSymmetric} that there is a real diagonal matrix \(D\) such that \(DA_3^{(N,N)}D^{-1}\) is symmetric. Since a symmetric matrix has real eigenvalues and, by Lemma \ref{AppendixLemmaDADInverse}, \(A_3^{(N,N)}\) and \(DA_3^{(N,N)}D^{-1}\) have the same eigenvalues, all the eigenvalues of \(A_3^{(N,N)}\) are real. This completes the proof.
\end{proof}
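The symmetrization argument can be made concrete: for a tridiagonal matrix with positive products of corresponding off-diagonal entries, an explicit scaling \(D\) symmetrizes it. The sketch below (assuming numpy; the values of \(v_{r,m}\), \(\alpha_1\) and \(N\) are arbitrary) builds an \(A_3^{(N,N)}\)-type matrix with the entries \(f_i\) and \(g_i\) from Theorem \ref{appendix22CharPol} and checks both the symmetry of \(DA_3D^{-1}\) and the realness of the spectrum.

```python
import numpy as np

def symmetrize(A):
    # For tridiagonal A with A[i,i+1]*A[i+1,i] > 0, the scaling
    # d_1 = 1, d_{i+1} = d_i * sqrt(A[i,i+1]/A[i+1,i])
    # makes D A D^{-1} symmetric (both off-diagonals become
    # sqrt(A[i,i+1]*A[i+1,i])).
    n = A.shape[0]
    d = np.ones(n)
    for i in range(1, n):
        d[i] = d[i - 1] * np.sqrt(A[i - 1, i] / A[i, i - 1])
    return np.diag(d) @ A @ np.diag(1.0 / d)

# A_3^{(N,N)}-type matrix: v_{r,m} on the diagonal, f_i below, g_i above
v, a1, N = 0.3, 0.8, 5
A3 = np.diag(np.full(N + 1, v))
A3[0, 1], A3[1, 0] = a1 / 3, a1
for i in range(1, N):
    A3[i, i + 1] = (i + 1) / (2 * i + 3) * a1     # g_i
    A3[i + 1, i] = (i + 1) / (2 * i + 1) * a1     # f_{i+1}
S = symmetrize(A3)
assert np.allclose(S, S.T)                         # similar symmetric matrix
assert np.allclose(np.linalg.eigvals(A3).imag, 0.0, atol=1e-8)
```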
\section{System Matrix Of Second Order Axisymmetric System With Full Velocity Expanded}
\label{app:B}
The entries in the first column of the system matrix of the \((2,2)\)th order ASWME system in Equation \eqref{A22} are given by:
\begin{align*}
d_1 &= gh-v_{r,m}^2-\frac{\alpha_1^2}{3}-\frac{\alpha_2^2}{5},\\[5pt]
d_2 &= -2v_{r,m}\alpha_1-\frac{4}{5}\alpha_1\alpha_2,\\[5pt]
d_3 &= -\frac{2}{21} \left(3 \alpha _2 \left(7 v_{r,m}+\alpha _2\right)+7 \alpha _1^2\right),\\[5pt]
\Tilde{d}_1 &= -v_{r,m} v_{\theta ,m}-\frac{1}{3} \alpha _1 \gamma _1-\frac{\alpha _2 \gamma _2}{5} ,\\[5pt]
\Tilde{d}_2 &= \frac{1}{5} \left(-\gamma _1 \left(5 v_{r,m}+2 \alpha _2\right)-\alpha _1 \left(5 v_{\theta ,m}+2
\gamma _2\right)\right),\\[5pt]
\Tilde{d}_3 &= -\gamma _2 v_{r,m}-\frac{1}{7} \alpha _2 \left(7 v_{\theta ,m}+2 \gamma _2\right)-\frac{2}{3} \alpha _1
\gamma _1.
\end{align*}
\section{Conclusion}
In this paper we derived the first hyperbolic moment models for shallow flow in cylindrical coordinate systems. The paper started with a brief review of the derivation of the SWME. The reference system was then transformed to cylindrical coordinates and ASWME were derived for axisymmetric flow. We showed the breakdown of hyperbolicity of the ASWME by means of hyperbolicity plots. This led to the derivation of the first hyperbolic axisymmetric model, called HASWME, for which we showed hyperbolicity using an in-depth analysis of the system matrix. In the numerical simulation of a test case with a discontinuous initial height profile, the hyperbolic model resulted in a more accurate approximation compared to a reference solution. The numerical simulation of a test case with smooth initial data showed that the approximation error decreases when the order of the model is increased. Moreover, both the HASWME and the ASWME proved to be accurate approximations with respect to a reference solution.
We conclude that the ASWME and the HASWME are two successful models for the simulation of free surface flow. Although a lot of the structure of the equations is inherited from the 1D equations, they allow for the simulation of 2D axisymmetric flow. The models can therefore be seen as an intermediate stage between the 1D and 2D models.
Ongoing work focuses on the extension of the axisymmetric models to moment models that are fully 2D and the numerical simulation of these models. Another point of interest would be to perform an equilibrium stability analysis of the ASWME and the HASWME. A final suggestion for future work is the derivation of tailored numerical methods for the axisymmetric models.
\section*{Data Availability Statement}
The datasets generated and analysed during this study are available from the corresponding author on reasonable request.
\section*{Acknowledgements}
The authors would like to acknowledge the financial support of the CogniGron research center and the Ubbo Emmius Funds (University of Groningen).
\section{Axisymmetric Shallow Water Moment Equations}
\label{ASWME}
This section gives a description of the derivation of axisymmetric shallow water moment equations, as a consistent extension of the Cartesian moment models, derived in \cite{SWME}. Next, we show that the axisymmetric systems lack global hyperbolicity by analysing the eigenvalues of the system matrix.
\subsection{Derivation of reference system}
Formulating the standard incompressible Navier-Stokes equations in cylindrical coordinates
\((r,\theta,z)\) and assuming a hydrostatic pressure, we obtain
\begin{align}\label{initialSystemCyl1}
\partial_r v_r+\frac{v_r}{r}+\frac{1}{r}\partial_{\theta}v_{\theta}+\partial_z w&=0,\\\label{initialSystemCyl2}
\partial_t v_r+\partial_r(v_r^2)+\frac{1}{r}\partial_{\theta}(v_rv_{\theta})+\partial_z\left( v_r w \right)+\frac{1}{r}\left(v_r^2-v_{\theta}^2\right)&=-\frac{1}{\rho}\partial_r p+\frac{1}{\rho}\partial_z\tau_{rz}+g e_r,\\\label{initialSystemCyl3}
\partial_t v_{\theta}+\partial_r (v_rv_{\theta})+\frac{1}{r}\partial_{\theta}\left(v_{\theta}^2\right)+\partial_z\left(v_{\theta}w \right)+\frac{2}{r}v_rv_{\theta}&=-\frac{1}{\rho}\frac{1}{r} \partial_{\theta} p+\frac{1}{\rho}\partial_z\tau_{\theta z}+g e_{\theta},\\\label{initialSystemCyl4}
0&=-\frac{1}{\rho}\partial_z p+g e_z,
\end{align}
where the functions of interest are the water height \(h(t,r,\theta): [0,T] \times \mathbb{R} \times [0,2\pi] \to \mathbb{R} \), the radial velocity \(v_r(t,r,\theta,z): [0,T] \times \mathbb{R} \times [0,2\pi] \times [h_b(t,r,\theta),h_s(t,r,\theta)] \to \mathbb{R}\), and the angular velocity \(v_{\theta}(t,r,\theta,z): [0,T] \times \mathbb{R} \times [0,2\pi] \times [h_b(t,r,\theta),h_s(t,r,\theta)] \to \mathbb{R}\). These are functions of time \(t\) and space \((r,\theta,z)\). The deviatoric stress tensor terms in cylindrical coordinates are denoted by \(\tau_{rz}\) and \(\tau_{\theta z}\). From the last equation, we find that the hydrostatic pressure is given by
\begin{equation*}
p(t,r,\theta,z)=(h_s(t,r,\theta)-z)\rho g e_z,
\end{equation*}
where \(h_s(t,r,\theta)\) is the surface topography.
Similar to \cite{SWME}, the variable \(z\) is transformed to \(\zeta \in [0,1]\), the so-called $\sigma$-coordinates, using
\begin{equation*}
\zeta=\frac{z-h_b(t,r,\theta)}{h(t,r,\theta)},
\end{equation*}
where \(h_b(t,r,\theta)\) is the bottom topography.
The so-called vertical coupling defined in \cite{SWME} is
\begin{equation*}
h\omega[h,v_r,v_{\theta}]=-\partial_r\left(h \int_0^{\zeta}v_{r,d}d\Hat{\zeta} \right)-\frac{1}{r}\partial_{\theta}\left(h \int_0^{\zeta}v_{\theta,d}d\Hat{\zeta} \right)- \frac{h}{r}\int_0^{\zeta}v_{r,d}d\Hat{\zeta},
\end{equation*}
where \(v_{r,d}\) and \(v_{\theta,d}\) are the deviations from the average radial velocity \nomenclature{\(v_{r,m}\)}{Average velocity in radial direction, in cylindrical systems}\(v_{r,m}=\int^1_0 v_r d\zeta\) and the average angular velocity \nomenclature{\(v_{\theta,m}\)}{Average velocity in angular direction, in cylindrical systems}\(v_{\theta,m}=\int^1_0 v_{\theta} d\zeta\), respectively.
Using these transformations, the reference system in cylindrical coordinates reads
\footnotesize
\begin{align}\label{refsystemcylindrical1}
&\partial_t h+\partial_r (hv_{r,m})+\frac{1}{r}\partial_{\theta}(hv_{\theta,m})+\frac{1}{r}hv_{r,m}=0,\\\label{refsystemcylindrical2}
&\partial_t (hv_r)+\partial_r\left(hv_r^2+\frac{g}{2}e_zh^2\right)+\frac{1}{r}\partial_{\theta}(hv_rv_{\theta})+\partial_{\zeta}\left(h v_r \omega-\frac{1}{\rho}\tau_{rz} \right)+\frac{h}{r}\left(v_r^2-v_{\theta}^2\right)=gh(e_r-e_z\partial_rh_b),\\\label{refsystemcylindrical3}
&\partial_t (hv_{\theta})+\partial_r (hv_rv_{\theta})+\frac{1}{r}\partial_{\theta}\left(hv_{\theta}^2+\frac{g}{2}e_zh^2\right)+\partial_{\zeta}\left(h v_{\theta}\omega -\frac{1}{\rho}\tau_{\theta z} \right)+\frac{2h}{r}v_rv_{\theta}=gh\left(e_{\theta}-\frac{1}{r}e_z\partial_{\theta}h_b\right).
\end{align}
\normalsize
Note the forcing terms containing a factor \(\frac{1}{r}\), which arise from the cylindrical coordinates.
Furthermore, the system in cylindrical coordinates has some similarity with the one-dimensional system analyzed in \cite{HSWME}, especially if the flow is considered uniform in the angular direction $\theta$ (the axisymmetric case). The hyperbolic regularization later in this paper is therefore based on the findings of the one-dimensional case.
\subsection{Moment equations}\label{section:momentequations}
Analogously to the derivation of the one-dimensional SWME in \cite{SWME}, moment equations can be derived for the reference system \eqref{refsystemcylindrical1}-\eqref{refsystemcylindrical3}. First, the deviation of the radial velocity \(v_{r}(t,r,\theta,\zeta)\) and the angular velocity \(v_{\theta}(t,r,\theta,\zeta)\) from their means \(v_{r,m}(t,r,\theta)\) and \(v_{\theta,m}(t,r,\theta)\) is modelled as a polynomial expansion:
\begin{align*}
v_r(r,\theta,\zeta,t)&=v_{r,m}(r,\theta,t)+\sum_{j=1}^{N_r}\alpha_j(r,\theta,t)\phi_j(\zeta),\\
v_{\theta}(r,\theta,\zeta,t)&=v_{\theta,m}(r,\theta,t)+\sum_{j=1}^{N_{\theta}}\gamma_j(r,\theta,t)\phi_j(\zeta),
\end{align*}
where \(\phi_j\) are the shifted and scaled Legendre polynomials, defined by:
\begin{equation}\label{legendre}
\phi_j(\zeta)=\frac{1}{j!}\frac{d^j}{d\zeta^j}\left( \zeta - \zeta^2 \right)^j.
\end{equation}
The polynomials \eqref{legendre} form an orthogonal basis on $[0,1]$ satisfying the orthogonality relation
\begin{equation*}
\int^1_0 \phi_m(\zeta) \phi_n (\zeta) d\zeta = \frac{1}{2n+1}\delta_{mn}.
\end{equation*}
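The basis \eqref{legendre} and this orthogonality relation are easy to verify with exact rational arithmetic. The following Python sketch (purely illustrative, not part of the model) constructs the coefficients of \(\phi_j\) from the Rodrigues-type formula and integrates polynomial products exactly.

```python
from fractions import Fraction
from math import comb, factorial

def phi_coeffs(j):
    """Ascending coefficients of phi_j(z) = 1/j! * d^j/dz^j (z - z^2)^j."""
    # (z - z^2)^j = sum_k (-1)^k C(j, k) z^(j + k)
    c = [Fraction(0)] * (2 * j + 1)
    for k in range(j + 1):
        c[j + k] = Fraction((-1) ** k * comb(j, k))
    for _ in range(j):                      # differentiate j times
        c = [n * c[n] for n in range(1, len(c))]
    return [ck / factorial(j) for ck in c]

def integrate_product(p, q):
    """Exact integral over [0, 1] of the product of two polynomials."""
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for a, pa in enumerate(p):
        for b, qb in enumerate(q):
            prod[a + b] += pa * qb
    return sum(cn / (n + 1) for n, cn in enumerate(prod))
```

For instance, the script yields \(\phi_1(\zeta)=1-2\zeta\) and \(\phi_2(\zeta)=1-6\zeta+6\zeta^2\), and confirms \(\int_0^1\phi_2^2\,d\zeta=\frac{1}{5}\) as well as \(\int_0^1\phi_1\phi_2\,d\zeta=0\).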
The coefficients \(\alpha_j: [0,T] \times \mathbb{R} \times [0,2\pi] \to \mathbb{R} \), with \(j \in \{1,2,\ldots, N_r\}\), and \(\gamma_j: [0,T] \times \mathbb{R} \times [0,2\pi] \to \mathbb{R}\), with \(j \in \{1,2,\ldots, N_{\theta}\}\), are the expansion coefficients. They are functions of the time \(t\) and the two-dimensional space \((r,\theta)\). The expansion orders in radial direction and in angular direction are denoted by \(N_r\) and \(N_{\theta}\), respectively. Note that \(N_r\) and \(N_{\theta}\) do not have to be the same. An advantage of this approach is its flexibility: a larger \(N_r\) and/or a larger \(N_{\theta}\) allows for modelling more complex flows, while the choice \(N_r=N_{\theta}=0\) leads to a constant velocity profile, as in the SWE.
Assuming that the fluid is Newtonian, we can close the system with
\begin{equation*}
\frac{1}{\rho}\tau_{rz}=\frac{\nu}{h} \partial_{\zeta}v_r,\qquad \frac{1}{\rho}\tau_{\theta z}=\frac{\nu}{h}\partial_{\zeta}v_{\theta}
\end{equation*}
and use (weakly enforced) boundary conditions according to \cite{SWME}
\begin{equation*}
-\frac{\nu}{h}[\partial_{\zeta} v_r]^{\zeta=1}_{\zeta=0}=\frac{\nu}{\lambda}v_r|_{\zeta = 0}, \qquad-\frac{\nu}{h}[\partial_{\zeta} v_{\theta}]^{\zeta=1}_{\zeta=0}=\frac{\nu}{\lambda}v_{\theta}|_{\zeta = 0},
\end{equation*}
with slip length \(\lambda\) and kinematic viscosity \(\nu\).
The full cylindrical SWME are derived by multiplying \eqref{refsystemcylindrical2} with the Legendre polynomials \(\phi_j(\zeta)\) (\(j=0,\ldots,N_r\)) and integrating with respect to \(\zeta\), and by multiplying \eqref{refsystemcylindrical3} with the \(\phi_j(\zeta)\) (\(j=0,\ldots,N_{\theta}\)) and integrating with respect to \(\zeta\) \cite{SWME}. The mass balance equation \eqref{refsystemcylindrical1} completes the moment equations. The resulting equations read:
\paragraph{Mass balance equation:}
\begin{equation*}
\partial_{t}h+\frac{1}{r}hv_{r,m}+\partial_r\left( hv_{r,m}\right)+\frac{1}{r}\partial_{\theta}\left(hv_{\theta,m}\right)=0.
\end{equation*}
\paragraph{Projected momentum balance equations:}
\begin{multline*}
\partial_t(hv_{r,m})+\partial_r \underbrace{\left( h\left( v_{r,m}^2+\sum_{j=1}^{N_r}\frac{\alpha_j^2}{2j+1}\right) +\frac{g}{2}e_zh^2\right)}_{:=F_r^0}+\frac{1}{r}\partial_{\theta}\underbrace{\left( h\left( v_{r,m}v_{\theta,m}+\sum_{j=1}^{\min\{N_r,N_{\theta}\}}\frac{\alpha_j\gamma_j}{2j+1} \right) \right)}_{:=F_{\theta}^0}\\ =\frac{1}{r}\underbrace{h\left( -v_{r,m}^2+v_{\theta,m}^2-\sum_{j=1}^{N_r}\frac{\alpha_j^2}{2j+1}+\sum_{j=1}^{N_{\theta}}\frac{\gamma_j^2}{2j+1} \right)}_{:=G_0}\underbrace{-\frac{\nu}{\lambda}\left( v_{r,m}+\sum_{j=1}^{N_r}\alpha_j \right)}_{:=S_0}+gh(e_r-e_z\partial_r h_b),
\end{multline*}
\begin{multline*}
\partial_t(hv_{\theta,m})+\partial_r\underbrace{\left( h\left( v_{r,m}v_{\theta,m}+\sum_{j=1}^{\min\{N_r,N_{\theta}\}}\frac{\alpha_j\gamma_j}{2j+1} \right) \right)}_{:=\Tilde{F}_r^0}+\frac{1}{r}\partial_{\theta} \underbrace{\left( h\left( v_{\theta,m}^2+\sum_{j=1}^{N_{\theta}}\frac{\gamma_j^2}{2j+1}\right) +\frac{g}{2}e_zh^2\right)}_{:=\Tilde{F}_{\theta}^0} \\=\frac{1}{r}\underbrace{\left(-2h\left( v_{r,m}v_{\theta,m}+\sum_{j=1}^{\min\{N_r,N_{\theta}\}}\frac{\alpha_j\gamma_j}{2j+1} \right)\right)}_{:=\Tilde{G}_0}
\underbrace{-\frac{\nu}{\lambda}\left( v_{\theta,m}+\sum_{j=1}^{N_{\theta}}\gamma_j \right)}_{:=\Tilde{S}_0}+gh\left(e_{\theta}-e_z\frac{1}{r}\partial_{\theta}h_b\right).
\end{multline*}
\paragraph{Moment equations:}
The equations for the coefficients of the radial velocity profile \(\alpha_i, i=1,\ldots, N_r\), are given by:
\begin{equation*}
\partial_t(h\alpha_i)+\partial_r F_r^i+\frac{1}{r}\partial_{\theta}F_{\theta}^i=Q_r^i\frac{\partial V}{\partial r}+\frac{1}{r}Q_{\theta}^i\frac{\partial V}{\partial \theta}+\frac{1}{r}G_i+S_i,
\end{equation*}
with
\begin{align*}
F_r^i&=h\left( 2v_{r,m}\alpha_i+\sum_{j,k=1}^{N_r}A_{ijk}\alpha_j\alpha_k \right),~
F_{\theta}^i=h\left( v_{r,m}\gamma_i+v_{\theta,m}\alpha_i+\sum_{j=1}^{N_r}\sum_{k=1}^{N_{\theta}}A_{ijk}\alpha_j\gamma_k \right),\\[4pt]
Q_r^i\frac{\partial V}{\partial r}&=v_{r,m}\partial_r(h\alpha_i)-\sum_{j,k=1}^{N_r}B_{ijk}\alpha_k\partial_r(h\alpha_j),~
Q_{\theta}^i\frac{\partial V}{\partial \theta}=v_{r,m}\partial_{\theta}(h\gamma_i)-\sum_{j=1}^{N_{\theta}}\sum_{k=1}^{N_r}B_{ijk}\alpha_k \partial_{\theta}(h\gamma_j),\\[4pt]
G_i&= h\left( -v_{r,m}\alpha_i+2v_{\theta,m}\gamma_i-\sum_{j,k=1}^{N_r}A_{ijk}\alpha_j\alpha_k + \sum_{j,k=1}^{N_{\theta}}A_{ijk}\gamma_j\gamma_k -\sum_{j,k=1}^{N_r}B_{ijk}\alpha_k \alpha_j \right),\\[4pt]
S_i&=-(2i+1)\frac{\nu}{\lambda}\left( v_{r,m}+\sum_{j=1}^{N_r}\alpha_j\left( 1+\frac{\lambda}{h}C_{ij} \right) \right).
\end{align*}
\(F_r^i\) and \(\frac{1}{r} F_{\theta}^i\) contain the conservative flux, while \(Q_r^i\) and \(\frac{1}{r} Q_{\theta}^i\) contain the non-conservative flux.
The forcing terms due to the formulation in cylindrical coordinates are grouped in \(\frac{1}{r}G_i\). \(S_i\) is the source term, which contains the friction parameters \(\lambda\) and \(\nu\).
The equations for the coefficients of the angular velocity profile \(\gamma_i, i=1,\ldots,N_{\theta},\) read:
\begin{equation*}
\partial_t(h\gamma_i)+\partial_r \Tilde{F}_r^i+\frac{1}{r}\partial_{\theta}\Tilde{F}_{\theta}^i=\Tilde{Q}_r^i\frac{\partial V}{\partial r}+\frac{1}{r}\Tilde{Q}_{\theta}^i\frac{\partial V}{\partial \theta}+\frac{1}{r}\Tilde{G}_i+\Tilde{S}_i,
\end{equation*}
with
\begin{align*}
\Tilde{F}_r^i&=h\left(v_{r,m}\gamma_i+v_{\theta,m}\alpha_i+\sum_{j=1}^{N_r}\sum_{k=1}^{N_{\theta}}A_{ijk}\alpha_j\gamma_k \right),~
\Tilde{F}_{\theta}^i=h\left( 2v_{\theta,m}\gamma_i+\sum_{j,k=1}^{N_{\theta}}A_{ijk}\gamma_j\gamma_k \right),
\end{align*}
\begin{align*}
\Tilde{Q}_r^i\frac{\partial V}{\partial r}&=v_{\theta,m}\partial_r(h\alpha_i)-\sum_{j=1}^{N_r}\sum_{k=1}^{N_{\theta}}B_{ijk}\gamma_k\partial_r(h\alpha_j),~
\Tilde{Q}_{\theta}^i\frac{\partial V}{\partial \theta}=v_{\theta,m}\partial_{\theta}(h\gamma_i)-\sum_{j,k=1}^{N_{\theta}}B_{ijk}\gamma_k\partial_{\theta}(h\gamma_j),\\[4pt]
\Tilde{G}_i&=-h\left( 2v_{r,m}\gamma_i+v_{\theta,m}\alpha_i+\sum_{j=1}^{N_r}\sum_{k=1}^{N_{\theta}}\left(2A_{ijk}+B_{ijk}\right)\alpha_j\gamma_k \right),\\[4pt]
\Tilde{S}_i&=-(2i+1)\frac{\nu}{\lambda}\left(v_{\theta,m}+ \sum_{j=1}^{N_{\theta}}\gamma_j\left( 1+\frac{\lambda}{h}C_{ij} \right) \right).
\end{align*}
\(\Tilde{F}_r^i\) and \(\frac{1}{r}\tilde{F}_{\theta}^i\) contain the conservative flux, while \(\Tilde{Q}_r^i\) and \(\frac{1}{r}\Tilde{Q}_{\theta}^i\) contain the non-conservative flux.
The forcing terms due to the formulation in cylindrical coordinates are grouped in \(\frac{1}{r}\Tilde{G}_i\). \(\Tilde{S}_i\) is the source term, which contains the friction parameters \(\lambda\) and \(\nu\).
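The coupling tensors \(A_{ijk}\), \(B_{ijk}\), and \(C_{ij}\) are those of the one-dimensional SWME and are given in \cite{SWME}. Assuming the integral representations \(A_{ijk}=(2i+1)\int_0^1\phi_i\phi_j\phi_k\,d\zeta\) and \(B_{ijk}=(2i+1)\int_0^1 \partial_{\zeta}\phi_i\,\big(\int_0^{\zeta}\phi_j\,d\hat{\zeta}\big)\,\phi_k\,d\zeta\) (these closed forms are taken as assumptions here, consistent with the coefficients that appear in the second order system written out below), the tensors can be tabulated exactly with a short Python sketch:

```python
from fractions import Fraction
from math import comb, factorial

def phi(j):
    """Ascending coefficients of phi_j(z) = 1/j! * d^j/dz^j (z - z^2)^j."""
    c = [Fraction(0)] * (2 * j + 1)
    for k in range(j + 1):
        c[j + k] = Fraction((-1) ** k * comb(j, k))
    for _ in range(j):                      # differentiate j times
        c = [n * c[n] for n in range(1, len(c))]
    return [ck / factorial(j) for ck in c]

def mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for a, pa in enumerate(p):
        for b, qb in enumerate(q):
            out[a + b] += pa * qb
    return out

def deriv(p):
    return [n * p[n] for n in range(1, len(p))]

def antideriv(p):
    """Antiderivative vanishing at zero."""
    return [Fraction(0)] + [p[n] / (n + 1) for n in range(len(p))]

def integral01(p):
    return sum(cn / (n + 1) for n, cn in enumerate(p))

def A(i, j, k):
    return (2 * i + 1) * integral01(mul(mul(phi(i), phi(j)), phi(k)))

def B(i, j, k):
    return (2 * i + 1) * integral01(
        mul(mul(deriv(phi(i)), antideriv(phi(j))), phi(k)))
```

With these definitions one obtains, e.g., \(A_{211}=\frac{2}{3}\), \(A_{222}=\frac{2}{7}\), \(B_{112}=\frac{1}{5}\), and \(B_{211}=-1\), which are exactly the coefficients appearing in the fluxes and non-conservative terms of the second order system below.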
The full system can be written in compact form as
\begin{equation}\label{systemFormCylindrical}
\frac{\partial V}{\partial t}+A_r\frac{\partial V}{\partial r} +A_{\theta}\frac{\partial V}{\partial \theta}=G(V)+S(V),
\end{equation}
with
\begin{align*}
&V=(h,hv_{r,m},h\alpha_1,\ldots,h\alpha_{N_r},hv_{\theta,m},h\gamma_1,\ldots,h\gamma_{N_\theta}), \quad A_r=\frac{\partial F_r}{\partial V}-Q_r, \quad A_{\theta}=\frac{\partial F_{\theta}}{\partial V}-Q_{\theta}, \\[5pt]
&F_r = \left( hv_{r,m}, F_r^0, \ldots, F_r^{N_r}, \Tilde{F}_r^0, \ldots, \tilde{F}_r^{N_{\theta}} \right), \quad F_{\theta} = \frac{1}{r}\left( hv_{\theta,m}, F_\theta^0, \ldots, F_\theta^{N_r}, \Tilde{F}_\theta^0, \ldots, \tilde{F}_\theta^{N_{\theta}} \right), \\[5pt]
&Q_r = \left( 0, Q_r^0, \ldots, Q_r^{N_r}, \Tilde{Q}_r^0, \ldots, \tilde{Q}_r^{N_{\theta}} \right), \quad Q_{\theta} = \frac{1}{r}\left( 0, Q_\theta^0, \ldots, Q_\theta^{N_r}, \Tilde{Q}_\theta^0, \ldots, \tilde{Q}_\theta^{N_{\theta}} \right), \\[5pt]
&G(V) = \frac{1}{r}\left(-hv_{r,m}, G_0, \ldots, G_{N_r}, \Tilde{G}_0, \ldots, \Tilde{G}_{N_{\theta}}\right), \quad S(V) = (0, S_0, \ldots, S_{N_r}, \Tilde{S}_0, \ldots, \Tilde{S}_{N_{\theta}}).
\end{align*}
The vector \(V\) contains the unknown variables in conservative form. \(A_r\) and \(A_{\theta}\) are the one-dimensional system matrices in radial direction and angular direction, respectively.
\(G(V)\) is a vector containing the forcing terms, while the vector \(S(V)\) contains the source terms.
\subsection{Axisymmetric Shallow Water Moment Equations (ASWME)}
System \eqref{systemFormCylindrical} carries information in both radial and angular direction. In many situations, however, waves in the fluid propagate predominantly in the radial direction, see Figure \ref{fig:axisymmetricFlow}.
A classical example of such axisymmetric cases are tsunamis \cite{tsunamiHakata,tobias_idealized_2011}. In \cite{tropicalCyclones}, axisymmetric shallow water equations are used to understand the intensity of tropical cyclones. Axisymmetric currents have also been thoroughly studied using the axisymmetric shallow water equations, see, e.g., \cite{axisymmetricGravityCurrents} and \cite{selfSimilar}. An axisymmetric moment system can be obtained starting from \eqref{systemFormCylindrical}.
\begin{figure}[ht]
\centering
\resizebox{6cm}{4.75cm}{
\def2.0{2.0}
\def0.6{0.6}
\def1.7{1.7}
\begin{tikzpicture}
\def43{43}
\coordinate (O) at (0,0);
\coordinate (P1) at (0:1.7);
\coordinate (P2) at (45:1.7);
\coordinate (P3) at (90:1.7);
\coordinate (P4) at (135:1.7);
\coordinate (P5) at (180:1.7);
\coordinate (P6) at (225:1.7);
\coordinate (P7) at (270:1.7);
\coordinate (P8) at (315:1.7);
\coordinate (P9) at (22.5:1.7);
\coordinate (P10) at (67.5:1.7);
\coordinate (P11) at (112.5:1.7);
\coordinate (P12) at (202.5:1.7);
\coordinate (P13) at (247.5:1.7);
\coordinate (P14) at (292.5:1.7);
\coordinate (P15) at (157.5:1.7);
\coordinate (P16) at (337.5:1.7);
\coordinate (X) at (2.0,0);
\coordinate (R) at (43:1.7);
\draw[->,line width=0.9] (-2.0,0) -- (1.08*2.0,0) node[right] {$x$};
\draw[->,line width=0.9] (0,-2.0) -- (0,1.08*2.0) node[left] {$y$};
\draw[vector] (O) -- (P1);
\draw[vector] (O) -- (P2);
\draw[vector] (O) -- (P3);
\draw[vector] (O) -- (P4);
\draw[vector] (O) -- (P5);
\draw[vector] (O) -- (P6);
\draw[vector] (O) -- (P7);
\draw[vector] (O) -- (P8);
\draw[vector] (O) -- (P9);
\draw[vector] (O) -- (P10);
\draw[vector] (O) -- (P11);
\draw[vector] (O) -- (P12);
\draw[vector] (O) -- (P13);
\draw[vector] (O) -- (P14);
\draw[vector] (O) -- (P15);
\draw[vector] (O) -- (P16);
\draw[color=red!60, very thick](0,0) circle (1.7);
\draw[color=red!60, thick](0,0) circle (1.2);
\draw[color=red!60, very thick](0,0) circle (0.7);
\end{tikzpicture}
}
\caption{Axisymmetric flow.}
\label{fig:axisymmetricFlow}
\end{figure}
In the cylindrical system, axisymmetric flow does not depend on \(\theta\).
Mathematically, all derivatives with respect to \(\theta\) are zero. Note that this does not mean that there is no angular velocity.
This is a strong simplification, but as argued above there are many phenomena that can be modelled with an axisymmetric system.
One main advantage of this simplified model is its resemblance to the one-dimensional models, see \cite{SWME} and \cite{HSWME}.
Setting all derivatives with respect to \(\theta\) to zero, \eqref{systemFormCylindrical} reduces to
\begin{equation}\label{systemFormCylindricalReduced}
\frac{\partial V}{\partial t}+A_A\frac{\partial V}{\partial r} = G(V) + S(V),
\end{equation}
where \(A_A:=A_r=\frac{\partial F_r}{\partial V}-Q_r\). This system is called the \emph{Axisymmetric Shallow Water Moment Equations} (ASWME). The axisymmetric system with radial order \(N_r\) and angular order \(N_{\theta}\) is called the \((N_r,N_{\theta})\)th order axisymmetric system. The system matrix of the \((N_r,N_{\theta})\)th order axisymmetric system is denoted by \(A_A^{(N_r,N_\theta)}\).
Two special classes of the ASWME are considered in this paper: (1) Axisymmetric systems with only the radial velocity expanded, i.e. the \((N_r,0)\)th order axisymmetric systems and (2) axisymmetric systems with full velocity expanded, i.e. the \((N,N)\)th order axisymmetric systems. In the first case, we approximate the angular velocity by its mean \(v_{\theta,m}\) and the radial velocity by a polynomial expansion of order \(N_r\). In the latter case, we approximate both the angular velocity and the radial velocity by a polynomial expansion of the same order.
For both classes, the hyperbolicity of the systems is analyzed. Hyperbolicity is a property of a system of first-order partial differential equations that ensures that information propagates with real and finite propagation speed. It is a requirement for the equations to be robust against small perturbations of the initial data \cite{stabilityConditions,equilibriumStability}.
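As a minimal illustration of this definition (using the standard one-dimensional SWE, not a result of this paper): the Jacobian of the SWE flux is \(\begin{pmatrix}0 & 1\\ gh-v^2 & 2v\end{pmatrix}\), whose characteristic polynomial \(\lambda^2-2v\lambda+(v^2-gh)\) has discriminant \(4gh>0\) for \(h>0\), so the characteristic speeds \(v\pm\sqrt{gh}\) are always real. A short exact-arithmetic check:

```python
from fractions import Fraction

def swe_char_poly(v, g, h):
    """Monic characteristic polynomial (ascending coefficients) of the 1D
    shallow water Jacobian [[0, 1], [g*h - v**2, 2*v]]:
    det(lam*I - A) = lam**2 - 2*v*lam + (v**2 - g*h)."""
    return [v * v - g * h, -2 * v, Fraction(1)]

def real_characteristic_speeds(p):
    """A monic quadratic p0 + p1*lam + lam^2 has two real roots
    iff its discriminant p1**2 - 4*p0 is nonnegative."""
    p0, p1, _ = p
    return p1 * p1 - 4 * p0 >= 0

# For the SWE the discriminant equals 4*g*h, positive for any height h > 0.
```

For the moment systems considered below no such simple discriminant condition exists, which is why their hyperbolicity has to be studied case by case.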
In \cite{HSWME}, it is observed that the one-dimensional SWME lack global hyperbolicity and an instability that can be related to this loss of hyperbolicity is found in a numerical test case in which shocks are present. We therefore construct a hyperbolic regularization of the axisymmetric model before performing numerical simulations in the next section.
\subsubsection{Hyperbolicity breakdown of axisymmetric systems with radial velocity expanded}
First, the hyperbolicity loss of the axisymmetric systems with radial velocity expanded is discussed. These systems are characterized by a constant velocity profile in angular direction and are useful in situations where the angular velocity is small compared to the radial velocity.
As one example, we consider the second order axisymmetric system with radial velocity expanded, which reads
\begin{equation*}
\partial_t
\underbrace{\begin{pmatrix}
h\\[6pt]
hv_{r,m}\\[6pt]
h\alpha_1\\[6pt]
h\alpha_2\\[6pt]
hv_{\theta,m}
\end{pmatrix}}_V
+\partial_r
\underbrace{
\begin{pmatrix}
hv_{r,m}\\[6pt]
\frac{h\alpha_1^2}{3}+\frac{h\alpha_2^2}{5}+\frac{gh^2}{2}+hv_{r,m}^2\\[6pt]
\frac{4}{5}h\alpha_1\alpha_2+2h\alpha_1 v_{r,m}\\[6pt]
\frac{2}{3}h\alpha_1^2+\frac{2}{7}h\alpha_2^2+2h\alpha_2 v_{r,m}\\[6pt]
h v_{r,m} v_{\theta,m}
\end{pmatrix}}_{F_r}
=Q_r\partial_r
\begin{pmatrix}
h\\[6pt]
hv_{r,m}\\[6pt]
h\alpha_1\\[6pt]
h\alpha_2\\[6pt]
hv_{\theta,m}
\end{pmatrix}
+G(V)+S(V),
\end{equation*}
with non-conservative matrix
\begin{equation*}
Q_r=
\begin{pmatrix}
0 & 0 & 0 & 0 & 0\\[6pt]
0 & 0 & 0 & 0 & 0\\[6pt]
0 & 0 & v_{r,m}-\frac{\alpha_2}{5} & \frac{\alpha_1}{5} & 0\\[6pt]
0 & 0 & \alpha_1 & v_{r,m}+\frac{\alpha_2}{7} & 0\\[6pt]
0 & 0 & 0 & 0 & 0
\end{pmatrix},
\end{equation*}
and where \(G(V)\) and \(S(V)\) contain the forcing terms and the source terms, respectively, which can be obtained from Section \ref{section:momentequations}. For conciseness, the explicit expressions for these terms are not given here.
This results in the system matrix
\begin{equation*}
A_A^{(2,0)}=\frac{\partial F_r}{\partial V}-Q_r=
\begin{pmatrix}
v_{r,m}&h&0&0&0\\[6pt]
g+\frac{5\alpha_1^2+3\alpha_2^2}{15h}&v_{r,m}&\frac{2 \alpha_1}{3}&\frac{2\alpha_2}{5}&0\\[6pt]
\frac{4\alpha_1\alpha_2}{5h}&\alpha_1&\alpha_2+v_{r,m}&\frac{3\alpha_1}{5}&0\\[6pt]
\frac{3\alpha_2^2-7\alpha_1^2}{21h}&\alpha_2&\frac{\alpha_1}{3}&\frac{3\alpha_2}{7}+v_{r,m}&0\\[6pt]
0 & 0 & 0 & 0 & v_{r,m}
\end{pmatrix}.
\end{equation*}
The eigenvalues of the matrix \(A_A^{(2,0)}\) are of the form \(\lambda=v_{r,m}+c\sqrt{gh}\), where \(c\) is any root of the polynomial
\begin{multline*}
c^5-\frac{10\alpha_2}{7}c^4-\left( 1+\frac{6\alpha_1^2}{5}+\frac{6\alpha_2^2}{35} \right)c^3-\left(- \frac{10\alpha_2}{7}+\frac{6\alpha_1^2\alpha_2}{35}-\frac{22\alpha_2^3}{35} \right)c^2\\-\left( -\frac{\alpha_1^2}{5}-\frac{\alpha_1^4}{5}+\frac{3\alpha_2^2}{7}+\frac{6\alpha_1^2\alpha_2^2}{35}+\frac{\alpha_2^4}{35} \right)c=0,
\end{multline*}
where \(\alpha_1\) and \(\alpha_2\) have been scaled with \(\frac{1}{\sqrt{gh}}\) for readability. Figure \ref{fig:20AxisymmetricHypPlot} shows that the model loses hyperbolicity for some values of \(\alpha_1\) and \(\alpha_2\). In the red regions, the imaginary part of the eigenvalues becomes non-zero. Note the similarity with the hyperbolicity regions of the one-dimensional SWME in \cite{HSWME}.
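A purely illustrative way to probe this polynomial is exact root counting. The sketch below (assumptions: \(v_{r,m}=0\), \(g=h=1\), and \(\alpha_1,\alpha_2\) scaled as above) encodes the quintic with rational coefficients and counts its distinct real roots with a Sturm chain; five distinct real roots certify real characteristic speeds at that point of the \((\alpha_1,\alpha_2)\)-plane, while a smaller count signals complex roots or multiplicities.

```python
from fractions import Fraction

def char_poly_20(a1, a2):
    """Ascending coefficients of the quintic in c above
    (v_{r,m} = 0, g = h = 1, alpha_i already scaled by 1/sqrt(g*h))."""
    a1, a2 = Fraction(a1), Fraction(a2)
    c4 = -Fraction(10, 7) * a2
    c3 = -(1 + 6 * a1**2 / 5 + 6 * a2**2 / 35)
    c2 = -(-Fraction(10, 7) * a2 + 6 * a1**2 * a2 / 35 - 22 * a2**3 / 35)
    c1 = -(-a1**2 / 5 - a1**4 / 5 + 3 * a2**2 / 7
           + 6 * a1**2 * a2**2 / 35 + a2**4 / 35)
    return [Fraction(0), c1, c2, c3, c4, Fraction(1)]

def trim(p):
    p = list(p)
    while p and p[-1] == 0:
        p.pop()
    return p

def polydiv(p, q):
    """Quotient and remainder of p / q, ascending coefficient lists."""
    p, q = trim(p), trim(q)
    quo = [Fraction(0)] * max(len(p) - len(q) + 1, 0)
    while len(p) >= len(q):
        f, k = p[-1] / q[-1], len(p) - len(q)
        quo[k] = f
        for i in range(len(q)):
            p[k + i] -= f * q[i]
        p = trim(p)
    return quo, p

def polygcd(p, q):
    p, q = trim(p), trim(q)
    while q:
        p, q = q, polydiv(p, q)[1]
    return p

def distinct_real_roots(p):
    """Count distinct real roots of p via a Sturm chain."""
    p = trim(p)
    dp = trim([n * p[n] for n in range(1, len(p))])
    g = polygcd(p, dp)
    if len(g) > 1:                       # strip repeated factors first
        p = trim(polydiv(p, g)[0])
    chain = [p, trim([n * p[n] for n in range(1, len(p))])]
    while len(chain[-1]) > 1:
        r = polydiv(chain[-2], chain[-1])[1]
        if not r:
            break
        chain.append([-x for x in r])
    def sign_changes(minus_inf):
        signs = []
        for q in chain:
            s = q[-1] * (-1) ** (len(q) - 1) if minus_inf else q[-1]
            if s != 0:
                signs.append(1 if s > 0 else -1)
        return sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    return sign_changes(True) - sign_changes(False)
```

At equilibrium, \((\alpha_1,\alpha_2)=(0,0)\), the quintic degenerates to \(c^5-c^3\) with only three distinct real roots \(\{0,\pm 1\}\) (zero is a triple root), whereas, e.g., \((\alpha_1,\alpha_2)=(1,0)\) yields five distinct real characteristic speeds.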
\begin{figure}[ht]
\centering
\begin{subfigure}{.45\textwidth}
\includegraphics[width=\textwidth]{image/Chapters/HyperbolicityPlots/20thOrderHyperbolicityPlot_final_final.png}
\caption{Second order axisymmetric system with radial velocity expanded.}
\label{fig:20AxisymmetricHypPlot}
\end{subfigure}
\hfill
\begin{subfigure}{.45\textwidth}
\includegraphics[width=\textwidth]{image/Chapters/HyperbolicityPlots/22thOrderHyperbolicityPlot_final_final.png}
\caption{Second order axisymmetric system with full velocity expanded.}
\label{fig:22AxisymmetricHypPlot}
\end{subfigure}
\caption{Hyperbolicity region of the second order axisymmetric system with radial velocity expanded (a) and the second order axisymmetric system with full velocity expanded (b). Red regions indicate loss of hyperbolicity. Compare \cite{HSWME}.}
\label{fig:axisymmetricHypPlots}
\end{figure}
Hyperbolicity plots of the systems with radial velocity expanded up to order five have been obtained and they all display a lack of hyperbolicity in some regions. Thus, we conclude that the axisymmetric systems with radial velocity expanded are generally not globally hyperbolic. This is no surprise, because much of the structure in these systems is inherited from the one-dimensional SWME, see Section \ref{HASWMESection}.
\subsubsection{Hyperbolicity breakdown of axisymmetric systems with full velocity expanded}
The axisymmetric systems with full velocity expanded also lack global hyperbolicity. This loss already occurs in the second order system. The system matrix of the second order axisymmetric system with full velocity expanded is given by:
\begin{equation}
A_A^{(2,2)}=
\begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
d_1 & 2 v_{r,m} & \frac{2 \alpha _1}{3} & \frac{2 \alpha _2}{5} & 0 & 0 & 0 \\
d_2 & 2 \alpha _1 & v_{r,m}+\alpha _2 & \frac{3 \alpha _1}{5} & 0 & 0 & 0 \\
d_3 & 2 \alpha _2 & \frac{\alpha _1}{3} & v_{r,m}+\frac{3 \alpha _2}{7} & 0 & 0 & 0 \\
\Tilde{d}_1 & v_{\theta ,m} & \frac{\gamma _1}{3} & \frac{\gamma _2}{5} & v_{r,m} & \frac{\alpha _1}{3} & \frac{\alpha _2}{5} \\
\Tilde{d}_2 & \gamma _1 & \frac{3 \gamma _2}{5} & \frac{\gamma _1}{5} & \alpha _1 & v_{r,m}+\frac{2 \alpha _2}{5} & \frac{2\alpha_1}{5} \\
\Tilde{d}_3 & \gamma _2 & -\frac{\gamma _1}{3} & \frac{\gamma _2}{7} & \alpha _2 & \frac{2 \alpha _1}{3} & v_{r,m}+\frac{2 \alpha_2}{7}
\end{pmatrix},
\label{A22}
\end{equation}
where the entries of the first column are not shown explicitly here for conciseness. They are given in Appendix \ref{app:B}. The eigenvalues of \(A_A^{(2,2)}\) are of the form \(\lambda=v_{r,m}+c\sqrt{gh}\), where \(c\) is any root of the polynomial (with \(\alpha_1\) again scaled by \(\frac{1}{\sqrt{gh}}\); the polynomial is displayed for \(\alpha_2=0\))
\begin{equation*}
-c ^7+\left(\frac{9 \alpha _1^2}{5}+1\right) c ^5 + \left(-\frac{23 \alpha _1^4}{25}-\frac{4 \alpha _1^2}{5}\right) c^3 +\left(\frac{3 \alpha _1^6}{25}+\frac{3 \alpha _1^4}{25}\right) c=0.
\end{equation*}
Note that, unlike the matrix \(A_A^{(2,2)}\) itself, the characteristic polynomial of this matrix and the eigenvalues do not depend on \(\gamma_1\) and \(\gamma_2\)
due to the lower triangular block structure of \(A_A^{(2,2)}\).
The loss of hyperbolicity is displayed in Figure \ref{fig:22AxisymmetricHypPlot}, which shows the hyperbolicity region of the second order axisymmetric system with full velocity expanded. Note that, interestingly, both plots in Figure \ref{fig:axisymmetricHypPlots} are identical, implying that the loss of hyperbolicity is induced by the equations for the radial variables \(h,hv_{r,m},hv_{\theta,m}\) and \(h\alpha_i\), with \(i=1,\ldots,N_r\). Again, the hyperbolicity plot is seemingly identical to the hyperbolicity plot of the one-dimensional second order system \cite{HSWME}. This suggests that the lack of hyperbolicity of the two-dimensional systems is closely related to the hyperbolicity loss of the one-dimensional SWME.
\section{Hyperbolic Axisymmetric Shallow Water Moment Equations}
\label{HASWMESection}
As shown in the previous section, the ASWME clearly lack hyperbolicity. In \cite{HSWME}, the loss of hyperbolicity of the one-dimensional SWME is overcome by modifying the system matrix, based on a similar approach from kinetic theory \cite{Koellermeier2020g,Fan2016,
Koellermeier2014}. We will extend this idea to the quasi one-dimensional ASWME.
First, note that both the first-order axisymmetric system with radial velocity expanded (i.e. the (1,0)th order system) and the first-order axisymmetric system with full velocity expanded (i.e. the (1,1)th order system) are globally hyperbolic, as can easily be verified using Section \ref{section:momentequations}. The \emph{Hyperbolic Axisymmetric Shallow Water Moment Equations} (HASWME) are derived by linearizing the \((N_r,N_{\theta})\)th order system matrix \(A_{A}^{(N_r,N_{\theta})}\) around linear deviations from the constant equilibrium velocity profile,
i.e.,
\begin{equation*}
(h,v_{r,m},\alpha_1,\ldots,\alpha_{N_r},v_{\theta,m},\gamma_1,\ldots,\gamma_{N_{\theta}})\longrightarrow (h,v_{r,m},\alpha_1,0,\ldots,0,v_{\theta,m},\gamma_1,0,\ldots,0).
\end{equation*}
Practically, the higher order coefficients \(\alpha_i\) and \(\gamma_i\), with \(i \geq 2 \), are set to zero in the system matrix. This way, the HASWME system with modified system matrix \(A_{HA}\) is obtained:
\begin{equation}\label{systemFormHyperbolicAxisymmetric}
\frac{\partial V}{\partial t}+A_{HA}\frac{\partial V}{\partial r} = G(V) + S(V).
\end{equation}
The analytical forms of the hyperbolic axisymmetric systems with radial velocity expanded and the hyperbolic axisymmetric systems with full velocity expanded can be derived, allowing for deeper mathematical analysis of the HASWME model.
\begin{remark}
We note that similar to \cite{HSWME} and \cite{koellermeier_steady_2022}, different ways to perform a hyperbolic regularization exist. Additional regularization terms could be used to construct models with specific eigenvalues. However, we leave this option for potential future work and focus on the standard hyperbolic regularization method here.
\end{remark}
\subsection{Analytical form of the axisymmetric system with radial velocity expanded}
By deriving the full system matrix and then applying the hyperbolic regularization described above, we obtain the hyperbolic system matrix.
\begin{theorem}\label{20orderTheoremMatrixForm}
The HASWME system matrix \(A_{HA}^{(N_r,0)} \in \mathbb{R}^{(N_r+3)\times (N_r+3)}\) is given by
\begin{equation}\label{2systemMatrix}
A_{HA}^{(N_r,0)}=
\begin{pmatrix}
& 1 & & & & & & \\[6pt]
gh-v_{r,m}^2-\frac{1}{3}\alpha_1^2 & 2v_{r,m} & \frac{2}{3}\alpha_1 & & & & \\[6pt]
-2 v_{r,m}\alpha_1 & 2\alpha_1 & v_{r,m} & \frac{3}{5}\alpha_1 & & & & \\[6pt]
-\frac{2}{3}\alpha_1^2 & & \frac{1}{3}\alpha_1 & v_{r,m} & \ddots & & & \\[6pt]
& & & \ddots & \ddots & \frac{N_r+1}{2 N_r+1}\alpha_1 & \\[6pt]
& & & & \frac{N_r-1}{2N_r-1}\alpha_1 & v_{r,m} & \\[6pt]
-v_{r,m}v_{\theta,m} & v_{\theta,m} & & & & & v_{r,m}
\end{pmatrix},
\end{equation}
where all other entries are zero.
\end{theorem}
\begin{proof}
The proof can be found in Appendix \ref{ch:hyperbolicityproof(N,0)}.
\end{proof}
The structure of the system matrix of the hyperbolic axisymmetric systems with radial velocity expanded is very similar to the structure of the one-dimensional \(N_r\)th order system matrix. There is one additional equation (for \(hv_{\theta,m}\)) and one additional partial derivative with respect to \(hv_{\theta,m}\), compare \cite{SWME}. This means that the results in \cite{HSWME} can be used to investigate the hyperbolicity of the hyperbolic axisymmetric systems with radial velocity expanded. In particular, the following theorem yields the characteristic polynomial in analytical form.
\begin{theorem}\label{20OrderCharPol}
The HASWME system matrix \(A_{HA}^{(N_r,0)}\in \mathbb{R}^{(N_r+3)\times (N_r+3)}\) has the following characteristic polynomial:
\begin{equation}\label{2charpol}
\chi_{A_{HA}^{(N_r,0)}}(\lambda)=(\lambda-v_{r,m})\cdot\left( (\lambda -v_{r,m})^2-gh-\alpha_1^2 \right)\cdot \chi_{A_2^{(N_r,0)}}(\lambda -v_{r,m}),
\end{equation}
where \(A_2^{(N_r,0)}\in \mathbb{R}^{N_r\times N_r}\) is defined as
\begin{equation}
A_2^{(N_r,0)}=
\begin{pmatrix}
& c_1 & & \\[6pt]
a_2 & & \ddots & \\[6pt]
& \ddots & & c_{N_r-1} \\[6pt]
& & a_{N_r} &
\end{pmatrix},
\end{equation}
where the entries of $A_2^{(N_r,0)}$ are given by
\begin{align}
a_i&=\frac{i-1}{2i-1}\alpha_1 \qquad i=2,\ldots,N_r,\\[5pt]
c_i&=\frac{i+2}{2i+3}\alpha_1 \qquad i=1,\ldots,N_r-1.
\end{align}
\end{theorem}
\begin{proof}
The proof can be found in Appendix \ref{ch:hyperbolicityproof(N,0)}.
\end{proof}
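Theorem \ref{20OrderCharPol} can be spot-checked with exact rational arithmetic. The Python sketch below (illustrative only; the sample values are arbitrary rationals) assembles \(A_{HA}^{(2,0)}\) from Theorem \ref{20orderTheoremMatrixForm}, evaluates \(\det(\lambda I-A_{HA}^{(2,0)})\) by Gaussian elimination over \(\mathbb{Q}\), and compares it with the claimed factorization, using \(\chi_{A_2^{(2,0)}}(\mu)=\mu^2-\frac{1}{5}\alpha_1^2\); agreement at six interpolation points identifies the two monic degree-five polynomials.

```python
from fractions import Fraction

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            sign = -sign
        d *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
    return sign * d

def A_HA_20(g, h, v, a1, vt):
    """A_HA^{(2,0)} for variables (h, h v_r, h a1, h a2, h v_theta)."""
    F = Fraction
    rows = [[0, 1, 0, 0, 0],
            [g * h - v * v - a1 * a1 / 3, 2 * v, F(2, 3) * a1, 0, 0],
            [-2 * v * a1, 2 * a1, v, F(3, 5) * a1, 0],
            [-F(2, 3) * a1 * a1, 0, a1 / 3, v, 0],
            [-v * vt, vt, 0, 0, v]]
    return [[F(x) for x in row] for row in rows]

g, h, v, a1, vt = map(Fraction, (2, 3, "1/2", "5/7", "1/3"))

def chi(lam):
    """det(lam*I - A), evaluated exactly."""
    A = A_HA_20(g, h, v, a1, vt)
    M = [[(lam if i == j else 0) - A[i][j] for j in range(5)]
         for i in range(5)]
    return det(M)

def chi_claimed(lam):
    """(lam - v) ((lam - v)^2 - g h - a1^2) ((lam - v)^2 - a1^2 / 5)."""
    mu = lam - v
    return mu * (mu * mu - g * h - a1 * a1) * (mu * mu - a1 * a1 / 5)
```

The same comparison can be repeated for higher orders by extending the matrix template and the tridiagonal factor \(A_2^{(N_r,0)}\).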
Finally, it is proved that all the hyperbolic axisymmetric systems with radial velocity expanded have real eigenvalues.
\begin{theorem}\label{theorem:eigenvaluesN0}
The eigenvalues of the modified hyperbolic axisymmetric system matrix \(A_{HA}^{(N_r,0)} \in \mathbb{R}^{(N_r+3)\times (N_r+3)}\) are the real numbers
\begin{align}
\lambda_1&=v_{r,m},\\[5pt]
\lambda_{2,3}&=v_{r,m}\pm \sqrt{gh+\alpha_1^2},\\[5pt]\label{eigenvalues_n0}
\lambda_{i+3}&=v_{r,m}+b_i\cdot\alpha_1, \qquad i=1,\ldots,N_r,
\end{align}
with \(b_i\cdot \alpha_1\) the real eigenvalues of \(A_2^{(N_r,0)}\) from Theorem \ref{20OrderCharPol}, where all \(b_i\) are pairwise distinct.
\end{theorem}
\begin{proof}
The proof can be found in Appendix \ref{ch:hyperbolicityproof(N,0)}.
\end{proof}
In comparison with the eigenvalues of the \(N_r\)th order one-dimensional HSWME, see \cite{HSWME}, \(\lambda_1=v_{r,m}\) is the only new eigenvalue. It arises from the additional equation for \(hv_{\theta,m}\) in the system matrix of the axisymmetric equations, which is otherwise closely related to the one-dimensional HSWME \cite{HSWME}.
\begin{remark}\label{remark:Nr_odd}
Note that for the \(N_r\)th order systems with radial velocity expanded with \(N_r\) odd, there is an eigenvalue \(\lambda=v_{r,m}\) with algebraic multiplicity two. It remains to show that this eigenvalue also has geometric multiplicity two, so that the system matrix is real diagonalizable. In all explicitly computed cases (up to order \((5,0)\)), this was the case. A general proof for the special case of \(N_r\) odd is left for future work.
\end{remark}
\begin{corollary}\label{corollary:hyperbolicityN0}
If \(\alpha_1\neq 0\) and if \(N_r\) is even, the \(N_r\)th order HASWME with radial velocity expanded are globally hyperbolic.
\end{corollary}
\begin{proof}
This follows immediately from Theorem \ref{theorem:eigenvaluesN0}.
\end{proof}
The condition that \(N_r\) is even excludes the case of odd \(N_r\) discussed in Remark \ref{remark:Nr_odd}, for which real diagonalizability remains to be proven. The first condition, \(\alpha_1 \neq 0\), is a subtle observation and has to do with the multiplicity of the eigenvalues. This will be discussed in more detail in the next section.
\subsection{Analytical form of the axisymmetric system with full velocity expanded}
The structure of the hyperbolic axisymmetric systems with radial velocity expanded, and thus of the one-dimensional systems in particular, facilitates the derivation of the analytical form of the hyperbolic axisymmetric systems with full velocity expanded. Since there are \(N\) additional moment equations and \(N\) additional derivatives to be considered, the derivation is more complex than for the systems with radial velocity expanded. The system matrix of the hyperbolic axisymmetric systems with full velocity expanded again shows a lot of structure.
\begin{theorem}\label{NNOrderTheoremMatrixForm}
The HASWME system matrix \(A_{HA}^{(N,N)} \in \mathbb{R}^{(2N+3)\times (2N+3)}\) is given by
\begin{equation}\label{22systemMatrix}
A_{HA}^{(N,N)}=
\begin{pmatrix}
\mathbf{A}^{(N,N)} & \mathbf{0}^{(N,N)} \\
\mathbf{B}^{(N,N)} & \mathbf{C}^{(N,N)}
\end{pmatrix},
\end{equation}
where
\begin{equation*}
\mathbf{A}^{(N,N)}=
\begin{pmatrix}
& 1 & & & & \\[6pt]
gh-v_{r,m}^2-\frac{1}{3}\alpha_1^2 & 2v_{r,m} & \frac{2}{3}\alpha_1 & & & \\[6pt]
-2 v_{r,m}\alpha_1 & 2\alpha_1 & v_{r,m} & \frac{3}{5}\alpha_1 & & & \\[6pt]
-\frac{2}{3}\alpha_1^2 & & \frac{1}{3}\alpha_1 & v_{r,m} & \ddots & & \\[6pt]
& & & \ddots & \ddots & \frac{N+1}{2 N+1}\alpha_1 \\[6pt]
& & & & \frac{N-1}{2N-1}\alpha_1 & v_{r,m}
\end{pmatrix},
\end{equation*}
\begin{equation*}
\mathbf{B}^{(N,N)}=
\begin{pmatrix}
-v_{r,m}v_{\theta,m} -\frac{\alpha_1\gamma_1}{3} & v_{\theta,m} & \frac{\gamma_1}{3} & & & \\[6pt]
-v_{r,m}\gamma_1-v_{\theta,m}\alpha_1 & \gamma_1 & & \frac{2}{5}\gamma_1 & & \\[6pt]
-\frac{2}{3}\alpha_1\gamma_1 & & -\frac{1}{3}\gamma_1 & & \ddots & \\[6pt]
& & & \ddots & \ddots & \frac{N}{2N+1}\gamma_1 \\[6pt]
& & & & -\frac{1}{2N-1}\gamma_1 &
\end{pmatrix},
\end{equation*}
\begin{equation*}
\mathbf{C}^{(N,N)}=
\begin{pmatrix}
v_{r,m} & \frac{\alpha_1}{3} & & & & \\[6pt]
\alpha_1 & v_{r,m} & \frac{2}{5}\alpha_1 & & & \\[6pt]
& \frac{2}{3}\alpha_1 & v_{r,m} & \ddots & & \\[6pt]
& & \ddots & \ddots & \frac{N}{2N+1}\alpha_1 \\[6pt]
& & & \frac{N}{2N-1}\alpha_1 & v_{r,m}
\end{pmatrix},
\end{equation*}
with \(\mathbf{A}^{(N,N)} \in \mathbb{R}^{(N+2)\times (N+2)}\), \(\mathbf{B}^{(N,N)}\in \mathbb{R}^{(N+1)\times (N+2)}\) and \(\mathbf{C}^{(N,N)} \in \mathbb{R}^{(N+1)\times (N+1)}\) and where all other entries are zero. \(\mathbf{0}^{(N,N)} \in \mathbb{R}^{(N+2)\times (N+1)}\) is a zero matrix.
\end{theorem}
\begin{proof}
The proof can be found in Appendix \ref{ch:hyperbolicityproof(N,N)}.
\end{proof}
In particular, the system matrix is a block lower triangular matrix, simplifying the computation of the characteristic polynomial and the eigenvalues below.
\begin{theorem}\label{22OrderCharPol}
The HASWME system matrix \(A_{HA}^{(N,N)}\in \mathbb{R}^{(2N+3)\times (2N+3)}\) has the following characteristic polynomial:
\begin{equation}\label{22charpol}
\chi_{A_{HA}^{(N,N)}}(\lambda)=\left( (\lambda -v_{r,m})^2-gh-\alpha_1^2 \right)\cdot \chi_{A_2^{(N,N)}}(\lambda -v_{r,m})\cdot\chi_{A_3^{(N,N)}}(\lambda),
\end{equation}
where \(A_2^{(N,N)}\in \mathbb{R}^{N\times N}\) and \(A_3^{(N,N)}\in \mathbb{R}^{(N+1)\times (N+1)}\) are defined as
\begin{equation*}
A_2^{(N,N)}=
\begin{pmatrix}
& c_1 & & \\[6pt]
a_2 & & \ddots & \\[6pt]
& \ddots & & c_{N-1} \\[6pt]
& & a_{N} &
\end{pmatrix}, \qquad
A_3^{(N,N)}=
\begin{pmatrix}
v_{r,m} & \frac{\alpha_1}{3} & & \\[6pt]
\alpha_1 & v_{r,m} & g_1 & & \\[6pt]
& f_2 & v_{r,m} & \ddots & \\[6pt]
& & \ddots & \ddots & g_{N-1} \\[6pt]
& & & f_N & v_{r,m}
\end{pmatrix}
\end{equation*}
with
\begin{align*}
a_i&=\frac{i-1}{2i-1}\alpha_1, \quad f_i=\frac{i}{2i-1}\alpha_1, \qquad i=2,\ldots,N,\\[5pt]
c_i&=\frac{i+2}{2i+3}\alpha_1, \quad g_i=\frac{i+1}{2i+3}\alpha_1, \qquad i=1,\ldots,N-1.
\end{align*}
\end{theorem}
\begin{proof}
The proof can be found in Appendix \ref{ch:hyperbolicityproof(N,N)}.
\end{proof}
Note that the matrix \(A_2^{(N,N)}\) in Theorem \ref{22OrderCharPol} is the same matrix as defined in Theorem \ref{20OrderCharPol} but with a slightly different notation adjusted to the different setting.
\begin{theorem}\label{theorem:eigenvaluesNN}
The eigenvalues of the modified axisymmetric hyperbolic system matrix \(A_{HA}^{(N,N)} \in \mathbb{R}^{(2N+3)\times (2N+3)}\) are the real numbers
\begin{align}
\lambda_{1,2}&=v_{r,m}\pm \sqrt{gh+\alpha_1^2},\\[5pt]
\label{eigenvalues1}\lambda_{i+2}&=v_{r,m}+b_i\alpha_1, \qquad i=1,\ldots,N,\\[5pt]
\label{eigenvalues2}\lambda_{i+2+N}&=v_{r,m}+s_i\alpha_1,\qquad i=1,\ldots,N+1,
\end{align}
with \(b_i\alpha_1\), \(i=1,\ldots,N\), the real eigenvalues of \(A_2^{(N,N)}\) and \(v_{r,m}+s_i\alpha_1\), \(i=1,\ldots,N+1\), the real eigenvalues of \(A_3^{(N,N)}\) from Theorem \ref{22OrderCharPol}, where all the \(b_i\) and all the \(s_i\) are pairwise distinct.
\end{theorem}
\begin{proof}
The proof can be found in Appendix \ref{ch:hyperbolicityproof(N,N)}.
\end{proof}
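As an illustrative cross-check of Theorems \ref{NNOrderTheoremMatrixForm}--\ref{theorem:eigenvaluesNN}, the matrix \(A_{HA}^{(2,2)}\) can be assembled explicitly for \(N=2\) and the closed-form spectrum verified numerically. The following pure-Python sketch is not part of the paper's code and uses arbitrary state values; for \(N=2\) the eigenvalue factors specialize to \(b_i=\pm 1/\sqrt{5}\) and \(s_i\in\{0,\pm\sqrt{3/5}\}\).

```python
import math

def det(M):
    """Determinant via Laplace expansion along the first row (fine for 7x7)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j, entry in enumerate(M[0]):
        if entry == 0.0:
            continue
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

# arbitrary admissible state (not from the paper); alpha_1 != 0
g, h, vr, vth, a1, g1 = 9.81, 2.0, 0.4, 0.3, 0.7, 0.2

A = [[0.0, 1.0, 0.0, 0.0],
     [g * h - vr**2 - a1**2 / 3, 2 * vr, 2 * a1 / 3, 0.0],
     [-2 * vr * a1, 2 * a1, vr, 3 * a1 / 5],
     [-2 * a1**2 / 3, 0.0, a1 / 3, vr]]
# block B does not influence the spectrum (block lower triangular system matrix)
B = [[-vr * vth - a1 * g1 / 3, vth, g1 / 3, 0.0],
     [-vr * g1 - vth * a1, g1, 0.0, 2 * g1 / 5],
     [-2 * a1 * g1 / 3, 0.0, -g1 / 3, 0.0]]
C = [[vr, a1 / 3, 0.0],
     [a1, vr, 2 * a1 / 5],
     [0.0, 2 * a1 / 3, vr]]

M = [rowA + [0.0, 0.0, 0.0] for rowA in A] + [rB + rC for rB, rC in zip(B, C)]

def char(lam):
    """Evaluate det(A_HA - lam*I) for the assembled 7x7 matrix."""
    return det([[M[i][j] - (lam if i == j else 0.0) for j in range(7)]
                for i in range(7)])

# closed-form eigenvalues for N=2: b_i = +-1/sqrt(5), s_i in {0, +-sqrt(3/5)}
eigs = [vr + math.sqrt(g * h + a1**2), vr - math.sqrt(g * h + a1**2),
        vr + a1 / math.sqrt(5), vr - a1 / math.sqrt(5),
        vr, vr + a1 * math.sqrt(3 / 5), vr - a1 * math.sqrt(3 / 5)]
for lam in eigs:
    assert abs(char(lam)) < 1e-6
```

All seven closed-form values annihilate the characteristic polynomial, while generic test values do not, in agreement with the factorization of Theorem \ref{22OrderCharPol}.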
\begin{remark}\label{remark:multiplicity_NNHASWME}
According to Theorem \ref{theorem:eigenvaluesNN}, the eigenvalues of the system matrix of a hyperbolic axisymmetric system with full velocity expanded are real. However, for vanishing first coefficient \(\alpha_1\), all eigenvalues \eqref{eigenvalues1} and \eqref{eigenvalues2} are equal to \(v_{r,m}\). To have hyperbolicity in this case, the eigenspace associated with this eigenvalue needs to have the same dimension as its multiplicity. The same observation is made for the hyperbolic axisymmetric systems with radial velocity expanded, see Theorem \ref{theorem:eigenvaluesN0}; the eigenvalues \eqref{eigenvalues_n0} collide when \(\alpha_1=0\). A direct numerical computation reveals that for \(\alpha_1=0\) the hyperbolic axisymmetric systems with radial velocity expanded remain hyperbolic at least up to order \(N_r=5\), whereas the hyperbolic axisymmetric systems with full velocity expanded are not hyperbolic for orders \(N=1,2,3,4,5\).
\end{remark}
Remark \ref{remark:multiplicity_NNHASWME} suggests a condition for the hyperbolic axisymmetric systems with full velocity expanded to be hyperbolic.
\begin{corollary}\label{corollary:NNhyperbolicity}
If \(\alpha_1 \neq 0\), the \(N\)th order HASWME with full velocity expanded are globally hyperbolic.
\end{corollary}
\begin{proof}
This follows immediately from Theorem \ref{theorem:eigenvaluesNN}.
\end{proof}
\begin{remark}
The construction of the \(N_r\)th order systems with radial velocity expanded and the \(N\)th order systems with full velocity expanded allows for a generalization to systems with order of expansion \(N_r\) in radial direction and order of expansion \(N_{\theta}\) in angular direction, with \(N_r, N_{\theta} \geq 1\) not necessarily equal. A straightforward extension of the theory is that the system matrix will always be a block lower triangular matrix, facilitating the computation of the characteristic polynomial considerably. Denoting the blocks of the system matrix by
\begin{equation*}
\mathbf{A}_{sys}=
\begin{pmatrix}
\mathbf{A} & \mathbf{0} \\[5pt]
\mathbf{B} & \mathbf{C}
\end{pmatrix},
\end{equation*}
the matrix \(\mathbf{A}\) corresponds to the system matrix of the one-dimensional HSWME \cite{HSWME}. Moreover, the dimension of this matrix is only determined by the radial order \(N_r\). Since the system matrix is block lower triangular, block \(\mathbf{B}\), which contains the derivatives with respect to \(h, v_{r,m}\) and \(\alpha_i\), \(i \in \{1,\ldots,N_r\}\), that appear in the angular momentum balance equation and the equations for \(\gamma_i\), \(i \in \{1,\ldots,N_{\theta}\}\), does not appear in the calculation of the characteristic polynomial. Block \(\mathbf{C}\) contains the derivatives with respect to \(v_{\theta,m}\) and \(\gamma_i\), \(i \in \{1,\ldots,N_{\theta}\}\), that appear in the angular momentum balance equation and the equations for \(\gamma_i\). Therefore, the dimension of block \(\mathbf{C}\) only depends on the angular order \(N_{\theta}\). It follows that the matrix \(\mathbf{C}\) is precisely the matrix \(\mathbf{C}^{(N_\theta,N_\theta)}\) defined in Theorem \ref{NNOrderTheoremMatrixForm}. In particular, this means that the hyperbolicity theorems and proofs given in Appendices \ref{ch:hyperbolicityproof(N,0)} and \ref{ch:hyperbolicityproof(N,N)} can be easily generalized to systems with order \(N_r\) in radial direction and order \(N_{\theta}\) in angular direction.
\end{remark}
\begin{remark}
The stability properties of the HASWME \eqref{systemFormHyperbolicAxisymmetric} are not solely determined by the system matrix representing the transport part, but also by the dissipation part of the model, which is encoded in the right-hand side source terms \cite{stabilityConditions,equilibriumStability}.
An equilibrium stability analysis of the one-dimensional HSWME is performed in \cite{equilibriumStability}, in which equilibrium manifolds of the one-dimensional models are derived and in which a set of stability conditions, proposed in \cite{stabilityConditions}, is verified for each of these manifolds.
For the HASWME, the forcing terms denoted by \(G(V)\) in \eqref{systemFormHyperbolicAxisymmetric} pose mathematical difficulties for the analytical derivation of explicit expressions for the equilibrium manifolds. The equilibrium analysis is therefore left for future work.
\end{remark}
\section{Introduction}
The Shallow Water Equations (SWE) are a set of partial differential equations that describe fluid flows for which the horizontal length scale is much larger than the vertical length scale. Applications can be found in a wide range of scientific fields, such as weather forecasting \cite{weatherForecasting} and free-surface flows like tsunami modelling \cite{tsunamiHakata}.
A crucial feature of the SWE is that the lateral velocity field is constant over the vertical position variable. This is a severe simplification and renders the SWE inaccurate in applications such as dam-break floods \cite{inaccuracy_dambreak} and tsunamis \cite{inaccuracy_tsunami}. A typical water flow situation is when the velocity at the bottom of a river is smaller than at the top because of the interaction with the bottom, e.g., caused by friction. Another example is a scenario in which a strong wind blows at the surface of a parcel of water, causing the top layers of the water to display different velocity fields than the water in the middle layer of the parcel.
These applications clearly show that a more flexible system of equations is needed for the modelling of flows with a complex motion. For this reason, the so-called \emph{Shallow Water Moment Equations (SWME)} were derived in \cite{SWME}. These equations allow vertical variability in the lateral velocities. The SWME are obtained using the method of moments, which includes an expansion of the lateral velocity in a polynomial basis and a subsequent Galerkin projection to obtain evolution equations for the expansion coefficients. The new system of equations proved to be more accurate than the SWE in numerical simulations \cite{SWME}. The model was recently extended to the non-hydrostatic case in \cite{Scholz_submitted} and generalized to include multi-layer models similar to \cite{fernandez-nieto2016} and \cite{Garres-Diaz2023}.
When the goal is to model flows in rivers or oceans, complex propagation speeds are nonphysical, because waves propagate with real and finite speeds. Unfortunately, already the one-dimensional SWME are not globally hyperbolic \cite{HSWME}, leading to propagation speeds with non-zero imaginary part. The loss of hyperbolicity can lead to instabilities in numerical test cases \cite{HSWME,lossOfHyperbolicity}. This lack of hyperbolicity motivated the derivation of a hyperbolic regularization, the so-called \emph{Hyperbolic Shallow Water Moment Equations (HSWME)} \cite{HSWME}, by modifying the system matrix based on similar approaches from kinetic theory \cite{Koellermeier2020g,Fan2016,Koellermeier2014}, thus guaranteeing global hyperbolicity. The hyperbolicity of the HSWME was proved in one spatial dimension and numerical simulations of the one-dimensional HSWME yielded accurate results \cite{HSWME}. Up to now, there is no two-dimensional hyperbolic version of the SWME, to the authors' best knowledge.
The goal of this paper is to derive and analyze moment equations for shallow flows in a cylindrical coordinate system. The SWE formulated in cylindrical coordinates are widely used. The reason for this is that for many classical applications such as tsunamis \cite{tsunamiHakata,deb_roy_nonlinear_2007} and tropical cyclones \cite{tropicalCyclones}, a system expressed in cylindrical coordinates is more appropriate. From a mathematical point of view, a coordinate transformation can also be beneficial. The Cartesian SWME are not rotationally invariant and this can be inconvenient as it complicates the computation of the system matrix and the characteristic speeds. By transforming the system to cylindrical coordinates \((r,\theta,z)\), it is intrinsically easier to impose rotational invariance. Rotational invariance is then equivalent to requiring that the flow properties do not depend on the angular variable \(\theta\). In this way, axisymmetric SWE are classically obtained \cite{axisymmetricGravityCurrents,selfSimilar}.
The extension to axisymmetric SWME in this paper suffers from the same loss of hyperbolicity that is observed in the one-dimensional case.
Analogously to the one-dimensional HSWME in \cite{HSWME}, \emph{Hyperbolic Axisymmetric Shallow Water Moment Equations (HASWME)} are derived in this paper to overcome the hyperbolicity problem. The new axisymmetric models are pseudo-two-dimensional: there is propagation of information in one direction, but the model describes two-dimensional flow. In this way, the axisymmetric model can be seen as the next step towards the analysis of a full two-dimensional model.
First, we prove that the new model is hyperbolic and compute the propagation speeds. Next, we perform numerical tests for discontinuous and smooth initial conditions to investigate the accuracy of the models. In particular, we show that the hyperbolic regularization results in a numerical solution with fewer oscillations in a dam break situation. For this test case, it is shown that the hyperbolic model is more accurate for short time horizons. Further, we show that the approximation error decreases when the order increases for smooth initial data, thus indicating convergence.
The main contributions of this paper are the derivation of the axisymmetric SWME, its analysis revealing the loss of hyperbolicity and the subsequent derivation of the hyperbolic model called HASWME including a hyperbolicity proof for the case with arbitrary number of moments.
The remaining part of this paper is structured as follows: axisymmetric SWME are derived in Section \ref{ASWME} by first transforming the Cartesian system to cylindrical coordinates, expanding the velocity variables, and then projecting the reference system onto basis functions. The loss of hyperbolicity is shown in several examples. In Section \ref{HASWMESection}, the new hyperbolic axisymmetric moment model HASWME is presented and analyzed, including the hyperbolicity proofs for two versions of the model.
Numerical simulations of the axisymmetric model are presented in Section \ref{Sec:numerics}. The paper ends with a brief conclusion.
\section{Numerical Simulation}
\label{Sec:numerics}
In this section, we first show that the HASWME overcome the stability problems of the ASWME by considering a dam break situation. Then, we demonstrate that the ASWME and the HASWME yield accurate results for a test case with smooth initial conditions. Moreover, we observe that, in general, the error decreases with increasing order \(N\) throughout all models. The simulations are performed using the axisymmetric systems with full velocity expanded up to order \(N=4\). Simulations with axisymmetric moment models with radial velocity expanded are left to future work.
Numerical simulations were performed with the ASWME and the HASWME. An important feature of the cylindrical SWME that has to be dealt with is the forcing terms, because they contain a factor \(1/r\), which causes a singularity for \(r \to 0\). There are two ways to overcome this problem: the first option is to multiply the reference equations \eqref{refsystemcylindrical1}-\eqref{refsystemcylindrical3} by \(r\) and then proceed in the same way as before; the second option is to exclude the singularity \(r\to 0\) from the computational domain. In this paper, we opted for the second approach and performed the numerical simulations on meshes whose \(r\)-domain is bounded away from the origin.
We note that the runtime of the HASWME is smaller than that of the ASWME, as the number of nonlinear terms is reduced by the hyperbolic modification of the system matrix.
The numerical solutions were computed using the first order non-conservative PRICE scheme and the software framework also employed for the simulations in \cite{koellermeier_dissertation_2017}. The reference solutions were obtained using the software accompanying \cite{SWME}.
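The PRICE scheme is a centred-type finite-volume method for nonconservative systems of the form \(\partial_t V + A(V)\,\partial_r V = G(V)\). As a hedged illustration of the general idea (not the PRICE variant actually used for the simulations), the following sketch implements a first-order Rusanov-type nonconservative update, demonstrated on scalar linear advection with constant speed:

```python
import math

def step(q, a, dx, dt):
    """One first-order update for q_t + a q_x = 0 with periodic boundaries:
    central evaluation of the nonconservative product plus Rusanov dissipation."""
    n = len(q)
    alpha = abs(a)  # maximal wave speed
    out = []
    for i in range(n):
        l, r = q[(i - 1) % n], q[(i + 1) % n]
        out.append(q[i]
                   - dt / (2 * dx) * a * (r - l)
                   + dt * alpha / (2 * dx) * (r - 2 * q[i] + l))
    return out

nx, a = 100, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / abs(a)  # CFL number 0.5
# smooth bump initially centered at x = 0.3, advected to the right
q = [math.exp(-100 * ((i + 0.5) * dx - 0.3) ** 2) for i in range(nx)]
mass0 = sum(q) * dx
for _ in range(50):
    q = step(q, a, dx, dt)
assert abs(sum(q) * dx - mass0) < 1e-12  # mass is conserved on a periodic grid
```

For the constant-coefficient scalar case with \(\alpha=|a|\), this update reduces to the classical upwind scheme; a system version applies the analogous formula with the full matrix \(A(V)\) in place of \(a\).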
\subsection{Radial dam break}
First, a test case with a discontinuous initial height profile is considered. The numerical setup can be found in Table \ref{table:setup_RadialDamBreak}. This setup is very similar to the 1D test case with discontinuous initial data in \cite{HSWME}. The initial radial velocity is a cubic polynomial in the scaled vertical variable \(\zeta\), while the initial angular velocity is zero at every point in the domain. The initial height function corresponds to a scenario with a circular dam in the middle of the flow. The simulations are performed using the third order axisymmetric model with full velocity expanded and the third order hyperbolic axisymmetric model with full velocity expanded. Simulations with end times \(t=0.1\) and \(t=0.3\) are performed and the results are plotted in Figure \ref{fig:RadialDamBreak}. The values of the variables are not plotted for the whole domain, but only in the part of the domain in which there are significant differences between the different approximations.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ | m{10em} || m{6.5cm} | }
\hline
\(\nu\) & \(\nu=0.1\) \\[4pt]
\hline
\(\lambda\) & \(\lambda=0.1\) \\[4pt] \hline
Spatial domain & \(r\in [10,20], \quad \theta \in [0,2\pi]\) \\[4pt]
\hline
\(t_{end}\) & \(t_{end}\in \{0.1,0.3\}\) \\[4pt]
\hline
\(h(r,\theta,0)\) & \[
h(r,\theta,0) =
\begin{cases}
5.0, & r \leq 14.0 \\
1.0, & r > 14.0
\end{cases}
\] \\[4pt]
\hline
\(v_r(r,\theta,\zeta,0)\) & \(v_r(r,\theta,\zeta,0)=0.25-2.5\zeta+7.5\zeta^2-5\zeta^3\) \\[4pt]
\hline
\(v_{\theta}(r,\theta,\zeta,0)\) & \(v_{\theta}(r,\theta,\zeta,0)=0\)\\[4pt]
\hline
Time integration & Forward Euler \\[4pt]
\hline
Spatial discretization & PRICE scheme \\[4pt]
\hline
CFL number & 0.1 \\[4pt]
\hline
\end{tabular}
\caption{Numerical setup for radial dam break scenario.}
\label{table:setup_RadialDamBreak}
\end{center}
\end{table}
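For reference, the moment coefficients corresponding to the cubic initial velocity profile in Table \ref{table:setup_RadialDamBreak} can be recovered by projection onto a scaled Legendre basis. The sketch below is illustrative: it assumes shifted Legendre polynomials \(\phi_j\) on \([0,1]\) normalized by \(\int_0^1 \phi_j^2\,\mathrm{d}\zeta = 1/(2j+1)\); the sign convention of \(\phi_j\) may differ from the one used in the paper.

```python
def phi(j, z):
    """Shifted Legendre polynomials on [0, 1] (one common sign convention)."""
    return {0: 1.0,
            1: 1.0 - 2.0 * z,
            2: 1.0 - 6.0 * z + 6.0 * z**2,
            3: 1.0 - 12.0 * z + 30.0 * z**2 - 20.0 * z**3}[j]

def vr0(z):
    """Initial radial velocity profile from the dam-break setup."""
    return 0.25 - 2.5 * z + 7.5 * z**2 - 5.0 * z**3

def project(j, n=20000):
    """alpha_j = (2j+1) * int_0^1 vr0(z) phi_j(z) dz via the midpoint rule."""
    s = sum(vr0((k + 0.5) / n) * phi(j, (k + 0.5) / n) for k in range(n))
    return (2 * j + 1) * s / n

vm = project(0)                              # depth-averaged velocity: 0.25
alphas = [project(j) for j in range(1, 4)]   # first three moment coefficients
```

With this convention the profile has mean \(v_{r,m}=0.25\), \(\alpha_2=0\) and \(|\alpha_1|=|\alpha_3|=0.25\), so only two moments are active initially.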
Figures \ref{fig:radialDamBreak_h_Time0.1} and \ref{fig:radialDamBreak_h_Time0.3} show that the ASWME and the HASWME yield similar and accurate results for the water height \(h\). The same can be observed for the radial mean velocity \(v_{r,m}\), shown in Figures \ref{fig:radialDamBreak_vr_Time0.1} and \ref{fig:radialDamBreak_vr_Time0.3}. For the approximation of the first coefficient \(\alpha_1\), the ASWME and the HASWME give qualitatively different results, as seen in Figures \ref{fig:radialDamBreak_alpha1_Time0.1} and \ref{fig:radialDamBreak_alpha1_Time0.3}. At time \(t=0.1\), the HASWME outperform the ASWME, as the latter model displays an oscillation that is not present in the reference solution. This is in agreement with the numerical results obtained in \cite{HSWME}, in which an instability was observed in the numerical simulation of the 1D SWME. However, at time \(t=0.3\), which was not reported for the comparable test case in \cite{HSWME}, both the ASWME and the HASWME approximations deviate considerably from the reference solution, so it is not straightforward to argue whether one model is more accurate than the other. In conclusion, the hyperbolic model is preferred at shorter times, while it is not clear which model is preferred at larger times.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/RadialDamBreakUnstable/ASWMEPaper_RadialDamBreakUnstable_ModelComparison_h_Time0.1_SubDomain.pdf}
\caption{\(h\) at time \(t=0.1\)}
\label{fig:radialDamBreak_h_Time0.1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/RadialDamBreakUnstable/ASWMEPaper_RadialDamBreakUnstable_ModelComparison_h_Time0.3_SubDomain.pdf}
\caption{\(h\) at time \(t=0.3\)}
\label{fig:radialDamBreak_h_Time0.3}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/RadialDamBreakUnstable/ASWMEPaper_RadialDamBreakUnstable_ModelComparison_vr_Time0.1_SubDomain.pdf}
\caption{\(v_{r,m}\) at time \(t=0.1\)}
\label{fig:radialDamBreak_vr_Time0.1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/RadialDamBreakUnstable/ASWMEPaper_RadialDamBreakUnstable_ModelComparison_vr_Time0.3_SubDomain.pdf}
\caption{\(v_{r,m}\) at time \(t=0.3\)}
\label{fig:radialDamBreak_vr_Time0.3}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/RadialDamBreakUnstable/ASWMEPaper_RadialDamBreakUnstable_ModelComparison_alpha1_Time0.1_SubDomain.pdf}
\caption{\(\alpha_1\) at time \(t=0.1\)}
\label{fig:radialDamBreak_alpha1_Time0.1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/RadialDamBreakUnstable/ASWMEPaper_RadialDamBreakUnstable_ModelComparison_alpha1_Time0.3_SubDomain.pdf}
\caption{\(\alpha_1\) at time \(t=0.3\)}
\label{fig:radialDamBreak_alpha1_Time0.3}
\end{subfigure}
\caption{Test case with discontinuous initial height profile for the third order ASWME and HASWME with full velocity expanded at times \(t=0.1\) and \(t=0.3\). The ASWME and the HASWME yield similar results for \(h\) and \(v_{r,m}\), but differ visibly in the approximation of \(\alpha_1\).}
\label{fig:RadialDamBreak}
\end{figure}
\subsection{Smooth initial height profile}
Next, we consider a test case with a smooth initial height profile. The numerical setup is given in Table \ref{table:setup_Smooth}. The setup is closely related to the 1D smooth test case considered in \cite{HSWME}, but a slightly different initial height function and a different initial velocity profile are used. The initial height function takes the form of a reverse sigmoid function. The initial radial velocity is zero everywhere, while the initial angular velocity is \(v_{\theta}(r,\theta,\zeta,0)=0.5\) and does not depend on the spatial variables.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ | m{10em} || m{4.5cm} | }
\hline
\(\nu\) & \(\nu=1\) \\[4pt]
\hline
\(\lambda\) & \(\lambda=0.1\) \\[4pt] \hline
Spatial domain & \(r\in [10,20], \quad \theta \in [0,2\pi]\) \\[4pt]
\hline
\(t_{end}\) & \(t_{end}=0.5\) \\[5pt]
\hline
\(h(r,\theta,0)\) &
\(h(r,\theta,0) = 1 + \frac{4}{1+e^{2(r-14)}}\) \\[4pt]
\hline
\(v_r(r,\theta,\zeta,0)\) & \(v_r(r,\theta,\zeta,0)=0\) \\[4pt]
\hline
\(v_{\theta}(r,\theta,\zeta,0)\) & \(v_{\theta}(r,\theta,\zeta,0)=0.5\)\\[4pt]
\hline
Time integration & Forward Euler \\[4pt]
\hline
Spatial discretization & PRICE scheme \\[4pt]
\hline
CFL number & 0.1 \\[4pt]
\hline
\end{tabular}
\caption{Numerical setup for smooth test case.}
\label{table:setup_Smooth}
\end{center}
\end{table}
Moment approximations are obtained using the \(N\)th order ASWME and HASWME models with full velocity expanded, with \(N \in \{ 0,1,2,3 \}\). Again, the numerical solution is only displayed for a fraction of the radial domain. The results are shown in Figure \ref{fig:smoothExpit}. The values of \(h\), \(v_{r,m}\) and \(v_{\theta,m}\) are plotted for both the ASWME model (left column) and the HASWME model (right column) with increasing order $N$. Figures \ref{fig:smoothExpit_h_ASWME} and \ref{fig:smoothExpit_h_HASWME} show that the approximations of the water height \(h\) become more accurate with increasing order when compared to the reference solution. This holds for both the ASWME and the HASWME models. The trend is also clearly visible for the mean angular velocity, see Figures \ref{fig:smoothExpit_vtheta_ASWME} and \ref{fig:smoothExpit_vtheta_HASWME}. For the mean radial velocity, displayed in Figures \ref{fig:smoothExpit_vr_ASWME} and \ref{fig:smoothExpit_vr_HASWME}, the zeroth order model and the first order model are less accurate than the higher order models. However, it appears that the second order model is more accurate than the third order model in this part of the domain. Nevertheless, the overall error of the third order model is smaller than that of the second order model, because the former gives a better approximation than the latter in a large part of the domain not displayed in Figures \ref{fig:smoothExpit_vr_ASWME} and \ref{fig:smoothExpit_vr_HASWME}.
The error convergence is shown in Figure \ref{fig:ErrorConvergence}, which displays the relative error of the different models for the water height \(h\), the mean radial velocity \(v_{r,m}\) and the mean angular velocity \(v_{\theta,m}\). For all three variables, we observe a reduction of the error when the order increases from \(N=0\) to \(N=3\). In particular, the error of the ASWE model with $N=0$ is considerably larger than the error of both moment models ASWME and HASWME for $N>0$. This is in agreement with the 1D results in \cite{HSWME}. However, when the order is increased from \(N=3\) to \(N=4\), there is no further reduction of the error for most of the models. The cause of this stagnation is not clear. In any case, the error of the third order model is already small, especially for \(h\) and \(v_{\theta,m}\). We conclude that for this smooth test case, the moment models yield approximations that become increasingly accurate as the order of the model increases. This holds for both the non-hyperbolic and the hyperbolic models.
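The relative errors in Figure \ref{fig:ErrorConvergence} can be computed, for instance, as discrete relative \(L^2\) norms; the helper below is one possible convention (not necessarily the exact norm used for the figure):

```python
def rel_l2_error(approx, ref):
    """Discrete relative L2 error between model and reference values
    sampled on the same radial grid."""
    num = sum((a - r) ** 2 for a, r in zip(approx, ref)) ** 0.5
    den = sum(r ** 2 for r in ref) ** 0.5
    return num / den

# identical data gives zero error
assert rel_l2_error([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
```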
\begin{figure}[ht!]
\centering
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/SmoothExpit/ASWMEPaper_SmoothExpit_OrderComparison_h_Time0.5_SubDomain.pdf}
\caption{\(h\) at time \(t=0.5\)}
\label{fig:smoothExpit_h_ASWME}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/SmoothExpit/ASWMEPaper_SmoothExpit_OrderComparison_h_Time0.5_HASWME_SubDomain.pdf}
\caption{\(h\) at time \(t=0.5\)}
\label{fig:smoothExpit_h_HASWME}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/SmoothExpit/ASWMEPaper_SmoothExpit_OrderComparison_vr_Time0.5_SubDomain.pdf}
\caption{\(v_{r,m}\) at time \(t=0.5\)}
\label{fig:smoothExpit_vr_ASWME}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/SmoothExpit/ASWMEPaper_SmoothExpit_OrderComparison_vr_Time0.5_HASWME_SubDomain.pdf}
\caption{\(v_{r,m}\) at time \(t=0.5\)}
\label{fig:smoothExpit_vr_HASWME}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/SmoothExpit/ASWMEPaper_SmoothExpit_OrderComparison_vtheta_Time0.5_SubDomain.pdf}
\caption{\(v_{\theta,m}\) at time \(t=0.5\)}
\label{fig:smoothExpit_vtheta_ASWME}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{axisymmetric/Images/SmoothExpit/ASWMEPaper_SmoothExpit_OrderComparison_vtheta_Time0.5_HASWME_SubDomain.pdf}
\caption{\(v_{\theta,m}\) at time \(t=0.5\)}
\label{fig:smoothExpit_vtheta_HASWME}
\end{subfigure}
\caption{Test case with reverse sigmoid initial height profile for the ASWME (left column) and the HASWME (right column) with full velocity expanded of orders \(N\in\{0,1,2,3\}\) at time \(t=0.5\). The accuracy of the approximations increases with increasing order, and the ASWME and the HASWME yield very similar results.}
\label{fig:smoothExpit}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\textwidth]{axisymmetric/Images/SmoothExpit/ASWME_ErrorConvergence_SmoothExpit_Time0.5_BoundaryExcluded.pdf}
\caption{Error convergence of smooth test case for ASWME and HASWME.}
\label{fig:ErrorConvergence}
\end{figure}
\clearpage
\section{Introduction}
For a company that specializes in the further processing of raw materials into various end products, two strategic questions are of particular interest: firstly, which products should be produced, and secondly, which materials should be purchased for them. Here, different materials can be purchased together in packages, whereby the full price of a package is incurred as soon as at least one of its materials is required. On the other hand, end products can only be produced if all the materials required for them are available---only in this case can a profit be made.
This problem can be formalized as the \emph{reward-penalty-selection problem} (RPSP), which was recently introduced in \cite{heller2021reward}. It can be viewed as a combination of two classical, well-known combinatorial problems, the set cover problem and the hitting set problem (cf. \cite{garey1979computers}). The ground set of elements~$N$ is given by the set of materials. The reward sets~$A\coloneqq \{A_i \mid A_i\subseteq N\}$ with corresponding rewards~$a_i\in\N$ represent the end products, while the penalty sets~$B\coloneqq \{B_i \mid B_i\subseteq N\}$ with corresponding penalties~$b_i\in\N$ represent the raw material packages with their respective prices. We say that a set is \emph{covered} if all of its elements are chosen and that a set is \emph{hit} if at least one of its elements is chosen. Now we aim to find a subset of elements~$S$ such that the profit function
\begin{align}
\rewardsum - \penaltysum \label{eq: rpsp objective function}
\end{align}
is maximized. In the example given above, this corresponds to selecting a subset of the raw materials such that the net profit defined by the profit obtained by selling the final products minus the cost for the necessary raw material packages is maximized. For an analysis on the complexity of the RPSP and its variants, as well as algorithmic approaches, we refer to \cite{heller2021reward,heller2021phd}.
The question of what ``fair prices'' for the materials could look like can be tackled by considering the problem from the viewpoint of game theory, which is the central topic of this paper.
For this, we view each material as a player in a cooperative game with characteristic function as in~\eqref{eq: rpsp objective function}. This leads us to \emph{reward-penalty-selection games} (RPS games) which we define formally later.
Cooperative games are often used to find ``fair'' solutions (cf.~\cite{nisanalgorithmic}) for settings where multiple players cooperate to obtain a profit together. In general, a cooperative game consists of a group of players and a \emph{characteristic function}, which maps every subset of players to its obtainable profit. A \emph{solution} to such a cooperative game is given by a \emph{payment vector} that stores the profit for each player. Properties that are widely considered fair are the following. A payment vector fulfills \emph{efficiency} (EFF) if the obtained profit of the group of all players is fully distributed. If every entry of the vector is greater than or equal to the profit that the corresponding player can obtain by itself, \emph{individual rationality} (IR) is satisfied. \emph{Coalitional rationality} (CR) is fulfilled if the same holds for all subsets of the players, i.e., the sum of the payments to the players of any subset is greater than or equal to the profit obtained by this subset. The \emph{core} consists of all payment vectors that fulfill (EFF), (IR) and (CR) (cf. \cite{nisanalgorithmic}). For the rest of the paper, we focus on core vectors, such as the \emph{Shapley value} (cf. \cite{shapley1953value}), but we note that there exist other payment vectors that are considered fair, for instance the \emph{egalitarian allocation} (cf. \cite{dutta1989concept, koster1999weighted}). One of the main results of this paper is a characterization of the core vectors via flows in an associated network.
The remainder of the paper is structured as follows. In Section~\ref{sec: rps game} we give a definition of an RPS game. We also provide basic results regarding the properties of such games. In Section~\ref{sec: core characterization} we prove a characterization of the core elements of an RPS game as a network flow in a suitable network graph. We then conclude with a short outlook.
\section{The Reward-Penalty-Selection Game}\label{sec: rps game}
We start with a formal definition of \emph{reward-penalty-selection (RPS) games}:
\begin{definition}[Reward-Penalty-Selection Games]
Let $N\coloneqq\{1,\dots,n\}$ be the set of players. Further, let $\mathcal{A}\coloneqq\{A_1,\dots,A_k\}\subseteq 2^N$ be the set of non-empty reward sets with rewards $a_1,\dots,a_k\in\mathbb{N}$ and $\mathcal{B}\coloneqq\{B_1,\dots,B_l\}\subseteq 2^N$ be the set of non-empty penalty sets with penalties~$b_1,\dots,b_l\in\mathbb{N}$. The game~$(N,v)$ with characteristic function~$v\colon 2^N \rightarrow \Z$ defined by
\begin{align*}
v(S) \coloneqq \rewardsum - \penaltysum
\end{align*}
is called \emph{reward-penalty-selection game (RPS game)}.
\end{definition}
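The characteristic function can be evaluated directly from the set system. A small illustrative implementation follows; the instance data are made up for demonstration and do not come from the paper.

```python
def v(S, rewards, penalties):
    """RPS characteristic function: sum of rewards of covered sets A_i
    (A_i contained in S) minus penalties of hit sets B_j (B_j intersects S)."""
    gain = sum(a for A, a in rewards if A <= S)
    cost = sum(b for B, b in penalties if B & S)
    return gain - cost

# toy instance: three materials, two products, two material packages
rewards = [({1, 2}, 5), ({2, 3}, 4)]
penalties = [({1}, 2), ({2, 3}, 3)]
print(v({1, 2}, rewards, penalties))     # covers A_1, hits both packages -> 0
print(v({1, 2, 3}, rewards, penalties))  # grand coalition -> 9 - 5 = 4
```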
\begin{theorem}[Convexity of RPS games]\label{thm: convexity RPS games}
Every RPS game~$(N,v)$ is convex, i.e. the characteristic function satisfies $v(S) + v(T) \leq v(S\cup T) + v(S\cap T)$ for arbitrary subsets~$S,T\subseteq N$ of players.
\end{theorem}
\begin{proof}
Note that it is sufficient to check the cases of a single reward or a single penalty set since the sum of convex games is again convex. Let~$(N,v)$ be an RPS game consisting of one non-empty reward set $A\subseteq N$ with reward~$a$. Further, let $S,T\subseteq N$ be arbitrary sets of players.
If $v(S\cup T) = 0$, then $A$ is not contained in $S\cup T$ and therefore contained in neither~$S$ nor~$T$. This implies that
\begin{align*}
v(S\cup T) = 0 = v(S) + v(T) - v(S\cap T),
\end{align*}
thus, convexity is fulfilled.
If $v(S\cup T) = a$, we have to distinguish between two cases. Either $A\subseteq S\cap T$, which implies
\begin{align*}
v(S\cup T) =a = a + (a-a) = v(S) + v(T) - v(S\cap T),
\end{align*}
or $A \not\subseteq S\cap T$. In this case, $A$ cannot be contained in both $S$ and $T$ and therefore
\begin{align*}
v(S\cup T) = a = a-0 \geq v(S) + v(T) - v(S\cap T).
\end{align*}
The case of an RPS game with a single penalty set can be shown in the same way.
\end{proof}
Furthermore, since $v(\emptyset) = 0$, applying convexity to disjoint sets immediately yields the following corollary.
\begin{corollary}
RPS games are superadditive, i.e. the characteristic function satisfies $v(T\cup S) \geq v(S) + v(T)$ for all $S,T\subseteq N$ with $S\cap T = \emptyset$.\qed
\end{corollary}
The main purpose of such a cooperative game is to find a fair distribution of the total profit among all players. For this, we define the \emph{payment vector} as the vector~$p\in\mathbb{R}^N$ whose $i$th entry~$p_i$ is the payment to player~$i$. We define the payment to a coalition~$S\subseteq N$ as the sum of the payments to its players, i.e. $p(S) \coloneqq \sum_{i\in S} p_i$. Given such a payment vector, we now formally define the desired properties introduced above. A payment vector~$p$ fulfills \emph{efficiency} (EFF) if $\sum_{i\in N} p_i = v(N)$ holds. Furthermore, if $p_i\geq v(\{i\})$ for all~$i\in N$, we say that $p$ fulfills \emph{individual rationality} (IR). The extension of (IR) to coalitions is called \emph{coalitional rationality} (CR): a payment vector fulfills (CR) if each coalition is guaranteed at least the value its players could obtain by themselves, i.e. $\sum_{i\in S} p_i \geq v(S)$ for all $S\subseteq N$.
Now, the \emph{core} is a well known solution concept for such cooperative games and is defined as the set of payment vectors that fulfill (EFF), (IR) and (CR). Using the convexity of RPS games, it follows that an element of the core can be computed in polynomial time (cf. \cite{nisanalgorithmic}). Thus, it follows immediately:
\begin{corollary}\label{cor:poly}
RPS games are balanced (cf. \cite{shapley1965balanced}), i.e. the core of an RPS game is never empty. A core element can be computed in polynomial time.
\qed \end{corollary}
The following two lemmas present structural insights into RPS games.
\begin{lemma}
RPS games are totally balanced.
\end{lemma}
\begin{proof}
Each subgame of an RPS game is itself an RPS game. Thus, since each RPS game is balanced, so are its subgames and therefore RPS games are totally balanced.
\end{proof}
\begin{lemma}
Suppose we are given an RPS game where all reward and penalty sets consist of exactly one player. Then the core is a singleton.
\end{lemma}
\begin{proof}
Suppose $|A_i| = |B_j| = 1$ for all~$i$, $j$. Further, let $p$ be a core vector. Hence, $p$ has to satisfy $p_k \geq v(\{k\}) = \sum_{i:\{k\} = A_i} a_i - \sum_{j: \{k\} = B_j} b_j$. Since a core vector also has to fulfill $p(N) = v(N)$, we get
\begin{align*}
\sum_{k\in N} \left( \sum_{i:\{k\} = A_i} a_i - \sum_{j: \{k\} = B_j} b_j \right) &\leq \sum_{k\in N} p_k \\
& = p(N)\\
& = v(N)\\
& = \sum_{k\in N} \left( \sum_{i:\{k\} = A_i} a_i - \sum_{j: \{k\} = B_j} b_j \right).
\end{align*}
Thus, equality holds throughout, forcing $p_k = \sum_{i:\{k\} = A_i} a_i - \sum_{j: \{k\} = B_j} b_j$ for every~$k\in N$, i.e. the core consists of exactly this one vector.
\end{proof}
A famous core payment is the \emph{Shapley value} (cf. \cite{shapley1953value}). Formally, for a cooperative game~$(N,v)$ the Shapley value of a player~$k$ is defined as
\begin{align*}
\phi_v(k) \coloneqq \sum_{S\subseteq N\backslash\{k\}} \frac{|S|!(n - |S| - 1)!}{n!}(v(S\cup\{k\}) - v(S)).
\end{align*}
This can be interpreted as the marginal profit that player~$k$ adds to the coalition of players arriving before her, averaged over all possible arrival orders of the players. The next theorem states that the Shapley value can be computed efficiently for RPS games.
\begin{theorem}\label{thm: characterization of RPS games}
Given an RPS game, the Shapley value~$\phi_v$ for a player~$k$ is given by
\begin{align}
\phi_v(k) = \sum_{i:\, k\in A_i} \frac{a_i}{|A_i|} - \sum_{j:\, k\in B_j} \frac{b_j}{|B_j|} \label{eq: shapley value}
\end{align}
for $k=1,\dots, n$ and, thus, can be computed efficiently.
\end{theorem}
\begin{proof}
For each reward set~$A_i\in\mathcal{A}$, a player~$k\in A_i$ adds the value $a_i$ to a coalition if and only if all other players of~$A_i$ are already contained in it, which happens in a $\frac{1}{|A_i|}$ fraction of all arrival orders. Hence, on average each player of~$A_i$ contributes $\frac{a_i}{|A_i|}$ to the value of the coalition. On the other hand, for each penalty set~$B_j\in\mathcal{B}$, a player~$k\in B_j$ incurs the cost~$b_j$ if and only if she is the first member of~$B_j$ to enter the coalition, which again happens in a $\frac{1}{|B_j|}$ fraction of all orders, so on average each player of~$B_j$ incurs the cost~$\frac{b_j}{|B_j|}$. Summing over all reward and penalty sets yields~\eqref{eq: shapley value}.
In order to compute the Shapley values, the contribution of each player is initialized with zero; we then iterate over all reward and penalty sets and update the contributions according to~\eqref{eq: shapley value}, which takes time linear in the total size of these sets.
\end{proof}
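The closed form of the theorem is straightforward to implement. The following Python sketch is our own illustration (function names and the (set, value) data layout are our choices); it also includes a brute-force cross-check against the permutation definition of the Shapley value, which is feasible for small player sets:

```python
from itertools import permutations
from math import factorial

def shapley_closed_form(n, rewards, penalties):
    """phi(k) = sum over reward sets containing k of a_i/|A_i|,
    minus the sum over penalty sets containing k of b_j/|B_j|."""
    phi = {k: 0.0 for k in range(1, n + 1)}
    for A, a in rewards:
        for k in A:
            phi[k] += a / len(A)
    for B, b in penalties:
        for k in B:
            phi[k] -= b / len(B)
    return phi

def shapley_bruteforce(n, rewards, penalties):
    """Average marginal contribution over all n! arrival orders."""
    def v(S):
        S = frozenset(S)
        return (sum(a for A, a in rewards if frozenset(A) <= S)
                - sum(b for B, b in penalties if frozenset(B) & S))
    phi = {k: 0.0 for k in range(1, n + 1)}
    for order in permutations(range(1, n + 1)):
        S = set()
        for k in order:
            phi[k] += v(S | {k}) - v(S)
            S.add(k)
    return {k: phi[k] / factorial(n) for k in phi}
```

For the instance with reward set $\{1,2\}$ worth $4$ and penalty set $\{2,3\}$ costing $2$, both routines agree on $\phi_v = (2, 1, -1)$, whose entries sum to $v(N) = 2$ as efficiency requires.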
We now investigate the relation of RPS games to the set of convex games.
\begin{lemma}\label{lem: modeling with RPS games}
The following statements are true:
\begin{enumerate}[(i)]
\item Every convex cooperative game with three players can be modeled as an RPS game.
\item The set of all convex cooperative games is a strict superset of the set of RPS games, meaning that there exist convex cooperative games with four players that cannot be modeled as an RPS game.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[(i)]
\item Let $(N,v)$ be a convex cooperative game with player set~$N=\{1,2,3\}$. W.l.o.g. we can assume that the value of a singleton is always zero since we can add singleton reward and penalty sets while keeping convexity. Let $d\coloneqq\min\{v(\{1,2\}), v(\{1,3\}), v(\{2,3\})\}$ be the minimal value of a coalition consisting of two players. By convexity, $d$ is non-negative. If $d$ is greater than $0$, we add a penalty set $B = \{1,2,3\}$ with penalty $b = d$. In order to obtain the amount given by the characteristic function~$v$, we add reward sets
\begin{align*}
A_1 = \{1\} &\text{ with } a_1 = d \\
A_2 = \{2\} &\text{ with } a_2 = d \\
A_3 = \{3\} &\text{ with } a_3 = d \\
A_4 = \{1,2\} &\text{ with } a_4 = v(\{1,2\}) - d \\
A_5 = \{2,3\} &\text{ with } a_5 = v(\{2,3\}) - d \\
A_6 = \{1,3\} &\text{ with } a_6 = v(\{1,3\}) - d
\end{align*}
Note that all rewards are non-negative by the definition of~$d$. Finally, we add $A_7 = \{1,2,3\}$ with $a_7 = v(\{1,2,3\}) - \sum_{i=1}^{6} a_i + b$, so that the value of the grand coalition is matched as well. Writing $d = v(\{i,j\})$ for a minimizing pair and $k$ for the remaining player, we get $a_7 = v(\{1,2,3\}) - v(\{i,k\}) - v(\{j,k\}) \geq 0$ by convexity. With this, an arbitrary convex game on three players can be modeled as an RPS game.
\item Let $(N,v)$ be the convex cooperative game with player set~$N=\{1,2,3,4\}$ and characteristic function
\begin{align*}
v(S) = \begin{cases}
0, \qquad \quad \text{ if } |S|\leq 2, \\
1, \qquad \quad \text{ if } |S| = 3, \\
2, \qquad \quad \text{ if } S = N. \\
\end{cases}
\end{align*}
Suppose $(N,v)$ can be modeled by an RPS game. Let $i,j\in N$ be two different players, so that $v(\{i,j\}) = 0 = v(\{i\}) + v(\{j\})$. No penalty set may contain both $i$ and $j$: such a set would contribute its penalty once to $v(\{i,j\})$ but once to each of $v(\{i\})$ and $v(\{j\})$, while every other reward or penalty set contributes at least as much to $v(\{i,j\})$ as to $v(\{i\}) + v(\{j\})$; this would force $v(\{i,j\}) > v(\{i\}) + v(\{j\})$. Hence all penalty sets are singletons. Similarly, a reward set $\{i,j\}$ would contribute its reward to $v(\{i,j\})$ but not to $v(\{i\}) + v(\{j\})$, so there are no reward sets of size exactly two.
Since $v(\{i\}) = 0$ for every player, the singleton rewards and penalties cancel within every coalition. Consequently, the value one of each three-player coalition must stem from reward sets equal to that coalition. But then, since there are four such coalitions, the RPS game must grant a profit of at least four to the grand coalition --- a contradiction, as $v(N) = 2$.
\end{enumerate}
\end{proof}
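The construction in part (i) can be checked mechanically. The sketch below is our own code (names and data layout are our choices); when $d = 0$ the zero penalty set is added anyway, which changes nothing. It builds the reward and penalty sets for a convex three-player game with zero singleton values, following the proof above (with the correction $a_7 = v(N) - \sum_{i=1}^6 a_i + b$):

```python
def rps_from_three_player_game(v):
    """Realize a convex 3-player game with zero singleton values as an
    RPS game, following the proof of the modeling lemma: d is the
    minimal pair value, one penalty set {1,2,3} with penalty d, and the
    reward sets listed in the proof, with a_7 = v(N) - sum(a_1..a_6) + d."""
    d = min(v({1, 2}), v({1, 3}), v({2, 3}))
    penalties = [({1, 2, 3}, d)]
    rewards = [({1}, d), ({2}, d), ({3}, d),
               ({1, 2}, v({1, 2}) - d),
               ({2, 3}, v({2, 3}) - d),
               ({1, 3}, v({1, 3}) - d)]
    a7 = v({1, 2, 3}) - sum(a for _, a in rewards) + d
    rewards.append(({1, 2, 3}, a7))
    return rewards, penalties
```

A quick check on a sample convex game confirms that the resulting RPS game reproduces the characteristic function on all eight coalitions and that all rewards are non-negative.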
\section{Characterization of Core Elements}\label{sec: core characterization}
In this section we give a characterization of core elements of an instance of an RPS game. In order to do this, we define a profit sharing graph and prove that any feasible flow in this graph of a certain flow value induces a core vector and vice versa. As a byproduct of this characterization we obtain an alternative proof to the polynomial time computability of a core element stated in Corollary~\ref{cor:poly}. We assume that the reader is familiar with the basics of network flows~\cite{ahuja1988network}.
Our approach of a characterization of core elements as feasible flows is based on results of Ackermann et al.~(cf. \cite{ackermann2014modeling}). First, we define the \emph{profit sharing graph} for RPS games.
\begin{definition}[Profit Sharing Graph for RPS Games]
Let $(N,v)$ be an RPS game with player set~$N=\{1,2,\dots,n\}$, a collection of reward sets~$\mathcal{A} = \{A_1,\dots,A_k\}$ and a collection of penalty sets~$\mathcal{B} = \{B_1,\dots,B_l\}$. The \emph{profit sharing graph} for $(N,v)$ is given by the directed graph~$G=(V,E)$ with nodes
\begin{align*}
V\coloneqq \{s,t,\overline{s},\overline{t}\}\cup N \cup \mathcal{A} \cup \mathcal{B},
\end{align*}
and edges
\begin{alignat*}{2}
E \coloneqq& \{(s,A): A\in\mathcal{A}\} \cup \{(B,t): B\in\mathcal{B}\} \cup \{(s,\overline{s}), (\overline{t}, t)\} \cup \\
& \{(\overline{s},n): n\in N\} \cup \{(n,\overline{t}): n\in N\} \, \cup \\
& \{(A,i): i\in A, A\in\mathcal{A}\} \cup \{(i,B): i\in B, B\in\mathcal{B}\} \cup \{(\overline{s}, \overline{t})\}.
\end{alignat*}
We set the edge capacities to be given by the function~$c\colon E\to \Z\cup\{\infty\}$ defined by
\begin{align*}
c(e) \coloneqq \begin{cases}
a_i,& \text{for $e=(s,A_i)$ and $A_i\in\mathcal{A}$}\\
b_j, & \text{for $e=(B_j,t)$ and $B_j\in\mathcal{B}$}\\
\sum_{j=1}^l b_j,& \text{for $e=(s,\overline{s})$} \\
\sum_{i=1}^k a_i, & \text{for $e=(\overline{t},t)$} \\
\infty,& \text{otherwise.}
\end{cases}
\end{align*}
\end{definition}
\begin{figure}
\centering
\begin{tikzpicture}[every node/.style={fill=white,rectangle}, every edge/.style={draw=black,very thick}]
\begin{scope}[every node/.style={circle,thick,draw}]
\node (s) at (0,3) {$s$};
\node (A1) at (2.5,5) {$A_1$};
\node (A2) at (2.5,3) {$A_2$};
\node (A3) at (2.5,1) {$A_3$};
\node (B1) at (7.5,5) {$B_1$};
\node (B2) at (7.5,1) {$B_2$};
\node (t) at (10, 3) {$t$};
\end{scope}
\begin{scope}
[every node/.style={circle, draw}]
\node (n1) at (5,5) {$1$};
\node (n2) at (5,4) {$2$};
\node (n3) at (5,3) {$3$};
\node (n4) at (5,2) {$4$};
\node (n5) at (5,1) {$5$};
\node (s1) at (3.75, -0.5) {$\overline{s}$};
\node (t1) at (6.25, -0.5) {$\overline{t}$};
\end{scope}
\begin{scope}[
every node/.style={fill=white,rectangle,sloped},
every edge/.style={draw=black,thick}]
\path [->] (s) edge node {$[0,a_1]$} (A1);
\path [->] (s) edge node{$[0,a_2]$} (A2);
\path [->] (s) edge node{$[0,a_3]$} (A3);
\path [->] (A1) edge (n1);
\path [->] (A2) edge (n1);
\path [->] (A2) edge (n3);
\path [->] (A3) edge (n2);
\path [->] (A1) edge (n3);
\path [->] (A2) edge (n5);
\path [->] (A2) edge (n4);
\path [->] (A3) edge (n4);
\path [->] (n2) edge (B1);
\path [->] (n2) edge (B2);
\path [->] (n3) edge (B2);
\path [->] (n1) edge (B1);
\path [->] (n4) edge (B1);
\path [->] (n5) edge (B1);
\path [->] (B1) edge node{$[0,b_1]$} (t);
\path [->] (B2) edge node{$[0,b_2]$} (t);
\path [->] (t1) edge[bend left=-40] node{$[0,\sum a_i]$} (t);
\path [->] (s) edge[bend left=-40] node{$[0,\sum b_j]$} (s1);
\path [->] (s1) edge (t1);
\path [->] (n1) edge[out=0, in=90] (t1);
\path [->] (n2) edge[out=0, in=90] (t1);
\path [->] (n3) edge[out=0, in=90] (t1);
\path [->] (n4) edge[out=0, in=90] (t1);
\path [->] (n5) edge[out=0, in=90] (t1);
\path [<-] (n1) edge[out=180, in=90] (s1);
\path [<-] (n2) edge[out=180, in=90] (s1);
\path [<-] (n3) edge[out=180, in=90] (s1);
\path [<-] (n4) edge[out=180, in=90] (s1);
\path [<-] (n5) edge[out=180, in=90] (s1);
\end{scope}
\end{tikzpicture}
\caption{Example of a \emph{profit sharing graph} for $N=\{1,2,3,4,5\}$.}\label{fig: profit sharing}
\end{figure}
Furthermore, we set the lower capacity bound~$l(e)$ of every edge to~$0$. An example of the profit sharing graph can be found in Figure~\ref{fig: profit sharing}. It is clear by construction that any feasible $s$-$t$-flow of value~$H\coloneqq\sum_{i:A_i\in\mathcal{A}} a_i + \sum_{j:B_j\in\mathcal{B}} b_j$ in the profit sharing graph of an RPS game fully exhausts all finite capacities.
Now, let $(N,v)$ be an RPS game and $G=(V,E)$ the corresponding profit sharing graph. We define the payment~$p_i$ to a player~$i$ by
\begin{align}
p_i \coloneqq f(i,\overline{t}) - f(\overline{s}, i). \label{eq: payment by flow}
\end{align}
The next theorem shows the connection between a feasible flow in the profit sharing graph and a core vector.
\begin{theorem}[Core Elements]\label{thm: flow induces core}
Any feasible flow with value~$H$ in the profit sharing graph of an RPS game defines a payment vector that fulfills the properties of efficiency (EFF), coalitional rationality (CR) and individual rationality (IR).
\end{theorem}
\begin{proof}
We prove the claim by showing that the payment vector~$p$ defined by a feasible flow~$f$ of value~$H$ via \eqref{eq: payment by flow} fulfills the above conditions. Recall that such a flow fully exhausts all finite capacities of the profit sharing graph.
\begin{enumerate}[(i)]
\item Efficiency: For efficiency we need to show $p(N) = v(N)$. By definition we have
\begin{align*}
p(N) &= \sum_{i\in N} p_i = \sum_{i\in N} \left( f(i,\overline{t}) - f(\overline{s},i) \right).
\end{align*}
By using flow conservation at each of the player nodes~$i$ this is equal to
\begin{align*}
\sum_{i\in N} \left(f(\mathcal{A}, i) - f(i,\mathcal{B})\right) & = \sum_{i: A_i\in\mathcal{A}} a_i - \sum_{j: B_j\in\mathcal{B}} b_j = v(N).
\end{align*}
\item Coalitional Rationality: Let $S\subseteq N$ be a subset of players. We want to show that the profit distributed to this subgroup~$S$ is greater or equal to the value of $S$, i.e. $p(S) \geq v(S)$. By definition we have
\begin{align*}
p(S) &= \sum_{i\in S} p_i = \sum_{i\in S} \left(f(i,\overline{t}) - f(\overline{s},i)\right).
\end{align*}
By using flow conservation this is equal to
\begin{align*}
\sum_{A\in\mathcal{A}}\sum_{i\in S} f(A,i) - \sum_{B\in\mathcal{B}}\sum_{i\in S} f(i,B)
& = \sum_{i: A_i\cap S\neq \emptyset} f(A_i,S) - \sum_{j: B_j\cap S \neq \emptyset} f(S,B_j)\\
& \geq \sum_{i: A_i\subseteq S} f(A_i,S) - \sum_{j: B_j\cap S \neq \emptyset} f(S,B_j)\\
& \geq \rewardsum - \penaltysum \\
& = v(S),
\end{align*}
where the last inequality holds because a feasible flow of value~$H$ fully exhausts all finite capacities: $f(A_i,S) = a_i$ for every $A_i\subseteq S$, while $f(S,B_j) \leq b_j$ for every penalty set~$B_j$.
\item Individual Rationality: Follows by the same argument as for (CR), applied to the singletons $S=\{i\}$.
\end{enumerate}
Thus, all three properties are fulfilled.
\end{proof}
Conversely, the next theorem shows that given a core vector, one finds a corresponding feasible flow.
\begin{theorem}[Core Elements II]\label{thm: core induces flow}
Let $p$ be a core allocation of an RPS game~$(N,v)$. Then, there exists a feasible flow~$f$ with value~$H$ in the corresponding profit sharing graph that induces the allocation~$p$.
\end{theorem}
\begin{proof}
Let $G=(V,E)$ be the corresponding profit sharing graph of~$(N,v)$. We modify~$G$ by changing the infinite capacities to finite ones. Define $c_p$ as follows:
\begin{align*}
c_p(\overline{s},i) &= \begin{cases}
-p_i & \text{if $p_i < 0$} \\
0 & \text{otherwise,}
\end{cases}\\
c_p(i, \overline{t}) &= \begin{cases}
p_i & \text{if $p_i \geq 0$} \\
0 & \text{otherwise}
\end{cases}\\
\intertext{and}
c_p(\overline{s},\overline{t}) &= \sum_{B_j\in\mathcal{B}}b_j - \sum_{i\in N} c_p(\overline{s}, i).
\end{align*}
We now define finite capacities $\operatorname{cap}\colon E\rightarrow\Z$ by
\begin{align*}
\operatorname{cap}(e) \coloneqq
\begin{cases}
c(e), &\text{if $c(e)<\infty$}\\
c_p(e), & \text{if $c(e)=\infty$.}
\end{cases}
\end{align*}
With these finite capacities, the payment from or to a player~$i$ made by $p$ is representable in the flow network, since $c_p(\overline{s},i) + c_p(i, \overline{t}) = |p_i|$ and $c_p(i,\overline{t}) - c_p(\overline{s}, i) = p_i$ always hold true, matching~\eqref{eq: payment by flow}.
By the Max-Flow Min-Cut Theorem (cf. \cite{ahuja1988network}), the capacity of a minimum $s$-$t$-cut in $G$ is equal to the value of a maximum feasible $s$-$t$-flow in~$G$. Therefore, we show in the following that the capacity of a minimum cut is at least~$H$. Thus, let $X\subseteq V$ be an $s$-$t$-cut in $G$, i.e. $s\in X$ and $t\notin X$. Since $X'=\{s\}$ is a cut of finite capacity, a minimum cut has finite capacity as well, so we may assume that all edges connecting~$X$ and $V\setminus X$ have finite capacity. That means that for an arbitrary player~$i$ in $N\cap X$ all reward and penalty sets containing~$i$ also belong to~$X$. Further, for any reward or penalty set that is contained in~$X$, all its members are also contained in~$X$. With this, we get
\begin{align*}
\mathcal{A}\cap X = \{\,A: A\cap X \neq \emptyset, A\in\mathcal{A}\,\} = \{\,A: A\cap N \cap X \neq \emptyset, A\in\mathcal{A}\,\}
\intertext{and}
\mathcal{A}\backslash X = \{\,A: A\cap X = \emptyset, A\in\mathcal{A}\,\} = \{\,A: A\cap N \cap X = \emptyset, A\in\mathcal{A}\,\}.
\end{align*}
There are four possibilities depending on whether $\overline{s}\in X$ or $\overline{t}\in X$.
\begin{enumerate}[(i)]
\item Assume $\overline{s},\overline{t}\notin X$. Then, we get
\begin{align*}
\sum_{i\in N\cap X} \left( c_p(\overline{s}, i) + c_p(i,\overline{t}) \right) &= \sum_{i\in N\cap X} |p_i| \\
&\geq \sum_{i\in N\cap X} p_i = p(N\cap X).
\end{align*}
By coalitional rationality, this is greater than or equal to $v(N\cap X)$. Moreover,
\begin{align*}
v(N\cap X) &= \sum_{i: A_i\subseteq N\cap X} a_i - \sum_{j: B_j\cap N\cap X \neq \emptyset} b_j \\
& = \sum_{i: A_i\cap N\cap X \neq \emptyset} a_i - \sum_{j: B_j\cap N\cap X \neq \emptyset} b_j \\
& = \sum_{i: A_i\in \mathcal{A}\cap X} a_i - \sum_{j: B_j\in \mathcal{B}\cap X} b_j.
\end{align*}
With the above calculation, we get as the capacity of the cut~$X$:
\begin{align*}
\operatorname{cap}(X) & = c(s,\overline{s}) + \sum_{i:A_i \in \mathcal{A}\backslash X} c(s,A_i) + \sum_{j:B_j\in \mathcal{B}\cap X} c(B_j,t) \\
& \qquad \qquad \qquad + \sum_{i\in N\cap X} \left(c_p(\overline{s}, i) + c_p(i, \overline{t})\right) \\
&\geq \sum_{1\leq j \leq |\mathcal{B}|} b_j + \sum_{i:A_i \in \mathcal{A}\backslash X} a_i + \sum_{j:B_j\in \mathcal{B}\cap X} b_j \\
& \qquad \qquad \qquad + \left( \sum_{i: A_i\in \mathcal{A}\cap X} a_i - \sum_{j: B_j\in \mathcal{B}\cap X} b_j \right) \\
& = \sum_{1\leq j \leq |\mathcal{B}|}b_j + \sum_{1\leq i \leq |\mathcal{A}|} a_i = H.
\end{align*}
\item Assume $\overline{s},\overline{t}\in X$. Then
\begin{align*}
\sum_{i\in N\backslash X} \left( c_p(\overline{s}, i) + c_p(i,\overline{t}) \right) &= \sum_{i\in N\backslash X} |p_i| \\
&\geq \sum_{i\in N\backslash X} -p_i \\
& = -p(N\backslash X)\\
& = -p(N) + p(N\cap X) \\
& \geq -v(N) + v(N\cap X) \\
& = \sum_{1\leq j \leq |\mathcal{B}|} b_j - \sum_{1\leq i \leq |\mathcal{A}|}a_i + \sum_{i: A_i\cap N\cap X \neq \emptyset} a_i\\
& \qquad \qquad \qquad - \sum_{j: B_j\cap N\cap X \neq \emptyset} b_j \\
& = \sum_{j: B_j\cap N\cap X = \emptyset} b_j - \sum_{i: A_i\cap N\cap X = \emptyset} a_i \\
& = \sum_{j:B_j\in \mathcal{B}\backslash X} b_j - \sum_{i: A_i\in \mathcal{A}\backslash X} a_i.
\end{align*}
With this, we get
\begin{align*}
\operatorname{cap}(X) &= c(\overline{t},t) + \sum_{i:A_i \in \mathcal{A}\backslash X} c(s,A_i) + \sum_{j:B_j\in \mathcal{B}\cap X} c(B_j,t) \\
& \qquad \qquad \qquad + \sum_{i\in N\backslash X} \left(c_p(\overline{s}, i) + c_p(i, \overline{t})\right) \\
&= \sum_{1\leq i \leq |\mathcal{A}|}a_i + \sum_{i: A_i\in \mathcal{A}\backslash X}a_i + \sum_{j:B_j\in \mathcal{B}\cap X} b_j + \sum_{i\in N\backslash X} \left(c_p(\overline{s},i) + c_p(i,\overline{t})\right)\\
&\geq \sum_{1\leq i \leq |\mathcal{A}|}a_i + \sum_{i: A_i\in \mathcal{A}\backslash X}a_i + \sum_{j:B_j\in \mathcal{B}\cap X} b_j + \left(\sum_{j:B_j\in \mathcal{B}\backslash X} b_j - \sum_{i: A_i\in \mathcal{A}\backslash X} a_i\right)\\
& = \sum_{1\leq i \leq |\mathcal{A}|}a_i + \sum_{1\leq j \leq |\mathcal{B}|} b_j = H.
\end{align*}
\item Assume $\overline{s}\in X$ and $\overline{t}\notin X$. Thus, the following holds by similar calculations as above.
\begin{align*}
\operatorname{cap}(X) &= c_p(\overline{s},\overline{t}) + \sum_{i: A_i\in \mathcal{A}\backslash X}c(s,A_i) + \sum_{j:B_j\in \mathcal{B}\cap X}c(B_j,t) \\
& \qquad \qquad \qquad + \sum_{i\in N\cap X}c_p(i,\overline{t}) + \sum_{i\in N\backslash X}c_p(\overline{s},i)\\
&= \sum_{1\leq j \leq |\mathcal{B}|} b_j - \sum_{i\in N} c_p(\overline{s},i) + \sum_{i: A_i\in \mathcal{A}\backslash X}a_i + \sum_{j:B_j\in \mathcal{B}\cap X}b_j \\
& \qquad \qquad \qquad + \sum_{i\in N\cap X}c_p(i,\overline{t}) + \sum_{i\in N\backslash X}c_p(\overline{s},i)\\
&= \sum_{1\leq j \leq |\mathcal{B}|}b_j + \sum_{i: A_i\in \mathcal{A}\backslash X}a_i + \sum_{j:B_j\in \mathcal{B}\cap X}b_j + \sum_{i\in N\cap X}c_p(i,\overline{t}) \\
& \qquad \qquad \qquad - \sum_{i\in N\cap X} c_p(\overline{s},i) \\
&= \sum_{1\leq j \leq |\mathcal{B}|}b_j + \sum_{i: A_i\in \mathcal{A}\backslash X}a_i + \sum_{j:B_j\in \mathcal{B}\cap X}b_j + \sum_{i\in N\cap X} p_i \\
&\geq \sum_{1\leq j \leq |\mathcal{B}|}b_j + \sum_{i: A_i\in \mathcal{A}\backslash X}a_i + \sum_{j:B_j\in \mathcal{B}\cap X}b_j \\
& \qquad \qquad \qquad + \left(\sum_{i: A_i\in \mathcal{A}\cap X}a_i - \sum_{j:B_j\in \mathcal{B}\cap X}b_j\right) \\
&= \sum_{1\leq j \leq |\mathcal{B}|}b_j + \sum_{1\leq i \leq |\mathcal{A}|} a_i = H.
\end{align*}
\item Assume $\overline{s}\notin X$ and $\overline{t}\in X$. In this case, the only outgoing edges with finite capacities are $(s,\overline{s})$ and $(\overline{t},t)$, and thus we get
\begin{align*}
\operatorname{cap}(X) \geq c(s,\overline{s}) + c(\overline{t},t) = \sum_{1\leq i \leq |\mathcal{A}|} a_i + \sum_{1\leq j \leq |\mathcal{B}|} b_j = H.
\end{align*}
\end{enumerate}
Thus, a minimum $s$-$t$-cut of $G$ has capacity at least~$H$. Since the cut $\{s\}$ has capacity exactly~$H$, this bound is tight, so a feasible flow~$f$ of value~$H$ exists. It remains to show that $f$ induces the payment~$p$, i.e. that the finite capacities on the outgoing edges of~$\overline{s}$ and on the ingoing edges of~$\overline{t}$ are fully exhausted. The cut $\{s,\overline{s}\}$ has capacity
\begin{align*}
\sum_{1\leq i \leq |\mathcal{A}|} a_i + \sum_{i\in N} c_p(\overline{s},i) + c_p(\overline{s},\overline{t}) = \sum_{1\leq i \leq |\mathcal{A}|} a_i + \sum_{1\leq j \leq |\mathcal{B}|} b_j = H,
\end{align*}
so it is a minimum cut as well and every edge leaving it is saturated by~$f$; in particular, $f(\overline{s},i) = c_p(\overline{s},i)$ for all $i\in N$ and $f(\overline{s},\overline{t}) = c_p(\overline{s},\overline{t})$. Analogously, the cut $V\setminus\{t\}$ has capacity~$H$, so $f(\overline{t},t) = \sum_{1\leq i \leq |\mathcal{A}|} a_i$. Flow conservation at~$\overline{t}$ together with the efficiency of the core vector~$p$ then yields
\begin{align*}
\sum_{i\in N} f(i,\overline{t}) = \sum_{1\leq i \leq |\mathcal{A}|} a_i - c_p(\overline{s},\overline{t}) = v(N) + \sum_{i\in N} c_p(\overline{s},i) = \sum_{i\in N} c_p(i,\overline{t}),
\end{align*}
and since $f(i,\overline{t}) \leq c_p(i,\overline{t})$ for every~$i$, all these edges are saturated as well. Hence $f(i,\overline{t}) - f(\overline{s},i) = c_p(i,\overline{t}) - c_p(\overline{s},i) = p_i$ for all~$i\in N$, i.e. the flow~$f$ induces the given payment~$p$.
\end{proof}
With this equivalence between core vectors and feasible flows in the profit sharing graph, a core vector can be computed in polynomial time using any polynomial-time maximum flow algorithm (cf.~\cite{ahuja1988network} for an overview of flow algorithms). We summarize this in the next theorem.
\begin{theorem}\label{thm: core element characterization}
A core element for an RPS game can be computed in polynomial time. \qed
\end{theorem}
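To illustrate the flow-based computation, here is a self-contained Python sketch (our own illustration, not the authors' code): it builds the profit sharing graph, runs a plain Edmonds--Karp maximum flow, and reads off a payment vector via $p_i = f(i,\overline{t}) - f(\overline{s},i)$. The node names and the use of $H+1$ as a stand-in for the infinite capacities (no edge ever carries more than $H$) are implementation choices.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp; capacity is a dict (u, v) -> int, no antiparallel edges."""
    res, adj = {}, {}
    for (u, v), c in capacity.items():
        res[(u, v)] = c
        res.setdefault((v, u), 0)            # residual reverse edge
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    value = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:     # BFS for a shortest augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)
        for u, v in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        value += aug
    return value, {e: capacity[e] - res[e] for e in capacity}

def rps_core_element(n, rewards, penalties):
    """Core vector of an RPS game via a max flow of value H in the
    profit sharing graph (Core Elements theorems)."""
    sum_a = sum(a for _, a in rewards)
    sum_b = sum(b for _, b in penalties)
    H = sum_a + sum_b
    INF = H + 1                              # effectively infinite here
    E = {("s", "sb"): sum_b, ("tb", "t"): sum_a, ("sb", "tb"): INF}
    for i in range(1, n + 1):
        E[("sb", i)] = INF
        E[(i, "tb")] = INF
    for idx, (A, a) in enumerate(rewards):
        E[("s", ("A", idx))] = a
        for i in A:
            E[(("A", idx), i)] = INF
    for idx, (B, b) in enumerate(penalties):
        E[(("B", idx), "t")] = b
        for i in B:
            E[(i, ("B", idx))] = INF
    value, f = max_flow(E, "s", "t")
    assert value == H                        # the core is non-empty, so H is attained
    return {i: f[(i, "tb")] - f[("sb", i)] for i in range(1, n + 1)}
```

Since any flow of value $H$ induces a core vector, the returned payments satisfy efficiency and (coalitional) rationality, regardless of which maximum flow the algorithm happens to find.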
\section{Conclusion and Outlook}
In this paper we introduced a novel class of combinatorial cooperative games, the \emph{reward-penalty-selection games} (RPS games), which are based on the \emph{reward-penalty-selection problem} (RPSP). We showed that an RPS game is convex and, thus, its core is always non-empty, and that RPS games form a proper subclass of the convex games. Moreover, RPS games allow a polynomial-time computation of the Shapley value. Focusing on solution vectors of RPS games, we gave a characterization of core elements as feasible flows in a network graph, so that a core element can be computed in polynomial time by a single maximum flow computation.
Future research will be directed toward finding flow representations of other payment vectors that fulfill certain fairness properties. For instance, the egalitarian allocation (cf. \cite{dutta1989concept, koster1999weighted}) distributes the obtained profit in a ``most equal'' way among the players and is contained in the core of a convex game. This could be translated into finding a feasible flow of a certain value with the additional property that the flow values on the edges do not differ ``too much''. This problem of finding an \emph{almost equal flow} was introduced in~\cite{haese2020algorithms}. However, at the current stage it is not clear whether such a one-to-one correspondence between almost equal flows and egalitarian allocations actually holds.
Altogether, due to their generality, we consider RPS games a broadly applicable modeling technique for profit sharing in a variety of underlying application settings.
\bibliographystyle{plain}
\section{Introduction}
New physics (NP), or physics beyond the standard model, involves various models
that extend the well verified standard model (SM) of particle physics by
introducing a number of new particles with novel properties and interactions.
Though various aspects of many of these particles and interactions are
constrained by existing experimental data, we are yet to detect any definitive
signature of new physics in our experiments. Nevertheless, recent experimental
studies in $B$ meson decays, such as $B \to K^{(*)} \ell^-\ell^+$
\cite{B2KorKstLL}, $B_s \to \phi \ell^-\ell^+$ \cite{Aaij:2015esa}, $B \to
D^{(*)}\ell\nu$ \cite{B2DorDstLN} and $B_c \to J/\psi \ell\nu$
\cite{Aaij:2017tyk} (where $\ell$ can be $e,\mu$ or $\tau$) have reported
anomalous observations, raising the expectation of a discovery of new physics at
higher statistical significance. In this context, model-independent studies of
such semi-leptonic three-body meson decay processes become important as they can
identify generic signatures of new physics which can be probed experimentally.
In this paper, we have analyzed the effects of new physics, in a
model-independent manner, on the angular distribution of a general semi-hadronic
three-body meson decay of the type $P_i \to P_f f_1 f_2$, where $P_i$ and $P_f$
are the initial and final pseudo-scalar mesons respectively, and $f_{1,2}$
denote fermions (which may or may not be leptons but not quarks) out of which at
least one gets detected experimentally. Presence of new interactions, or new
particles such as fermionic dark matter (DM) particles or heavy sterile
neutrinos or long lived particles (LLP) would leave their signature in the
angular distribution, and we show by example how new physics contributions can be
quantified from angular asymmetries. Our methodology can be used for detection
of new physics in experimental study of various three-body pseudo-scalar meson
decays at various collider experiments such as LHCb and Belle~II.
The structure of our paper is as follows. In
Sec.~\ref{sec:Lagrangian-and-amplitude} we discuss the most general Lagrangian
and amplitude which include all probable NP contributions to our process under
consideration. The relevant details of kinematics is then described in
Sec.~\ref{sec:kinematics}. This is followed by a discussion on the angular
distribution and the various angular asymmetries in
Sec.~\ref{sec:ang-dist-asymmetries}. In Sec.~\ref{sec:example} we present a few
well chosen examples illustrating the effects of new physics on the angular
distribution. In Sec.~\ref{sec:conclusion} we conclude by summarizing the
important aspects of our methodology and its possible experimental realization.
\section{Most general Lagrangian and Amplitude}\label{sec:Lagrangian-and-amplitude}
Following the model-independent analysis of the decay $B \to D \ell^-\ell^+$ as
given in Ref.~\cite{Kim:2016zbg} and generalizing it for our process $P_i \to
P_f f_1 f_2$ where $P_{i,f}$ can be $B, B_s, B_c, D, K, \pi$ etc.\ as
appropriate and $f_1 f_2$ can be $\ell^- \ell^+$, $\ell \bar{\ell'}$, $\ell
\nu_{\ell}$, $\ell \nu_S$, $\ell f^{DM}$, $\nu_{\ell}\ensuremath{\overline{\nu}}_{\ell}$,
$\nu_S\ensuremath{\overline{\nu}}_{\ell}$, $\nu_{\ell}\ensuremath{\overline{\nu}}_S$, $\nu_S \ensuremath{\overline{\nu}}_S$, $f^{DM}
\bar{f}^{DM}$, $f_1^{DM} f_2^{DM}$, $f_1^{LLP} f_2^{LLP}$ (with
$\ell,\ell'=e,\mu,\tau$ denoting leptons, $\nu_S$ being sterile neutrino,
$f^{DM}_{1,2}$ as fermionic dark matter and $f_{1,2}^{LLP}$ as long lived
fermions)\footnote{It is clear that we can not only analyze processes allowed in
the SM but also those NP contributions from fermionic dark matter in the final
state as well as including flavor violation. Our analysis as presented in this
paper is fully model-independent and general in nature.}, we can write down the
effective Lagrangian facilitating the decay under consideration as follows,
\begin{eqnarray}
\mathcal{L}_{\textrm{eff}} &=& J_S \left( \bar{f}_1 f_2 \right)
+ J_P \left( \bar{f}_1~\gamma^5~f_2 \right) + \left(J_V\right)_{\alpha}
\left( \bar{f}_1~\gamma^{\alpha}~f_2 \right) \nonumber\\%
&& + \left(J_A\right)_{\alpha} \left( \bar{f}_1~\gamma^{\alpha}\gamma^5~f_2
\right) + \left(J_{T_1}\right)_{\alpha\beta} \left(
\bar{f}_1~\sigma^{\alpha\beta}~f_2 \right) \nonumber\\%
&& + \left(J_{T_2}\right)_{\alpha\beta} \left(
\bar{f}_1~\sigma^{\alpha\beta}\gamma^5~f_2 \right) + \text{h.c.},
\label{eq:Effective-Lagrangian}
\end{eqnarray}
where $J_S$, $J_P$, $\left(J_V\right)_{\alpha}$, $\left(J_A\right)_{\alpha}$,
$\left(J_{T_1}\right)_{\alpha\beta}$, $\left(J_{T_2}\right)_{\alpha\beta}$ are
the different hadronic currents which effectively describe the quark level
transitions from $P_i$ to $P_f$ meson. It should be noted that we have kept both
$\sigma^{\alpha\beta}$ and $\sigma^{\alpha\beta}\gamma^5$ terms, because the
currents $\bar{f}_1 \, \sigma^{\alpha\beta} \, f_2$ and
$\bar{f}_1 \, \sigma^{\alpha\beta}\gamma^5 \, f_2$ describe two different
physical aspects, namely the magnetic dipole and electric dipole contributions
respectively. In the SM, vector and axial-vector currents (mediated by photon,
$W^{\pm}$ and $Z^0$ bosons) and the scalar current (mediated by Higgs boson)
contribute. Hence, every term in Eq.~\eqref{eq:Effective-Lagrangian} other than
the ones with $J_S$, $\left(J_V\right)_{\alpha}$ and $\left(J_A\right)_{\alpha}$
can appear only in some specific NP model. Since, in this paper, we want to
concentrate on a fully model-independent analysis to get generic signatures of
new physics, we shall refrain from venturing into details of any specific NP
model, which nevertheless are also useful. It is important to note that $J_S$,
$\left(J_V\right)_{\alpha}$ and $\left(J_A\right)_{\alpha}$ can also get
modified due to NP contributions.
\begin{figure}[hbtp]
\centering%
\includegraphics[scale=1]{fig_Feynman_diagram.pdf} \caption{Feynman diagram
for $P_i \to P_f f_1 f_2$ considering $f_1$ as a particle and $f_2$ as an
anti-particle. Here the blob denotes the effective vertex and includes
contributions from all the form factors defined in
Eq.~\eqref{eq:form-factors}.}%
\label{fig:Feynman_diagram}
\end{figure}
In order to get the most general amplitude for our process under consideration,
we need to go from the effective quark-level description of
Eq.~\eqref{eq:Effective-Lagrangian} to the meson level description by defining
appropriate form factors. It is easy to write down the most general form of the
amplitude for the process $P_i \to P_f f_1 f_2$ depicted in
Fig.~\ref{fig:Feynman_diagram} as follows,
\begin{align}
\mathcal{M} \left( P_i \to P_f f_1 f_2 \right) &= F_S \left(
\bar{f}_1 f_2 \right) + F_P \left( \bar{f}_1~\gamma^5~f_2 \right)
\nonumber\\*%
&\quad + \left( F_V^+ p_{\alpha} + F_V^- q_{\alpha} \right) \left(
\bar{f}_1~\gamma^{\alpha}~f_2 \right) \nonumber\\* %
&\quad + \left( F_A^+ p_{\alpha} + F_A^- q_{\alpha} \right) \left(
\bar{f}_1~\gamma^{\alpha}~\gamma^5~f_2 \right) \nonumber\\* %
&\quad + F_{T_1}~p_{\alpha}~q_{\beta} \left( \bar{f}_1~\sigma^{\alpha\beta}~f_2
\right) \nonumber\\* %
&\quad + F_{T_2}~p_{\alpha}~q_{\beta} \left(
\bar{f}_1~\sigma^{\alpha\beta}~\gamma^5~f_2 \right), \label{eq:amplitude}
\end{align}
where $F_{S}$, $F_{P}$, $F_{V}^{\pm}$, $F_{A}^{\pm}$, $F_{T_1}$ and $F_{T_2}$
are the relevant form factors, and are defined as follows,
\begin{subequations}\label{eq:form-factors}
\begin{align}
\bracket{P_f}{J_S}{P_i} &= F_S,\\%
\bracket{P_f}{J_P}{P_i} &= F_P,\\%
\bracket{P_f}{\left(J_V\right)_{\alpha}}{P_i} &= F_V^+ p_{\alpha} + F_V^-
q_{\alpha},\\%
\bracket{P_f}{\left(J_A\right)_{\alpha}}{P_i} &= F_A^+ p_{\alpha} + F_A^-
q_{\alpha},\\%
\bracket{P_f}{\left(J_{T_1}\right)_{\alpha\beta}}{P_i} &=
F_{T_1}~p_{\alpha}~q_{\beta},\\%
\bracket{P_f}{\left(J_{T_2}\right)_{\alpha\beta}}{P_i} &=
F_{T_2}~p_{\alpha}~q_{\beta},
\end{align}
\end{subequations}
with $p \equiv k + k_3$ and $q \equiv k - k_3 = k_1 + k_2$, in which $k, k_1,
k_2, k_3$ are the 4-momenta of $P_i, f_1, f_2$ and $P_f$, respectively (see
Fig.~\ref{fig:Feynman_diagram}). All the form factors appearing in the amplitude
in Eq.~\eqref{eq:amplitude} and as defined in Eq.~\eqref{eq:form-factors} are,
in general, complex and contain all NP information. It should be noted that for
simplicity we have implicitly put all the relevant Cabibbo-Kobayashi-Maskawa
matrix elements as well as coupling constants and propagators inside the
definitions of these form factors. In the SM only $F_V^{\pm}$ and $F_A^{\pm}$
are present. Presence of NP can modify these as well as introduce other form
factors\footnote{It should be noted that the form factors, especially the ones
describing semi-leptonic $B$ meson decays, can be obtained by using heavy
quark effective theory \cite{HQET}, lattice QCD \cite{Lattice}, QCD
light-cone sum rules \cite{Light-cone}, the covariant confined quark model
\cite{CCQM}, etc. In this paper we present a very general analysis which is
applicable to a diverse set of meson decays. Hence we do not discuss any
specifics of the form factors used in our analysis. Moreover, we shall show, by
using certain examples and in a few specific cases, that one can also probe new
physics without worrying about the details of the form factors. Nevertheless,
when one concentrates on a specific decay mode, considering the form factors in
detail is always useful.}. These various NP contributions would leave behind
their signatures in the angular distribution for which we need to specify the
kinematics in a chosen frame of reference.
\section{Decay Kinematics}\label{sec:kinematics}
\begin{figure}[hbtp]
\centering%
\includegraphics[scale=1]{fig_GJframe.pdf}%
\caption{Decay of $P_i \to P_f f_1 f_2$ in the Gottfried-Jackson frame.} %
\label{fig:GJ-frame}
\end{figure}
We shall consider the decay $P_i \to P_f f_1 f_2$ in the Gottfried-Jackson
frame, namely the center-of-momentum frame of the $f_1 f_2$ system, which is
shown in Fig.~\ref{fig:GJ-frame}. In this frame the parent meson $P_i$ flies
along the positive $z$-direction with 4-momentum $k = \left(E, \mathbf{k}
\right) = \left(E,0,0,\modulus{\mathbf{k}}\right)$ and decays to the daughter
meson $P_f$ which also flies along the positive $z$-direction with 4-momentum
$k_3 = \left( E_3, \mathbf{k}_3 \right) = \left(E_3, 0, 0,
\modulus{\mathbf{k}_3}\right)$ and to $f_1$, $f_2$ which fly away back-to-back
with 4-momenta $k_1 = \left( E_1, \mathbf{k}_1 \right)$ and $k_2 = \left( E_2,
\mathbf{k}_2 \right)$ respectively, such that by conservation of 4-momentum we
get, $\mathbf{k}_1 + \mathbf{k}_2 = \mathbf{0}$, $\mathbf{k} = \mathbf{k}_3$,
and $E = E_1 + E_2 + E_3$. In this Gottfried-Jackson frame, the fermion $f_1$
(which we assume can be observed experimentally) flies out making an angle
$\theta$ with the direction of flight of the $P_i$ meson. The
three invariant mass-squares involved in the decay under consideration are
defined as follows,
\begin{subequations}\label{eq:stu}
\begin{align}
s &= (k_1 + k_2)^2 = (k - k_3)^2,\\%
t &= (k_1 + k_3)^2 = (k - k_2)^2,\\%
u &= (k_2 + k_3)^2 = (k - k_1)^2.
\end{align}
\end{subequations}
It is easy to show that $s + t + u = m_i^2 + m_f^2 + m_1^2 + m_2^2$, where
$m_i, m_f,m_1$ and $m_2$ denote the masses of particles $P_i,P_f,f_1$ and $f_2$
respectively. In the Gottfried-Jackson frame, the expressions for $t$ and $u$
are given by
\begin{subequations}\label{eq:tu}
\begin{align}
t &= a_t - b \cos\theta,\label{eq:t}\\%
u &= a_u + b \cos\theta,\label{eq:u}
\end{align}
\end{subequations}
where
\begin{subequations}\label{eq:ab}
\begin{align}
a_t &= m_1^2 + m_f^2 + \frac{1}{2s} \left( s + m_1^2 - m_2^2 \right) \left(
m_i^2 - m_f^2 -s \right),\label{eq:at}\\%
a_u &= m_2^2 + m_f^2 + \frac{1}{2s} \left( s - m_1^2 + m_2^2 \right) \left(
m_i^2 - m_f^2 -s \right),\label{eq:au}\\%
b &= \frac{1}{2s} \sqrt{\lambda\left( s, m_1^2, m_2^2 \right)~\lambda \left( s,
m_i^2, m_f^2 \right)},\label{eq:b}
\end{align}
\end{subequations}
with the K\"{a}ll\'{e}n function $\lambda(x,y,z)$ defined as,
\begin{equation*}
\lambda\left( x,y,z \right) = x^2 + y^2 + z^2 - 2 \left( xy + yz + zx \right).
\end{equation*}
It is clear that $a_t$, $a_u$ and $b$ are functions of $s$ only. For the special
case of $m_1 = m_2 = m$ (say) we have $a_t = a_u = \tfrac{1}{2} \left(m_i^2 +
m_f^2 + 2m^2 -s\right)$ and $b = \tfrac{1}{2} \sqrt{\left(1-4m^2/s\right)
\lambda\left(s,m_i^2,m_f^2\right)}$. It is important to note that we shall use
the angle $\theta$ in our angular distribution.
\section{Most general angular distribution and angular asymmetries}\label{sec:ang-dist-asymmetries}
Considering the amplitude as given in Eq.~\eqref{eq:amplitude}, the most general
angular distribution in the Gottfried-Jackson frame is given by,
\begin{equation}\label{eq:gen-angular-dist}
\frac{d^2\Gamma}{ds \, d\cos\theta} = \frac{b\sqrt{s} \left( C_0 + C_1
\cos\theta + C_2 \cos^2\theta \right)}{128 \, \pi^3 \, m_i^2 \left(m_i^2 - m_f^2
+ s \right)},
\end{equation}
where $C_0$, $C_1$ and $C_2$ are functions of $s$ and are given by,
\begin{subequations}\label{eq:gen-C012}
\begin{align}
C_0 &= 2 \Bigg(-\modulus{F_{T_1}}^2 \bigg(-\Sigma m_{12}^2 s^2 + 2 \Sigma
m_{12}^2 \left( \Sigma m^2 \right)_{if} s \nonumber\\%
& \hspace{1.5cm} + \left( \Delta m^2 \right)_{12}^2 s -\Delta a_{tu}^2 s - 2
\left( \Delta m^2 \right)_{12}^2 \left( \Sigma m^2 \right)_{if} \nonumber\\%
& \hspace{1.5cm} - \left( \Delta m^2 \right)_{if}^2 \Sigma m_{12}^2 + 2 \Delta
a_{tu} \left( \Delta m^2 \right)_{12} \left( \Delta m^2 \right)_{if} \bigg)
\nonumber\\%
& \quad - 2 \Im\left( F_V^+ F_{T_1}^* \right) \bigg( -\Sigma m_{12} s^2 + 2
\Sigma m_{12} \left( \Sigma m^2 \right)_{if} s \nonumber\\%
& \hspace{1.5cm} + \Delta m_{12} \left( \Delta m^2 \right)_{12} s - 2 \Delta
m_{12} \left( \Delta m^2 \right)_{12} \left( \Sigma m^2 \right)_{if}
\nonumber\\%
& \hspace{1.5cm} - \left( \Delta m^2 \right)_{if}^2 \Sigma m_{12} +\Delta a_{tu}
\Delta m_{12} \left( \Delta m^2 \right)_{if} \bigg) \nonumber\\%
& \quad + \modulus{F_{T_2}}^2 \bigg( \Delta m_{12}^2 s^2 - 2 \Delta m_{12}^2
\left( \Sigma m^2 \right)_{if} s - \left( \Delta m^2 \right)_{12}^2 s
\nonumber\\%
& \hspace{1.5cm} + \Delta a_{tu}^2 s + 2 \left( \Delta m^2 \right)_{12}^2 \left(
\Sigma m^2 \right)_{if} + \Delta m_{12}^2 \left( \Delta m^2 \right)_{if}^2
\nonumber\\%
& \hspace{1.5cm} - 2 \Delta a_{tu} \left( \Delta m^2 \right)_{12} \left( \Delta
m^2 \right)_{if} \bigg) \nonumber\\%
& \quad - 2 \Im\left( F_A^+ F_{T_2}^* \right) \bigg(\Delta m_{12} s^2 - 2 \Delta
m_{12} \left( \Sigma m^2 \right)_{if} s \nonumber\\%
& \hspace{1.5cm} - \left( \Delta m^2 \right)_{12} \Sigma m_{12} s + 2 \left(
\Delta m^2 \right)_{12} \Sigma m_{12} \left( \Sigma m^2 \right)_{if}
\nonumber\\%
& \hspace{1.5cm} - \Delta a_{tu} \left( \Delta m^2 \right)_{if} \Sigma m_{12} +
\Delta m_{12} \left( \Delta m^2 \right)_{if}^2 \bigg) \nonumber\\%
& \quad + \modulus{F_A^+}^2 \bigg( s^2 - 2\left( \Sigma m^2 \right)_{if} s -
\Sigma m_{12}^2 s \nonumber\\%
& \hspace{1.5cm} + 2 \Sigma m_{12}^2 \left( \Sigma m^2 \right)_{if} + \left(
\Delta m^2 \right)_{if}^2-\Delta a_{tu}^2 \bigg) \nonumber\\%
& \quad + \modulus{F_V^+}^2 \bigg( s^2 - 2 \left( \Sigma m^2 \right)_{if} s
-\Delta m_{12}^2 s \nonumber\\%
& \hspace{1.5cm} + 2 \Delta m_{12}^2 \left( \Sigma m^2 \right)_{if} + \left(
\Delta m^2 \right)_{if}^2 - \Delta a_{tu}^2 \bigg) \nonumber\\%
& \quad + \modulus{F_A^-}^2 \left(\Sigma m_{12}^2 s - \left( \Delta m^2
\right)_{12}^2 \right) \nonumber\\%
& \quad - 2 \Re\left( F_P F_A^{-*} \right) \left(\Sigma m_{12} s - \Delta m_{12}
\left( \Delta m^2 \right)_{12} \right) \nonumber\\%
& \quad - \modulus{F_V^-}^2 \left( \left( \Delta m^2 \right)_{12}^2-\Delta
m_{12}^2 s \right) \nonumber\\%
& \quad - 2\Re\left( F_S F_V^{-*} \right) \left(\left( \Delta m^2 \right)_{12}
\Sigma m_{12}-\Delta m_{12} s \right) \nonumber\\%
& \quad -\modulus{F_S}^2 \left( \Sigma m_{12}^2-s \right) -\modulus{F_P}^2
\left( \Delta m_{12}^2-s \right) \nonumber\\%
& \quad + 2 \Re\left( F_A^+ F_A^{-*} \right) \left( \left( \Delta m^2
\right)_{if} \Sigma m_{12}^2 - \Delta a_{tu} \left( \Delta m^2 \right)_{12}
\right) \nonumber\\%
& \quad - 2 \Re\left( F_P F_A^{+*} \right) \left( \left( \Delta m^2 \right)_{if}
\Sigma m_{12} - \Delta a_{tu} \Delta m_{12} \right) \nonumber\\%
& \quad - 2 \Re\left( F_S F_V^{+*} \right) \left( \Delta a_{tu} \Sigma m_{12} -
\Delta m_{12} \left( \Delta m^2 \right)_{if} \right) \nonumber\\%
& \quad + 2 \Re\left( F_V^+ F_V^{-*} \right) \left( \Delta m_{12}^2 \left(
\Delta m^2 \right)_{if} - \Delta a_{tu} \left( \Delta m^2 \right)_{12} \right)
\Bigg),\\%
C_1 &= 8 b \Bigg( \Delta m_{12} \left( \Im\left( F_V^- F_{T_1}^* \right)
s-\Re\left( F_P F_A^{+*} \right) \right) \nonumber\\%
& \hspace{1.5cm} + \Sigma m_{12} \Big(-\Im\left( F_A^- F_{T_2}^* \right) s +
\Re\left( F_S F_V^{+*} \right) \nonumber\\%
& \hspace{4cm} - \left( \Delta m^2 \right)_{if} \Im\left( F_A^+ F_{T_2}^*
\right) \Big) \nonumber\\%
& \hspace{1.5cm} + \Delta a_{tu} \left( \modulus{F_V^+}^2 + \modulus{F_A^+}^2
\right) \nonumber\\%
& \hspace{1.5cm} + \left( \Im\left( F_S F_{T_1}^* \right) + \Im\left( F_P
F_{T_2}^* \right) \right)s \nonumber\\%
& \hspace{1.5cm} + \left( \Delta m^2 \right)_{12} \left( \Re\left( F_V^+
F_V^{-*} \right) + \Re\left( F_A^+ F_A^{-*} \right) \right) \nonumber\\%
& \hspace{1.5cm} + \left( \Delta m^2 \right)_{if} \Delta m_{12} \Im\left( F_V^+
F_{T_1}^* \right) \Bigg),\\%
C_2 &= 8 b^2 \left( \left( \modulus{F_{T_2}}^2 + \modulus{F_{T_1}}^2 \right) s -
\modulus{F_V^+}^2 - \modulus{F_A^+}^2 \right),
\end{align}
\end{subequations}
with
\begin{subequations}
\begin{align}
\Delta a_{tu} &= a_t - a_u,\\%
\Delta m_{12} &= m_1 - m_2,\\%
\Delta m_{if} &= m_i - m_f,\\%
\Sigma m_{12} &= m_1 + m_2,\\%
\Sigma m_{if} &= m_i + m_f,\\%
\left(\Delta m^2\right)_{12} &= \Delta m_{12} \Sigma m_{12} = m_1^2 - m_2^2,\\%
\left(\Delta m^2\right)_{if} &= \Delta m_{if} \Sigma m_{if} = m_i^2 - m_f^2,\\%
\left(\Sigma m^2\right)_{if} &= m_i^2 + m_f^2.
\end{align}
\end{subequations}
In the limit $m_1=m_2$, which happens when $f_1 f_2 = \ell^-\ell^+, \nu\ensuremath{\overline{\nu}},$
or $f^{DM} \bar{f}^{DM}$ etc., our expression for the angular distribution
matches the corresponding expression in Ref.~\cite{Kim:2016zbg}. It is
important to remember that in the SM we come across scalar, vector and
axial-vector currents only. Therefore, in the SM, $F_P^{\text{SM}} =
F_{T_1}^{\text{SM}} = F_{T_2}^{\text{SM}} = 0$, which implies that,
\begin{subequations}\label{eq:SM-C012}
\begin{align}
C_0^{\text{SM}} =& 2 \Bigg( \modulus{\left(F_A^+\right)_{\text{SM}}}^2 \bigg(
s^2 - 2\left( \Sigma m^2 \right)_{if} s - \Sigma m_{12}^2 s \nonumber\\%
& \hspace{2cm} + 2 \Sigma m_{12}^2 \left( \Sigma m^2 \right)_{if} + \left(
\Delta m^2 \right)_{if}^2-\Delta a_{tu}^2 \bigg) \nonumber\\%
& \quad + \modulus{\left(F_V^+\right)_{\text{SM}}}^2 \bigg( s^2 - 2 \left(
\Sigma m^2 \right)_{if} s -\Delta m_{12}^2 s \nonumber\\%
& \hspace{2.25cm} + 2 \Delta m_{12}^2 \left( \Sigma m^2 \right)_{if} + \left(
\Delta m^2 \right)_{if}^2 - \Delta a_{tu}^2 \bigg) \nonumber\\%
& \quad + \modulus{\left(F_A^-\right)_{\text{SM}}}^2 \left(\Sigma m_{12}^2 s -
\left( \Delta m^2 \right)_{12}^2 \right) \nonumber\\%
& \quad - \modulus{\left(F_V^-\right)_{\text{SM}}}^2 \left( \left( \Delta m^2
\right)_{12}^2-\Delta m_{12}^2 s \right) \nonumber\\%
& \quad - \modulus{\left(F_S\right)_{\text{SM}}}^2 \left( \Sigma m_{12}^2 -s
\right) \nonumber\\%
& \quad + 2 \Re\left( \left(F_A^+\right)_{\text{SM}}
\left(F_A^-\right)_{\text{SM}}^* \right) \bigg( \left( \Delta m^2 \right)_{if}
\Sigma m_{12}^2 \nonumber\\*%
& \hspace{4cm} - \Delta a_{tu} \left( \Delta m^2 \right)_{12} \bigg)
\nonumber\\%
& \quad + 2 \Re\left( \left(F_V^+\right)_{\text{SM}}
\left(F_V^-\right)_{\text{SM}}^* \right) \bigg( \left( \Delta m^2 \right)_{if}
\Delta m_{12}^2 \nonumber\\%
& \hspace{4cm} - \Delta a_{tu} \left( \Delta m^2 \right)_{12} \bigg) \Bigg),\\%
C_1^{\text{SM}} =& 8 b \Bigg( \Delta a_{tu} \left(
\modulus{\left(F_V^+\right)_{\text{SM}}}^2 +
\modulus{\left(F_A^+\right)_{\text{SM}}}^2 \right) \nonumber\\%
& \qquad + \left( \Delta m^2 \right)_{12} \bigg( \Re\left(
\left(F_V^+\right)_{\text{SM}} \left(F_V^-\right)_{\text{SM}}^* \right)
\nonumber\\%
& \hspace{2.5cm} + \Re\left( \left(F_A^+\right)_{\text{SM}}
\left(F_A^-\right)_{\text{SM}}^* \right) \bigg) \Bigg),\\%
C_2^{\text{SM}} =& - 8 b^2 \left( \modulus{\left(F_V^+\right)_{\text{SM}}}^2 +
\modulus{\left(F_A^+\right)_{\text{SM}}}^2 \right).
\end{align}
\end{subequations}
It is interesting to note that in the special case of $m_1 = m_2$, such as in
$P_i \to P_f \ell^+ \ell^-$, we always have $C_1^{\text{SM}}=0$. For specific
meson decays of the form $P_i \to P_f f_1 f_2$ allowed in the SM, one can write
down $\left(F_S\right)_{\text{SM}}$ , $\left(F_V^{\pm}\right)_{\text{SM}}$ and
$\left(F_A^{\pm}\right)_{\text{SM}}$, at least in principle. The SM prediction
for the angular distribution can thus be compared with corresponding
experimental measurement. In order to quantitatively compare the theoretical
prediction with experimental measurement, we define the following three angular
asymmetries which can precisely probe $C_0$, $C_1$ and $C_2$ individually,
\begin{subequations}\label{eq:ang-asymmetries}
\begin{align}
A_0 \equiv A_0(s) &= \frac{- \frac{1}{6} \left( \int_{-1}^{-1/2} - 7
\int_{-1/2}^{+1/2} + \int_{+1/2}^{+1} \right) \dfrac{d^2 \Gamma}{ds \,
d\cos\theta} d\cos\theta}{d\Gamma/ds} \nonumber\\%
&= 3C_0/\left(6C_0+2C_2\right),\\%
A_1 \equiv A_1(s) &= \frac{- \left( \int_{-1}^{0} - \int_{0}^{+1} \right)
\dfrac{d^2 \Gamma}{ds \, d\cos\theta} d\cos\theta}{d\Gamma/ds} \nonumber\\%
&= 3C_1/\left(6C_0+2C_2\right),\\%
A_2 \equiv A_2(s) &= \frac{2 \left( \int_{-1}^{-1/2} - \int_{-1/2}^{+1/2} +
\int_{+1/2}^{+1} \right) \dfrac{d^2 \Gamma}{ds \, d\cos\theta}
d\cos\theta}{d\Gamma/ds} \nonumber\\%
&= 3C_2/\left(6C_0+2C_2\right).
\end{align}
\end{subequations}
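The closed forms quoted for $A_0$, $A_1$ and $A_2$ follow from carrying out the binned $\cos\theta$ integrals explicitly; the $s$-dependent prefactor of Eq.~\eqref{eq:gen-angular-dist} cancels between numerator and denominator. A minimal Python sketch with exact rational arithmetic (the coefficient values are arbitrary illustrative choices):

```python
from fractions import Fraction as F

def seg(C0, C1, C2, lo, hi):
    """Exact integral of C0 + C1*c + C2*c^2 over [lo, hi] in c = cos(theta)."""
    prim = lambda x: C0*x + C1*x**2/2 + C2*x**3/3
    return prim(hi) - prim(lo)

def asymmetries(C0, C1, C2):
    """Binned asymmetries A0, A1, A2 as defined in Eq. (ang-asymmetries)."""
    half = F(1, 2)
    norm = seg(C0, C1, C2, -1, 1)    # proportional to dGamma/ds
    A0 = -F(1, 6)*(seg(C0, C1, C2, -1, -half) - 7*seg(C0, C1, C2, -half, half)
                   + seg(C0, C1, C2, half, 1))/norm
    A1 = -(seg(C0, C1, C2, -1, 0) - seg(C0, C1, C2, 0, 1))/norm
    A2 = 2*(seg(C0, C1, C2, -1, -half) - seg(C0, C1, C2, -half, half)
            + seg(C0, C1, C2, half, 1))/norm
    return A0, A1, A2

# arbitrary exact rational coefficients, purely illustrative
C0, C1, C2 = F(7, 3), F(-2, 5), F(9, 4)
A0, A1, A2 = asymmetries(C0, C1, C2)
assert A0 == 3*C0/(6*C0 + 2*C2)      # closed form for A0
assert A1 == 3*C1/(6*C0 + 2*C2)      # closed form for A1
assert A2 == 3*C2/(6*C0 + 2*C2)      # closed form for A2
assert A2 == 3*(F(1, 2) - A0)        # relation quoted in the text
```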
The angular asymmetries of Eq.~\eqref{eq:ang-asymmetries} are functions of $s$
and it is easy to show that $A_2 = 3 \left(1/2 - A_0 \right)$. We can do the
integration over $s$ in Eq.~\eqref{eq:gen-angular-dist} and define the following
normalized angular distribution,
\begin{equation}\label{eq:Gen-ang-dist}
\frac{1}{\Gamma} \frac{d\Gamma}{d\cos\theta} = T_0 + T_1 \cos\theta + T_2 \cos^2\theta,
\end{equation}
where
\begin{equation}\label{eq:Def-T012}
T_j = 3 c_j/\left(6c_0 + 2 c_2\right),
\end{equation}
for $j=0,1,2$ and with
\begin{equation}\label{eq:cj}
c_j = \int_{(m_1+m_2)^2}^{(m_i-m_f)^2} \frac{b\sqrt{s} \, C_j}{128 \pi^3 m_i^2 \left(m_i^2 - m_f^2 + s\right)} ds.
\end{equation}
From Eq.~\eqref{eq:Def-T012} it is easy to show that $T_2 = 3 \left(1/2 -
T_0\right)$, which also ensures that integrating Eq.~\eqref{eq:Gen-ang-dist}
over $\cos\theta$ yields $1$. It is interesting to note that the
angular distribution of Eq.~\eqref{eq:Gen-ang-dist} can be written in terms of
the orthogonal Legendre polynomials of $\cos\theta$ as well,
\begin{equation}\label{eq:Gen-ang-dist-Legendre}
\frac{1}{\Gamma} \frac{d\Gamma}{d\cos\theta} = \sum_{i=0}^{2} \langle G^{(i)} \rangle P_i \left(\cos\theta\right).
\end{equation}
Here we have followed the notation of Ref.~\cite{Gratrex:2015hna} which also
analyzes decays of the type $P_i \to P_f f_1 f_2$, with only leptons for
$f_{1,2}$, in a model-independent manner but using a generalized helicity
amplitude method. The observables $\langle G^{(i)} \rangle$ of
Eq.~\eqref{eq:Gen-ang-dist-Legendre} are related to $T_0$, $T_1$ and $T_2$ of
Eq.~\eqref{eq:Gen-ang-dist} as follows,
\begin{subequations}
\begin{align}
\langle G^{(0)} \rangle &= T_0 + T_2/3 = 1/2,\\%
\langle G^{(1)} \rangle &= T_1,\\%
\langle G^{(2)} \rangle &= 2 T_2/3.
\end{align}
\end{subequations}
These angular observables $\langle G^{(i)} \rangle$ can be obtained by using
the method of moments \cite{Gratrex:2015hna, Beaujean:2015xea}. Another
important way to describe the normalized angular distribution is by using a flat
term $F_H/2$ and the forward-backward asymmetry $A_{FB}$ \cite{AngDist:Hiller}
as follows,
\begin{equation}\label{eq:Gen-ang-dist-expt}
\frac{1}{\Gamma} \frac{d\Gamma}{d\cos\theta} = \frac{1}{2} F_H + A_{FB}
\cos\theta + \frac{3}{4} \left(1-F_H\right) \left(1 - \cos^2\theta\right).
\end{equation}
This form of the angular distribution has also been used in the experimental
community \cite{AngDist:Expt} in the study of $B \to K \ell^+ \ell^-$. The parameters $F_H$ and $A_{FB}$ are related to $T_0$, $T_1$ and $T_2$ as follows,
\begin{subequations}
\begin{align}
F_H &= 2 \left( T_0 + T_2 \right) = 3 - 4 T_0,\\%
A_{FB} &= T_1.
\end{align}
\end{subequations}
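The equivalence of the three parametrizations can also be confirmed numerically by evaluating them at sample points. A minimal Python sketch with exact rational arithmetic ($T_0$ and $T_1$ take arbitrary illustrative values, with $T_2 = 3(1/2 - T_0)$ enforced throughout):

```python
from fractions import Fraction as F

def dist_T(T0, T1, c):
    """Eq. (Gen-ang-dist): T0 + T1 c + T2 c^2 with T2 = 3(1/2 - T0)."""
    T2 = 3*(F(1, 2) - T0)
    return T0 + T1*c + T2*c**2

def dist_legendre(T0, T1, c):
    """Eq. (Gen-ang-dist-Legendre) with G0 = 1/2, G1 = T1, G2 = 2 T2/3."""
    T2 = 3*(F(1, 2) - T0)
    P = [1, c, (3*c**2 - 1)/2]          # Legendre polynomials P0, P1, P2
    G = [F(1, 2), T1, 2*T2/3]
    return sum(g*p for g, p in zip(G, P))

def dist_FH(T0, T1, c):
    """Eq. (Gen-ang-dist-expt) with F_H = 3 - 4 T0 and A_FB = T1."""
    FH, AFB = 3 - 4*T0, T1
    return FH/2 + AFB*c + F(3, 4)*(1 - FH)*(1 - c**2)

T0, T1 = F(2, 5), F(1, 10)              # illustrative values
for c in (F(-1), F(-1, 3), F(0), F(1, 2), F(1)):
    assert dist_T(T0, T1, c) == dist_legendre(T0, T1, c) == dist_FH(T0, T1, c)
```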
Thus we have shown that Eqs.~\eqref{eq:Gen-ang-dist},
\eqref{eq:Gen-ang-dist-Legendre} and \eqref{eq:Gen-ang-dist-expt} are equivalent
to one another. In this paper, we choose to work using the normalized angular
distribution in terms of $T_0$, $T_1$ and $T_2$ as shown in
Eq.~\eqref{eq:Gen-ang-dist}. This is because the terms $T_0$, $T_1$ and $T_2$
can be easily determined experimentally by using the $t$-vs-$u$ Dalitz plot,
which does not depend on any specific frame of reference. This Dalitz plot can
be easily divided into four segments $I$, $II$, $III$ and $IV$, as shown in
Fig.~\ref{fig:Dalitz-plot-region}. The segments are defined as follows,
\begin{center}
\begin{tabular}{lcl}
Segment $I$ & : & $-1 \leqslant \cos\theta \leqslant -0.5$,\\%
Segment $II$ & : & $-0.5 < \cos\theta \leqslant 0$,\\%
Segment $III$ & : & $0 < \cos\theta \leqslant 0.5$,\\%
Segment $IV$ & : & $0.5 < \cos\theta \leqslant 1$.%
\end{tabular}
\end{center}
\begin{figure}[hbtp]
\centering%
\includegraphics[scale=0.8]{fig_Dalitz_plot_region.pdf}%
\caption{Two examples depicting the variation of $\cos\theta$ in the interior
region of the $t$-vs-$u$ Dalitz plot. The interior of the Dalitz plot can be
divided into four segments, $I$, $II$, $III$ and $IV$, as shown here.}%
\label{fig:Dalitz-plot-region}
\end{figure}
The terms $T_0$, $T_1$ and $T_2$ can thus be expressed in terms of the following
asymmetries,
\begin{subequations}\label{eq:T012}
\begin{align}
T_0 &= - \frac{1}{6} \left( \frac{N_I - 7 \left( N_{II} + N_{III}\right) +
N_{IV}}{N_I + N_{II} + N_{III} + N_{IV}} \right),\\%
T_1 &= \frac{\left( N_{III} + N_{IV} \right) - \left( N_I + N_{II} \right)}{N_I
+ N_{II} + N_{III} + N_{IV}},\\%
T_2 &= 2 \left( \frac{N_I - \left( N_{II} + N_{III}\right) + N_{IV}}{N_I +
N_{II} + N_{III} + N_{IV}} \right),
\end{align}
\end{subequations}
where $N_j$ denotes the number of events contained in segment $j$. Since the
$t$-vs-$u$ Dalitz plot does not depend on the frame of reference, we need not
restrict ourselves to the Gottfried-Jackson frame of Fig.~\ref{fig:GJ-frame}
and can work in the laboratory frame as well. Furthermore, we can use the
expressions in Eq.~\eqref{eq:T012} to search for NP.
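As a consistency check of these segment-count estimators, one can compute the expected event fractions in the four segments directly from the normalized distribution and recover $T_0$, $T_1$ and $T_2$. A Python sketch with exact rational arithmetic ($T_0$, $T_1$ are illustrative; note that $T_1$ is the forward-minus-backward count asymmetry, consistent with $A_{FB} = T_1$):

```python
from fractions import Fraction as F

def prim(T0, T1, T2, x):
    """Primitive of the normalized distribution T0 + T1 c + T2 c^2."""
    return T0*x + T1*x**2/2 + T2*x**3/3

def segment_fractions(T0, T1):
    """Expected event fractions N_I..N_IV in the four cos(theta) segments."""
    T2 = 3*(F(1, 2) - T0)
    edges = [F(-1), F(-1, 2), F(0), F(1, 2), F(1)]
    return [prim(T0, T1, T2, hi) - prim(T0, T1, T2, lo)
            for lo, hi in zip(edges, edges[1:])]

T0, T1 = F(3, 5), F(1, 7)                    # illustrative values
N1, N2, N3, N4 = segment_fractions(T0, T1)
total = N1 + N2 + N3 + N4
assert total == 1                            # normalized distribution

# recover T0, T1, T2 from the segment counts, as in Eq. (T012)
assert -F(1, 6)*(N1 - 7*(N2 + N3) + N4)/total == T0
assert ((N3 + N4) - (N1 + N2))/total == T1   # forward minus backward
assert 2*(N1 - (N2 + N3) + N4)/total == 3*(F(1, 2) - T0)
```

In a real analysis the $N_j$ would be measured event counts rather than exact fractions, so the recovered $T_j$ carry the corresponding statistical uncertainties.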
\section{Illustrating the effects of new physics on the angular distribution}\label{sec:example}
\subsection{Classification of the \texorpdfstring{$P_i \to P_f f_1 f_2$}{Pi -> Pf + f1 + f2} decays}%
It should be emphasized that for our methodology to work, we need to know the
angle $\theta$ in the Gottfried-Jackson frame, or equivalently the $t$-vs-$u$
Dalitz plot, which demands that the 4-momenta of the final particles be fully known.
Usually, the 4-momenta of the initial and final pseudo-scalar mesons are
directly measured experimentally. However, depending on the detection
possibilities of $f_1$ and $f_2$ we can identify three distinct scenarios for
our process $P_i \to P_f f_1 f_2$. We introduce the notations $f_i^{\textrm{\ding{51}}}$
and $f_i^{\textrm{\ding{55}}}$ to denote whether the fermion $f_i$ gets detected
(\textrm{\ding{51}}) or not (\textrm{\ding{55}}) by the detector. Using this notation the three
scenarios are described as follows.
\begin{enumerate}
\item[(S1)] $P_i \to P_f + f_1^{\textrm{\ding{51}}} + f_2^{\textrm{\ding{51}}} \equiv P_f +
\textrm{`visible'}$. Here both $f_1$ and $f_2$ are detected, e.g.\ when $f_1 f_2
= \ell^-\ell^+$ or $\ell \bar{\ell'}$.%
\item[(S2)] $P_i \to
\begin{Bmatrix}
P_f + f_1^{\textrm{\ding{51}}} + f_2^{\textrm{\ding{55}}}\\%
P_f + f_1^{\textrm{\ding{55}}} + f_2^{\textrm{\ding{51}}}
\end{Bmatrix} \equiv P_f + \textrm{`visible'} + \text{`invisible'}$. Here either
$f_1$ or $f_2$ gets detected, e.g.\ when $f_1 f_2 = \ell \nu_{\ell}$, $\ell
\nu_S$, $\ell f^{DM}$, $\ell f^{LLP}$.%
\item[(S3)] $P_i \to P_f + f_1^{\textrm{\ding{55}}} + f_2^{\textrm{\ding{55}}} \equiv P_f +
\textrm{`invisible'}$. Here neither $f_1$ nor $f_2$ gets detected, e.g.\ when
$f_1 f_2 = \nu_{\ell}\ensuremath{\overline{\nu}}_{\ell}$, $\nu_{\ell}\ensuremath{\overline{\nu}}_S$, $\nu_S\ensuremath{\overline{\nu}}_{\ell}$,
$\nu_S\ensuremath{\overline{\nu}}_S$, $f^{DM} \bar{f}^{DM}$, $f_1^{DM} f_2^{DM}$, $f_1^{LLP}
f_2^{LLP}$ etc.
\end{enumerate}
It should be noted that the above classification is based on our existing
experimental capabilities. What is undetected today might get detected in the
future with advanced detectors. In such a case we can imagine that the
modes grouped in S2 might migrate to S1 and those in S3 might be grouped under
S2. Below we explore each of the above scenarios in more detail.
\subsection{Exploration of new physics effects in each scenario}
The first scenario (S1) is an experimenter's delight as in this case all final
4-momenta can be easily measured and the $t$-vs-$u$ Dalitz plot can be obtained.
Here, our methodology can be used to look for possible signatures of new
physics in rare decays such as $B \to D \ell^- \ell^+$ (an analysis of which can
be found in \cite{Kim:2016zbg}), or to study the nature of new physics contributing to
lepton-flavor violating processes such as $B \to P \ell^{\pm} \ell^{\prime\mp}$
where $P=\pi,K,D$, $\ell\neq\ell'$ and $\ell,\ell'=e,\mu,\tau$. Let us consider
a few NP possibilities mediating such lepton-flavor violating decays. There is no
contribution within the SM to such decays; therefore, all contributions to these
decays come from NP alone. It is very easy to note that for the decay $B \to P
\ell^{-} \ell^{\prime+}$, from Eqs.~\eqref{eq:gen-C012} and
\eqref{eq:Gen-ang-dist} we get,
\begin{equation}\label{eq:NPinB2PLl}
\frac{1}{\Gamma} \frac{d\Gamma}{d\cos\theta} =
\begin{cases}
\dfrac{1}{2}, & \left(\parbox{0.23\linewidth}{\textrm{\centering only scalar or pseudo-scalar interaction}}\right)\\[5mm]%
T_0 + T_2 \cos^2\theta, & \left(\parbox{0.23\linewidth}{\textrm{\centering only tensorial interaction}}\right)\\[3mm]%
T_0 + T_1 \cos\theta + T_2 \cos^2\theta, & \left(\parbox{0.23\linewidth}{\textrm{\centering only vector or axial-vector interaction}}\right)
\end{cases}
\end{equation}
where $T_2 = 3\left(1/2 - T_0\right)$ with the quantities $T_0$, $T_1$ and $T_2$
being easily obtainable from the Dalitz plot distribution by using
Eq.~\eqref{eq:T012}. It is clear from Eq.~\eqref{eq:NPinB2PLl} that scalar or
pseudo-scalar interaction would give rise to a uniform (or constant) angular
distribution, while tensorial interaction gives a non-uniform distribution which
is symmetric under $\cos\theta \leftrightarrow -\cos\theta$, for which $T_0
\leqslant 1/2$. On the other hand, vector or axial-vector interaction can only be
described by the most general form of the angular distribution, with its
signature being $T_1 \neq 0$. Nevertheless, if vector or axial-vector
interaction contributes to the flavor violating processes $B \to P \ell^{-}
\ell^{\prime+}$, it is important to note that $T_1 \propto \left(m_{\ell}^2 -
m_{\ell'}^2\right)$, where $m_{\ell}$, $m_{\ell'}$ denote the masses of the
charged leptons $\ell^-$ and $\ell^{\prime+}$ respectively. Therefore, we should
observe an increase in the value of $T_1$ when going from $B \to P \mu^- e^+$ to
$B \to P \tau^- \mu^+$ to $B \to P \tau^- e^+$. This would nail down the vector
or axial-vector nature of the NP, if it is the only NP contributing to these
decays. Thus far we have analyzed the first scenario (S1), in which the relevant
decays can be easily probed with existing detectors.
The second scenario (S2) can also be studied experimentally with existing
detectors. In this case, the missing 4-momentum can be fully deduced using
conservation of 4-momentum. Thus the $t$-vs-$u$ Dalitz plot can readily be
obtained. Using our methodology the signatures of NP can then be extracted. One
promising candidate for NP searches in this kind of scenario is the decay
$B \to P \ell N$, where $P=\pi$, $K$ or $D$ and $N$ can be an active neutrino
($\nu_{\ell}$) or sterile neutrino ($\nu_S$) or a neutral dark fermion
($f^{DM}$) or a long lived neutral fermion ($f^{LLP}$) which decays outside the
detector. These S2 decay modes offer an exciting opportunity for the study of NP
effects.
The third scenario (S3), which has the maximum number of NP possibilities, is
also the most challenging one for the current generation of experimental
facilities, due to the lack of information about the individual 4-momenta of $f_1$
and $f_2$. This implies that we cannot do any angular analysis for these kinds
of decays unless, by some technological advancement such as the use of displaced
vertex detectors\footnote{There are many existing proposals for such displaced
vertex studies from other theoretical and experimental considerations (see
Refs.~\cite{DV:Theory,DV:Experiments} and references contained therein for
further information).}, we can manage to measure the 4-momentum or
the angular information of at least one of the final fermions. Obtaining the 4-momenta
of both fermions would be ideal, but knowing the 4-momentum of either one of
them would suffice for our purpose. We are optimistic that advancements in
detector technology would push the current S3 decay modes to get labelled as S2
modes in the foreseeable future. It is important to note that once the current
S3 modes enter the S2 category, we can cover the whole spectrum of NP
possibilities in the $P_i \to P_f f_1 f_2$ decays. Below we make a comprehensive
exploration of NP possibilities in the generalized S2 decay modes, which
includes the current S2 and S3 modes together.
\subsection{Probing effects of new physics in the (S2)\\and generalized (S2) scenarios}
In the generalized S2 (GS2) scenario we have decays of the type $P_i \to
\begin{Bmatrix}
P_f + f_1^{\textrm{\ding{51}}} + f_2^{\textrm{\ding{55}}}\\%
P_f + f_1^{\textrm{\ding{55}}} + f_2^{\textrm{\ding{51}}}
\end{Bmatrix} \equiv P_f + \textrm{`visible'} + \text{`invisible'}$, where the
detected (\textrm{\ding{51}}) or undetected (\textrm{\ding{55}}) nature is not
constrained by our existing detector technology. In some cases, even with
advanced detectors, either of the fermions $f_1$, $f_2$ might escape detection
simply because its direction of flight lies outside the finite detector
coverage, especially when the detector is located far from the place of origin
of the particle. Such possibilities are also included here. As noted before,
measuring the 4-momentum of either of the final fermions would suffice to carry
out the angular analysis following our approach.
In this context let us analyze the following decays.
\begin{enumerate}
\item[(i)] S2 decay: $B \to P\ell^- f^{\textrm{\ding{55}}}$ where $P$ can be $\pi$ or
$D$ and $f^{\textrm{\ding{55}}}$ is a neutral fermion. In the SM this process is
mediated by the $W^-$ boson and we have $f^{\textrm{\ding{55}}} = \ensuremath{\overline{\nu}}_{\ell}$. Presence
of NP can imply $f^{\textrm{\ding{55}}}$ being a sterile neutrino $\nu_S$ or a fermionic
dark matter particle $f^{DM}$ or a long lived fermion $f^{LLP}$, with additional
non-SM interactions.%
\item[(ii)] GS2 decay: $B \to K f_1^{\textrm{\ding{51}}} f_2^{\textrm{\ding{55}}}$ where
$f_1^{\textrm{\ding{51}}}$ and $f_2^{\textrm{\ding{55}}}$ are both neutral fermions. In the SM
this process is mediated by the $Z^0$ boson and we have $f_1 f_2 = \nu_{\ell}
\ensuremath{\overline{\nu}}_{\ell}$. However, in case of NP contribution we can get pairs of sterile
neutrinos or fermionic dark matter or fermionic long lived particles etc.\ along
with nonstandard interactions as well. Here we are assuming that either of the
final neutral fermions leaves a displaced vertex signature in an advanced
detector so that its 4-momentum or angular information could be obtained.%
\end{enumerate}
\subsubsection{New physics effects in the S2 decay \texorpdfstring{$B \to P\ell^- f^{\textrm{\ding{55}}}$}{B -> P + l- + fX}}
Analyzing the $B \to P\ell^- f^{\textrm{\ding{55}}}$ decay in the SM we find that only
vector and axial-vector currents contribute, with $F_A^{\pm} = -F_V^{\pm}$, while
all other form factors are zero. Also, taking the anti-neutrino to be massless,
i.e.\ $m_2 =0$, we find that
\begin{align*}
a_t &= m_{\ell}^2 + m_P^2 + \left(s + m_{\ell}^2\right) \left(m_B^2 - m_P^2 -
s\right)/(2s),\\%
a_u &= m_P^2 + \left(s - m_{\ell}^2\right) \left(m_B^2 - m_P^2 -
s\right)/(2s),\\%
b &= \left(s-m_{\ell}^2\right) \sqrt{\lambda\left(s, m_B^2, m_P^2\right)}/(2s),
\end{align*}
where $m_{\ell}$, $m_P$ and $m_B$ denote the masses of the charged lepton
$\ell^-$ and of the mesons $P$ and $B$, respectively. Substituting this information into
Eqs.~\eqref{eq:SM-C012} and \eqref{eq:gen-angular-dist} we get,
\begin{equation}\label{eq:B2Plnu-dist-gen}
\frac{d^2\Gamma^{\textrm{SM}}}{ds \, d\cos\theta} = \frac{b\sqrt{s} \left(
C_0^{\textrm{SM}} + C_1^{\textrm{SM}} \cos\theta + C_2^{\textrm{SM}}
\cos^2\theta \right)}{128 \, \pi^3 \, m_B^2 \left(m_B^2 - m_P^2 + s \right)},
\end{equation}
where
\begin{subequations}\label{eq:C012-in-B2Plnu}
\begin{align}
C_0^{\text{SM}} =& 4 \Bigg( \modulus{\left(F_V^+\right)_{\text{SM}}}^2 \bigg(
\lambda\left(s, m_B^2, m_P^2\right) - m_{\ell}^2 \left(s - 2 \left(m_B^2 -
m_P^2\right) \right) \nonumber\\%
& \hspace{2cm} - m_{\ell}^4 \left(m_B^2 - m_P^2\right)^2/s^2 \bigg)
\nonumber\\%
& \quad + \modulus{\left(F_V^-\right)_{\text{SM}}}^2 m_{\ell}^2 \left( s -
m_{\ell}^2 \right) \nonumber\\%
& \quad + 2 \Re\left( \left(F_V^+\right)_{\text{SM}}
\left(F_V^-\right)_{\text{SM}}^* \right) m_{\ell}^2 \left(m_B^2 - m_P^2\right)
\left(1- \frac{m_{\ell}^2}{s}\right) \Bigg),\\%
C_1^{\text{SM}} =& 16 m_{\ell}^2 b \Bigg( \left(\frac{m_B^2 - m_P^2}{s}\right)
\modulus{\left(F_V^+\right)_{\text{SM}}}^2 + \Re\left(
\left(F_V^+\right)_{\text{SM}} \left(F_V^-\right)_{\text{SM}}^* \right)
\Bigg),\\%
C_2^{\text{SM}} =& - 16 b^2 \modulus{\left(F_V^+\right)_{\text{SM}}}^2 .
\end{align}
\end{subequations}
It is important to notice that in Eq.~\eqref{eq:C012-in-B2Plnu} we have many
terms in the expression for $C_0^{\textrm{SM}}$ that are proportional to some
power of the lepton mass, while the entire $C_1^{\textrm{SM}}$ is directly
proportional to $m_{\ell}^2$. If we compare the $m_{\ell}$-dependent and
$m_{\ell}$-independent contributions in $C_0^{\textrm{SM}}$ we find that the
dependent terms are suppressed by a factor of
$\mathcal{O}\left(2m_{\ell}^2/m_B^2\right)$, which is roughly $8\times 10^{-4}$
for the muon and $2\times 10^{-8}$ for the electron. Thus we can neglect these
$m_{\ell}$-dependent terms in comparison with the mass-independent terms.
Equivalently, we can consider the charged leptons such as electron and muon as
massless fermions, when compared with the $B$ meson mass scale. In the limit
$m_{\ell} \to 0$ the expression for angular distribution as given in
Eq.~\eqref{eq:B2Plnu-dist-gen} becomes much simpler,
\begin{equation}
\frac{d^2\Gamma^{\text{SM}}}{ds \, d\cos\theta} = \frac{b^3\sqrt{s}}{8 \, \pi^3
\, m_B^2 \left( m_B^2 - m_P^2 + s \right)}
\modulus{\left(F_V^+\right)_{\text{SM}}}^2 \sin^2\theta.
\end{equation}
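The suppression estimates quoted above are easy to check numerically. The sketch below supplies approximate PDG mass values (in GeV) as inputs; these numbers are our own assumptions, not taken from the text:

```python
# Numerical check of the lepton-mass suppression factor 2*m_l^2/m_B^2
# quoted above; masses in GeV (approximate PDG values, supplied here).
m_B = 5.279      # B meson mass
m_mu = 0.10566   # muon mass
m_e = 0.000511   # electron mass

supp_mu = 2 * m_mu**2 / m_B**2
supp_e = 2 * m_e**2 / m_B**2

print(f"muon suppression:     {supp_mu:.1e}")  # roughly 8e-4
print(f"electron suppression: {supp_e:.1e}")   # roughly 2e-8
```

Both values reproduce the orders of magnitude quoted in the text, justifying the massless-lepton limit taken above.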
Independent of the expression for $\left(F_V^+\right)_{\text{SM}}$, it is easy
to show that the normalized angular distribution is given by,
\begin{equation}\label{eq:SM-Dist-B2Plnu-massless}
\frac{1}{\Gamma^{\text{SM}}} \frac{d\Gamma^{\text{SM}}
}{d\cos\theta} = \frac{3}{4} \sin^2\theta,
\end{equation}
which implies that $T_0 = 3/4 = -T_2$, $T_1 = 0$. Since the distribution of
events in the Dalitz plot is symmetric under $\cos\theta \leftrightarrow -
\cos\theta$, we have $N_I = N_{IV}$ and $N_{II} = N_{III}$ which automatically
satisfies the condition $T_1 = 0$. If we solve $T_0 = 3/4 = -T_2$, we find that
the number of events in the different segments of the Dalitz plot (equivalently
the number of events in the four distinct bins of $\cos\theta$) are related to
one another by
\begin{equation}\label{eq:SM-bins-B2Plnu}
\frac{N_I}{N_{II}} = \frac{5}{11} = \frac{N_{IV}}{N_{III}}.
\end{equation}
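This ratio can be reproduced with a few lines of exact rational arithmetic. The sketch below assumes the four equal-width $\cos\theta$ bins, with $N_I$ the bin nearest $\cos\theta = 1$ (our reading of the Dalitz-plot segments):

```python
from fractions import Fraction

# Verify N_I/N_II = 5/11 by integrating the SM distribution (3/4) sin^2(theta)
# over the cos(theta) bins [1/2, 1] (N_I) and [0, 1/2] (N_II).
def F(c):
    # antiderivative of (3/4)(1 - c^2)
    c = Fraction(c)
    return Fraction(3, 4) * (c - c**3 / 3)

N_I = F(1) - F(Fraction(1, 2))
N_II = F(Fraction(1, 2)) - F(0)

print(N_I / N_II)  # -> 5/11
```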
Any significant deviation from this would imply the presence of NP effects. To
illustrate the effects of NP on the angular distribution in these types of
decays, we consider two simple and specific NP possibilities. Here we assume the
charged lepton to be massless ($m_{\ell}=0$) and the undetected fermion
($f^{\textrm{\ding{55}}}$) to have mass $m\neq 0$.
\paragraph{\textbf{Scalar type new physics:}} Considering the simplest scalar type NP
scenario, with $F_S \neq 0$, $F_P = F_V^{\pm} = F_A^{\pm} = F_{T_1} = F_{T_2} =
0$, we get
\begin{align*}
C_0^{\text{NP}} =& 2 \left(s - m^2\right) \modulus{F_S}^2,\\%
C_1^{\text{NP}} =& 0 = C_2^{\text{NP}}.
\end{align*}
In other words, there is no angular dependence at all here, i.e.\
\begin{equation*}
\frac{d^2\Gamma^{\text{NP}}}{ds \, d\cos\theta} = \frac{b\sqrt{s}}{64 \, \pi^3 \,
m_B^2 \left(m_B^2 - m_P^2 + s \right)} \left(s - m^2\right) \modulus{F_S}^2,
\end{equation*}
where $b = \left(s-m^2\right) \sqrt{\lambda\left(s,m_B^2,m_P^2\right)}/(2s)$ and
$m^2 \leqslant s \leqslant \left(m_B-m_P\right)^2$. If we do the integration
over $s$, then the normalized angular distribution is given by,
\begin{equation*}
\frac{1}{\Gamma^{\text{NP}}} \frac{d\Gamma^{\text{NP}}}{d\cos\theta} =
\frac{1}{2}.
\end{equation*}
In fact, if such new physics were present, our observation of $B \to P +
\ell^- + f^{\textrm{\ding{55}}}$ would have the following angular distribution,
\begin{equation*}
\frac{d\Gamma}{d\cos\theta} = \Gamma^{\text{SM}} \left(\frac{3}{4} \sin^2\theta
+ \frac{1}{2} \epsilon_0 \right),
\end{equation*}
where we have parametrized the new physics contribution in terms of $\epsilon_0$,
\begin{equation*}
\epsilon_0 = \Gamma^{\text{NP}}/\Gamma^{\text{SM}}.
\end{equation*}
Doing integration over $\cos\theta$ we get,
\begin{equation*}
\Gamma = \Gamma^{\text{SM}} \left(1+\epsilon_0\right) = \Gamma^{\text{SM}} + \Gamma^{\text{NP}}.
\end{equation*}
This implies
\begin{equation}\label{eq:Scalar-NP-Dist-B2Plnu}
\frac{1}{\Gamma} \frac{d\Gamma}{d\cos\theta} = \frac{3\sin^2\theta + 2
\epsilon_0}{4 \left(1+\epsilon_0\right)}.
\end{equation}
This angular distribution is shown in Fig.~\ref{fig:Scalar-NP-B2Plnu} where we
have varied $\epsilon_0$ in the range $[0,1]$, i.e.\ we have allowed the
possibility that the NP contribution might be as large as that of the SM. It is
interesting to find that in Fig.~\ref{fig:Scalar-NP-B2Plnu} at two specific
values of $\cos\theta$ there is no difference between the standard model
prediction alone and the combination of standard model and new physics
contributions. These two points can be easily obtained by equating
Eqs.~\eqref{eq:SM-Dist-B2Plnu-massless} and \eqref{eq:Scalar-NP-Dist-B2Plnu},
and then solving for $\cos\theta$ gives us
\begin{equation}
\cos\theta = \pm 1/\sqrt{3} \approx \pm 0.57735.
\end{equation}
This corresponds to the angles $\theta \approx 54.74^{\circ}$ and $125.26^{\circ}$. At these two
points in $\cos\theta$, the normalized uni-angular distribution always has the
value $0.5$, even if there is some scalar new physics contributing to our
process under consideration.
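This invariant point is quickly confirmed numerically; the sample $\epsilon_0$ values and tolerance below are arbitrary choices:

```python
import math

# At cos(theta) = ±1/sqrt(3) both the SM curve (3/4) sin^2(theta) and the
# scalar-NP distribution (3 sin^2 + 2 eps0)/(4(1+eps0)) equal 0.5.
def sm_dist(c):
    return 0.75 * (1 - c**2)

def scalar_np_dist(c, eps0):
    return (3 * (1 - c**2) + 2 * eps0) / (4 * (1 + eps0))

c_star = 1 / math.sqrt(3)
assert abs(sm_dist(c_star) - 0.5) < 1e-12
for eps0 in (0.0, 0.1, 0.5, 1.0):
    for c in (c_star, -c_star):
        assert abs(scalar_np_dist(c, eps0) - 0.5) < 1e-12
print("invariant point confirmed at cos(theta) = ±1/sqrt(3)")
```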
\begin{figure}[hbtp]
\centering%
\includegraphics[scale=0.8]{fig_Scalar-NP.pdf}%
\caption{Normalized uni-angular distribution showing the effect of a scalar new
physics contribution to $B \to P \ell^- f^{\textrm{\ding{55}}}$, where we have neglected
the mass of the charged lepton $\ell =e,\mu$. The same plot also describes the
effect of a scalar new physics contribution to $B \to K f_1^{\textrm{\ding{51}}}
f_2^{\textrm{\ding{55}}}$, considering the $m_1 = m_2$ case only.}%
\label{fig:Scalar-NP-B2Plnu}
\end{figure}
From Eq.~\eqref{eq:Scalar-NP-Dist-B2Plnu} it is clear that despite the scalar NP
effect, the distribution is still symmetric under $\cos\theta \leftrightarrow
-\cos\theta$, and solving for the number of events in the four segments of the
Dalitz plot (equivalently the four $\cos\theta$ bins) we get,
\begin{equation}\label{eq:Scalar-NP}
\frac{N_I}{N_{II}} = \frac{5+8\epsilon_0}{11 + 8\epsilon_0} =
\frac{N_{IV}}{N_{III}}.
\end{equation}
It is easy to see that when $\epsilon_0=0$ we get back the SM prediction of
Eq.~\eqref{eq:SM-bins-B2Plnu} as expected.
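Equation~\eqref{eq:Scalar-NP} itself can be reproduced by direct integration; the sketch below again assumes $N_I$ is the $\cos\theta \in [1/2, 1]$ bin:

```python
from fractions import Fraction

# Integrate (3 sin^2(theta) + 2 eps0)/(4(1+eps0)) over the cos(theta) bins
# and compare with (5 + 8 eps0)/(11 + 8 eps0).
def bin_count(a, b, eps0):
    a, b, eps0 = map(Fraction, (a, b, eps0))
    prim = lambda c: 3 * c - c**3 + 2 * eps0 * c  # antiderivative of numerator
    return (prim(b) - prim(a)) / (4 * (1 + eps0))

for eps0 in (0, Fraction(1, 4), Fraction(1, 2), 1):
    ratio = bin_count(Fraction(1, 2), 1, eps0) / bin_count(0, Fraction(1, 2), eps0)
    assert ratio == (5 + 8 * eps0) / Fraction(11 + 8 * eps0)
print("N_I/N_II = (5 + 8 eps0)/(11 + 8 eps0) verified")
```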
\paragraph{\textbf{Tensor type new physics:}}
\begin{figure*}[hbtp]
\centering%
\includegraphics[scale=0.8]{fig_B2Plnu-Tensor-NP.pdf}%
\caption{Normalized uni-angular distribution showing the effect of a tensor new
physics contribution to $B \to P \ell^- f^{\textrm{\ding{55}}}$, where we have neglected
the mass of the charged lepton $\ell=e,\mu$. This set of plots can also
describe the effect of a vector new physics contribution to $B \to K
f_1^{\textrm{\ding{51}}} f_2^{\textrm{\ding{55}}}$ when the two final-state fermions have
equal masses.}%
\label{fig:Tensor-NP}
\end{figure*}
Let us consider a tensor type of new physics
possibility in which $F_{T_1} \neq 0$ and all other form factors are zero. In
such a case we get,
\begin{align*}
C_0^{\textrm{NP}} &=2 m^2 \left(s-m^2\right)
\frac{\lambda\left(s,m_B^2,m_P^2\right)}{s} \modulus{F_{T_1}}^2,\\%
C_1^{\textrm{NP}} &=0,\\%
C_2^{\textrm{NP}} &= 2 \left(s-m^2\right)^2 \frac{\lambda\left(s, m_B^2,
m_P^2\right)}{s} \modulus{F_{T_1}}^2.
\end{align*}
It is easy to notice that in the limit $m \to 0$ we have $C_0 \to 0$ but $C_2
\not\to 0$. If we do the integration over $s$, then the normalized angular
distribution is given by,
\begin{equation}
\frac{1}{\Gamma^{\textrm{NP}}} \frac{d\Gamma^{\textrm{NP}}}{d\cos\theta} =
T_0^{\textrm{NP}} + T_2^{\textrm{NP}} \cos^2\theta,
\end{equation}
where $T_2^{\textrm{NP}} = 3\left(1/2-T_0^{\textrm{NP}}\right)$ and
$T_0^{\textrm{NP}} = 3c_0/\left(6c_0 + 2c_2\right)$ with
\begin{equation*}
c_j = \int_{m^2}^{\left(m_B-m_P\right)^2} \frac{b\sqrt{s} \
C_j^{\textrm{NP}}}{128 \pi^3 m_B^2 \left(m_B^2 - m_P^2 + s\right)} ds.
\end{equation*}
Thus in the limit $m\to 0$ we have $T_0^{\textrm{NP}} = 0$. If such new physics were
present, our observation of $B \to P \ell^- f^{\textrm{\ding{55}}}$ would have the
following angular distribution,
\begin{equation}
\frac{d\Gamma}{d\cos\theta} = \Gamma^{\text{SM}} \left(\frac{3}{4} \sin^2\theta + \left(T_0^{\textrm{NP}} + 3 \left(\frac{1}{2} - T_0^{\textrm{NP}}\right) \cos^2\theta\right) \epsilon \right),
\end{equation}
where $\epsilon=\Gamma^{\textrm{NP}}/\Gamma^{\textrm{SM}}$ is the NP parameter
which can vary in the range $\left[0,1\right]$ denoting the possibility that the
NP contribution can be as large as that of the SM, and $T_0^{\textrm{NP}}$ acts
as a free parameter here which can vary in the range $\left[0,3/4\right]$ in
which $d\Gamma^{\textrm{NP}}/d\cos\theta \geqslant 0$ for all values of
$\cos\theta$. Doing integration over $\cos\theta$ we get $\Gamma =
\Gamma^{\textrm{SM}} \left(1 + \epsilon\right) = \Gamma^{\textrm{SM}} +
\Gamma^{\textrm{NP}}$. This implies
\begin{equation}\label{eq:Tensor-NP-Dist-B2Plnu}
\frac{1}{\Gamma} \frac{d\Gamma}{d\cos\theta} = \frac{3 + 4 T_0^{\textrm{NP}}
\epsilon - 3 \left(4T_0^{\textrm{NP}} \epsilon -2\epsilon +
1\right)\cos^2\theta}{4\left(1+\epsilon\right)}.
\end{equation}
This angular distribution is shown in Fig.~\ref{fig:Tensor-NP} in which we have
considered nine values of $T_0^{\textrm{NP}}$ and varied $\epsilon$ in the range
$[0,1]$. It is clearly evident in Fig.~\ref{fig:Tensor-NP} that
$T_0^{\textrm{NP}} = 3/4$ case is always indistinguishable from the SM case, as
it should be. Just like the scalar-type new physics case, we observe that there
are two values of $\cos\theta$ at which there is no difference between the SM
prediction alone and the combination of SM and NP contributions. These two
points can be easily computed by equating
Eqs.~\eqref{eq:SM-Dist-B2Plnu-massless} and \eqref{eq:Tensor-NP-Dist-B2Plnu},
and then solving for $\cos\theta$ we once again find that,
\begin{equation}
\cos\theta = \pm 1/\sqrt{3} \approx \pm 0.57735,
\end{equation}
which corresponds to the angles $\theta \approx 54.74^{\circ}$ and $125.26^{\circ}$. At these two
points in $\cos\theta$, the normalized uni-angular distribution always has the
value $0.5$, even if there is some tensor new physics contributing to our
process under consideration. It should be noted that these are also the same
points where the scalar new physics contribution shows similar effect.
It is also easy to notice that the angular distribution as given in
Eq.~\eqref{eq:Tensor-NP-Dist-B2Plnu} is symmetric under $\cos\theta
\leftrightarrow -\cos\theta$, and solving for the number of events in the four
segments of the Dalitz plot (equivalently the four $\cos\theta$ bins) we get,
\begin{equation}
\frac{N_{I}}{N_{II}} = \frac{5 + 2 \epsilon \left(7 - 6
T_0^{\textrm{NP}}\right)}{11 + 2 \epsilon \left(1 + 6 T_0^{\textrm{NP}}\right)}
= \frac{N_{IV}}{N_{III}}.
\end{equation}
It is easy to see that when $\epsilon=0$ or $T_0^{\textrm{NP}}=3/4$ we get back
the SM prediction of Eq.~\eqref{eq:SM-bins-B2Plnu} as expected.
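The bin ratio above follows from integrating Eq.~\eqref{eq:Tensor-NP-Dist-B2Plnu} over the same $\cos\theta$ bins; a short exact-arithmetic check, with arbitrary sample values of $\epsilon$ and $T_0^{\textrm{NP}}$:

```python
from fractions import Fraction

# Verify N_I/N_II = (5 + 2e(7 - 6 T0))/(11 + 2e(1 + 6 T0)) for the tensor-NP
# distribution [3 + 4 T0 e - 3(4 T0 e - 2e + 1) c^2] / (4(1+e)).
def prim_t(c, e, T0):
    # antiderivative of the numerator (the common 1/(4(1+e)) cancels in ratios)
    return (3 + 4 * T0 * e) * c - (4 * T0 * e - 2 * e + 1) * c**3

def tensor_ratio(e, T0):
    half, one, zero = Fraction(1, 2), Fraction(1), Fraction(0)
    N_I = prim_t(one, e, T0) - prim_t(half, e, T0)
    N_II = prim_t(half, e, T0) - prim_t(zero, e, T0)
    return N_I / N_II

for e in (Fraction(1, 4), Fraction(1, 2), Fraction(1)):
    for T0 in (Fraction(0), Fraction(1, 2), Fraction(3, 4)):
        assert tensor_ratio(e, T0) == (5 + 2 * e * (7 - 6 * T0)) / Fraction(11 + 2 * e * (1 + 6 * T0))
# T0 = 3/4 reproduces the SM value 5/11, as noted in the text
assert tensor_ratio(Fraction(1), Fraction(3, 4)) == Fraction(5, 11)
print("tensor-NP bin ratio verified")
```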
Finally we analyze new physics possibilities in the decays belonging to the GS2
category. Due to the very nature of the GS2 decay modes, the following
discussion of NP effects presumes the use of advanced detector technology to
obtain angular information.
\subsubsection{New physics effects in the GS2 decay \texorpdfstring{$B \to K f_1^{\textrm{\ding{51}}} f_2^{\textrm{\ding{55}}}$}{B -> K + f1V + f2X}}
As mentioned before, the GS2 decay modes are originally part of S3, i.e.\ it is
extremely difficult to get angular distribution for these cases unless we
innovate on detector technology. Here we consider such a decay mode $B \to K
f_1^{\textrm{\ding{51}}} f_2^{\textrm{\ding{55}}}$ in which both $f_1$ and $f_2$ are neutral
fermions which have, until now, evaded all our attempts to detect them near
their place of origin. With displaced-vertex detectors or some other advanced
detector we could bring at least one of these fermions (say $f_1$) under the
purview of experimental study and measure its 4-momentum or angular
information. The missing fermion ($f_2$ in our example here) may have flown in
a direction along which there is no detector coverage. To increase the
sample size we should include $B \to K f_1^{\textrm{\ding{55}}} f_2^{\textrm{\ding{51}}}$ events
also, provided we know how to ascertain the particle or anti-particle nature of
$f_1$ and $f_2$. To illustrate this point, let us consider the possibility $f_1
f_2 = \nu_S \ensuremath{\overline{\nu}}_S$. In a displaced vertex detector if we see $\pi^+ \mu^-$
events, they can be attributed to the decay of $\nu_S$ and similarly $\pi^-
\mu^+$ events would appear from the decay of $\ensuremath{\overline{\nu}}_S$. In this case, we can
infer the angle $\theta$ by knowing the 4-momentum of either $f_1 = \nu_S$ or
$f_2 = \ensuremath{\overline{\nu}}_S$ (see Fig.~\ref{fig:GJ-frame}). If we find that both $f_1$ and
$f_2$ leave behind their signature tracks in the detector (i.e.\
$f_1^{\textrm{\ding{51}}} f_2^{\textrm{\ding{51}}}$) it would be the most ideal situation. But
as we have already stressed before, measuring 4-momenta of either of the
fermions would suffice for our angular studies.
In the SM the only contribution to $B \to K f_1^{\textrm{\ding{51}}} f_2^{\textrm{\ding{55}}}$
and $B \to K f_1^{\textrm{\ding{55}}} f_2^{\textrm{\ding{51}}}$ would come from $B \to K
\nu_{\ell} \ensuremath{\overline{\nu}}_{\ell}$, whereas in the case of NP we have a number of
possibilities that includes sterile neutrinos, dark matter particles, or some
long lived particles in the final state, $f_1 f_2 = \nu_{\ell} \ensuremath{\overline{\nu}}_S$, $\nu_S
\ensuremath{\overline{\nu}}_{\ell}$, $\nu_S \ensuremath{\overline{\nu}}_S$, $f^{\textrm{DM}} \bar{f}^{\textrm{DM}}$,
$f_1^{\textrm{DM}} f_2^{\textrm{DM}}$, $f^{\textrm{LLP}}
\bar{f}^{\textrm{LLP}}$, $f_1^{\textrm{LLP}} f_2^{\textrm{LLP}}$
etc.\footnote{In addition to the new physics possibilities considered here,
there can be additional contributions to the $B \to K + \text{`invisible'}$
decay, e.g.\ from SM singlet scalars contributing to the `invisible' part as
discussed in Ref.~\cite{Kim:2009qc}. As is evident, our analysis is instead
focused on a pair of fermions contributing to the `invisible' part.} One can
also consider non-standard neutrino interactions also contributing in these
cases. To demonstrate our methodology, we shall analyze only a subset of these
various NP possibilities in which $f_1$ and $f_2$ have the same mass, i.e.\ $m_1
= m_2 = m$ (say), as this greatly simplifies the calculation. As we shall
illustrate below, by analyzing the angular distribution we can not only detect
the presence of NP but also ascertain whether it is, for example, of scalar or
vector type.
Before we turn to new physics contributions, let us analyze the SM contribution
$B \to K \nu_{\ell} \ensuremath{\overline{\nu}}_{\ell}$. Here only vector and axial-vector currents
contribute, and $F_A^{\pm} = - F_V^{\pm}$. Also, the neutrino and
anti-neutrino are massless, i.e.\ $m_1 = 0 = m_2$, which implies $a_t = a_u =
\tfrac{1}{2} \left(m_B^2 + m_K^2 -s\right)$ and $b = \tfrac{1}{2}
\sqrt{\lambda\left(s,m_B^2,m_K^2\right)}$, where $m_B$ and $m_K$ denote the
masses of the $B$ and $K$ mesons respectively. Substituting this information
into Eqs.~\eqref{eq:SM-C012} and Eq.~\eqref{eq:gen-angular-dist} we get,
\begin{equation}
\frac{d^2\Gamma^{\text{SM}}}{ds \, d\cos\theta} = \frac{b^3\sqrt{s}}{8 \, \pi^3 \, m_B^2
\left( m_B^2 - m_K^2 + s \right)} \modulus{\left(F_V^+\right)_{\text{SM}}}^2
\sin^2\theta.
\end{equation}
Irrespective of the expression for $\left(F_V^+\right)_{\text{SM}}$, it can be
easily shown that the normalized angular distribution is given by,
\begin{equation}\label{eq:SM-Dist-B2Knn}
\frac{1}{\Gamma^{\text{SM}}} \frac{d\Gamma^{\text{SM}}
}{d\cos\theta} = \frac{3}{4} \sin^2\theta,
\end{equation}
which implies that $T_0 = 3/4 = -T_2$, $T_1 = 0$. Following the same logic as
the one given after Eq.~\eqref{eq:SM-Dist-B2Plnu-massless}, we find that the
number of events in the different segments of the Dalitz plot (equivalently the
number of events in the four distinct bins of $\cos\theta$) are related to one
another by,
\begin{equation}\label{eq:SM-bins}
\frac{N_I}{N_{II}} = \frac{5}{11} = \frac{N_{IV}}{N_{III}}.
\end{equation}
This sets the stage for us to explore (i) a scalar type and (ii) a vector type
of NP possibility, with final fermions for which $m_1 = m_2 = m \neq 0$.
\paragraph{\textbf{Scalar type new physics:}}
Once again we consider the simplest scalar type NP scenario, with $F_S \neq 0$,
and other form factors being zero. This leads us to,
\begin{align*}
C_0^{\text{NP}} =& 2 \left(s - 4m^2\right) \modulus{F_S}^2,\\%
C_1^{\text{NP}} =& 0 = C_2^{\text{NP}}.
\end{align*}
In other words, there is no angular dependence at all here, i.e.\
\begin{equation}
\frac{d^2\Gamma^{\text{NP}}}{ds \, d\cos\theta} = \frac{b\sqrt{s}}{64 \, \pi^3 \,
m_B^2 \left(m_B^2 - m_K^2 + s \right)} \left(s - 4m^2\right) \modulus{F_S}^2,
\end{equation}
where $b = \left(\sqrt{\left(s-4m^2\right)} \,
\sqrt{\lambda\left(s,m_B^2,m_K^2\right)}\right)/(2\sqrt{s})$ and $4m^2 \leqslant
s \leqslant \left(m_B-m_K\right)^2$. If we do the integration over $s$, then
the normalized angular distribution for the NP contribution alone is given by,
\begin{equation*}
\frac{1}{\Gamma^{\text{NP}}} \frac{d\Gamma^{\text{NP}}}{d\cos\theta} =
\frac{1}{2}.
\end{equation*}
Assuming such NP contributes in addition to the SM, the experimentally
observed angular distribution can be written as,
\begin{equation*}
\frac{d\Gamma}{d\cos\theta} = \Gamma^{\text{SM}} \left(\frac{3}{4} \sin^2\theta + \frac{1}{2} \epsilon_0 \right),
\end{equation*}
where $\epsilon_0 = \Gamma^{\text{NP}}/\Gamma^{\text{SM}}$ is the new physics
parameter which can vary in the range $\left[0,1\right]$ if we assume the NP
contribution to be as large as that from the SM. Doing integration over
$\cos\theta$ we get, $\Gamma = \Gamma^{\text{SM}} \left(1+\epsilon_0\right) =
\Gamma^{\text{SM}} + \Gamma^{\text{NP}}$. This implies
\begin{equation}\label{eq:Scalar-NP-Dist-B2Knn}
\frac{1}{\Gamma} \frac{d\Gamma}{d\cos\theta} = \frac{3\sin^2\theta + 2
\epsilon_0}{4 \left(1+\epsilon_0\right)}.
\end{equation}
Since Eq.~\eqref{eq:Scalar-NP-Dist-B2Knn} is identical to
Eq.~\eqref{eq:Scalar-NP-Dist-B2Plnu}, the angular distribution for this case is
also as shown in Fig.~\ref{fig:Scalar-NP-B2Plnu} where we have varied
$\epsilon_0$ in the range $[0,1]$. Once again at two specific values of
$\cos\theta$, namely $\cos\theta = \pm 1/\sqrt{3} \approx \pm 0.57735$
corresponding to the angles $\theta \approx 54.74^{\circ}$ and $125.26^{\circ}$, there is no
difference between the standard model prediction alone and the combination of
standard model and scalar new physics contribution. At these two points in
$\cos\theta$, the normalized uni-angular distribution always has the value
$0.5$, even if there is some scalar new physics contributing to our process
under consideration.
Since the angular distribution as shown in Eq.~\eqref{eq:Scalar-NP-Dist-B2Knn}
is fully symmetric under $\cos\theta \leftrightarrow -\cos\theta$, the number of
events in the four segments of the Dalitz plot (equivalently in the four
$\cos\theta$ bins) satisfy the following relationship,
\begin{equation}\label{eq:Scalar-NP-bins}
\frac{N_I}{N_{II}} = \frac{5+8\epsilon_0}{11 + 8\epsilon_0} =
\frac{N_{IV}}{N_{III}}.
\end{equation}
It is easy to see that $\epsilon_0=0$ gives the SM prediction of
Eq.~\eqref{eq:SM-bins} as expected.
\paragraph{\textbf{Vector type new physics:}}
Let us now discuss another new physics scenario, such as the case of a
flavor-changing $Z'$ or a dark photon $\gamma_D$ giving rise to the final pair
of fermions $f_1 f_2$. We assume that for this kind of new physics scenario,
$F_V^+ = F_V^{\text{NP}} \neq 0$ and other form factors are zero. For this kind
of new physics we get,
\begin{align*}
C_0^{\text{NP}} =& 2 \modulus{F_V^{\text{NP}}}^2
\lambda\left(s,m_B^2,m_K^2\right),\\%
C_1^{\text{NP}} =& 0,\\%
C_2^{\text{NP}} =& -8 b^2 \modulus{F_V^{\text{NP}}}^2,
\end{align*}
where $b = \left(\sqrt{\left(s-4m^2\right)} \,
\sqrt{\lambda\left(s,m_B^2,m_K^2\right)}\right)/\left(2\sqrt{s}\right)$ and
$4m^2 \leqslant s \leqslant \left(m_B-m_K\right)^2$. The angular distribution
for the NP contribution alone can, therefore, be written in terms of
$T_0^{\textrm{NP}}$ and $T_2^{\textrm{NP}}$, which are directly proportional to
$C_0^{\text{NP}}$ and $C_2^{\text{NP}}$ respectively. This would lead us to
describe the complete angular distribution in terms of $T_0^{\text{NP}}$ and
$\epsilon=\Gamma^{\textrm{NP}}/\Gamma^{\textrm{SM}}$ using
Eq.~\eqref{eq:Tensor-NP-Dist-B2Plnu}, and the angular distribution would look
like the one shown in Fig.~\ref{fig:Tensor-NP}. However, it is possible to
describe the effects of NP on the angular distribution using a different set of
parameters as well. For this we start afresh with the angular distribution for
the NP contribution alone, which in our case is given by
\begin{equation*}
\frac{d^2\Gamma^{\text{NP}}}{ds \, d\cos\theta} = \frac{b \modulus{F_V^{\text{NP}}}^2 \lambda\left(s,m_B^2,m_K^2\right) \, \left( s \sin^2\theta + 4m^2 \cos^2\theta \right)}{64 \, \pi^3 \, m_B^2 \left(m_B^2 - m_K^2
+ s \right) \sqrt{s}}.
\end{equation*}
Doing integration over $\cos\theta$ we obtain,
\begin{equation*}
\frac{d\Gamma^{\text{NP}}}{ds} = \frac{b \modulus{F_V^{\text{NP}}}^2
\lambda\left(s,m_B^2,m_K^2\right)}{64 \, \pi^3 \, m_B^2 \left(m_B^2 - m_K^2 + s
\right) \sqrt{s}} \left( \frac{4s + 8m^2}{3} \right).
\end{equation*}
Therefore, the normalized uni-angular distribution is given by
\begin{equation}\label{eq:Vector-NP-Dist-s-B2Knn}
\frac{1}{d\Gamma^{\text{NP}}/ds} \frac{d^2\Gamma^{\text{NP}}}{ds \, d\cos\theta} = \frac{3}{4} \left(\frac{s \sin^2\theta + 4m^2 \cos^2\theta}{s + 2m^2}\right).
\end{equation}
It is interesting to compare this with the standard model expression,
\begin{equation}\label{eq:SM-Dist-s-B2Knn}
\frac{1}{d\Gamma^{\text{SM}}/ds} \frac{d^2\Gamma^{\text{SM}}}{ds \, d\cos\theta} = \frac{3}{4} \sin^2\theta.
\end{equation}
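As a quick consistency check, Eq.~\eqref{eq:Vector-NP-Dist-s-B2Knn} integrates to unity over $\cos\theta \in [-1,1]$ for any $s$ and $m$; a sketch using exact rational arithmetic, where the sample $(s, m^2)$ values are arbitrary:

```python
from fractions import Fraction

# Integral over cos(theta) of (3/4)(s sin^2 + 4 m^2 cos^2)/(s + 2 m^2),
# using the moments: integral of (1-c^2) is 4/3 and of c^2 is 2/3 on [-1, 1].
def integral(s, m2):
    s, m2 = Fraction(s), Fraction(m2)
    return Fraction(3, 4) * (s * Fraction(4, 3) + 4 * m2 * Fraction(2, 3)) / (s + 2 * m2)

for s, m2 in ((1, 0), (5, 1), (Fraction(7, 3), Fraction(1, 10))):
    assert integral(s, m2) == 1
print("normalized to 1 for all sampled (s, m^2)")
```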
\begin{figure*}[hbtp]
\centering%
\includegraphics[scale=0.8]{fig_B2Knunu-Vector-NP.pdf}%
\caption{Normalized uni-angular distribution showing the effect of a vector new
physics contribution to $B \to K f_1^{\textrm{\ding{51}}} f_2^{\textrm{\ding{55}}}$. }%
\label{fig:Vector-NP}
\end{figure*}
Since the range of $s$ is different in the SM and NP scenarios, we cannot
add Eqs.~\eqref{eq:Vector-NP-Dist-s-B2Knn} and \eqref{eq:SM-Dist-s-B2Knn}
directly. Carrying out the integration over $s$ we get,
\begin{equation*}
\frac{d\Gamma^{\text{NP}}}{d\cos\theta} = \frac{3}{4} \Big( \mathcal{S}
\sin^2\theta + \mathcal{C} \cos^2\theta \Big),
\end{equation*}
where
\begin{align*}
\mathcal{S} &= \int_{4m^2}^{(m_B-m_K)^2} \frac{d\Gamma^{\text{NP}}}{ds}
\left(\frac{s}{s+2m^2}\right) ds,\\%
\mathcal{C} &= \int_{4m^2}^{(m_B-m_K)^2} \frac{d\Gamma^{\text{NP}}}{ds}
\left(\frac{4m^2}{s+2m^2}\right) ds.
\end{align*}
Doing integration over $\cos\theta$ we get,
\begin{equation*}
\Gamma^{\text{NP}} = \mathcal{S} + \mathcal{C}/2,
\end{equation*}
and hence
\begin{equation*}
\frac{1}{\Gamma^{\text{NP}}} \frac{d\Gamma^{\text{NP}}}{d\cos\theta} = \frac{3
\left(\mathcal{S} \sin^2\theta + \mathcal{C} \cos^2\theta\right)}{2
(2\mathcal{S} + \mathcal{C})}.
\end{equation*}
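The relation $\Gamma^{\text{NP}} = \mathcal{S} + \mathcal{C}/2$ used above is a one-line moment integral; a quick check, with arbitrary sample values standing in for $\mathcal{S}$ and $\mathcal{C}$:

```python
from fractions import Fraction

# Integrate (3/4)(S sin^2(theta) + C cos^2(theta)) over cos(theta) in [-1, 1]
# and compare with S + C/2.
def gamma_np(S, C):
    S, C = Fraction(S), Fraction(C)
    # moments on [-1, 1]: integral of (1 - c^2) is 4/3, of c^2 it is 2/3
    return Fraction(3, 4) * (S * Fraction(4, 3) + C * Fraction(2, 3))

for S, C in ((1, 0), (2, 3), (Fraction(5, 7), Fraction(1, 3))):
    assert gamma_np(S, C) == Fraction(S) + Fraction(C) / 2
print("Gamma_NP = S + C/2 verified")
```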
For the SM contribution we know that
\begin{equation*}
\frac{1}{\Gamma^{\text{SM}}} \frac{d\Gamma^{\text{SM}}}{d\cos\theta} = \frac{3}{4} \sin^2\theta.
\end{equation*}
Now the uni-angular distribution for the process $B \to K f_1^{\textrm{\ding{51}}}
f_2^{\textrm{\ding{55}}}$ is given by,
\begin{equation*}
\frac{d\Gamma}{d\cos\theta} =\frac{3}{4} \Gamma^{\text{SM}} \left( \left(1 +\epsilon_s \right) \sin^2\theta + \epsilon_c \cos^2\theta \right),
\end{equation*}
where $\epsilon_s = \mathcal{S}/\Gamma^{\text{SM}}$ and $\epsilon_c =
\mathcal{C}/\Gamma^{\text{SM}}$, are the two parameters which describe the
effect of vector type NP. It is easy to check that,
\begin{equation*}
\Gamma = \frac{3}{4} \Gamma^{\text{SM}} \left( \frac{4}{3} \left(1+\epsilon_s\right) + \frac{2\epsilon_c}{3} \right) = \Gamma^{\text{SM}} + \Gamma^{\text{NP}}.
\end{equation*}
Therefore, the normalized angular distribution is given by,
\begin{equation}\label{eq:Vector-NP-Dist-B2Knn}
\frac{1}{\Gamma} \frac{d\Gamma}{d\cos\theta} = \frac{3 \left(1 + \epsilon_s\right) \sin^2\theta + 3\epsilon_c \cos^2\theta}{4 \left(1+\epsilon_s\right) + 2 \epsilon_c}.
\end{equation}
It is important to note that, if we consider the common mass of the fermions
$f_{1,2}$ to be zero, i.e.\ $m=0$, then $\epsilon_c = 0$, since $\mathcal{C} =0$. In such a case
the uni-angular distribution is given by,
\begin{equation*}
\frac{1}{\Gamma} \frac{d\Gamma}{d\cos\theta} = \frac{3}{4} \sin^2\theta, \qquad \left(\text{here } \epsilon_c=0\right)
\end{equation*}
which is the same as in the SM case. This is expected, since in the SM one
also has massless neutrinos and only vector and axial-vector currents
contribute.
Assuming that the NP contribution can be smaller than or as large as the SM
contribution, i.e.\ $0 \leqslant \Gamma^{\text{NP}} \leqslant
\Gamma^{\text{SM}}$, we get
\begin{equation*}
0 \leqslant \epsilon_s + \epsilon_c/2 \leqslant 1.
\end{equation*}
Thus $0 \leqslant \epsilon_s \leqslant 1$ implies that $0 \leqslant \epsilon_c
\leqslant 2(1-\epsilon_s)$.
In Fig.~\ref{fig:Vector-NP} we have considered nine values of $\epsilon_s$ and
varied $\epsilon_c$ in the range $[0,2\left(1-\epsilon_s\right)]$, to obtain the
uni-angular distribution. It is clearly evident in Fig.~\ref{fig:Vector-NP} that
$\epsilon_c=0$ case is always indistinguishable from the SM case, as it should
be. Just like the scalar-type new physics case, we observe that at $\cos\theta =
\pm 1/\sqrt{3} \approx \pm 0.57735$, there is no difference between the SM
prediction alone and the combination of SM and NP contributions.
It is also easy to notice that the angular distribution as given in
Eq.~\eqref{eq:Vector-NP-Dist-B2Knn} is symmetric under $\cos\theta
\leftrightarrow -\cos\theta$, and solving for the number of events in the four
segments of the Dalitz plot (equivalently the four $\cos\theta$ bins) we get,
\begin{equation}
\frac{N_{I}}{N_{II}} = \frac{5 \left(1+\epsilon_s\right) +
7\epsilon_c}{11\left(1+\epsilon_s\right) + \epsilon_c} = \frac{N_{IV}}{N_{III}}.
\end{equation}
It is easy to see that when $\epsilon_c = 0 = \epsilon_s$ we get back the SM
prediction of Eq.~\eqref{eq:SM-bins} as expected.
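The ratio above can again be reproduced by integrating Eq.~\eqref{eq:Vector-NP-Dist-B2Knn} over the $\cos\theta$ bins; a short exact-arithmetic sketch, with arbitrary sample values of $\epsilon_s$ and $\epsilon_c$:

```python
from fractions import Fraction

# Verify N_I/N_II = (5(1+es) + 7 ec)/(11(1+es) + ec) for the distribution
# [3(1+es)(1-c^2) + 3 ec c^2] / (4(1+es) + 2 ec).
def prim_v(c, es, ec):
    # antiderivative of the numerator (the constant denominator cancels)
    return 3 * (1 + es) * (c - c**3 / 3) + ec * c**3

def vector_ratio(es, ec):
    half, one, zero = Fraction(1, 2), Fraction(1), Fraction(0)
    N_I = prim_v(one, es, ec) - prim_v(half, es, ec)
    N_II = prim_v(half, es, ec) - prim_v(zero, es, ec)
    return N_I / N_II

for es in (Fraction(0), Fraction(1, 2), Fraction(1)):
    for ec in (Fraction(0), Fraction(1, 4), Fraction(1)):
        assert vector_ratio(es, ec) == (5 * (1 + es) + 7 * ec) / Fraction(11 * (1 + es) + ec)
assert vector_ratio(Fraction(0), Fraction(0)) == Fraction(5, 11)  # SM limit
print("vector-NP bin ratio verified")
```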
\subsection{Discussion}
It should be noted that our discussion of the types of NP contributions to the
S2 and GS2 modes, specifically $B \to P \ell^- f^{\textrm{\ding{55}}}$ and $B \to K
f_1^{\textrm{\ding{51}}} f_2^{\textrm{\ding{55}}}$ respectively, has been fully general. There
are no complications arising from hadronic form factors, since we have
considered normalized angular distributions. It should also be noted that our
analysis does not depend on how large or small the masses of the fermions
$f,f_{1,2}$ are, as long as they are non-zero.
It is also very interesting to note that both the scalar and tensor type of NP
for the $B \to P \ell^- f^{\textrm{\ding{55}}}$ decays and both the scalar and vector
types of NP for the $B \to K f_1^{\textrm{\ding{51}}} f_2^{\textrm{\ding{55}}}$ decays, exhibit
similar behaviour at $\cos\theta = \pm 1/\sqrt{3}$. To understand the real
reason behind this, we perform a fully general analysis. Let us assume that the
most general angular distribution for the processes $B \to P \ell^-
f^{\textrm{\ding{55}}}$ and $B \to K f_1^{\textrm{\ding{51}}} f_2^{\textrm{\ding{55}}}$ is given by
Eq.~\eqref{eq:Gen-ang-dist}. If we now equate this distribution to the SM
prediction of Eq.~\eqref{eq:SM-Dist-B2Plnu-massless} or
Eq.~\eqref{eq:SM-Dist-B2Knn}, and solve for $\cos\theta$ after substituting
Eq.~\eqref{eq:Def-T012} we find that,
\begin{equation}\label{eq:costheta-gen-sol}
\cos\theta = \frac{-c_1 \pm \sqrt{c_1^2 + 3
\left(c_0+c_2\right)^2}}{3\left(c_0+c_2\right)},
\end{equation}
where the $c_j$'s (for $j=0,1,2$) are obtained from Eq.~\eqref{eq:cj} with
appropriate substitutions of masses and form factors. Thus
Eq.~\eqref{eq:costheta-gen-sol} is the most general solution that we can get for
the two specific values of $\cos\theta$. However, let us look at the specific
case when $c_1=0$. Only in this situation do we get
\begin{equation}
\cos\theta = \pm 1/\sqrt{3}.
\end{equation}
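Numerically, Eq.~\eqref{eq:costheta-gen-sol} indeed collapses to $\pm 1/\sqrt{3}$ whenever $c_1 = 0$; the sample $(c_0, c_2)$ values below are arbitrary, chosen with $c_0 + c_2 > 0$:

```python
import math

# General solution cos(theta) = (-c1 +/- sqrt(c1^2 + 3(c0+c2)^2)) / (3(c0+c2)),
# checked to reduce to +/- 1/sqrt(3) when c1 = 0.
def roots(c0, c1, c2):
    disc = math.sqrt(c1**2 + 3 * (c0 + c2)**2)
    return (-c1 + disc) / (3 * (c0 + c2)), (-c1 - disc) / (3 * (c0 + c2))

for c0, c2 in ((1.0, 0.5), (0.3, 2.0), (5.0, -1.0)):
    r_plus, r_minus = roots(c0, 0.0, c2)
    assert abs(r_plus - 1 / math.sqrt(3)) < 1e-12
    assert abs(r_minus + 1 / math.sqrt(3)) < 1e-12
print("c1 = 0 gives cos(theta) = +/- 1/sqrt(3)")
```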
Now it is clear that since, in both the scalar and tensor type of NP
considerations for the $B \to P \ell^- f^{\textrm{\ding{55}}}$ decays and in both the
scalar and vector types of NP considerations for the $B \to K f_1^{\textrm{\ding{51}}}
f_2^{\textrm{\ding{55}}}$ decays, the angular distribution did not have any term
directly proportional to $\cos\theta$ (i.e.\ $c_1=0$), we obtained the same
$\cos\theta = \pm 1/\sqrt{3}$ result in both the cases. Therefore, if the
observed normalized uni-angular distribution does not have the value $0.5$ at
$\cos\theta = \pm 1/\sqrt{3}$, it implies that $c_1 \neq 0$.
Another interesting aspect of the two specific NP contributions we have
considered is that from Figs.~\ref{fig:Scalar-NP-B2Plnu}, \ref{fig:Tensor-NP}
and \ref{fig:Vector-NP} one can clearly see that the vector and tensor types of
NP can accommodate a much larger variation in the angular distribution than the
scalar type NP. However, there is also a certain part of the angular
distribution for which both scalar and vector (or tensor) types of NP give
identical results. This happens when
\begin{equation}
\epsilon_0 = \frac{3\epsilon_c}{2\left(1 + \epsilon_s - \epsilon_c\right)} = \frac{\epsilon \left( 3 - 4T_0^{\textrm{NP}} \right)}{1 - \epsilon \left( 2 - 4T_0^{\textrm{NP}} \right)}.
\end{equation}
In order for $\epsilon_0$ to vary in the range $[0,1]$ we find that (i) for
$0\leqslant \epsilon_s \leqslant 1$ we have $0 \leqslant \epsilon_c \leqslant
2\left(1+\epsilon_s\right)/5$ and (ii) for $0 \leqslant \epsilon \leqslant 1$ we
have $\frac{1}{2} \leqslant T_0^{\textrm{NP}} \leqslant \frac{3}{4}$. In these
specific regions, therefore, it would not be possible to clearly distinguish
whether scalar or vector or tensor type NP is contributing to our process under
consideration. Nevertheless, our approach can be used to constrain these NP
hypotheses without any hadronic uncertainties.
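The first equality above is straightforward to verify: with $\epsilon_0 = 3\epsilon_c/(2(1+\epsilon_s-\epsilon_c))$ the scalar and vector normalized distributions coincide point by point in $\cos\theta$. A sketch with arbitrary sample values:

```python
from fractions import Fraction

# With eps0 = 3 ec / (2(1 + es - ec)) the scalar-NP and vector-NP normalized
# distributions agree for every cos(theta).
def scalar_dist(c, eps0):
    return (3 * (1 - c**2) + 2 * eps0) / (4 * (1 + eps0))

def vector_dist(c, es, ec):
    return (3 * (1 + es) * (1 - c**2) + 3 * ec * c**2) / (4 * (1 + es) + 2 * ec)

es, ec = Fraction(1, 2), Fraction(1, 4)
eps0 = 3 * ec / (2 * (1 + es - ec))
for c in (Fraction(0), Fraction(1, 3), Fraction(-3, 4), Fraction(1)):
    assert scalar_dist(c, eps0) == vector_dist(c, es, ec)
print("scalar and vector NP distributions coincide; eps0 =", eps0)
```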
\section{Conclusion}\label{sec:conclusion}
We have shown that all NP contributions to three-body semi-hadronic decays of
the type $P_i \to P_f f_1 f_2$, where $P_{i(f)}$ denotes appropriate initial
(final) pseudo-scalar meson and $f_{1,2}$ are a pair of fermions, can be
codified into the most general Lagrangian which gives rise to a very general
angular distribution. The relevant NP information can be obtained by using
various angular asymmetries, provided at least one of the final pair of fermions
has some detectable signature, such as a displaced vertex, in the detector.
Depending on the detection feasibility of the final fermions we have grouped the
$P_i \to P_f f_1 f_2$ decays into three distinct categories: (i) S1 where both
$f_1$ and $f_2$ are detected, (ii) S2 where either $f_1$ or $f_2$ gets detected,
and (iii) S3 where neither $f_1$ nor $f_2$ gets detected. We consider the
possibility that with advancement in detector technology S3 decays could, in
future, be grouped under S2 category. We analyze some specific NP scenarios in
each of these categories to illustrate how NP affects the angular distribution.
Specifically we have analyzed (a) lepton-flavor violating S1 decay $B \to P
\ell^- \ell'^+$ (with $P = \pi, K, D$ and $\ell,\ell'=e,\mu,\tau$) showing
angular signatures of all generic NP possibilities, (b) S2 decays of the type $B
\to P \ell^- f$ (where $f$ is not detected in the laboratory) showing the effect
of a scalar type and a tensor type NP on the angular distribution, and finally
(c) S3 decays (more correctly generalized S2 decays) of the type $B \to K f
\bar{f}$ (where either $f$ or $\bar{f}$ gets detected in an advanced detector)
showing the effects of a scalar type and a vector type NP on the angular
distribution. The effects on the angular distribution can be easily estimated
from Dalitz plot asymmetries. The signatures of NP in angular distribution are
distinct once the process is chosen carefully. Moreover, as shown in our
examples, it is possible to identify and quantify NP
effects without worrying about hadronic uncertainties. We are optimistic that
our methodology can be put to use at LHCb and Belle II in the study of appropriate
$B$ meson decays furthering our search for NP.
\acknowledgments
This work was supported in part by the National Research Foundation of Korea
(NRF) grant funded by the Korean government (MSIP) (No.2016R1A2B2016112) and
(NRF-2018R1A4A1025334). This work of D.S. was also supported (in part) by the
Yonsei University Research Fund (Post Doc. Researcher Supporting Program) of
2018 (project no.: 2018-12-0145).
\section{Introduction}\label{sect:intro}
{\it Gaia}, a cornerstone mission of the European Space Agency (ESA), aims to chart a three-dimensional map of our Galaxy with unprecedented precision. After 22 months of observations, {\it Gaia} delivered its second release ({\it Gaia} DR2) on April 25, 2018 \citep{gaia2018}. This catalog contains the positions of nearly 1.7 billion objects with G-band magnitudes brighter than $\sim$20.7. Among these sources, more than 1.3 billion stars in the Milky Way have precise positions, proper motions, parallaxes and colors. The uncertainties in the respective proper motion components are up to 0.06 \,mas\,$\rm yr^{-1}$ (for $G<15$\,mag), 0.2 \,mas\,$\rm yr^{-1}$ (for $G=17$\,mag) and 1.2 \,mas\,$\rm yr^{-1}$ (for $G = 20$\,mag). The {\it Gaia} DR2 parallaxes and proper motions are based only on {\it Gaia} data.
{\it Gaia} DR2 supersedes the vast majority of existing proper motion catalogs. Previous proper motion catalogs, such as PPMXL \citep{roeser2010}, HSOY \citep{Altmann2017}, the UCAC series \citep{zacharias2004, zacharias2010, Zacharias2017}, APOP \citep{qi2015}, and GPS1 \citep[][hereafter, T17]{tian2017}, cannot match {\it Gaia} DR2 in quality, even though HSOY, UCAC5 and GPS1 were built using the precise {\it Gaia} DR1 astrometry \citep{gaia2016a}.
Unfortunately, some limitations still exist in {\it Gaia} DR2: (1) more than 361 million sources have only positions (precision $\sim$2\,mas) at J2015.5 and mean {\it G} magnitudes, lacking proper motions, parallaxes, etc.; (2) the average precision of the proper motions does not reach the sub-mas\,$\rm yr^{-1}$ level for faint sources, in particular for those close to the {\it Gaia} limiting magnitude; (3) {\it Gaia} DR2 is complete in $12 < G < 17$\,mag, but incomplete at an ill-defined faint magnitude limit; (4) it contains no sources with $G > 20.7$\,mag.
In this study, we extend GPS1 and release the GPS1+ proper motion catalog to compensate for these limitations of {\it Gaia} DR2. GPS1+ therefore focuses on: (1) the sources with $19<G<20.7$\,mag whose proper motions were measured in {\it Gaia} DR2, for which we improve the proper motions by combining PS1 and SDSS astrometry with the {\it Gaia} DR2 proper motions as priors; (2) the sources ($>361$ million) missing proper motions in {\it Gaia} DR2, whose proper motions are calculated with the same procedure as GPS1; (3) the faint sources ($20.7 < G < 22.5$\,mag), whose proper motions are likewise derived with the GPS1 procedure.
With the above motivations, we arrange the remainder of this paper as follows. In Section 2, we describe how the GPS1+ catalog is constructed: we first summarize the four data sets in brief, and then describe a Bayesian model used to calculate proper motions for the sources that have {\it Gaia} proper motions. Section 3 presents the resulting GPS1+ proper motions and demonstrates their accuracy and precision. In Section 4, we briefly discuss the limitations of GPS1+, and we summarize in Section 5.
Throughout the paper, we adopt the Solar motion $(U_\odot,V_\odot,W_\odot)=(9.58, 10.52, 7.01)$\,km\,$\rm s^{-1}$\ \citep{tian2015}, and the IAU circular speed of the local standard of rest (LSR), $v_0=220$\,km\,$\rm s^{-1}$. Also, $\alpha^*$ denotes the right ascension in the gnomonic projection coordinate system, for example, $\mu_{\alpha^*}$\ $=$ $\mu_{\alpha}\cos(\delta)$ and $\Delta\alpha^*=\Delta\alpha\cos(\delta)$, while ${\epsilon}$ denotes uncertainties, to avoid confusion with the symbol $\delta$ referring to a source's declination. We use $\Delta$ to denote differences in quantities such as proper motion or position.
\section{The Construction of GPS1+}\label{sect:data}
\subsection{Data Set}\label{sect:dataset}
We use the same four basic imaging surveys, i.e., {\it Gaia}, PS1, SDSS, and 2MASS, to build the GPS1+ catalog. Unlike GPS1, GPS1+ is based on {\it Gaia} DR2, while the other three astrometric data sets remain the same as those used in GPS1, i.e., the same data versions and treatment.
{\it Gaia} DR2 consists of around 1.69 billion astrometric sources \citep{gaia2018}. All the sources have positions calibrated to the International Celestial Reference Frame (ICRF) at epoch J2015.5. The typical positional uncertainty is of the order of 0.7\,mas\ for sources at the faint end (i.e., $G = 20$\,mag), as shown in the top panel of Figure \ref{fig:mr_raErr}. Therefore, {\it Gaia} DR2 provides one precise observational position at epoch J2015.5; note that this epoch differs from the J2015.0 epoch of {\it Gaia} DR1.
About 1.33 billion sources have proper motions, but more than 361 million sources have none in {\it Gaia} DR2. The sources missing proper motions are mainly located at the faint end of {\it Gaia} DR2, as shown in the top panel of Figure \ref{fig:mr_gaia} by comparing the histograms of all sources (blue) and of those without proper motions (green) in {\it Gaia} DR2. The bottom panel of Figure \ref{fig:mr_gaia} displays the scatter distribution of the uncertainties of {\it Gaia} DR2 proper motions in the faint regime ($r>19$\,mag). The red points are the median proper motion uncertainties in different magnitude bins. The median uncertainty is larger than 2.0 \,mas\,$\rm yr^{-1}$\ (marked with the black dashed line) for sources close to the limiting magnitude. The proper motion precision of these sources can be significantly improved by combining the astrometry of PS1, SDSS and {\it Gaia}, as demonstrated in Section \ref{sect:method}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.65]{mr_decErr.pdf}
\caption{Precision of the source position measurements along the $\delta$ direction for the various data sets used in the construction of the GPS1+ catalog, as a function of $r$-band magnitude. The red dots and bars indicate the average and root-mean-square ({\it rms}) of the position uncertainties in each magnitude bin. The average position uncertainties are 0.5, 50, 80, and 235\,mas for the entire samples in $19<r<22.5$\,mag from {\it Gaia} DR2, PS1, SDSS, and 2MASS, respectively. The contours indicate the normalized number density of sources with different levels of 0.02, 0.05, 0.1, 0.2, 0.4, 0.6, and 0.8 (the highest density is normalized to 1).}\label{fig:mr_raErr}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.5]{mr_hist.pdf}
\caption{Top: the histograms of sources in the faint regime ($r>19$\,mag) from {\it Gaia} DR2 (blue), PS1 (black), and the subset of {\it Gaia} DR2 sources that have proper motions (green), in the same sky region. The sources with {\it Gaia} proper motions make up only $\sim$40\% of a sample selected from PS1 in a random sky region. Bottom: the distribution of the uncertainties of {\it Gaia} DR2 proper motions in the faint regime ($r>19$\,mag). The uncertainties increase with magnitude following a power law, i.e., the $r$-band flux to the 0.29 power (the yellow dashed line). The average uncertainty of the proper motions is larger than 2 \,mas\,$\rm yr^{-1}$\ (marked with the black dashed line) for sources near the {\it Gaia} limiting magnitude.}\label{fig:mr_gaia}
\end{figure}
Pan-STARRS1 \citep[PS1;][]{Chambers2011} is a wide-field optical/near-IR survey telescope system that has been conducting multi-epoch, multi-color observations over the entire sky visible from Hawaii (decl. $\gtrsim -30^\circ$) for many years. Its Processing Version 3 catalog \citep[PV3;][]{Chambers2016} contains around 65 detections for each source over a sky area of $\sim$30,000\ deg$^2$, with epochs spanning the 5.5 years from 2010 to 2014.
As in GPS1, we take season-averaged positions and positional uncertainties for the faint sources ($r_{PS1}>19.0$\,mag) in PS1. Each source is detected more than 10 times in an observing season. The typical single-epoch positional precision of faint sources is $\sim 50$\,mas, as illustrated in the second panel of Figure \ref{fig:mr_raErr}. Furthermore, we apply the selection cuts used in GPS1 to the individual detections and the individual faint sources to remove PS1 astrometric outliers. Finally, we obtain around 400 million faint objects with billions of detections.
The Sloan Digital Sky Survey (SDSS) began regular operations in 2000 April \citep{York2000}. Its ninth data release (DR9) contains almost all of its photometric data \citep{Ahn2012}, imaged at early epochs, i.e., 10--20 years ago. The long epoch baseline makes these data very valuable. The typical astrometric uncertainty for faint stars ($r>19.0$\,mag) is around 80\,mas per coordinate \citep{Stoughton2002}, as shown in the third panel of Figure \ref{fig:mr_raErr}.
The Two Micron All Sky Survey \citep[2MASS;][]{Skrutskie2006} All-Sky Data Release identifies around 471 million point sources and 1.6 million extended sources, covering virtually the entire celestial sphere between June 1997 and February 2001. Faint source extractions have an astrometric accuracy of order 100\,mas, as shown in the bottom panel of Figure \ref{fig:mr_raErr}. Because of these large positional uncertainties, 2MASS positions provide only a weak constraint on the proper motion measurements.
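To see why 2MASS contributes so little while {\it Gaia} and SDSS dominate, note that a proper motion is essentially a position difference divided by an epoch baseline. The following minimal two-epoch error propagation uses the average faint-end position errors quoted above together with approximate mean epochs; the epochs are our rough assumptions for illustration only.

```python
import math

def two_epoch_pm_sigma(eps1_mas, t1, eps2_mas, t2):
    """1-sigma proper-motion uncertainty (mas/yr) from two measured positions."""
    return math.sqrt(eps1_mas**2 + eps2_mas**2) / abs(t2 - t1)

# (average position error [mas], approximate mean epoch [yr]) for faint sources
gaia  = (0.5,   2015.5)   # Gaia DR2
sdss  = (80.0,  2000.0)   # SDSS, early epochs
tmass = (235.0, 1999.0)   # 2MASS, 1997-2001

print(two_epoch_pm_sigma(*gaia, *sdss))    # ~5.2 mas/yr: a strong constraint
print(two_epoch_pm_sigma(*gaia, *tmass))   # ~14 mas/yr: only a weak constraint
```

The long SDSS baseline compensates for its 80\,mas position errors, whereas the 2MASS errors are too large for a similar baseline to help much.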
We cross-matched the PS1 objects with {\it Gaia}, 2MASS and SDSS using a 1.5$\arcsec$ search radius. The internal ID from PS1 thus serves as the key identifier connecting the four catalogs.
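The cross-match amounts to a nearest-neighbor search with an angular cut. A brute-force sketch of the 1.5$\arcsec$ matching is shown below for small catalogs; our pipeline details may differ, and for hundreds of millions of sources a spatially indexed match (e.g., kd-trees on the unit sphere) would be used instead.

```python
import numpy as np

def match_within(ra1, dec1, ra2, dec2, radius_arcsec=1.5):
    """For each catalog-1 source, the index of the nearest catalog-2 source
    within radius_arcsec, or -1 if none. Coordinates are in degrees."""
    r1, d1 = np.radians(ra1), np.radians(dec1)
    r2, d2 = np.radians(ra2), np.radians(dec2)
    # pairwise angular separations (haversine formula), converted to arcsec
    a = (np.sin((d1[:, None] - d2[None, :]) / 2) ** 2
         + np.cos(d1)[:, None] * np.cos(d2)[None, :]
         * np.sin((r1[:, None] - r2[None, :]) / 2) ** 2)
    sep = np.degrees(2 * np.arcsin(np.sqrt(a))) * 3600.0
    idx = sep.argmin(axis=1)
    idx[sep.min(axis=1) > radius_arcsec] = -1
    return idx
```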
The black histogram in Figure \ref{fig:mr_gaia} indicates that more than 60\% of PS1 sources lie beyond {\it Gaia}'s limiting magnitude. These sources, together with those missing {\it Gaia} proper motions ($>$70\% in total), are the main interest of this study.
\subsection{Derivation of Proper Motions}\label{sect:method}
Proper motions in GPS1+ are determined basically with the same procedure used in GPS1. The key difference concerns the sources that have {\it Gaia} proper motions. For these sources, we calculate two kinds of proper motions: one with the method of GPS1, i.e., by performing a linear least-squares fit; the other through a Bayesian model that uses the {\it Gaia} proper motions as priors and combines the astrometry of {\it Gaia}, PS1, SDSS and 2MASS. It is worth mentioning that neither fitting method involves the {\it Gaia} parallax, so the derived proper motions are free from the impact of the parallax, unlike {\it Gaia}'s proper motions, which are correlated with the parallax.
Following the GPS1 construction procedure, we build a reference catalog by averaging repeatedly observed positions of PS1 galaxies in each tile (i.e., a sky area of constant size 10\degr\ by 10\degr), and calibrate the cataloged positions of each object at five (or six) PS1 epochs, one {\it Gaia} epoch, possibly one SDSS epoch and one 2MASS epoch onto the same reference frame. All the steps are summarized in detail in Section 3 of T17. For each source, we calculate its proper motion by performing a linear least-squares fit based on the simple $\chi^2$ described in Equation (2) of T17. For a source that has a {\it Gaia} proper motion, we additionally re-measure a new proper motion with a Bayesian model. We start with the likelihood
\begin{equation}\label{eq:bayesian}
L=p(\{t_i,y_i\} | {\mu}, {b})=\prod_{i}^{N}\left\{\frac{1}{\sqrt{2\pi\epsilon_{i}^2}}\exp\Bigl[-\frac{[\hat{y}_{i}^{o} - y_{i}^{model}(t_i)]^2}{2\epsilon_{i}^2}\Bigr]\right\},
\end{equation}
where $\hat{y}_i^o$ is the observed position of a star with positional uncertainty $\epsilon_{i}$ at epoch $i$. The uncertainty $\epsilon_{i}$ consists of two parts: the individual position precision, illustrated in Figure \ref{fig:mr_raErr}, and the uncertainty from the offset calibration discussed in Section 3.2 of T17. $y_i^{model}(t_i)$ is the position predicted by a linear model at the given time $t_i$, i.e., $y_i^{model}(t_i) = {\mu}t_i + {b}$, and $N$ is the number of epochs across the different surveys.
The position $\hat{y}_i^o$ has been calibrated by
\begin{equation}\label{eq:cal_offset}
\hat{y}_i^o = y_i^o - \Delta_{i}(\alpha, \delta) - \Delta_{i}(\delta, m),
\end{equation}
where $y_i^o$ is the original cataloged position of a star at epoch $i$, $\Delta_{i}(\alpha, \delta)$ is the direction dependent offset described in Section 3.2.1 of T17, and $\Delta_{i}(\delta, m)$ is the magnitude and declination dependent offset described in Section 3.2.2 of T17.
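For reference, the GPS1-style fit (Equation (2) of T17) applied to the calibrated positions $\hat{y}_i^o$ amounts to a weighted linear least-squares solution. A minimal sketch follows; the actual pipeline implementation in T17 (robust and cross-validated variants) is more elaborate.

```python
import numpy as np

def fit_pm(t, y, eps):
    """Weighted linear fit y = mu*t + b.
    t: epochs [yr]; y: calibrated positions [mas]; eps: position errors [mas].
    Returns (mu, b, sigma_mu), with mu in mas/yr."""
    w = 1.0 / eps**2
    A = np.vstack([t, np.ones_like(t)]).T          # design matrix [t, 1]
    cov = np.linalg.inv(A.T @ (w[:, None] * A))    # parameter covariance
    mu, b = cov @ (A.T @ (w * y))                  # normal-equation solution
    return mu, b, np.sqrt(cov[0, 0])
```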
According to Bayes' theorem, the posterior probability can be expressed as
\begin{equation}\label{eq:posterior}
p({\mu}, {b} | \{t_i,y_i\}) \propto p(\{t_i,y_i\} | {\mu}, {b})\, p(\mu)\, p(b) ,
\end{equation}
where we assume that the prior probability of $\mu$ follows a Gaussian distribution with $\bar\mu = \mu_{Gaia}$ and $\sigma=\epsilon_{Gaia}$, with $\mu_{Gaia}$ and $\epsilon_{Gaia}$ the proper motion and its uncertainty provided by {\it Gaia} DR2. We adopt a flat prior on $b$, i.e., $p(b) = 1$.
We use \texttt{emcee} \citep{forman2013} to sample the posterior distribution (Equation \ref{eq:posterior}) and estimate the proper motions in the two directions, $\mu_{\alpha^*}$\ and $\mu_{\delta}$, respectively. In practice, we use the joint posterior probability to constrain $\mu_{\alpha^*}$\ and $\mu_{\delta}$\ simultaneously. The intercept $b$ is also a free parameter in the MCMC sampling, but its value is not important for this study. Figure \ref{fig:mcmc} illustrates the proper motion contours and marginalized probability distributions for two sources of different magnitudes. The {\it Gaia} detector performs differently for sources of different brightness. For instance, {\it Gaia} measures a good position for a source with $r=19.6$\,mag, so the combination of the multiple surveys cannot significantly improve the precision of the {\it Gaia} proper motion (only by $\Delta\epsilon_{\mu}\sim0.2$\,mas\,$\rm yr^{-1}$; see the left panel of Figure \ref{fig:mcmc}). However, for a source with $r=20.9$\,mag, close to the {\it Gaia} limiting magnitude, the combination of PS1, SDSS and {\it Gaia} improves the precision of the {\it Gaia} proper motion by $\Delta\epsilon_{\mu}\sim1.0$\,mas\,$\rm yr^{-1}$ (see the right panel of Figure \ref{fig:mcmc}).
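A minimal sketch of the log-posterior explored by the sampler is given below; the function and parameter names are illustrative, not those of our pipeline.

```python
import numpy as np

def log_posterior(theta, t, y, eps, mu_gaia, eps_gaia):
    """Log of Equation (3): Gaussian likelihood of the linear track,
    a Gaussian prior N(mu_gaia, eps_gaia) on mu, and a flat prior on b."""
    mu, b = theta
    resid = y - (mu * t + b)
    log_like = -0.5 * np.sum(resid**2 / eps**2 + np.log(2 * np.pi * eps**2))
    log_prior = -0.5 * ((mu - mu_gaia) / eps_gaia) ** 2
    return log_like + log_prior

# In practice this function is handed to emcee, e.g.:
#   sampler = emcee.EnsembleSampler(nwalkers, 2, log_posterior,
#                                   args=(t, y, eps, mu_gaia, eps_gaia))
#   sampler.run_mcmc(p0, nsteps)
```

When all the epoch errors are large compared with $\epsilon_{Gaia}$, the posterior on $\mu$ is dominated by the prior; when the multi-survey baseline is informative, the posterior tightens beyond the {\it Gaia}-only value, which is the behavior seen in Figure \ref{fig:mcmc}.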
\begin{figure*}[!t]
\centering
\includegraphics[width=0.48\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{fig_corner_12.pdf}
\includegraphics[width=0.48\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{fig_corner_18.pdf}
\caption{Illustration of the proper motion contours and marginalized probability distributions, from the MCMC sampling, for two sources of different magnitudes. The left panel shows the case of $r=19.6$\,mag. Here the source is not too faint and the {\it Gaia} detector works well, so the proper motions ($12.24\pm1.56$\,mas\,$\rm yr^{-1}$\ and $-20.14\pm0.63$\,mas\,$\rm yr^{-1}$) are well measured in {\it Gaia} DR2, and the precision of the new proper motions ($11.16\pm1.34$\,mas\,$\rm yr^{-1}$\ and $-20.13\pm0.60$\,mas\,$\rm yr^{-1}$) cannot be improved significantly (only by $\Delta\epsilon_{\mu}\sim0.2$\,mas\,$\rm yr^{-1}$) by combining the astrometry from SDSS, PS1, and {\it Gaia}. The right panel displays the case of $r=20.9$\,mag, close to the {\it Gaia} limiting magnitude; here the precision of the proper motion improves significantly (by $\Delta\epsilon_{\mu}\sim1.0$\,mas\,$\rm yr^{-1}$) with the combination of the multiple surveys.
}
\label{fig:mcmc}
\end{figure*}
\section{Results and performance}\label{sect:result}
Using the approach described in Section \ref{sect:method} and the method used in GPS1 (see Section 3 of T17), we determine proper motions for around 400 million sources, down to a magnitude of $\sim 22.5$\,mag in the $r$ band. Among these, about 40\% of the sources have new proper motions re-measured with the Bayesian method described in Section \ref{sect:method}; the proper motions of the remaining objects are obtained with the original GPS1 method. The catalog draws on PS1 SeasonAVG and {\it Gaia} DR2 as the primary data, together with the best available combinations of the other surveys. The final catalog provides the robust fit (where all data points are fitted regardless of outliers), the cross-validation fit (where outliers are removed while fitting), and the MCMC fit (where the {\it Gaia} DR2 proper motions are used as priors during sampling, if they exist). For reference, we also include the {\it Gaia} DR2 proper motions where available. Table \ref{tab:gps1} lists the main columns contained in the catalog. In the following subsections, we discuss the precision and accuracy of the proper motions in the different cases.
\begin{table*}
\begin{threeparttable}
\caption{The columns of the GPS1+ catalog.}\label{tab:gps1}
\centering
\begin{tabular}{l|l|l|l}
\hline
\hline
&Column&Unit&description\\
\hline
1&obj\_id\footnotemark&-&The unique but internal object\_id in PS1\\
2&ra&degree&R.A. at J2015.5 from {\it Gaia} DR2\\
3&dec&degree&Decl. at J2015.5 from {\it Gaia} DR2\\
4&e\_ra&mas&Positional uncertainty in right ascension at J2015.5 from {\it Gaia} DR2\\
5&e\_dec&mas&Positional uncertainty in declination at J2015.5 from {\it Gaia} DR2\\
6&ra\_ps1&degree&Average right ascension at J2010 from PS1 PV3\\
7&dec\_ps1&degree&Average declination at J2010 from PS1 PV3\\
8&pmra&\,mas\,$\rm yr^{-1}$&Proper motion with robust fit in $\alpha\cos\delta$\\
9&pmde&\,mas\,$\rm yr^{-1}$&Proper motion with robust fit in $\delta$\\
10&e\_pmra&\,mas\,$\rm yr^{-1}$&Error of the proper motion with robust fit in $\alpha\cos\delta$\\
11&e\_pmde&\,mas\,$\rm yr^{-1}$&Error of the proper motion with robust fit in $\delta$\\
12&chi2pmra&-&$\chi_{\nu}^2$ from the robust proper motion fit in $\alpha\cos\delta$\\
13&chi2pmde&-&$\chi_{\nu}^2$ from the robust proper motion fit in $\delta$\\
14&pmra\_x&\,mas\,$\rm yr^{-1}$&Proper motion with cross-validated fit in $\alpha\cos\delta$\\
15&pmde\_x&\,mas\,$\rm yr^{-1}$&Proper motion with cross-validated fit in $\delta$\\
16&e\_pmra\_x&\,mas\,$\rm yr^{-1}$&Error of the proper motion with cross-validated fit in $\alpha\cos\delta$\\
17&e\_pmde\_x&\,mas\,$\rm yr^{-1}$&Error of the proper motion with cross-validated fit in $\delta$\\
18&pmra\_mcmc&\,mas\,$\rm yr^{-1}$&Proper motion with MCMC sampling fit in $\alpha\cos\delta$\\
19&pmde\_mcmc&\,mas\,$\rm yr^{-1}$&Proper motion with MCMC sampling fit in $\delta$\\
20&e\_pmra\_mcmc&\,mas\,$\rm yr^{-1}$&Error of the proper motion with MCMC sampling fit in $\alpha\cos\delta$\\
21&e\_pmde\_mcmc&\,mas\,$\rm yr^{-1}$&Error of the proper motion with MCMC sampling fit in $\delta$\\
22&pmra\_gaia&\,mas\,$\rm yr^{-1}$&Proper motion from {\it Gaia} DR2 in $\alpha\cos\delta$\\
23&pmde\_gaia&\,mas\,$\rm yr^{-1}$&Proper motion from {\it Gaia} DR2 in $\delta$\\
24&e\_pmra\_gaia&\,mas\,$\rm yr^{-1}$&Error of the proper motion from {\it Gaia} DR2 in $\alpha\cos\delta$\\
25&e\_pmde\_gaia&\,mas\,$\rm yr^{-1}$&Error of the proper motion from {\it Gaia} DR2 in $\delta$\\
26&n\_obsps1&-&The number of SeasonAVG observations used in the proper motion fit \\
27&n\_obs&-&The number of all the observations used in the robust proper motion fit\\
28&flag\footnotemark&-&An integer number used to flag the different data combination in the proper motion fit. \\
29&magg&mag&g-band magnitude from PS1 \\
30&magr&mag&r-band magnitude from PS1\\
31&magi&mag&i-band magnitude from PS1\\
32&magz&mag&z-band magnitude from PS1\\
33&magy&mag&y-band magnitude from PS1\\
34&e\_magg&mag&Error in g-band magnitude from PS1\\
35&e\_magr&mag&Error in r-band magnitude from PS1\\
36&e\_magi&mag&Error in i-band magnitude from PS1\\
37&e\_magz&mag&Error in z-band magnitude from PS1\\
38&e\_magy&mag&Error in y-band magnitude from PS1\\
39&maggaia&mag&G-band magnitude from {\it Gaia} \\
40&e\_maggaia&mag&Error in G-band magnitude from Gaia\\
\hline
\hline
\end{tabular}
\begin{tablenotes}
\item [a] Here obj\_id is an internal PS1 ID, which is different from the public ID released in the PS1 catalog.
\item [b] In order to label the different survey combinations for the proper motion fit, we assign PS1, 2MASS, SDSS, and {\it Gaia} different integer identifiers, i.e., 0, 5, 10, and 20, respectively, and define a $flag$ as the sum of the identifiers of the surveys combined.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsection{Proper Motion Uncertainties in the Different Data Set Combinations}
The footprint overlap among the {\it Gaia}, PS1, SDSS and 2MASS surveys introduces some complexity: $\sim$14.5\% of the stars are covered by {\it Gaia}, PS1, and SDSS; $\sim$43.6\% by PS1 and {\it Gaia}, but not SDSS; $\sim$33.9\% are observed only by PS1; and the remaining 8\% by PS1 and SDSS, but not {\it Gaia}. Therefore, it is necessary to investigate how the final proper motions are affected by combining the different data sets.
\begin{table*}
\begin{threeparttable}[b]
\caption{The formal fitting uncertainties of the proper motions in the different data combinations.}\label{tab:dif_comb}
\centering
\begin{tabular}{c|l|c|c|c|c|c|c}
\hline
\hline
ID&Mode& \multicolumn{2}{c|}{$19<m_r<21$} &\multicolumn{2}{c|}{$21<m_r<22.5$}&\multicolumn{2}{c}{MCMC Sampling\tnote{a}} \\
\hline
&&$\langle \epsilon_{\mu_{\alpha^*}}\rangle$ &$\langle \epsilon_{\mu_{\delta}}\rangle$ &$\langle \epsilon_{\mu_{\alpha^*}}\rangle$ &$\langle \epsilon_{\mu_{\delta}}\rangle$ &$\langle \epsilon_{\mu_{\alpha^*}}\rangle$ &$\langle \epsilon_{\mu_{\delta}}\rangle$ \\
\hline
\multicolumn{2}{c|}{}&\multicolumn{6}{c}{\,mas\,$\rm yr^{-1}$}\\
\hline
1&GPS (Gaia+PS1+SDSS+2MASS)&2.10$\pm$0.64&1.97$\pm$0.60&3.18$\pm$0.94&2.93$\pm$0.82&0.96$\pm$0.50&0.85$\pm$0.47\\
2&GP (Gaia+PS1+2MASS)&3.71$\pm$2.07&2.98$\pm$1.35&5.10$\pm$2.34&4.24$\pm$1.68&1.03$\pm$0.61&0.95$\pm$0.58\\
3&PD (PS1+SDSS+2MASS)&5.12$\pm$2.33&4.85$\pm$2.22&7.56$\pm$3.26&7.16$\pm$3.05&-&-\\
4&PS1 ({\sl only} PS1)&15.20$\pm$9.93&12.75$\pm$6.67&21.58$\pm$12.67&18.25$\pm$9.72&-&-\\
\hline
\hline
\end{tabular}
\begin{tablenotes}
\item [a] The uncertainties in this column are estimated from sources with $19.0<r<22.5$\,mag. In this mode, the proper motions are measured with the {\it Gaia} proper motions as priors during the MCMC sampling; therefore, there are no values for the PD and {\sl only} PS1 modes.
\end{tablenotes}
\end{threeparttable}
\end{table*}
Like GPS1, we investigate how the uncertainties in proper motion differ among the following four combinations of data sets: {\it Gaia} + PS1 + SDSS + 2MASS (GPS), {\it Gaia} + PS1 + 2MASS (GP), PS1 + SDSS + 2MASS (PD), and
{\sl only} PS1 (PS1). For the catalog table, the surveys are assigned different integer identifiers: 0, 5, 10, and 20 for PS1, 2MASS, SDSS, and {\it Gaia}, respectively. This defines a {\sl flag} for the survey combination entering a fit, given by the sum of the individual survey identifiers. The primary observations are those from PS1, so the positions of each star must include the PS1 detections when fitting for proper motion.
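The identifiers 0, 5, 10, and 20 are chosen so that every subset of surveys sums to a unique value. A small sketch of encoding and decoding the {\sl flag} (the helper names are illustrative):

```python
# Survey identifiers as defined for the catalog's flag column.
SURVEY_ID = {"PS1": 0, "2MASS": 5, "SDSS": 10, "Gaia": 20}

def make_flag(surveys):
    """Flag = sum of the identifiers of the surveys entering the fit."""
    return sum(SURVEY_ID[s] for s in surveys)

def decode_flag(flag):
    """Invert the sum greedily; PS1 (identifier 0) is always present,
    and 5, 10, 20 are chosen so that no subset-sum is ambiguous."""
    surveys = ["PS1"]
    for name, ident in (("Gaia", 20), ("SDSS", 10), ("2MASS", 5)):
        if flag >= ident:
            surveys.append(name)
            flag -= ident
    return sorted(surveys)

# e.g. the GPS mode (Gaia+PS1+SDSS+2MASS) carries flag 35,
# the GP mode (Gaia+PS1+2MASS) flag 25, and PS1-only flag 0.
```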
Figure \ref{fig:sigmu_mr_GPS1} summarizes the distribution of proper motion uncertainties for the four different combinations. The figure is drawn from one million sources randomly selected from the GPS1+ catalog. In the four panels, the blue points correspond to the stars in the different combinations and the red curves are the median proper motion uncertainties within different magnitude bins. The average uncertainties in magnitude bins are listed in Table \ref{tab:dif_comb}, with the mean over $19<m_r<22.5$ marked by black lines. In the GPS mode, the average uncertainties are $\epsilon_{\mu_{\alpha^*}}$\ $\sim$2.24 \,mas\,$\rm yr^{-1}$\ and $\epsilon_{\mu_{\delta}}$\ $\sim$2.10 \,mas\,$\rm yr^{-1}$. This is better than the GP mode ($\epsilon_{\mu_{\alpha^*}}$\ $\sim$3.98 \,mas\,$\rm yr^{-1}$\ and $\epsilon_{\mu_{\delta}}$\ $\sim$3.19 \,mas\,$\rm yr^{-1}$): the SDSS positions improve the precision by $\sim$1.5\,mas\,$\rm yr^{-1}$\ in both $\epsilon_{\mu_{\alpha^*}}$\ and $\epsilon_{\mu_{\delta}}$. Without {\it Gaia} positions (PD mode), the typical uncertainties become $\epsilon_{\mu_{\alpha^*}}$\ $\sim$7.45 \,mas\,$\rm yr^{-1}$\ and $\epsilon_{\mu_{\delta}}$\ $\sim$7.05 \,mas\,$\rm yr^{-1}$; the {\it Gaia} positions improve the precision by $\sim$4.3\,mas\,$\rm yr^{-1}$\ in both components. For PS1 data alone, the mean uncertainties become $\epsilon_{\mu_{\alpha^*}}$\ $\sim$21.03 \,mas\,$\rm yr^{-1}$\ and $\epsilon_{\mu_{\delta}}$\ $\sim$17.75 \,mas\,$\rm yr^{-1}$. The precision improvement is thus dominated by {\it Gaia} and SDSS.
Figure \ref{fig:uncertanties_star} illustrates the distribution of the uncertainties of one million randomly selected stars as Mollweide projection maps of the entire 3$\pi$ sky region in the equatorial coordinate system. The median uncertainty in each pixel is calculated from hundreds of stars.
The median uncertainties are $\sim$8.0\,mas\,$\rm yr^{-1}$\ for $\mu_{\alpha^*}$\ (the left panel) and $\sim$7.2\,mas\,$\rm yr^{-1}$\ for $\mu_{\delta}$\ (the right panel), as shown in the maps. The uncertainties at high and low declinations are larger, as SDSS data are missing there. The small uncertainties in the north Galactic cap are driven by the SDSS observations taken ten or fifteen years earlier.
For sources with $19<r<21$\,mag, the GPS1+ catalog is at its best. In this magnitude bin, $\sim$92\% of the sources have {\it Gaia} positions, and $\sim$79.5\% have {\it Gaia} proper motions. Therefore, the proper motions in this bin are dominated by the values obtained with the MCMC fitting, which uses the {\it Gaia} proper motions as priors and combines all the astrometry from {\it Gaia}, PS1, SDSS and 2MASS. The final uncertainties in this bin are better than 1.0 \,mas\,$\rm yr^{-1}$\ on average in both $\epsilon_{\mu_{\alpha^*}}$\ and $\epsilon_{\mu_{\delta}}$.
For the fainter sources with $r>21.0$\,mag, the positional uncertainties increase steeply with magnitude. These sources are beyond the {\it Gaia} limiting magnitude, or close to the PS1 and SDSS limits, so the precision of the derived proper motions degrades towards the faint end. As the values in Table \ref{tab:dif_comb} show, SDSS and {\it Gaia} each improve the precision of the PS1-only proper motions by $\sim$10\,mas\,$\rm yr^{-1}$, and by $\sim$15\,mas\,$\rm yr^{-1}$\ together. Therefore, {\it Gaia} and SDSS are comparably important for reducing the uncertainties of the faint stars.
We checked the quality of the proper motion fits via the distribution of reduced $\chi^2$ for a random subset of stars. The median values for both $\mu_{\alpha^*}$\ and $\mu_{\delta}$\ are smaller than 1, implying that most fits are good.
Based on this performance, the $\sim$66\% of sources in the GPS, GP, and PD modes are defined as the primary sources, with a good average precision of 2.0--5.0 \,mas\,$\rm yr^{-1}$; the remaining $\sim$34\% of sources have only PS1 astrometry and are defined as the secondary sources, with an average precision worse than 15.0 \,mas\,$\rm yr^{-1}$. This poor precision probably limits the usefulness of the secondary sources.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.504\textwidth, trim=0.0cm 0.0cm 0.1cm 0.0cm, clip]{rmag_muErr_GPS_mrcal2.pdf}
\includegraphics[width=0.44\textwidth, trim=1.85cm 0.0cm 0.0cm 0.0cm, clip]{rmag_muErr_GP_mrcal2.pdf}
\includegraphics[width=0.504\textwidth, trim=0.0cm 0.0cm 0.1cm 0.0cm, clip]{rmag_muErr_PD_mrcal2.pdf}
\includegraphics[width=0.44\textwidth, trim=1.85cm 0.0cm 0.0cm 0.0cm, clip]{rmag_muErr_PS1_mrcal2.pdf}
\caption{Proper motion precision for the four different combinations of data sets (top-left: GPS, top-right: GP, bottom-left: PD, and bottom-right: ONLY PS1). In the four panels, the red curves and bars are the median uncertainties and {\it rms} of the proper motions within different magnitude bins, and the black dashed lines mark the typical average uncertainties in the magnitude range $19<r<22.5$\,mag. The blue scatter points represent one million sources randomly selected from the sky. The y-axes show the uncertainties on a logarithmic scale.
The typical average uncertainties for the four combination modes are $\epsilon_{\mu_{\alpha^*}}$\ $\sim$2.24 \,mas\,$\rm yr^{-1}$, $\epsilon_{\mu_{\delta}}$\ $\sim$2.10 \,mas\,$\rm yr^{-1}$\ for the GPS mode, $\epsilon_{\mu_{\alpha^*}}$\ $\sim$3.98 \,mas\,$\rm yr^{-1}$, $\epsilon_{\mu_{\delta}}$\ $\sim$3.19 \,mas\,$\rm yr^{-1}$\ for the GP mode, $\epsilon_{\mu_{\alpha^*}}$\ $\sim$7.45 \,mas\,$\rm yr^{-1}$, $\epsilon_{\mu_{\delta}}$\ $\sim$7.05 \,mas\,$\rm yr^{-1}$\ for the PD mode, and $\epsilon_{\mu_{\alpha^*}}$\ $\sim$21.03 \,mas\,$\rm yr^{-1}$, $\epsilon_{\mu_{\delta}}$\ $\sim$17.75 \,mas\,$\rm yr^{-1}$\ for the ONLY PS1 mode, respectively. The contours indicate the normalized number density of sources with different levels of 0.02, 0.05, 0.1, 0.2, 0.4, 0.6, and 0.8 (the highest density is normalized to 1).
}
\label{fig:sigmu_mr_GPS1}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.45\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{pixel_GPS1+_Starsmpmra.png}
\includegraphics[width=0.45\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{pixel_GPS1+_Starsmpmdec.png}
\caption{The distribution of proper motion uncertainties for stars with $19<r<22.5$\,mag, illustrated with an equatorial Mollweide projection of the entire 3$\pi$ sky region.
The pink solid ($b=0^{\circ}$) and two dotted lines ($b=\pm20^{\circ}$) mark the location of the Galactic plane in the equatorial coordinate system, where sources are crowded and the effects of dust extinction are manifest \citep{tian2014}.
To highlight the structures in the maps, the color bar is scaled in $\pm3\sigma$ around the entire median value for each map.
}\label{fig:uncertanties_star}
\end{figure*}
\subsection{Proper Motion Validation with QSOs}\label{sec:val}
To validate the derived proper motions, we cross-match the GPS1+ catalog with the QSO candidates from \citet{nina2016}, and randomly select 58,000 high-probability QSOs over the entire PS1 3$\pi$ sky region.
Figure \ref{fig:vali_qsos} displays the histograms of $\mu_{\alpha^*}$\ (the top panel) and $\mu_{\delta}$\ (the bottom panel) for the QSOs. The median values of $\mu_{\alpha^*}$\ and $\mu_{\delta}$\ are $-0.13$ \,mas\,$\rm yr^{-1}$\ and $-0.17$ \,mas\,$\rm yr^{-1}$, and the dispersions are 4.57 \,mas\,$\rm yr^{-1}$\ and 5.05 \,mas\,$\rm yr^{-1}$, respectively. Since QSOs are effectively at rest on the sky, the median values suggest that the accuracy of the GPS1+ proper motions is better than 0.2 \,mas\,$\rm yr^{-1}$\ on average in both $\mu_{\alpha^*}$\ and $\mu_{\delta}$, while the dispersions roughly reflect the {\it rms} of the GPS1+ proper motions. Note that the apparent proper motions of QSOs suffer from differential chromatic refraction (DCR), especially in $\delta$: at high declinations, the $\delta$ proper motions are biased by up to 2\,mas\,$\rm yr^{-1}$, and at low declinations they are under-estimated by $\sim$2.0 \,mas\,$\rm yr^{-1}$. This inflates the dispersion of the QSO proper motions relative to the true values.
\begin{figure}[!t]
\centering
\includegraphics[width=0.48\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{hist_QSO_GPS1.pdf}
\caption{Validation of GPS1+ proper motions with QSOs. The light dashed lines denote zero \,mas\,$\rm yr^{-1}$.
}
\label{fig:vali_qsos}
\end{figure}
\subsection{Comparison with {\it Gaia} Proper Motions}\label{sect:val_pal5}
{\it Gaia} DR2 provides well-measured proper motions for stars across the entire sky. In GPS1+, we re-calculate the proper motions of the sources with {\it Gaia} DR2 proper motions using two methods: (1) the GPS1 method, in which proper motions are obtained by combining the astrometry of {\it Gaia}, PS1, SDSS, and 2MASS, regardless of the {\it Gaia} DR2 proper motions; (2) the MCMC method, in which proper motions are obtained by combining the same astrometry but using the {\it Gaia} DR2 proper motions as priors during the MCMC sampling. For the comparison, we randomly select about half a million stars that have all three kinds of proper motions.
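As a deliberately simplified illustration of why the {\it Gaia} priors help (the actual pipeline samples the full posterior with MCMC; this closed-form Gaussian combination is only a sketch):

```python
import math

def combine_gaussian(mu_prior, sig_prior, mu_like, sig_like):
    """Posterior mean/sigma for a Gaussian prior (e.g. a Gaia DR2
    proper motion) combined with an independent Gaussian likelihood
    (e.g. a fit to the PS1+SDSS+2MASS positions).
    Inverse-variance weighting: the posterior is always at least as
    precise as the better of the two inputs."""
    w_p, w_l = 1.0 / sig_prior**2, 1.0 / sig_like**2
    sig_post = math.sqrt(1.0 / (w_p + w_l))
    mu_post = (w_p * mu_prior + w_l * mu_like) / (w_p + w_l)
    return mu_post, sig_post
```

For example, combining a 2\,mas\,$\rm yr^{-1}$\ {\it Gaia} prior with a 3\,mas\,$\rm yr^{-1}$\ ground-based fit yields a posterior precision of about 1.7\,mas\,$\rm yr^{-1}$.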
Figure \ref{fig:vali_gaia} illustrates the comparison of the proper motions between GPS1+ and {\it Gaia} DR2 for $\mu_{\alpha^*}$\ (the top sub-panel) and $\mu_{\delta}$\ (the bottom sub-panel). Two typical proper motion modes are presented: the GP proper motions (the left panel) and the GPS proper motions (the right panel). The median of the proper motion differences ($\Delta\mu=\mu_{GPS1+} - \mu_{Gaia}$) lies within $\pm$0.05 \,mas\,$\rm yr^{-1}$\ of zero, implying that the accuracy of the GPS1+ proper motions is better than 0.05 \,mas\,$\rm yr^{-1}$\ for both $\mu_{\alpha^*}$\ and $\mu_{\delta}$. The red bars indicate that the average {\it rms} of the GPS1+ proper motions is better than 5.0 \,mas\,$\rm yr^{-1}$\ in the GP mode and 3.0 \,mas\,$\rm yr^{-1}$\ in the GPS mode. Here, we assume that the proper motions are measured well enough in {\it Gaia} DR2.
Figure \ref{fig:vali_pal5} presents the comparison of the proper motions between the GPS1+ (MCMC) case and {\it Gaia} DR2 for $\mu_{\alpha^*}$\ (the left panel) and $\mu_{\delta}$\ (the right panel). The insets are the histograms of the error-weighted difference between the two, e.g., $\tilde{\Delta} \mu = (\mu_{ours} - \mu_{Gaia})/\sqrt{\smash[b]{\epsilon_{\mu, ours}^2 + \epsilon_{\mu, Gaia}^2}}$, where the two $\epsilon$ are the errors of our and the {\it Gaia} proper motions. The medians of the error-weighted differences (marked by the white dashed lines) for $\mu_{\alpha^*}$\ and $\mu_{\delta}$\ are $-0.02\pm0.46$ and $0.01\pm0.52$, respectively. The plot indicates that our proper motions are highly consistent with {\it Gaia}.
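The error-weighted difference defined above is straightforward to compute; if the two catalogs agree within their quoted errors, this statistic should be distributed roughly as a unit Gaussian (the function below is an illustrative sketch, not pipeline code):

```python
import numpy as np

def weighted_diff(mu_ours, err_ours, mu_gaia, err_gaia):
    """Error-weighted proper motion difference:
    (mu_ours - mu_gaia) / sqrt(err_ours**2 + err_gaia**2)."""
    mu_ours, err_ours = np.asarray(mu_ours, float), np.asarray(err_ours, float)
    mu_gaia, err_gaia = np.asarray(mu_gaia, float), np.asarray(err_gaia, float)
    # np.hypot(a, b) computes sqrt(a**2 + b**2) without overflow
    return (mu_ours - mu_gaia) / np.hypot(err_ours, err_gaia)
```

The medians and scatters quoted in the text ($-0.02\pm0.46$ and $0.01\pm0.52$) are the median and dispersion of this quantity for the two components.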
\begin{figure*}[!t]
\centering
\includegraphics[width=0.45\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{dmu_mag_gp.pdf}
\includegraphics[width=0.45\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{dmu_mag_gps.pdf}
\caption{Comparison of the proper motions between GPS1+ and {\it Gaia} DR2 in different magnitude bins. Two typical proper motion modes are presented: the GP proper motions (the left panel) and the GPS proper motions (the right panel). The blue points show the individual proper motion differences ($\Delta\mu=\mu_{GPS1+} - \mu_{Gaia}$). The red curves are the median values of $\Delta\mu$ in the different magnitude bins, and the error bars represent the robust {\it rms}. The black dashed lines mark $\Delta\mu=0$. All the red points oscillate around the black dashed lines within $\pm$0.05 \,mas\,$\rm yr^{-1}$, indicating that the average accuracy of the GPS1+ proper motions is better than 0.05 \,mas\,$\rm yr^{-1}$.
The average {\it rms} in the GP case is $\sim$ 5.0 \,mas\,$\rm yr^{-1}$, which is reduced to $\sim$3.0 \,mas\,$\rm yr^{-1}$\ in the GPS case.}\label{fig:vali_gaia}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.45\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{cmp_GPS_gaia_ra_relative.pdf}
\includegraphics[width=0.45\textwidth, trim=0.0cm 0.0cm 0.0cm 0.0cm, clip]{cmp_GPS_gaia_dec_relative.pdf}
\caption{Comparison of proper motions between GPS1+ (MCMC) and {\it Gaia} DR2 for $\mu_{\alpha^*}$\ (the left panel) and $\mu_{\delta}$\ (the right panel), based on sources whose proper motions are measured with MCMC fitting. The insets are histograms of the error-weighted difference between our proper motion and {\it Gaia} DR2. The median of the error-weighted differences (the white dashed line) for the $\mu_{\alpha^*}$\ and $\mu_{\delta}$\ are $-0.02\pm0.46$ and $0.01\pm0.52$ (the absolute values: $-0.03\pm0.66$ \,mas\,$\rm yr^{-1}$ and $0.01\pm0.64$ \,mas\,$\rm yr^{-1}$), respectively.
}
\label{fig:vali_pal5}
\end{figure*}
\subsection{Proper Motions Beyond {\it Gaia}}\label{sec:beyond}
In this section, we explicitly summarize what unique data GPS1+ can offer beyond {\it Gaia} DR2. Overall, more than 60\% of the sources in GPS1+ are beyond the {\it Gaia} limiting magnitude, which means that {\it Gaia} cannot reach these objects even in its next data release. The average precision of the proper motions for these sources is $\sim$7.0\,mas\,$\rm yr^{-1}$\ if they are measured by SDSS (around one third of them have SDSS astrometry). Meanwhile, for around 40\% of the sources new proper motions are derived with the Bayesian technique, with the goal of improving the precision of the {\it Gaia} DR2 proper motions at the faint end. Moreover, it is worth mentioning that around 13\% of the sources are objects whose proper motions are missing in {\it Gaia} DR2. We provide proper motions for these sources in GPS1+ with an average precision of $\sim4.5$\,mas\,$\rm yr^{-1}$.
Figure \ref{fig:mr_rix} summarizes the proper motions beyond {\it Gaia} as a function of magnitude. The top panel illustrates the cumulative histograms of the GPS1+ sources ($N_{GPS1+}$, the black curve), and of the sources for which GPS1+ provides proper motions but {\it Gaia} DR2 does not ($N_{Gaia,\,missing}$, the blue curve), across the 3$\pi$ sky in the faint region ($r>19$\,mag). The two curves show that the total $N_{GPS1+}$ and $N_{Gaia,\,missing}$ are about 400 and 47 million, respectively. The middle panel demonstrates how the ratios $N_{Gaia,\,\mu}/N_{GPS1+}$ (the black curve) and $N_{Gaia,\,missing}/N_{Gaia}$ (the blue curve) vary with magnitude. Here, $N_{Gaia,\,\mu}$ and $N_{Gaia}$ are the number of sources in GPS1+ for which {\it Gaia} DR2 provides proper motions, and the number of all sources for which {\it Gaia} DR2 provides positions in a magnitude bin, respectively. The black curve shows that the number of sources with {\it Gaia} proper motions drops dramatically at $r>20$\,mag in GPS1+, and that there are almost no {\it Gaia} proper motions beyond $r>21$\,mag. The blue curve demonstrates that the number of sources whose proper motions are missing in {\it Gaia} DR2 increases quickly at $r>20.5$\,mag. The bottom panel displays how much the precision of the {\it Gaia} DR2 proper motions is improved in the different magnitude bins by including PS1 or SDSS astrometry. As the figure shows, the precision is improved only to a limited degree, by around 0.05\,dex, at $r<20.5$\,mag, but by about 0.1\,dex on average at $r>20.5$\,mag.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.6]{mr_rix.pdf}
\caption{Top: the cumulative histograms of the GPS1+ sources ($N_{GPS1+}$, the black curve), and of the sources for which GPS1+ provides proper motions but {\it Gaia} DR2 does not ($N_{Gaia,\,missing}$, the blue curve), across the 3$\pi$ sky as a function of magnitude in the faint region ($r>19$\,mag). Middle: the number ratio vs. magnitude. The black and blue curves represent the ratios $N_{Gaia,\,\mu}/N_{GPS1+}$ and $N_{Gaia,\,missing}/N_{Gaia}$ in the different magnitude bins, where $N_{Gaia,\,\mu}$ and $N_{Gaia}$ are the number of sources for which {\it Gaia} DR2 provides proper motions, and the number of all sources for which {\it Gaia} DR2 provides positions in a magnitude bin, respectively. Bottom: the precision improvement factor ($\log(\epsilon_{\mu,\,Gaia}/\epsilon_{\mu,\,MCMC})$) of the {\it Gaia} DR2 proper motions in the faint region achieved with the Bayesian technique, where $\epsilon_{\mu,\,Gaia}$ and $\epsilon_{\mu,\,MCMC}$ denote the precisions of the total proper motions measured in {\it Gaia} DR2 and with the Bayesian technique in GPS1+, respectively. At $r<20.5$\,mag, the precisions of the {\it Gaia} DR2 proper motions are only slightly improved, by around 0.05\,dex. At $r>20.5$\,mag, the precisions are improved by about 0.1\,dex on average. Note that this scatter plot is obtained from a sample of one million sources randomly selected from the whole GPS1+ catalog.
}\label{fig:mr_rix}
\end{figure}
\section{The Values and Limitations of GPS1+}
For the most part, GPS1+ constitutes a catalog that extends the depth of GPS1 from $r<20$\,mag down to 22.5\,mag. It not only fills in proper motions missing in {\it Gaia} DR2, but also improves the proper motion precision of faint sources in {\it Gaia} DR2. Most importantly, GPS1+ provides new proper motions for a large number of faint sources beyond {\it Gaia} and other existing catalogs. GPS1+ is of particular value for studies involving faint sources, such as precise ages of field stars from white dwarf companions \citep[][Qiu et al. in preparation]{Fouesneau2019}, brown \citep{Cook2017, Luhman2018} or ultracool \citep{Scholz2020} dwarfs, white dwarf binaries \citep{Parsons2017, Wang2018, Gentile2019, Brown2020, tian2020, Wang2020}, and the sdA problem \citep{Pelisoli2018a, Pelisoli2018b, Pelisoli2019}. Moreover, GPS1+ has potential value for studies of, e.g., stellar kinematics \citep{tian2017b, Farihi2018, WangHF2018, Tian2019}, stellar streams \citep{Fu2018}, and hypervelocity stars \citep{Li2018, Brown2018}.
In addition, it is worth summarizing the limitations of GPS1+, and where it should be used with caution:
(1) Some sources may have erroneous proper motions in crowded regions, e.g., nearby globular clusters, partly because blended sources are easily misclassified as extended sources while the reference frame is being built, and partly because source crowding may lead to systematic errors in source centering.
(2) Some regions in the Galactic plane are blank, particularly in the direction of the Galactic center (see Figure \ref{fig:uncertanties_star}). These regions contain so many sources that our pipeline cannot easily process them.
(3) Some sources, e.g., QSOs, are significantly affected by differential chromatic refraction (DCR). {\it Gaia} is a space-based telescope, so its observations are not affected by DCR; PS1 and SDSS are ground-based telescopes located at different sites, so the two surveys suffer from DCR to different extents. Combining different surveys in the proper motion fit may therefore lead to complex DCR effects.
(4) Around one third of the sources in GPS1+, i.e., the so-called secondary sub-sample, have an average proper motion precision worse than 15.0 \,mas\,$\rm yr^{-1}$, because most of them are so faint that they are beyond the capability of {\it Gaia}'s detectors and only have PS1 astrometry. Their poor precision may limit their usefulness.
\section{Conclusions}\label{sect:conclusions}
{\it Gaia} DR2 released proper motions for more than 1.3 billion stars with unprecedented precision over the entire sky. However, there is still room for a successor to the GPS1 proper motion catalog. Firstly, the uncertainties of the {\it Gaia} proper motions increase with magnitude following a power law in the faint region ($r>19.0$\,mag); the average uncertainty becomes larger than 2 \,mas\,$\rm yr^{-1}$\ for sources close to the {\it Gaia} limiting magnitude. Secondly, more than 361 million stars have positions but no proper motions in {\it Gaia} DR2. Thirdly, about 85\% of the PS1 sources with $21<r<22.5$\,mag have no {\it Gaia} proper motions, being beyond the {\it Gaia} limiting magnitude. In light of these points, we extend the GPS1 catalog.
With the same procedure as GPS1, we calculated proper motions for all PS1 sources fainter than 19\,mag in the r-band. For the sources with {\it Gaia} proper motions, we built a Bayesian model taking the {\it Gaia} proper motions as priors to calculate a new proper motion for each source, combining all the available astrometry from {\it Gaia} DR2, PS1, SDSS, and 2MASS. Finally, we release the GPS1+ proper motion catalog, which contains about 400 million point sources down to 22.5\,mag in the r-band, across three quarters of the sky. The systematic error (i.e., accuracy) is $<0.1$ \,mas\,$\rm yr^{-1}$, but the typical uncertainty (i.e., precision) in the proper motion of a single source is mode-dependent: $\sim$14.5\% of the sources in the GPS1+ catalog have proper motions measured in the GPS mode, with an average precision of $\sim$2.0\,mas\,$\rm yr^{-1}$; $\sim$43.6\% and 8\% of the sources are measured in the GP and PD modes, with a precision of $\sim$5\,mas\,$\rm yr^{-1}$\ on average; and $\sim$33.9\% of the sources are only observed by PS1, with a typical precision worse than 15\,mas\,$\rm yr^{-1}$. Note that $\sim$13\% of the sources are objects whose proper motions are missing in {\it Gaia} DR2; GPS1+ provides their proper motions with a precision of $\sim$4.5\,mas\,$\rm yr^{-1}$. For the $\sim$40\% of sources that have {\it Gaia} proper motions, we re-calculated the proper motions with the Bayesian model; the final precision can be improved by up to $\sim$1.0\,mas\,$\rm yr^{-1}$\ relative to the {\it Gaia} values at the faint end.
According to the performance, we divide the GPS1+ catalog into two sub-samples: the primary sources, with a typical precision of 2.0-5.0 \,mas\,$\rm yr^{-1}$, which have either or both of {\it Gaia} and SDSS astrometry; and the secondary sources, with an average precision worse than 15.0 \,mas\,$\rm yr^{-1}$, which only have PS1 astrometry. The poor precision of the secondary sources may limit their usefulness.
The GPS1+ proper motions are validated with QSOs, and their performance is illustrated by comparison with the proper motions of {\it Gaia} DR2.
\acknowledgements
H.-J.T. acknowledges the National Natural Science Foundation of China (NSFC) under grants 11873034, U1731108, and U1731124. H.-W.R. acknowledges funding from the European Research Council under the European Union's Seventh Framework Programme (FP 7) ERC Grant Agreement n. [321035]. The Pan-STARRS1 Survey (PS1) has been made possible through contributions of the Institute for Astronomy at the University of Hawaii, Pan-STARRS Project Office, Max-Planck Society and its participating institutes, specifically Max Planck Institute for Astronomy, Heidelberg and Max Planck Institute for Extraterrestrial Physics, Garching, Johns Hopkins University, Durham University, University of Edinburgh, Queen's University Belfast, Harvard-Smithsonian Center for Astrophysics, Las Cumbres Observatory Global Telescope Network Incorporated, National Central University of Taiwan, Space Telescope Science Institute, National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, University of Maryland, Eotvos Lorand University and Los Alamos National Laboratory. This work has made use of data from the European Space Agency (ESA)
mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by
the {\it Gaia} Data Processing and Analysis Consortium (DPAC,
\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding
for the DPAC has been provided by national institutions, in particular
the institutions participating in the {\it Gaia} Multilateral Agreement.
\bibliographystyle{apj}
\section{Introduction}
Eclipsing binaries and multiple systems play a crucial role in our understanding
of the Universe. Eclipsing binaries are used for the precise derivation of stellar
parameters such as radii, masses, or luminosities. Multiple systems, on the other hand, play an
important role in calibrating models of star formation and evolution, because the presence
of triple, quadruple, or even higher-order systems can serve as a very sensitive indicator in these
models and simulations. Finally, the distant component can play a crucial role in the
evolution of the system: it offers the possibility to study the so-called Kozai cycles
with tidal friction (see e.g. \citealt{2001ApJ...562.1012E}) or to detect a slow precession
of both the inner and outer orbits.
With modern techniques, large telescopes, automatic surveys, and satellite observatories,
the frontier of astrophysical research keeps moving towards more distant and
fainter targets. This is quite a logical process, but we have to be very careful when claiming any
completeness of our knowledge of bright and close systems. As already
stated in several recently published papers, even relatively bright eclipsing
binaries located within one hundred parsecs of the Sun can still bring surprising new results
(see e.g. \citealt{2011A&A...532A..50M}, or \citealt{2016KsiTau}).
Therefore, we focused our effort on three rather seldom-investigated systems (namely V773~Cas,
QS~Aql, and BR~Ind) that contain, besides an inner eclipsing pair, a more distant third
component detected via interferometry, with outer orbital periods ranging from several decades to hundreds
of years (hence a very large ratio of periods). Moreover, all of these stars show an
Algol-like light curve, and a spectroscopic as well as photometric study of them is still missing.
As already published earlier, e.g. by \cite{2009AJ....138..664Z}, the list of such systems with
eclipsing components among visual doubles, where both the inner and outer orbits are known, is
still limited to only a few dozen on the whole sky.
The statistics of triple and multiple systems are still rather limited, but it can surely
be said that there is a lack of systems with a higher-mass tertiary among the triple stars. This
was shown e.g. by \cite{2008MNRAS.389..925T} for spectroscopic triple stars and by
\cite{2016MNRAS.455.4136B} for the Kepler eclipsing binaries. Only a small fraction of systems have
a tertiary more massive than the eclipsing pair itself. And here comes our contribution to the
topic.
\section{The data}
The spectroscopy was obtained at two observatories. Most of the data
points for these systems came from the Ond\v{r}ejov observatory and its 2-meter telescope
(resolution R $\sim$ 12500). Additionally, the data for BR~Ind and some of the data for QS~Aql were
obtained with the FEROS instrument mounted on the 2.2-meter MPG telescope located at the La Silla
Observatory in Chile (R $\sim$ 48000). The individual exposure times were chosen according to the
quality of the particular night and the specifications of the instrument to achieve an $S/N$ ratio
between several tens and a few hundred.
The original FEROS data were reduced using the standard routines. The final radial velocities
(hereafter RV) used for the analysis were derived by manually comparing the direct and
flipped profiles of the spectral lines on the computer screen to achieve the best match,
using the program SPEFO (\citealt{1996A&A...309..521H},
\citealt{1996ASPC..101..187S}), on several absorption lines in the measured spectral region (usually
\emph{Fe}, \emph{Ca}, or \emph{Si} lines). The derived radial velocities are given in the tables
in the Appendix.
The photometry for these three systems was collected between 2008 and 2016. Some
of the older data, used only for the minima times, were already published earlier, but the complete
light curves (hereafter LC) are published here for the first time. All of the data were
obtained in the Johnson-Cousins photometric system \citep{1990PASP..102.1181B}: V773~Cas in the
$BVR$ filters, and both QS~Aql and BR~Ind in the $BVRI$ filters.
Owing to the relatively high brightness of the targets, only rather small telescopes were used for
the photometric observations. The system V773~Cas was observed (by one of the authors: PS)
with the 34-mm refractor at the private observatory in Brno, Czech Republic, using the SBIG ST-7XME
CCD camera. The star QS~Aql was monitored with a similar instrument at the private observatory
(by one of the authors: RU) in J\'{\i}lov\'e u Prahy, Czech Republic, using a G2-0402 CCD camera.
The only very southern star, BR~Ind, was observed with the FRAM telescope
\citep{2010AdAst2010E..31P}, installed and operated at the Pierre Auger Observatory at Malarg\"ue,
Argentina. For these observations only a small Nikkor lens of 107 mm diameter with a CCD camera of
G4-16000 type was used (mounted on the 30-cm FRAM telescope itself). All the measurements were
processed with the software C-MUNIPACK\footnote{See http://c-munipack.sourceforge.net/}, which is
based on aperture photometry and uses the standard DAOPHOT routines \citep{1993ASPC...52..173T}.
\section{The analysis}
The whole work is based on the classical techniques of combining photometry and spectroscopy
with the analysis of positional measurements of the visual double on the sky, obtained over a
much longer time span (more than a century). Combining these methods, one can
obtain not only reliable orbital and stellar parameters, but also the structure of the system and its long-term evolution.
An additional advantage is that, having complete information about the masses, inclinations,
periods, etc., we can also help fill in the still rather incomplete statistics of triple and quadruple
systems, which are compared to models of the formation of binaries and multiple systems
\citep{2008MNRAS.389..925T}.
At first, the visual orbit based on already published interferometric data was analysed.
However, the orbits of the systems analysed within this study were published quite recently;
hence, our new re-calculations led to only slight improvements of the fits. The data were
taken from the already published papers and the Washington Double Star Catalogue
(hereafter WDS\footnote{http://ad.usno.navy.mil/wds/}, \citealt{WDS}). The orbits were calculated following
\cite{2007AN....328..928Z}, but the coverage of the orbits is usually not complete and
only parts of the long orbits are covered with data nowadays.
Both the photometry and spectroscopy were studied in the standard manner. The obtained photometric
data and the radial velocities were analysed by the program {\sc Phoebe}
\citep{2005ApJ...628..426P}, which uses the classical Wilson-Devinney algorithm
(\citealt{1971ApJ...166..605W}, and its later modifications) and allows us to fit the relevant
parameters of the eclipsing components and their relative orbit. For the modelling, we used several
assumptions. At first, the primary temperature was set to the value corresponding to the particular
spectral type (see e.g. calibrations by \cite{2013ApJS..208....9P} and the updated web
site\footnote{\tiny{http://www.pas.rochester.edu/$\sim$emamajek/EEM$\_$dwarf$\_$UBVIJHK$\_$colors$\_$Teff.txt}}).
The limb-darkening coefficients were obtained through interpolation in tables by
\citet{1993AJ....106.2096V}. The albedo coefficients $A_i$, and the gravity darkening coefficients
$g_i$
were fixed at their suggested values according to the temperatures of the components. As all the
studied eclipsing binaries are members of multiple systems, the third light from the remaining
components was taken into account.
Finally, once we have the LC+RV solution and both eclipsing binary masses are known, we can
proceed to the combined analysis of the visual orbit together with the period changes of the
eclipsing pair. The method itself was introduced in
\cite{2007AN....328..928Z}, and its usage was presented e.g. in \cite{2012A&A...542A..78Z} or
\cite{2014AcA....64..125Z}. The most crucial aspects of the whole analysis are the quality of the
input observations for both methods and the data coverage of the long orbit. This is usually
problematic in cases where the third-body orbit is very long and only a small fraction
of the orbit is covered. On the other hand, if we have good data coverage in both methods, we can
even calculate the distance to the system independently.
\section{V773 Cas}
The first system in our sample of stars is the northern-hemisphere V773~Cas (= HIP~8115, HD~10543,
$V_{max}$ = 6.18~mag), an eclipsing binary discovered on the basis of Hipparcos data
(\citealt{HIP}, and \citealt{1999IBVS.4659....1K}). However, the star had been recognized as a
visual binary many years before the discovery of its photometric variability. Its most recent orbital
solution is that published by \cite{2009AJ....138..813H}, who derived its period to be about
193~yr, its orbital eccentricity about 0.77, and its semimajor axis about
0.9$^{\prime\prime}$. The spectral type was usually stated as A3V \citep{1978BICDS..15..121J} or
A2V \citep{1968RGOB..135..385P}. However, as noted by \cite{2010SerAJ.180...71C}, a
large discrepancy arises between the astrophysical and dynamical total mass of the system: from the
spectral types the total mass should be about 3~M$_\odot$, while the orbital solution
yields $M_{dyn} = 11.9$~M$_\odot$. This strange discrepancy originally led to our interest
in the star.
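The discrepancy can be reproduced with Kepler's third law for visual binaries, $M_{tot} = a^3/(\pi^3 P^2)$ (mass in M$_\odot$, the angular semimajor axis $a$ and the parallax $\pi$ in arcsec, $P$ in years); plugging in the orbital elements of \cite{2009AJ....138..813H} together with the Hipparcos parallax of 11.77~mas indeed yields the quoted dynamical mass (an illustrative check, not part of the original analysis code):

```python
def dynamical_mass(a_arcsec, plx_arcsec, period_yr):
    """Total dynamical mass (solar masses) of a visual binary
    from Kepler's third law: M = (a / plx)**3 / P**2,
    with a and plx in arcsec and P in years."""
    return (a_arcsec / plx_arcsec) ** 3 / period_yr ** 2

# Orbit of Hartkopf et al. (2009) with the Hipparcos parallax:
m_dyn = dynamical_mass(0.899, 0.01177, 193.17)   # ~11.9 Msun
```

This is roughly four times the $\sim$3~M$_\odot$ expected from the spectral types, which is the discrepancy discussed above.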
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig_V773Cas_LC_RV.eps}
\caption{Light and radial velocity curve fits of V773~Cas based on the {\sc Phoebe} fitting.}
\label{FigLCRV_V773Cas}
\end{figure}
First of all, we found that three new interferometric measurements of V773~Cas were obtained
after the last visual orbit calculation was published by \cite{2009AJ....138..813H}. We added these
three data points and re-ran the fitting procedure, which resulted in only a slightly different
orbital solution.
However, our solution would not be complete without also incorporating the
photometric monitoring and our results from the LC+RV analysis.
\begin{table}
\caption{The parameters from the LC+RV fitting of V773 Cas.}
\label{V773CasLCRVparam}
\centering
\begin{tabular}{c c c c}
\hline \hline
Parameter & Primary & Secondary & Tertiary \\
\hline
$HJD_0$ & \multicolumn{2}{c}{2448500.9209 $\pm$ 0.0003 } & -- \\
$P$ [d] & \multicolumn{2}{c}{2.587332 $\pm$ 0.000002} & -- \\
$a$ [R$_\odot$] & \multicolumn{2}{c}{9.96 $\pm$ 0.06} & -- \\
$v_\gamma$ [km/s] & \multicolumn{2}{c}{7.11 $\pm$ 0.30 } & -- \\
$q = M_2/M_1$ & \multicolumn{2}{c}{1.00 $\pm$ 0.05 } & -- \\
$i$ [deg]& \multicolumn{2}{c}{84.7 $\pm$ 2.2 } & -- \\
$K$ [km/s] & 97.1 $\pm$ 0.9 & 97.0 $\pm$ 1.6 & -- \\
$T$ [K] & 5900 (fixed) & 5842 $\pm 50$ & -- \\
$M$ [M$_\odot$] & 0.99 $\pm$ 0.03 & 0.99 $\pm$ 0.04 & -- \\
$R$ [R$_\odot$] & 1.05 $\pm$ 0.05 & 1.05 $\pm$ 0.05 & -- \\
$M_{bol}$ [mag] & 4.55 $\pm$ 0.10 & 4.58 $\pm$ 0.10 & -- \\
$L_B [\%]$ & 10.0 $\pm$ 0.9 & 9.5 $\pm$ 0.9 & 80.5 $\pm$ 0.9 \\
$L_V [\%]$ & 12.5 $\pm$ 0.7 & 12.1 $\pm$ 0.7 & 75.4 $\pm$ 0.6 \\
$L_R [\%]$ & 15.0 $\pm$ 0.6 & 14.6 $\pm$ 0.6 & 70.4 $\pm$ 0.5 \\
\hline
\end{tabular}
\end{table}
The primary temperature was kept fixed at a value of 5900~K, which resulted from our spectral
estimations and also from the primary mass. The results of our LC+RV solution are shown in Figure
\ref{FigLCRV_V773Cas}, where one can see the light curve in $BVR$ filters together with the RV
curve based on the Ond\v{r}ejov data. In total, 20 spectrograms and 15 nights
of photometry were obtained. The radial velocities were mostly derived from the \emph{Ca}\,I and \emph{Fe}\,I
lines. The parameters of the least-squares fit are given in Table \ref{V773CasLCRVparam}. The
system is well-detached and both eclipsing components are probably of about G1-2V spectral type,
hence can be considered solar analogues. The spectral lines of both components appear
very similar to each other, while some lines that remained at an almost fixed position (at
about 5~km$\cdot$s$^{-1}$) seem to originate from the third, distant body, whose motion is
negligible over the time span of the observed spectra. However, as can be seen from the relatively
high value of the third light resulting from the LC solution, the third body is the brightest
member of the system and is probably responsible for the spectral classification of V773~Cas as
A2-3 in the past.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig_V773Cas_Orbit_OC.eps}
\caption{Upper plot: Orbit of V773 Cas on the sky. The individual observations are connected with their
theoretical positions on the orbit, while the dotted line stands for the line of the apsides and
the dashed line stands for the line of the nodes. Bottom plot: $O-C$ diagram of V773 Cas as resulted from our combined analysis of period changes
together with the visual orbit. Best-fitting solution is plotted as a solid line, while the individual
observations are denoted as filled (primary) and open (secondary) circles.}
\label{FigV773Cas_OC_orbit}
\end{figure}
The above-mentioned solution was derived using the {\sc Phoebe} code and LC+RV fitting.
However, we also tried a different approach to the problem. Using the available 20 spectrograms, we
applied the code {\sc
Pyterpol}\footnote{https://github.com/chrysante87/pyterpol/wiki} \citep{2016KsiTau}. This program
determines the kinematic and radiative properties of the binary components through a comparison of the observed
spectra with synthetic ones obtained by interpolation in pre-calculated grids of synthetic
spectra (\citealt{2012A&A...544A.126D} and \citealt{2010A&A...516A..13P}). Using this approach, we
obtained the solution presented in Table \ref{Table_V773Cas_Pyterpol}. This result is in very good
agreement with the {\sc Phoebe} solution presented above, as well as with the observed magnitude
difference between the two visual components (about 1-2 mag from the WDS catalogue).
\begin{table}
\caption{Parameters of V773 Cas using {\sc Pyterpol}.}
\label{Table_V773Cas_Pyterpol}
\centering
\begin{tabular}{c c c c}
\hline \hline
Parameter & Primary & Secondary & Tertiary \\
\hline
$T$ [K] & 5933 $\pm$ 131 & 5693 $\pm$ 161 & 8522 $\pm$ 38 \\
$v \sin i$ [km/s]& 32.17 $\pm$ 2.32 & 49.10 $\pm$ 7.46 & 84.55 $\pm$ 1.42\\
$L$ [\%] & 13.1 $\pm$ 0.8 & 11.1 $\pm$ 0.5 & 75.8 $\pm$ 0.6 \\ \hline
\end{tabular}
\end{table}
The linear ephemerides given in Table \ref{V773CasLCRVparam} are the most suitable elements for
prospective observations of V773~Cas in the upcoming years. However, these elements will
change significantly due to the orbital motion of the eclipsing pair around the common barycenter
with the third component. The most significant change of the orbital elements of the inner pair
will take place near the periastron passage, which will occur in 2021. We plot the predicted
period variation of the V773~Cas eclipsing pair in the $O-C$ diagram in Fig.
\ref{FigV773Cas_OC_orbit}. This diagram was constructed in agreement with the visual orbit of the
double resulting from our combined analysis. The orbit of the third component is given in Fig.
\ref{FigV773Cas_OC_orbit}, and the parameters of the fit are given in Table
\ref{Table_AstrOrbitV773Cas}. The minima times used for the analysis are listed in the
Appendix.
\begin{table}
\caption{The orbital parameters of V773 Cas.}
\label{Table_AstrOrbitV773Cas}
\centering
\begin{tabular}{c c c}
\hline \hline
Parameter & Our solution & \cite{2009AJ....138..813H} \\
\hline
$p_3$ [yr] & 184.9 $\pm$ 2.7 & 193.17 $\pm$ 6.23 \\
$T_0$ [yr] & 2021.8 $\pm$ 2.1 & 2022.39 $\pm$ 0.78 \\
$e$ & 0.794 $\pm$ 0.050 & 0.773 $\pm$ 0.016 \\
$a$ [arcsec] & 0.911 $\pm$ 0.065 & 0.899 $\pm$ 0.012 \\
$i$ [deg] & 133.3 $\pm$ 2.6 & 134.8 $\pm$ 3.8 \\
$\Omega$ [deg] & 125.4 $\pm$ 4.3 & 128.8 $\pm$ 4.6 \\
$\omega$ [deg] & 269.5 $\pm$ 8.5 & 270.2 $\pm$ 7.4 \\
\hline
\end{tabular}
\end{table}
The problem is that, to achieve such a self-consistent solution, we cannot use the Hipparcos
parallax as an input parameter. The spectral classification of about A3V for the third component
comes not only from the already published papers, but also from our findings about the spectra (the
lines indicate a spectral type of about A3), as well as from the photometric indices of the third
body as resulting from the LC+RV solution. Hence, the total mass of the three components should be
about 4~M$_\odot$, which is in contradiction with the mass computed using the Hipparcos parallax
$\pi_{HIP}=11.77 \pm 0.67$~mas. For our combined solution to be self-consistent, the parallax needs
to be about $\pi_{new}=17.6 \pm 1.5$~mas, although its uncertainty is still rather high because it
is based only on a mass estimate. Such a situation is not unprecedented, as it has already been
shown that the Hipparcos data sometimes produce spurious results for close double stars, see e.g.
\cite{2008AJ....136..890D}.
\section{QS Aql}
The second eclipsing system analysed in the present study is QS~Aql (=HIP~96840, HD~185936,
$V_{max}$ = 6.01~mag), which is the brightest and also the most massive of the studied systems.
\cite{1978BICDS..15..121J} classified its spectral type as B5V, while others, such as
\cite{1923AnHar..98....1C} or \cite{1991PBeiO..17...59L}, published its type as B3. Its
variability was first detected by \cite{Millman1928}, but its eclipsing nature was confirmed by
\cite{Guth1931}, who also gave its correct orbital period of about 2.5~days. Some 40 years later,
\cite{Knipe1971} discovered a rapid period change, which occurred around 1964 (his estimate)
and was caused by the periastron passage in the wide orbit around the barycenter. The period change
was so rapid that the eccentricity of the wide orbit must be very high. On the other hand, the
first astrometric observations are more than 80~years old, but their accuracy is questionable due
to the small angular separation of the components. Many reliable speckle interferometric
observations have been obtained since 1976. The most recent orbital solution was computed by
\cite{2007AJ....133.1209D}, who derived an orbital period of 61.72~yr and a surprisingly high
eccentricity of about 0.966. The total mass of the system was estimated to be about
20~M$_\odot$, but with rather high uncertainty. \cite{Mayer2004} noted that the combined analysis
of period changes together with the visual orbit is still problematic due to the poor coverage of
data by both methods.
We started the photometric monitoring of this interesting system in 2007, while the new
spectroscopy was collected from 2012 onwards. Since the last calculation of its visual orbit by
\cite{2007AJ....133.1209D}, only one new observation of the visual double has been published. The
system is known as a single-lined spectroscopic binary; neither the secondary nor the tertiary
component lines had been detected in the spectra. For a discussion of the individual RV
solutions see below.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig_QSAql_LC_RV.eps}
\caption{Light and radial velocity curve fits of QS~Aql based on the {\sc Phoebe} fitting.}
\label{FigLCRV_QSAql}
\end{figure}
\begin{table}
\caption{The parameters from the LC+RV fitting of QS Aql.}
\label{QSAqlLCRVparam}
\centering
\begin{tabular}{c c c c}
\hline \hline
Parameter & Primary & Secondary & Tertiary \\
\hline
$HJD_0$ & \multicolumn{2}{c}{2440443.5442 $\pm$ 0.0015 } & -- \\
$P$ [d] & \multicolumn{2}{c}{2.5132987 $\pm$ 0.0000075 } & -- \\
$a$ [R$_\odot$] & \multicolumn{2}{c}{13.78 $\pm$ 0.11 } & -- \\
$v_\gamma$ [km/s] & \multicolumn{2}{c}{-16.13 $\pm$ 0.62 } & -- \\
$q = M_2/M_1$ & \multicolumn{2}{c}{0.37 $\pm$ 0.02 } & -- \\
$i$ [deg]& \multicolumn{2}{c}{83.6 $\pm$ 1.3 } & -- \\
$K$ [km/s] & 73.98 $\pm$ 0.33 & 201.76 $\pm$ 2.09 & -- \\
$T$ [K] & 14500 (fixed) & 7910 $\pm$ 78 & -- \\
$M$ [M$_\odot$] & 4.07 $\pm$ 0.09 & 1.49 $\pm$ 0.05 & -- \\
$R$ [R$_\odot$] & 4.08 $\pm$ 0.15 & 1.65 $\pm$ 0.20 & -- \\
$M_{bol}$ [mag] & -2.31 $\pm$ 0.18 & 2.29 $\pm$ 0.14 & -- \\
$L_B [\%]$ & 47.6 $\pm$ 2.9 & 1.4 $\pm$ 0.4 & 51.0 $\pm$ 3.1 \\
$L_V [\%]$ & 47.4 $\pm$ 1.2 & 2.0 $\pm$ 0.3 & 50.6 $\pm$ 1.4 \\
$L_R [\%]$ & 49.2 $\pm$ 3.4 & 2.3 $\pm$ 0.2 & 48.5 $\pm$ 3.5 \\
$L_I [\%]$ & 48.7 $\pm$ 2.3 & 2.7 $\pm$ 0.2 & 48.6 $\pm$ 2.5 \\
\hline
\end{tabular}
\end{table}
The light curve analysis was carried out using the data obtained in $BVRI$ filters in the Czech
Republic in 2009 and 2010. The results are shown in Fig. \ref{FigLCRV_QSAql}, while the parameters
corresponding to the best-fitting synthetic light curve are given in Table \ref{QSAqlLCRVparam}. In
Fig. \ref{FigLCRV_QSAql} some small variability can also be seen in the residuals after subtraction
of the light curve. However, these deviations are only caused by worse photometric conditions
during some of the nights and by our decision not to remove the outlying points from the light
curves. The results of the RV fitting are shown in Fig. \ref{FigLCRV_QSAql}, where one can see that
the secondary velocities were also derived, but are affected by much larger errors than the primary
ones.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig_QSAql_Orbit_OC.eps}
\caption{Orbit of QS Aql on the sky, and the $O-C$ diagram. See Fig. \ref{FigV773Cas_OC_orbit} for description.}
\label{FigQSAql_OC_orbit}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{QSAql-RV.eps}
\caption{Gamma velocities of QS Aql as based on individual published RV solutions.}
\label{FigRVvar_QSAql}
\end{figure}
As one can see from the parameters given in Table \ref{QSAqlLCRVparam}, the eclipsing components
are rather different, but the most luminous member seems to be the third, distant one. What is
rather surprising is the fact that the amplitude of the RV variations for the primary component
resulted in about 74~km/s, while the authors of the previous studies gave $K_1$ values between
40.8~km/s \citep{1990ApJS...74..551A} and 58.4~km/s \citep{1987BAAS...19..709H}. The other
solutions also gave rather low values of $K_1$, about 47.3~km/s, see \cite{Hill1931} and
\cite{Lucy1971}. The explanation of this discrepancy probably lies in the fact that the lines are
very broad and blended with those of the third (dominant) component, which remains at almost the
same position over the whole time interval. For this reason the lines are rather asymmetric, and
the previous authors probably measured the wide wings of the lines instead of the cores; measuring
the wings indeed yields a lower amplitude. However, thanks to the high-dispersion FEROS
spectra we were able to confidently identify both eclipsing components for the first time
(hence SB1 becomes SB2) and derive the amplitude $K_1$ more conclusively.
When we tried to obtain a combined solution of the visual orbit plus the period variation of the
eclipsing pair, we found that the analysis is still rather problematic. For a trustworthy fit
of the visual orbit only the data obtained since 1976 were taken into account. On the other hand,
the older times of minima were also used because these define the rapid period change near the year
1960 quite well. For the results of our fitting see Fig. \ref{FigQSAql_OC_orbit}. The parameters of
our solution are presented in Table \ref{Table_combined_QSAql}.
As a by-product of the fitting we also plotted the gamma velocity changes implied by the third-body
orbit, together with the individual gamma velocities resulting from the individual studies
published during the last century. The result is shown in Fig. \ref{FigRVvar_QSAql}.
Unfortunately, some individual systemic velocities are affected by large uncertainties, and the
predicted RV variation is rather flat during most of the $p_3$ period.
\begin{table}
\caption{The parameters of our combined solution for QS Aql.}
\label{Table_combined_QSAql}
\centering
\begin{tabular}{c c c}
\hline \hline
Parameter & Value \\
\hline
$p_3$ [yr] & 77.0 $\pm$ 4.3 \\
$T_0$ [yr] & 1962.3 $\pm$ 2.3 \\
$e$ & 0.947 $\pm$ 0.038 \\
$a$ [arcsec] & 0.111 $\pm$ 0.045 \\
$i$ [deg] & 61.2 $\pm$ 3.6 \\
$\Omega$ [deg] & 144.5 $\pm$ 5.1 \\
$\omega$ [deg] & 336.8 $\pm$ 4.7 \\ \hline
$M_3$ [M$_\odot$]& 4.04 $\pm$ 0.86 \\
$\pi$ [mas] & 2.89 $\pm$ 0.55 \\
\hline
\end{tabular}
\end{table}
However, a discussion of this combined solution is necessary. The presented final fits are still
rather preliminary, and the astrometry in particular suffers from many deviating points. This is
probably caused by the rather large eccentricity and the inclined orbit. Several more reliable
observations would be very useful in the upcoming years. In the paper by \cite{Heintze1989} the
authors discussed the spectroscopic orbit by \cite{1987BAAS...19..709H} and concluded that the
third light should be about 1.2 times larger than the combined light of the eclipsing pair.
Their conclusion would imply a tertiary mass of 4.3~M$_\odot$ and a spectral type of B5-6\,V.
However, this assumption is contradicted by the latest interferometric observations, which indicate
that both visual components are of similar brightness (i.e.
the combined light from the eclipsing pair is roughly the same as that of the third star), which is
also in agreement with our new LC+RV solution. From this information we can infer that the third
component is probably of about the same spectral type as the primary, i.e. B6V, with a mass of
about 4~M$_\odot$. Exactly the same result was obtained from our combined analysis of period
changes and the visual orbit, see Table~\ref{Table_combined_QSAql}.
Thanks to the relatively well-determined amplitudes of both phenomena (the semimajor axis of the
visual orbit as well as the semiamplitude of the period variations in the $O-C$ diagram) we also
tried to determine the distance to the system independently. Quite interesting is how the
parallax of QS~Aql has changed from the original Hipparcos value of 1.98 $\pm$ 0.82~mas \citep{HIP}
to the recalculated value of 0.49 $\pm$ 0.62~mas \citep{2007A&A...474..653V}. On the other
hand, \cite{Docobo2006} presented two different possible values of the parallax, based on two
different methods and the Hipparcos parallax, namely 1.8~mas and 3.1~mas. No other parallax
estimate has been found in the literature. However, our solution implies that the parallax has to
be larger than the Hipparcos values, owing to the amplitudes in both methods, namely about 2.89~mas.
Future space missions like Gaia \citep{2001A&A...369..339P} could solve this problem; however,
the star is too bright, close to the bright limit of the satellite.
\section{BR Ind}
The last system in our analysis is rather neglected and only seldom-investigated BR~Ind (=
HIP~104604, HD 201427, $V_{max}$ = 7.07~mag). Its photometric variability was discovered on the
basis of the Hipparcos data \citep{HIP}, giving the orbital period of 0.89277~days (a short note
about its possible double value was also added there). Its spectral type F8V was published by
\cite{1978mcts.book.....H}, however it is not clear to which component this classification belongs.
The star is also known as a visual double, having the time span of the positional observations of
more than 100~yrs. The most recent orbital solution was published by \cite{Seymour2002}, who
derived the period of 167~yr, semimajor axis of 0.894$^{\prime\prime}$ and eccentricity of 0.521.
However, since than five new observations were obtained and the orbit should be recalculated.
We collected the available photometry of BR Ind, trying to find out which of the orbital periods is
the correct one (0.89 or 1.78 days). However, the photometry from surveys like ASAS
\citep{ASAS} or Pi of the sky \citep{2005NewA...10..409B} was not able to distinguish between
these two periods. Therefore, photometry of BR~Ind in the $BVRI$ filters was obtained in 2014
and 2015 using the FRAM telescope. Thanks to these data we finally confirmed that the correct
orbital period of the inner pair is indeed the double value, i.e. about 1.786~days.
We also obtained spectroscopy of BR Ind with the FEROS spectrograph at La Silla. However, after
four nights of observations (yielding 27 \'echelle spectra) we were not able to detect the 1.8-day
period in the most prominent lines. Instead, the lines follow some longer periodicity of several
days. Hence, we applied for more observing time through the Tycho Brahe proposals for the 2.2-meter
MPG telescope and the FEROS instrument again. During four consecutive seasons we obtained 14 more
spectra of BR~Ind, finally leading to the solution.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{BRInd_RV_fit.eps}
\caption{Radial velocity curve of the B pair of BR~Ind as obtained from the FEROS data. The black
dots stand for the data from fall 2013, blue ones are from fall 2014, green dots denote the data
from spring 2015, the cyan ones are the observations coming from fall 2015, while the magenta ones
are from spring 2016.}
\label{FigRV_BRIndB}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig_BRInd_LC_RV.eps}
\caption{Light and radial velocity curve fits of the inner pair of BR~Ind based on the {\sc Phoebe} fitting.}
\label{FigLCRV_BRInd}
\end{figure}
\begin{figure}
\hspace{8mm}
\includegraphics[width=0.28\textwidth]{BRInd_structure_new_.eps}
\caption{Structure of BR~Ind as resulted from our analysis.}
\label{Fig_BRInd_structure}
\end{figure}
The most prominent lines, which were also used to derive the radial velocities, were the $Fe$ and
$Ca$ lines. Their analysis led to the detection of a 6-day variation. Only much weaker lines were
identified as coming from the primary and secondary components of the eclipsing pair and following
the 1.786-day variability. Hence, for the subsequent analysis we denoted the 6-day orbit as the
``B'' component, while the eclipsing pair is always designated as ``A''. The results of our RV
fitting are plotted in Figs. \ref{FigRV_BRIndB} and \ref{FigLCRV_BRInd}. The resulting parameters
of pair B are given in Table \ref{BRIndLCRVparam}. As one can see, the orbit is only slightly
eccentric. The structure of BR~Ind is plotted in Fig. \ref{Fig_BRInd_structure}.
The light curve fitting was carried out together with the radial velocity analysis in {\sc Phoebe}.
The results are plotted in Fig. \ref{FigLCRV_BRInd}, while the parameters are given in Table
\ref{BRIndLCRVparam}. We can see that the two components are very similar to each other (hence the
ambiguity between the 0.89 and 1.78-day periods). The gamma velocities of the eclipsing pair as
well as of the B pair are similar to each other and close to zero. This would indicate that the
orbit is very close to a face-on orientation, which was confirmed via the fitting of the astrometry
(see below). We also collected the available photometry for deriving the times of the eclipses and
constructed the $O-C$ diagram plotted in Fig. \ref{FigBRInd_OC_orbit}. No visible variation can be
seen there during these approximately 25 years of observations. This would also indicate that the
period of the visual pair is rather long or that the orbit is nearly face-on.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig_BRInd_Orbit_OC.eps}
\caption{Orbit of BR Ind on the sky, and the $O-C$ diagram. See Fig. \ref{FigV773Cas_OC_orbit} for description.}
\label{FigBRInd_OC_orbit}
\end{figure}
\begin{table}
\caption{The parameters from the LC+RV fitting of BR Ind.}
\label{BRIndLCRVparam}
\centering
\scalebox{0.83}{
\begin{tabular}{c c c c}
\hline \hline
&\multicolumn{2}{c}{Pair A} & Pair B \\
Parameter & Primary & Secondary & \\
\hline
$HJD_0$ & \multicolumn{2}{c}{2448500.4755 $\pm$ 0.0002} & 2456563.186 $\pm$ 0.052 \\
$P$ [d] & \multicolumn{2}{c}{1.7855618 $\pm$ 0.0000015} & 6.0009949 $\pm$ 0.0000020 \\
$a$ [R$_\odot$] & \multicolumn{2}{c}{7.37 $\pm$ 0.04 } & 9.551 $\pm$ 0.026 \\
$v_\gamma$ [km/s]& \multicolumn{2}{c}{-2.66 $\pm$ 0.08} & -3.131 $\pm$ 0.089 \\
$e$ & \multicolumn{2}{c}{--} & 0.190 $\pm$ 0.003 \\
$\omega$ [deg] & \multicolumn{2}{c}{--} & 161.83 $\pm$ 0.94 \\
$q = M_2/M_1$ & \multicolumn{2}{c}{0.96 $\pm$ 0.02 } & -- \\
$i$ [deg] & \multicolumn{2}{c}{85.17 $\pm$ 1.8 } & -- \\
$K$ [km/s] & 101.9 $\pm$ 0.4 & 106.4 $\pm$ 0.4 & 40.40 $\pm$ 0.03 \\
$T$ [K] & 5170 (fixed) & 5203 $\pm$ 75 & -- \\
$M$ [M$_\odot$] & 0.86 $\pm$ 0.02 & 0.83 $\pm$ 0.02 & -- \\
$R$ [R$_\odot$] & 1.23 $\pm$ 0.03 & 0.95 $\pm$ 0.02 & -- \\
$M_{bol}$ [mag] & 4.78 $\pm$ 0.05 & 5.32 $\pm$ 0.06 & -- \\
$L_B [\%]$ & 16.5 $\pm$ 2.0 & 10.2 $\pm$ 2.0 & 73.3 $\pm$ 3.5 \\
$L_V [\%]$ & 24.3 $\pm$ 2.3 & 14.9 $\pm$ 1.7 & 60.8 $\pm$ 4.0 \\
$L_R [\%]$ & 26.3 $\pm$ 2.4 & 16.0 $\pm$ 1.6 & 57.7 $\pm$ 4.2 \\
\hline
\end{tabular}}
\end{table}
\begin{table}
\caption{The parameters of visual orbit of BR Ind.}
\label{Table_orbit_BRInd}
\centering
\begin{tabular}{c c c c}
\hline \hline
Parameter & Our solution & \cite{Seymour2002} \\
\hline
$p_3$ [yr] & 147.9 $\pm$ 2.5 & 167.0 \\
$T_0$ [yr] & 2050.3 $\pm$ 1.9 & 2056.0 \\
$e$ & 0.711 $\pm$ 0.021& 0.521 \\
$a$ [arcsec] & 0.864 $\pm$ 0.045& 0.894 \\
$i$ [deg] & 154.6 $\pm$ 8.2 & 141.9 \\
$\Omega$ [deg] & 220.8 $\pm$ 11.8 & 142.8 \\
$\omega$ [deg] & 80.1 $\pm$ 9.9 & 178.4 \\
\hline
\end{tabular}
\end{table}
The available positional measurements were analysed, leading to a slight improvement of the fit
published by \cite{Seymour2002}. The parameters are given in Table \ref{Table_orbit_BRInd} and the
plot of the orbit in Fig. \ref{FigBRInd_OC_orbit}. As one can see, the period is a bit shorter and
the eccentricity higher. The orientation of the orbit is really close to face-on ($i=155^\circ$),
which is in agreement with the discussion in the previous paragraph.
From the fit of the visual orbit we can also derive the total mass of the whole system, which
resulted in about 3.34~M$_\odot$. This mass can be used to derive the individual masses of
pair B in the system. If we accept the orbital solution derived from our LC+RV analysis, the
mass of pair B has to be about 1.65~M$_\odot$. Because the B subsystem is only an SB1-type binary,
we can only estimate its individual masses. For an SB1-type binary we can
calculate the so-called mass function \citep{2009ebs..book.....K}: $$f(M)_B = \frac{1}{2 \pi G} K^3
P_B \, (1-e_B^2)^{3/2} = \frac{(M_{Bb} \cdot \sin i_B)^3}{(M_{Ba}+M_{Bb})^2},$$ and from the
knowledge of the total mass of pair B we can derive the product $M_{Bb} \cdot \sin i_B =
0.47$~M$_\odot$. Obviously, we do not know the inclination of the pair, but at least some
estimate can be made with these values. From the LC solution the third light of pair B
is higher than the combined light coming from the two eclipsing components of pair A. Hence, this
finding is in excellent agreement with the masses derived from the SB1 binary of pair B and
the total mass -- but only under the assumption that the inclination is close to 90$^\circ$. The
two components of the B pair should therefore be of about F8V+M2V. With such a configuration the
individual luminosity levels and their ratios, as well as the non-detection of the Bb component in
the spectra, are explained. The spectral classification is now also clarified, as belonging to the
Ba component rather than to the eclipsing pair. Finally, this solution indicates that the two
visual components have individual magnitudes differing by about 1.1~mag, which is in agreement
with the magnitude differences published in the WDS catalogue \citep{WDS}. However, the whole
discussion is based solely on the assumption that the Hipparcos parallax
\citep{2007A&A...474..653V} of 20.65~mas is correct.
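The mass-function arithmetic above is easy to reproduce; the following sketch is purely illustrative (ours, with rounded physical constants), using $K$, $P_B$, and $e_B$ from the LC+RV solution and the estimated total mass of pair B:

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
DAY = 86400.0        # seconds per day

# Orbital elements of pair B from the LC+RV solution
K = 40.40e3          # RV semi-amplitude of the Ba component [m/s]
P = 6.0009949 * DAY  # orbital period [s]
e = 0.190            # eccentricity

# Mass function: f(M)_B = P K^3 (1 - e^2)^(3/2) / (2 pi G), in solar masses
f_M = P * K**3 * (1.0 - e**2) ** 1.5 / (2.0 * math.pi * G) / M_SUN
print(f"f(M)_B = {f_M:.4f} M_sun")

# With the total mass of pair B estimated at ~1.65 M_sun,
# f(M) = (M_Bb sin i)^3 / (M_Ba + M_Bb)^2 gives the quoted product:
M_total = 1.65
M_Bb_sini = (f_M * M_total**2) ** (1.0 / 3.0)
print(f"M_Bb sin i_B = {M_Bb_sini:.2f} M_sun")   # ~0.47 M_sun
```

The result, $M_{Bb}\sin i_B \approx 0.47$~M$_\odot$, matches the value quoted above.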
\section{Discussion and Conclusions}
Regardless of the fact that a lot of work has been done on the theoretical modelling as well as on
the observations during the last decades, the formation of systems of higher multiplicity still
remains an open question. Multiplicity itself is the most promising mechanism to produce close
binaries with short orbital periods below 1 day. A third component may cause Kozai cycles and then
the tidal friction between the binary components causes an orbital shrinkage and its
circularization, see e.g. \cite{2001ApJ...562.1012E} and \cite{2016MNRAS.455.4136B}. Nevertheless,
this is not the only possible scenario for the origin of such systems, and several other competing
theories are still being discussed. Real systems were probably produced by a combination
of several different mechanisms, see e.g. \cite{2008MNRAS.389..925T}. Numerical simulations
which include only particular mechanisms are able to explain only some statistical properties of
multiple systems, but fail to explain others. This has been a matter of intensive investigation
in recent years, and each newly discovered multiple system with known orbital and physical
parameters should help us to improve the statistical properties of the sample and provide new
observational constraints.
The study of three selected eclipsing multiple systems provides only a piece of the information
needed for constructing the theory of stellar formation and evolution. Despite this fact, it is
still a valuable contribution to the topic in several respects. First, the three systems
represent the most typical multiple systems found nowadays (containing a close inner pair with a
period of a few days and a distant component of much longer period). Second, their physical
and orbital properties are now well-determined and can be placed into the broader
context of theoretical modelling (e.g. the ratio of periods of the inner and outer orbits, the mass
ratios, or the eccentricity values), see e.g. \cite{2003A&A...397..159H} or \cite{2008MNRAS.389..925T}.
Third, each of the studied systems is interesting in its own right and deserves our attention.
V773~Cas was found to be much closer to the Sun than originally assumed on the basis of the
Hipparcos data. QS~Aql is rather massive, but moves around a common barycenter with the third
component on an orbit with a very high eccentricity of about 0.95. And BR~Ind was found to be a
rare quadruple system consisting of eclipsing and non-eclipsing pairs. This detection of
higher-order multiplicity among stars of such a late type is also rather remarkable, because the
multiplicity fraction of late-type stars is generally very low, see e.g. \cite{2013ARA&A..51..269D}.
A common characteristic of all these systems is that in each of them the most massive
and most luminous star is the distant third component (apart from BR~Ind, where it is a binary).
And as was already shown (e.g. by \citealt{2008MNRAS.389..925T} and \citealt{2016MNRAS.455.4136B}),
such systems are still rather rare.
As a by-product we also derived the possible mutual inclinations of the three systems and their
orbits. Because the longitude of the ascending node of the inner eclipsing pair is not
known, we can only estimate the ranges within which the mutual inclination should lie. This
resulted in 48--142~deg for V773~Cas, 22--145~deg for QS~Aql, and 69--120~deg for BR~Ind,
respectively. As one can see, the ranges are still rather wide, and the uncertainty would be lower
if the longitude of the ascending node were known. However, this can only be achieved by resolving
the inner eclipsing pair via interferometry, which is very problematic (due to the luminous third
star and the small angular separation of the eclipsing components). The most promising candidate in
this sense seems to be V773~Cas, for which the predicted angular distance was computed to be about
0.8~mas.
Finally, such a study is also viable from the observational point of view. We can see that
there still exist many systems never analysed before, whose parameters are not known, despite the
fact that the observations exist and are easily obtainable. At this point it is worth mentioning
that the photometric data for V773~Cas and QS~Aql were obtained using only 34-mm telescopes, while
BR~Ind was observed with a 107-mm one.
\begin{acknowledgements}
We would like to thank the Pierre Auger Collaboration for the use of its facilities. The operation
of the robotic telescope FRAM is supported by the EU grant GLORIA (No. 283783 in FP7-Capacities
program) and by the grant of the Ministry of Education of the Czech Republic (MSMT-CR LM2015038).
The data calibration and analysis related to FRAM telescope is supported by the Ministry of
Education of the Czech Republic (MSMT-CR LG15014) and by the Czech Science Foundation grant No.
14-17501S. The observations obtained with the MPG 2.2m telescope were supported by the Ministry of
Education, Youth and Sports project - LG14013 (Tycho Brahe: Supporting Ground-based Astronomical
Observations). We would like to thank the observers S.~Ehlerov\'a, A.~Kawka, P.~Kab\'ath, and
S.~Vennes for obtaining the data. R.~K\v{r}\'{\i}\v{c}ek is also acknowledged for obtaining some of
the spectroscopic data. This research has made use of the Washington Double Star Catalog maintained
at the U.S. Naval Observatory. This investigation was supported by the Czech Science Foundation
grants No. P209/10/0715 and GA15-02112S. We also thank the {\sc ASAS} and {\sc Pi of the sky}
teams for making all of the observations easily publicly available. This research has made use of the
SIMBAD and VIZIER databases, operated at CDS, Strasbourg, France and of NASA's Astrophysics Data
System Bibliographic Services.
\end{acknowledgements}
\bibliographystyle{apj}
\section{Introduction}
The classical game of \textsc{Fibonacci nim}, as studied by Whinihan in~\cite{Whinihan63}, is played as follows: There is one pile of stones, with $n$ stones in the pile initially, and there are two players who take turns making moves. A move consists of removing some of the stones in the pile, subject to the following constraints: the first player must remove at least 1 stone, but may not remove the entire pile. On subsequent turns, if the previous player removed $m$ stones, then the next player must remove at least one stone and at most $2m$ stones. The loser is the player who is unable to make a move (usually because there are no stones remaining, although there is also a special case in which the initial pile has one stone).
In his original paper on the game, Whinihan described the outcome of the game under optimal play:
\begin{thm}[Whinihan,~\cite{Whinihan63}] The first player has a winning strategy if and only if $n$ is not a Fibonacci number. \end{thm}
Furthermore, Whinihan gave a full winning strategy. This strategy relies on a celebrated theorem of Zeckendorf. However, it is also possible to give an alternative description of the winning strategy, in terms of partial sums of the so-called Fibonacci word. We introduce this word in~\S\ref{sec:Fibword} and deduce the winning strategy in terms of the Fibonacci word in~\S\ref{sec:Ppos}.
It is natural to consider the game of \textsc{Fibonacci nim} played with more than one pile. In this game, one may remove stones from only one pile on any given move. However, there are two natural possibilities for the bound on the number of stones that may be removed: \begin{itemize} \item \textit{Local move dynamic:} Each pile has a separate counter, so that if the last move (by either player) in a pile was to remove $m$ stones, then the next move \emph{in that pile} must be to remove at most $2m$ stones. \item \textit{Global move dynamic:} There is only one counter for the entire game, so that if the previous move was to remove $m$ stones in \emph{any} pile, then the next move must be to remove at most $2m$ stones in \emph{any} pile (either the same pile, or a different pile). \end{itemize} In either case, it is natural to remove the restriction that the first player may not remove an entire pile; this artificial rule is necessary to make the one-pile game nontrivial, but it serves no further purpose in either multi-pile game.
The local move dynamic game is more natural from the perspective of combinatorial game theory, as the game is the \emph{disjunctive sum} of the individual piles. As a result, the game can be studied by means of the Sprague-Grundy theory (see~\cite{Grundy39,Sprague35}); the authors have previously analyzed this version in~\cite{LRS14}.
The global move dynamic game is probably the more natural one from the perspective of game play, and it must be analyzed differently, as the powerful tools based on Grundy values and disjunctive sums are not applicable. In~\S\ref{sec:twopile} we give the outcome class for all two-pile positions, first in terms of Zeckendorf representation, and then in terms of a generalized version of the Fibonacci word. In~\S\ref{sec:multipile} we study some properties of positions with several piles. Finally, in~\S\ref{sec:pow2nim}, we describe a simpler variant of the global move dynamic game, in which we can describe the full winning strategy.
We use the notation $(n_1,\ldots,n_k;r)$ to denote the global \textsc{Fibonacci nim} position with piles of size $n_1,\ldots,n_k$, where the maximum number of stones that can be removed on the first turn is $r$. We write $(n_1,\ldots,n_k;\infty)$ for the global \textsc{Fibonacci nim} position with piles of size $n_1,\ldots,n_k$, where any number of stones can be removed on the first turn, provided that they are all from the same pile.
\section*{Acknowledgments}
Part of the work for this paper was completed at the Games at Dal workshop at Dalhousie University in Halifax, Nova Scotia, in August 2015.
\section{$\mathcal{N}$ and $\mathcal{P}$ positions} \label{sec:NandP}
\begin{defn} We say that a game $G$ is an $\mathcal{N}$ position (resp.\ $\mathcal{P}$ position) and write $G\in\mathcal{N}$ (resp.\ $G\in\mathcal{P}$) if the player to move (resp.\ player not to move) has a winning strategy under optimal play. \end{defn}
Given an \emph{impartial} game (i.e.\ one in which both players have the same moves available to them, as opposed to e.g.\ \textsc{chess}, where one player moves the white pieces and one player moves the black pieces), there is a simple recursive characterization of the $\mathcal{N}$ and $\mathcal{P}$ positions.
\begin{prop} \label{prop:partition} $G\in\mathcal{N}$ if and only if there exists a move to a game $G'$ such that $G'\in\mathcal{P}$. \end{prop}
See~\cite[Theorem 2.13]{ANW07}.
Consequently, $G\in\mathcal{P}$ if and only if, for every move to a game $G'$, we have $G'\in\mathcal{N}$.
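This recursive characterization translates directly into a memoized computation. The following sketch (illustrative only; the function names are ours) computes the outcome classes of one-pile \textsc{Fibonacci nim} and recovers Whinihan's theorem for small $n$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_N(n, r):
    """True iff the position (n stones, at most r removable) is an N position.
    A position is N iff some move leads to a P position; a player with no
    legal move loses, so a moveless position is P."""
    return any(not is_N(n - m, 2 * m) for m in range(1, min(r, n) + 1))

def first_player_wins(n):
    """One-pile Fibonacci nim: the first move removes 1..n-1 stones."""
    return any(not is_N(n - m, 2 * m) for m in range(1, n))

# Whinihan's theorem: the first player wins iff n is not a Fibonacci number.
fibonacci = {1, 2, 3, 5, 8, 13, 21, 34, 55}
for n in range(1, 56):
    assert first_player_wins(n) == (n not in fibonacci)
```

The final loop checks the theorem exhaustively for $n \le 55$; for $n = 1$, the player to move has no legal move and loses, matching the special case noted above.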
\section{Zeckendorf representation} \label{sec:zeckendorf}
A celebrated theorem of Lekkerkerker and Zeckendorf is the following:
\begin{thm}[Lekkerkerker~\cite{Lek52}, Zeckendorf~\cite{Zeckendorf72}] \label{thm:zeckendorf} Every positive integer $n$ can be expressed uniquely as a sum of pairwise nonconsecutive Fibonacci numbers with index at least 2. \end{thm}
\begin{defn} The Zeckendorf representation of $n$ is the unique sequence $z_1(n),z_2(n),\ldots,z_k(n)$ of Fibonacci numbers such that $z_1(n)+\cdots+z_k(n)=n$, and for all $1\le i<k$, $z_i(n)<z_{i+1}(n)$, and $z_i(n)$ and $z_{i+1}(n)$ are not consecutive elements of the Fibonacci sequence. We write $\mathcal{Z}(n)=\{z_1(n),\ldots,z_k(n)\}$. \end{defn}
For notational convenience, if $|\mathcal{Z}(n)|<k$, then we set $z_k(n)=\infty$, and we say that $z_k(n)>m$ for all integers $m$.
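The Zeckendorf representation is produced by the standard greedy algorithm: repeatedly subtract the largest Fibonacci number not exceeding what remains. A short sketch (ours):

```python
def zeckendorf(n):
    """Zeckendorf representation of a positive integer n, returned as the
    list z_1(n) < z_2(n) < ... < z_k(n).  Uses F_2 = 1, F_3 = 2, F_4 = 3, ...
    as in the theorem.  Greedy choice automatically yields pairwise
    non-consecutive parts: after taking F_k, the remainder is < F_{k-1}."""
    fibs = [1, 2]                      # F_2, F_3, ...
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts[::-1]

assert zeckendorf(100) == [3, 8, 89]   # 100 = 89 + 8 + 3
```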
\section{The Fibonacci word} \label{sec:Fibword}
The Fibonacci word $W^{x,y} = f_0f_1f_2\cdots$ is a string of digits from some two-letter alphabet $\{x,y\}$. It is an archetype of a so-called Sturmian word; see~\cite{Loth02} for much more on Sturmian words. There are many equivalent ways of generating it.
\begin{prop}\label{prop:fibword} The following constructions give rise to the same sequence $f_0f_1f_2\cdots$: \begin{enumerate} \item Let $S_0=x$ and $S_1=xy$. For $n\ge 2$, let $S_n=S_{n-1}S_{n-2}$ be the concatenation of strings. Then, for every $n$, $S_n$ is an initial string of $S_{n+1}$. The Fibonacci word $S_\infty$ is the limiting string of the sequence $\{S_0,S_1,S_2,\ldots\}$. \item The Fibonacci word is the string $f_0f_1f_2\cdots$, where $f_n=x$ if $1\not\in \mathcal{Z}(n)$ and $f_n=y$ if $1\in\mathcal{Z}(n)$. \item The Fibonacci word is the unique non-trivial word $u$ on the alphabet $(x,y)$ where the parallel update $x\rightarrow yz, y\rightarrow y$, for all letters in $u$, gives back the same word, but now on the alphabet $(y,z)$. \end{enumerate} \end{prop}
See~\cite[p.\ 20]{Berstel86} or~\cite{Knuth97} for more details on the Fibonacci word, including a proof of Proposition~\ref{prop:fibword}, as well as other descriptions and interesting properties. Item (3) is obvious, by interpreting $y$ as $x$ and $z$ as $y$, which is the standard ``Fibonacci morphism''. The beginning of the Fibonacci word is \[xyxxyxyxxyxxyxyxxyxyxxyxxyxyxxyxxy.\]
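The equivalence of the three constructions can be checked numerically. The sketch below (illustrative only; names are ours) builds the word by concatenation (construction (1)), by the Zeckendorf test $1\in\mathcal{Z}(n)$ (construction (2)), and verifies that the word is fixed by the Fibonacci morphism $x\rightarrow xy$, $y\rightarrow x$, which is construction (3) after the relabeling described above.

```python
def fib_word_concat(n_iters=8):
    """Construction (1): S_0 = x, S_1 = xy, S_n = S_{n-1} S_{n-2}."""
    s_prev, s = "x", "xy"
    for _ in range(n_iters):
        s_prev, s = s, s + s_prev
    return s

def z_smallest(n):
    """z_1(n): smallest Zeckendorf term of n >= 1 (greedy algorithm)."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    small = None
    for f in reversed(fibs):
        if f <= n:
            small, n = f, n - f
    return small

def fib_word_zeck(length):
    """Construction (2): f_n = y exactly when 1 is a Zeckendorf term of n."""
    return "".join("y" if n >= 1 and z_smallest(n) == 1 else "x"
                   for n in range(length))
```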
In the following, we make use of partial sums of the Fibonacci word, after substituting certain integers for $x$ and $y$. For instance, if we substitute $x=3$ and $y=7$, and among the first $m+1$ letters there are $i$ $x$'s and $j=m+1-i$ $y$'s, then the partial sum is $3i+7j$. We sometimes write $W_i^{3,7}$ when we refer to the $i^\text{th}$ value $f_i\in \{3,7\}$ of $W^{3,7}$.
In view of Proposition~\ref{prop:fibword}, item (3), a parallel update will often be interpreted as
\begin{enumerate}
\item[(T1)] $F_i\rightarrow F_{i}$;
\item[(T2)] $F_{i+1}\rightarrow F_{i}F_{i-1}$,
\end{enumerate}
for some $i>1$.
We use the following lemmas on the sets of partial sums of Fibonacci words, which are easy consequences of part (3) of Proposition~\ref{prop:fibword}. These sets can also be described naturally in terms of the Zeckendorf representation, which would provide alternative proofs of some of our theorems in the rest of the article.
\begin{lem} \label{lem:partialsums} Suppose that $b\le a$. Then the set of partial sums of $w_a := W^{F_{a+1},F_a}$
is a subset of the set of partial sums of $W^{F_{b+1}, F_b}$. That is, let \[PS(w_a)=\left\{k:k=\sum_{i=0}^m W_i^{F_{a+1},F_a} \text{ for some } m\right\}.\]
Then $PS(w_a)\subseteq PS(w_b)$.
\end{lem}
\begin{proof} It suffices to check this when $b=a-1$. In this case, any instance of $F_{a+1}$ in the Fibonacci word turns into $F_a,F_{a-1}$ after applying the morphism of part (3) of Proposition~\ref{prop:fibword} and making the substitutions. Any instance of $F_a$ remains as $F_a$. The result follows since $F_{a+1}=F_a+F_{a-1}$. \end{proof}
\begin{lem} \label{lem:lastPS} Suppose that $n\in PS(w_a)\setminus PS(w_{a+1})$. Then $n-F_{a+1}\in PS(w_{a+2})$. \end{lem}
\begin{proof} Suppose $n=\sum_{i=0}^m W_i^{F_{a+1},F_a}$, and we use the alphabet $(y,z)=(F_{a+1},F_a)$. We note that $n\in PS(w_a)\setminus PS(w_{a+1})$ if and only if $f_m=y$ and $f_{m+1}=z$, which follows immediately from part (3) of Proposition~\ref{prop:fibword}. Each instance of $yz$ is replaced by $x$, when using the reverse direction. This implies that $n-F_{a+1}\in PS(w_{a+1})$. Since $x$ represents $F_{a+2}$, the largest Fibonacci term in the word $w_{a+1}$, in going to $w_{a+2}$, the partial sum of all letters to the left of $x$ will remain a partial sum; this follows by applying the reverse of part (3) of Proposition~\ref{prop:fibword}.
\end{proof}
Observe that $PS(w_1)=\mathbb{N}$ (since $F_1=F_2=1$), so that every nonnegative integer is in some $PS(w_a)$. Furthermore, $\bigcap_{a\ge 1} PS(w_a)=\{0\}$.
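Lemmas~\ref{lem:partialsums} and~\ref{lem:lastPS}, and the observations just made, can be verified numerically for small values. The following sketch is illustrative (not from the paper); it includes the empty sum $0$ in each $PS(w_a)$, as the text does.

```python
def F(i):
    """F_1 = F_2 = 1, F_3 = 2, and so on."""
    a, b = 1, 1
    for _ in range(i - 1):
        a, b = b, a + b
    return a

def fib_word_letters(x, y, length):
    """First `length` letters of W^{x,y}, via S_n = S_{n-1} S_{n-2}."""
    s_prev, s = [x], [x, y]
    while len(s) < length:
        s_prev, s = s, s + s_prev
    return s[:length]

def PS(a, limit):
    """Partial sums of w_a = W^{F_{a+1}, F_a} up to `limit`, with 0 included."""
    letters = fib_word_letters(F(a + 1), F(a), limit + 1)
    out, total = {0}, 0
    for v in letters:
        total += v
        if total > limit:
            break
        out.add(total)
    return out
```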
\section{$\mathcal{P}$ positions in one-pile \textsc{Fibonacci nim}} \label{sec:Ppos}
The winning strategy for one-pile \textsc{Fibonacci nim} was described by Whinihan. Consider the position $(n;r)$. If $z_1(n)\le r$, then $(n;r)$ is an $\mathcal{N}$ position, and removing $z_1(n)$ stones is a winning move. If $z_1(n)>r$, then $(n;r)$ is a $\mathcal{P}$ position, and there are no winning moves.
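Whinihan's strategy translates directly into code. The sketch below is illustrative (function names are ours): it returns the winning take-away size $z_1(n)$ when one exists, and `None` when $(n;r)$ is a $\mathcal{P}$ position.

```python
def z1(n):
    """Smallest term of the Zeckendorf representation of n >= 1."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    smallest = None
    for f in reversed(fibs):
        if f <= n:
            smallest, n = f, n - f
    return smallest

def one_pile_move(n, r):
    """Whinihan's strategy for (n; r): a winning take-away size,
    or None if (n; r) is a P position (no winning move exists)."""
    return z1(n) if n >= 1 and z1(n) <= r else None
```

For example, from $(3;1)$ there is no winning move ($z_1(3)=3>1$), while from $(2;2)$ taking both stones wins.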
It is also possible to characterize the $\mathcal{N}$ and $\mathcal{P}$ positions in terms of the Fibonacci word. This approach will be useful for our analysis of the multi-pile game.
\begin{thm}\label{thm:alg} Fix a take-away size $r$. There is a unique Fibonacci number $F_t$ so that $F_t\le r<F_{t+1}$. The position $(n;r)\in\mathcal{P}$ if and only if $n$ is a partial sum of $W^{F_{t+1},F_{t}}$, i.e.\ if and only if there is some $m$ so that \begin{equation}\label{eq:fibnim}
n = \sum_{i=0}^m W^{F_{t+1},F_t}_i
\end{equation}
\end{thm}
\begin{proof} Suppose first that $(n;r)$ is of the given form. We must demonstrate that there is no move to a position of the same form. Suppose that the new position is $(n-s;2s)$, with $s\le r < F_{t+1}$. Let $b$ be such that $F_b\le 2s < F_{b+1}$. Then $F_b\le 2s < 2F_{t+1}<F_{t+3}$, which gives $b\le t+2$. We must prove that there is no $m$ such that
\[n-s = \sum_{i=0}^m W^{F_{b+1},F_b}_i.\]
Since $s<F_b$ and $\min\{F_{b+1},F_b\} = F_{b}$, if $b\le t$, then (\ref{eq:fibnim}) together with Lemma~\ref{lem:partialsums} gives the claim. Suppose therefore that $b\in \{t+1, t+2\}$. If it were possible to play into $PS(w_{t+1})$ or $PS(w_{t+2})$, then, by the definition of $t$, by (\ref{eq:fibnim}), and by the reverse of the (T2) decomposition (applied once or twice), the Fibonacci number $F_t$ would have to be subtracted from $n$. Then, again by (T2), this gives a contradiction.
Suppose next that $(n;r)$ is not of the form in the statement of the theorem. Then there is an $m$ such that
\begin{equation}\label{eq:fibnim2}
\sum_{i=0}^m W^{F_{t+1},F_t}_i < n < \sum_{i=0}^{m+1} W^{F_{t+1},F_t}_i.
\end{equation}
There is a unique positive integer $b$ so that $n\in PS(w_b)\setminus PS(w_{b+1})$. By Lemma~\ref{lem:lastPS}, $n-F_{b+1}\in PS(w_{b+2})$. Since $F_{b+2}\le 2F_{b+1}<F_{b+3}$, $(n-F_{b+1};2F_{b+1})$ is a $\mathcal{P}$-position.
\end{proof}
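Theorem~\ref{thm:alg} can be cross-checked by brute force. In the sketch below (illustrative; names are ours), `is_P` solves the game directly from the move rules, while `word_P_set` lists the candidate $\mathcal{P}$ positions from the word characterization, with $0$ included for the empty pile.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_P(n, r):
    """Brute force: (n; r) is P iff every legal move leads to an N position."""
    return all(not is_P(n - s, 2 * s) for s in range(1, min(n, r) + 1))

def F(i):
    a, b = 1, 1  # F_1 = F_2 = 1
    for _ in range(i - 1):
        a, b = b, a + b
    return a

def word_P_set(r, limit):
    """Candidate P positions up to `limit`: 0 together with the partial
    sums of W^{F_{t+1}, F_t}, where F_t <= r < F_{t+1}."""
    t = 2
    while F(t + 1) <= r:
        t += 1
    x, y = F(t + 1), F(t)
    s_prev, s = [x], [x, y]
    while sum(s) <= limit:
        s_prev, s = s, s + s_prev
    out, tot = {0}, 0
    for v in s:
        tot += v
        if tot > limit:
            break
        out.add(tot)
    return out
```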
\section{Two-pile \textsc{Fibonacci nim}} \label{sec:twopile}
\subsection{The Zeckendorf approach}
The $\mathcal{P}$ positions of the two-pile \textsc{Fibonacci nim} game $(m,m+k;r)$ can be classified in terms of the Zeckendorf representation of the pile difference $k$, as a direct generalization of the one-pile game. Fix one pile size $m$ and the initial take-away amount $r$.
\begin{thm} \label{thm:twopiles} Let $t$ be such that $F_t\le r<F_{t+1}$. Then the following is a complete classification of the outcomes of the position $(m,m+k;r)$: \begin{enumerate} \item If $z_1(k)\le F_t$, then $(m,m+k;r)\in\mathcal{N}$. \item If $z_1(k)\ge F_{t+2}$, then $(m,m+k;r)\in\mathcal{P}$. \item If $z_1(k)=F_{t+1}$ and $m<F_t$, then $(m,m+k;r)\in\mathcal{P}$. \item If $m\ge F_t$ and $z_1(k)=F_{t+1}$, and either $z_2(k)=\infty$ or $z_2(k)=F_{t+d}$ where $m<F_t+F_{t+1}+\cdots+F_{t+d-3}$, then let $s$ be the unique integer so that $F_t+F_{t+1}+\cdots+F_{t+s-1}\le m<F_t+F_{t+1}+\cdots+F_{t+s}$. Then \begin{enumerate} \item If $s$ is odd, then $(m,m+k;r)\in\mathcal{N}$, \item If $s$ is even, then $(m,m+k;r)\in\mathcal{P}$. \end{enumerate} \item If $z_1(k)=F_{t+1}$, and $z_2(k)=F_{t+d}$, and $m\ge F_t+F_{t+1}+\cdots+F_{t+d-3}$, then \begin{enumerate} \item If $d$ is odd, then $(m,m+k;r)\in\mathcal{N}$, \item If $d$ is even, then $(m,m+k;r)\in\mathcal{P}$. \end{enumerate} \end{enumerate} \end{thm}
\begin{rem} For $s\ge 1$, the partial sums $F_t+F_{t+1}+\cdots+F_{t+s-1}$ can be written more concisely: \[ F_t+F_{t+1}+\cdots+F_{t+s-1} = (F_{t+2}-F_{t+1}) + (F_{t+3}-F_{t+2}) + \cdots + (F_{t+s+1}-F_{t+s}) = F_{t+s+1}-F_{t+1}.\] However, in this context it is more natural to leave the series unsummed, as it serves as a reminder of how the game might be played. \end{rem}
Since Theorem~\ref{thm:twopiles} is a bit complicated, let us say something about how it should be interpreted from a player's point of view. First, if it is possible to remove $z_1(k)=:F_e$ stones from the $m+k$ pile, then either this move or removing $F_{e-1}$ stones from the $m$ pile is winning. (See the proof for a more complete description of when to play each of these moves.) If it is not possible to remove $z_1(k)$ stones from the $m+k$ pile, then every move \emph{in that pile} is losing. However, there may still be winning moves in the $m$ pile, and indeed the only move that \emph{might} win is to remove $F_t$ stones from the $m$ pile. If $z_1(k)\ge F_{t+2}$, then this move loses. If $z_1(k)=F_{t+1}$, then the situation is rather complicated, leading to cases (3)--(5) in the theorem. However, when actually playing the game, the structure of the theorem is not terribly important: if all moves except for one are clearly losing and the remaining one leads to complications, by all means play the complicated one!
We now turn to the proof. One key input is the following Lemma, which we used in our earlier work on the local move dynamic game:
\begin{lem}[\cite{LRS14}, Lemma 4.3] \label{lem:smallfibs} Suppose $n>1$ and $1\le k<z_1(n)$. Then $z_1(n-k)\le 2k$. \end{lem}
\begin{proof}[Proof of Theorem~\ref{thm:twopiles}] We work one case at a time. For the claimed $\mathcal{N}$ positions, we show that there is a move to a position that we claim to be in $\mathcal{P}$, and for the claimed $\mathcal{P}$ positions, we show that every move is to a claimed $\mathcal{N}$ position. By Proposition~\ref{prop:partition}, the claimed $\mathcal{N}$ and $\mathcal{P}$ positions are, in fact, the $\mathcal{N}$ and $\mathcal{P}$ positions. Observe that every position of two-pile \textsc{Fibonacci nim} is of type (1), (2), (3), (4a), (4b), (5a), or (5b).
We begin with positions of type (1), and write $z_1(k)=:F_e$, so that $F_e\le F_t$. Suppose first that $z_2(k)\ge F_{e+3}$. Then we can remove $z_1(k)$ stones from the $m+k$ pile to get to $(m,m+k-z_1(k);2z_1(k))$, which is of type (2), since $z_1(k-z_1(k))=z_2(k)\ge F_{e+3}$, which is at least as large as the second Fibonacci number after $2z_1(k)<F_{e+2}$. If $z_2(k)=F_{e+2}$ and $m<F_{e+1}$, then $(m,m+k-z_1(k);2z_1(k))$ is of type (3).
However, if $z_2(k)=F_{e+2}$ and $m\ge F_{e+1}$, then removing $z_1(k)$ stones yields a position of type (4) or (5). Instead, there is a winning move in the $m$ pile, to $(m-F_{e-1},(m-F_{e-1})+(F_{e-1}+k);2F_{e-1})$; since $z_1(F_{e-1}+k)\ge F_{e+3}$, this position is of type (2).
Suppose we are in a position of type (2). Then we may move in the $m+k$ pile, to $(m,m+k-a;2a)$ for $1\le a\le r$, and since $r<z_1(k)$ and hence $a<z_1(k)$, Lemma~\ref{lem:smallfibs} ensures that $z_1(k-a)\le 2a$, so $(m,m+k-a;2a)$ is of type (1). We may also move in the $m$ pile to $(m-a,(m-a)+(a+k);2a)$ for $1\le a\le\min(r,m)$, which is of type (1) since $z_1(a+k)=z_1(a)\le a\le 2a$.
Now, suppose we are in a position of type (3). Then the same arguments as for type (2) positions again show that all moves from type (3) positions are to type (1) positions.
Now, suppose we are in a position of type (4) or (5). (We will distinguish the types more finely later.) We may move in the $m+k$ pile to $(m,m+k-a;2a)$ for $1\le a\le r$, which is of type (1). We may also move in the $m$ pile to $(m-a,(m-a)+(a+k);2a)$ for $1\le a\le\min(m,r)$. If $a\neq F_t$, then $z_1(a+k)=z_1(a)\le a\le 2a$, which is of type (1). The remaining move is to $(m-F_t,(m-F_t)+(F_t+k);2F_t)$, which is of type (4) or (5) if $m-F_t\ge F_{t+1}$ and $z_1(F_t+k)=F_{t+2}$, type (2) if $z_1(F_t+k)\ge F_{t+3}$, and type (3) if $m-F_t<F_{t+1}$ and $z_1(F_t+k) = F_{t+2}$. As a result, the only move from a position of type (4) or (5) that \emph{might} be to a $\mathcal{P}$ position is the move to $(m-F_t,(m-F_t)+(F_t+k);2F_t)$, and only if this position is of type (4) or (5). When it is to another position of type (4) or (5), then it decreases $s$ or $d$ by one, depending on whether it is a type (4) or (5) position, respectively. Hence, a position $(m,m+k;r)$ of type (4) or (5) is a $\mathcal{P}$ position if the (unique) maximal sequence of moves to positions of type (4) or (5) has even length, and is an $\mathcal{N}$ position if the (unique) maximal sequence of moves to positions of type (4) or (5) has odd length. From a position of type (4), removing consecutive Fibonacci numbers from the $m$ pile eventually results in a position of type (3), whereas from a position of type (5), removing consecutive Fibonacci numbers from the $m$ pile eventually results in a position of type (2). Either way, this distinguishes types (4a) and (4b), as well as (5a) and (5b). \end{proof}
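The five-case classification can be checked against a brute-force solver. In the sketch below (illustrative; names are ours), `predicted` follows the theorem literally, using the telescoped form of the sums from the remark, $F_t+\cdots+F_{t+s-1}=F_{t+s+1}-F_{t+1}$.

```python
from functools import lru_cache

def F(i):
    a, b = 1, 1  # F_1 = F_2 = 1
    for _ in range(i - 1):
        a, b = b, a + b
    return a

def zeck(n):
    """Zeckendorf terms of n >= 1, smallest first (greedy)."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    rep = []
    for f in reversed(fibs):
        if f <= n:
            rep.append(f)
            n -= f
    return rep[::-1]

def fib_index(v):
    """The index i >= 2 with F_i = v, for a Fibonacci number v >= 1."""
    i = 2
    while F(i) < v:
        i += 1
    return i

@lru_cache(maxsize=None)
def is_P(piles, r):
    """piles: sorted tuple; a position is P iff every move leads to N."""
    for i, p in enumerate(piles):
        for s in range(1, min(p, r) + 1):
            rest = tuple(sorted(piles[:i] + (p - s,) + piles[i + 1:]))
            if is_P(rest, 2 * s):
                return False
    return True

def predicted(m, k, r):
    """Outcome of (m, m+k; r), k >= 1, per the theorem (True = P)."""
    t = 2
    while F(t + 1) <= r:
        t += 1                              # F_t <= r < F_{t+1}
    z = zeck(k)
    if z[0] <= F(t):
        return False                        # case (1)
    if z[0] >= F(t + 2):
        return True                         # case (2)
    # from here on z_1(k) = F_{t+1}
    if m < F(t):
        return True                         # case (3)
    if len(z) >= 2:
        d = fib_index(z[1]) - t             # z_2(k) = F_{t+d}
        if m >= F(t + d - 1) - F(t + 1):    # F_t + ... + F_{t+d-3}, telescoped
            return d % 2 == 0               # case (5)
    s = 1                                   # case (4): F_t+...+F_{t+s-1} <= m
    while F(t + s + 2) - F(t + 1) <= m:     #           < F_t+...+F_{t+s}
        s += 1
    return s % 2 == 0
```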
\subsection{The word approach}
As in the case of one-pile \textsc{Fibonacci nim}, it is possible to express the $\mathcal{P}$-positions of two-pile global \textsc{Fibonacci nim} in terms of partial sums of a word.
Fix the smallest pile size $m\ge 0$ in a two pile game, and define $p=p(m)\ge 0$ as a function of $m$ such that $F_p\le m<F_{p+1}$. If $r < F_{p-1}$, then we say that the position $(m,m+k;r)$ is \emph{hybrid}, and otherwise it is \emph{Sturm} (in particular, the latter case applies if $p>0$ and $r\ge F_{p-1}$). We also define any one-pile game to be Sturm.
Let $\alpha =\alpha(r,p)$ be the function of $r$ and $p$, defined by $F_{p+\alpha-1}\le r < F_{p+\alpha}$ (so the parameter $t$ from Theorem~\ref{thm:alg} is $t=p+\alpha-1$). For $\alpha\ge 0$, this function will classify the Sturm games that are $\mathcal{P}$-positions (via the word $w_{p+\alpha}$ or $w_{p+\alpha-1}$, as defined in Lemma~\ref{lem:partialsums}).
For the hybrid games, we recursively build the relevant word for classifying the $\mathcal{P}$-positions, in the following way. There are three possible transformations of a letter in a given word. In each word, each letter is one of three consecutive Fibonacci numbers, generalizing the two letter Sturm case. The Fibonacci numbers (as letters) are given recursively by successively decreasing $\alpha$ via the move dynamic parameter $r$, starting with the Sturm case of $\alpha = 0$.
The possible transformations are
\begin{enumerate}
\item[(T1)] $F_i\rightarrow F_{i}$; ($y\rightarrow y$)
\item[(T2)] $F_i\rightarrow F_{i-1}F_{i-2}$; ($x\rightarrow yz$, or $y\rightarrow vz$)
\item[(T3)] $F_i\rightarrow F_{i-2}F_{i-3}F_{i-2}$; ($x\rightarrow vzv$)
\end{enumerate}
The only new transformation is (T3), and it applies only if $x$ is the largest Fibonacci number present in the word (that is, only if the alphabet for the current word is $\{F_i,F_{i-1},F_{i-2}\}$). (We also list an interpretation in symbolic dynamics, generalizing Proposition~\ref{prop:fibword} (3), for later reference. For example, starting with the Fibonacci word on the alphabet $\{x,y\}$, we produce a ``generalized Fibonacci word'' on the alphabet $\{y,z,v\}$, by applying (T3) and (T1): $vzvyvzvvzvyvzvyvzv\ldots$. For each transformation to follow, at most one of the two possible transformations in (T2) is used. The symbolic dynamics approach will give an abstract interpretation of the transformation $T$ below, and could be studied independently.\footnote{Note the similarity with the Tribonacci morphism $x\rightarrow xy,y\rightarrow xz,z\rightarrow x$, which satisfies $T_n=T_{n-1}T_{n-2}T_{n-3}$, with fixed point $xyxzxyxxyxzxyx\underline{y}xz\ldots$. We have indicated the first letter that differs from our type 1 word in Example~\ref{ex1}.}) Note that (T3) is the second iteration of the Fibonacci morphism, that is, the word $S_2$ in Proposition~\ref{prop:fibword}.
Before introducing the transformation $T$, let us give three generic examples of how (T1), (T2), (T3) apply.
\begin{ex}[Type 1] The word transformation \label{ex1} $$[34, 21, 34, 34, 21, 34, 21, 34, \ldots] \rightarrow [13, 8, 13, 21, 13, 8, 13, 13, 8, 13, 21, 13, 8, 13, 21, 13, 8, 13,\ldots ],$$ appears when $m=26$ and the move dynamic parameter changes from $r=13$ to $r=12$, i.e.\ when $\alpha$ changes from 0 to $-1$. This means that we start with the Fibonacci word on the alphabet $\{34,21\}$ and apply (T1) to 21 and (T3) to 34. Note that the decrease in $r$ gives a new $\alpha$, and hence a new word.
\end{ex}
The case (T2) occurs only in very particular cases, concerning the two largest letters (the smallest letter uses only (T1)).
\begin{ex}[Type 2] \label{ex2} The word transformation $$[13, 8, 13, 8, 5, 8, 13, 8, 13, 13, 8, \ldots]\rightarrow [8, 5, 8, 5, 3, 5, 8, 5, 8, 8, 5, 8, 5, 3, 5, 8, 5, 8, \ldots],$$ occurs
when $m=26$ and $r$ changes from 5 to 4, so that $\alpha$ changes from $-2$ to $-3$. In this example, ``13'' changes to ``8,5'' (T2) if the partial sum of all lower terms belongs to $PS(w_{9})$ (or equivalently $PS(w_8)$, since we are only concerned with the letter ``13,'' which is the word at level $\alpha =0$ in Example~\ref{ex1}), and otherwise ``13'' changes to ``5,3,5'' (T3). For use in the definition to come, here $x=34-26=8=F_6$, and $p=8$. \end{ex}
\begin{ex}[Type 3] \label{ex3} The word transformation $$[13, 8, 13, 21, 13, 8, 13, 13, 8, 13\ldots]\rightarrow [8, 5, 8, 13, 8, 5, 8, 8, 5, 8, 13, 8, 5, 8, 13\ldots],$$ occurs when $m=25$ and $r$ changes from 8 to 7, so that $\alpha$ changes from $-1$ to $-2$. In this example, ``13'' changes to ``8,5'' (T2) if the (partial) sum of all lower terms belongs to $PS(w_{9})$ (or $PS(w_{8})$),
and otherwise it does not decompose (T1). For use in the definition to come, here $x=34-25=9\le F_7$, and $p=8$. \end{ex}
Let us describe the words for consideration. For $\alpha \ge 0$, we let $T^{-\alpha}(w_p)$ denote the word where the reverse of (T1) or (T2) has been applied to each letter in the word $T^{1-\alpha}(w_p)$, as follows. Suppose first that $\alpha = 0$, in which case the word is $T^0(w_p) = w_p = W^{F_{p+1},F_p}$, which is defined as in the one pile case. The Fibonacci morphism applies (with only one important exception), that is, (T2) is applied to each instance of the larger letter and (T1) to each instance of a smaller letter (as in the one pile case). The exception is the case $\alpha = 1$, for which $T^{-1}(w_p) = T^{-2}(w_p)$, where only (the reverse of) (T1) is used. Thus, when $\alpha\ge 0$, each word $T^{-\alpha}(w_p)$ is the Fibonacci word, and only the alphabet differs.
In a transformation to the hybrid case, i.e.\ $\alpha \le 0$, except for two \emph{special} cases (to be described in the next paragraph), the transformation $T$ applies (T3) to the largest Fibonacci letter in the current word, and (T1) to the other letter(s). For the special cases, define $x$ by $m = F_{p+1} - x\ge F_p$.
A \emph{special transformation} will apply if and only if $F_{p+\alpha} < x \le F_{p+\alpha +1}$ (where $\alpha$ is defined by $F_{p+\alpha -1}\le r < F_{p+\alpha}$ as usual). There are two cases for this special transformation, depending on whether $p+\alpha$ is even or odd. Denote by $W(i)$ the partial sum of all terms of a word $W$ with index less than the $i^\text{th}$ letter.
The current word (before the transformation) is $T^{1-\alpha}(w_p)$.
\begin{itemize}
\item Suppose first that $p+\alpha$ is odd. Consider the $i^\text{th}$ letter whenever it is the largest letter. Then (generalizing Example~\ref{ex2}) (T2) applies if $T^{1-\alpha}(w_p)(i)\in PS(w_p)$, and otherwise (T3) applies. To the second largest and the smallest letter, (T1) applies.
\item Suppose next that $p+\alpha$ is even. Consider the $i^\text{th}$ letter whenever it is the second largest letter. Then (generalizing Example~\ref{ex3}) (T2) applies if $T^{1-\alpha}(w_p)(i)\in PS(w_p)$, and otherwise (T1); (T3) applies to the largest letter and (T1) to the smallest letter.
\end{itemize}
Note that Example~\ref{ex1} is $T(w_p)$ for $p=8$ (which is non-Sturmian although $w_p$ is Sturmian). This type 1 transformation also applies to each purely hybrid non-special transformation.
In the case of a Sturm position $(m,m+k;r)$ (with $F_p\le m<F_{p+1}$), let $\sigma_m(\alpha)=PS(w_{p+\alpha})$ if $0\le \alpha \le 1$ and $\sigma_m(\alpha) = PS(w_{p+\alpha-1})$ if $\alpha > 1$, and otherwise, in case of a hybrid position, that is, if $\alpha<0$, we let $\sigma_m(\alpha) = PS(T^{-\alpha}(w_p))$. By this maneuver, we combine the hybrid and Sturm notations, and each set of partial sums of a word will be some $\sigma_m(\alpha)$ for some integer $\alpha = \alpha(r,p(m))$. Thus, for a fixed $m$ and variable $r$, $\sigma_m(0)$ represents the first Sturm set $PS(w_p)$, $\sigma_m(1)=\sigma_m(2)$ the second, $\sigma_m(3)$ the third, and so forth. $\sigma_m(-1)$ is the first hybrid set (Example~\ref{ex1}), and the interesting special cases (Examples~\ref{ex2} and~\ref{ex3}) appear as $\sigma_m(-2)$, because $F_{p+\alpha-1} < F_{p+1}-m=8 \le F_{8-2} = F_{p+\alpha}$, and $\sigma_m(-1)$, because $F_{p+\alpha-1} < F_{p+1}-m=9 \le F_{8-1} = F_{p+\alpha}$, respectively. Note that, in the hybrid case, $\sigma_m(\alpha)$ depends on $x=F_{p+1}-m$, namely in case of $r<F_\xi$, where $\xi$ is defined by $F_\xi < x \le F_{\xi+1}$. Hence, in general we cannot rely only on the (more convenient) parameter $p$.
The smallest letter in the alphabet of a given word plays a similar role in the proof of Theorem~\ref{thm:alg2} as in that of Theorem~\ref{thm:alg}.
\begin{lem}
Consider the hybrid case with $\alpha > -p+2$. If $\alpha = -p +3<0$, then the smallest letter is $F_2=1$; that is, the smallest letter in $T^{p-3}(w_p)$, for $p>3$, is $F_2$. More generally, for $\alpha < 0$, the smallest letter in $T^{p-3-j}(w_p)$ is $F_{2+j}$; that is, for $\alpha <0$, the smallest letter in $T^{-\alpha}(w_p)$ is $F_{p + \alpha - 1}$. For the Sturm case, if $0\le \alpha\le 1$, then the smallest letter is $F_{p + \alpha }$, and otherwise it is $F_{p + \alpha -1}$. The largest letter is two indices larger than the smallest in the hybrid case, and one larger than the smallest in the Sturm case.
\end{lem}
\begin{proof}
This follows from the definition of the word $T^\alpha(w_p)$.
\end{proof}
That is, the smallest letter is $F_{p + \alpha - 1}$, except for the cases $0\le \alpha \le 1$, when it is $F_{p+\alpha}$ (independently of Sturm or hybrid).
\begin{thm}\label{thm:alg2}
The position $(m,m+k;r)\in\mathcal{P}$ if and only if $k\in \sigma_m(\alpha)$.
\end{thm}
\iffalse
Since there are many tedious details to check, we give only a sketch of the proof, explaining what must be checked.
\begin{proof}[Sketch of proof]
We explain how to check that there are no moves from candidate $\mathcal{P}$ positions to other candidate $\mathcal{P}$ positions; showing that there is a move from a candidate $\mathcal{N}$ position to a candidate $\mathcal{P}$ position involves a similar analysis. Consider the position $(m,m+k;r)$, and suppose that $k\in \sigma_p(\alpha)$.
We must prove that there is no move to a position of the same form. Recall that $p$ and $\alpha > -p+1$ are defined by $F_p\le m\le F_{p+1}$ and $F_{p+\alpha -1}\le r<F_{p+\alpha}$. There are three types of options, with $s\le r<F_{p+\alpha}$, depending on which pile we play on, and which becomes the smaller one:
\begin{itemize}
\item[(i)] $(m-s,m+k;2s)$, $q=p(m-s)$;
\item[(ii)] $(m,m+k-s;2s)$, $k-s\ge 0$, $q=p$;
\item[(iii)] $(m+k-s,m;2s)$, $k-s<0$, $q=p(m+k-s)$.
\end{itemize}
In either case, let $b=q+\beta-1$ be such that $F_{b}\le 2s< F_{b+1}$. For the respective option, we must prove that:
\begin{itemize}
\item[(i)] $k+s\not\in \sigma(\beta)$;
\item[(ii)] $k-s\not\in \sigma(\beta)$;
\item[(iii)] $s-k\not\in \sigma(\beta)$,
\end{itemize}
where $\beta =\alpha(2s, b)$. We list the possible cases
\begin{enumerate}
\item $(m,m+k;r)$ is Sturm
\begin{enumerate}
\item the option as in (i)-(iii) is Sturm
\item the option as in (i)-(iii) is hybrid
\end{enumerate}
\item $(m,m+k;r)$ is hybrid
\begin{enumerate}
\item the option as in (i)-(iii) is Sturm
\item the option as in (i)-(iii) is hybrid
\end{enumerate}
\end{enumerate}
\end{proof}
\fi
The proof is similar to that of Theorem~\ref{thm:alg}. There are more tedious details to check, but all the ideas and techniques are similar.
\section{Multi-pile \textsc{Fibonacci nim}} \label{sec:multipile}
The following theorem about (ordinary) \textsc{nim} is well-known:
\begin{thm}[Bouton] If $(n_1,\ldots,n_k)$ is a \textsc{nim} position, then there is a unique nonnegative integer $b$ for which $(n_1,\ldots,n_k,b)\in\mathcal{P}$. Furthermore, $b<2\max(n_1,\ldots,n_k)$ and $b\le n_1+\cdots+n_k$. \end{thm}
\begin{rem} This follows easily from Bouton's description of the winning strategy in \textsc{nim}, but it is also possible to prove it (say, by induction on the largest power of 2 occurring in any $n_i$) without using the winning strategy. \end{rem}
It would be desirable to have a similar statement for multi-pile \textsc{Fibonacci nim}. However, the best we can do is the following:
\begin{thm} \label{thm:comps} If $(n_1,\ldots,n_k;\infty)$ is a multi-pile \textsc{Fibonacci nim} position, then there is at most one nonnegative integer $b$ for which $(n_1,\ldots,n_k,b;\infty)\in\mathcal{P}$. When it exists, we call $b$ the \emph{complementary value} of $(n_1,\ldots,n_k)$. \end{thm}
\begin{proof} Suppose that $(n_1,\ldots,n_k,b;\infty)\in\mathcal{P}$, and suppose that $b'>b$. Then there is a move from $(n_1,\ldots,n_k,b';\infty)$ to $(n_1,\ldots,n_k,b;b'-b)$. But the latter is a $\mathcal{P}$ position, since its options are a subset of those of $(n_1,\ldots,n_k,b;\infty)$, which is itself a $\mathcal{P}$ position. Hence $(n_1,\ldots,n_k,b';\infty)\in\mathcal{N}$. \end{proof}
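Complementary values can be computed by a memoized search over positions; the sketch below is illustrative (names are ours), with `r = None` encoding the unrestricted first take-away and larger bounds capped at the largest pile, since no move can exceed a pile.

```python
from functools import lru_cache

def is_P(piles, r):
    """(piles; r), with r = None meaning an unrestricted take-away."""
    piles = tuple(sorted(piles))
    if r is not None:
        r = min(r, piles[-1])  # equivalent bound: a move cannot exceed a pile
    return _is_P(piles, r)

@lru_cache(maxsize=None)
def _is_P(piles, r):
    """A position is P iff every legal move leads to an N position."""
    for i, p in enumerate(piles):
        top = p if r is None else min(p, r)
        for s in range(1, top + 1):
            if is_P(piles[:i] + (p - s,) + piles[i + 1:], 2 * s):
                return False
    return True

def complementary(piles, search_limit=60):
    """Least b <= search_limit with (piles, b; infinity) in P, or None if no
    such b exists in range; by the theorem, a found b is unique."""
    for b in range(search_limit + 1):
        if is_P(piles + (b,), None):
            return b
    return None
```

For instance, `complementary((1, 2))` returns 4, matching the table below, while `complementary((3, 4))` finds nothing in range.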
\begin{rem} It remains an open question to determine a bound on the complementary value, when it exists. It appears that most of the time, the complementary value is ``not too much larger'' than the maximum of the $n_i$'s. However, there are some notable exceptions; for instance: \begin{itemize} \item $(1,47,72;\infty)\in\mathcal{P}$, \item $(2,41,139;\infty)\in\mathcal{P}$, \item $(2,93,345;\infty)\in\mathcal{P}$, \item $(8,9,53;\infty)\in\mathcal{P}$. \end{itemize} See Table~\ref{table:comps} for a table of values of complementary values. \end{rem}
\begin{table} \begin{tabular}{c|cccccccccccccccc} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline 0 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ 1 & 1 & 0 & 4 & 6 & 2 & 9 & 3 & 11 & 12 & 5 & 14 & 7 & 8 & 17 & 10 & 16 \\ 2 & 2 & 4 & 0 & 7 & 1 & 10 & 11 & 3 & 19 & 15 & 5 & 6 & 14 & 24 & 12 & 9 \\ 3 & 3 & 6 & 7 & 0 & \fbox{$\infty$} & 11 & 1 & 2 & 16 & 12 & 13 & 5 & 9 & 10 & 17 & 18 \\ 4 & 4 & 2 & 1 & \fbox{$\infty$} & 0 & 7 & 10 & 5 & 17 & 16 & 6 & 18 & 13 & 12 & 19 & 69 \\ 5 & 5 & 9 & 10 & 11 & 7 & 0 & 35 & 4 & 15 & 1 & 2 & 3 & 18 & 22 & 23 & 8 \\ 6 & 6 & 3 & 11 & 1 & 10 & 35 & 0 & 8 & 7 & 17 & 4 & 2 & 16 & 14 & 13 & 26 \\ 7 & 7 & 11 & 3 & 2 & 5 & 4 & 8 & 0 & 6 & 13 & 27 & 1 & 15 & 9 & 22 & 12 \\ 8 & 8 & 12 & 19 & 16 & 17 & 15 & 7 & 6 & 0 & 53 & 11 & 10 & 1 & 57 & 35 & 5 \\ 9 & 9 & 5 & 15 & 12 & 16 & 1 & 17 & 13 & 53 & 0 & 21 & 27 & 3 & 7 & 76 & 2 \\ 10 & 10 & 14 & 5 & 13 & 6 & 2 & 4 & 27 & 11 & 21 & 0 & 8 & 26 & 3 & 1 & 24 \\ 11 & 11 & 7 & 6 & 5 & 18 & 3 & 2 & 1 & 10 & 27 & 8 & 0 & 22 & 21 & 64 & 88 \\ 12 & 12 & 8 & 14 & 9 & 13 & 18 & 16 & 15 & 1 & 3 & 26 & 22 & 0 & 4 & 2 & 7 \\ 13 & 13 & 17 & 24 & 10 & 12 & 22 & 14 & 9 & 57 & 7 & 3 & 21 & 4 & 0 & 6 & 20 \\ 14 & 14 & 10 & 12 & 17 & 19 & 23 & 13 & 22 & 35 & 76 & 1 & 64 & 2 & 6 & 0 & 21 \\ 15 & 15 & 16 & 9 & 18 & 69 & 8 & 26 & 12 & 5 & 2 & 24 & 88 & 7 & 20 & 21 & 0 \end{tabular} \caption{Complementary values of \textsc{Fibonacci nim}. The boxed $\infty$'s mean that there is no complementary value for these positions.} \label{table:comps} \end{table}
A curious aspect of Theorem~\ref{thm:comps} is the possibility that there may not be a complementary value for a \textsc{Fibonacci nim} position. It turns out that \textsc{Fibonacci nim} positions with no complementary values \emph{do} exist.
\begin{thm} For any nonnegative integer $n$, $(3,4,n;\infty)\in\mathcal{N}$. \end{thm}
It turns out that many small facts have to be verified in this proof. Since it is tedious to check the details, we only explain the general ideas.
\begin{proof} The following four classes partition the nonnegative integers.
\begin{enumerate}
\item $B-2=PS(w_3)=\{n:z_1(n)\ge 3\}=\{0,3,5,8,11,13,16,18,21,\ldots\}$,
\item $AB-2=1+PS(w_4)=\{n:z_1(n-1)\ge 5\}=\{1,6,9,14,19,22,27,\ldots\}$,
\item $AB-1=2+PS(w_4)=\{n:z_1(n-2)\ge 5\}=\{2,7,10,15,20,23,28,\ldots\}$,
\item $BB-1=4+PS(w_5)=\{n:z_1(n-4)\ge 8\}=\{4,12,17,25,33,38,\ldots\}$.
\end{enumerate}
(Recall that $F_3=2$, so that $PS(w_3)$ consists of partial sums with letters 3 and 2, and so forth.)
\begin{rem} The names for these sets come from the theory of complementary equations. We let $a(n)=\lfloor\phi n\rfloor$ and $b(n)=\lfloor \phi^2 n\rfloor$, where $\phi=\frac{1+\sqrt{5}}{2}$. Then $A$ consists of all numbers of the form $a(n)$ for some $n\ge 1$, $B$ consists of all numbers of the form $b(n)$ for some $n$, $AB$ consists of all numbers of the form $a(b(n))$ for some $n$, and so forth. See~\cite{Kim08} for more details. \end{rem}
Adding one to each set gives the sets $B-1 = AA$, $AB-1 = BA$, $AB$, and $BB$. The sets $AA$ and $AB$ partition $A$, and the sets $BA$ and $BB$ partition $B$. $A$ and $B$ partition the positive integers.
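The four-class partition, and its agreement with the Zeckendorf descriptions above, can be checked numerically with the Beatty sequences $a(n)=\lfloor\phi n\rfloor$ and $b(n)=\lfloor\phi^2 n\rfloor$. The sketch is illustrative (names are ours); floating-point floors are exact in the tested range.

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def a(n):
    return math.floor(PHI * n)

def b(n):
    return math.floor(PHI * PHI * n)

def z1(n):
    """Smallest Zeckendorf term of n; infinity for n = 0 (empty sum)."""
    if n == 0:
        return math.inf
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    small = None
    for f in reversed(fibs):
        if f <= n:
            small, n = f, n - f
    return small
```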
We claim that the following moves are to $\mathcal{P}$ positions:
\begin{enumerate}
\item If $n\in B-2$, then $(3,3,n;2)\in\mathcal{P}$.
\item If $n\in AB-2$, then $(2,4,n;2)\in\mathcal{P}$.
\item If $n\in AB-1$, then $(1,4,n;4)\in\mathcal{P}$.
\item If $n\in BB-1$, then $(0,4,n;6)\in\mathcal{P}$.
\end{enumerate}
\iffalse
By the partitioning result, this claim implies that there is a $\mathcal{P}$-position available from $(3,4,n;\infty )$ for any $n$. To prove the claim, we use induction via the results from 2 piles, together with the following list of $\mathcal{P}$-sequences. These are routine, but somewhat tedious, to verify, so we omit the checks.
\begin{itemize}
\item If $r\ge 2$, then $(0,1,n;r)\in\mathcal{P}$ if and only if $z_1(n-1)>r$.
\item If $r\ge 3$, then $(0,2,n;r)\in\mathcal{P}$ if and only if $z_1(n-2)>r$. \\ $(0,2,n;2)\in\mathcal{P}$ if and only if $z_1(n-2)\ge 5$.
\item If $r\ge 3$, then $(0,3,n;r)\in\mathcal{P}$ if and only if $z_1(n-3)>r$. \\ $(0,3,n;2)\in\mathcal{P}$ if and only if $z_1(n-3)\ge 5$.
\item If $r\ge 5$, then $(0,4,n;r)\in\mathcal{P}$ if and only if $z_1(n-4)>r$. \\
$(0,4,n;2)\in\mathcal{P}$ if and only if $z_1(n-4)\ge 5$. \\
For $r=3,4$, $(0,4,n;r)\in\mathcal{P}$ if and only if $z_1(n-4)\ge 8$.
\item If $r\ge 1$, then $(1,1,n;r)\in\mathcal{P}$ if and only if $z_1(n)>r$.
\item If $r\ge 5$, then $(1,2,n;r)\in\mathcal{P}$ if and only if $z_1(n-4)>r$. \\ For $2\le r\le 4$, $(1,2,n;4)\in\mathcal{P}$ if and only if $z_1(n-4)\ge 8$.
\item If $r\ge 8$, then $(1,3,n;r)\in\mathcal{P}$ if and only if $z_1(n-6)>r$. \\ $(1,3,n;2)\in\mathcal{P}$ if and only if $z_1(n-1)\ge 5$. \\ For $3\le r\le 7$, $(1,3,n;r)\in\mathcal{P}$ if and only if $z_1(n-6)\ge 13$.
\item If $r\ge 2$, then $(1,4,n;r)\in\mathcal{P}$ if and only if $z_1(n-2)>r$.
\item If $r\ge 1$, then $(2,2,n;r)\in\mathcal{P}$ if and only if $z_1(n)>r$.
\item If $r\ge 8$, then $(2,3,n;r)\in\mathcal{P}$ if and only if $z_1(n-7)>r$. \\ $(2,3,n;2)\in\mathcal{P}$ if and only if $z_1(n-2)\ge 5$. \\ For $3\le r\le 7$, $(2,3,n;r)\in\mathcal{P}$ if and only if $z_1(n-7)\ge 13$.
\item If $r\ge 3$, then $(2,4,n;r)\in\mathcal{P}$ if and only if $z_1(n-1)>r$. \\ $(2,4,n;2)\in\mathcal{P}$ if and only if $z_1(n-1)\ge 5$.
\item If $r\ge 2$, then $(3,3,n;r)\in\mathcal{P}$ if and only if $z_1(n)>r$.
\end{itemize}
\fi
We give a proof in the spirit of the proofs of Theorems~\ref{thm:alg} and~\ref{thm:alg2}, although it is also possible to give a proof in terms of the Zeckendorf representation.
Part (4) of the claim follows from Theorem~\ref{thm:alg2}. Let us list the moves for the first three types (enumerating as above, and with $n$ belonging to the respective subclass):
\begin{enumerate}
\item $(3,3,n-x;2x), 1\le x\le 2$, $(2,3,n;2)$, $(1,3,n;4)$ (we may assume $n>0$).
\item $(2,4,n-x;2x), 1\le x\le 2$, $(2,3,n;2)$, $(2,2,n;4)$, $(1,4,n;2)$, $(0,4,n;4)$.
\item $(1,4,n-x;2x), 1\le x\le 4$, $(0,4,n;2)$, $(1,3,n;2)$, $(1,2,n;4)$, $(1,1,n;6)$, $(0,1,n;8)$.
\end{enumerate}
We must show that all of these positions are in $\mathcal{N}$. To do this, we show that all of the following positions are in $\mathcal{P}$:
\begin{itemize}
\item $(0,1,z;2)\in \mathcal{P}$ if $z\in B-1$, and $(0,1,z;6)\in \mathcal{P}$ if $z\in BB-4\subset AB-2\subset B-1$.
\item $(0,2,z;4)\in \mathcal{P}$ if $z\in AB-1$.
\item $(0,3,z;2)\in \mathcal{P}$ if $z\in AB=\{3,8,11,16,21,24,\ldots\}\subset B-2$.
\item $(1,1,z;2)\in \mathcal{P}$ if $z\in B-2$ and $(1,1,z;4)\in \mathcal{P}$ if $z\in BB\subset B-2$.
\item $(1,2,z; 2)\in \mathcal{P}$ if $z\in BB-1$.
\item $(1,3,z; 2)\in \mathcal{P}$ if $z\in AB-2$.
\item $(2,2,z;2)\in \mathcal{P}$ if $z\in B-2$ and $(2,2,z;4)\in \mathcal{P}$ if $z\in BB\subset B-2$.
\item $(2,3,z;2)\in \mathcal{P}$ if $z\in AB-1$.
\end{itemize}
We verify that each candidate $\mathcal{N}$ position in (1) above has a move to a candidate $\mathcal{P}$ position. Since $2<n\in B-2$, the positions of the form $(3,3,n-x;2x)$ can immediately be reverted to a position of the same type. This follows from the proof of Theorem~\ref{thm:alg}, using the letters $(3,2)$. From $(2,3,n;2)$ we can move to the candidate $\mathcal{P}$ position $(2,2,n;2)$, and from $(1,3,n;4)$ we can move either to $(1,3,n-2;4)$ (with $n\in B-2\setminus AB=\{5,13,18,\ldots\}$, which implies $n-2\in B-2$), or to $(0,3,n;2)$ (with $n\in AB$).
Next, we verify that each candidate $\mathcal{N}$ position in (2) above has a move to a candidate $\mathcal{P}$ position. Here $0 < n\in AB-2$ and the letters are $(5,3)$. For positions of the form $(2,4,n-x;2x)$, all but one case can be reversed to a $\mathcal{P}$ position of the same type. This is the case where the current letter of the Fibonacci word with letters $3,5$ is 5, but $x=1$. In this case, $n-1\in BB$, so that $(2,2,n-1;4)$ is a $\mathcal{P}$ candidate. (In fact, we have that $BB\subset AB-3\subset B-2$.) For the remaining three proper three-pile cases, we have just seen that $(2,2,n-1;4)$ is a $\mathcal{P}$ candidate; it remains to note that both $(2,3,n;2)$ and $(1,4,n;2)$ have options to the candidate $\mathcal{P}$ position $(1,3,n;2)$.
For case (3), concerning $n\in AB-1$, the first type has a reversible option unless $x=1$ and the current letter is 5. In any case, the option $(1,3,n-1;2)$ is a candidate $\mathcal{P}$ position, and this also suffices for $(1,3,n;4)$. There are two more proper 3-pile candidate $\mathcal{N}$ positions of this form: $(1,2,n;4)$ reverses to $(0,2,n-1;2)$ which is a $\mathcal{P}$ candidate, and from $(1,1,n;6)$, there is an option $(1,1,z;2(n-z))$, with $z \in B-2$, because the move dynamic is 6. Hence, we have found $\mathcal{P}$ candidates for all the $\mathcal{N}$ candidates of the forms in (1)--(3).
Next, we must show that, for each of the lower level $\mathcal{P}$ candidates, every option that is an $\mathcal{N}$ candidate has a reversible move to a $\mathcal{P}$ candidate. We begin by showing that $(1,1,z;2)$ is reversible. The two-stone removal is reversible, by the above argument, and the move to $(1,1,z-1;2)$ reverts to a position of the same form, unless the current letter is ``3.'' In this case there is a response to $(0,1,z-2;2)$, where $z-2\in AB-2$, which is a $\mathcal{P}$ position, by Theorem 13. This response is also possible from $(0,1,z;2)$, which concludes this case.
For a candidate $\mathcal{P}$ position of the form $(1,2,z;2)$, note that $BB-3\subset AB-1$, and so both $(0,2,z;2)$ and $(1,2,z-2;4)$ revert to the $\mathcal{P}$ position $(0,2,z-2;r)$, $r=2,4$ respectively. A move to $(1,1,z;2)$ reverts to the $\mathcal{P}$ position $(1,1,z-1;2)$, because $z-1\in B-2$. A move to $(0,1,z;4)$ reverses to $(0,1,z-3;6)$, which is a $\mathcal{P}$ position (by Theorem~\ref{thm:alg2}). The move to $(1,2,z-1;2)$ reverses to $(1,1,z-1;2)$, which is a $\mathcal{P}$ position because $z-1\in B-2$.
For the position $(1,3,z; 2)$, with $z\in AB-2$, if two stones are removed from the third pile, the position reverses to one of the same form. Similarly, from $(1,3,z-1; 2)$, it suffices to study the ``5" letter case, and thus $z-1\in BB$; there is a response to $(1,1,z-1; 4)\in \mathcal{P}$. Next, consider $(1,2,z; 2)$, with $z\in AB-2$; then respond to $(0,1,z; 4)\in \mathcal{P}$. Consider $(1,1,z; 4)$, with $z\in AB-2$; then respond to $(0,1,z; 2)\in \mathcal{P}$. The options $(0,1,z; 6)$ and $(0,3,z; 2)$, with $z\in AB-2$, are both $\mathcal{N}$ positions, by Theorem~\ref{thm:alg2}.
For the position $(2,2,z; r)$, with $z\in B-2$ and $r=2,4$, playing on the third pile is reversible to a position of the same type.
Playing on the first pile, $(1,2,z; 2)$ reverses to $(1,1,z; 2)$, and playing to $(0,2,z; 4)$ gives an $\mathcal{N}$ position, by Theorem~\ref{thm:alg2}.
For the position $(2,3,z; 2)$, with $z\in AB-1$, playing on the third pile, it suffices to find a winning response to the option $(2,3,z-1; 2)$ when the current letter is a ``5,'' and therefore with $z-1\in BB-3\subset AB-2$. The option $(1,3,z-1; 2)\in \mathcal{P}$ suffices (so the letter ``5'' is not important). The same response obviously works for the option $(1,3,z; 2)$. The remaining options to check are $(0,3,z; 4)$ (which is an $\mathcal{N}$ position by Theorem~\ref{thm:alg2}), $(2,2,z; 2)$, and $(1,2,z; 4)$. These options have responses to $(0,2,z; r)\in \mathcal{P}$, for $r=2,4$.
\end{proof}
\begin{qn} Are there any other two-pile \textsc{Fibonacci nim} positions $(n_1,n_2;\infty)$ (besides $(3,4;\infty)$) with no complementary value? \end{qn}
\section{An easier variant: global \textsc{power-of-two nim}} \label{sec:pow2nim}
\textsc{power-of-two nim} is a simpler variant of \textsc{Fibonacci nim}. In the classical (one-pile) formulation, the rules are the same as in \textsc{Fibonacci nim}, except that if the previous player removed $m$ stones, then the next player may only remove at most $m$ stones. Thus, the move dynamic can only stay the same or decrease on each move.
The winning strategy is closely related to that of \textsc{Fibonacci nim}, but it relies on the binary representation of $n$ rather than the Zeckendorf representation. A winning strategy is to remove the smallest bit from the binary representation of the pile size on each move, and a position is a $\mathcal{P}$ position if and only if the smallest bit is larger than the move dynamic.
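The one-pile characterization above is easy to test by exhaustive search. The following sketch (an illustration only, with function names of our choosing) compares the claimed $\mathcal{P}$-position test against a brute-force solver:

```python
from functools import lru_cache

def smallest_bit(n):
    """Smallest power of 2 in the binary expansion of n (infinity for n = 0)."""
    return float('inf') if n == 0 else n & -n

@lru_cache(maxsize=None)
def single_pile_P(n, r):
    """Brute force: (n; r) is a P position iff no legal move reaches a P position.

    A move removes m stones, 1 <= m <= min(n, r); the new move dynamic is m,
    since the next player may remove at most m stones.
    """
    return not any(single_pile_P(n - m, m) for m in range(1, min(n, r) + 1))

def claimed_P(n, r):
    """The characterization above: P iff the smallest bit exceeds the dynamic."""
    return smallest_bit(n) > r
```

For example, a pile of $12=8+4$ with move dynamic $3$ is a $\mathcal{P}$ position, while with dynamic $4$ the winning move removes the smallest bit, $4$ stones, leaving a pile of $8$ with dynamic $4$.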
We represent the multi-pile global \textsc{power-of-two nim} game using the same notation as we do the multi-pile global \textsc{Fibonacci nim} game. It turns out that we can describe the $\mathcal{P}$ positions of multi-pile \textsc{power-of-two nim} completely. Let $a\oplus b$ denote the nim sum of $a$ and $b$, and let $\sb(n)$ denote the smallest power of 2 in the binary expansion of $n$, i.e.\ if $n=2^{a_1}+\cdots+2^{a_k}$ where the $a_i$'s are distinct nonnegative integers with $a_1<\cdots<a_k$, then $\sb(n)=2^{a_1}$. (If $n=0$, then we define $\sb(n)=\infty$.) Then we have the following:
\begin{thm} \label{thm:powtwosoln} The \textsc{power-of-two nim} game $(n_1,\ldots,n_k;r)\in\mathcal{P}$ if and only if $\sb(n_1\oplus\cdots\oplus n_k)>r$. \end{thm}
\begin{cor} \label{cor:powtwocor} The \textsc{power-of-two nim} game $(n_1,\ldots,n_k;\infty)\in\mathcal{P}$ if and only if the \textsc{nim} game $(n_1,\ldots,n_k)\in\mathcal{P}$. \end{cor}
\begin{proof}[Proof of Theorem~\ref{thm:powtwosoln}] The idea is to mimic good play in \textsc{nim}, playing a move that makes partial progress toward a winning \textsc{nim} move. To this end, we show that, given a position that we claim to be an $\mathcal{N}$ position, there is a move to a position that we claim to be a $\mathcal{P}$ position, whereas given a claimed $\mathcal{P}$ position, all moves are to claimed $\mathcal{N}$ positions. By Proposition~\ref{prop:partition}, this shows that the $\mathcal{P}$ positions are exactly as we claim them to be.
First, suppose that $(n_1,\ldots,n_k;r)$ is a \textsc{power-of-two nim} position with $\sb(n_1\oplus\cdots\oplus n_k)\le r$, and let $2^a=\sb(n_1\oplus\cdots\oplus n_k)$. There is some pile $i$ admitting a winning \textsc{nim} move $n_i\to n_i'$; the lowest bit in which $n_i$ and $n_i'$ differ is $2^a$, so $n_i-n_i'\ge 2^a$, and in particular $n_i\ge 2^a$. Since $2^a\le r$, removing $2^a$ stones from pile $i$ is a legal move, to $(n_1,\ldots,n_i-2^a,\ldots,n_k;2^a)$. This removal flips the bit $2^a$ of the nim sum (and possibly some higher bits, but no lower ones), so $\sb(n_1\oplus\cdots\oplus(n_i-2^a)\oplus\cdots\oplus n_k)\ge 2^{a+1}$, and this position is a claimed $\mathcal{P}$ position.
On the other hand, suppose that $\sb(n_1\oplus\cdots\oplus n_k)>r$, and consider the move $n_i\to n_i'$, where $n_i-n_i'\le r$. Then $\sb(n_1\oplus\cdots\oplus n_i'\oplus\cdots\oplus n_k)=\sb(n_i-n_i')\le n_i-n_i'$, so the position $(n_1,\ldots,n_i',\ldots,n_k;n_i-n_i')$ is a claimed $\mathcal{N}$ position. This completes the proof of Theorem~\ref{thm:powtwosoln}. \end{proof}
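As a sanity check on Theorem~\ref{thm:powtwosoln}, the multi-pile game is small enough to solve by memoized search. The sketch below (our illustration; the names are ours) confirms the nim-sum characterization on small positions:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_P(piles, r):
    """(piles; r) is a P position iff no legal move reaches a P position.

    A move removes m stones (1 <= m <= r) from a single pile; the new
    global move dynamic is m.  'piles' is kept sorted so symmetric
    positions share one cache entry.
    """
    for i, n in enumerate(piles):
        for m in range(1, min(n, r) + 1):
            rest = piles[:i] + (n - m,) + piles[i + 1:]
            if is_P(tuple(sorted(rest)), m):
                return False
    return True

def nim_sum_P(piles, r):
    """Theorem: P iff the smallest bit of the nim sum exceeds the dynamic."""
    x = 0
    for n in piles:
        x ^= n
    return x == 0 or (x & -x) > r
```

For instance, $(1,2;2)$ is an $\mathcal{N}$ position: its nim sum is $3$, whose smallest bit $1\le 2$, and removing one stone from the pile of two reaches $(1,1;1)$, whose nim sum is $0$.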
\section{\textbf{Introduction and Preliminary}}
The main purpose of this article is to present new proofs and
generalizations of the following three theorems concerning the analytic
continuation of a biholomorphic mapping on a strongly pseudoconvex analytic
real hypersurface.
First, we consider Vitushkin's theorem on germs of biholomorphic
mappings (cf. \cite{Vi85}):
\begin{theorem}[Vitushkin]
Let $M,$ $M^{\prime }$ be two strongly pseudoconvex analytic real
hypersurfaces in $\Bbb{C}^{n+1}$ and $p,p^{\prime }$ be points respectively
on $M,M^{\prime }$ such that the germ $M$ at the point $p$ and the germ $%
M^{\prime }$ at the point $p^{\prime }$ are biholomorphically equivalent.
Then there is a positive real number $\delta $ depending only on $%
M,M^{\prime }$ and $p,p^{\prime }$ such that a biholomorphic mapping $\phi $
on an open connected neighborhood $U$ of the point $p$ is analytically
continued to the open ball $B(p;\delta )$ as a biholomorphic mapping
whenever
\[
\phi (p)=p^{\prime }\quad \text{and}\quad \phi (U\cap M)\subset M^{\prime }.
\]
\end{theorem}
Next, we consider Pinchuk's theorem on the analytic continuation of a
biholomorphic mapping on a spherical analytic real hypersurface (cf.
\cite{Pi78}, \cite{CJ95}):
\begin{theorem}[Pinchuk, Chern-Ji]
Let $M$ be a spherical strongly pseudoconvex analytic real hypersurface in $%
\Bbb{C}^{n+1}$, $U$ be an open connected neighborhood of a point $p\in M,$
and $\phi $ be a biholomorphic mapping on $U$ such that $\phi \left( U\cap
M\right) \subset S^{2n+1}.$ Then the mapping $\phi $ is analytically
continued along any path in $M$ as a local biholomorphic mapping.
\end{theorem}
Finally, we consider Pinchuk's theorem on the analytic continuation of a
biholomorphic mapping on a nonspherical nondegenerate analytic real
hypersurface (cf. \cite{Pi78}, \cite{Vi85}):
\begin{theorem}[Pinchuk, Ezhov-Kruzhilin-Vitushkin]
Let $M,M^{\prime }$ be nonspherical strongly pseudoconvex analytic real
hypersurfaces in $\Bbb{C}^{n+1}$ such that $M^{\prime }$ is compact. Suppose
that there are an open connected neighborhood $U$ of a point $p\in M$ and a
biholomorphic mapping $\phi $ on $U$ such that
\[
\phi \left( M\cap U\right) \subset M^{\prime }.
\]
Then the mapping $\phi $ is analytically continued along any path on $M$ as
a local biholomorphic mapping.
\end{theorem}
In the following subsections, we provide some preliminary results from the
papers \cite{Pa1} and \cite{Pa2}. The results of this paper were presented
at the 18th Daewoo Workshop at Hanseo University, Korea, and a short outline
of the main results will appear in the proceedings of that workshop.
\subsection{Existence and uniqueness theorem}
We take a coordinate system of $\Bbb{C}^{n}\times \Bbb{C}$ as follows:
\[
z\equiv \left( z^{1},\cdots ,z^{n}\right) ,\quad w=u+iv\equiv z^{n+1}.
\]
A holomorphic mapping $\phi $ in $\Bbb{C}^{n}\times \Bbb{C}$ consists of $%
(n+1)$ holomorphic functions
\[
f\equiv (f^{1},\cdots ,f^{n}),\quad g\equiv f^{n+1}.
\]
We keep the notations
\[
\langle z,z\rangle \equiv z^{1}\overline{z}^{1}+\cdots +z^{e}\overline{z}%
^{e}-\cdots -z^{n}\overline{z}^{n}
\]
and
\[
\Delta \equiv \frac{\partial ^{2}}{\partial z^{1}\partial \overline{z}^{1}}%
+\cdots +\frac{\partial ^{2}}{\partial z^{e}\partial \overline{z}^{e}}%
-\cdots -\frac{\partial ^{2}}{\partial z^{n}\partial \overline{z}^{n}}.
\]
Then it is known that a nondegenerate analytic hypersurface $M$ is locally
biholomorphic to a real hypersurface of the following form (cf. \cite{CM74},
\cite{Pa1}):
\[
v=\langle z,z\rangle +\sum_{s,t\geq 2}^{\infty }F_{st}\left( z,\overline{z}%
,u\right)
\]
where
\[
\Delta F_{22}=\Delta ^{2}F_{23}=\Delta ^{3}F_{33}=0.
\]
We shall denote by $H$ the isotropy subgroup of a real hyperquadric $%
v=\langle z,z\rangle $ such that
\[
H=\left\{ \left(
\begin{array}{ccc}
\rho & 0 & 0 \\
-\sqrt{\left| \rho \right| }Ua & \sqrt{\left| \rho \right| }U & 0 \\
-r-i\langle a,a\rangle & 2ia^{\dagger } & 1
\end{array}
\right) :
\begin{array}{l}
\langle Uz,Uz\rangle =\text{\textrm{sign}}(\rho )\langle z,z\rangle ,\quad
a\in \Bbb{C}^{n}, \\
\rho \neq 0,\quad \rho ,r\in \Bbb{R} \\
a^{\dagger }=\left( \overline{a^{1}},\cdots ,\overline{a^{e}},-\overline{%
a^{e+1}},\cdots ,-\overline{a^{n}}\right)
\end{array}
\right\} .
\]
\begin{theorem}[Chern-Moser]
\label{exi-uni}Let $M$ be a nondegenerate analytic real hypersurface in $%
\Bbb{C}^{n+1}$ defined near the origin by the equation
\begin{equation}
v=\langle z,z\rangle +F\left( z,\overline{z},u\right) \tag*{(0.1)}
\label{def-eq}
\end{equation}
where
\[
F\left( z,\overline{z},u\right) =o\left( \left| z_{1}\right| ^{2}+\cdots
+\left| z_{n}\right| ^{2}+\left| w\right| ^{2}\right) .
\]
Then, for each element $(U,a,\rho ,r)\in H,$ there exists a unique
biholomorphic mapping $\phi =(f,g)$ near the origin which transforms $M$ to
a real hypersurface in Chern-Moser normal form such that
\begin{eqnarray*}
\left( \left. \frac{\partial f}{\partial z}\right| _{0}\right) &=&C,\quad
\left( \left. \frac{\partial f}{\partial w}\right| _{0}\right) =-Ca \\
\mathrm{Re}\left( \left. \frac{\partial g}{\partial w}\right| _{0}\right)
&=&\rho ,\quad \mathrm{Re}\left( \left. \frac{\partial ^{2}g}{\partial w^{2}}%
\right| _{0}\right) =2\rho r
\end{eqnarray*}
where the constants $(U,a,\rho ,r)$ shall be called the initial value of the
normalization $\phi .$
\end{theorem}
We present a brief outline of the proof of Theorem \ref{exi-uni} (cf.
\cite{CM74}, \cite{Pa1}). First of all, we show that there is a biholomorphic
mapping
\[
\phi _{1}:\left\{
\begin{array}{l}
z=z^{*}+D(z^{*},w^{*}) \\
w=w^{*}+g(z^{*},w^{*})
\end{array}
\right.
\]
which transforms the equation \ref{def-eq} to an equation of the form
\[
v^{*}=F_{11}^{*}\left( z^{*},\overline{z}^{*},u^{*}\right) +\sum_{s,t\geq
2}F_{st}^{*}\left( z^{*},\overline{z}^{*},u^{*}\right) .
\]
Then we set
\[
p(u)\equiv D(0,u)
\]
and we verify that the functions
\[
D(z,w),\quad g(z,w),\quad F_{st}^{*}\left( z,\overline{z},u\right)
\]
are uniquely determined by the function $F\left( z,\overline{z},u\right) $
and $p(u)$ whenever we require the following normalizing condition
\[
\overline{g(0,u)}=-g(0,u).
\]
Further, the functions
\[
\left( \left. \frac{\partial ^{\left| I\right| }D}{\partial z^{I}}\right|
_{z=v=0}\right) ,\quad \left( \left. \frac{\partial ^{\left| I\right| }g}{%
\partial z^{I}}\right| _{z=v=0}\right) ,\quad \left( \left. \frac{\partial
^{\left| I\right| +\left| J\right| }F_{st}^{*}}{\partial z^{I}\partial
\overline{z}^{J}}\right| _{z=\overline{z}=0}\right)
\]
depend analytically on $u$ and $p(u),$ rationally on $p^{\prime }(u),$ and
polynomially on the higher order derivatives of $p(u)$.
At this point, we need an operator $\mathrm{tr}$ introduced by Chern and
Moser as follows:
\[
\mathrm{tr}F_{st}^{*}\left( z,\overline{z},u\right) =\frac{1}{st}%
\sum_{\alpha ,\beta =1}^{n}h^{\alpha \beta }(u)\left( \frac{\partial
^{2}F_{st}^{*}}{\partial z^{\alpha }\partial \overline{z}^{\beta }}\right)
\left( z,\overline{z},u\right)
\]
where
\[
F_{11}^{*}\left( z,\overline{z},u\right) =\sum_{\alpha ,\beta
=1}^{n}h_{\alpha \beta }(u)z^{\alpha }\overline{z}^{\beta }
\]
and $\left( h^{\alpha \beta }(u)\right) $ is the inverse matrix of $\left(
h_{\alpha \beta }(u)\right) .$ Then we show that the equation
\[
(\mathrm{tr})^{2}F_{23}^{*}\left( z,\overline{z},u\right) =0
\]
is an ordinary differential equation for the function $p(u)$ as follows:
\[
p^{\prime \prime }=Q(u,p,\overline{p},p^{\prime },\overline{p}^{\prime }).
\]
Hence, for a given value $p^{\prime }(0)\equiv D_{w}(0,0)\in \Bbb{C}^{n},$
there is a unique biholomorphic mapping $\phi _{1}$ which satisfies the
normalizing condition and which transforms the equation \ref{def-eq} to an
equation of the following form
\begin{equation}
v=F_{11}^{*}\left( z,\overline{z},u\right) +\sum_{s,t\geq 2}F_{st}^{*}\left(
z,\overline{z},u\right) \tag*{(0.2)} \label{def-eq2}
\end{equation}
where
\[
(\mathrm{tr})^{2}F_{23}^{*}\left( z,\overline{z},u\right) =0.
\]
Note that, for a biholomorphic mapping $\phi ,$ $\phi (0)=0,$ near the
origin, there is a unique decomposition
\[
\phi =\phi _{2}\circ \phi _{1}
\]
where
\begin{eqnarray*}
\phi _{1} &:&\left\{
\begin{array}{l}
z=z^{*}+D(z^{*},w^{*}) \\
w=w^{*}+g(z^{*},w^{*})
\end{array}
\right. \\
\phi _{2} &:&\left\{
\begin{array}{l}
z^{*}=\sqrt{\mathrm{sign\{}q^{\prime }(0)\mathrm{\}}q^{\prime }(w)}E(w)z \\
w^{*}=q(w)
\end{array}
\right. .
\end{eqnarray*}
and where the functions $D(z,w),g(z,w),E(w),q(w)$ are complex analytic such
that
\begin{gather*}
D(0,0)=D_{z}(0,w)=0,\quad g(0,0)=q(0)=0 \\
\overline{g(0,u)}=-g(0,u),\quad \overline{q(u)}=q(u) \\
\det E(0)\neq 0,\quad q^{\prime }(0)\neq 0.
\end{gather*}
Let $\phi $ be the biholomorphic mapping which transforms the equation \ref
{def-eq2} to a defining equation satisfying the following condition
\[
v=\langle z,z\rangle +\sum_{s,t\geq 2}G_{st}\left( z,\overline{z},u\right)
\]
where
\[
\Delta ^{2}G_{23}\left( z,\overline{z},u\right) =0.
\]
Then there is a unique decomposition of a biholomorphic mapping $\phi $ such
that
\[
\phi =\phi ^{*}\circ \phi _{1}
\]
where
\[
\phi ^{*}:\left\{
\begin{array}{l}
z^{*}=\sqrt{\mathrm{sign\{}q^{\prime }(0)\mathrm{\}}q^{\prime }(w)}E(w)z \\
w^{*}=q(w)
\end{array}
\right.
\]
and
\[
F_{11}^{*}\left( z,\overline{z},u\right) =\mathrm{sign\{}q^{\prime }(0)%
\mathrm{\}}\langle E(u)z,E(u)z\rangle .
\]
Second, we take a matrix-valued function $E_{1}(u)$ such that
\[
F_{11}^{*}\left( z,\overline{z},u\right) =\langle E_{1}(u)z,E_{1}(u)z\rangle
.
\]
Then there is a biholomorphic mapping
\[
\phi _{2}:\left\{
\begin{array}{l}
z^{*}=E(w)z \\
w^{*}=w
\end{array}
\right.
\]
which transforms the equation \ref{def-eq2} to an equation of the same form
\[
v^{*}=\langle z^{*},z^{*}\rangle +\sum_{s,t\geq 2}G_{st}\left( z^{*},%
\overline{z}^{*},u^{*}\right)
\]
where
\[
\Delta ^{2}G_{23}\left( z,\overline{z},u\right) =0.
\]
Further, the function $E(u)$ is uniquely determined up to a function $U(u)$
such that
\[
E(u)=U(u)E_{1}(u)
\]
where
\[
\langle U(u)z,U(u)z\rangle =\langle z,z\rangle .
\]
Then we show that the equation
\[
\Delta G_{22}\left( z,\overline{z},u\right) =0
\]
is an ordinary differential equation for the function $U(u)$ as follows:
\[
U(u)^{-1}U^{\prime }(u)=R(u).
\]
Hence, for a given value $U(0),$ there is a unique biholomorphic mapping $%
\phi _{2}$ which transforms the equation \ref{def-eq2} to an equation of the
following form
\begin{equation}
v=\langle z,z\rangle +\sum_{s,t\geq 2}G_{st}\left( z,\overline{z},u\right)
\tag*{(0.3)} \label{def-eq3}
\end{equation}
where
\[
\Delta G_{22}\left( z,\overline{z},u\right) =\Delta ^{2}G_{23}\left( z,%
\overline{z},u\right) =0.
\]
Third, we show that there is a biholomorphic mapping
\[
\phi _{3}:\left\{
\begin{array}{l}
z^{*}=\sqrt{\mathrm{sign\{}q^{\prime }(0)\mathrm{\}}q^{\prime }(w)}z \\
w^{*}=q(w)
\end{array}
\right.
\]
which transforms the equation \ref{def-eq3} to an equation of the same form
\[
v^{*}=\langle z^{*},z^{*}\rangle +\sum_{s,t\geq 2}G_{st}^{*}\left( z^{*},%
\overline{z}^{*},u^{*}\right)
\]
where
\[
\Delta G_{22}^{*}\left( z,\overline{z},u\right) =\Delta ^{2}G_{23}^{*}\left(
z,\overline{z},u\right) =0.
\]
Then we show that the equation
\[
\Delta ^{3}G_{33}^{*}\left( z,\overline{z},u\right) =0
\]
is an ordinary differential equation for the function $q(u)$ as follows:
\[
\frac{q^{\prime \prime \prime }}{3q^{\prime }}-\frac{1}{2}\left( \frac{%
q^{\prime \prime }}{q^{\prime }}\right) ^{2}=\kappa (u).
\]
Hence, for given values $q^{\prime }(0),q^{\prime \prime }(0),$ there is a
unique biholomorphic mapping $\phi _{3}$ which transforms the equation \ref
{def-eq3} to an equation of the following form
\[
v=\langle z,z\rangle +\sum_{s,t\geq 2}G_{st}^{*}\left( z,\overline{z}%
,u\right)
\]
where
\[
\Delta G_{22}^{*}\left( z,\overline{z},u\right) =\Delta ^{2}G_{23}^{*}\left( z,%
\overline{z},u\right) =\Delta ^{3}G_{33}^{*}\left( z,\overline{z},u\right) =0.
\]
Thus the existence and uniqueness of the biholomorphic mapping $\phi $ have
been reduced to the existence and uniqueness of solutions of the ordinary
differential equations, where some constants $U,a,\rho ,r$ appear as the
initial values of the solutions.
In the paper \cite{Pa1}, we have shown that there exists a family of normal
forms as follows:
\[
v=\frac{1}{4\alpha }\ln \frac{1}{1-4\alpha \langle z,z\rangle }%
+\sum_{s,t\geq 2}F_{st}\left( z,\overline{z},u\right)
\]
where $\alpha ,\beta \in \Bbb{R}$ and
\[
\left\{
\begin{array}{l}
\Delta F_{22}=\Delta ^{2}F_{23}=0 \\
\Delta ^{3}F_{33}=\beta \Delta ^{4}\left( F_{22}\right) ^{2}.
\end{array}
\right.
\]
In the case of $\alpha =0,$ we assume
\[
v=\langle z,z\rangle +\sum_{s,t\geq 2}F_{st}\left( z,\overline{z},u\right) .
\]
The pair $(\alpha ,\beta )$ is called the type of the normal form. Chern-Moser
normal form is given in the case $\alpha =\beta =0$, and Moser-Vitushkin
normal form is defined by taking $\alpha \neq 0$ and $\beta =0.$
Then each normalization of a real hypersurface $M$ to a normal form of a
given type $(\alpha ,\beta )$ is determined by a constant initial value
parameterized by the local automorphism group $H$ of the following real
hypersurface
\[
v=\frac{1}{4\alpha }\ln \frac{1}{1-4\alpha \langle z,z\rangle },
\]
which is locally biholomorphic to a real hyperquadric.
\begin{theorem}
\label{exi-uni2}Let $M$ be a nondegenerate analytic real hypersurface
defined by
\[
v=\sum_{k=2}^{\infty }F_{k}\left( z,\overline{z},u\right) .
\]
Then, for each $k\geq 2$, there exist unique natural mappings
\begin{eqnarray*}
\nu :\left\{ F_{l}\left( z,\overline{z},u\right) :l\leq k\right\} \times
H\times \Bbb{R}^{2}\longmapsto \left( f_{k-1}\left( z,w\right) ,g_{k}\left(
z,w\right) \right) \\
\kappa :\left\{ F_{l}\left( z,\overline{z},u\right) :l\leq k\right\} \times
H\times \Bbb{R}^{2}\longmapsto F_{k}^{*}\left( z,\overline{z},u\right)
\end{eqnarray*}
such that, for a given $\sigma \in H$ and $\alpha ,\beta \in \Bbb{R},$ the
formal series mapping
\[
\phi =\left( \sum_{k=1}^{\infty }f_{k}\left( z,w\right) ,\sum_{k=2}^{\infty
}g_{k}\left( z,w\right) \right)
\]
is a biholomorphic normalization of $M$ with initial value $\sigma \in H$
and
\[
v=\langle z,z\rangle +\sum_{k=4}^{\infty }F_{k}^{*}\left( z,\overline{z}%
,u\right)
\]
is the defining equation of the real hypersurface $\phi \left( M\right) $ in
normal form of type $(\alpha ,\beta ).$
\end{theorem}
Then we obtain the following theorem
\begin{theorem}
\label{main}Let $M$ be a nondegenerate analytic real hypersurface defined by
\[
v=\sum_{k=2}^{\infty }F_{k}(z,\overline{z},u)
\]
and $\phi =(\sum_{k}f_{k},\sum_{k}g_{k})$ be a normalization of $M$ such
that the real hypersurface $\phi \left( M\right) $ is defined in normal form
of type $(\alpha ,\beta )$ by the equation
\[
v=\langle z,z\rangle +\sum_{k=4}^{\infty }F_{k}^{*}(z,\overline{z},u).
\]
Then the functions $f_{k-1},g_{k},F_{k}^{*},$ $k\geq 3,$ are given as a
finite linear combination of finite multiples of the following factors:
\begin{enumerate}
\item[(1)] the coefficients of the functions $F_{l}$, $l\leq k,$
\item[(2)] the constants $C,C^{-1},\rho ,\rho ^{-1},a,r,\alpha ,\beta ,$
\end{enumerate}
\noindent where $(C,a,\rho ,r)$ are the initial value of the normalization $%
\phi $ and $\alpha ,\beta $ are the parameters of normal forms.
\end{theorem}
\subsection{Equation of a chain}
In the proof of Theorem \ref{exi-uni}, we have a distinguished curve $\gamma
$ on $M,$ which is named a chain by E. Cartan \cite{Ca32} and Chern-Moser
\cite{CM74}. Suppose that there is a nondegenerate analytic real
hypersurface $M$ defined near the origin by
\[
v=F(z,\overline{z},u),\quad \left. F\right| _{0}=\left. F_{z}\right|
_{0}=\left. F_{\overline{z}}\right| _{0}=0.
\]
Then there exists an ordinary differential equation
\begin{equation}
p^{\prime \prime }=Q\left( u,p,\overline{p},p^{\prime },\overline{p}^{\prime
}\right) \tag*{(0.4)} \label{ordinary}
\end{equation}
such that a chain $\gamma ,$ passing through the origin $0\in M$, is given
near the origin by the equation
\[
\gamma :\left\{
\begin{array}{l}
z=p(u) \\
w=u+iF\left( p(u),\overline{p}(u),u\right)
\end{array}
\right.
\]
where $p(u)$ is a solution of the ordinary differential equation \ref
{ordinary}.
The explicit form of the equation \ref{ordinary}, which depends on the
function $F\left( z,\overline{z},u\right) ,$ is quite complicated (cf.
\cite{CM74}, \cite{Pa1}). Roughly, the function $Q$ in \ref{ordinary} is given as
follows:
\begin{equation}
Q(u,p,\overline{p},p^{\prime },\overline{p}^{\prime })=(A_{1}-A_{2}\overline{%
A_{1}}^{-1}\overline{A_{2}})^{-1}(B-A_{2}\overline{A_{1}}^{-1}\overline{B})
\tag*{(0.5)} \label{Q-function}
\end{equation}
where
\begin{enumerate}
\item[(1)] $A_{1},A_{2},B$ are functions of $u,p,\overline{p},p^{\prime },%
\overline{p}^{\prime }$,
\item[(2)] $A_{1},A_{2}$ are $n\times n$ matrices respectively given by
\begin{eqnarray*}
\left[ A_{1}\left( u,p,\overline{p},p^{\prime },\overline{p}^{\prime
}\right) \right] _{\alpha \beta } \\
=\left\{ 2iF_{\beta \overline{\alpha }}+2\left( 1+iF^{\prime }\right)
^{-1}\left( F_{\beta }^{\prime }+iF^{\prime \prime }F_{\beta }\right) F_{%
\overline{\alpha }}\right\} \times \\
\qquad \qquad \left\{ 1-i\left( 1+iF^{\prime }\right) F_{\gamma }p^{\gamma
\prime }+i\left( 1-iF^{\prime }\right) F_{\overline{\gamma }}p^{\overline{%
\gamma }\prime }+F^{\prime 2}\right\} \\
-i\left( 1+iF^{\prime }\right) F_{\beta }\times \left\{ 2iF_{\gamma
\overline{\alpha }}p^{\gamma \prime }+2F^{\prime \prime }F_{\overline{\alpha
}}+i\left( 1+iF^{\prime }\right) F_{\overline{\alpha }}^{\prime }\right. \\
\qquad \qquad \left. +2\left( 1+iF^{\prime }\right) ^{-1}F_{\overline{\alpha
}}\left( F_{\gamma }^{\prime }p^{\gamma \prime }+iF^{\prime \prime
}F_{\gamma }p^{\gamma \prime }+iF^{\prime \prime }F_{\overline{\gamma }}p^{%
\overline{\gamma }\prime }\right) \right\}
\end{eqnarray*}
and
\begin{eqnarray*}
\left[ A_{2}\left( u,p,\overline{p},p^{\prime },\overline{p}^{\prime
}\right) \right] _{\alpha \beta } \\
=2iF^{\prime \prime }\left( 1+iF^{\prime }\right) ^{-1}F_{\overline{\alpha }%
}F_{\overline{\beta }}\times \\
\qquad \qquad \left\{ 1-i\left( 1+iF^{\prime }\right) F_{\gamma }p^{\gamma
\prime }+i\left( 1-iF^{\prime }\right) F_{\overline{\gamma }}p^{\overline{%
\gamma }\prime }+F^{\prime 2}\right\} \\
+i\left( 1-iF^{\prime }\right) F_{\overline{\beta }}\times \left\{
2iF_{\gamma \overline{\alpha }}p^{\gamma \prime }+2F^{\prime \prime }F_{%
\overline{\alpha }}+i\left( 1+iF^{\prime }\right) F_{\overline{\alpha }%
}^{\prime }\right. \\
\qquad \qquad \left. +2\left( 1+iF^{\prime }\right) ^{-1}F_{\overline{\alpha
}}\left( F_{\gamma }^{\prime }p^{\gamma \prime }+iF^{\prime \prime
}F_{\gamma }p^{\gamma \prime }+iF^{\prime \prime }F_{\overline{\gamma }}p^{%
\overline{\gamma }\prime }\right) \right\}
\end{eqnarray*}
where
\begin{eqnarray*}
F_{\alpha }=\left( \frac{\partial F}{\partial z^{\alpha }}\right) \left(
p(u),\overline{p}(u),u\right) ,\quad F_{\overline{\beta }}=\left( \frac{%
\partial F}{\partial \overline{z}^{\beta }}\right) \left( p(u),\overline{p}%
(u),u\right) \\
F^{\prime }=\left( \frac{\partial F}{\partial u}\right) \left( p(u),%
\overline{p}(u),u\right) ,\quad F^{\prime \prime }=\frac{1}{2}\left( \frac{%
\partial ^{2}F}{\partial u^{2}}\right) \left( p(u),\overline{p}(u),u\right)
\\
F_{\alpha }^{\prime }=\left( \frac{\partial ^{2}F}{\partial z^{\alpha
}\partial u}\right) \left( p(u),\overline{p}(u),u\right) ,\quad F_{\alpha
\overline{\beta }}=\left( \frac{\partial ^{2}F}{\partial z^{\alpha }\partial
\overline{z}^{\beta }}\right) \left( p(u),\overline{p}(u),u\right) ,
\end{eqnarray*}
\item[(3)] $B$ is an $n\times 1$ matrix given by an at most cubic polynomial
with respect to $p^{\prime },\overline{p}^{\prime }$ such that $B$ is a
finite linear combination of multiples of the derivatives $p^{\prime },%
\overline{p}^{\prime }$ and the following terms:
\[
\left( \frac{\partial ^{\left| I\right| +\left| J\right| +m}F}{\partial
z^{I}\partial \overline{z}^{J}\partial u^{m}}\right) \left( p(u),\overline{p}%
(u),u\right) \quad \text{for }\left| I\right| +\left| J\right| +m\leq 5
\]
and
\[
\left[ \det \left\{ \left( 1-iF^{\prime }\right) ^{2}F_{\alpha \overline{%
\beta }}-i\left( 1+iF^{\prime }\right) F_{\alpha }^{\prime }F_{\overline{%
\beta }}+i\left( 1-iF^{\prime }\right) F_{\overline{\beta }}^{\prime
}F_{\alpha }+2F^{\prime \prime }F_{\alpha }F_{\overline{\beta }}\right\}
\right] ^{-1}.
\]
\end{enumerate}
On the real hyperquadric $v=\langle z,z\rangle ,$ the chain $\gamma $ is
locally given by
\[
\gamma :\left\{
\begin{array}{l}
z=p(u) \\
w=u+i\langle p(u),p(u)\rangle
\end{array}
\right.
\]
where $p(u)$ is a solution of the ordinary differential equation(cf. \cite
{Pa1}):
\[
p^{\prime \prime }=\frac{2ip^{\prime }\langle p^{\prime },p^{\prime }\rangle
\left( 1+3i\langle p,p^{\prime }\rangle -i\langle p^{\prime },p\rangle
\right) }{\left( 1+i\langle p,p^{\prime }\rangle -i\langle p^{\prime
},p\rangle \right) \left( 1+2i\langle p,p^{\prime }\rangle -2i\langle
p^{\prime },p\rangle \right) }.
\]
Further, a chain $\gamma $ on the real hyperquadric $v=\langle z,z\rangle $
is necessarily given as the intersection of the hyperquadric with a complex
line (cf. \cite{CM74}, \cite{Pa1}).
Then we may define a chain $\gamma $ globally. Let $M$ be a nondegenerate
analytic real hypersurface and $\gamma :(0,1)\rightarrow M$ be an open
connected curve. Then the curve $\gamma $ is called a chain if, for each
point $p\in \gamma ,$ there exist an open neighborhood $U$ of the point $p$
and a biholomorphic mapping $\phi $ which translates the point $p$ to the
origin and transforms $M$ to Chern-Moser normal form such that
\[
\phi \left( U\cap \gamma \right) \subset \left\{ z=v=0\right\} .
\]
An alternative definition of a chain $\gamma $ may be given through the
intrinsic geometry of nondegenerate real hypersurfaces(cf. \cite{Ca32}, \cite
{CM74}, \cite{Ta76}).
\section{Nonsingular matrices}
\subsection{A family of nonsingular matrices}
\begin{lemma}
\label{L2}Let $A_{m}$ be the $(m+1)\times (m+1)$ matrix given as follows:
\[
\left(
\begin{array}{cccccc}
0 & 2 & 0 & \cdots & & 0 \\
m & 3 & 4 & \ddots & & \\
0 & m-1 & 6 & 6 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0 \\
& & \ddots & 2 & 3m-3 & 2m \\
0 & & \cdots & 0 & 1 & 3m
\end{array}
\right) .
\]
Then the eigenvalues of $A_{m}$ are given by
\[
\frac{3m}{2}+\frac{m-2s}{2}\sqrt{17}\quad \text{for }s=0,\cdots ,m.
\]
\end{lemma}
\proof
We consider the following system of first order ordinary differential
equations:
\begin{eqnarray*}
y^{\prime } &=&z \\
z^{\prime } &=&3z+2y.
\end{eqnarray*}
Then the general solutions $y,z$ are given by
\begin{align}
y(t)& =c_{1}e^{t\lambda _{1}}+c_{2}e^{t\lambda _{2}}, \nonumber \\
z(t)& =c_{1}\lambda _{1}e^{t\lambda _{1}}+c_{2}\lambda _{2}e^{t\lambda _{2}},
\tag*{(1.1)} \label{sys}
\end{align}
where $\lambda _{1}$ and $\lambda _{2}$, with $\lambda _{1}\neq \lambda _{2}$, are the two
solutions of the quadratic equation:
\[
x^{2}-3x-2=0,
\]
and $c_{1},c_{2}$ are arbitrary real numbers.
We take nonzero constants $c_{1},c_{2}$ so that $y(t),z(t)$ are linearly
independent. Then we obtain
\begin{equation}
\left(
\begin{array}{l}
e^{t\lambda _{1}} \\
e^{t\lambda _{2}}
\end{array}
\right) =\left(
\begin{array}{cc}
c_{1} & c_{2} \\
c_{1}\lambda _{1} & c_{2}\lambda _{2}
\end{array}
\right) ^{-1}\left(
\begin{array}{l}
y(t) \\
z(t)
\end{array}
\right) . \tag*{(1.2)} \label{inv}
\end{equation}
We consider a real vector space $V$ generated by the following elements:
\[
y^{m-s}z^{s}\quad \text{for}\quad s=0,1,\cdots ,m.
\]
By the equalities \ref{sys} and \ref{inv}, the vector space $V$ is generated
as well by the following elements:
\begin{equation}
\exp t(s\lambda _{1}+(m-s)\lambda _{2})\quad \text{for}\quad s=0,1,\cdots ,m.
\tag*{(1.3)} \label{eign}
\end{equation}
We put
\[
B_{1}=\left(
\begin{array}{l}
y^{m} \\
y^{m-1}z \\
\vdots \\
yz^{m-1} \\
z^{m}
\end{array}
\right) \quad \text{and}\quad B_{2}=\left(
\begin{array}{l}
e^{tm\lambda _{1}} \\
e^{t((m-1)\lambda _{1}+\lambda _{2})} \\
\vdots \\
e^{t(\lambda _{1}+(m-1)\lambda _{2})} \\
e^{tm\lambda _{2}}
\end{array}
\right) .
\]
Then it is verified that
\begin{equation}
\frac{dB_{1}}{dt}=\left(
\begin{array}{cccccc}
0 & 2 & 0 & \cdots & & 0 \\
m & 3 & 4 & \ddots & & \\
0 & m-1 & 6 & 6 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0 \\
& & \ddots & 2 & 3m-3 & 2m \\
0 & & \cdots & 0 & 1 & 3m
\end{array}
\right) ^{T}B_{1}, \tag*{(1.4)} \label{dt}
\end{equation}
and
\[
\frac{dB_{2}}{dt}=\left(
\begin{array}{ccccc}
m\lambda _{1} & 0 & \cdots & & 0 \\
0 & (m-1)\lambda _{1}+\lambda _{2} & \ddots & & \\
& \ddots & \ddots & \ddots & \vdots \\
\vdots & & \ddots & \lambda _{1}+(m-1)\lambda _{2} & 0 \\
0 & \cdots & & 0 & m\lambda _{2}
\end{array}
\right) B_{2}.
\]
Hence the derivative $\frac{d}{dt}$ is an endomorphism of $V$, and the
functions in \ref{eign} are eigenvectors of this endomorphism. Thus the
matrix $A_{m}$ has the following eigenvalues:
\[
s\lambda _{1}+(m-s)\lambda _{2}\quad \text{for}\quad s=0,1,\cdots ,m
\]
where
\[
\lambda _{1},\lambda _{2}=\frac{3\pm \sqrt{17}}{2}.
\]
This completes the proof.\endproof
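The eigenvalue claim of the lemma can be cross-checked numerically. The following Python sketch (an editorial aid, not part of the proof; it uses only the standard library) compares the exact traces of powers of $A_{m}$ against the power sums of the claimed eigenvalues, computing exactly in $\Bbb{Q}(\sqrt{17})$ with pairs $(a,b)\sim a+b\sqrt{17}$.

```python
from fractions import Fraction

def build_A(m):
    """The (m+1) x (m+1) tridiagonal matrix A_m of the lemma."""
    n = m + 1
    A = [[0] * n for _ in range(n)]
    for j in range(n):
        A[j][j] = 3 * j                  # diagonal: 0, 3, 6, ..., 3m
        if j + 1 < n:
            A[j][j + 1] = 2 * (j + 1)    # superdiagonal: 2, 4, ..., 2m
            A[j + 1][j] = m - j          # subdiagonal: m, m-1, ..., 1
    return A

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def qmul(x, y):
    """Multiply numbers a + b*sqrt(17) stored as pairs (a, b)."""
    return (x[0] * y[0] + 17 * x[1] * y[1], x[0] * y[1] + x[1] * y[0])

def moments_match(m):
    """Check trace(A_m^k) == sum_s (3m/2 + (m-2s) sqrt(17)/2)^k, k = 1..m+1."""
    A = build_A(m)
    P = A
    for k in range(1, m + 2):
        total = (Fraction(0), Fraction(0))
        for s in range(m + 1):
            p = (Fraction(1), Fraction(0))
            for _ in range(k):
                p = qmul(p, (Fraction(3 * m, 2), Fraction(m - 2 * s, 2)))
            total = (total[0] + p[0], total[1] + p[1])
        tr = sum(P[i][i] for i in range(m + 1))   # an integer
        if total != (Fraction(tr), Fraction(0)):
            return False
        P = matmul(P, A)
    return True
```

Matching the first $m+1$ power sums determines the multiset of the $m+1$ eigenvalues (Newton's identities), so for each tested $m$ this confirms the whole spectrum, not just a moment or two.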
\begin{lemma}
\label{Corr1}Let $B_{m}$ be a matrix as follows:
\[
B_{m}=\left(
\begin{array}{cccccc}
1 & 2 & 3 & \cdots & m & m+1 \\
m & 7-m & 4 & 0 & \cdots & 0 \\
0 & m-1 & 10-m & 6 & \ddots & \vdots \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) .
\]
Then the matrix $B_{m}$ is nonsingular.
\end{lemma}
\proof
We easily verify that
\begin{equation}
\det B_{m}=\frac{1}{4}\det C_{m} \tag*{(1.5)} \label{bc}
\end{equation}
where
\[
C_{m}=\left(
\begin{array}{cccccc}
4-m & 2 & 0 & & \cdots & 0 \\
m & 7-m & 4 & \ddots & & \vdots \\
0 & m-1 & 10-m & 6 & \ddots & \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) .
\]
Note that
\begin{equation}
C_{m}=A_{m}-(m-4)id_{(m+1)\times (m+1)}. \tag*{(1.6)} \label{dt2}
\end{equation}
By Lemma \ref{L2}, the eigenvalues of the matrix $A_{m}$ are given as
follows:
\[
\frac{3m}{2}+\frac{m-2s}{2}\sqrt{17}\quad \text{for }s=0,\cdots ,m.
\]
Thus the eigenvalues of the matrix $C_{m}$ are given by
\[
\frac{m+8}{2}+\frac{m-2s}{2}\sqrt{17}\quad \text{for }s=0,\cdots ,m.
\]
Since $\sqrt{17}$ is irrational, the matrix $C_{m}$ does not have $0$ as an
eigenvalue. Therefore the matrix $C_{m}$ is nonsingular, i.e.,
\[
\det C_{m}\neq 0.
\]
By the relation \ref{bc},
\[
\det B_{m}=\frac{1}{4}\det C_{m}\neq 0
\]
so that the matrix $B_{m}$ is nonsingular. This completes the proof.\endproof
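The reduction $\det B_{m}=\frac{1}{4}\det C_{m}$ in \ref{bc} is stated as easily verified; the sketch below (an editorial cross-check, not part of the proof, using only the Python standard library) confirms it, together with the nonsingularity of $B_{m}$, for small $m$ by exact rational elimination.

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

def tridiagonal_rows(m):
    """(m+1) x (m+1) matrix with row i carrying (m-i+1, 3i+4-m, 2i+2)."""
    n = m + 1
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = 3 * i + 4 - m
        if i + 1 < n:
            M[i][i + 1] = 2 * (i + 1)
            M[i + 1][i] = m - i
    return M

def build_C(m):
    return tridiagonal_rows(m)

def build_B(m):
    M = tridiagonal_rows(m)
    M[0] = list(range(1, m + 2))   # first row 1, 2, ..., m+1
    return M
```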
\subsection{Sufficient Condition for Nonsingularity}
\begin{lemma}
\label{Lemm1}Let $\Delta (m),$ $m\in \Bbb{N},$ denote the function defined
as follows:
\[
\Delta (m)=\sum_{k=0}^{m}\frac{\binom{m}{k}}{k\lambda _{1}+(m-k)\lambda
_{2}-(m-4)}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}}\right)
^{k}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right) ^{m-k}
\]
where
\[
\lambda _{1}=\frac{3-\sqrt{17}}{2},\quad \lambda _{2}=\frac{3+\sqrt{17}}{2}.
\]
Then
\[
\frac{\det E_{m}(m+1)}{\det E_{m}(m)}=\Delta (m)^{-1}
\]
where
\[
E_{m}(m+1)=\left(
\begin{array}{ccccc}
4-m & 2 & 0 & \cdots & 0 \\
m & 7-m & 4 & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & 0 & 1 & 2m+4
\end{array}
\right)
\]
and
\[
E_{m}(m)=\left(
\begin{array}{ccccc}
7-m & 4 & 0 & \cdots & 0 \\
m-1 & 10-m & 6 & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & 0 & 1 & 2m+4
\end{array}
\right) .
\]
\end{lemma}
\proof
We easily see that
\[
\varepsilon =\frac{\det E_{m}(m+1)}{\det E_{m}(m)}
\]
if and only if the following matrix is singular:
\[
\left(
\begin{array}{cccccc}
4-m-\varepsilon & 2 & 0 & & \cdots & 0 \\
m & 7-m & 4 & \ddots & & \vdots \\
0 & m-1 & 10-m & 6 & \ddots & \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) .
\]
Then, by the equalities \ref{dt} and \ref{dt2}, there are constants $c_{s},$
$s=0,\cdots ,m-1,$ which are not all zero and satisfy the following
equality:
\begin{equation}
\sum_{s=0}^{m-1}c_{s}\frac{d}{dt}\left( y^{s}z^{m-s}e^{-t(m-4)}\right) =%
\frac{d}{dt}\left( y^{m}e^{-t(m-4)}\right) -\varepsilon y^{m}e^{-t(m-4)}
\tag*{(1.7)} \label{dtt}
\end{equation}
whenever
\[
\varepsilon =\frac{\det E_{m}(m+1)}{\det E_{m}(m)}.
\]
By the expression \ref{sys}, we obtain
\begin{eqnarray*}
y^{m}e^{-t(m-4)} &=&(c_{1}e^{t\lambda _{1}}+c_{2}e^{t\lambda
_{2}})^{m}e^{-t(m-4)} \\
&=&\sum_{k=0}^{m}\binom{m}{k}c_{1}^{k}c_{2}^{m-k}e^{tk\lambda
_{1}+t(m-k)\lambda _{2}}e^{-t(m-4)} \\
&=&\frac{d}{dt}\left\{ \sum_{k=0}^{m}\binom{m}{k}\frac{%
c_{1}^{k}c_{2}^{m-k}e^{tk\lambda _{1}+t(m-k)\lambda _{2}}e^{-t(m-4)}}{%
k\lambda _{1}+(m-k)\lambda _{2}-(m-4)}\right\} .
\end{eqnarray*}
By using the expression \ref{inv}, we obtain
\begin{equation}
y^{m}e^{-t(m-4)}=\frac{d}{dt}\left\{ \sum_{k=0}^{m}\frac{\binom{m}{k}\left(
\frac{\lambda _{2}y-z}{\lambda _{2}-\lambda _{1}}\right) ^{k}\left( \frac{%
-\lambda _{1}y+z}{\lambda _{2}-\lambda _{1}}\right) ^{m-k}e^{-t(m-4)}}{%
k\lambda _{1}+(m-k)\lambda _{2}-(m-4)}\right\} . \tag*{(1.8)} \label{dtt2}
\end{equation}
Because the derivative $\frac{d}{dt}$ is an isomorphism on $V$, the
equalities \ref{dtt} and \ref{dtt2} yield
\begin{equation}
\sum_{s=0}^{m-1}c_{s}y^{s}z^{m-s}=y^{m}-\varepsilon \sum_{k=0}^{m}\frac{%
\binom{m}{k}\left( \frac{\lambda _{2}y-z}{\lambda _{2}-\lambda _{1}}\right)
^{k}\left( \frac{-\lambda _{1}y+z}{\lambda _{2}-\lambda _{1}}\right) ^{m-k}}{%
k\lambda _{1}+(m-k)\lambda _{2}-(m-4)}. \tag*{(1.9)} \label{house}
\end{equation}
We easily see that the equality \ref{house} is satisfied by some constants $%
c_{s}$ only if we have the following equality:
\[
1-\varepsilon \sum_{k=0}^{m}\frac{\binom{m}{k}\left( \frac{\lambda _{2}}{%
\lambda _{2}-\lambda _{1}}\right) ^{k}\left( \frac{-\lambda _{1}}{\lambda
_{2}-\lambda _{1}}\right) ^{m-k}}{k\lambda _{1}+(m-k)\lambda _{2}-(m-4)}=0.
\]
Thus we have
\[
\varepsilon =\Delta (m)^{-1}.
\]
This completes the proof.\endproof
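As a hedged cross-check of the lemma (outside the proof; Python standard library only), the sketch below compares $\Delta (m)^{-1}$, evaluated in floating point, with $\det E_{m}(m+1)/\det E_{m}(m)$, where $E_{m}(s)$ is realized as the trailing $s\times s$ principal block of $C_{m}=A_{m}-(m-4)\,id$ and the determinants are exact.

```python
import math
from fractions import Fraction
from math import comb

S = math.sqrt(17)
L1, L2 = (3 - S) / 2, (3 + S) / 2   # lambda_1, lambda_2

def delta(m):
    """Float evaluation of Delta(m) as defined in the lemma."""
    p, q = L2 / (L2 - L1), -L1 / (L2 - L1)
    return sum(comb(m, k) * p ** k * q ** (m - k)
               / (k * L1 + (m - k) * L2 - (m - 4))
               for k in range(m + 1))

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

def E(m, size):
    """Trailing size x size principal block of C_m = A_m - (m-4) id."""
    n = m + 1
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        C[i][i] = 3 * i + 4 - m
        if i + 1 < n:
            C[i][i + 1] = 2 * (i + 1)
            C[i + 1][i] = m - i
    return [row[n - size:] for row in C[n - size:]]

def ratio(m):
    return float(det(E(m, m + 1)) / det(E(m, m)))
```

For $m=1$ one can even check by hand that $\Delta (1)=3/8$ exactly, matching $\det E_{1}(2)/\det E_{1}(1)=16/6=8/3$.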
\begin{lemma}
\label{Lemm2}Let $B_{m}(2),$ $m\geq 3,$ and $B_{m}(3),$ $m\geq 4,$ be
matrices as follows:
\[
B_{m}(2)=\left(
\begin{array}{cccccc}
2 & 3 & 4 & \cdots & m & m+1 \\
m-1 & 7-m & 6 & 0 & \cdots & 0 \\
0 & m-2 & 10-m & 8 & \ddots & \vdots \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right)
\]
and
\[
B_{m}(3)=\left(
\begin{array}{cccccc}
3 & 4 & 5 & \cdots & m & m+1 \\
m-2 & 10-m & 8 & 0 & \cdots & 0 \\
0 & m-3 & 13-m & 10 & \ddots & \vdots \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) .
\]
Then
\begin{eqnarray*}
\det B_{m}(2)\neq 0\quad \text{if and only if}\quad \Delta (m)^{-1}\neq 4, \\
\det B_{m}(3)\neq 0\quad \text{if and only if}\quad \Delta (m)^{-1}\neq -%
\frac{4}{3}(m-3).
\end{eqnarray*}
\end{lemma}
\proof
We easily verify that
\[
\det B_{m}(2)=\frac{1}{4}\det C_{m}(2)
\]
where
\[
C_{m}(2)=\left(
\begin{array}{cccccc}
9-m & 4 & 0 & & \cdots & 0 \\
m-1 & 10-m & 6 & \ddots & & \vdots \\
0 & m-2 & 13-m & 8 & \ddots & \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) .
\]
Note that
\[
\det C_{m}(2)=0
\]
if and only if there are numbers $c_{1},\cdots ,c_{m-1}$ satisfying
\[
\left(
\begin{array}{cccccc}
9-m & 4 & 0 & & \cdots & 0 \\
m-1 & 10-m & 6 & \ddots & & \vdots \\
0 & m-2 & 13-m & 8 & \ddots & \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) \left(
\begin{array}{c}
1 \\
c_{1} \\
\\
\vdots \\
\\
c_{m-1}
\end{array}
\right) =0.
\]
Then we easily see
\[
\left(
\begin{array}{cccccc}
-m & 2 & 0 & & \cdots & 0 \\
m & 7-m & 4 & \ddots & & \vdots \\
0 & m-1 & 10-m & 6 & \ddots & \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) \left(
\begin{array}{c}
\frac{2}{m} \\
1 \\
c_{1} \\
\vdots \\
\\
c_{m-1}
\end{array}
\right) =0
\]
so that
\[
\det \left(
\begin{array}{cccccc}
-m & 2 & 0 & & \cdots & 0 \\
m & 7-m & 4 & \ddots & & \vdots \\
0 & m-1 & 10-m & 6 & \ddots & \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) =0.
\]
Hence, by Lemma \ref{Lemm1}, we verify that
\[
\det B_{m}(2)\neq 0
\]
if and only if
\[
\Delta (m)^{-1}\neq 4.
\]
For the case of $B_{m}(3),$ we easily verify that
\[
\det B_{m}(3)=\frac{1}{4}\det C_{m}(3)
\]
where
\[
C_{m}(3)=\left(
\begin{array}{cccccc}
14-m & 6 & 0 & & \cdots & 0 \\
m-2 & 13-m & 8 & \ddots & & \vdots \\
0 & m-3 & 16-m & 10 & \ddots & \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) .
\]
Note that
\[
\det C_{m}(3)=0
\]
if and only if there are numbers $c_{1},\cdots ,c_{m-2}$ satisfying
\[
\left(
\begin{array}{cccccc}
14-m & 6 & 0 & & \cdots & 0 \\
m-2 & 13-m & 8 & \ddots & & \vdots \\
0 & m-3 & 16-m & 10 & \ddots & \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) \left(
\begin{array}{c}
1 \\
c_{1} \\
\\
\vdots \\
\\
c_{m-2}
\end{array}
\right) =0.
\]
Then we easily see
\[
\left(
\begin{array}{cccccc}
\frac{m}{3} & 2 & 0 & & \cdots & 0 \\
m & 7-m & 4 & \ddots & & \vdots \\
0 & m-1 & 10-m & 6 & \ddots & \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) \left(
\begin{array}{c}
-\frac{24}{m(m-1)} \\
\frac{4}{m-1} \\
1 \\
c_{1} \\
\vdots \\
c_{m-2}
\end{array}
\right) =0
\]
so that
\[
\det \left(
\begin{array}{cccccc}
\frac{m}{3} & 2 & 0 & & \cdots & 0 \\
m & 7-m & 4 & \ddots & & \vdots \\
0 & m-1 & 10-m & 6 & \ddots & \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) =0.
\]
Hence, by Lemma \ref{Lemm1}, we verify that
\[
\det B_{m}(3)\neq 0
\]
if and only if
\[
\Delta (m)^{-1}\neq -\frac{4}{3}(m-3).
\]
This completes the proof.\endproof
\subsection{Estimates}
\begin{lemma}
\label{17}For any two positive integers $p,q,$ the inequality
\[
\left| \sqrt{17}-\frac{p}{q}\right| >\frac{2}{17q^{2}}
\]
is satisfied.
\end{lemma}
\proof
Note that
\[
\left| \sqrt{17}-\frac{p}{q}\right| =\frac{\left| p^{2}-17q^{2}\right| }{%
\left| \sqrt{17}+\frac{p}{q}\right| q^{2}}\geq \frac{1}{\left| \sqrt{17}+%
\frac{p}{q}\right| q^{2}},
\]
since $p^{2}-17q^{2}$ is a nonzero integer ($17$ is not a perfect square).
Let $c$ be a positive real number. Then we consider integer pairs $(p,q)$
such that
\[
\left| \sqrt{17}-\frac{p}{q}\right| \leq \frac{1}{c},
\]
which yields
\begin{eqnarray*}
\left| \sqrt{17}+\frac{p}{q}\right| &\leq &2\sqrt{17}+\left| \sqrt{17}-\frac{%
p}{q}\right| \\
&\leq &2\sqrt{17}+\frac{1}{c}.
\end{eqnarray*}
Thus the inequality
\begin{equation}
\left| \sqrt{17}-\frac{p}{q}\right| >\frac{1}{cq^{2}} \nonumber
\label{liou}
\end{equation}
is satisfied for every integer pair $(p,q),$ $q\geq 1,$ provided that the
positive real number $c$ satisfies
\[
c>2\sqrt{17}+\frac{1}{c},
\]
i.e.,
\[
c>\sqrt{17}+\sqrt{18}=8.3657\ldots .
\]
In particular, the choice $c=\frac{17}{2}=8.5$ is admissible, which gives
the desired inequality. This completes the proof.\endproof
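The inequality can also be checked exhaustively over a finite range. Since $\sqrt{17}$ enters only through comparisons, squaring keeps the test exact; the sketch below (an editorial check, not part of the proof) uses exact rational arithmetic throughout.

```python
from fractions import Fraction

def holds(p, q):
    """Exact test of |sqrt(17) - p/q| > 2/(17 q^2) for positive integers."""
    r = Fraction(p, q)
    t = Fraction(2, 17 * q * q)
    if r * r >= 17:              # here p/q >= sqrt(17)
        # need r - sqrt(17) > t, i.e. r - t > sqrt(17)
        return r - t > 0 and (r - t) ** 2 > 17
    # here p/q < sqrt(17): need sqrt(17) - r > t, i.e. (r + t)^2 < 17
    return (r + t) ** 2 < 17
```

The pair $p/q=33/8$ (a continued-fraction convergent of $\sqrt{17}$) is one of the tightest cases in a small range and still satisfies the bound.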
\begin{lemma}
\label{f1}Let $F_{1}(m)$ be a function of $m\in \Bbb{N}$ defined by
\[
F_{1}(m)\equiv 192m^{3}\binom{m}{\left[ 0.7m\right] }\left( \frac{\lambda
_{2}}{\lambda _{2}-\lambda _{1}}\right) ^{\left[ 0.7m\right] }\left( \frac{%
-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right) ^{m-\left[ 0.7m\right] }.
\]
Then
\[
F_{1}(k)\leq F_{1}(m)
\]
whenever
\[
m\geq 100\quad \text{and}\quad k\geq m+11.
\]
\end{lemma}
\proof
We easily verify that
\[
\frac{F_{1}(m+1)}{F_{1}(m)}=\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda
_{1}}\right) \left( 1+\frac{1}{m}\right) ^{3}\frac{m+1}{\left[ 0.7m\right] +1%
}
\]
whenever
\[
\left[ 0.7m\right] \neq \left[ 0.7m+0.7\right]
\]
and that
\[
\frac{F_{1}(m+1)}{F_{1}(m)}=\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda
_{1}}\right) \left( 1+\frac{1}{m}\right) ^{3}\frac{m+1}{m-\left[ 0.7m\right]
+1}
\]
whenever
\[
\left[ 0.7m\right] =\left[ 0.7m+0.7\right] .
\]
Then we obtain the following estimates:
\[
\frac{F_{1}(m+1)}{F_{1}(m)}\leq \frac{10}{7}\left( \frac{\lambda _{2}}{%
\lambda _{2}-\lambda _{1}}\right) \left( 1+\frac{1}{m}\right) ^{4}
\]
whenever
\[
\left[ 0.7m\right] \neq \left[ 0.7m+0.7\right]
\]
and that
\[
\frac{F_{1}(m+1)}{F_{1}(m)}\leq \frac{10}{3}\left( \frac{-\lambda _{1}}{%
\lambda _{2}-\lambda _{1}}\right) \left( 1+\frac{1}{m}\right) ^{4}
\]
whenever
\[
\left[ 0.7m\right] =\left[ 0.7m+0.7\right] .
\]
Hence we obtain
\begin{eqnarray*}
\frac{F_{1}(m+11)}{F_{1}(m)} &=&\frac{F_{1}(m+11)}{F_{1}(m+10)}\times \cdots
\times \frac{F_{1}(m+1)}{F_{1}(m)} \\
&\leq &\frac{10^{10}}{3^{3}7^{7}}\left( \frac{\lambda _{2}}{\lambda
_{2}-\lambda _{1}}\right) ^{7}\left( \frac{-\lambda _{1}}{\lambda
_{2}-\lambda _{1}}\right) ^{3}\left( 1+\frac{11}{m}\right) ^{4}.
\end{eqnarray*}
Note that
\[
\frac{10}{3}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right)
\leq \frac{10}{7}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}}%
\right)
\]
and
\begin{eqnarray*}
0.7\times 1 &=&0.7,\quad 0.7\times 2=1.4, \\
0.7\times 3 &=&2.1,\quad 0.7\times 4=2.8, \\
0.7\times 5 &=&3.5,\quad 0.7\times 6=4.2, \\
0.7\times 7 &=&4.9,\quad 0.7\times 8=5.6, \\
0.7\times 9 &=&6.3,\quad 0.7\times 10=7.
\end{eqnarray*}
Thus we have the following estimates:
\begin{eqnarray*}
\frac{F_{1}(m+12)}{F_{1}(m)} &\leq &A^{8}B^{3}\left( 1+\frac{12}{m}\right)
^{4},\quad \frac{F_{1}(m+13)}{F_{1}(m)}\leq A^{8}B^{4}\left( 1+\frac{13}{m}%
\right) ^{4} \\
\frac{F_{1}(m+14)}{F_{1}(m)} &\leq &A^{8}B^{5}\left( 1+\frac{14}{m}\right)
^{4},\quad \frac{F_{1}(m+15)}{F_{1}(m)}\leq A^{9}B^{5}\left( 1+\frac{15}{m}%
\right) ^{4} \\
\frac{F_{1}(m+16)}{F_{1}(m)} &\leq &A^{9}B^{6}\left( 1+\frac{16}{m}\right)
^{4},\quad \frac{F_{1}(m+17)}{F_{1}(m)}\leq A^{9}B^{7}\left( 1+\frac{17}{m}%
\right) ^{4} \\
\frac{F_{1}(m+18)}{F_{1}(m)} &\leq &A^{10}B^{7}\left( 1+\frac{18}{m}\right)
^{4},\quad \frac{F_{1}(m+19)}{F_{1}(m)}\leq A^{10}B^{8}\left( 1+\frac{19}{m}%
\right) ^{4} \\
\frac{F_{1}(m+20)}{F_{1}(m)} &\leq &A^{10}B^{9}\left( 1+\frac{20}{m}\right)
^{4},\quad \frac{F_{1}(m+21)}{F_{1}(m)}\leq A^{10}B^{10}\left( 1+\frac{21}{m}%
\right) ^{4}
\end{eqnarray*}
where
\[
A=\frac{10}{7}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}}\right)
,\quad B=\frac{10}{3}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}%
\right) .
\]
Then a straightforward computation yields
\begin{eqnarray*}
65 &\geq &\max \left\{ \frac{11}{A^{-\frac{7}{4}}B^{-\frac{3}{4}}-1},\text{%
\quad }\frac{12}{A^{-\frac{8}{4}}B^{-\frac{3}{4}}-1},\text{\quad }\frac{13}{%
A^{-\frac{8}{4}}B^{-\frac{4}{4}}-1},\right. \\
&&\qquad \left. \frac{14}{A^{-\frac{8}{4}}B^{-\frac{5}{4}}-1},\text{\quad }%
\frac{15}{A^{-\frac{9}{4}}B^{-\frac{5}{4}}-1},\text{\quad }\frac{16}{A^{-%
\frac{9}{4}}B^{-\frac{6}{4}}-1},\right. \\
&&\qquad \left. \frac{17}{A^{-\frac{9}{4}}B^{-\frac{7}{4}}-1},\text{\quad }%
\frac{18}{A^{-\frac{10}{4}}B^{-\frac{7}{4}}-1},\text{\quad }\frac{19}{A^{-%
\frac{10}{4}}B^{-\frac{8}{4}}-1},\right. \\
&&\qquad \left. \frac{20}{A^{-\frac{10}{4}}B^{-\frac{9}{4}}-1},\text{\quad }%
\frac{21}{A^{-\frac{10}{4}}B^{-\frac{10}{4}}-1}\right\}
\end{eqnarray*}
so that
\[
F_{1}(k)\leq F_{1}(m)
\]
whenever
\[
m\geq 65\quad \text{and}\quad k\geq m+11.
\]
This completes the proof.\endproof
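As a quick numerical illustration of $F_{1}$ (a floating-point editorial check via log-gamma, not part of the proof), the sketch below evaluates $F_{1}(m)$ and confirms the decay property $F_{1}(m+11)\leq F_{1}(m)$ at a few sample points, as well as the numerical values quoted in the next proof.

```python
import math

S = math.sqrt(17)
P = (3 + S) / (2 * S)    # lambda_2 / (lambda_2 - lambda_1) = 0.8638...
Q = (S - 3) / (2 * S)    # -lambda_1 / (lambda_2 - lambda_1) = 0.1361...

def F1(m):
    """F_1(m) = 192 m^3 C(m, [0.7m]) P^[0.7m] Q^(m - [0.7m]), via logs."""
    a = (7 * m) // 10    # exact integer value of [0.7 m]
    log_binom = (math.lgamma(m + 1) - math.lgamma(a + 1)
                 - math.lgamma(m - a + 1))
    return math.exp(math.log(192) + 3 * math.log(m) + log_binom
                    + a * math.log(P) + (m - a) * math.log(Q))
```

The floor $[0.7m]$ is computed as the integer quotient $(7m)\,\mathrm{div}\,10$ to avoid floating-point rounding at multiples of ten.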
\begin{lemma}
\label{nonsingular}Let $\Delta (m)$ be the function defined in Lemma \ref
{Lemm1}. Then
\[
\Delta (m)^{-1}\neq 4,-\frac{4}{3}(m-3)\quad \text{for all }m.
\]
Thus the matrices $B_{m}(2),$ $m\geq 3,$ and $B_{m}(3),$ $m\geq 4,$ are
nonsingular.
\end{lemma}
\proof
We define a function $\delta _{m}$ as follows:
\[
\Delta (m)^{-1}=(4-m)(1-\delta _{m})
\]
so that
\begin{align}
& (4-m)\Delta (m) \nonumber \\
& =(1-\delta _{m})^{-1} \nonumber \\
& =\sum_{k=0}^{m}\frac{\binom{m}{k}\left( \frac{\lambda _{2}}{\lambda
_{2}-\lambda _{1}}\right) ^{k}\left( \frac{-\lambda _{1}}{\lambda
_{2}-\lambda _{1}}\right) ^{m-k}}{1-\frac{k}{m}\frac{m\lambda _{1}}{m-4}-(1-%
\frac{k}{m})\frac{m\lambda _{2}}{m-4}} \tag*{(1.10)} \label{average}
\end{align}
where
\begin{eqnarray*}
\lambda _{1} &=&\frac{3-\sqrt{17}}{2}=-0.5615\ldots \\
\lambda _{2} &=&\frac{3+\sqrt{17}}{2}=3.5615\ldots \\
\frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}} &=&\frac{3+\sqrt{17}}{2\sqrt{%
17}}=0.8638\ldots \\
\frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}} &=&\frac{-3+\sqrt{17}}{2%
\sqrt{17}}=0.1361\ldots .
\end{eqnarray*}
Note that
\[
\sum_{k=0}^{m}\binom{m}{k}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda
_{1}}\right) ^{k}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}%
\right) ^{m-k}=\left( \frac{\lambda _{2}-\lambda _{1}}{\lambda _{2}-\lambda
_{1}}\right) ^{m}=1.
\]
Thus the summation \ref{average} is an average of the function
\begin{equation}
\frac{1}{1-\frac{m\lambda _{1}}{m-4}\frac{k}{m}-\frac{m\lambda _{2}}{m-4}(1-%
\frac{k}{m})}=\frac{m-4}{m\sqrt{17}}\left( \frac{k}{m}-\frac{1}{2}-\frac{m+8%
}{2m\sqrt{17}}\right) ^{-1} \nonumber
\end{equation}
under the binomial distribution
\[
\binom{m}{k}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}}\right)
^{k}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right) ^{m-k}.
\]
In the equation \ref{average}, the function
\begin{equation}
\frac{1}{1-\frac{m\lambda _{1}}{m-4}X-\frac{m\lambda _{2}}{m-4}(1-X)}
\tag*{(1.11)} \label{singular}
\end{equation}
has a singular point at the value
\[
X=\frac{m+8+m\sqrt{17}}{2m\sqrt{17}}.
\]
But, since $\sqrt{17}$ is irrational, we easily see that
\[
\frac{k}{m}\neq \frac{m+8+m\sqrt{17}}{2m\sqrt{17}}
\]
for each $k=0,\cdots ,m.$
Then
\begin{eqnarray*}
&&(1-\delta _{m})^{-1}-1 \\
&=&\delta _{m}(1-\delta _{m})^{-1} \\
&=&-\sum_{k=0}^{m}\frac{\frac{k}{m}-\frac{1}{2}-\frac{3}{2\sqrt{17}}}{\frac{k%
}{m}-\frac{1}{2}-\frac{m+8}{2m\sqrt{17}}}\cdot \binom{m}{k}\left( \frac{%
\lambda _{2}}{\lambda _{2}-\lambda _{1}}\right) ^{k}\left( \frac{-\lambda
_{1}}{\lambda _{2}-\lambda _{1}}\right) ^{m-k}.
\end{eqnarray*}
For $m\geq 13,$ we have
\begin{equation}
\frac{1}{2}+\frac{m+8}{2m\sqrt{17}}\leq 0.7\leq \frac{1}{2}+\frac{3}{2\sqrt{%
17}} \tag*{(1.12)} \label{inequality}
\end{equation}
so that
\begin{eqnarray*}
&&\sum_{k=0}^{m}\frac{\left| \frac{k}{m}-\frac{1}{2}-\frac{3}{2\sqrt{17}}%
\right| }{\left| \frac{k}{m}-\frac{1}{2}-\frac{m+8}{2m\sqrt{17}}\right| }%
\cdot \binom{m}{k}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}}%
\right) ^{k}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right)
^{m-k} \\
&=&\sum_{k=0}^{\left[ 0.7m\right] }\frac{\left| \frac{k}{m}-\frac{1}{2}-%
\frac{3}{2\sqrt{17}}\right| }{\left| \frac{k}{m}-\frac{1}{2}-\frac{m+8}{2m%
\sqrt{17}}\right| }\cdot \binom{m}{k}\left( \frac{\lambda _{2}}{\lambda
_{2}-\lambda _{1}}\right) ^{k}\left( \frac{-\lambda _{1}}{\lambda
_{2}-\lambda _{1}}\right) ^{m-k} \\
&&+\sum_{k=\left[ 0.7m\right] +1}^{m}\frac{\left| \frac{k}{m}-\frac{1}{2}-%
\frac{3}{2\sqrt{17}}\right| }{\left| \frac{k}{m}-\frac{1}{2}-\frac{m+8}{2m%
\sqrt{17}}\right| }\cdot \binom{m}{k}\left( \frac{\lambda _{2}}{\lambda
_{2}-\lambda _{1}}\right) ^{k}\left( \frac{-\lambda _{1}}{\lambda
_{2}-\lambda _{1}}\right) ^{m-k}
\end{eqnarray*}
where the first summation contains the singular terms as $m\rightarrow
\infty .$
By Lemma \ref{17}, we have the following estimate:
\begin{align}
& \frac{1}{\left| \frac{k}{m}-\frac{1}{2}-\frac{m+8}{2m\sqrt{17}}\right| }
\nonumber \\
& =\frac{34m}{(m+8)\left| \sqrt{17}-\frac{17(2k-m)}{m+8}\right| } \nonumber
\\
& <17^{2}m(m+8). \tag*{(1.13)} \label{estimate}
\end{align}
By the inequality \ref{inequality} and the estimate \ref{estimate}, we
obtain
\begin{align}
& \sum_{k=0}^{m}\frac{\left| \frac{k}{m}-\frac{1}{2}-\frac{3}{2\sqrt{17}}%
\right| }{\left| \frac{k}{m}-\frac{1}{2}-\frac{m+8}{2m\sqrt{17}}\right| }%
\cdot \binom{m}{k}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}}%
\right) ^{k}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right)
^{m-k} \nonumber \\
& \leq \frac{17\sqrt{17}(3+\sqrt{17})}{2}m(m+8)\sum_{k=0}^{\left[
0.7m\right] }\binom{m}{k}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}%
}\right) ^{k}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right)
^{m-k} \nonumber \\
& +\left( \frac{1}{5}-\frac{m+8}{2m\sqrt{17}}\right) ^{-1}\sum_{k=\left[
0.7m\right] +1}^{m}\left| \frac{k}{m}-\frac{1}{2}-\frac{3}{2\sqrt{17}}\right|
\nonumber \\
& \hspace{2in}\times \binom{m}{k}\left( \frac{\lambda _{2}}{\lambda
_{2}-\lambda _{1}}\right) ^{k}\left( \frac{-\lambda _{1}}{\lambda
_{2}-\lambda _{1}}\right) ^{m-k}. \tag*{(1.14)} \label{home}
\end{align}
Note that the binomial distribution
\begin{equation}
\binom{m}{k}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}}\right)
^{k}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right) ^{m-k}
\tag*{(1.15)} \label{binary}
\end{equation}
increases in $k$ up to its mean at
\[
\frac{k}{m}=\frac{1}{2}+\frac{3}{2\sqrt{17}}\geq 0.7.
\]
Thus we obtain, for $m\geq 100,$
\begin{eqnarray*}
&&\frac{17\sqrt{17}(3+\sqrt{17})m(m+8)}{2}\sum_{k=0}^{\left[ 0.7m\right] }%
\binom{m}{k}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}}\right)
^{k}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right) ^{m-k} \\
&\leq &192m^{3}\binom{m}{\left[ 0.7m\right] }\left( \frac{\lambda _{2}}{%
\lambda _{2}-\lambda _{1}}\right) ^{\left[ 0.7m\right] }\left( \frac{%
-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right) ^{m-\left[ 0.7m\right] }
\end{eqnarray*}
by the following inequality
\[
\frac{17\sqrt{17}(3+\sqrt{17})}{2}\left[ 0.7m\right] m(m+8)\leq
192m^{3}\quad \text{for }m\geq 100.
\]
By numerical computation, we obtain
\begin{eqnarray*}
F_{1}(100) &=&2114.7\ldots \\
F_{1}(200) &=&1.5207\ldots \\
F_{1}(300) &\leq &5.3215\times 10^{-4}.
\end{eqnarray*}
Then, by Lemma \ref{f1}, we obtain the following numerical estimate:
\begin{equation}
F_{1}(m)\leq 5.33\times 10^{-4}\quad \text{for }m\geq 400. \tag*{(1.16)}
\label{f11}
\end{equation}
For the second part of the inequality \ref{home}, the Cauchy--Schwarz
inequality gives the following estimate:
\begin{eqnarray*}
&&\sum_{k=\left[ 0.7m\right] +1}^{m}\left| \frac{k}{m}-\frac{1}{2}-\frac{3}{2\sqrt{17}}%
\right| \binom{m}{k}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}}%
\right) ^{k}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right)
^{m-k} \\
&\leq &\sqrt{\sum_{k=0}^{m}\left( \frac{k}{m}-\frac{1}{2}-\frac{3}{2\sqrt{17}%
}\right) ^{2}\binom{m}{k}\left( \frac{\lambda _{2}}{\lambda _{2}-\lambda _{1}%
}\right) ^{k}\left( \frac{-\lambda _{1}}{\lambda _{2}-\lambda _{1}}\right)
^{m-k}} \\
&=&\sqrt{\frac{2}{17m}}.
\end{eqnarray*}
We easily verify that
\[
F_{2}(m)\equiv \left( \frac{1}{5}-\frac{m+8}{2m\sqrt{17}}\right) ^{-1}\sqrt{%
\frac{2}{17m}}\searrow 0\quad \text{as }m\rightarrow \infty .
\]
By numerical computation, we obtain
\begin{align}
F_{2}(400)& =0.2247\ldots \nonumber \\
F_{2}(600)& =0.1815\ldots \nonumber \\
F_{2}(800)& =0.1564\ldots . \tag*{(1.17)} \label{f12}
\end{align}
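The displayed values \ref{f12}, and the monotone decrease of $F_{2}$, can be reproduced directly (an editorial float check, not part of the proof):

```python
import math

def F2(m):
    """F_2(m) = (1/5 - (m+8)/(2 m sqrt(17)))^{-1} sqrt(2/(17 m))."""
    s = math.sqrt(17)
    return math.sqrt(2 / (17 * m)) / (1 / 5 - (m + 8) / (2 * m * s))
```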
Note that we have the following estimate
\[
\left| \delta _{m}\right| \leq \frac{\left| F(m)\right| }{1-\left|
F(m)\right| }
\]
whenever
\[
F(m)\equiv F_{1}(m)+F_{2}(m)<1.
\]
Thus we obtain
\[
\left| \delta _{m}\right| \leq 0.2
\]
whenever
\[
\left| F(m)\right| \leq \frac{1}{6}=0.1666\ldots .
\]
Therefore, by the numerical results in \ref{f11} and \ref{f12},
\[
\left| \delta _{m}\right| \leq 0.2\quad \text{for all }m\geq 800.
\]
Hence, it suffices to compute the value of the function $\Delta (m)$
numerically for $m\leq 800.$
Indeed, numerical computation up to $m=800$ confirms the following bound on
the function $\delta _{m}$:
\begin{equation}
\left| \delta _{m}\right| \leq 0.2 \tag*{(1.18)} \label{computer1}
\end{equation}
for $m\geq 30$ so that
\begin{equation}
\Delta (m)^{-1}\leq -20. \nonumber
\end{equation}
Thus we easily see
\[
\Delta (m)^{-1}\neq 4,-\frac{4}{3}(m-3)\quad \text{for }m\geq 30.
\]
Then we need to check
\[
\Delta (m)^{-1}=\frac{\det E_{m}(m+1)}{\det E_{m}(m)}\neq 4,-\frac{4}{3}%
(m-3)\quad \text{for }1\leq m\leq 29,
\]
or, equivalently,
\begin{eqnarray*}
\eta (m) &=&\frac{\det E_{m}(m)}{\det E_{m}(m-1)}=\frac{2m}{4-m-\Delta
(m)^{-1}} \\
&\neq &-2\text{ or }6\quad \text{for }1\leq m\leq 29.
\end{eqnarray*}
We may compute the value $\eta (m)$ for $1\leq m\leq 29$ by using the
following recurrence relation
\[
\det E_{m}(s+1)=(2m+4-3s)\det E_{m}(s)-2s(m-s+1)\det E_{m}(s-1)
\]
for $s=2,\cdots ,m,$ with the initial values
\[
\det E_{m}(1)=2m+4\quad \text{and}\quad \det E_{m}(2)=4(m+1)^{2}
\]
where
\[
E_{m}(s+1)=\left(
\begin{array}{ccccc}
2m-3s+4 & 2(m-s+1) & 0 & \cdots & 0 \\
s & 2m-3s+7 & 2(m-s+2) & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & 0 & 1 & 2m+4
\end{array}
\right) .
\]
Then we obtain
\begin{equation}
\begin{tabular}{rrcrrr}
$\eta (1)=$ & $2.66\ldots $ & $\hspace{0.3cm}\eta (11)=$ & $-7.14\ldots $ & $%
\hspace{0.3cm}\eta (21)=$ & $-1.65\ldots $ \\
$\eta (2)=$ & $4.5\hspace{0.55cm}\;$ & $\hspace{0.3cm}\eta (12)=$ & $%
39.86\ldots $ & $\hspace{0.3cm}\eta (22)=$ & $-11.21\ldots $ \\
$\eta (3)=$ & $2.75\;\quad $ & $\hspace{0.3cm}\eta (13)=$ & $-1.96\ldots $ &
$\hspace{0.3cm}\eta (23)=$ & $-31.81\ldots $ \\
$\eta (4)=$ & $0.36\ldots $ & $\hspace{0.3cm}\eta (14)=$ & $-9.84\ldots $ & $%
\hspace{0.3cm}\eta (24)=$ & $-7.43\ldots $ \\
$\eta (5)=$ & $-5.24\ldots $ & $\hspace{0.3cm}\eta (15)=$ & $19.83\ldots $ &
$\hspace{0.3cm}\eta (25)=$ & $-14.96\ldots $ \\
$\eta (6)=$ & $12.05\ldots $ & $\hspace{0.3cm}\eta (16)=$ & $-4.61\ldots $ &
$\hspace{0.3cm}\eta (26)=$ & $118.96\ldots $ \\
$\eta (7)=$ & $0.64\ldots $ & $\hspace{0.3cm}\eta (17)=$ & $-13.56\ldots $ &
$\hspace{0.3cm}\eta (27)=$ & $-12.12\ldots $ \\
$\eta (8)=$ & $-5.36\ldots $ & $\hspace{0.3cm}\eta (18)=$ & $6.45\ldots $ & $%
\hspace{0.3cm}\eta (28)=$ & $-19.25\ldots $ \\
$\eta (9)=$ & $35.53\ldots $ & $\hspace{0.3cm}\eta (19)=$ & $-7.76\ldots $ &
$\hspace{0.3cm}\eta (29)=$ & $-3.97\ldots $ \\
$\eta (10)=$ & $-0.11\ldots $ & $\hspace{0.3cm}\eta (20)=$ & $-19.12\ldots $
& $\hspace{0.3cm}\eta (30)=$ & $-16.30\ldots $%
\end{tabular}
\tag*{(1.19)} \label{computer2}
\end{equation}
This completes the proof.\endproof
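The table \ref{computer2} can be regenerated exactly from the stated recurrence with rational arithmetic; the sketch below (an editorial cross-check, not part of the proof, standard library only) also confirms that $\eta (m)$ avoids the excluded values $-2$ and $6$.

```python
from fractions import Fraction

def dets(m):
    """d[s] = det E_m(s) for s = 1..m+1, via the recurrence in the proof."""
    d = {1: Fraction(2 * m + 4), 2: Fraction(4 * (m + 1) ** 2)}
    for s in range(2, m + 1):
        d[s + 1] = (2 * m + 4 - 3 * s) * d[s] - 2 * s * (m - s + 1) * d[s - 1]
    return d

def eta(m):
    """eta(m) = det E_m(m) / det E_m(m-1), exact; defined for m >= 2."""
    d = dets(m)
    return d[m] / d[m - 1]
```

For instance, $\eta (2)=36/8=4.5$ and $\eta (3)=176/64=2.75$ exactly, matching the table.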
\section{Local automorphism group of a real hypersurface}
\subsection{Polynomial Identities}
We shall use the following notations:
\begin{eqnarray*}
O(k+1) &=&\sum_{s+2t\geq k+1}O\left( \left| z\right| ^{s}\left| w\right|
^{t}\right) \\
O_{\times }(k+1) &=&\left( O(k),\cdots ,O(k),O(k+1)\right) .
\end{eqnarray*}
\begin{lemma}
\label{B}Let $M$ be a nondegenerate analytic real hypersurface defined near
the origin by
\begin{equation}
v=F(z,\bar{z},u),\quad \left. F\right| _{0}=\left. dF\right| _{0}=0
\nonumber
\end{equation}
and $\phi $ be a biholomorphic mapping near the origin such that the
transformed real hypersurface $\phi (M)$ is defined by the equation
\begin{equation}
v=\langle z,z\rangle +F^{*}(z,\bar{z},u)+O(k+1). \nonumber \label{3.7}
\end{equation}
Suppose that the equation
\[
v=\langle z,z\rangle +F^{*}(z,\bar{z},u)
\]
is in normal form. Then there is a normalization $\varphi $ of $M$ such that
\[
\varphi =\phi +O_{\times }(k+1).
\]
Further, suppose that the normalization $\varphi $ transforms $M$ to a real
hypersurface $M^{\prime }$ in normal form defined by
\[
v=\langle z,z\rangle +F^{\prime }(z,\bar{z},u).
\]
Then
\[
F^{\prime }(z,\bar{z},u)=F^{*}(z,\bar{z},u)+O(k+1).
\]
\end{lemma}
The proof of Lemma \ref{B} is given in the paper \cite{Pa2}.
\begin{lemma}
\label{Lemma-a}Let $M$ be a real hypersurface in normal form defined by the
equation
\[
v=\langle z,z\rangle +\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z},u)+O(l+2),
\]
where, for every complex number $\mu ,$%
\[
F_{st}(\mu z,\mu \overline{z},\mu ^{2}u)=\mu ^{l}F_{st}(z,\overline{z},u).
\]
Let $\phi $ be a normalization of $M$ with initial value $(id_{n\times
n},a,1,0)\in H$ such that $\phi $ transforms $M$ to a real hypersurface in
normal form defined by the equation
\[
v=\langle z,z\rangle +F^{*}(z,\overline{z},u).
\]
Then
\[
F^{*}(z,\overline{z},u)=\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z}%
,u)+O(l+2)
\]
if and only if
\begin{align}
-2i(\langle z,a\rangle -\langle a,z\rangle )\sum_{\min (s,t)\geq 2}F_{st}(z,%
\overline{z},u) \nonumber \\
+\sum_{\min (s-1,t)\geq 2}\sum_{\alpha }\left( \frac{\partial F_{st}}{%
\partial z^{\alpha }}\right) (z,\overline{z},u)a^{\alpha }(u+i\langle
z,z\rangle ) \nonumber \\
+2i\sum_{t\geq 2}\sum_{\alpha }\left( \frac{\partial F_{2t}}{\partial
z^{\alpha }}\right) (z,\overline{z},u)a^{\alpha }\langle z,z\rangle
\nonumber \\
+\sum_{\min (s,t-1)\geq 2}\sum_{\alpha }\left( \frac{\partial F_{st}}{%
\partial \overline{z}^{\alpha }}\right) (z,\overline{z},u)\overline{a}%
^{\alpha }(u-i\langle z,z\rangle ) \nonumber \\
-2i\sum_{s\geq 2}\sum_{\alpha }\left( \frac{\partial F_{s2}}{\partial
\overline{z}^{\alpha }}\right) (z,\overline{z},u)\overline{a}^{\alpha
}\langle z,z\rangle \nonumber \\
+\frac{i}{2}\sum_{\min (s,t)\geq 2}\left( \frac{\partial F_{st}}{\partial u}%
\right) (z,\overline{z},u)\left\{ \langle z,a\rangle (u+i\langle z,z\rangle
)-\langle a,z\rangle (u-i\langle z,z\rangle )\right\} \nonumber \\
+G_{l+1}(z,\overline{z},u) \nonumber \\
=0 \tag*{(2.1)} \label{lem-a}
\end{align}
where, for $l=2k-1,$%
\begin{eqnarray*}
G_{l+1}(z,\overline{z},u)=\frac{g}{2}\{(k-1)\langle z,z\rangle
+iu\}(u+i\langle z,z\rangle )^{k-1} \\
+\frac{g}{2}\{(k-1)\langle z,z\rangle -iu\}(u-i\langle z,z\rangle )^{k-1}
\end{eqnarray*}
and, for $l=2k,$%
\begin{eqnarray*}
G_{l+1}(z,\overline{z},u)=\langle \kappa ,z\rangle (u+i\langle z,z\rangle
)^{k}+\langle z,\kappa \rangle (u-i\langle z,z\rangle )^{k} \\
+2ik\langle z,z\rangle \langle z,\kappa \rangle (u+i\langle z,z\rangle
)^{k-1} \\
-2ik\langle z,z\rangle \langle \kappa ,z\rangle (u-i\langle z,z\rangle
)^{k-1} \\
-\langle z,\kappa \rangle (u+i\langle z,z\rangle )^{k}-\langle \kappa
,z\rangle (u-i\langle z,z\rangle )^{k}
\end{eqnarray*}
and
\begin{eqnarray*}
\langle \kappa ,z\rangle =\frac{u^{3-k}}{4k(k-1)(n+1)(n+2)}\left\{
\sum_{\alpha }a^{\alpha }\Delta ^{2}\left( \frac{\partial F_{33}}{\partial
z^{\alpha }}\right) (z,\overline{z},u)\right. \\
\hspace{2in}\left. +\sum_{\alpha }\overline{a}^{\alpha }\Delta ^{2}\left(
\frac{\partial F_{24}}{\partial \overline{z}^{\alpha }}\right) (z,\overline{z%
},u)\right\} \\
g=\frac{u^{4-k}}{2k(k-1)(k-2)n(n+1)(n+2)}\left\{ \sum_{\alpha }a^{\alpha
}\Delta ^{3}\left( \frac{\partial F_{43}}{\partial z^{\alpha }}\right) (z,%
\overline{z},u)\right. \\
\hspace{2.2in}\left. +\sum_{\alpha }\overline{a}^{\alpha }\Delta ^{3}\left(
\frac{\partial F_{34}}{\partial \overline{z}^{\alpha }}\right) (z,\overline{z%
},u)\right\} .
\end{eqnarray*}
\end{lemma}
\proof
For the initial value $(id_{n\times n},a,1,0)\in H,$ we have the following
decomposition (cf.\ \cite{Pa1}):
\[
\phi =E\circ \psi
\]
where $E$ is a normalization with identity initial value and
\[
\psi :\left\{
\begin{array}{c}
z^{*}=\frac{z-aw}{1+2i\langle z,a\rangle -i\langle a,a\rangle w} \\
w^{*}=\frac{\rho w}{1+2i\langle z,a\rangle -i\langle a,a\rangle w}
\end{array}
\right. .
\]
The mapping $\psi $ transforms $M$ to a real hypersurface $M^{\prime }$
defined up to $O(l+2)$ by the equation
\begin{eqnarray*}
v &=&\langle z,z\rangle +F_{l}(z,\overline{z},u) \\
&&-2i(\langle z,a\rangle -\langle a,z\rangle )F_{l}(z,\overline{z},u) \\
&&+\sum_{\alpha }\left( \frac{\partial F_{l}}{\partial z^{\alpha }}\right)
(z,\overline{z},u)a^{\alpha }(u+i\langle z,z\rangle ) \\
&&+\sum_{\alpha }\left( \frac{\partial F_{l}}{\partial \overline{z}^{\alpha }%
}\right) (z,\overline{z},u)\overline{a}^{\alpha }(u-i\langle z,z\rangle ) \\
&&+\frac{i}{2}\left( \frac{\partial F_{l}}{\partial u}\right) (z,\overline{z}%
,u)\{\langle z,a\rangle (u+i\langle z,z\rangle )-\langle a,z\rangle
(u-i\langle z,z\rangle )\} \\
&&+O(l+2)
\end{eqnarray*}
where
\[
F_{l}(z,\overline{z},u)=\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z},u).
\]
By virtue of Lemma \ref{B}, we normalize $M^{\prime }$ up to $O(l+2)$ by a
mapping $h=(f,g)$ satisfying
\begin{eqnarray*}
\left( \left. \frac{\partial f}{\partial z}\right| _{0}\right)
&=&id_{n\times n},\quad \left( \left. \frac{\partial f}{\partial w}\right|
_{0}\right) =0, \\
\Re \left( \left. \frac{\partial g}{\partial w}\right| _{0}\right)
&=&1,\quad \Re \left( \left. \frac{\partial ^{2}g}{\partial w^{2}}\right|
_{0}\right) =0,
\end{eqnarray*}
so that we obtain
\begin{align}
F^{*}(z,\overline{z},u)& =F_{l}(z,\overline{z},u)-2i(\langle z,a\rangle
-\langle a,z\rangle )\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z},u)
\nonumber \\
& +\sum_{\min (s-1,t)\geq 2}\sum_{\alpha }\left( \frac{\partial F_{st}}{%
\partial z^{\alpha }}\right) (z,\overline{z},u)a^{\alpha }(u+i\langle
z,z\rangle ) \nonumber \\
& +2i\sum_{t\geq 2}\sum_{\alpha }\left( \frac{\partial F_{2t}}{\partial
z^{\alpha }}\right) (z,\overline{z},u)a^{\alpha }\langle z,z\rangle
\nonumber \\
& +\sum_{\min (s,t-1)\geq 2}\sum_{\alpha }\left( \frac{\partial F_{st}}{%
\partial \overline{z}^{\alpha }}\right) (z,\overline{z},u)\overline{a}%
^{\alpha }(u-i\langle z,z\rangle ) \nonumber \\
& -2i\sum_{s\geq 2}\sum_{\alpha }\left( \frac{\partial F_{s2}}{\partial
\overline{z}^{\alpha }}\right) (z,\overline{z},u)\overline{a}^{\alpha
}\langle z,z\rangle \nonumber \\
& +\frac{i}{2}\sum_{\min (s,t)\geq 2}\left( \frac{\partial F_{st}}{\partial u%
}\right) (z,\overline{z},u)\{\langle z,a\rangle (u+i\langle z,z\rangle )
\nonumber \\
& \hspace{5cm}-\langle a,z\rangle (u-i\langle z,z\rangle )\} \nonumber \\
& +G_{l+1}(z,\overline{z},u)+O(l+2). \tag*{(2.2)} \label{sourse}
\end{align}
where, for $l=2k-1,$%
\begin{eqnarray*}
G_{l+1}(z,\overline{z},u) &=&\langle \chi z,z\rangle (u+i\langle z,z\rangle
)^{k-1}+\langle z,\chi z\rangle (u-i\langle z,z\rangle )^{k-1} \\
&&-\frac{g}{2i}(u+i\langle z,z\rangle )^{k}+\frac{g}{2i}(u-i\langle
z,z\rangle )^{k}
\end{eqnarray*}
and, for $l=2k,$%
\begin{eqnarray*}
G_{l+1}(z,\overline{z},u) &=&\langle \kappa ,z\rangle (u+i\langle z,z\rangle
)^{k}+\langle z,\kappa \rangle (u-i\langle z,z\rangle )^{k} \\
&&+2ik\langle z,z\rangle \langle z,\kappa \rangle (u+i\langle z,z\rangle
)^{k-1} \\
&&-2ik\langle z,z\rangle \langle \kappa ,z\rangle (u-i\langle z,z\rangle
)^{k-1} \\
&&-\langle z,\kappa \rangle (u+i\langle z,z\rangle )^{k}-\langle \kappa
,z\rangle (u-i\langle z,z\rangle )^{k}.
\end{eqnarray*}
Here the constants $\chi ,g,\kappa $ satisfy the conditions
\begin{gather*}
\langle \chi z,z\rangle +\langle z,\chi z\rangle =kg\langle z,z\rangle , \\
g\in \Bbb{R},\quad \kappa \in \Bbb{C}^{n},
\end{gather*}
and they are uniquely determined by the following conditions:
\[
\Delta F_{22}^{*}=\Delta ^{2}F_{23}^{*}=\Delta ^{3}F_{33}^{*}=0,
\]
where
\begin{eqnarray*}
F^{*}(z,\overline{z},u) &=&\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z},u) \\
&&+\sum_{\min (s,t)\geq 2}F_{st}^{*}(z,\overline{z},u)+O(l+2)
\end{eqnarray*}
and, for every complex number $\mu ,$%
\[
F_{st}^{*}(\mu z,\mu \overline{z},\mu ^{2}u)=\mu ^{l+1}F_{st}^{*}(z,%
\overline{z},u).
\]
Indeed, from the equality \ref{sourse}, we obtain
\begin{eqnarray*}
F_{22}^{*}(z,\overline{z},u) &=&2(k-1)i\langle \chi z,z\rangle \langle
z,z\rangle u^{k-2}-k(k-1)ig\langle z,z\rangle ^{2}u^{k-2} \\
F_{23}^{*}(z,\overline{z},u) &=&-2k(k-1)\langle \kappa ,z\rangle \langle
z,z\rangle ^{2}u^{k-2}+2i\langle a,z\rangle F_{22}(z,\overline{z},u) \\
&&+2i\sum_{\alpha }\left( \frac{\partial F_{22}}{\partial z^{\alpha }}%
\right) (z,\overline{z},u)a^{\alpha }\langle z,z\rangle -\frac{i}{2}\left(
\frac{\partial F_{22}}{\partial u}\right) (z,\overline{z},u)\langle
a,z\rangle u \\
&&+\sum_{\alpha }\left( \frac{\partial F_{33}}{\partial z^{\alpha }}\right)
(z,\overline{z},u)a^{\alpha }u+\sum_{\alpha }\left( \frac{\partial F_{24}}{%
\partial \overline{z}^{\alpha }}\right) (z,\overline{z},u)\overline{a}%
^{\alpha }u \\
F_{33}^{*}(z,\overline{z},u) &=&-\frac{k(k-1)(k-2)}{3}g\langle z,z\rangle
^{3}u^{k-3} \\
&&-2i\langle z,a\rangle F_{23}(z,\overline{z},u)+2i\langle a,z\rangle
F_{32}(z,\overline{z},u) \\
&&+\sum_{\alpha }\left( \frac{\partial F_{43}}{\partial z^{\alpha }}\right)
(z,\overline{z},u)a^{\alpha }u+i\sum_{\alpha }\left( \frac{\partial F_{32}}{%
\partial z^{\alpha }}\right) (z,\overline{z},u)a^{\alpha }\langle z,z\rangle
\\
&&+\sum_{\alpha }\left( \frac{\partial F_{34}}{\partial \overline{z}^{\alpha
}}\right) (z,\overline{z},u)\overline{a}^{\alpha }u-i\sum_{\alpha }\left(
\frac{\partial F_{23}}{\partial \overline{z}^{\alpha }}\right) (z,\overline{z%
},u)\overline{a}^{\alpha }\langle z,z\rangle \\
&&+\frac{i}{2}\left( \frac{\partial F_{23}}{\partial u}\right) (z,\overline{z%
},u)\langle z,a\rangle u-\frac{i}{2}\left( \frac{\partial F_{32}}{\partial u}%
\right) (z,\overline{z},u)\langle a,z\rangle u.
\end{eqnarray*}
Hence we obtain
\begin{eqnarray*}
\Delta F_{22}^{*}(z,\overline{z},u) &=&2(k-1)(n+2)i\langle \chi z,z\rangle
u^{k-2}+2(k-1)i\mathrm{Tr(}\chi \mathrm{)}\langle z,z\rangle u^{k-2} \\
&&-2k(k-1)(n+1)ig\langle z,z\rangle u^{k-2} \\
\Delta ^{2}F_{22}^{*}(z,\overline{z},u) &=&4(k-1)(n+1)i\mathrm{Tr(}\chi
\mathrm{)}u^{k-2}-2k(k-1)n(n+1)igu^{k-2} \\
\Delta ^{2}F_{23}^{*}(z,\overline{z},u) &=&-4k(k-1)(n+1)(n+2)\langle \kappa
,z\rangle u^{k-2} \\
&&+\sum_{\alpha }ua^{\alpha }\Delta ^{2}\left( \frac{\partial F_{33}}{%
\partial z^{\alpha }}\right) (z,\overline{z},u)+\sum_{\alpha }u\overline{a}%
^{\alpha }\Delta ^{2}\left( \frac{\partial F_{24}}{\partial \overline{z}%
^{\alpha }}\right) (z,\overline{z},u) \\
\Delta ^{3}F_{33}^{*}(z,\overline{z},u) &=&-2k(k-1)(k-2)n(n+1)(n+2)gu^{k-3}
\\
&&+\sum_{\alpha }ua^{\alpha }\Delta ^{3}\left( \frac{\partial F_{43}}{%
\partial z^{\alpha }}\right) (z,\overline{z},u)+\sum_{\alpha }u\overline{a}%
^{\alpha }\Delta ^{3}\left( \frac{\partial F_{34}}{\partial \overline{z}%
^{\alpha }}\right) (z,\overline{z},u)
\end{eqnarray*}
Note that the condition $\Delta F_{22}^{*}=0$ yields
\begin{equation}
2\langle \chi z,z\rangle =kg\langle z,z\rangle . \tag*{(2.3)}
\label{sourse4}
\end{equation}
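Indeed, from the expressions for $\Delta F_{22}^{*}$ and $\Delta
^{2}F_{22}^{*}$ above, the condition $\Delta F_{22}^{*}=0$ gives
\[
(n+2)\langle \chi z,z\rangle =\{k(n+1)g-\mathrm{Tr(}\chi \mathrm{)}\}\langle
z,z\rangle
\]
and $\mathrm{Tr(}\chi \mathrm{)}=\frac{nk}{2}g,$ and substituting the latter
into the former yields the equality \ref{sourse4}.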
The condition $\Delta ^{2}F_{23}^{*}=\Delta ^{3}F_{33}^{*}=0,$ with the
equality \ref{sourse4}, uniquely determines the constants $\chi ,\kappa ,g.$
Then we easily see that
\[
F^{*}(z,\overline{z},u)=\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z}%
,u)+O(l+2)
\]
if and only if
\[
\sum_{\min (s,t)\geq 2}F_{st}^{*}(z,\overline{z},u)=0.
\]
This completes the proof.\endproof
Note that, for an odd integer $l,$%
\[
G_{l+1}(z,\overline{z},u)=\sum_{\min (s,t)\geq 2,s=t}G_{st}(z,\overline{z}%
,u)
\]
and, for an even integer $l,$%
\[
G_{l+1}(z,\overline{z},u)=\sum_{\min (s,t)\geq 2,s=t\pm 1}G_{st}(z,\overline{%
z},u).
\]
\begin{lemma}
Let $M$ be a real hypersurface in normal form defined by the equation
\[
v=\langle z,z\rangle +F_{l}(z,\overline{z},u)+O(l+2)
\]
where
\[
F_{l}(z,\overline{z},u)=\sum_{s\geq 2}F_{ss}(z,\overline{z},u)
\]
and, for every complex number $\mu $,
\[
F_{l}(\mu z,\mu \overline{z},\mu ^{2}u)=\mu ^{l}F_{l}(z,\overline{z},u).
\]
Let $\phi $ be a normalization with initial value $(id_{n\times n},a,1,0)\in
H$ such that $\phi $ transforms $M$ to a real hypersurface $M^{\prime }$ in
normal form defined by the equation
\[
v=\langle z,z\rangle +F^{*}(z,\overline{z},u).
\]
Suppose that
\[
F^{*}(z,\overline{z},u)=F_{l}(z,\overline{z},u)+O(l+2).
\]
Then the following identity holds:
\[
\sum_{s=2}^{k}i^{k-s}\langle z,z\rangle ^{k-s}\sum_{\alpha }a^{\alpha }\frac{%
\partial }{\partial z^{\alpha }}\left\{ \langle z,z\rangle \left( \frac{%
F_{ss}(z,\overline{z},u)}{u^{k-s}}\right) \right\} =-(2i)^{k-1}\langle
\kappa ,z\rangle \langle z,z\rangle ^{k}
\]
where
\[
l=2k.
\]
\end{lemma}
\proof
By the condition
\[
F_{l}(z,\overline{z},u)=\sum_{s\geq 2}F_{ss}(z,\overline{z},u),
\]
the identity \ref{lem-a} in Lemma \ref{Lemma-a} comes to
\begin{align}
& 2i(\langle z,a\rangle -\langle a,z\rangle )\sum_{s\geq 2}F_{ss}(z,%
\overline{z},u) \nonumber \\
& -2i\langle z,z\rangle \sum_{\alpha }\left\{ \left( \frac{\partial F_{22}}{%
\partial z^{\alpha }}\right) (z,\overline{z},u)a^{\alpha }-\left( \frac{%
\partial F_{22}}{\partial \overline{z}^{\alpha }}\right) (z,\overline{z},u)%
\overline{a}^{\alpha }\right\} \nonumber \\
& -\sum_{s\geq 3}\sum_{\alpha }\left\{ \left( \frac{\partial F_{ss}}{%
\partial z^{\alpha }}\right) (z,\overline{z},u)a^{\alpha }(u+i\langle
z,z\rangle )\right. \nonumber \\
& \hspace{3cm}\left. +\left( \frac{\partial F_{ss}}{\partial \overline{z}%
^{\alpha }}\right) (z,\overline{z},u)\overline{a}^{\alpha }(u-i\langle
z,z\rangle )\right\} \nonumber \\
& -\frac{i}{2}\sum_{s\geq 2}\left( \frac{\partial F_{ss}}{\partial u}\right)
(z,\overline{z},u)\{\langle z,a\rangle (u+i\langle z,z\rangle )-\langle
a,z\rangle (u-i\langle z,z\rangle )\} \nonumber \\
& =\langle \kappa ,z\rangle (u+i\langle z,z\rangle )^{k}+\langle z,\kappa
\rangle (u-i\langle z,z\rangle )^{k} \nonumber \\
& +2ik\langle z,z\rangle \langle z,\kappa \rangle (u+i\langle z,z\rangle
)^{k-1}-2ik\langle z,z\rangle \langle \kappa ,z\rangle (u-i\langle
z,z\rangle )^{k-1} \nonumber \\
& -\langle z,\kappa \rangle (u+i\langle z,z\rangle )^{k}-\langle \kappa
,z\rangle (u-i\langle z,z\rangle )^{k} \nonumber \\
& =\langle \kappa ,z\rangle \sum_{t=2}^{k}\{1+(-1)^{t}(2t-1)\}\binom{k}{t}%
u^{k-t}(i\langle z,z\rangle )^{t} \nonumber \\
& +\langle z,\kappa \rangle \sum_{t=2}^{k}\{1+(-1)^{t}(2t-1)\}\binom{k}{t}%
u^{k-t}(-i\langle z,z\rangle )^{t} \tag*{(2.4)} \label{lem-b}
\end{align}
Then, by Lemma \ref{Lemma-a}, the constant $\kappa $ is given by
\begin{equation}
\langle \kappa ,z\rangle =\frac{u^{2-k}}{4k(k-1)(n+1)(n+2)}\sum_{\alpha
}ua^{\alpha }\Delta ^{2}\left( \frac{\partial F_{33}}{\partial z^{\alpha }}%
\right) (z,\overline{z},u). \nonumber \label{b5}
\end{equation}
By collecting functions of type $(m+2,m+3)$ for $m=0,\cdots ,k-2$ in the
identity \ref{lem-b}, we obtain the following identities for each integer $%
s,$ $3\leq s\leq k$:
\begin{align}
& \langle z,z\rangle \left\{ \langle a,z\rangle \left( \frac{\partial
F_{s-1,s-1}}{\partial u}\right) (z,\overline{z},u)+2i\sum_{\alpha }a^{\alpha
}\left( \frac{\partial F_{ss}}{\partial z^{\alpha }}\right) (z,\overline{z}%
,u)\right\} \nonumber \\
& =4i\langle a,z\rangle F_{ss}(z,\overline{z},u)+4i\langle z,z\rangle
\sum_{\alpha }a^{\alpha }\left( \frac{\partial F_{ss}}{\partial z^{\alpha }}%
\right) (z,\overline{z},u) \nonumber \\
& +2\langle \kappa ,z\rangle \{1+(-1)^{s}(2s-1)\}\binom{k}{s}%
u^{k-s}(i\langle z,z\rangle )^{s} \nonumber \\
& -iu\left\{ \langle a,z\rangle \left( \frac{\partial F_{ss}}{\partial u}%
\right) (z,\overline{z},u)+2i\sum_{\alpha }a^{\alpha }\left( \frac{\partial
F_{s+1,s+1}}{\partial z^{\alpha }}\right) (z,\overline{z},u)\right\}
\tag*{(2.5)} \label{6.12}
\end{align}
and
\begin{align}
& 4i\langle a,z\rangle F_{22}(z,\overline{z},u)+4i\langle z,z\rangle
\sum_{\alpha }a^{\alpha }\left( \frac{\partial F_{22}}{\partial z^{\alpha }}%
\right) (z,\overline{z},u) \nonumber \\
& -4k(k-1)\langle \kappa ,z\rangle u^{k-2}\langle z,z\rangle ^{2} \nonumber
\\
& -iu\left\{ \langle a,z\rangle \left( \frac{\partial F_{22}}{\partial u}%
\right) (z,\overline{z},u)+2i\sum_{\alpha }a^{\alpha }\left( \frac{\partial
F_{33}}{\partial z^{\alpha }}\right) (z,\overline{z},u)\right\} \nonumber \\
& =0. \tag*{(2.6)} \label{6.11}
\end{align}
In the equality \ref{6.12}, we set
\[
F_{k+1,k+1}(z,\overline{z},u)=0.
\]
From the equalities \ref{6.12} and \ref{6.11}, we obtain the following
recurrence relation:
\begin{eqnarray*}
A(s) &=&iu^{-1}\langle z,z\rangle A(s-1) \\
&&+4u^{-1}\sum_{\alpha }a^{\alpha }\frac{\partial }{\partial z^{\alpha }}%
\left\{ \langle z,z\rangle F_{ss}(z,\overline{z},u)\right\} \\
&&-2i\langle \kappa ,z\rangle \{1+(-1)^{s}(2s-1)\}\binom{k}{s}%
u^{k-s-1}(i\langle z,z\rangle )^{s}
\end{eqnarray*}
for $s=2,\cdots ,k,$ and
\[
A(1)=A(k)=0.
\]
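Here $A(s)$ may be taken to be the quantity
\[
A(s)=\langle a,z\rangle \left( \frac{\partial F_{ss}}{\partial u}\right) (z,%
\overline{z},u)+2i\sum_{\alpha }a^{\alpha }\left( \frac{\partial F_{s+1,s+1}%
}{\partial z^{\alpha }}\right) (z,\overline{z},u)
\]
appearing in the last brace of the equality \ref{6.12}. Then the equality 
\ref{6.11} is the case $s=2$ of the recurrence with $A(1)=0,$ and $A(k)=0$
because $F_{kk}(z,\overline{z},u)$ does not depend on $u$ and $%
F_{k+1,k+1}(z,\overline{z},u)=0.$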
Thus we obtain the following identity:
\begin{align}
& \sum_{s=2}^{k}i^{k-s}\langle z,z\rangle ^{k-s}\sum_{\alpha }a^{\alpha }%
\frac{\partial }{\partial z^{\alpha }}\left\{ \langle z,z\rangle \left(
\frac{F_{ss}(z,\overline{z},u)}{u^{k-s}}\right) \right\} \nonumber \\
& =\frac{i^{k+1}}{2}\langle \kappa ,z\rangle \langle z,z\rangle
^{k}\sum_{s=2}^{k}\{1+(-1)^{s}(2s-1)\}\binom{k}{s} \nonumber \\
& =-(2i)^{k-1}\langle \kappa ,z\rangle \langle z,z\rangle ^{k}. \tag*{(2.7)}
\label{a.20}
\end{align}
This completes the proof.\endproof
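Note that the last equality in \ref{a.20} uses the elementary identity
\[
\sum_{s=2}^{k}\{1+(-1)^{s}(2s-1)\}\binom{k}{s}=2^{k},
\]
which follows from $\sum_{s=0}^{k}\binom{k}{s}=2^{k}$ together with $%
\sum_{s=0}^{k}(-1)^{s}\binom{k}{s}=\sum_{s=0}^{k}(-1)^{s}s\binom{k}{s}=0$
for $k\geq 2,$ since the terms with $s=0,1$ vanish; hence $\frac{i^{k+1}}{2}%
\cdot 2^{k}=i^{k+1}2^{k-1}=-(2i)^{k-1}.$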
\begin{lemma}
\label{divided}Suppose that the functions $F_{ss}(z,\overline{z},u),$ $%
s=2,\cdots ,k,$ satisfy the equalities \ref{6.12}, where $l=2k$, and suppose
that
\[
a\neq 0
\]
and $F_{kk}(z,\overline{z},u)$ is divisible by $\langle z,z\rangle ^{m}$ for
an integer $0\leq m\leq k.$ Then each polynomial
\[
F_{ss}(z,\overline{z},u),\quad s=\max (k-m,2),\cdots ,k-1,
\]
is divisible by $\langle z,z\rangle ^{m-k+s}$.
\end{lemma}
\proof
The equality \ref{6.12} yields, for $s=3,\cdots ,k,$%
\begin{eqnarray*}
&&(k-s+1)\langle z,z\rangle \langle a,z\rangle F_{s-1,s-1}(z,\overline{z},u)
\\
&=&-i(k-s-4)\langle a,z\rangle F_{ss}(z,\overline{z},u)+2i\langle z,z\rangle
\sum_{\alpha }a^{\alpha }\left( \frac{\partial F_{ss}}{\partial z^{\alpha }}%
\right) (z,\overline{z},u) \\
&&+2\langle \kappa ,z\rangle \langle z,z\rangle ^{s}\{1+(-1)^{s}(2s-1)\}%
\binom{k}{s}i^{s}u^{k-s} \\
&&+2u\sum_{\alpha }a^{\alpha }\left( \frac{\partial F_{s+1,s+1}}{\partial
z^{\alpha }}\right) (z,\overline{z},u).
\end{eqnarray*}
Since $\langle a,z\rangle $ is not a divisor of $\langle z,z\rangle ,$ this
equality yields the desired result. This completes the proof.\endproof
\begin{lemma}
Let $M$ be a real hypersurface in normal form defined by the equation
\[
v=\langle z,z\rangle +F_{l}(z,\overline{z},u)+O(l+2)
\]
where
\[
F_{l}(z,\overline{z},u)=\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z},u)
\]
and, for every complex number $\mu $,
\[
F_{l}(\mu z,\mu \overline{z},\mu ^{2}u)=\mu ^{l}F_{l}(z,\overline{z},u).
\]
Let $\phi $ be a normalization with initial value $(id_{n\times n},a,1,0)\in
H$ such that $\phi $ transforms $M$ to a real hypersurface $M^{\prime }$ in
normal form defined by the equation
\[
v=\langle z,z\rangle +F^{*}(z,\overline{z},u).
\]
Suppose that
\[
F^{*}(z,\overline{z},u)=F_{l}(z,\overline{z},u)+O(l+2)
\]
and the function $F_{l}(z,\overline{z},u)$ contains a nonzero function $%
F_{st}(z,\overline{z},u)$ of type $(s,t),$ $s\neq t.$ Then the following
identity holds:
\begin{equation}
\sum_{s=2}^{p}i^{p-s}\langle z,z\rangle ^{p-s}\sum_{\alpha }a^{\alpha }\frac{%
\partial }{\partial z^{\alpha }}\left\{ \langle z,z\rangle \left( \frac{%
F_{s,l-2p+s}(z,\overline{z},u)}{u^{p-s}}\right) \right\} =0 \nonumber
\end{equation}
where
\[
l-2p=\max \left\{ \left| t-s\right| :F_{st}(z,\overline{z},u)\neq 0\right\} .
\]
\end{lemma}
\proof
We easily verify that $p$ is an integer satisfying
\begin{equation}
2\leq p\leq \left[ \frac{l-1}{2}\right] . \tag*{(2.8)} \label{4.56}
\end{equation}
By collecting functions of type $(s,t)$ satisfying
\[
t-s=l-2p+1
\]
in the identity \ref{lem-a} in Lemma \ref{Lemma-a}, we obtain the following
identities for each integer $s,$ $3\leq s\leq p$:
\begin{align}
& \langle z,z\rangle \left\{ \langle a,z\rangle \left( \frac{\partial
F_{s-1,l-2p+s-1}}{\partial u}\right) (z,\overline{z},u)+2i\sum_{\alpha
}a^{\alpha }\left( \frac{\partial F_{s,l-2p+s}}{\partial z^{\alpha }}\right)
(z,\overline{z},u)\right\} \nonumber \\
& =4i\langle a,z\rangle F_{s,l-2p+s}(z,\overline{z},u)+4i\langle z,z\rangle
\sum_{\alpha }a^{\alpha }\left( \frac{\partial F_{s,l-2p+s}}{\partial
z^{\alpha }}\right) (z,\overline{z},u) \nonumber \\
& -iu\left\{ \langle a,z\rangle \left( \frac{\partial F_{s,l-2p+s}}{\partial
u}\right) (z,\overline{z},u)+2i\sum_{\alpha }a^{\alpha }\left( \frac{%
\partial F_{s+1,l-2p+s+1}}{\partial z^{\alpha }}\right) (z,\overline{z}%
,u)\right\} \tag*{(2.9)} \label{4.58}
\end{align}
and
\begin{align}
& 4i\langle a,z\rangle F_{2,l-2p+2}(z,\overline{z},u)+4i\langle z,z\rangle
\sum_{\alpha }a^{\alpha }\left( \frac{\partial F_{2,l-2p+2}}{\partial
z^{\alpha }}\right) (z,\overline{z},u) \nonumber \\
& -iu\left\{ \langle a,z\rangle \left( \frac{\partial F_{2,l-2p+2}}{\partial
u}\right) (z,\overline{z},u)+2i\sum_{\alpha }a^{\alpha }\left( \frac{%
\partial F_{3,l-2p+3}}{\partial z^{\alpha }}\right) (z,\overline{z}%
,u)\right\} \nonumber \\
& =0. \tag*{(2.10)} \label{4.55}
\end{align}
In the equality \ref{4.58}, we set
\[
F_{p+1,l-p+1}(z,\overline{z},u)=0.
\]
From the equalities \ref{4.58} and \ref{4.55}, we obtain the following
recurrence relation:
\begin{eqnarray*}
A(s) &=&iu^{-1}\langle z,z\rangle A(s-1) \\
&&+4u^{-1}\sum_{\alpha }a^{\alpha }\frac{\partial }{\partial z^{\alpha }}%
\left\{ \langle z,z\rangle F_{s,l-2p+s}(z,\overline{z},u)\right\}
\end{eqnarray*}
for $s=2,\cdots ,p,$ and
\[
A(1)=A(p)=0.
\]
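Here $A(s)$ may be taken to be the quantity
\[
A(s)=\langle a,z\rangle \left( \frac{\partial F_{s,l-2p+s}}{\partial u}%
\right) (z,\overline{z},u)+2i\sum_{\alpha }a^{\alpha }\left( \frac{\partial
F_{s+1,l-2p+s+1}}{\partial z^{\alpha }}\right) (z,\overline{z},u)
\]
appearing in the last brace of the equality \ref{4.58}; then $A(1)=0$
corresponds to the equality \ref{4.55}, and $A(p)=0$ holds because $%
F_{p,l-p}(z,\overline{z},u)$ does not depend on $u$ and $F_{p+1,l-p+1}(z,%
\overline{z},u)=0.$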
Thus we obtain the following identity:
\begin{equation}
\sum_{s=2}^{p}i^{p-s}\langle z,z\rangle ^{p-s}\sum_{\alpha }a^{\alpha }\frac{%
\partial }{\partial z^{\alpha }}\left\{ \langle z,z\rangle \left( \frac{%
F_{s,l-2p+s}(z,\overline{z},u)}{u^{p-s}}\right) \right\} =0. \tag*{(2.11)}
\label{a.19}
\end{equation}
This completes the proof.\endproof
\begin{lemma}
\label{divided2}Suppose that the functions $F_{st}(z,\overline{z},u)$
satisfy the equalities \ref{4.58}, and suppose that
\[
a\neq 0
\]
and $F_{p,l-p}(z,\overline{z},u)$ is divisible by $\langle z,z\rangle ^{m}$
for an integer $1\leq m\leq p.$ Then each polynomial
\[
F_{s,l-2p+s}(z,\overline{z},u),\quad s=\max (p-m,2),\cdots ,p-1,
\]
is divisible by $\langle z,z\rangle ^{m-p+s}$.
\end{lemma}
\proof
The equality \ref{4.58} yields, for $s=3,\cdots ,p,$%
\begin{eqnarray*}
&&(p-s+1)\langle z,z\rangle \langle a,z\rangle F_{s-1,l-2p+s-1}(z,\overline{z%
},u) \\
&=&-(p-s-4)i\langle a,z\rangle F_{s,l-2p+s}(z,\overline{z},u)+2i\langle
z,z\rangle \sum_{\alpha }a^{\alpha }\left( \frac{\partial F_{s,l-2p+s}}{%
\partial z^{\alpha }}\right) (z,\overline{z},u) \\
&&+2u\sum_{\alpha }a^{\alpha }\left( \frac{\partial F_{s+1,l-2p+s+1}}{%
\partial z^{\alpha }}\right) (z,\overline{z},u).
\end{eqnarray*}
Since $\langle a,z\rangle $ is not a divisor of $\langle z,z\rangle ,$ this
equality yields the desired result. This completes the proof.\endproof
\subsection{Injectivity of a Linear Mapping}
\begin{lemma}
\label{li-in}Let $l$ be a positive integer $\geq 4$ and $F_{2,l-2}(z,%
\overline{z},0)$ be a nonzero function of type $(2,l-2).$ Then the following
functions
\[
H_{\alpha }(z,\overline{z},0)=\frac{\partial }{\partial z^{\alpha }}\left\{
\langle z,z\rangle F_{2,l-2}(z,\overline{z},0)\right\} \quad \text{for }%
\alpha =1,\cdots ,n,
\]
are linearly independent.
\end{lemma}
\proof
Suppose the functions $H_{1}(z,\overline{z},0),\cdots ,H_{n}(z,\overline{z}%
,0)$ are linearly dependent over $\Bbb{C}$. Then there is a nonzero vector $%
a=(a^{\alpha })\in \Bbb{C}^{n}$ such that
\[
\sum_{\alpha }a^{\alpha }\frac{\partial }{\partial z^{\alpha }}\{\langle
z,z\rangle F_{2,l-2}(z,\overline{z},0)\}=0.
\]
Then we obtain
\begin{equation}
\langle a,z\rangle F_{2,l-2}(z,\overline{z},0)=\langle z,z\rangle
\sum_{\alpha }a^{\alpha }\left( \frac{\partial F_{2,l-2}}{\partial z^{\alpha
}}\right) (z,\overline{z},0). \tag*{(2.12)} \label{b1}
\end{equation}
Note that $\langle a,z\rangle $ is not a divisor of $\langle z,z\rangle $
whenever $a\neq 0.$ Otherwise there would be a vector $b\in \Bbb{C}^{n}$ so
that
\[
\langle z,z\rangle =\langle a,z\rangle \langle z,b\rangle .
\]
This contradicts the fact that the Hermitian form $\langle
z,z\rangle $ is nondegenerate. Hence the polynomial
\[
F_{2,l-2}(z,\overline{z},0)
\]
is divisible by $\langle z,z\rangle ,$ so that there is a polynomial $%
G_{1,l-3}(z,\overline{z},0)$ of type $(1,l-3)$ as follows:
\[
F_{2,l-2}(z,\overline{z},0)=\langle z,z\rangle G_{1,l-3}(z,\overline{z},0).
\]
Then the equality \ref{b1} comes to
\[
2\langle a,z\rangle G_{1,l-3}(z,\overline{z},0)=\langle z,z\rangle
\sum_{\alpha }a^{\alpha }\left( \frac{\partial G_{1,l-3}}{\partial z^{\alpha
}}\right) (z,\overline{z},0).
\]
Note that $G_{1,l-3}(z,\overline{z},0)$ is divisible by $\langle z,z\rangle $
as well so that there is a polynomial $G_{0,l-4}(z,\overline{z},0)$ as
follows:
\[
F_{2,l-2}(z,\overline{z},0)=\langle z,z\rangle ^{2}G_{0,l-4}(z,\overline{z}%
,0).
\]
Then the equality \ref{b1} comes to
\begin{equation}
\langle a,z\rangle G_{0,l-4}(z,\overline{z},0)=0. \tag*{(2.13)} \label{wed}
\end{equation}
Note that $\langle a,z\rangle \neq 0$ unless $a=0.$ Thus the equality \ref
{wed} yields
\[
F_{2,l-2}(z,\overline{z},0)=0.
\]
This contradicts the assumption $F_{2,l-2}(z,\overline{z},0)\neq
0.$ This completes the proof.\endproof
\begin{lemma}
\label{Theo1}Suppose that
\[
F_{l}(z,\overline{z},u)=\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z},u)
\]
where
\[
F_{l}(\mu z,\mu \overline{z},\mu ^{2}u)=\mu ^{l}F_{l}(z,\overline{z},u)
\]
and
\[
\Delta F_{22}=\Delta ^{2}F_{23}=\Delta ^{3}F_{33}=0.
\]
Then the linear mapping
\[
a\longmapsto H_{l+1}(z,\overline{z},u;a)
\]
is injective, where
\begin{eqnarray*}
H_{l+1}(z,\overline{z},u;a)\equiv -2i(\langle z,a\rangle -\langle a,z\rangle
)\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z},u) \\
+\sum_{\min (s-1,t)\geq 2}\sum_{\alpha }\left( \frac{\partial F_{st}}{%
\partial z^{\alpha }}\right) (z,\overline{z},u)a^{\alpha }(u+i\langle
z,z\rangle ) \\
+2i\sum_{t\geq 2}\sum_{\alpha }\left( \frac{\partial F_{2t}}{\partial
z^{\alpha }}\right) (z,\overline{z},u)a^{\alpha }\langle z,z\rangle \\
+\sum_{\min (s,t-1)\geq 2}\sum_{\alpha }\left( \frac{\partial F_{st}}{%
\partial \overline{z}^{\alpha }}\right) (z,\overline{z},u)\overline{a}%
^{\alpha }(u-i\langle z,z\rangle ) \\
-2i\sum_{s\geq 2}\sum_{\alpha }\left( \frac{\partial F_{s2}}{\partial
\overline{z}^{\alpha }}\right) (z,\overline{z},u)\overline{a}^{\alpha
}\langle z,z\rangle \\
+\frac{i}{2}\sum_{\min (s,t)\geq 2}\left( \frac{\partial F_{st}}{\partial u}%
\right) (z,\overline{z},u)\{\langle z,a\rangle (u+i\langle z,z\rangle ) \\
\hspace{5cm}-\langle a,z\rangle (u-i\langle z,z\rangle )\} \\
+G_{l+1}(z,\overline{z},u)
\end{eqnarray*}
and $G_{l+1}(z,\overline{z},u)$ is the function given in Lemma \ref{Lemma-a}.
\end{lemma}
\proof
First, we assume that $l=2k$ and
\[
F_{l}(z,\overline{z},u)=\sum_{s=2}^{k}F_{ss}(z,\overline{z},u).
\]
Suppose that $a\neq 0$ and $F_{kk}(z,\overline{z},0)$ is divisible by $\langle
z,z\rangle ^{m}$ for an integer $0\leq m\leq k.$ Then, by Lemma \ref{divided}%
, there are polynomials
\[
G_{k-m,k-m}^{s}(z,\overline{z},0),\quad s=\max (k-m,2),\cdots ,k,
\]
of type $(k-m,k-m)$ satisfying
\[
\frac{F_{ss}(z,\overline{z},u)}{u^{k-s}}=i^{s-k}\langle z,z\rangle
^{m-k+s}G_{k-m,k-m}^{s}(z,\overline{z},0),
\]
for
\[
\max (k-m,2)\leq s\leq k.
\]
Then from the equality \ref{a.20} we obtain
\begin{eqnarray*}
&&\sum_{s=\max (k-m,2)}^{k}(m-k+s+1)\langle a,z\rangle \langle z,z\rangle
^{m}G_{k-m,k-m}^{s}(z,\overline{z},0) \\
&=&-\sum_{s=\max (k-m,2)}^{k}\langle z,z\rangle ^{m+1}\sum_{\alpha
}a^{\alpha }\left( \frac{\partial G_{k-m,k-m}^{s}}{\partial z^{\alpha }}%
\right) (z,\overline{z},0) \\
&&-\sum_{2\leq s\leq k-m-1}i^{k-s}\langle z,z\rangle ^{k-s}\sum_{\alpha
}a^{\alpha }\frac{\partial }{\partial z^{\alpha }}\{\langle z,z\rangle
F_{ss}(z,\overline{z},0)\} \\
&&-(2i)^{k-1}\langle \kappa ,z\rangle \langle z,z\rangle ^{k}.
\end{eqnarray*}
Hence there are polynomials $A(z,\overline{z};m),$ $1\leq m\leq k,$ such
that
\begin{equation}
\sum_{s=\max (k-m,2)}^{k}(m-k+s+1)G_{k-m,k-m}^{s}(z,\overline{z},0)=\langle
z,z\rangle A(z,\overline{z};m). \tag*{(2.14)} \label{C1}
\end{equation}
The polynomial $A(z,\overline{z};m)$ for $m=k$ is given by
\begin{align}
A(z,\overline{z};k)& =-(2i)^{k-1}e\langle z,z\rangle ^{k} \nonumber \\
\langle \kappa ,z\rangle & =e\langle a,z\rangle \tag*{(2.15)} \label{D1}
\end{align}
for some constant $e.$ From the equality \ref{6.12}, we obtain for $s=\max
(k-m,2)+1,\cdots ,k,$%
\begin{eqnarray*}
&&(k-s+1)\langle a,z\rangle \langle z,z\rangle ^{m-k+s}G_{k-m,k-m}^{s-1}(z,%
\overline{z},0) \\
&&+(4+2m-3k+3s)\langle a,z\rangle \langle z,z\rangle
^{m-k+s}G_{k-m,k-m}^{s}(z,\overline{z},0) \\
&&+2(m-k+s+1)\langle a,z\rangle \langle z,z\rangle
^{m-k+s}G_{k-m,k-m}^{s+1}(z,\overline{z},0) \\
&=&-2\langle z,z\rangle ^{m-k+s+1}\sum_{\alpha }a^{\alpha }\left( \frac{%
\partial G_{k-m,k-m}^{s}}{\partial z^{\alpha }}\right) (z,\overline{z},0) \\
&&-2\langle z,z\rangle ^{m-k+s+1}\sum_{\alpha }a^{\alpha }\left( \frac{%
\partial G_{k-m,k-m}^{s+1}}{\partial z^{\alpha }}\right) (z,\overline{z},0)
\\
&&+2\langle \kappa ,z\rangle \{1+(-1)^{s}(2s-1)\}\binom{k}{s}(i\langle
z,z\rangle )^{s}.
\end{eqnarray*}
Thus there are polynomials $B^{s-1}(z,\overline{z};m),$ $s=\max
(k-m,2)+1,\cdots ,k,$ such that
\begin{align}
& (k-s+1)G_{k-m,k-m}^{s-1}(z,\overline{z},0) \nonumber \\
& +(4-3k+3s+2m)G_{k-m,k-m}^{s}(z,\overline{z},0) \nonumber \\
& +2(1-k+s+m)G_{k-m,k-m}^{s+1}(z,\overline{z},0) \nonumber \\
& =\langle z,z\rangle B^{s-1}(z,\overline{z};m). \tag*{(2.16)} \label{C2}
\end{align}
The polynomial $B^{s-1}(z,\overline{z};m)$ for $m=k$ is given by
\begin{align}
B^{s-1}(z,\overline{z};k)& =2i^{s}e\{1+(-1)^{s}(2s-1)\}\binom{k}{s}\langle
z,z\rangle ^{k-m-1}, \nonumber \\
\langle \kappa ,z\rangle & =e\langle a,z\rangle . \tag*{(2.17)} \label{D2}
\end{align}
Hence from the equalities \ref{C1} and \ref{C2} we obtain for $k-m\geq 2$%
\begin{equation}
B_{m}\left(
\begin{array}{l}
G_{k-m,k-m}^{k-m}(z,\overline{z},0) \\
G_{k-m,k-m}^{k-m+1}(z,\overline{z},0) \\
\vdots \\
\\
G_{k-m,k-m}^{k}(z,\overline{z},0)
\end{array}
\right) =\langle z,z\rangle \left(
\begin{array}{l}
A(z,\overline{z};m) \\
B^{k-m}(z,\overline{z};m) \\
B^{k-m+1}(z,\overline{z};m) \\
\vdots \\
B^{k-1}(z,\overline{z};m)
\end{array}
\right) \tag*{(2.18)} \label{qqq}
\end{equation}
where
\[
B_{m}=\left(
\begin{array}{cccccc}
1 & 2 & 3 & \cdots & m & m+1 \\
m & 7-m & 4 & 0 & \cdots & 0 \\
0 & m-1 & 10-m & 6 & \ddots & \vdots \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) .
\]
By Lemma \ref{Corr1}, the equality \ref{qqq} implies that the function $%
G_{k-m,k-m}^{k}(z,\overline{z},0)$ is divisible by $\langle z,z\rangle $ for
all $m\leq k-2.$ Hence the polynomial $F_{kk}(z,\overline{z},0)$ is
divisible by $\langle z,z\rangle ^{k-1}$ whenever $a\neq 0.$
Thus $F_{22}(z,\overline{z},u)$ is divisible by $\langle z,z\rangle .$ Then
the condition $\Delta F_{22}=0$ implies
\[
F_{22}(z,\overline{z},u)=i^{2-k}u^{k-2}\langle z,z\rangle G_{11}^{2}(z,%
\overline{z},0)=0.
\]
Then the equalities \ref{C1} and \ref{C2} yield
\begin{equation}
B_{k-1}(2)\left(
\begin{array}{l}
0 \\
G_{11}^{3}(z,\overline{z},0) \\
\vdots \\
\\
G_{11}^{k}(z,\overline{z},0)
\end{array}
\right) =\langle z,z\rangle \left(
\begin{array}{l}
d_{1} \\
d_{2} \\
d_{3} \\
\vdots \\
d_{k-1}
\end{array}
\right) \tag*{(2.19)} \label{kookoo}
\end{equation}
where $d_{1},\cdots ,d_{k-1}$ are constants and
\[
B_{k-1}(2)=\left(
\begin{array}{cccccc}
2 & 3 & 4 & \cdots & k-1 & k \\
k-2 & 11-k & 6 & 0 & \cdots & 0 \\
0 & k-3 & 14-k & 8 & \ddots & \vdots \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2k-1 & 2(k-1) \\
0 & \cdots & & 0 & 1 & 2k+2
\end{array}
\right) .
\]
By Lemma \ref{Lemm2} and Lemma \ref{nonsingular}, the equality \ref{kookoo}
implies that the function $G_{11}^{k}(z,\overline{z},0)$ is divisible by $%
\langle z,z\rangle .$ Hence the polynomial $F_{kk}(z,\overline{z},0)$ is
divisible by $\langle z,z\rangle ^{k}$ whenever $a\neq 0.$
Thus we obtain
\begin{eqnarray*}
F_{22}(z,\overline{z},u) &=&0, \\
F_{ss}(z,\overline{z},u) &=&c_{s}\langle z,z\rangle ^{s}\quad \text{for all }%
s=3,\cdots ,k
\end{eqnarray*}
where the $c_{s}$ are real constants. Moreover, by Lemma \ref{Lemma-a},
the constant $\kappa $ is given by
\begin{equation}
\langle \kappa ,z\rangle =\frac{u^{2-k}}{4k(k-1)(n+1)(n+2)}\sum_{\alpha
}ua^{\alpha }\Delta ^{2}\left( \frac{\partial F_{33}}{\partial z^{\alpha }}%
\right) (z,\overline{z},u). \nonumber
\end{equation}
Because of the condition $\Delta ^{3}F_{33}=0,$ we obtain
\[
F_{33}(z,\overline{z},u)=0\quad \text{and}\quad \kappa =0
\]
whenever $F_{33}(z,\overline{z},u)$ is divisible by $\langle z,z\rangle ^{3}$.
Therefore, we have
\[
c_{3}=\kappa =0.
\]
Thus the equalities \ref{D1} and \ref{D2} yield
\[
A(z,\overline{z};k)=B^{s-1}(z,\overline{z};k)=0
\]
for all $s=3,\cdots ,k.$ Then the equalities \ref{C1} and \ref{C2} yield
\[
\left(
\begin{array}{cccccc}
3 & 4 & 5 & \cdots & k & k+1 \\
k-2 & 13-k & 8 & 0 & \cdots & 0 \\
0 & k-3 & 16-k & 10 & \ddots & \vdots \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2k+1 & 2k \\
0 & \cdots & & 0 & 1 & 2k+4
\end{array}
\right) \left(
\begin{array}{l}
0 \\
0 \\
c_{4} \\
\vdots \\
c_{k-1} \\
c_{k}
\end{array}
\right) =0.
\]
Hence we obtain
\[
c_{4}=\cdots =c_{k}=0.
\]
This contradicts the assumption
\[
F_{l}(z,\overline{z},u)\neq 0.
\]
Thus we must have $a=0.$
Assume that $F_{l}(z,\overline{z},u)$ contains a function $F_{st}(z,%
\overline{z},u)$ of type $(s,t),s\neq t,$ so that
\begin{eqnarray}
l-2p &=&\max \left\{ \left| t-s\right| :F_{st}(z,\overline{z},u)\neq
0\right\} \nonumber \\
2 &\leq &p\leq \left[ \frac{l-1}{2}\right] , \nonumber
\end{eqnarray}
where
\[
F_{l}(z,\overline{z},u)=\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z},u).
\]
Suppose that $p=2.$ Then the equalities \ref{4.58} and \ref{4.55} reduce to
\[
4i\langle a,z\rangle F_{2,l-2}(z,\overline{z},u)+4i\langle z,z\rangle
\sum_{\alpha }a^{\alpha }\left( \frac{\partial F_{2,l-2}}{\partial z^{\alpha
}}\right) (z,\overline{z},u)=0,
\]
where
\[
F_{2,l-2}(z,\overline{z},u)\neq 0.
\]
Hence we obtain
\[
\sum_{\alpha }a^{\alpha }\frac{\partial }{\partial z^{\alpha }}\left\{
\langle z,z\rangle F_{2,l-2}(z,\overline{z},u)\right\} =0.
\]
By Lemma \ref{li-in}, we obtain $a=0.$
Suppose that
\[
3\leq p\leq \left[ \frac{l-1}{2}\right]
\]
and
\[
F_{p,l-p}(z,\overline{z},u)=0.
\]
Then by the equalities \ref{4.58} and \ref{4.55}, there is an integer $m$
such that
\[
\langle z,z\rangle \langle a,z\rangle \left( \frac{\partial F_{m-1,l-2p+m-1}}{%
\partial u}\right) (z,\overline{z},u)=0,
\]
where
\begin{gather*}
3\leq m\leq p, \\
F_{m-1,l-2p+m-1}(z,\overline{z},u)\neq 0.
\end{gather*}
Note that
\begin{eqnarray*}
\left( \frac{\partial F_{m-1,l-2p+m-1}}{\partial u}\right) (z,\overline{z},u)
&=&(p-m+1)u^{-1}F_{m-1,l-2p+m-1}(z,\overline{z},u) \\
&\neq &0.
\end{eqnarray*}
Thus we obtain $a=0.$
Hence we may assume that
\[
3\leq p\leq \left[ \frac{l-1}{2}\right]
\]
and
\[
F_{p,l-p}(z,\overline{z},u)\neq 0.
\]
We claim that $F_{p,l-p}(z,\overline{z},0)$ is divisible by $\langle
z,z\rangle ^{p-1}$ whenever $a\neq 0.$ Suppose that $a\neq 0$ and $%
F_{p,l-p}(z,\overline{z},0)$ is divisible by $\langle z,z\rangle ^{m}$ for
an integer $m,$ $0\leq m\leq p-2.$ Then, by Lemma \ref{divided2}, there are
polynomials
\begin{equation}
G_{p-m,l-p-m}^{s}(z,\overline{z},0),\quad s=\max (p-m,2),\cdots ,p,
\tag*{(2.20)} \label{g-fun}
\end{equation}
of type $(p-m,l-p-m)$ satisfying
\[
\frac{F_{s,l-2p+s}(z,\overline{z},u)}{u^{p-s}}=i^{s-p}\langle z,z\rangle
^{m-p+s}G_{p-m,l-p-m}^{s}(z,\overline{z},0),
\]
for
\[
\max (p-m,2)\leq s\leq p.
\]
With the polynomials $G_{p-m,l-p-m}^{s}(z,\overline{z},0)$ in \ref{g-fun},
the equality \ref{a.19} yields
\begin{eqnarray*}
&&\sum_{s=\max (p-m,2)}^{p}(m-p+s+1)\langle a,z\rangle \langle z,z\rangle
^{m}G_{p-m,l-p-m}^{s}(z,\overline{z},0) \\
&=&-\sum_{s=p-m}^{p}\langle z,z\rangle ^{m+1}\sum_{\alpha }a^{\alpha }\left(
\frac{\partial G_{p-m,l-p-m}^{s}}{\partial z^{\alpha }}\right) (z,\overline{z%
},0) \\
&&-\sum_{2\leq s\leq p-m-1}i^{p-s}\langle z,z\rangle ^{p-s}\sum_{\alpha
}a^{\alpha }\frac{\partial }{\partial z^{\alpha }}\{\langle z,z\rangle
F_{s,l-2p+s}(z,\overline{z},0)\}.
\end{eqnarray*}
Thus there are polynomials $A(z,\overline{z};m),$ $0\leq m\leq p-2,$ such
that
\begin{equation}
\sum_{s=p-m}^{p}(m-p+s+1)G_{p-m,l-p-m}^{s}(z,\overline{z},0)=\langle
z,z\rangle A(z,\overline{z};m). \tag*{(2.21)} \label{B2}
\end{equation}
From the equality \ref{4.58}, we obtain for $s=p-m+1,\cdots ,p,$%
\begin{eqnarray*}
&&(p-s+1)\langle a,z\rangle \langle z,z\rangle ^{m-p+s}G_{p-m,l-p-m}^{s-1}(z,%
\overline{z},0) \\
&&+(4+2m-3p+3s)\langle a,z\rangle \langle z,z\rangle
^{m-p+s}G_{p-m,l-p-m}^{s}(z,\overline{z},0) \\
&&+2(m-p+s+1)\langle a,z\rangle \langle z,z\rangle
^{m-p+s}G_{p-m,l-p-m}^{s+1}(z,\overline{z},0) \\
&=&-2\langle z,z\rangle ^{m-p+s+1}\sum_{\alpha }a^{\alpha }\left( \frac{%
\partial G_{p-m,l-p-m}^{s}}{\partial z^{\alpha }}\right) (z,\overline{z},0)
\\
&&-2\langle z,z\rangle ^{m-p+s+1}\sum_{\alpha }a^{\alpha }\left( \frac{%
\partial G_{p-m,l-p-m}^{s+1}}{\partial z^{\alpha }}\right) (z,\overline{z}%
,0).
\end{eqnarray*}
Thus there are polynomials $B^{s-1}(z,\overline{z};m),$ $s=p-m+1,\cdots ,p,$
such that
\begin{align}
& (p-s+1)G_{p-m,l-p-m}^{s-1}(z,\overline{z},0) \nonumber \\
& +(4-3p+3s+2m)G_{p-m,l-p-m}^{s}(z,\overline{z},0) \nonumber \\
& +2(1-p+s+m)G_{p-m,l-p-m}^{s+1}(z,\overline{z},0) \nonumber \\
& =\langle z,z\rangle B^{s-1}(z,\overline{z};m). \tag*{(2.22)} \label{B3}
\end{align}
Hence, from the equalities \ref{B2} and \ref{B3}, we obtain
\begin{equation}
B_{m}\left(
\begin{array}{l}
G_{p-m,l-p-m}^{p-m}(z,\overline{z},0) \\
G_{p-m,l-p-m}^{p-m+1}(z,\overline{z},0) \\
G_{p-m,l-p-m}^{p-m+2}(z,\overline{z},0) \\
\vdots \\
G_{p-m,l-p-m}^{p}(z,\overline{z},0)
\end{array}
\right) =\langle z,z\rangle \left(
\begin{array}{l}
A(z,\overline{z};m) \\
B^{p-m}(z,\overline{z};m) \\
B^{p-m+1}(z,\overline{z};m) \\
\vdots \\
B^{p-1}(z,\overline{z};m)
\end{array}
\right) \tag*{(2.23)} \label{eed}
\end{equation}
where
\[
B_{m}=\left(
\begin{array}{cccccc}
1 & 2 & 3 & \cdots & m & m+1 \\
m & 7-m & 4 & 0 & \cdots & 0 \\
0 & m-1 & 10-m & 6 & \ddots & \vdots \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2m+1 & 2m \\
0 & \cdots & & 0 & 1 & 2m+4
\end{array}
\right) .
\]
By Lemma \ref{Corr1}, the equality \ref{eed} implies that the function $%
G_{p-m,l-p-m}^{p}(z,\overline{z},0)$ is divisible by $\langle z,z\rangle .$
This proves our claim that $F_{p,l-p}(z,\overline{z},0)$ is divisible by $%
\langle z,z\rangle ^{p-1}$ whenever $a\neq 0.$
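As an informal numerical cross-check of this step (it is not part of the proof, which relies on Lemma \ref{Corr1}), one can reconstruct the matrix $B_{m}$ and verify that its determinant is nonzero for small values of $m$. The entry pattern assumed below (first row $1,2,\ldots ,m+1$; row $i$ with subdiagonal entry $m-i+1$, diagonal entry $4+3i-m$, superdiagonal entry $2(i+1)$) is read off from the display of $B_{m}$ and from the coefficients in the equalities \ref{B2} and \ref{B3}.

```python
from fractions import Fraction

def build_B(m):
    """Reconstruct the (m+1)x(m+1) matrix B_m from its displayed pattern:
    row 0 is (1, 2, ..., m+1); for 1 <= i <= m the nonzero entries are
    B[i][i-1] = m-i+1, B[i][i] = 4+3i-m and (when present) B[i][i+1] = 2(i+1)."""
    n = m + 1
    B = [[Fraction(0)] * n for _ in range(n)]
    for j in range(n):
        B[0][j] = Fraction(j + 1)
    for i in range(1, n):
        B[i][i - 1] = Fraction(m - i + 1)
        B[i][i] = Fraction(4 + 3 * i - m)
        if i + 1 < n:
            B[i][i + 1] = Fraction(2 * (i + 1))
    return B

def det(A):
    """Exact determinant by Gaussian elimination over the rationals."""
    A = [row[:] for row in A]
    n, d = len(A), Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            d = -d
        d *= A[k][k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    return d

# det(B_m) is nonzero for the small values of m checked here.
for m in range(1, 5):
    assert det(build_B(m)) != 0
```

For instance, $\det B_{1}=4$ and $\det B_{2}=10$; the check covers only the small range of $m$ tested here and says nothing about general $m$, which is the content of the cited lemma.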
Next we claim that $F_{p,l-p}(z,\overline{z},0)$ is divisible by $\langle
z,z\rangle ^{p}$ whenever $a\neq 0.$ With the polynomials $%
G_{1,l-2p+1}^{s}(z,\overline{z},0)$ in \ref{g-fun}, the equality \ref{a.19}
yields
\[
\langle a,z\rangle \sum_{s=2}^{p}sG_{1,l-2p+1}^{s}(z,\overline{z}%
,0)=-\langle z,z\rangle \sum_{s=2}^{p}\sum_{\alpha }a^{\alpha }\left( \frac{%
\partial G_{1,l-2p+1}^{s}}{\partial z^{\alpha }}\right) (z,\overline{z},0).
\]
So there is a polynomial $A(z,\overline{z};p-1)$ of type $(0,l-2p)$ such
that
\begin{equation}
\sum_{s=2}^{p}sG_{1,l-2p+1}^{s}(z,\overline{z},0)=\langle z,z\rangle A(z,%
\overline{z};p-1). \tag*{(2.24)} \label{edd}
\end{equation}
With the polynomials $G_{1,l-2p+1}^{s}(z,\overline{z},0)$ in \ref{g-fun},
the equality \ref{4.58} yields
\begin{eqnarray*}
&&\langle a,z\rangle \left\{ (p-s+1)G_{1,l-2p+1}^{s-1}(z,\overline{z}%
,0)+(2-p+3s)G_{1,l-2p+1}^{s}(z,\overline{z},0)\right. \\
&&\hspace{3cm}\left. +2sG_{1,l-2p+1}^{s+1}(z,\overline{z},0)\right\} \\
&=&-2\langle z,z\rangle \left\{ \sum_{\alpha }a^{\alpha }\left( \frac{%
\partial G_{1,l-2p+1}^{s}}{\partial z^{\alpha }}\right) (z,\overline{z}%
,0)+\sum_{\alpha }a^{\alpha }\left( \frac{\partial G_{1,l-2p+1}^{s+1}}{%
\partial z^{\alpha }}\right) (z,\overline{z},0)\right\} .
\end{eqnarray*}
Then there are polynomials $B^{s-1}(z,\overline{z};p-1)$ of type $(0,l-2p)$
for $s=3,\cdots ,p$ such that
\begin{align}
(p-s+1)G_{1,l-2p+1}^{s-1}(z,\overline{z},0)+& (2-p+3s)G_{1,l-2p+1}^{s}(z,%
\overline{z},0) \nonumber \\
+2sG_{1,l-2p+1}^{s+1}(z,\overline{z},0)& =\langle z,z\rangle B^{s-1}(z,%
\overline{z};p-1). \tag*{(2.25)} \label{edd2}
\end{align}
Hence, from the equalities \ref{edd} and \ref{edd2}, we obtain
\begin{equation}
B_{p-1}(2)\left(
\begin{array}{l}
G_{1,l-2p+1}^{2}(z,\overline{z},0) \\
G_{1,l-2p+1}^{3}(z,\overline{z},0) \\
\vdots \\
G_{1,l-2p+1}^{p}(z,\overline{z},0)
\end{array}
\right) =\langle z,z\rangle \left(
\begin{array}{l}
A(z,\overline{z};p-1) \\
B^{2}(z,\overline{z};p-1) \\
B^{3}(z,\overline{z};p-1) \\
\vdots \\
B^{p-1}(z,\overline{z};p-1)
\end{array}
\right) \tag*{(2.26)} \label{ewr}
\end{equation}
where
\[
B_{p-1}(2)=\left(
\begin{array}{cccccc}
2 & 3 & 4 & \cdots & p-1 & p \\
p-2 & 11-p & 6 & 0 & \cdots & 0 \\
0 & p-3 & 14-p & 8 & \ddots & \vdots \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2p-1 & 2(p-1) \\
0 & \cdots & & 0 & 1 & 2p+2
\end{array}
\right) .
\]
By Lemma \ref{Lemm2} and Lemma \ref{nonsingular}, the equality \ref{ewr}
implies that the polynomial $G_{1,l-2p+1}^{p}(z,\overline{z},0)$ is divisible
by $\langle z,z\rangle .$ This proves our claim that $F_{p,l-p}(z,%
\overline{z},0)$ is divisible by $\langle z,z\rangle ^{p}$ whenever $a\neq 0.$
Then with the polynomials $G_{0,l-2p}^{s}(z,\overline{z},0)$ in \ref{g-fun},
the equality \ref{a.19} yields
\[
\langle a,z\rangle \sum_{s=2}^{p}(s+1)G_{0,l-2p}^{s}(z,\overline{z},0)=0.
\]
Whenever $a\neq 0,$ we have
\begin{equation}
\sum_{s=2}^{p}(s+1)G_{0,l-2p}^{s}(z,\overline{z},0)=0. \tag*{(2.27)}
\label{end}
\end{equation}
With the polynomials $G_{0,l-2p}^{s}(z,\overline{z},0)$ in \ref{g-fun}, the
equality \ref{4.58} yields
\begin{align*}
\langle a,z\rangle & \left\{ (p-s+1)G_{0,l-2p}^{s-1}(z,\overline{z}%
,0)+(4-p+3s)G_{0,l-2p}^{s}(z,\overline{z},0)+\right. \\
& \hspace{5.2cm}\left. 2(s+1)G_{0,l-2p}^{s+1}(z,\overline{z},0)\right\} =0.
\end{align*}
Whenever $a\neq 0,$ we have
\begin{align}
(p-s+1)& G_{0,l-2p}^{s-1}(z,\overline{z},0)+(4-p+3s)G_{0,l-2p}^{s}(z,%
\overline{z},0)+ \nonumber \\
& \hspace{3cm}2(s+1)G_{0,l-2p}^{s+1}(z,\overline{z},0)=0 \tag*{(2.28)}
\label{end2}
\end{align}
for $s=3,\cdots ,p.$ Hence, from the equalities \ref{end} and \ref{end2}, we
obtain
\begin{equation}
B_{p}(3)\left(
\begin{array}{l}
G_{0,l-2p}^{2}(z,\overline{z},0) \\
G_{0,l-2p}^{3}(z,\overline{z},0) \\
\vdots \\
G_{0,l-2p}^{p}(z,\overline{z},0)
\end{array}
\right) =0 \tag*{(2.29)} \label{eww}
\end{equation}
where
\[
B_{p}(3)=\left(
\begin{array}{cccccc}
3 & 4 & 5 & \cdots & p & p+1 \\
p-2 & 13-p & 8 & 0 & \cdots & 0 \\
0 & p-3 & 16-p & 10 & \ddots & \vdots \\
& \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 2 & 2p+1 & 2p \\
0 & \cdots & & 0 & 1 & 2p+4
\end{array}
\right) .
\]
By Lemma \ref{Lemm2} and Lemma \ref{nonsingular}, the equality \ref{eww}
implies
\[
G_{0,l-2p}^{p}(z,\overline{z},0)=0.
\]
This contradicts the assumption
\[
F_{p,l-p}(z,\overline{z},u)=\langle z,z\rangle ^{p}G_{0,l-2p}^{p}(z,%
\overline{z},0)\neq 0.
\]
Thus we must have $a=0$ in the case $3\leq p\leq \left[
\frac{l-1}{2}\right] $ as well. Therefore we obtain $a=0$ whenever $F_{l}(z,%
\overline{z},u)$ contains a nonvanishing term $F_{st}(z,\overline{z},u)$ of
type $(s,t),$ $s\neq t.$
Therefore, we have shown that $a=0$ whenever
\[
F_{l}(z,\overline{z},u)\neq 0\quad \text{and}\quad H_{l+1}(z,\overline{z}%
,u;a)=0.
\]
This completes the proof.\endproof
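The proof above twice appeals to Lemma \ref{nonsingular}, for the matrices $B_{p-1}(2)$ in \ref{ewr} and $B_{p}(3)$ in \ref{eww}. The following sketch is an informal numerical cross-check of the displayed entry patterns, not a substitute for the lemma: both $(p-1)\times (p-1)$ matrices are rebuilt from the patterns read off from the displays and from the coefficients in \ref{edd2} and \ref{end2}, and their determinants are computed exactly over the rationals for the first few values of $p$.

```python
from fractions import Fraction

def build(p, first, diag_shift, super_shift):
    """Common bordered-tridiagonal shape of B_{p-1}(2) and B_p(3):
    row 0 is (first, first+1, ...); for 1 <= i <= p-2 the nonzero entries
    are A[i][i-1] = p-i-1, A[i][i] = diag_shift+3*i-p and (when present)
    A[i][i+1] = 2*(i+super_shift)."""
    n = p - 1
    A = [[Fraction(0)] * n for _ in range(n)]
    for j in range(n):
        A[0][j] = Fraction(first + j)
    for i in range(1, n):
        A[i][i - 1] = Fraction(p - i - 1)
        A[i][i] = Fraction(diag_shift + 3 * i - p)
        if i + 1 < n:
            A[i][i + 1] = Fraction(2 * (i + super_shift))
    return A

def B2(p):  # B_{p-1}(2): first row 2,...,p; diagonal 8+3i-p; superdiagonal 2(i+2)
    return build(p, 2, 8, 2)

def B3(p):  # B_p(3): first row 3,...,p+1; diagonal 10+3i-p; superdiagonal 2(i+3)
    return build(p, 3, 10, 3)

def det(A):
    """Exact determinant by Gaussian elimination over the rationals."""
    A = [row[:] for row in A]
    n, d = len(A), Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            d = -d
        d *= A[k][k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    return d

# Both determinants are nonzero for the small values of p checked here.
for p in range(3, 6):
    assert det(B2(p)) != 0
    assert det(B3(p)) != 0
```

For instance, for $p=3$ the two matrices are $2\times 2$ with determinants $13$ and $26$ respectively; as with any finite check, this says nothing about general $p$.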
\begin{theorem}
Let $M$ be a real hypersurface in normal form defined by the equation
\[
v=\langle z,z\rangle +F_{l}(z,\overline{z},u)+O(l+2),
\]
where
\[
F_{l}(z,\overline{z},u)\neq 0
\]
and, for all complex numbers $\mu ,$%
\[
F_{l}(\mu z,\mu \overline{z},\mu ^{2}u)=\mu ^{l}F_{l}(z,\overline{z},u).
\]
Suppose that there is a normalization $\phi $ of $M$ with initial value $%
(id_{n\times n},a,1,0)\in H$ such that $\phi $ transforms $M$ to a real
hypersurface $M^{\prime }$ defined by the equation
\[
v=\langle z,z\rangle +F^{*}(z,\overline{z},u)
\]
and
\[
F^{*}(z,\overline{z},u)=F_{l}(z,\overline{z},u)+O(l+2).
\]
Then the normalization $\phi $ has identity initial value, i.e., $a=0.$
\end{theorem}
\proof
The conclusion follows from Lemmas \ref{Lemma-a} and \ref{Theo1}.\endproof
\subsection{Beloshapka-Loboda Theorem}
\begin{lemma}
\label{orbit}Let $M$ be a real hypersurface in normal form and $\phi
_{\sigma _{1}}$ be a normalization of $M$ with initial value $\sigma _{1}\in
H.$ Suppose that $M$ is transformed to $M^{\prime }$ by the normalization $%
\phi _{\sigma _{1}}$ and $\phi _{\sigma _{2}}$ is a normalization of $%
M^{\prime }$ with initial value $\sigma _{2}\in H$. Then
\[
\phi _{\sigma _{1}}\circ \phi _{\sigma _{2}}=\phi _{\sigma _{1}\sigma _{2}}
\]
where $\phi _{\sigma _{1}\sigma _{2}}$ is a normalization of $M$ with
initial value $\sigma _{1}\sigma _{2}\in H$.
\end{lemma}
The proof of Lemma \ref{orbit} was given in the paper \cite{Pa2}.
\begin{lemma}
Let $M$ be a real hypersurface in normal form defined by the equation
\[
v=\langle z,z\rangle +F_{l}(z,\overline{z},u)+O(l+2),
\]
where
\[
F_{l}(z,\overline{z},u)\neq 0
\]
and, for all complex numbers $\mu ,$%
\[
F_{l}(\mu z,\mu \overline{z},\mu ^{2}u)=\mu ^{l}F_{l}(z,\overline{z},u).
\]
Suppose that there is a normalization $\phi $ of $M$ such that $\phi (M)$ is
defined by the equation
\[
v=\langle z,z\rangle +\rho F_{l}(C^{-1}z,\overline{C^{-1}z},\rho
^{-1}u)+O(l+2)
\]
where
\[
\sigma \left( \phi \right) =(C,a,\rho ,r)\in H.
\]
Then the normalization $\phi $ has the initial value $(C,0,\rho ,r)\in H$,
i.e., $a=0.$
\end{lemma}
\proof
Note that there is a decomposition of $\phi $ as follows (cf.\ \cite{Pa1}):
\[
\phi =\phi _{\sigma _{1}}\circ \phi _{\sigma _{2}}
\]
where $\phi _{\sigma _{1}},\phi _{\sigma _{2}}$ are normalizations with the
initial values $\sigma _{1},\sigma _{2}$ respectively:
\begin{eqnarray*}
\sigma _{1} &=&(C,0,\rho ,r)\in H, \\
\sigma _{2} &=&(id_{n\times n},a,1,0)\in H.
\end{eqnarray*}
Then, by Lemma \ref{orbit}, we obtain
\begin{eqnarray*}
\phi _{\sigma _{2}} &=&\phi _{\sigma _{1}^{-1}}\circ \phi _{\sigma
_{1}}\circ \phi _{\sigma _{2}} \\
&=&\phi _{\sigma _{1}^{-1}}\circ \phi
\end{eqnarray*}
where $\phi _{\sigma _{1}^{-1}}$ is a normalization with initial value $%
\sigma _{1}^{-1}\in H$. Further, suppose that $\phi _{\sigma _{2}}(M)$ is
defined by the equation
\[
v=\langle z,z\rangle +F^{*}(z,\overline{z},u).
\]
Then we obtain
\[
F^{*}(z,\overline{z},u)=F_{l}(z,\overline{z},u)+O(l+2).
\]
Thus, by Lemma \ref{Theo1}, we obtain
\[
a=0.
\]
This completes the proof.\endproof
\begin{theorem}[Beloshapka, Loboda, Vitushkin]
\label{ThBL}Let $M$ be an analytic real hypersurface in normal form, which
is not a real hyperquadric, and $H(M)$ be the isotropy subgroup of $M$ at
the origin. Then there are functions
\[
\rho (U),\quad a(U),\quad r(U)
\]
on the set
\begin{equation}
\left\{ U:\left( U,a,\rho ,r\right) \in H(M)\subset H\right\} \nonumber
\end{equation}
such that, for all $(U,a,\rho ,r)\in H(M),$%
\[
a=a(U),\quad \rho =\rho (U),\quad r=r(U).
\]
\end{theorem}
\proof
Suppose that $M$ is defined in normal form by the equation
\[
v=\langle z,z\rangle +F_{l}(z,\overline{z},u)+F_{l+1}(z,\overline{z}%
,u)+F_{l+2}(z,\overline{z},u)+O(l+3),
\]
where
\[
F_{l}(z,\overline{z},u)\neq 0,
\]
and the integers $l,l+1,l+2$ represent the weights of the functions
\[
F_{l}(z,\overline{z},u),\quad F_{l+1}(z,\overline{z},u),\quad F_{l+2}(z,%
\overline{z},u).
\]
Let $\phi _{\sigma }$ be a normalization of $M$ with initial value $\sigma
\in H(M)$. Suppose that the real hypersurface $\phi _{\sigma }(M)$ is
defined near the origin up to weight $l$ by the equation
\begin{eqnarray*}
v &=&\langle z,z\rangle +\rho F_{l}(C^{-1}z,\overline{C^{-1}z},\rho
^{-1}u)+O(l+1) \\
&=&\langle z,z\rangle +F_{l}(z,\overline{z},u)+O(l+1)
\end{eqnarray*}
where
\[
\sigma =(C,a,\rho ,r)\in H(M)\subset H.
\]
Then we have
\begin{equation}
\left| \rho \right| ^{\frac{l-2}{2}}F_{l}(z,\overline{z},u)=\lambda
F_{l}(U^{-1}z,\overline{U^{-1}z},\lambda u)\neq 0. \tag*{(2.30)}
\label{rho-abs}
\end{equation}
The relation
\[
\langle Uz,Uz\rangle =\lambda \langle z,z\rangle ,\quad \lambda =\mathrm{%
sign\{}\rho \mathrm{\}}
\]
yields
\begin{equation}
\lambda =\frac{1}{n}\Delta \langle Uz,Uz\rangle =\pm 1. \tag*{(2.31)}
\label{rho-sig}
\end{equation}
Then we take a value $z,u$ in the equality \ref{rho-abs} such that
\[
F_{l}(z,\overline{z},u)\in \Bbb{R}\backslash \{0\}
\]
and define
\[
\rho _{1}(U)=\left( \frac{\lambda F_{l}(U^{-1}z,\overline{U^{-1}z},\lambda u)%
}{F_{l}(z,\overline{z},u)}\right) ^{\frac{2}{l-2}}.
\]
By the unique factorization of a polynomial, we have
\[
\left| \rho \right| =\rho _{1}(U)
\]
regardless of the choice of the values $z,u.$ Hence, by the equality \ref
{rho-sig}, we define
\[
\rho (U)\equiv \frac{1}{n}\Delta \langle Uz,Uz\rangle \cdot \rho _{1}(U)
\]
so that
\begin{equation}
\rho =\rho (U) \tag*{(2.32)} \label{rho}
\end{equation}
for all
\[
\left( U,a,\rho ,r\right) \in H(M).
\]
Suppose that the real hypersurface $\phi _{\sigma }(M)$ is defined near the
origin up to weight $l+1$ by the equation
\begin{eqnarray*}
v-\langle z,z\rangle &=&\rho F_{l}(C^{-1}z,\overline{C^{-1}z},\rho
^{-1}u)+F_{l+1}^{*}(z,\overline{z},u)+O(l+2) \\
&=&F_{l}(z,\overline{z},u)+F_{l+1}(z,\overline{z},u)+O(l+2).
\end{eqnarray*}
By using the equality
\[
\rho F_{l}(C^{-1}z,\overline{C^{-1}z},\rho ^{-1}u)=F_{l}(z,\overline{z},u),
\]
we obtain
\[
F_{l+1}^{*}(z,\overline{z},u)=H_{l+1}(z,\overline{z},u;\rho ^{-1}Ca)+\rho
F_{l+1}(C^{-1}z,\overline{C^{-1}z},\rho ^{-1}u)
\]
where $a^{*}\mapsto H_{l+1}(z,\overline{z},u;a^{*})$ is the injective
linear mapping in Lemma \ref{Theo1}.
Then the following requirement
\[
F_{l+1}^{*}(z,\overline{z},u)=F_{l+1}(z,\overline{z},u)
\]
yields
\[
H_{l+1}(z,\overline{z},u;a^{*})=F_{l+1}(z,\overline{z},u)-\rho
F_{l+1}(C^{-1}z,\overline{C^{-1}z},\rho ^{-1}u).
\]
Then, by the equality \ref{rho}, the equality
\begin{eqnarray*}
H_{l+1}(z,\overline{z},u;a^{*}) &=&F_{l+1}(z,\overline{z},u)-\rho
F_{l+1}(C^{-1}z,\overline{C^{-1}z},\rho ^{-1}u) \\
&=&F_{l+1}(z,\overline{z},u)-\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}%
\left| \rho \left( U\right) \right| ^{\frac{l+3}{2}}F_{l+1}(U^{-1}z,%
\overline{U^{-1}z},\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}u),
\end{eqnarray*}
yields a unique function $a^{*}(U)$ of $U$ satisfying
\[
a^{*}=\rho ^{-1}Ca=a^{*}(U).
\]
Hence we obtain a unique function $a(U)$ of $U$ such that
\begin{align}
a& =a(U) \nonumber \\
& \equiv \rho (U)\left| \rho (U)\right| ^{-\frac{1}{2}}U^{-1}a^{*}(U)
\tag*{(2.33)} \label{a}
\end{align}
for all
\[
\left( U,a,\rho ,r\right) \in H(M).
\]
Then we decompose the normalization $\phi _{\sigma }$ as follows:
\[
\phi _{\sigma }=\phi _{2}\circ \phi _{1},
\]
where $\phi _{1},\phi _{2}$ are normalizations with the initial values $%
\sigma _{1},\sigma _{2}$ respectively:
\[
\sigma _{1}=(id_{n\times n},a,1,0)\quad \text{and\quad }\sigma
_{2}=(C,0,\rho ,r).
\]
Suppose that the real hypersurface $\phi _{1}(M)$ is defined by the equation
\[
v=\langle z,z\rangle +F_{l}(z,\overline{z},u)+\tilde{F}_{l+1}(z,\overline{z}%
,u)+\tilde{F}_{l+2}(z,\overline{z},u)+O(l+3)
\]
where the functions $\tilde{F}_{l+1}(z,\overline{z},u)$ and $\tilde{F}%
_{l+2}(z,\overline{z},u)$ depend on the parameter $a,$ i.e.,
\begin{eqnarray*}
\tilde{F}_{l+1}(z,\overline{z},u) &=&\tilde{F}_{l+1}(z,\overline{z},u;a) \\
\tilde{F}_{l+2}(z,\overline{z},u) &=&\tilde{F}_{l+2}(z,\overline{z},u;a).
\end{eqnarray*}
Then suppose that the real hypersurface $\phi _{\sigma }(M)$ is defined near
the origin up to weight $l+2$ by the equation
\begin{eqnarray*}
v &=&\langle z,z\rangle +\rho F_{l}(C^{-1}z,\overline{C^{-1}z},\rho
^{-1}u)+\rho \tilde{F}_{l+1}(C^{-1}z,\overline{C^{-1}z},\rho ^{-1}u) \\
&&+\rho \tilde{F}_{l+2}(C^{-1}z,\overline{C^{-1}z},\rho ^{-1}u) \\
&&-\frac{r}{2}\left\{ \sum_{\min (s,t)\geq 2}(l+s+t)uF_{st}(C^{-1}z,%
\overline{C^{-1}z},\rho ^{-1}u)\right. \\
&&\hspace{1.5cm}+\sum_{\min (s,t)\geq 2}2(s-t)i\langle z,z\rangle
F_{st}(C^{-1}z,\overline{C^{-1}z},\rho ^{-1}u) \\
&&\hspace{1.5cm}\left. -\sum_{\min (s,t)\geq 2}2\rho ^{-1}\langle z,z\rangle
^{2}\left( \frac{\partial F_{st}}{\partial u}\right) (C^{-1}z,\overline{%
C^{-1}z},\rho ^{-1}u)\right\} \\
&&+O(l+3) \\
&=&F_{l}(z,\overline{z},u)+F_{l+1}(z,\overline{z},u)+F_{l+2}(z,\overline{z}%
,u)+O(l+3),
\end{eqnarray*}
where
\[
F_{l}(z,\overline{z},u)=\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z},u).
\]
Hence we have the equality
\begin{align}
& -\frac{r}{2}\left\{ \sum_{\min (s,t)\geq 2}\left( (l+s+t)u+2(s-t)i\langle
z,z\rangle \right) F_{st}(z,\overline{z},u)\right. \nonumber \\
& \hspace{1.5cm}\left. -\sum_{\min (s,t)\geq 2}2\langle z,z\rangle
^{2}\left( \frac{\partial F_{st}}{\partial u}\right) (z,\overline{z}%
,u)\right\} \nonumber \\
& =\rho ^{-1}F_{l+2}(Cz,\overline{Cz},\rho u)-\tilde{F}_{l+2}\left( z,%
\overline{z},u;a\right) . \tag*{(2.34)} \label{rrr}
\end{align}
Note that $F_{l}(z,\overline{z},u)\neq 0$ implies
\[
\sum_{\min (s,t)\geq 2}\left\{ \left( (l+s+t)u+2(s-t)i\langle z,z\rangle
\right) F_{st}(z,\overline{z},u)-2\langle z,z\rangle ^{2}\left( \frac{%
\partial F_{st}}{\partial u}\right) (z,\overline{z},u)\right\} \neq 0.
\]
Otherwise, we would have
\begin{align*}
& (l+s+t)uF_{st}(z,\overline{z},u)+2(s-t)i\langle z,z\rangle F_{s-1,t-1}(z,%
\overline{z},u) \\
& -2\langle z,z\rangle ^{2}\left( \frac{\partial F_{s-2,t-2}}{\partial u}%
\right) (z,\overline{z},u)=0
\end{align*}
which yields
\[
F_{st}(z,\overline{z},u)=0\quad \text{for all }s,t.
\]
From the equalities \ref{rho} and \ref{a}, we have
\[
\rho =\rho (U),\quad a=a(U)
\]
for all
\[
(U,a,\rho ,r)\in H(M).
\]
Then we take a value $z,u$ in the equality \ref{rrr} such that
\[
\sum_{\min (s,t)\geq 2}\left\{ \left( (l+s+t)u+2(s-t)i\langle z,z\rangle
\right) F_{st}(z,\overline{z},u)-2\langle z,z\rangle ^{2}\left( \frac{%
\partial F_{st}}{\partial u}\right) (z,\overline{z},u)\right\} \in \Bbb{R}%
\backslash \{0\}
\]
and define
\[
r(U)=\frac{-2\left\{ \rho (U)^{-1}\left| \rho (U)\right| ^{\frac{l+2}{2}%
}F_{l+2}(Uz,\overline{Uz},\lambda u)-\tilde{F}_{l+2}\left( z,\overline{z}%
,u;a(U)\right) \right\} }{\sum_{\min (s,t)\geq 2}\left\{ \left(
(l+s+t)u+2(s-t)i\langle z,z\rangle \right) F_{st}(z,\overline{z},u)-2\langle
z,z\rangle ^{2}\left( \frac{\partial F_{st}}{\partial u}\right) (z,\overline{%
z},u)\right\} }.
\]
By the unique factorization of a polynomial, we have
\[
r=r(U)
\]
regardless of the choice of the values $z,u.$ Thus the equality \ref{rrr} yields
a unique function $r(U)$ of $U$ satisfying
\[
r=r(U)
\]
for all
\[
\left( U,a,\rho ,r\right) \in H(M).
\]
This completes the proof.\endproof
\section{Compact local automorphism groups}
\subsection{Compactness}
\begin{lemma}
\label{analytic}Let $M$ be a nondegenerate analytic real hypersurface
defined by
\[
v=F(z,\overline{z},u),\quad \left. F\right| _{0}=\left. dF\right| _{0}=0,
\]
and $\phi _{\sigma }$ be a normalization of $M$ with initial value $\sigma
\in H$. Suppose that $\phi _{\sigma }$ transforms $M$ to a real hypersurface
defined by the equation
\[
v=\langle z,z\rangle +F^{*}(z,\overline{z},u;\sigma ).
\]
Then the functions $\phi _{\sigma }\left( z,w\right) $ and $F^{*}(z,%
\overline{z},u;\sigma )$ are analytic of
\[
\sigma =\left( U,a,\rho ,r\right) \in H.
\]
Further, each coefficient
\[
\left( \left. \frac{\partial ^{\left| I\right| +l}\phi _{\sigma }}{\partial
z^{\left| I\right| }\partial w^{l}}\right| _{0}\right) \quad \text{and}\quad
\left( \left. \frac{\partial ^{\left| I\right| +\left| J\right| +l}F^{*}}{%
\partial z^{\left| I\right| }\partial \overline{z}^{\left| J\right|
}\partial u^{l}}\right| _{0}\right)
\]
depends polynomially on the parameters
\[
C\equiv \sqrt{\left| \rho \right| }U,\quad C^{-1},\quad \rho ,\quad \rho
^{-1},\quad a,\quad r.
\]
\end{lemma}
The proof of Lemma \ref{analytic} was given in the paper \cite{Pa2}.
Let $M$ be a real hypersurface in normal form. We define the isotropy
subgroup $H(M)$ of $M$ at the origin as follows:
\[
H(M)=\{\sigma \in H:\phi _{\sigma }(M)=M\}
\]
where $\phi _{\sigma }$ is a normalization of $M$ with initial value $\sigma
\in H$. By Lemma \ref{analytic}, the group $H$ is homeomorphic to the set of
germs $\phi _{\sigma },$ $\sigma \in H,$ with a topology induced from the
natural compact-open topology. Further, by Lemma \ref{orbit} and Lemma \ref
{analytic}, the group $H(M)$ is isomorphic as a Lie group to the local
automorphism group of $M$.
\begin{lemma}
\label{estimation}Let $M$ be a nonspherical analytic real hypersurface and $%
H(M)$ be the isotropy subgroup of $M$ such that there is a real number $%
c\geq 1$ satisfying
\[
\sup_{\left( U,a,\rho ,r\right) \in H(M)}\left\| U\right\| \leq c<\infty .
\]
Then there exists a real number $e>0$ satisfying
\[
\left| a\right| \leq e,\quad e^{-1}\leq \left| \rho \right| \leq e,\quad
\left| r\right| \leq e
\]
for all elements
\[
\left( U,a,\rho ,r\right) \in H(M)
\]
where $e$ may depend on $M$ and $c.$
\end{lemma}
\proof
For the parameter $\rho ,$ we have
\begin{eqnarray*}
\left| \rho \left( U\right) \right| ^{\frac{l-2}{2}} &=&\frac{\left|
F_{l}(U^{-1}z,\overline{U^{-1}z},\lambda u)\right| }{\left| F_{l}(z,%
\overline{z},u)\right| } \\
\left| \rho \left( U\right) \right| ^{-\frac{l-2}{2}} &=&\frac{\left|
F_{l}(Uz,\overline{Uz},\lambda u)\right| }{\left| F_{l}(z,\overline{z}%
,u)\right| }
\end{eqnarray*}
whenever we take a value $z,u$ satisfying
\[
F_{l}(z,\overline{z},u)\neq 0.
\]
Hence we have the following estimate:
\begin{eqnarray*}
\left| \rho \left( U\right) \right| ^{\frac{l-2}{2}} &\leq &\sup_{U}\frac{%
\left| F_{l}(U^{-1}z,\overline{U^{-1}z},\lambda u)\right| }{\left| F_{l}(z,%
\overline{z},u)\right| } \\
\left| \rho \left( U\right) \right| ^{-\frac{l-2}{2}} &\leq &\sup_{U}\frac{%
\left| F_{l}(Uz,\overline{Uz},\lambda u)\right| }{\left| F_{l}(z,\overline{z}%
,u)\right| }
\end{eqnarray*}
so that
\[
\left( \sup_{U}\frac{\left| F_{l}(Uz,\overline{Uz},\lambda u)\right| }{%
\left| F_{l}(z,\overline{z},u)\right| }\right) ^{-1}\leq \left| \rho \left(
U\right) \right| ^{\frac{l-2}{2}}\leq \sup_{U}\frac{\left| F_{l}(U^{-1}z,%
\overline{U^{-1}z},\lambda u)\right| }{\left| F_{l}(z,\overline{z},u)\right|
}.
\]
Note that there is a real number $d_{1}$ depending only on $F_{l}(z,\overline{z}%
,u)$ such that
\[
\frac{\left| F_{l}(U^{-1}z,\overline{U^{-1}z},\lambda u)\right| }{\left|
F_{l}(z,\overline{z},u)\right| }\leq d_{1}\cdot c^{l}
\]
where
\[
c\equiv \sup_{\left( U,a,\rho ,r\right) \in H(M)}\left\| U\right\| \geq 1.
\]
Thus we obtain
\[
d_{1}^{-\frac{2}{l-2}}\cdot c^{-\frac{2l}{l-2}}\leq \left| \rho \left(
U\right) \right| \leq d_{1}^{\frac{2}{l-2}}\cdot c^{\frac{2l}{l-2}}.
\]
For the parameter $a,$ we have
\begin{eqnarray*}
&&H_{l+1}\left( z,\overline{z},u;a^{*}(U)\right) \\
&=&F_{l+1}(z,\overline{z},u)-\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}%
\left| \rho \left( U\right) \right| ^{\frac{l+3}{2}}F_{l+1}(U^{-1}z,%
\overline{U^{-1}z},\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}u)
\end{eqnarray*}
where
\[
a^{*}\left( U\right) =\rho \left( U\right) ^{-1}\sqrt{\left| \rho \left(
U\right) \right| }Ua\left( U\right) .
\]
Since the mapping $a\mapsto H_{l+1}\left( z,\overline{z},u;a\right) $ is
injective and the function $H_{l+1}\left( z,\overline{z},u;a\right) $
depends only on $F_{l}(z,\overline{z},u),$ we have the following estimate:
\[
\left| a^{*}\left( U\right) \right| \leq d_{2}^{*}\cdot c^{\frac{2(l^{2}+l-1)%
}{l-2}}
\]
which yields
\[
\left| a\left( U\right) \right| \leq d_{2}\cdot c^{\frac{2(l^{2}+2l-2)}{l-2}%
}
\]
where $d_{2}^{*},d_{2}$ depend only on $F_{l}(z,\overline{z},u)$ and $%
F_{l+1}(z,\overline{z},u).$
For the parameter $r,$ we have
\begin{align}
& -\frac{r\left( U\right) }{2}\left\{ \sum_{\min (s,t)\geq
2}(l+s+t)uF_{st}(z,\overline{z},u)\right. \nonumber \\
& \hspace{1.5cm}+\sum_{\min (s,t)\geq 2}2(s-t)i\langle z,z\rangle F_{st}(z,%
\overline{z},u) \nonumber \\
& \hspace{1.5cm}\left. -\sum_{\min (s,t)\geq 2}2\langle z,z\rangle
^{2}\left( \frac{\partial F_{st}}{\partial u}\right) (z,\overline{z}%
,u)\right\} \nonumber \\
& =\lambda \left| \rho \left( U\right) \right| ^{\frac{l}{2}}F_{l+2}(Uz,%
\overline{Uz},\lambda u)-\tilde{F}_{l+2}\left( z,\overline{z},u;a\right) .
\nonumber
\end{align}
By Lemma \ref{analytic}, the function $\tilde{F}_{l+2}\left( z,\overline{z}%
,u;a\right) $ depends polynomially (in fact, quadratically) on the parameter $%
a.$ Hence we obtain the following estimate:
\[
\left| r\left( U\right) \right| \leq d_{3}\cdot c^{L}\quad \text{for some }%
L\in \Bbb{N}
\]
where $d_{3}$ depends only on $F_{l}\left( z,\overline{z},u\right) ,$ $%
F_{l+1}\left( z,\overline{z},u\right) ,$ and $F_{l+2}\left( z,\overline{z}%
,u\right) .$
Then we take
\[
e=\max \left\{ d_{1}^{\frac{2}{l-2}}\cdot c^{\frac{2l}{l-2}},\quad
d_{2}\cdot c^{\frac{2(l^{2}+2l-2)}{l-2}},\quad d_{3}\cdot c^{L}\right\} .
\]
This completes the proof.\endproof
\begin{theorem}
\label{compactness}Let $M$ be a nonspherical analytic real hypersurface in
normal form. Suppose that there is a real number $c\geq 1$ satisfying
\begin{equation}
\sup_{\left( U,a,\rho ,r\right) \in H(M)}\left\| U\right\| \leq c<\infty .
\tag*{(3.1)} \label{fff}
\end{equation}
Then the group $H(M)$ is compact.
\end{theorem}
\proof
By Theorem \ref{ThBL}, the group $H(M)$ is isomorphic to the following
group:
\begin{equation}
\left\{ U:\left( U,a,\rho ,r\right) \in H(M)\right\} . \nonumber
\end{equation}
We claim that the group $H(M)$ is closed under the condition \ref{fff}.
Suppose that there is a convergent sequence in $GL(n;\Bbb{C})$ such that
\[
U_{m}\in \left\{ U:\left( U,a,\rho ,r\right) \in H(M)\right\} \quad \text{%
for all }m\in \Bbb{N}
\]
and, by the condition \ref{fff},
\[
\lim_{m\rightarrow \infty }U_{m}=U\in GL(n;\Bbb{C}).
\]
Then, by the functions $\rho (U),a(U),r(U)$ in Theorem \ref{ThBL}, we have
the following sequence:
\[
\left( U_{m},a\left( U_{m}\right) ,\rho \left( U_{m}\right) ,r\left(
U_{m}\right) \right) \in H(M).
\]
Under the condition \ref{fff}, by Lemma \ref{estimation}, there is a real
number $e>0$ such that
\[
\left| a\left( U_{m}\right) \right| \leq e,\quad e^{-1}\leq \left| \rho
\left( U_{m}\right) \right| \leq e,\quad \left| r\left( U_{m}\right) \right|
\leq e\quad \text{for all }m.
\]
Then, by compactness, there is a subsequence $m_{j}$ such that the following
limits exist:
\begin{eqnarray*}
a &=&\lim_{j\rightarrow \infty }a\left( U_{m_{j}}\right) , \\
\rho &=&\lim_{j\rightarrow \infty }\rho \left( U_{m_{j}}\right) , \\
r &=&\lim_{j\rightarrow \infty }r\left( U_{m_{j}}\right) ,
\end{eqnarray*}
which satisfy
\[
\left| a\right| \leq e,\quad e^{-1}\leq \left| \rho \right| \leq e,\quad
\left| r\right| \leq e.
\]
Then we consider the following subset $K$ of $H$ given by
\begin{eqnarray*}
K &=&\left\{ \left( U,a,\rho ,r\right) \in H:\frac{1}{c}\leq \left\|
U\right\| \leq c,\right. \\
&&\hspace{1in}\left. \left| a\right| \leq e,\quad e^{-1}\leq \left| \rho
\right| \leq e,\quad \left| r\right| \leq e\right\} .
\end{eqnarray*}
Note that the set $K$ is compact and
\[
\left( U_{m},a\left( U_{m}\right) ,\rho \left( U_{m}\right) ,r\left(
U_{m}\right) \right) \in K\quad \text{for all }m.
\]
Then, by Lemma \ref{analytic}, for each $\sigma \in K,$ there exist real
numbers $\varepsilon _{\sigma },\delta _{\sigma }>0$ such that all
normalizations
\[
\phi _{\sigma ^{\prime }},\quad \sigma ^{\prime }\in K\cap \left\{ \tau \in
GL(n+2;\Bbb{C}):\left\| \tau -\sigma \right\| \leq \varepsilon _{\sigma
}\right\}
\]
as a power series at the origin converge absolutely and uniformly on the
open ball $B(0;\delta _{\sigma }).$ Notice that the following family of open
sets
\[
\left\{ \tau \in GL(n+2;\Bbb{C}):\left\| \tau -\sigma \right\| <\varepsilon
_{\sigma }\right\} ,\quad \sigma \in K
\]
is an open covering of the set $K.$ Since $K$ is compact, there is a finite
subcover, say,
\[
\left\{ \tau \in GL(n+2;\Bbb{C}):\left\| \tau -\sigma _{j}\right\|
<\varepsilon _{\sigma _{j}}\right\} ,\quad \sigma _{j}\in K,\quad
j=1,\cdots ,m.
\]
Then we set
\[
\delta =\min_{1\leq j\leq m}\left\{ \delta _{\sigma _{j}}\right\} >0
\]
so that each normalization $\phi _{\sigma }$, $\sigma \in K,$ as a power
series at the origin converges absolutely and uniformly on the open ball $%
B(0;\delta ).$ Thus, by Montel's theorem, the family of normalizations $\phi
_{\sigma },$ $\sigma \in K$, is a normal family on $B(0;\delta ).$
By a standard argument of a normal family, passing to a subsequence of $%
\{m_{j}\},$ if necessary, there is a holomorphic mapping $\phi $ on the open
ball $B(0;\delta )$ such that
\[
\phi =\lim_{j\rightarrow \infty }\phi _{\sigma _{m_{j}}}
\]
where
\[
\sigma _{m_{j}}=\left( U_{m_{j}},a\left( U_{m_{j}}\right) ,\rho \left(
U_{m_{j}}\right) ,r\left( U_{m_{j}}\right) \right) .
\]
Then, for $\phi =(f,g),$ we have
\begin{eqnarray*}
\left( \left. \frac{\partial f}{\partial z}\right| _{0}\right)
&=&\lim_{j\rightarrow \infty }\sqrt{\left| \rho (U_{m_{j}})\right| }%
U_{m_{j}}=\sqrt{\left| \rho \right| }U \\
\left( \left. \frac{\partial f}{\partial w}\right| _{0}\right)
&=&-\lim_{j\rightarrow \infty }\sqrt{\left| \rho (U_{m_{j}})\right| }%
U_{m_{j}}a(U_{m_{j}})=-\sqrt{\left| \rho \right| }Ua \\
\left( \left. \frac{\partial g}{\partial w}\right| _{0}\right)
&=&\lim_{j\rightarrow \infty }\rho (U_{m_{j}})=\rho \\
\left( \left. \frac{\partial ^{2}g}{\partial w^{2}}\right| _{0}\right)
&=&2\lim_{j\rightarrow \infty }\rho (U_{m_{j}})r(U_{m_{j}})=2\rho r.
\end{eqnarray*}
Note that
\[
0<\left| \det \phi ^{\prime }\right| =\left| \rho \right| ^{\frac{n+2}{2}%
}\left| \det U\right| <\infty .
\]
Thus, by Hurwitz's theorem, the mapping $\phi $ is a biholomorphic mapping on
the ball $B(0;\delta ).$ Further, notice
\[
\phi _{\sigma _{m}}\left( M\cap B(0;\delta )\right) \subset M\quad \text{for
all }m\in \Bbb{N},
\]
so that
\[
\phi \left( M\cap B(0;\delta )\right) \subset M.
\]
Hence the mapping $\phi $ is a biholomorphic automorphism of $M$ with
initial value $\sigma \in H$ such that
\[
\sigma =\left( U,a,\rho ,r\right) \in H(M).
\]
Thus we have shown
\[
U=\lim_{m\rightarrow \infty }U_{m}\in \left\{ U:\left( U,a,\rho ,r\right)
\in H(M)\right\} .
\]
Then the group
\[
\left\{ U:\left( U,a,\rho ,r\right) \in H(M)\right\} \subset GL(n;\Bbb{C})
\]
is closed so that it is a compact Lie group. This proves our claim
that the group $H(M)$ is closed. Hence $H(M)$ is a compact Lie group. This
completes the proof.\endproof
\subsection{Theorem of a germ of a biholomorphic mapping}
We study the analytic continuation of a germ of a biholomorphic mapping to a
finite neighborhood (cf.\ \cite{Vi85}).
\begin{lemma}
\label{L9}Let $M$ be a nonspherical analytic real hypersurface in normal
form and $H(M)$ be the isotropy subgroup of $M$ such that there is a real
number $c\geq 1$ satisfying
\[
\sup_{\left( U,a,\rho ,r\right) \in H(M)}\left\| U\right\| \leq c<\infty .
\]
Then there is a real number $\delta >0$ such that all local automorphisms of
$M,$ $\phi _{\sigma },$ $\sigma \in H(M)$, converge absolutely and uniformly
on the open ball $B(0;\delta )$.
\end{lemma}
\proof
By Lemma \ref{analytic}, for each $\sigma \in H(M),$ there exist real
numbers $\varepsilon _{\sigma },\delta _{\sigma }>0$ such that all
normalizations
\[
\phi _{\sigma ^{\prime }},\quad \sigma ^{\prime }\in H(M)\cap \left\{ \tau
\in GL(n+2;\Bbb{C}):\left\| \tau -\sigma \right\| \leq \varepsilon _{\sigma
}\right\}
\]
as a power series at the origin converge absolutely and uniformly on the
open ball $B(0;\delta _{\sigma }).$
Note that the following family
\[
\left\{ \tau \in GL(n+2;\Bbb{C}):\left\| \tau -\sigma \right\| <\varepsilon
_{\sigma }\right\} ,\quad \sigma \in H(M)
\]
is an open covering of the set $H(M).$ By Theorem \ref{compactness}, $H(M)$ is
compact. Thus there is a finite subcover, say,
\[
\left\{ \tau \in GL(n+2;\Bbb{C}):\left\| \tau -\sigma _{j}\right\|
<\varepsilon _{\sigma _{j}}\right\} ,\quad \sigma _{j}\in H(M),\quad
j=1,\cdots ,m.
\]
Then we take
\[
\delta =\min_{1\leq j\leq m}\left\{ \delta _{\sigma _{j}}\right\} >0.
\]
This completes the proof.\endproof
\begin{theorem}[Vitushkin]
\label{ThVi}Let $M,$ $M^{\prime }$ be nonspherical analytic real
hypersurfaces and $p,p^{\prime }$ be points of $M,M^{\prime }$ respectively
such that the germ of $M$ at $p$ and the germ of $M^{\prime }$ at $p^{\prime }$ are
biholomorphically equivalent. Suppose that there is a real number $c\geq 1$
satisfying
\[
\sup_{\left( U,a,\rho ,r\right) \in H_{p}(M)}\left\| U\right\| \leq c<\infty
\]
where $H_{p}(M)$ is a local automorphism group of $M$ at the point $p$ in a
normal coordinate. Then there is a real number $\delta >0$ depending only on
$M$ and $M^{\prime }$ such that each biholomorphic mapping $\phi $ of $M$
near the point $p$ is analytically continued to the open ball $B(p;\delta )$
whenever $\phi (p)=p^{\prime }$ and there is an open neighborhood $U\subset
B(p;\delta )$ of the point $p$ satisfying
\[
\phi (U\cap M)\subset M^{\prime }.
\]
\end{theorem}
\proof
We take a biholomorphic mapping $\phi $ of $M$ to $M^{\prime }$ such that
\[
\phi (p)=p^{\prime }
\]
and, for an open neighborhood $U$ of the point $p,$%
\[
\phi (U\cap M)\subset M^{\prime }.
\]
Then we take normalizations $\phi _{1},\phi _{2}$ respectively of $%
M,M^{\prime }$ such that $\phi _{1},\phi _{2}$ translate the points $%
p,p^{\prime }$ to the origin and there exist open neighborhoods $U_{1},U_{2}$
respectively of $p,p^{\prime }$ and a real hypersurface $M^{*}$ in normal
form satisfying
\begin{eqnarray*}
\phi _{1}(U_{1}\cap M) &\subset &M^{*} \\
\phi _{2}(U_{2}\cap M^{\prime }) &\subset &M^{*}.
\end{eqnarray*}
Then, we obtain a biholomorphic mapping $\phi ^{*}$ defined by
\[
\phi ^{*}=\phi _{2}\circ \phi \circ \phi _{1}^{-1}.
\]
Notice that the mapping $\phi ^{*}$ is a local automorphism of $M^{*}$. By
Lemma \ref{L9}, there is a real number $\delta ^{*}>0$ such that the mapping
$\phi ^{*}$ continues holomorphically to the open ball $B(0;\delta ^{*})$
satisfying
\begin{eqnarray*}
B(0;\delta ^{*}) &\subset &\phi _{1}\left( U_{1}\right) \\
B(0;\delta ^{*}) &\subset &\phi _{2}\left( U_{2}\right) .
\end{eqnarray*}
Then the mapping
\[
\phi =\phi _{2}^{-1}\circ \phi ^{*}\circ \phi _{1}
\]
is biholomorphically continued to the open set
\[
U_{1}\cap \phi _{1}^{-1}\left( B(0;\delta ^{*})\right) .
\]
We take a real number $\delta >0$ such that
\[
B(p;\delta )\subset U_{1}\cap \phi _{1}^{-1}\left( B(0;\delta ^{*})\right) .
\]
This completes the proof.\endproof
\subsection{Kruzhilin-Loboda Theorem}
By Lemma \ref{orbit}, we have an action of the group $H$ on real
hypersurfaces in normal form. Then the orbit structure in normal form may be studied by
examining the isotropy subgroup $H(M)$ for a real hypersurface $M$ in normal
form.
\begin{lemma}
Let $K$ be a subset of $H.$ The necessary and sufficient condition for the
set $K$ to be conjugate to a subset of
\[
\left\{ \left( U,0,\pm 1,0\right) \equiv \left(
\begin{array}{ccc}
\pm 1 & 0 & 0 \\
0 & U & 0 \\
0 & 0 & 1
\end{array}
\right) \in H\right\}
\]
is given as follows:
\[
K\subset \left\{ \left( U,a,\pm 1,r\right) \in H\right\}
\]
and there exist a vector $d\in \Bbb{C}^{n}$ and a real number $e\in \Bbb{R}$
such that
\begin{align*}
\left( id_{n\times n}-\lambda U^{-1}\right) d=a \\
\left( 1-\lambda \right) e+i\langle d,a\rangle -i\langle a,d\rangle =r
\end{align*}
for all
\[
\left( U,a,\lambda ,r\right) \in K.
\]
\end{lemma}
\proof
Each element of $H$ is decomposed as follows:
\[
\left(
\begin{array}{ccc}
\rho ^{\prime } & 0 & 0 \\
-C^{\prime }a^{\prime } & C^{\prime } & 0 \\
-r^{\prime }-i\langle a^{\prime },a^{\prime }\rangle & 2ia^{\prime \dagger }
& 1
\end{array}
\right) =\left(
\begin{array}{ccc}
\rho ^{\prime } & 0 & 0 \\
0 & C^{\prime } & 0 \\
0 & 0 & 1
\end{array}
\right) \left(
\begin{array}{ccc}
1 & 0 & 0 \\
-a^{\prime } & id_{n\times n} & 0 \\
-r^{\prime }-i\langle a^{\prime },a^{\prime }\rangle & 2ia^{\prime \dagger }
& 1
\end{array}
\right)
\]
where
\[
a^{\prime \dagger }z=\langle z,a^{\prime }\rangle .
\]
Note that
\[
\left(
\begin{array}{ccc}
\rho ^{\prime } & 0 & 0 \\
0 & C^{\prime } & 0 \\
0 & 0 & 1
\end{array}
\right) \left(
\begin{array}{ccc}
\pm 1 & 0 & 0 \\
0 & U & 0 \\
0 & 0 & 1
\end{array}
\right) \left(
\begin{array}{ccc}
\rho ^{\prime } & 0 & 0 \\
0 & C^{\prime } & 0 \\
0 & 0 & 1
\end{array}
\right) ^{-1}=\left(
\begin{array}{ccc}
\pm 1 & 0 & 0 \\
0 & C^{\prime }UC^{\prime -1} & 0 \\
0 & 0 & 1
\end{array}
\right) .
\]
Thus a straightforward computation yields
\begin{eqnarray*}
&&\left(
\begin{array}{ccc}
1 & 0 & 0 \\
-a^{\prime } & id_{n\times n} & 0 \\
-r^{\prime }-i\langle a^{\prime },a^{\prime }\rangle & 2ia^{\prime \dagger }
& 1
\end{array}
\right) \left(
\begin{array}{ccc}
\rho & 0 & 0 \\
-Ca & C & 0 \\
-r-i\langle a,a\rangle & 2ia^{\dagger } & 1
\end{array}
\right) \times \\
&&\hspace{2in}\left(
\begin{array}{ccc}
1 & 0 & 0 \\
-a^{\prime } & id_{n\times n} & 0 \\
-r^{\prime }-i\langle a^{\prime },a^{\prime }\rangle & 2ia^{\prime \dagger }
& 1
\end{array}
\right) ^{-1} \\
&=&\left(
\begin{array}{ccc}
\rho & 0 & 0 \\
-Ca^{*} & C & 0 \\
-r^{*}-i\langle a^{*},a^{*}\rangle & 2ia^{*\dagger } & 1
\end{array}
\right)
\end{eqnarray*}
where
\begin{eqnarray*}
a^{*} &=&\rho C^{-1}a^{\prime }+a-a^{\prime } \\
r^{*} &=&\rho r^{\prime }-r^{\prime }+r+i\langle C(a-a^{\prime }),a^{\prime
}\rangle -i\langle a^{\prime },C(a-a^{\prime })\rangle \\
&&+i\langle a,a^{\prime }\rangle -i\langle a^{\prime },a\rangle .
\end{eqnarray*}
Hence the necessary and sufficient condition for the set $K$ to be conjugate
to a subset of
\[
\left\{ \left( U,0,\pm 1,0\right) \equiv \left(
\begin{array}{ccc}
\pm 1 & 0 & 0 \\
0 & U & 0 \\
0 & 0 & 1
\end{array}
\right) \in H\right\}
\]
is given by
\[
\left| \rho \right| =1\quad \text{and}\quad a^{*}=r^{*}=0
\]
for all
\[
\left( U,a,\rho ,r\right) \in K.
\]
The equalities $a^{*}=r^{*}=0$ yield
\begin{eqnarray*}
(id_{n\times n}-\rho C^{-1})a^{\prime } &=&a \\
(1-\rho )r^{\prime }+i\langle a^{\prime },a\rangle -i\langle a,a^{\prime
}\rangle &=&r.
\end{eqnarray*}
The necessary and sufficient condition is thus equivalent to the existence of a
vector $a^{\prime }\in \Bbb{C}^{n}$ and a real number $r^{\prime }\in \Bbb{R}$ satisfying
\begin{eqnarray*}
\left| \rho \right| &=&1 \\
(id_{n\times n}-\rho C^{-1})a^{\prime } &=&a \\
(1-\rho )r^{\prime }+i\langle a^{\prime },a\rangle -i\langle a,a^{\prime
}\rangle &=&r.
\end{eqnarray*}
for all
\[
\left( U,a,\rho ,r\right) \in K.
\]
This completes the proof.\endproof
\begin{theorem}[Kruzhilin-Loboda]
Let $M$ be a real hypersurface in normal form and $H(M)$ be the isotropy
group of $M$ such that there is a real number $c\geq 1$ satisfying
\[
\sup_{\left( U,a,\rho ,r\right) \in H(M)}\left\| U\right\| \leq c<\infty .
\]
Then there exists an element $\sigma \in H$ satisfying
\[
\sigma H(M)\sigma ^{-1}\subset \left\{ \left(
\begin{array}{ccc}
\pm 1 & 0 & 0 \\
0 & U & 0 \\
0 & 0 & 1
\end{array}
\right) \in H\right\} .
\]
\end{theorem}
\proof
By Lemma \ref{compactness}, the group
\[
G=\left\{ U:\left( U,a,\rho ,r\right) \in H(M)\right\} ,
\]
is a compact Lie group. Thus we have a unique Haar measure $\mu $ on $G$
such that
\[
\int_{V\in G}d\mu \left( V\right) =1.
\]
Suppose that $M$ is defined by the equation
\[
v=\langle z,z\rangle +F_{l}(z,\overline{z},u)+F_{l+1}\left( z,\overline{z}%
,u\right) +O(l+2).
\]
By Theorem \ref{ThBL}, there is a function $\rho (U)$ satisfying
\[
\rho =\rho (U)
\]
for all
\[
\left( U,a,\rho ,r\right) \in H(M).
\]
Then we have the following identity
\[
\left| \rho \left( U\right) \right| ^{\frac{l-2}{2}}F_{l}(z,\overline{z},u)=%
\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}F_{l}\left( U^{-1}z,\overline{%
U^{-1}z},\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}u\right)
\]
which yields
\[
F_{l}(z,\overline{z},u)=\frac{\int_{G}\left\{ \mathrm{sign\{}\rho \left(
V\right) \mathrm{\}}F_{l}\left( V^{-1}z,\overline{V^{-1}z},\mathrm{sign\{}%
\rho \left( V\right) \mathrm{\}}u\right) \right\} d\mu \left( V\right) }{%
\int_{G}\left| \rho \left( V\right) \right| ^{\frac{l-2}{2}}d\mu \left(
V\right) }.
\]
Hence we easily see
\[
\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}F_{l}\left( U^{-1}z,\overline{%
U^{-1}z},\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}u\right) =F_{l}(z,%
\overline{z},u)
\]
so that
\[
\left| \rho \left( U\right) \right| ^{\frac{l-2}{2}}F_{l}(z,\overline{z}%
,u)=F_{l}(z,\overline{z},u).
\]
Since $F_{l}\not\equiv 0$ and $l\geq 4,$ we have
\[
\left| \rho \left( U\right) \right| \equiv 1\quad \text{for all }U\in G
\]
so that
\[
H(M)\subset \left\{ \left( U,a,\pm 1,r\right) \in H\right\} .
\]
By Theorem \ref{ThBL}, there is a function $a(U)$ satisfying
\[
a=a(U)
\]
for all
\[
\left( U,a,\pm 1,r\right) \in H(M).
\]
Then we have the identity
\begin{eqnarray*}
&&H_{l+1}\left( z,\overline{z},u;\mathrm{sign\{}\rho \left( U\right) \mathrm{%
\}}Ua\left( U\right) \right) \\
&=&F_{l+1}(z,\overline{z},u)-\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}%
F_{l+1}\left( U^{-1}z,\overline{U^{-1}z},\mathrm{sign\{}\rho \left( U\right)
\mathrm{\}}u\right) .
\end{eqnarray*}
Hence there is a vector $a^{*}$ satisfying
\begin{eqnarray*}
&&H_{l+1}\left( z,\overline{z},u;a^{*}\right) \\
&=&F_{l+1}(z,\overline{z},u)-\int_{G}\left\{ \mathrm{sign\{}\rho \left(
V\right) \mathrm{\}}F_{l+1}\left( V^{-1}z,\overline{V^{-1}z},\mathrm{sign\{}%
\rho \left( V\right) \mathrm{\}}u\right) \right\} d\mu (V)
\end{eqnarray*}
where
\[
a^{*}=\int_{G}\mathrm{sign\{}\rho \left( V\right) \mathrm{\}}Va\left(
V\right) d\mu (V).
\]
Suppose that the normalization $\phi $ of $M$ with initial value
\[
(id_{n\times n},-a^{*},1,0)\in H
\]
transforms $M$ to a real hypersurface $M^{\prime }.$ Then $M^{\prime }$ is
defined up to weight $l+1$ by the equation
\[
v=\langle z,z\rangle +F_{l}(z,\overline{z},u)+F_{l+1}^{*}\left( z,\overline{z%
},u\right) +O(l+2)
\]
where
\[
F_{l+1}^{*}\left( z,\overline{z},u\right) =\int_{G}\left\{ \mathrm{sign\{}%
\rho \left( V\right) \mathrm{\}}F_{l+1}\left( V^{-1}z,\overline{V^{-1}z},%
\mathrm{sign\{}\rho \left( V\right) \mathrm{\}}u\right) \right\} d\mu (V).
\]
We easily see that
\[
\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}F_{l+1}^{*}\left( U^{-1}z,%
\overline{U^{-1}z},\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}u\right)
=F_{l+1}^{*}\left( z,\overline{z},u\right) .
\]
Because the linear mapping $a^{*}\mapsto H_{l+1}\left( z,\overline{z}%
,u;a^{*}\right) $ is injective, we obtain
\[
H(M^{\prime })\subset \left\{ \left( U,0,\pm 1,r\right) \in H\right\} .
\]
Suppose that $M^{\prime }$ is defined up to weight $l+2$ by the equation
\[
v=\langle z,z\rangle +F_{l}(z,\overline{z},u)+F_{l+1}^{*}\left( z,\overline{z}%
,u\right) +F_{l+2}\left( z,\overline{z},u\right) +O(l+3).
\]
By Theorem \ref{ThBL}, there is a function $r(U)$ satisfying
\[
r=r(U)
\]
for all
\[
\left( U,0,\pm 1,r\right) \in H(M^{\prime }).
\]
Then we have the following identity
\begin{align*}
& -\frac{r\left( U\right) }{2}\left\{ \sum_{\min (s,t)\geq
2}(l+s+t)uF_{st}\left( z,\overline{z},u\right) \right. \\
& \hspace{2cm}\left. +\sum_{\min (s,t)\geq 2}2(s-t)i\langle z,z\rangle
F_{st}\left( z,\overline{z},u\right) \right. \\
& \hspace{2cm}\left. -\sum_{\min (s,t)\geq 2}2\langle z,z\rangle ^{2}\left(
\frac{\partial F_{st}}{\partial u}\right) \left( z,\overline{z},u\right)
\right\} \\
& =\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}F_{l+2}\left( Uz,\overline{%
Uz},\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}u\right) -F_{l+2}\left( z,%
\overline{z},u\right)
\end{align*}
where
\[
F_{l}(z,\overline{z},u)=\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z},u).
\]
Hence there is a real number $r^{*}$ satisfying
\begin{align*}
& -\frac{r^{*}}{2}\left\{ \sum_{\min (s,t)\geq 2}(l+s+t)uF_{st}\left( z,%
\overline{z},u\right) \right. \\
& \hspace{2cm}\left. +\sum_{\min (s,t)\geq 2}2(s-t)i\langle z,z\rangle
F_{st}\left( z,\overline{z},u\right) \right. \\
& \hspace{2cm}\left. -\sum_{\min (s,t)\geq 2}2\langle z,z\rangle ^{2}\left(
\frac{\partial F_{st}}{\partial u}\right) \left( z,\overline{z},u\right)
\right\} \\
& =\int_{G}\mathrm{sign\{}\rho \left( V\right) \mathrm{\}}F_{l+2}\left( Vz,%
\overline{Vz},\mathrm{sign\{}\rho \left( V\right) \mathrm{\}}u\right) d\mu
\left( V\right) -F_{l+2}\left( z,\overline{z},u\right)
\end{align*}
where
\[
r^{*}=\int_{G}r\left( V\right) d\mu \left( V\right) .
\]
Suppose that the normalization $\phi ^{\prime }$ of $M^{\prime }$ with
initial value
\[
\left( id_{n\times n},0,1,r^{*}\right) \in H
\]
transforms $M^{\prime }$ to a real hypersurface $M^{\prime \prime }.$ Then $%
M^{\prime \prime }$ is defined up to weight $l+2$ by the equation
\[
v=\langle z,z\rangle +F_{l}(z,\overline{z},u)+F_{l+1}^{*}\left( z,\overline{z%
},u\right) +F_{l+2}^{*}\left( z,\overline{z},u\right) +O(l+3)
\]
where
\[
F_{l+2}^{*}\left( z,\overline{z},u\right) =\int_{G}\mathrm{sign\{}\rho
\left( V\right) \mathrm{\}}F_{l+2}\left( Vz,\overline{Vz},\mathrm{sign\{}%
\rho \left( V\right) \mathrm{\}}u\right) d\mu \left( V\right) .
\]
We easily see that
\[
\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}F_{l+2}^{*}\left( Uz,%
\overline{Uz},\mathrm{sign\{}\rho \left( U\right) \mathrm{\}}u\right)
=F_{l+2}^{*}\left( z,\overline{z},u\right)
\]
which yields
\[
H(M^{\prime \prime })\subset \left\{ \left( U,0,\pm 1,0\right) \in H\right\}
.
\]
Then we take a normalization $\phi _{\sigma }$ with initial value $\sigma
\in H$ such that
\[
\phi _{\sigma }(M)=M^{\prime \prime }.
\]
Then, by Lemma \ref{orbit}, we obtain
\[
\sigma H(M)\sigma ^{-1}=H(M^{\prime \prime }).
\]
This completes the proof.\endproof
\section{Analytic continuation of a normalizing mapping}
\subsection{Chains on a spherical real hypersurface}
By Theorem \ref{exi-uni}, each biholomorphic automorphism of the real
hyperquadric
\[
v=\langle z,z\rangle
\]
is uniquely given by a composition of an affine mapping
\begin{align}
z^{*}& =z+b \nonumber \\
w^{*}& =w+2i\langle z,b\rangle +c+i\langle b,b\rangle \tag*{(4.1)}
\label{line}
\end{align}
and a fractional linear mapping:
\begin{equation}
\phi =\phi _{\sigma }:\left\{
\begin{array}{c}
z^{*}=\frac{C(z-aw)}{1+2i\langle z,a\rangle -w(r+i\langle a,a\rangle )} \\
w^{*}=\frac{\rho w}{1+2i\langle z,a\rangle -w(r+i\langle a,a\rangle )}
\end{array}
\right. \tag*{(4.2)} \label{2.1}
\end{equation}
where
\[
b\in \Bbb{C}^{n},\quad c\in \Bbb{R}
\]
and the constants $\sigma =(C,a,\rho ,r)$ satisfy
\begin{gather*}
a\in \Bbb{C}^{n},\quad \rho \neq 0,\quad \rho ,r\in \Bbb{R}, \\
C\in GL(n;\Bbb{C}),\quad \langle Cz,Cz\rangle =\rho \langle z,z\rangle .
\end{gather*}
Note that the local automorphism $\phi $ decomposes as
\[
\phi =\varphi \circ \psi ,
\]
where
\begin{equation}
\psi :\left\{
\begin{array}{c}
z^{*}=\frac{z-aw}{1+2i\langle z,a\rangle -i\langle a,a\rangle w} \\
w^{*}=\frac{w}{1+2i\langle z,a\rangle -i\langle a,a\rangle w}
\end{array}
\right. \quad \text{and}\quad \varphi :\left\{
\begin{array}{c}
z^{*}=\frac{Cz}{1-rw} \\
w^{*}=\frac{\rho w}{1-rw}
\end{array}
\right. . \nonumber \label{2.2}
\end{equation}
\begin{lemma}
\label{trans}Let $M$ be the real hyperquadric $v=\langle z,z\rangle .$ Then
the intersection of the real hyperquadric $M$ with a complex line $l$ is given
by a point, a curve $\gamma $, or the complex line $l$ itself. If the
intersection is a curve $\gamma $, then $\gamma $ is transversal to the
complex tangent hyperplane at every point of $\gamma .$
\end{lemma}
\proof
Let $(\kappa ,\chi )\in \Bbb{C}^{n}\times \Bbb{C}$ be a point of the real
hyperquadric $v=\langle z,z\rangle $ such that
\[
\Im \chi =\langle \kappa ,\kappa \rangle .
\]
Then a complex line $l$ passing through the point $(\kappa ,\chi )$ is given
by
\[
\left\{ (\kappa ,\chi )+e(\mu ,\nu ):e\in \Bbb{C}\right\}
\]
for some nonzero vector $(\mu ,\nu )\in \Bbb{C}^{n}\times \Bbb{C}.$ Then the
affine mapping \ref{line} sends the complex line $l$ to another complex line $%
l^{*}$ given by
\[
\left\{ (\kappa +b,\chi +2i\langle \kappa ,b\rangle +c+i\langle b,b\rangle
)+e(\mu ,\nu +2i\langle \mu ,b\rangle ):e\in \Bbb{C}\right\} .
\]
Note that the complex line $l^{*}$ passes through the origin by taking
\[
b=-\kappa ,\quad c=-\Re \chi .
\]
Thus we reduce the discussion to complex lines passing through the origin.
Suppose that the complex line $l$ is tangent to the complex tangent
hyperplane at the origin so that $l$ is given by
\[
\left\{ c(a,0):c\in \Bbb{C}\right\}
\]
for some nonzero vector $a\in \Bbb{C}^{n}.$ Then each point in the
intersection of the real hyperquadric $M$ with the complex line $l$ satisfies
\[
c\overline{c}\langle a,a\rangle =0.
\]
Thus we obtain that, whenever $\langle a,a\rangle \neq 0,$%
\[
M\cap \left\{ c(a,0):c\in \Bbb{C}\right\} =\left\{ (0,0)\right\}
\]
and, whenever $\langle a,a\rangle =0,$%
\[
M\cap \left\{ c(a,0):c\in \Bbb{C}\right\} =\left\{ c(a,0):c\in \Bbb{C}%
\right\} .
\]
Suppose that the complex line $l$ is transversal to the complex tangent
hyperplane at the origin so that $l$ is given by
\[
\left\{ c(a,1):c\in \Bbb{C}\right\}
\]
for some vector $a\in \Bbb{C}^{n}.$ We claim that the complex tangent
hyperplanes of the real hyperquadric $M$ and the complex line $l$ are
transversal at each point of the intersection $\gamma $:
\[
\gamma =M\cap \left\{ c(a,1):c\in \Bbb{C}\right\} .
\]
Let $(ca,c),$ $c\neq 0,$ be a point of $M$ so that
\begin{equation}
\frac{1}{2i}(c-\overline{c})=c\overline{c}\langle a,a\rangle \tag*{(4.3)}
\label{rela}
\end{equation}
and $(\mu ,\nu )\in \Bbb{C}^{n}\times \Bbb{C}$ be a vector tangent to the
complex tangent hyperplane of $M$ at the point $(ca,c).$ Then we obtain
\[
\nu -2i\langle \mu ,ca\rangle =0
\]
so that
\begin{eqnarray*}
(\mu ,\nu ) &=&\mu _{1}(1,0,\cdots ,0,0,2ie_{1}\overline{c}\overline{a}^{1})
\\
&&+\mu _{2}(0,1,\cdots ,0,0,2ie_{2}\overline{c}\overline{a}^{2}) \\
&&+\cdots \\
&&+\mu _{n}(0,0,\cdots ,0,1,2ie_{n}\overline{c}\overline{a}^{n})
\end{eqnarray*}
where
\begin{gather*}
\langle z,z\rangle =e_{1}z^{1}\overline{z}^{1}+\cdots +e_{n}z^{n}\overline{z}%
^{n} \\
e_{1},\cdots ,e_{n}=\pm 1.
\end{gather*}
Thus the transversality at $\gamma $ is determined by the value:
\[
\det \left(
\begin{array}{ccccc}
1 & 0 & \cdots & 0 & 2ie_{1}\overline{c}\overline{a}^{1} \\
0 & 1 & \ddots & \vdots & 2ie_{2}\overline{c}\overline{a}^{2} \\
\vdots & \ddots & \ddots & 0 & \vdots \\
0 & \cdots & 0 & 1 & 2ie_{n}\overline{c}\overline{a}^{n} \\
a^{1} & a^{2} & \cdots & a^{n} & 1
\end{array}
\right) =1-2i\overline{c}\langle a,a\rangle .
\]
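This value follows, for instance, from the Schur complement formula for a block matrix with identity upper-left block: writing $b=\left( 2ie_{1}\overline{c}\overline{a}^{1},\cdots ,2ie_{n}\overline{c}\overline{a}^{n}\right) ^{T}$ and $a=\left( a^{1},\cdots ,a^{n}\right) ^{T},$ we have
\[
\det \left(
\begin{array}{cc}
id_{n\times n} & b \\
a^{T} & 1
\end{array}
\right) =1-a^{T}b=1-2i\overline{c}\sum_{j=1}^{n}e_{j}a^{j}\overline{a}%
^{j}=1-2i\overline{c}\langle a,a\rangle .
\]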
Suppose that
\[
1-2i\overline{c}\langle a,a\rangle =0.
\]
Then the equality \ref{rela} yields $c=0,$ which contradicts $c\neq 0$.
Thus the complex tangent hyperplanes of the real hyperquadric $M$ and the
complex line $l$ are transversal at each point of the intersection $\gamma .$
Therefore, the intersection $\gamma $ is a curve transversal to the complex
tangent hyperplanes of $M$ at each point of $\gamma $. This completes the
proof.\endproof
Let $M$ be a nondegenerate analytic real hypersurface in a complex manifold
and $p$ be a point of $M.$ Then we can take a normal coordinate with the
center at the point $p$ so that $M$ is defined by
\begin{equation}
v=\langle z,z\rangle +\sum_{\min (s,t)\geq 2}F_{st}(z,\bar{z},u) \nonumber
\end{equation}
where
\[
\Delta F_{22}=\Delta ^{2}F_{23}=\Delta ^{3}F_{33}=0.
\]
A connected open curve $\gamma $ on $M$ is called a chain if it is locally
mapped onto the $u$-curve of a normal coordinate at each point of $\gamma .$
A connected closed subarc of a chain $\gamma $ shall be called a
chain-segment.
\begin{lemma}
\label{chain-chain}Let $M,M^{\prime }$ be nondegenerate analytic real
hypersurfaces and $\phi $ be a biholomorphic mapping on an open neighborhood
$U$ of a point $p\in M$ such that
\[
\phi \left( U\cap M\right) \subset M^{\prime }.
\]
Suppose that there is a chain $\gamma $ of $M$ passing through the point $p.$
Then the analytic curve $\phi \left( U\cap \gamma \right) $ is a chain of $%
M^{\prime }$.
\end{lemma}
\proof
Let $q\in \phi \left( U\cap \gamma \right) .$ Since $\gamma $ is a chain of $%
M,$ there exist an open neighborhood $V$ of the point $\phi ^{-1}(q)\in
\gamma $ and a normalization $\phi _{1}$ of $M$ such that $\phi _{1}$ is
biholomorphic on $V$ and
\[
\phi _{1}\left( V\cap \gamma \right) \subset \left\{ z=v=0\right\} .
\]
Note that $\phi _{1}\circ \phi ^{-1}$ is a normalization of $M^{\prime }$
such that, for a sufficiently small open neighborhood $O$ of the point $q,$ $%
\phi _{1}\circ \phi ^{-1}$ is biholomorphic on $O$ and
\[
\phi _{1}\circ \phi ^{-1}\left( O\cap \phi \left( V\cap \gamma \right)
\right) \subset \left\{ z=v=0\right\} .
\]
Since $q$ is an arbitrary point of $\phi \left( U\cap \gamma \right) ,$ the
analytic curve $\phi \left( U\cap \gamma \right) $ is a chain. This
completes the proof.\endproof
\begin{lemma}
\label{trans2}Let $M$ be a real hyperquadric $v=\langle z,z\rangle $ and $%
p=(\kappa ,\chi )\neq 0$ be a point of $M.$ Then there is a chain-segment $%
\gamma :[0,1]\rightarrow M$ such that
\[
\gamma (0)=0,\quad \gamma (1)=p
\]
whenever
\[
\Re \chi \neq 0\quad \text{or}\quad \langle \kappa ,\kappa \rangle \neq 0.
\]
\end{lemma}
\proof
Let $\phi _{\sigma }$ be a local automorphism of a real hyperquadric with
initial value $\sigma \in H$. Then the inverse $\phi _{\sigma }^{-1}$ of the
local automorphism $\phi _{\sigma }$ is given by
\begin{equation}
\phi _{\sigma }^{-1}:\left\{
\begin{array}{c}
z=\frac{C^{-1}(z^{*}+\rho ^{-1}Caw^{*})}{1-2i\langle z^{*},\rho
^{-1}Ca\rangle -w^{*}(-r\rho ^{-1}+i\langle \rho ^{-1}Ca,\rho ^{-1}Ca\rangle
)} \\
w=\frac{\rho ^{-1}w^{*}}{1-2i\langle z^{*},\rho ^{-1}Ca\rangle -w^{*}(-r\rho
^{-1}+i\langle \rho ^{-1}Ca,\rho ^{-1}Ca\rangle )}
\end{array}
\right. . \tag*{(4.4)} \label{inverse}
\end{equation}
Thus a chain $\gamma $ passing through the origin and transversal to the
complex tangent hyperplane at the origin is given in a normal
parametrization by
\[
\gamma =\phi _{\sigma }^{-1}\left( z^{*}=v^{*}=0\right) :\left\{
\begin{array}{c}
z=\frac{\rho ^{-1}au^{*}}{1-\rho ^{-1}u^{*}(-r+i\langle a,a\rangle )} \\
w=\frac{\rho ^{-1}u^{*}}{1-\rho ^{-1}u^{*}(-r+i\langle a,a\rangle )}
\end{array}
\right. .
\]
By taking $r=0,$ we easily see that the chain $\gamma $ is the intersection
of $M$ and the complex line
\[
\left\{ c(a,1):c\in \Bbb{C}\right\} .
\]
By Lemma \ref{trans}, the chain $\gamma $ is transversal to the complex
tangent hyperplanes of $M$ at each point of $\gamma .$
Let $(\kappa ,\chi )\neq 0$ be a point in $\Bbb{C}^{n}\times \Bbb{C}$ on $M$
such that
\[
\Im \chi =\langle \kappa ,\kappa \rangle .
\]
Then we have $\chi \neq 0$ whenever
\[
\Re \chi \neq 0\quad \text{or}\quad \langle \kappa ,\kappa \rangle \neq 0.
\]
Note that the origin and the point $(\kappa ,\chi ),$ $\chi \neq 0,$ are
connected by the chain
\[
\Gamma =M\cap \left\{ c(\chi ^{-1}\kappa ,1):c\in \Bbb{C}\right\} .
\]
This completes the proof.\endproof
\begin{lemma}
\label{spherical}Let $M$ be a spherical analytic real hypersurface. Then $M$
is locally biholomorphic to a real hyperquadric.
\end{lemma}
In the paper \cite{Pa2}, we have proved Lemma \ref{spherical}.
\begin{theorem}
\label{core2-lemma}Let $M$ be a spherical analytic real hypersurface and $%
\gamma :[0,1]\rightarrow M$ be a curve such that $\gamma [0,\tau ]$ is a
chain-segment for each $\tau <1.$ Then $\gamma [0,1]$ is a chain-segment of $%
M.$
\end{theorem}
\proof
By Lemma \ref{spherical}, the real hypersurface $M$ is biholomorphic to a
real hyperquadric at the point $\gamma (1).$ Hence, taking a normal
coordinate with center at the point $\gamma (1)$ yields
\[
v=\langle z,z\rangle
\]
where the curve $\gamma [0,1]$ touches the origin and the part $\gamma [0,1)$
is a chain. By Lemma \ref{trans} and Lemma \ref{trans2}, there exist a chain
$\Gamma $ passing through the origin, an open neighborhood $U$ of the
origin, and a normalization $\phi $ of $M$ such that $\phi $ is
biholomorphic on $U$ and
\[
\phi \left( \Gamma \cap U\right) \subset \left\{ z=v=0\right\} .
\]
Since $\gamma [0,1)\subset \Gamma $ and $\Gamma $ is a chain of $M,$ $\gamma
[0,1]$ is a chain-segment. This completes the proof.\endproof
\begin{proposition}
\label{cone}Let $M$ be a spherical analytic real hypersurface and $p$ be a
point of $M.$ Suppose that there are an open cone $V_{\theta }$ with its
vertex at the point $p$ and euclidean angle $\theta ,$ $0<\theta <\frac{\pi
}{2},$ to the complex tangent hyperplane at the point $p,$ and an open
neighborhood $U$ of the point $p.$ Then there is a number $\delta >0$ such
that, for each given curve $\eta :[0,1]\rightarrow V_{\theta }\cap
B(p;\delta ),$ there is a continuous family of chain-segments
\[
\gamma :[0,1]\times [0,1]\rightarrow U\cap M
\]
where $\gamma (s,\cdot ):[0,1]\rightarrow U\cap M$ is a chain-segment of $M$
for each $s\in [0,1]$ satisfying
\[
\gamma (s,0)=p\quad \text{and}\quad \gamma (s,1)=\eta (s)\quad \text{for all
}s\in [0,1].
\]
\end{proposition}
\proof
By Lemma \ref{spherical}, there is a biholomorphic mapping $\phi $ of $M$
near the point $p$ to a real hyperquadric $v=\langle z,z\rangle $ such that $%
\phi (p)=0.$ Then we take a sufficiently small number $\varepsilon >0$ so that
each point $q\in \phi (V_{\theta })\cap B(0;\varepsilon )$ is connected by a
chain-segment $\gamma \subset \phi (U\cap M)$ to the origin$.$ Then we take
a number $\delta >0$ such that
\[
\phi \left( V_{\theta }\cap B(p;\delta )\right) \subset \phi (V_{\theta
})\cap B(0;\varepsilon ).
\]
By Lemma \ref{trans2}, there is a continuous family of complex lines $l_{\tau
}$ such that the intersection $l_{\tau }\cap \phi (U\cap M)$ is a chain and
the chain $l_{\tau }\cap \phi (U\cap M)$ connects the origin and the
point $\phi (\eta (\tau ))$ for each $\tau \in [0,1].$ Hence there is a
continuous family of chain-segments
\[
\Gamma :[0,1]\times [0,1]\rightarrow \phi (U\cap M)
\]
where $\Gamma (\tau ,\cdot ):[0,1]\rightarrow \phi (U\cap M)$ is a
chain-segment for each $\tau \in [0,1]$ satisfying
\[
\Gamma (\tau ,0)=0\quad \text{and}\quad \Gamma (\tau ,1)=\phi (\eta (\tau
))\quad \text{for all }\tau \in [0,1].
\]
Then, by Lemma \ref{chain-chain}, the desired family of chain-segments on $M$
is given by
\[
\gamma \equiv \phi ^{-1}\circ \Gamma :[0,1]\times [0,1]\rightarrow U\cap M.
\]
This completes the proof.\endproof
\subsection{Chains on a nonspherical real hypersurface}
\begin{lemma}
\label{core1-lemma}Let $M$ be a nondegenerate analytic real hypersurface and
$p$ be a point of $M.$ Suppose that there are an open cone $V_{\theta }$
with its vertex at the point $p$ and euclidean angle $\theta ,$ $0<\theta <%
\frac{\pi }{2},$ to the complex tangent hyperplane at the point $p,$ and an
open neighborhood $U$ of the point $p.$ Then there is a number $\delta >0$
such that, for each given curve $\eta :[0,1]\rightarrow V_{\theta }\cap
B(p;\delta ),$ there is a continuous family of chain-segments
\[
\gamma :[0,1]\times [0,1]\rightarrow U\cap M
\]
where $\gamma (s,\cdot ):[0,1]\rightarrow U\cap M$ is a chain-segment of $M$
for each $s\in [0,1]$ satisfying
\[
\gamma (s,0)=p\quad \text{and}\quad \gamma (s,1)=\eta (s)\quad \text{for all
}s\in [0,1].
\]
\end{lemma}
\proof
By translation and unitary transformation, if necessary, we may assume that
the point $p$ is at the origin and the real hypersurface $M$ is defined near
the origin by
\[
v=F(z,\overline{z},u),\quad \left. F\right| _{0}=\left. F_{z}\right|
_{0}=\left. F_{\overline{z}}\right| _{0}=0
\]
so that
\[
F(z,\overline{z},u)=\sum_{s=2}^{\infty }F_{s}(z,\overline{z},u).
\]
With a sufficiently small number $\varepsilon >0,$ we consider an analytic
family of real hypersurfaces $M_{\mu },$ $\left| \mu \right| \leq
\varepsilon ,$ defined near the origin by the equations:
\[
v=F^{*}(z,\overline{z},u;\mu )
\]
where
\[
F^{*}(z,\overline{z},u;\mu )=\sum_{k=2}^{\infty }\mu ^{k-2}F_{k}(z,\overline{%
z},u).
\]
Note that the function $F^{*}(z,\overline{z},u;\mu )$ is analytic in $%
z,\overline{z},u,\mu $ and the real hypersurface $M_{0}$ (i.e., $\mu =0$) is spherical.
Then we obtain an analytic family of ordinary differential equations
\begin{equation}
p^{\prime \prime }=Q(\tau ,p,\overline{p},p^{\prime },\overline{p}^{\prime
};\mu ) \tag*{(4.5)} \label{chain eq}
\end{equation}
so that each chain $\gamma $ passing through the origin on $M_{\mu }$ is
given by
\[
\gamma :\left\{
\begin{array}{l}
z=p(\tau ) \\
w=\tau +iF^{*}\left( p(\tau ),\overline{p}(\tau ),\tau ;\mu \right)
\end{array}
\right.
\]
where $p(\tau )$ is a solution of the equation \ref{chain eq}. The solution $%
p$ of the equation \ref{chain eq} is given as an analytic function of $\tau
,\mu ,a,$ where
\[
a=p^{\prime }(0).
\]
In fact, for a given real number $\nu \in \Bbb{R}^{+},$ there are real
numbers $\tau _{1},\varepsilon _{1}$ such that the analytic function
\[
p=p(\tau ,\mu ,a)
\]
converges absolutely and uniformly on the range
\[
\left| a\right| \leq \nu ,\quad \left| \tau \right| \leq \tau _{1},\quad
\left| \mu \right| \leq \varepsilon _{1}.
\]
Since $M_{0}$ is spherical, by Proposition \ref{cone}, for an open neighborhood $%
U_{0}$ of the origin and an open cone $V_{\theta _{0}}$ with its vertex at
the origin and euclidean angle $\theta _{0},$ $0<\theta _{0}<\frac{\pi }{2},$
to the complex tangent hyperplane at the origin$,$ there is a number $\delta
_{0}>0$ such that, for each given curve $\eta :[0,1]\rightarrow V_{\theta
_{0}}\cap B(0;\delta _{0}),$ there is a continuous family of chain-segments
\[
\gamma _{0}:[0,1]\times [0,1]\rightarrow U_{0}\cap M_{0}
\]
where $\gamma _{0}(s,\cdot ):[0,1]\rightarrow U_{0}\cap M_{0}$ is a
chain-segment of $M_{0}$ for each $s\in [0,1]$ satisfying
\[
\gamma _{0}(s,0)=0\quad \text{and}\quad \gamma _{0}(s,1)=\eta (s)\quad \text{%
for all }s\in [0,1].
\]
Then, for an open neighborhood $U_{1}$ of the origin and an open cone $%
V_{\theta _{1}}$ with its vertex at the origin and euclidean angle $\theta
_{1},$ $0<\theta _{1}<\frac{\pi }{2},$ to the complex tangent hyperplane at
the origin, there exist real numbers $\mu _{1},\delta _{1}>0$ such that, for
each given curve $\eta :[0,1]\rightarrow V_{\theta _{1}}\cap B(0;\delta
_{1}),$ there is a continuous family of chain-segments
\[
\gamma _{1}:[0,1]\times [0,1]\rightarrow U_{1}\cap M_{\mu _{1}}
\]
where $\gamma _{1}(s,\cdot ):[0,1]\rightarrow U_{1}\cap M_{\mu _{1}}$ is a
chain-segment of $M_{\mu _{1}}$ for each $s\in [0,1]$ satisfying
\[
\gamma _{1}(s,0)=0\quad \text{and}\quad \gamma _{1}(s,1)=\eta (s)\quad \text{%
for all }s\in [0,1].
\]
On the other hand, the real hypersurface $M_{\mu },$ $\mu \neq 0,$ is
obtained from $M$ by the biholomorphic mapping:
\[
\chi _{\mu }:\left\{
\begin{array}{l}
z^{*}=\mu ^{-1}z \\
w^{*}=\mu ^{-2}w
\end{array}
\right. .
\]
For an open neighborhood $U$ of the origin and an open cone $V_{\theta }$
with its vertex at the origin and euclidean angle $\theta ,$ $0<\theta <%
\frac{\pi }{2},$ to the complex tangent hyperplane at the origin, we take $%
\theta _{1}$ and a real number $\delta >0$ such that
\[
\chi _{\mu _{1}}\left( V_{\theta }\cap B(0;\delta )\right) \subset V_{\theta
_{1}}\cap B(0;\delta _{1}).
\]
Then, for each given curve $\eta :[0,1]\rightarrow V_{\theta }\cap
B(0;\delta ),$ there is a continuous family of chain-segments
\[
\gamma _{1}:[0,1]\times [0,1]\rightarrow U_{1}\cap M_{\mu _{1}}
\]
where $\gamma _{1}(s,\cdot ):[0,1]\rightarrow \chi _{\mu _{1}}\left( U\cap
M\right) $ is a chain-segment of $M_{\mu _{1}}$ for each $s\in [0,1]$
satisfying
\[
\gamma _{1}(s,0)=0\quad \text{and}\quad \gamma _{1}(s,1)=\chi _{\mu
_{1}}\left( \eta (s)\right) \quad \text{for all }s\in [0,1].
\]
Then, by Lemma \ref{chain-chain}, the desired family of chain-segments on $M$
is given by
\[
\gamma \equiv \chi _{\mu _{1}}^{-1}\circ \gamma _{1}:[0,1]\times
[0,1]\rightarrow U\cap M.
\]
This completes the proof.\endproof
\begin{theorem}
\label{core1}Let $M$ be an analytic real hypersurface with nondegenerate
Levi form and $U$ be an open neighborhood of a point $p$ of $M.$ Then there
are a number $\varepsilon >0$ and a point $q\in U\cap M$ such that
\[
B(p;\varepsilon )\subset U
\]
and, for each given curve $\eta :[0,1]\rightarrow B(p;\varepsilon )\cap M,$
there is a continuous family of chain-segments
\[
\gamma :[0,1]\times [0,1]\rightarrow U\cap M
\]
where $\gamma (s,\cdot ):[0,1]\rightarrow U\cap M$ is a chain-segment of $M$
for each $s\in [0,1]$ satisfying
\[
\gamma (s,0)=q\quad \text{and}\quad \gamma (s,1)=\eta (s)\quad \text{for all
}s\in [0,1].
\]
\end{theorem}
\proof
We take a point $q$ sufficiently near the point $p$ such that there are an
open cone $V_{\theta }$ and a number $\delta >0$ as in Lemma \ref{core1-lemma}
satisfying
\[
p\in V_{\theta }\cap B(q;\delta ).
\]
Then we take a number $\varepsilon >0$ such that
\[
B(p;\varepsilon )\subset V_{\theta }\cap B(q;\delta ).
\]
The desired result follows from Lemma \ref{core1-lemma}. This completes the
proof.\endproof
\subsection{Piecewise chain curve}
Let $M$ be an analytic real hypersurface with nondegenerate Levi form. Let $%
\gamma $ be a piecewise differentiable curve from $[0,1]$ into $M$ such that
there are disjoint open intervals $I_{i},$ $i=1,\cdots ,m,$ satisfying
\[
\lbrack 0,1]=\bigcup_{i=1}^{m}\overline{I_{i}}
\]
and each piece $\gamma \left( \overline{I_{i}}\right) ,$ $i=1,\cdots ,m,$ is a
chain-segment. Then $\gamma $ shall be called a piecewise chain curve.
\begin{lemma}
\label{pcc}Let $M$ be a connected analytic real hypersurface and $\Gamma $
be a continuous curve on $M$ connecting two points $p,q\in M.$ Then, for a
given number $\varepsilon >0,$ there is a piecewise chain curve $\gamma
:[0,1]\rightarrow M$ such that
\begin{align*}
\gamma (0)=p,\quad \gamma (1)=q \\
\gamma [0,1]\subset \bigcup_{x\in \Gamma }B(x;\varepsilon ).
\end{align*}
\end{lemma}
\proof
Since the curve $\Gamma $ is compact, there are finitely many points $%
x_{i}\in \Gamma ,$ $i=1,\cdots ,l,$ such that
\[
\Gamma \subset \bigcup_{i=1}^{l}B(x_{i};\varepsilon ).
\]
Suppose that $x$ is a point on $\Gamma $ and $x\in B(x_{i};\varepsilon ).$
Then, by Lemma \ref{core1}, there is a number $\delta _{x}>0$ such that
every two points $y,z\in B(x;\delta _{x})$ are connected by an at most $2$%
-pieced chain curve $\gamma \subset B(x_{i};\varepsilon ).$
Note that the set $\left\{ B(x;\delta _{x}):x\in \Gamma \right\} $ is an
open covering of $\Gamma .$ Since $\Gamma $ is compact, there is a finite
subcover, say,
\[
\Gamma \subset \bigcup_{j=1}^{k}B(y_{j};\delta _{y_{j}}).
\]
Then there is an at most $2k$-pieced chain curve $\gamma :[0,1]\rightarrow M$
connecting the two points $p,q\in M$ such that
\[
\gamma [0,1]\subset \bigcup_{x\in \Gamma }B(x;\varepsilon ).
\]
This completes the proof.\endproof
\begin{lemma}
\label{pcc2}Let $M$ be a nondegenerate analytic real hypersurface$.$ Suppose
that there is a piecewise chain curve $\gamma $ connecting two points $%
p,q\in M.$ Then $M$ is biholomorphic to a real hyperquadric at the point $p$
if and only if $M$ is biholomorphic to a real hyperquadric at the point $q.$
\end{lemma}
\proof
Without loss of generality, we may assume that $p$ and $q$ are connected by
a chain-segment $\gamma :[0,1]\rightarrow M$. Then there is a chain $\Gamma $
of $M$ satisfying
\[
\gamma [0,1]\subset \Gamma .
\]
For each point $x\in \Gamma ,$ there are an open neighborhood $U_{x}$ of $x$
and a biholomorphic mapping $N_{x}$ such that
\begin{eqnarray*}
N_{x}(x) &=&0 \\
N_{x}(U_{x}\cap \Gamma ) &\subset &\left\{ z=v=0\right\} .
\end{eqnarray*}
Since the subset $\gamma [0,1]$ is compact, there is a finite subcover, say,
\[
\left\{ U_{x_{i}}:i=1,\cdots ,m\right\} .
\]
Suppose that the normalization $N_{x_{i}}$ transforms $M\cap U_{x_{i}}$ to
the real hypersurface $M_{x_{i}}^{\prime }$ defined near the origin by
\[
v=\langle z,z\rangle +\sum_{s,t\geq 2}F_{st}^{i}(z,\overline{z},u).
\]
Note that the functions $F_{st}^{i}(z,\overline{z},u)$ are analytic in $u$
on the set $N_{x_{i}}(U_{x_{i}}\cap \Gamma ).$ Thus
\[
F_{st}^{i}(z,\overline{z},u)\equiv 0
\]
whenever there is an open subset $U\subset U_{x_{i}}$ satisfying
\[
F_{st}^{i}(z,\overline{z},u)=0\quad \text{for }u\in N_{x_{i}}(U\cap \Gamma
).
\]
Thus the desired result follows. This completes the proof.\endproof
\begin{theorem}
\label{remove2}Let $M$ be a connected nondegenerate analytic real
hypersurface$.$ Then $M$ is not biholomorphic to a real hyperquadric at each
point of $M$ whenever there is a point $p$ of $M$ at which $M$ is not
biholomorphic to a real hyperquadric.
\end{theorem}
\proof
The contrapositive may be stated as follows: $M$ is locally biholomorphic to
a real hyperquadric at each point of $M$ whenever $M$ is biholomorphic to a
real hyperquadric at a point $p$ of $M.$ Suppose that there is a point $p$
of $M$ at which $M$ is biholomorphic to a real hyperquadric. By Lemma \ref
{pcc}, each point $q$ of $M$ is connected to $p$ by a piecewise chain curve.
Then, by Lemma \ref{pcc2}, $M$ is biholomorphic to a real hyperquadric at
the point $q$ as well. Since $M$ is connected, this completes the proof.\endproof
\begin{lemma}
\label{umbilic}Let $M$ be an analytic real hypersurface and $U$ be an open
neighborhood of a point $p\in M.$ Suppose that $U\cap M$ consists of umbilic
points. Then $U\cap M$ is locally biholomorphic to a real hyperquadric.
\end{lemma}
In the paper \cite{Pa2}, we have given the proof of Lemma \ref{umbilic}.
\begin{theorem}
Let $M$ be a nondegenerate analytic real hypersurface and $p$ be a point of $%
M.$ Suppose that $M$ is not biholomorphic to a real hyperquadric at the
point $p.$ Then there is a normalization $\phi $ near the point $p$ such
that $\phi \left( M\right) $ is defined by the equation, for $\dim M=3,$%
\[
v=\langle z,z\rangle +\sum_{\min (s,t)\geq 2,\max (s,t)\geq 4}F_{st}(z,%
\overline{z},u)
\]
where
\[
F_{24}(z,\overline{z},u)\neq 0,
\]
and, for $\dim M\geq 5,$%
\[
v=\langle z,z\rangle +\sum_{\min (s,t)\geq 2}F_{st}(z,\overline{z},u)
\]
where
\[
F_{22}(z,\overline{z},u)\neq 0.
\]
\end{theorem}
\proof
Suppose that the assertion is not true. Then $M$ is umbilic on all points of
all chains passing through the point $p.$ Then, by Lemma \ref{core1-lemma}%
, there is an open set $U$ such that every point of $U\cap M$ is connected
to $p$ by a chain of $M.$ Hence $U\cap M$ consists of umbilic points so
that, by Lemma \ref{umbilic}, $U\cap M$ is locally biholomorphic to a real
hyperquadric. By Lemma \ref{pcc2}, $M$ is biholomorphic to a real
hyperquadric at the point $p$ as well. This is a contradiction. This
completes the proof.\endproof
\subsection{Global straightening of a chain}
Let $M$ be a nondegenerate analytic real hypersurface defined near the
origin by the equation
\[
v=\frac{1}{4\alpha }\ln \frac{1}{1-4\alpha \langle z,z\rangle }+\sum_{\min
(s,t)\geq 2}F_{st}(z,\bar{z},u)
\]
where $\alpha $ is a given real number and
\[
\Delta F_{22}=\Delta ^{2}F_{23}=\Delta ^{3}F_{33}=0.
\]
By using the expansion
\[
-\ln \left( 1-x\right) =\sum_{m=1}^{\infty }\frac{x^{m}}{m},
\]
the defining equation of $M$ becomes
\[
v=\langle z,z\rangle +\sum_{\min (s,t)\geq 2}F_{st}^{*}(z,\bar{z},u)
\]
where
\begin{eqnarray*}
\Delta F_{22}^{*}(z,\overline{z},u) &=&4\alpha (n+1)\langle z,z\rangle \\
\Delta ^{2}F_{23}^{*}(z,\overline{z},u) &=&0 \\
\Delta ^{3}F_{33}^{*}(z,\overline{z},u) &=&32\alpha ^{2}n(n+1)(n+2).
\end{eqnarray*}
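These trace values follow from the power series of the logarithmic term:
\[
\frac{1}{4\alpha }\ln \frac{1}{1-4\alpha \langle z,z\rangle }=\langle
z,z\rangle +2\alpha \langle z,z\rangle ^{2}+\frac{16\alpha ^{2}}{3}\langle
z,z\rangle ^{3}+\cdots
\]
together with the identities $\Delta \langle z,z\rangle ^{2}=2(n+1)\langle
z,z\rangle $ and $\Delta ^{3}\langle z,z\rangle ^{3}=6n(n+1)(n+2),$ which
give
\begin{eqnarray*}
\Delta \left( 2\alpha \langle z,z\rangle ^{2}\right) &=&4\alpha (n+1)\langle
z,z\rangle \\
\Delta ^{3}\left( \frac{16\alpha ^{2}}{3}\langle z,z\rangle ^{3}\right)
&=&32\alpha ^{2}n(n+1)(n+2).
\end{eqnarray*}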
We may require that the real hypersurface $M$ in Moser-Vitushkin normal
form be maximally analytically extended along the $u$-curve.
\begin{lemma}
\label{linear-linear}Let $M$ be a real hypersurface defined near the origin
by
\[
v=\langle z,z\rangle +\sum_{\min (s,t)\geq 2}F_{st}(z,\bar{z},u)
\]
where
\[
\Delta F_{22}=\Delta ^{2}F_{23}=\Delta ^{3}F_{33}=0.
\]
Let $\frak{L}$ be the mapping
\[
\frak{L}:\left\{
\begin{array}{l}
z^{*}=\frac{z}{1-i\alpha w} \\
w^{*}=\frac{1}{2i\alpha }\ln \frac{1+i\alpha w}{1-i\alpha w}=\frac{1}{\alpha
}\tan ^{-1}\alpha w
\end{array}
\right. .
\]
Then $\frak{L}\left( M\right) $ is defined near the origin by the equation
\[
v=\frac{1}{4\alpha }\ln \frac{1}{1-4\alpha \langle z,z\rangle }+\sum_{\min
(s,t)\geq 2}F_{st}^{*}(z,\bar{z},u)
\]
where
\[
\Delta F_{22}^{*}=\Delta ^{2}F_{23}^{*}=\Delta ^{3}F_{33}^{*}=0.
\]
\end{lemma}
\proof
Suppose that the real hypersurface $M$ in Chern-Moser normal form is
defined by the equation
\[
v=\langle z,z\rangle +\sum_{\min (s,t)\geq 2}F_{st}(z,\bar{z},u)
\]
where
\begin{equation}
\Delta F_{22}=\Delta ^{2}F_{23}=\Delta ^{3}F_{33}=0. \tag*{(4.6)}
\label{un}
\end{equation}
Let $\frak{L}$ be a normalization of $M$ to Moser-Vitushkin normal form
leaving the $u$-curve invariant. We require the identity initial value on
the normalization $\frak{L}$ so that $\frak{L}$ is necessarily of the
form (cf. the proof of Theorem \ref{exi-uni})
\[
\frak{L}:\left\{
\begin{array}{l}
z^{*}=\sqrt{q^{\prime }(w)}U(w)z \\
w^{*}=q(w)
\end{array}
\right.
\]
where
\[
\langle U(u)z,U(u)z\rangle =\langle z,z\rangle \quad \mathrm{and}\quad
U(0)=id_{n\times n}.
\]
Suppose that $\frak{L}$ transforms $M$ to a real hypersurface $M^{\prime }$
defined by
\[
v=\langle z,z\rangle +\sum_{\min (s,t)\geq 2}F_{st}^{*}(z,\bar{z},u).
\]
Then we obtain
\begin{align}
F_{22}(z,\overline{z},u)& =q^{\prime }(u)F_{22}^{*}(U(u)z,\overline{U(u)z}%
,q(u)) \nonumber \\
& -i\langle z,z\rangle \langle z,U(u)^{-1}\{U^{\prime }(u)+\frac{1}{2}%
q^{\prime }(u)^{-1}q^{\prime \prime }(u)U(u)\}z\rangle \nonumber \\
& +i\langle z,z\rangle \langle U(u)^{-1}\{U^{\prime }(u)+\frac{1}{2}%
q^{\prime }(u)^{-1}q^{\prime \prime }(u)U(u)\}z,z\rangle \tag*{(4.7)}
\label{deux}
\end{align}
and
\begin{equation}
F_{23}(z,\overline{z},u)=q^{\prime }(u)\sqrt{\left| q^{\prime }(u)\right| }%
F_{23}^{*}(U(u)z,\overline{U(u)z},q(u)). \tag*{(4.8)} \label{trois}
\end{equation}
The condition $\Delta ^{2}F_{23}=0$ in \ref{un} yields
\[
\Delta ^{2}F_{23}^{*}(z,\overline{z},u)=0
\]
so that the $u$-curve is a chain of $M^{\prime }.$ We require that $%
M^{\prime }$ is in Moser-Vitushkin normal form so that
\begin{eqnarray*}
\Delta F_{22}^{*}(z,\overline{z},u) &=&4\alpha (n+1)\langle z,z\rangle \\
\Delta ^{2}F_{23}^{*}(z,\overline{z},u) &=&0 \\
\Delta ^{3}F_{33}^{*}(z,\overline{z},u) &=&32\alpha ^{2}n(n+1)(n+2).
\end{eqnarray*}
Then, in \ref{deux}, we require the following condition
\begin{eqnarray*}
\Delta F_{22}(z,\overline{z},u) &=&0 \\
\Delta F_{22}^{*}(z,\overline{z},u) &=&4\alpha (n+1)\langle z,z\rangle
\end{eqnarray*}
so that
\begin{align}
& 4\alpha (n+1)q^{\prime }(u)\langle z,z\rangle \nonumber \\
& +i(n+2)\{\langle U(u)^{-1}U^{\prime }(u)z,z\rangle -\langle
z,U(u)^{-1}U^{\prime }(u)z\rangle \} \nonumber \\
& +i\langle z,z\rangle \{\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))-\overline{%
\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))}\} \nonumber \\
& =0. \tag*{(4.9)} \label{quatre}
\end{align}
From the condition $\langle U(u)z,U(u)z\rangle =\langle z,z\rangle ,$ we
obtain
\begin{eqnarray*}
\langle U(u)^{-1}U^{\prime }(u)z,z\rangle +\langle z,U(u)^{-1}U^{\prime
}(u)z\rangle &=&0 \\
\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))+\overline{\mathrm{Tr}%
(U(u)^{-1}U^{\prime }(u))} &=&0.
\end{eqnarray*}
The equality \ref{quatre} becomes
\[
2\alpha i(n+1)q^{\prime }(u)id_{n\times n}=(n+2)U(u)^{-1}U^{\prime }(u)+%
\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))id_{n\times n},
\]
which yields
\[
\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))=\alpha niq^{\prime }(u).
\]
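This value of the trace is obtained by taking the trace of both sides of
the preceding matrix identity:
\[
2\alpha in(n+1)q^{\prime }(u)=(n+2)\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))+n%
\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))=2(n+1)\mathrm{Tr}(U(u)^{-1}U^{\prime
}(u)).
\]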
Hence we obtain
\[
U^{\prime }(u)=\alpha iq^{\prime }(u)U(u).
\]
Thus the function $U(u)$ is given by
\[
U(u)=\exp \alpha iq(u).
\]
Then the mapping $\frak{L}$ is necessarily of the form:
\[
\frak{L}:\left\{
\begin{array}{l}
z^{*}=\sqrt{q^{\prime }(w)}z\exp \alpha iq(w) \\
w^{*}=q(w)
\end{array}
\right. .
\]
Then we have
\begin{align}
F_{33}(z,\overline{z},u)& =q^{\prime }(u)\left| q^{\prime }(u)\right|
F_{33}^{*}(z,\overline{z},q(u)) \nonumber \\
& -6\alpha q^{\prime }(u)^{2}\langle z,z\rangle F_{22}^{*}(z,\overline{z}%
,q(u)) \nonumber \\
& +\left\{ -\frac{q^{\prime \prime \prime }(u)}{3q^{\prime }(u)}+\frac{1}{2}%
\left( \frac{q^{\prime \prime }(u)}{q^{\prime }(u)}\right) ^{2}+6\alpha
^{2}q^{\prime }(u)^{2}\right\} \langle z,z\rangle ^{3}. \tag*{(4.10)}
\label{cinq}
\end{align}
We have the following identities:
\begin{eqnarray*}
\Delta ^{3}\langle z,z\rangle ^{3} &=&6n(n+1)(n+2) \\
\Delta ^{3}\left\{ F_{33}^{*}(z,\overline{z},q(u))\right\} &=&\Delta
^{3}F_{33}^{*}(z,\overline{z},q(u)) \\
\Delta ^{3}\left\{ \langle z,z\rangle F_{22}^{*}(z,\overline{z}%
,q(u))\right\} &=&3(n+2)\Delta ^{2}F_{22}^{*}(z,\overline{z},q(u)).
\end{eqnarray*}
Then, by requiring in \ref{cinq} the following conditions
\[
\Delta ^{3}F_{33}(z,\overline{z},u)=0
\]
and
\begin{eqnarray*}
\Delta F_{22}^{*}(z,\overline{z},u) &=&4\alpha (n+1)\langle z,z\rangle \\
\Delta ^{3}F_{33}^{*}(z,\overline{z},u) &=&32\alpha ^{2}n(n+1)(n+2),
\end{eqnarray*}
we obtain
\[
\frac{q^{\prime \prime \prime }(u)}{3q^{\prime }(u)}-\frac{1}{2}\left( \frac{%
q^{\prime \prime }(u)}{q^{\prime }(u)}\right) ^{2}+\frac{2\alpha ^{2}}{3}%
\cdot q^{\prime }(u)^{2}=0.
\]
We easily check that the solution $q(u)$ with the initial value
\[
q(0)=q^{\prime \prime }(0)=0\quad \mathrm{and}\quad q^{\prime }(0)=1
\]
is given by
\[
q(w)=\frac{1}{2i\alpha }\ln \frac{1+i\alpha w}{1-i\alpha w}=\frac{1}{\alpha }%
\tan ^{-1}\alpha w.
\]
Then, with this $q(w),$ we easily check as well that
\[
\sqrt{q^{\prime }(w)}\exp \alpha iq(w)=\frac{1}{1-i\alpha w}.
\]
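Both verifications are straightforward. From $q(u)=\alpha ^{-1}\tan
^{-1}\alpha u$ we obtain
\[
q^{\prime }(u)=\frac{1}{1+\alpha ^{2}u^{2}},\quad q^{\prime \prime
}(u)=-2\alpha ^{2}uq^{\prime }(u)^{2},\quad q^{\prime \prime \prime
}(u)=-2\alpha ^{2}q^{\prime }(u)^{2}+8\alpha ^{4}u^{2}q^{\prime }(u)^{3},
\]
so that
\[
\frac{q^{\prime \prime \prime }(u)}{3q^{\prime }(u)}-\frac{1}{2}\left( \frac{%
q^{\prime \prime }(u)}{q^{\prime }(u)}\right) ^{2}+\frac{2\alpha ^{2}}{3}%
q^{\prime }(u)^{2}=\frac{(-2\alpha ^{2}+6\alpha ^{4}u^{2})-6\alpha
^{4}u^{2}+2\alpha ^{2}}{3(1+\alpha ^{2}u^{2})^{2}}=0,
\]
and, since $1+\alpha ^{2}w^{2}=(1+i\alpha w)(1-i\alpha w)$ and $e^{2\alpha
iq(w)}=\frac{1+i\alpha w}{1-i\alpha w},$
\[
\sqrt{q^{\prime }(w)}\exp \alpha iq(w)=\frac{1}{\sqrt{(1+i\alpha
w)(1-i\alpha w)}}\left( \frac{1+i\alpha w}{1-i\alpha w}\right) ^{\frac{1}{2}%
}=\frac{1}{1-i\alpha w}.
\]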
This completes the proof.\endproof
\begin{lemma}[Ezhov]
\label{MV}Let $M$ be an analytic real hypersurface in Moser-Vitushkin normal
form and $\phi $ be a normalization of $M$ to Moser-Vitushkin normal form
such that $\phi $ leaves the $u$-curve invariant. Then the mapping $\phi $
is given by
\[
\phi :\left\{
\begin{array}{l}
z^{*}=\sqrt{\mathrm{sign\{}q^{\prime }(0)\mathrm{\}}q^{\prime }(w)}%
Uze^{\alpha i(q(w)-w)} \\
w^{*}=q(w)
\end{array}
\right.
\]
where
\[
\langle Uz,Uz\rangle =\mathrm{sign\{}q^{\prime }(0)\mathrm{\}}\langle
z,z\rangle
\]
and the function $q(u)$ is a solution of the equation:
\[
\frac{q^{\prime \prime \prime }}{3q^{\prime }}-\frac{1}{2}\left( \frac{%
q^{\prime \prime }}{q^{\prime }}\right) ^{2}+\frac{2\alpha ^{2}}{3}\left(
q^{\prime 2}-1\right) =0.
\]
\end{lemma}
\proof
Suppose that the real hypersurface $M$ in Moser-Vitushkin normal form is
defined by the equation
\[
v=\langle z,z\rangle +\sum_{\min (s,t)\geq 2}F_{st}(z,\bar{z},u)
\]
where
\begin{eqnarray*}
\Delta F_{22}(z,\overline{z},u) &=&4\alpha (n+1)\langle z,z\rangle \\
\Delta ^{2}F_{23}(z,\overline{z},u) &=&0 \\
\Delta ^{3}F_{33}(z,\overline{z},u) &=&32\alpha ^{2}n(n+1)(n+2).
\end{eqnarray*}
Let $\phi $ be a normalization of $M$ to Moser-Vitushkin normal form leaving
the $u$-curve invariant. Then the mapping $\phi $ is necessarily of the
form (cf. the proof of Theorem \ref{exi-uni})
\[
\phi :\left\{
\begin{array}{l}
z^{*}=\sqrt{\mathrm{sign\{}q^{\prime }(0)\mathrm{\}}q^{\prime }(w)}U(w)z \\
w^{*}=q(w)
\end{array}
\right.
\]
where
\[
\langle U(u)z,U(u)z\rangle =\mathrm{sign\{}q^{\prime }(0)\mathrm{\}}\langle
z,z\rangle .
\]
Suppose that $\phi $ transforms $M$ to a real hypersurface $M^{\prime }$
defined by
\[
v=\langle z,z\rangle +\sum_{\min (s,t)\geq 2}F_{st}^{*}(z,\bar{z},u).
\]
Then we obtain
\begin{align*}
F_{22}(z,\overline{z},u)& =q^{\prime }(u)F_{22}^{*}(U(u)z,\overline{U(u)z}%
,q(u)) \\
& -i\langle z,z\rangle \langle z,U(u)^{-1}\{U^{\prime }(u)+\frac{1}{2}%
q^{\prime }(u)^{-1}q^{\prime \prime }(u)U(u)\}z\rangle \\
& +i\langle z,z\rangle \langle U(u)^{-1}\{U^{\prime }(u)+\frac{1}{2}%
q^{\prime }(u)^{-1}q^{\prime \prime }(u)U(u)\}z,z\rangle \\
F_{23}(z,\overline{z},u)& =q^{\prime }(u)\sqrt{\left| q^{\prime }(u)\right| }%
F_{23}^{*}(U(u)z,\overline{U(u)z},q(u)).
\end{align*}
Then we easily see that
\[
\Delta ^{2}F_{23}^{*}(z,\overline{z},u)=0.
\]
We require the following condition
\begin{eqnarray*}
\Delta F_{22}(z,\overline{z},u) &=&\Delta F_{22}^{*}(z,\overline{z},u) \\
&=&4\alpha (n+1)\langle z,z\rangle
\end{eqnarray*}
so that
\begin{align}
& 4\alpha (n+1)(q^{\prime }(u)-1)\langle z,z\rangle \nonumber \\
& +i(n+2)\{\langle U(u)^{-1}U^{\prime }(u)z,z\rangle -\langle
z,U(u)^{-1}U^{\prime }(u)z\rangle \} \nonumber \\
& +i\langle z,z\rangle \{\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))-\overline{%
\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))}\} \nonumber \\
& =0. \tag*{(4.11)} \label{equ 22}
\end{align}
From the equality
\[
\langle U(u)z,U(u)z\rangle =\mathrm{sign\{}q^{\prime }(0)\mathrm{\}}\langle
z,z\rangle ,
\]
we have identities
\begin{eqnarray*}
\langle U(u)^{-1}U^{\prime }(u)z,z\rangle +\langle z,U(u)^{-1}U^{\prime
}(u)z\rangle &=&0 \\
\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))+\overline{\mathrm{Tr}%
(U(u)^{-1}U^{\prime }(u))} &=&0.
\end{eqnarray*}
The equality \ref{equ 22} becomes
\[
2\alpha i(n+1)(q^{\prime }(u)-1)id_{n\times n}=(n+2)U(u)^{-1}U^{\prime }(u)+%
\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))id_{n\times n},
\]
which yields
\[
\mathrm{Tr}(U(u)^{-1}U^{\prime }(u))=\alpha ni(q^{\prime }(u)-1).
\]
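This follows by taking the trace of both sides of the preceding matrix
identity:
\[
2\alpha in(n+1)(q^{\prime }(u)-1)=(n+2+n)\mathrm{Tr}(U(u)^{-1}U^{\prime
}(u))=2(n+1)\mathrm{Tr}(U(u)^{-1}U^{\prime }(u)).
\]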
Hence we obtain
\[
U^{\prime }(u)=\alpha i(q^{\prime }(u)-1)U(u).
\]
Thus the function $U(u)$ is given by
\[
U(u)=U(0)e^{\alpha i(q(u)-u)}.
\]
Then the mapping $\phi $ is necessarily of the form:
\[
\phi :\left\{
\begin{array}{l}
z^{*}=\sqrt{\mathrm{sign\{}q^{\prime }(0)\mathrm{\}}q^{\prime }(w)}U(0)z\exp
\alpha i(q(w)-w) \\
w^{*}=q(w)
\end{array}
\right.
\]
where
\[
\langle U(0)z,U(0)z\rangle =\mathrm{sign\{}q^{\prime }(0)\mathrm{\}}\langle
z,z\rangle .
\]
Then we have
\begin{align}
F_{33}(z,\overline{z},u)& =q^{\prime }(u)\left| q^{\prime }(u)\right|
F_{33}^{*}(U(0)z,\overline{U(0)z},q(u)) \nonumber \\
& -6\alpha q^{\prime }(u)(q^{\prime }(u)-1)\langle z,z\rangle
F_{22}^{*}(U(0)z,\overline{U(0)z},q(u)) \nonumber \\
& +\left\{ -\frac{q^{\prime \prime \prime }(u)}{3q^{\prime }(u)}+\frac{1}{2}%
\left( \frac{q^{\prime \prime }(u)}{q^{\prime }(u)}\right) ^{2}+3\alpha
^{2}\left( q^{\prime }(u)-1\right) ^{2}\right\} \langle z,z\rangle ^{3}.
\tag*{(4.12)} \label{f33}
\end{align}
We have the following identities:
\begin{eqnarray*}
\Delta ^{3}\langle z,z\rangle ^{3} &=&6n(n+1)(n+2) \\
\Delta ^{3}\left\{ F_{33}^{*}(U(0)z,\overline{U(0)z},q(u))\right\} &=&%
\mathrm{sign\{}q^{\prime }(0)\mathrm{\}}\Delta ^{3}F_{33}^{*}(z,\overline{z}%
,q(u)) \\
\Delta ^{3}\left\{ \langle z,z\rangle F_{22}^{*}(U(0)z,\overline{U(0)z}%
,q(u))\right\} &=&3(n+2)\Delta ^{2}F_{22}^{*}(z,\overline{z},q(u)).
\end{eqnarray*}
Then, requiring in \ref{f33} the following condition
\begin{eqnarray*}
\Delta ^{3}F_{33}(z,\overline{z},u) &=&\Delta ^{3}F_{33}^{*}(z,\overline{z}%
,u) \\
&=&32\alpha ^{2}n(n+1)(n+2),
\end{eqnarray*}
we obtain
\begin{eqnarray*}
\frac{q^{\prime \prime \prime }(u)}{3q^{\prime }(u)}-\frac{1}{2}\left( \frac{%
q^{\prime \prime }(u)}{q^{\prime }(u)}\right) ^{2} &=&6\alpha ^{2}\left(
q^{\prime }(u)-1\right) ^{2}+\frac{16\alpha ^{2}}{3}\left( q^{\prime
}(u)^{2}-1\right) \\
&&-12\alpha ^{2}q^{\prime }(u)\left( q^{\prime }(u)-1\right) \\
&=&-\frac{2\alpha ^{2}}{3}\left( q^{\prime }(u)^{2}-1\right) .
\end{eqnarray*}
This completes the proof.\endproof
\begin{lemma}[Vitushkin]
\label{Vit}Let $q(u)$ be an analytic solution of the equation
\begin{equation}
\frac{q^{\prime \prime \prime }(u)}{3q^{\prime }(u)}-\frac{1}{2}\left( \frac{%
q^{\prime \prime }(u)}{q^{\prime }(u)}\right) ^{2}+\frac{2\alpha ^{2}}{3}%
\left( q^{\prime }(u)^{2}-1\right) =0. \tag*{(4.13)} \label{schwarz}
\end{equation}
Then the function $q(u)$ is given by the relation
\[
e^{2\alpha iq(u)}=e^{i\lambda }\frac{e^{2\alpha iu}+\kappa }{1+\overline{%
\kappa }e^{2\alpha iu}}
\]
where
\[
\lambda \in \Bbb{R},\quad \kappa \in \Bbb{C},\quad \left| \kappa
\right| \neq 1.
\]
Further, the function $q(u)$ satisfies the relation
\[
\left[ \frac{q(u_{2})-q(u_{1})}{\pi \alpha ^{-1}}\right] =\mathrm{sign\{}%
q^{\prime }(0)\mathrm{\}}\left[ \frac{u_{2}-u_{1}}{\pi \alpha ^{-1}}\right]
.
\]
\end{lemma}
\proof
Let $\frak{L}$ be the mapping in Lemma \ref{linear-linear}. Then the
normalization $\phi =(f^{*},g^{*})$ to Moser-Vitushkin normal form is given
by the relation
\[
\phi =\frak{L}\circ \varphi \circ \frak{L}^{-1}
\]
where $\varphi =(f,g)$ is a normalization to Chern-Moser normal form.
Explicitly, we obtain
\[
\left\{
\begin{array}{l}
f^{*}(z,w)=\frac{f\left( z(1-i\tan \alpha w),\alpha ^{-1}\tan \alpha
w\right) }{1-i\alpha g\left( z(1-i\tan \alpha w),\alpha ^{-1}\tan \alpha
w\right) } \\
g^{*}(z,w)=\alpha ^{-1}\tan ^{-1}\alpha g\left( z(1-i\tan \alpha w),\alpha
^{-1}\tan \alpha w\right)
\end{array}
\right. .
\]
Here we take
\begin{equation}
\varphi :\left\{
\begin{array}{c}
z^{*}=\frac{Cz}{1-rw} \\
w^{*}=\frac{\rho w}{1-rw}
\end{array}
\right. \nonumber
\end{equation}
so that
\begin{equation}
\phi :\left\{
\begin{array}{l}
z^{*}=\sqrt{\mathrm{sign\{}Q^{\prime }(0)\mathrm{\}}Q^{\prime }(\exp 2\alpha
iw)}Cz \\
w^{*}=\frac{1}{2\alpha i}\ln Q(\exp 2\alpha iw)
\end{array}
\right. \nonumber
\end{equation}
where
\[
Q(w)=e^{i\lambda }\frac{w+\kappa }{1+\overline{\kappa }w}
\]
and
\begin{gather*}
e^{i\lambda }=\frac{\alpha (1+\rho )+ir}{\alpha (1+\rho )-ir},\quad \kappa =%
\frac{\alpha (1-\rho )-ir}{\alpha (1+\rho )+ir}, \\
\rho =Q^{\prime }(0)\neq 0,\quad \rho ,r\in \Bbb{R}.
\end{gather*}
Then the solution $q(u)$ of the equation \ref{schwarz} is given by
\[
q(u)=\frac{1}{2\alpha i}\ln Q(\exp 2\alpha iu)
\]
so that
\[
e^{2\alpha iq(u)}=e^{i\lambda }\frac{e^{2\alpha iu}+\kappa }{1+\overline{%
\kappa }e^{2\alpha iu}}
\]
where
\[
\lambda \in \Bbb{R}\quad \text{and}\quad \left| \kappa \right| \neq 1.
\]
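Note that indeed $\left| \kappa \right| \neq 1:$ from the expressions above,
\[
\left| \kappa \right| ^{2}=\frac{\alpha ^{2}(1-\rho )^{2}+r^{2}}{\alpha
^{2}(1+\rho )^{2}+r^{2}},
\]
which equals $1$ if and only if $\rho =0,$ whereas $\rho =Q^{\prime
}(0)\neq 0.$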
Finally, note that the mapping
\[
w^{*}=e^{i\lambda }\frac{w+\kappa }{1+\overline{\kappa }w}
\]
is an automorphism of the circle $S^{1}.$ Thus we obtain
\[
\left[ \frac{q(u_{2})-q(u_{1})}{\pi \alpha ^{-1}}\right] =\mathrm{sign\{}%
q^{\prime }(0)\mathrm{\}}\left[ \frac{u_{2}-u_{1}}{\pi \alpha ^{-1}}\right]
.
\]
This completes the proof.\endproof
\begin{theorem}[Vitushkin]
\label{embedding}Let $M$ be a nondegenerate analytic real hypersurface and $%
\gamma $ be a chain passing through the point $p.$ Suppose that there are an
open neighborhood $U$ of $p$ and a normalizing mapping $\phi $ of $M$ to
Moser-Vitushkin normal form such that $\phi $ translates the point $p$ to
the origin and
\[
\phi (\gamma \cap U)\subset \left\{ z=v=0\right\} .
\]
Then the biholomorphic mapping $\phi $ of $M$ is biholomorphically continued
along $\gamma $ so that $\gamma $ is mapped by $\phi $ into
the $u$-curve in Moser-Vitushkin normal form.
\end{theorem}
\proof
Let $M^{\prime }$ be a real hypersurface in Moser-Vitushkin normal form such
that $M^{\prime }$ is maximally extended along the $u$-curve to the interval
$(u_{-},u_{+}),$ where
\[
-\infty \leq u_{-}<0<u_{+}\leq \infty .
\]
Let $\phi $ be a normalizing mapping of $M$ to $M^{\prime }$ such that $%
\gamma $ is mapped by $\phi $ into the $u$-curve and the point $p$ is mapped
by $\phi $ to the origin. Then we claim that the mapping $\phi $ is
biholomorphically continued along $\gamma $ so that
\begin{equation}
\phi (\gamma )\subset (u_{-},u_{+}). \tag*{(4.14)} \label{included}
\end{equation}
Suppose that the assertion is not true. Then there is a chain-segment $%
\lambda :[0,1]\rightarrow \gamma $ such that $\lambda (0)=p$ and $\phi $ is
analytically continued along every subpath $\lambda [0,\tau ],$ $\tau <1,$ but
not the whole path $\lambda [0,1].$ Let $q=\lambda (1).$ Since $\gamma $ is
a chain and $q$ is an interior point of $\gamma ,$ there are an open
neighborhood $V$ of the point $q$ and a normalizing mapping $h$ of $M$ to
Moser-Vitushkin normal form satisfying
\begin{eqnarray*}
h(q) &=&0 \\
h(V\cap \gamma ) &\subset &\left\{ z=v=0\right\} .
\end{eqnarray*}
We take a point $x$ on $\lambda [0,1]$ such that
\[
x\in \lambda [0,1)\cap U.
\]
Then, by Lemma \ref{MV}, there are an open neighborhood $W$ of the point $x$
and a biholomorphic mapping $k$ satisfying
\[
\phi =k\circ h\quad \text{on }W\cap V.
\]
By Lemma \ref{MV} and Lemma \ref{Vit}, the mapping $k$ is biholomorphically
extended to an open neighborhood of the whole $u$-curve. Thus passing to an
open subset of $U$ containing $\lambda [0,1]\cap U,$ if necessary, the
following mapping
\[
k\circ h\quad \text{on }V
\]
is an analytic continuation of $\phi $ over the point $\lambda (1),$
contradicting the choice of $\lambda .$ Then necessarily we have
\[
k\circ h(\lambda [0,1]\cap V)\subset (u_{-},u_{+}).
\]
This completes the proof.\endproof
\subsection{Extension of a chain}
\begin{lemma}
\label{conti-family}Let $M$ be a nondegenerate analytic real hypersurface
and $\gamma :[0,1]\rightarrow M$ be a continuous curve. Then there exist a
continuous family of real hypersurfaces $M_{\tau },$ $\tau \in [0,1],$ in
normal form and a continuous family of biholomorphic mappings $\phi _{\tau
}, $ $\tau \in [0,1],$ such that $\phi _{\tau }$ translates the point $%
\gamma (\tau )$ to the origin and transforms the germ $M$ at $\gamma (\tau )$
to the germ $M_{\tau }$ at the origin for each $\tau \in [0,1]$ and the
radius of convergence of the mapping $\phi _{\tau }$ at the origin depends
only on $M$ and the point $\gamma (\tau ).$
\end{lemma}
\proof
Without loss of generality, we may assume that the point $\gamma (0)$ is the
origin and the real hypersurface $M$ is defined near the origin by
\[
v=F\left( z,\overline{z},u\right) ,\quad \left. F\right| _{0}=\left.
F_{z}\right| _{0}=\left. F_{\overline{z}}\right| _{0}=0
\]
and the curve $\gamma [0,1]$ is given by some continuous functions $p(\tau )$
and $q(\tau )$ via the equation
\[
\gamma :\left\{
\begin{array}{l}
z=p(\tau ) \\
w=q(\tau )+iF\left( p(\tau ),\overline{p}(\tau ),q(\tau )\right)
\end{array}
\right. \quad \text{for }\tau \in [0,1]
\]
where
\[
\overline{q(\tau )}=q(\tau ).
\]
Let $\varphi _{\tau }$ be a biholomorphic mapping defined by
\[
\varphi _{\tau }:\left\{
\begin{array}{l}
z^{*}=z-p(\tau ) \\
w^{*}=w-q(\tau )-2i\sum_{\alpha =1}^{n}z^{\alpha }\left( \frac{\partial F}{%
\partial z^{\alpha }}\right) \left( p(\tau ),\overline{p}(\tau ),q(\tau
)\right)
\end{array}
\right. .
\]
Then we obtain the real hypersurfaces $\varphi _{\tau }\left( M\right) ,\tau
\in [0,1],$ defined at the origin by the equation
\[
v=F^{\tau }\left( z,\overline{z},u\right) ,\quad \left. F^{\tau }\right|
_{0}=\left. F_{z}^{\tau }\right| _{0}=\left. F_{\overline{z}}^{\tau }\right|
_{0}=0
\]
where
\begin{eqnarray*}
F^{\tau }\left( z,\overline{z},u\right) &=&F\left( z+p(\tau ),\overline{z}+%
\overline{p}(\tau ),u+q^{*}(\tau )\right) \\
&&-F\left( p(\tau ),\overline{p}(\tau ),q(\tau )\right) \\
&&-\sum_{\alpha =1}^{n}z^{\alpha }\left( \frac{\partial F}{\partial
z^{\alpha }}\right) \left( p(\tau ),\overline{p}(\tau ),q(\tau )\right) \\
&&-\sum_{\beta =1}^{n}\overline{z}^{\beta }\left( \frac{\partial F}{\partial
\overline{z}^{\beta }}\right) \left( p(\tau ),\overline{p}(\tau ),q(\tau
)\right)
\end{eqnarray*}
and
\begin{eqnarray*}
q^{*}(\tau ) &=&q(\tau )+i\sum_{\alpha =1}^{n}z^{\alpha }\left( \frac{%
\partial F}{\partial z^{\alpha }}\right) \left( p(\tau ),\overline{p}(\tau
),q(\tau )\right) \\
&&-i\sum_{\beta =1}^{n}\overline{z}^{\beta }\left( \frac{\partial F}{%
\partial \overline{z}^{\beta }}\right) \left( p(\tau ),\overline{p}(\tau
),q(\tau )\right) .
\end{eqnarray*}
Let $\psi _{\tau }=\left( f^{\tau },g^{\tau }\right) $ be a normalization of
the germs $\varphi _{\tau }\left( M\right) ,\tau \in [0,1],$ with identity
initial value such that the real hypersurface $\psi _{\tau }\circ \varphi
_{\tau }\left( M\right) $ is defined by the equation
\[
v=\langle z,z\rangle +\sum_{k=4}^{\infty }F_{k}^{*\tau }(z,\bar{z},u).
\]
By Theorem \ref{main}, the functions
\[
\left\{
\begin{array}{l}
\tau \longmapsto \left( f^{\tau },g^{\tau }\right) \\
\tau \longmapsto \sum_{k=4}^{\infty }F_{k}^{*\tau }(z,\bar{z},u)
\end{array}
\right.
\]
are continuous. Then the mappings $\phi _{\tau }=\psi _{\tau }\circ \varphi
_{\tau }$ and the real hypersurfaces $M_{\tau }=\psi _{\tau }\circ \varphi
_{\tau }\left( M\right) $ for each $\tau \in [0,1]$ satisfy all the required
conditions. This completes the proof.\endproof
\begin{lemma}
\label{u-estimate}Let $M$ be a nondegenerate analytic real hypersurface and $%
\gamma :[0,1]\rightarrow M$ be a curve such that $\gamma [0,\tau ]$ is a
chain-segment for each $\tau <1.$ Let $U$ be an open set satisfying $\gamma
[0,1)\subset U$ and $\phi $ be a normalization of $M$ on $U$ to
Moser-Vitushkin normal form. Suppose that there is a chain-segment $\lambda $
on $\phi \left( M\right) $ in the $u$-curve satisfying
\[
\phi (\gamma [0,1))\subset \lambda .
\]
Then
\[
\sup_{0\leq \tau <1}\left\{
\begin{array}{ll}
\frac{\left\| \left( \left. \frac{\partial f}{\partial z}\right| _{\phi
\circ \gamma (\tau )}\right) \right\| }{\left( \left| \left. \frac{\partial g%
}{\partial w}\right| _{\phi \circ \gamma (\tau )}\right| \right) ^{\frac{1}{2%
}}},\quad & \frac{\left\| \left( \left. \frac{\partial f}{\partial z}\right|
_{\phi \circ \gamma (\tau )}\right) ^{-1}\right\| }{\left( \left| \left.
\frac{\partial g}{\partial w}\right| _{\phi \circ \gamma (\tau )}\right|
\right) ^{-\frac{1}{2}}}
\end{array}
\right\} <\infty
\]
where
\[
\phi ^{-1}=\left( f,g\right) .
\]
\end{lemma}
\proof
Suppose that the real hypersurface $M$ is defined on an open neighborhood $U$
of the origin by the equation
\[
v=F\left( z,\overline{z},u\right) ,\quad \left. F\right| _{0}=\left.
dF\right| _{0}=0
\]
and the curve $\gamma [0,1]\subset M\cap U$ is passing through the origin.
Then there is a biholomorphic mapping
\[
\phi :\left\{
\begin{array}{l}
z=z^{*}+D\left( z^{*},w^{*}\right) \\
w=w^{*}+g\left( z^{*},w^{*}\right)
\end{array}
\right.
\]
where
\[
D_{z}\left( 0,u\right) =0,\quad \Re g(0,u)=0
\]
such that the mapping $\phi $ straightens $\gamma [0,1]$ into the $u$-curve
and transforms $M$ to a real hypersurface $\phi \left( M\right) $ defined by
\[
v=F_{11}^{*}\left( z,\overline{z},u\right) +\sum_{s,t\geq 2}F_{st}^{*}\left(
z,\overline{z},u\right)
\]
where
\[
\left( \mathrm{tr}\right) ^{2}F_{23}^{*}=0.
\]
Note that $F_{11}^{*}\left( z,\overline{z},u_{\tau }\right) $ is the Levi
form of $M$ at the point $\gamma \left( \tau \right) \in U\cap M$ for $\tau
\in [0,1],$ where
\[
\left( 0,u_{\tau }\right) =\phi \circ \gamma \left( \tau \right) .
\]
Since $\lambda $ is a chain-segment, $F_{11}^{*}\left( z,\overline{z}%
,u_{\tau }\right) $ is finite for all $\tau \in [0,1].$ Thus we can take
a matrix $E_{1}\left( u\right) $ and a real number $c>0$ such that
\[
F_{11}^{*}\left( z,\overline{z},u\right) =\langle E_{1}\left( u\right)
z,E_{1}\left( u\right) z\rangle
\]
and
\[
\sup_{\tau \in [0,1]}\left\{ \left\| E_{1}\left( u_{\tau }\right) \right\|
,\quad \left\| E_{1}\left( u_{\tau }\right) ^{-1}\right\| \right\} \leq
c<\infty .
\]
We shall show
\[
\sup_{0\leq \tau <1}\left\{ \left\| E\left( u_{\tau }\right) \right\| ,\text{
}\left\| E\left( u_{\tau }\right) ^{-1}\right\| \right\} <\infty
\]
where $\phi ^{-1}=\left( f,g\right) $ and
\[
E(u)=\frac{\left( \left. \frac{\partial f}{\partial z}\right|
_{z=v=0}\right) }{\sqrt{\left| \left. \frac{\partial g}{\partial w}\right|
_{z=v=0}\right| }}.
\]
Here the function $E(u)$ satisfies the following ordinary differential
equation (cf. \cite{Pa1})
\begin{eqnarray*}
&&F_{11}^{*}\left( E\left( u\right) ^{-1}E^{\prime }\left( u\right) z,%
\overline{z},u\right) \\
&=&-\frac{2i}{n+1}\cdot \mathrm{tr}F_{22}^{*}\left( z,\overline{z},u\right) +%
\frac{1}{2}\left( \frac{\partial F_{11}^{*}}{\partial u}\right) \left( z,%
\overline{z},u\right) \\
&&+\frac{i}{(n+1)(n+2)}\cdot \left( \mathrm{tr}\right) ^{2}F_{22}^{*}\times
F_{11}^{*}\left( z,\overline{z},u\right) .
\end{eqnarray*}
We easily see that there is a real number $e>0$ such that
\[
\sup_{0\leq \tau <1}\left\| E(u_{\tau })^{-1}E^{\prime }(u_{\tau })\right\|
\leq e<\infty .
\]
Notice that
\[
E(u)^{-1}E^{\prime }(u)=-\left( E(u)^{-1}\right) ^{\prime }E(u).
\]
Because $\lambda $ is a chain-segment, we have
\[
\int_{\phi \circ \gamma [0,1]}du\leq \int_{\lambda }du<\infty .
\]
Hence we obtain the following estimates
\begin{eqnarray*}
\left\| E(u_{\tau })\right\| &\leq &\left\| E(u_{0})\right\| \exp \int_{\phi
\circ \gamma [0,1]}edu<\infty \\
\left\| E(u_{\tau })^{-1}\right\| &\leq &\left\| E(u_{0})^{-1}\right\| \exp
\int_{\phi \circ \gamma [0,1]}edu<\infty
\end{eqnarray*}
where
\[
\left( 0,u_{0}\right) =\phi \circ \gamma \left( 0\right) .
\]
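These exponential bounds follow by a Gronwall-type argument: the estimate $%
\left\| E(u)^{-1}E^{\prime }(u)\right\| \leq e$ gives
\[
\left\| E^{\prime }(u)\right\| \leq e\left\| E(u)\right\| \quad \text{and}%
\quad \left\| \left( E(u)^{-1}\right) ^{\prime }\right\| =\left\|
E(u)^{-1}E^{\prime }(u)E(u)^{-1}\right\| \leq e\left\| E(u)^{-1}\right\| ,
\]
so that both norms grow along $\phi \circ \gamma $ at most by the factor $%
\exp \int_{\phi \circ \gamma [0,1]}edu,$ which is finite since $\lambda $ is
a chain-segment.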
So the condition $F_{11}^{*}\left( z,\overline{z},u\right) =\langle E\left(
u\right) z,E\left( u\right) z\rangle $ determines the matrix $E\left(
u\right) $ such that
\[
E\left( u\right) =U\left( u\right) E_{1}\left( u\right)
\]
where $U\left( u\right) $ is any matrix satisfying
\[
\langle U\left( u\right) z,U\left( u\right) z\rangle =\langle z,z\rangle .
\]
Hence we have the following relation
\begin{eqnarray*}
c^{-1}\left\| E\left( u_{\tau }\right) \right\| &\leq &\left\| U\left(
u_{\tau }\right) \right\| \leq c\left\| E\left( u_{\tau }\right) \right\| \\
c^{-1}\left\| E\left( u_{\tau }\right) ^{-1}\right\| &\leq &\left\| U\left(
u_{\tau }\right) ^{-1}\right\| \leq c\left\| E\left( u_{\tau }\right)
^{-1}\right\|
\end{eqnarray*}
for all $\tau \in [0,1].$ Therefore, we have also shown
\[
\sup_{\tau \in [0,1)}\left\{ \left\| U\left( u_{\tau }\right) \right\|
,\quad \left\| U\left( u_{\tau }\right) ^{-1}\right\| \right\} <\infty .
\]
This completes the proof.\endproof
\begin{lemma}
\label{lowest}Let $M$ be an analytic real hypersurface in normal form
defined by
\[
v=\langle z,z\rangle +F^{*}\left( z,\overline{z},u\right)
\]
where
\[
F^{*}\left( z,\overline{z},u\right) =\sum_{k=4}^{\infty }F_{k}^{*}\left( z,%
\overline{z},u\right) .
\]
Suppose that $M$ is not a real hyperquadric. Then there is an integer $l\geq
4$ such that
\begin{align}
F_{k}^{*}\left( z,\overline{z},u\right) =0\quad \text{for all }k\leq l-1
\nonumber \\
F_{l}^{*}\left( z,\overline{z},u\right) \neq 0 \nonumber \label{integer 1}
\end{align}
for any value of $U,a,\rho ,r$.
\end{lemma}
In the paper \cite{Pa2}, we have given the proof of Lemma \ref{lowest}.
\begin{theorem}
\label{core2}Let $M,M^{\prime }$ be nonspherical analytic real hypersurfaces
and $\gamma :[0,1]\rightarrow M$ be a curve such that $\gamma [0,\tau ]$ is
a chain-segment for each $\tau <1.$ Let $U$ be an open neighborhood of $%
\gamma [0,1)$ and $\phi $ be a biholomorphic mapping on $U$ such that $\phi $
transforms $M$ to a real hypersurface $M^{\prime }$ satisfying
\[
\phi (U\cap M)\subset M^{\prime }
\]
and there is a chain-segment $\lambda :[0,1]\rightarrow M^{\prime }$
satisfying
\[
\phi (\gamma [0,1))\subset \lambda .
\]
Suppose that there is a real number $c\geq 1$ such that
\begin{equation}
\sup_{0\leq \tau \leq 1}\sup_{\left( U_{\tau },0,\rho _{\tau },r_{\tau
}\right) \in H_{\lambda (\tau )}(M^{\prime })}\left\| U_{\tau }\right\| \leq
c<\infty \tag*{(4.15)} \label{U-auto}
\end{equation}
where $H_{\lambda (\tau )}(M^{\prime })$ is the local automorphism group of $%
M^{\prime }$ at the point $\lambda (\tau )$ in a normal coordinate with the
chain-segment $\lambda $ on the $u$-curve. Then there exists a chain $\Gamma
$ on $M$ satisfying
\[
\gamma [0,1]\subset \Gamma ,
\]
i.e., $\gamma [0,1]$ is a chain-segment.
\end{theorem}
\proof
Without loss of generality, we may assume that the real hypersurface $%
M^{\prime }$ is in Moser-Vitushkin normal form with the chain-segment $%
\lambda $ on the $u$-curve so that $M^{\prime }$ is defined by the equation
\[
v=\frac{1}{4\alpha }\ln \frac{1}{1-4\alpha \langle z,z\rangle }%
+\sum_{k=4}^{\infty }G_{k}\left( z,\overline{z},u\right) .
\]
Here we assume $\alpha \neq 0$ and later we shall take a sufficiently small
value for $\alpha .$
There exists a continuous function $\tau \mapsto u_{\tau }$ for $\tau \in
[0,1]$ such that
\[
(0,u_{\tau })=\phi (\gamma (\tau ))\subset \lambda \quad \text{for }\tau \in
[0,1).
\]
Since the chain-segment $\lambda $ is compact, there is a real number $u_{1}$
such that
\[
(0,u_{1})=\lim_{\tau \rightarrow 1}\phi (\gamma (\tau ))\in \lambda .
\]
Then we obtain a continuous family of analytic real hypersurfaces $M_{\tau
}^{\prime },$ $\tau \in [0,1],$ defined near the origin by
\[
v=\frac{1}{4\alpha }\ln \frac{1}{1-4\alpha \langle z,z\rangle }+G^{\tau
}\left( z,\overline{z},u\right)
\]
where, for $\tau \in [0,1],$%
\begin{eqnarray*}
G^{\tau }\left( z,\overline{z},u\right) &=&\sum_{k=4}^{\infty }G_{k}\left( z,%
\overline{z},u+u_{\tau }\right) \\
&=&\sum_{k=4}^{\infty }G_{k}^{\tau }\left( z,\overline{z},u\right) .
\end{eqnarray*}
By Lemma \ref{conti-family}, we obtain a continuous family of analytic real
hypersurfaces $M_{\tau },$ $\tau \in [0,1],$ in normal form and a continuous
family of biholomorphic mappings $\varphi _{\tau }$ for the real
hypersurface $M$ and the curve $\gamma :[0,1]\rightarrow M.$ Then, for each $%
\tau \in [0,1),$ there exist an open neighborhood $V_{\tau }$ of the origin
and a chain $\gamma _{\tau }$ on $M_{\tau }$ passing through the origin such
that
\[
\varphi _{\tau }^{-1}\left( V_{\tau }\cap \gamma _{\tau }\right) \subset
\gamma [0,1).
\]
Suppose that $M_{\tau },$ $\tau \in [0,1],$ is defined in normal form by
\[
v=\langle z,z\rangle +\sum_{k=4}^{\infty }F_{k}^{\tau }\left( z,\overline{z}%
,u\right) .
\]
By Lemma \ref{lowest}, there is a well-defined integer $m_{\tau },$ $\tau
\in [0,1],$ such that
\[
\left\{
\begin{array}{l}
F_{k}^{\tau }\left( z,\overline{z},u\right) =0\quad \text{for }k\leq m_{\tau
}-1 \\
F_{m_{\tau }}^{\tau }\left( z,\overline{z},u\right) \neq 0
\end{array}
\right.
\]
because $M_{\tau }$ is nonspherical.
Let $\phi _{\tau }$ be a normalization of $M_{\tau }$ for each $\tau \in
[0,1)$ to Moser-Vitushkin normal form such that the initial value $\sigma $
of the mapping $\phi _{\tau }$ is given by
\[
\sigma =\left( id_{n\times n},a_{\tau },1,0\right)
\]
where $a_{\tau }$ is determined by the condition
\[
\phi _{\tau }\left( \gamma _{\tau }\cap M_{\tau }\right) \subset \left\{
z=v=0\right\} .
\]
Suppose that $\phi _{\tau }\left( M_{\tau }\right) ,\tau \in (0,1),$ is
defined near the origin by the equation
\[
v=\frac{1}{4\alpha }\ln \frac{1}{1-4\alpha \langle z,z\rangle }+F^{*\tau
}\left( z,\overline{z},u;a_{\tau }\right)
\]
where
\[
F^{*\tau }\left( z,\overline{z},u;a_{\tau }\right) =\sum_{k=4}^{\infty
}F_{k}^{*\tau }\left( z,\overline{z},u;a_{\tau }\right) .
\]
Notice that the function $\tau \longmapsto a_{\tau }$ is continuous on $%
[0,1) $ and, by Theorem \ref{main}, the function
\[
\tau \longmapsto F^{*\tau }\left( z,\overline{z},u;a\right)
\]
is continuous on $[0,1)$ for a fixed $a\in \Bbb{C}^{n}$.
Note that the two real hypersurfaces $\phi _{\tau }\left( M_{\tau }\right) $
and $M_{\tau }^{\prime }$ are in Moser-Vitushkin normal form and
biholomorphic to each other at the origin for all $\tau \in [0,1)$ by a
biholomorphic mapping leaving the $u$-curve invariant. Thus there is a
mapping
\[
\psi _{\tau }:\left\{
\begin{array}{l}
z^{*}=\sqrt{q_{\tau }^{\prime }(w)}U_{\tau }z\exp \alpha i(q_{\tau }(w)-w)
\\
w^{*}=q_{\tau }(w)
\end{array}
\right.
\]
such that
\[
\phi _{\tau }\left( M_{\tau }\right) =\psi _{\tau }\left( M_{\tau }^{\prime
}\right) \quad \text{for all }\tau \in [0,1).
\]
Then the function $q_{\tau }(u)$ is a solution of the ordinary differential
equation
\[
\frac{q^{\prime \prime \prime }}{3q^{\prime }}-\frac{1}{2}\left( \frac{%
q^{\prime \prime }}{q^{\prime }}\right) ^{2}+\frac{2\alpha ^{2}}{3}\left(
q^{\prime 2}-1\right) =0
\]
with the initial conditions
\[
\Re q(0)=0,\quad \Re q^{\prime }(0)=\rho _{\tau }\in \Bbb{R}^{+},\quad \Re
q^{\prime \prime }(0)=2\rho _{\tau }r_{\tau }\in \Bbb{R}.
\]
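As a quick consistency check (our own remark, not part of the original argument), note that the identity map is always a solution of this equation, corresponding to the parameters $\rho _{\tau }=1$ and $r_{\tau }=0$:

```latex
q(w)=w \;\Longrightarrow\; q'\equiv 1,\ q''\equiv q'''\equiv 0,
\qquad\text{so}\qquad
\frac{q'''}{3q'}-\frac{1}{2}\left( \frac{q''}{q'}\right) ^{2}
+\frac{2\alpha ^{2}}{3}\left( q'^{2}-1\right)
=0-0+\frac{2\alpha ^{2}}{3}\,(1-1)=0,
```

with $\Re q(0)=0,$ $\Re q'(0)=1,$ $\Re q''(0)=0,$ as expected for the trivial reparametrization.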
Suppose that $\psi _{\tau }\left( M_{\tau }^{\prime }\right) ,\tau \in
[0,1), $ is defined by the equation
\[
v=\frac{1}{4\alpha }\ln \frac{1}{1-4\alpha \langle z,z\rangle }+G^{*\tau
}\left( z,\overline{z},u\right)
\]
where
\[
G^{*\tau }\left( z,\overline{z},u\right) =\sum_{k=4}^{\infty }G_{k}^{*\tau
}\left( z,\overline{z},u;U_{\tau },\rho _{\tau },r_{\tau }\right) .
\]
Since $\phi _{\tau }\left( M_{\tau }\right) =\psi _{\tau }\left( M_{\tau
}^{\prime }\right) $ for $\tau \in [0,1),$ we obtain
\begin{equation}
F_{k}^{*\tau }\left( z,\overline{z},u;a_{\tau }\right) =G_{k}^{*\tau }\left(
z,\overline{z},u;U_{\tau },\rho _{\tau },r_{\tau }\right) \quad \text{for }%
k\geq 4. \tag*{(4.16)} \label{first}
\end{equation}
We take a sequence $\tau _{j},j\in \Bbb{N},$ such that
\[
\tau _{j}\in [0,1)\quad \text{and}\quad \tau _{j}\nearrow 1.
\]
Then there exist a matrix $U_{\tau _{j}}$ and a function $q_{\tau _{j}}(u)$
such that
\[
\psi _{\tau _{j}}:\left\{
\begin{array}{l}
z^{*}=\sqrt{q_{\tau _{j}}^{\prime }(w)}U_{\tau _{j}}z\exp \alpha i(q_{\tau
_{j}}(w)-w) \\
w^{*}=q_{\tau _{j}}(w)
\end{array}
\right. .
\]
Lemma \ref{u-estimate} and the condition \ref{U-auto} allow us to assume
\[
\sup_{j}\left\| U_{\tau _{j}}\right\| <\infty
\]
so that, passing to a subsequence, if necessary, there exists a matrix $U$
satisfying
\[
U=\lim_{j\rightarrow \infty }U_{\tau _{j}}.
\]
By Lemma \ref{Vit}, all the functions $q_{\tau _{j}}(u)$ satisfy the
following estimate
\begin{eqnarray*}
\left| q_{\tau _{j}}(u)\right| &=&\left| q_{\tau _{j}}(u)-q_{\tau
_{j}}(0)\right| \\
&\leq &\pi \left| \alpha \right| ^{-1}\left\{ \left[ \frac{\left| q_{\tau
_{j}}(u)-q_{\tau _{j}}(0)\right| }{\pi \left| \alpha \right| ^{-1}}\right]
+1\right\} \\
&\leq &\pi \left| \alpha \right| ^{-1}\left\{ \left[ \frac{\left| u\right| }{%
\pi \left| \alpha \right| ^{-1}}\right] +2\right\} \\
&\leq &\left| u\right| +2\pi \left| \alpha \right| ^{-1}.
\end{eqnarray*}
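The final step of this chain of estimates is elementary, since the integer part satisfies $[x]\leq x$; the following small numerical check (the helper names `bracket`, `lhs`, `rhs` are ours, introduced only for this illustration) confirms the inequality $\pi \left| \alpha \right| ^{-1}\left\{ \left[ \left| u\right| /\left( \pi \left| \alpha \right| ^{-1}\right) \right] +2\right\} \leq \left| u\right| +2\pi \left| \alpha \right| ^{-1}$ on sample values:

```python
import math

def bracket(x):
    # integer part [x], i.e. the floor function used in the estimate
    return math.floor(x)

def lhs(u, alpha):
    # pi*|alpha|^{-1} * ( [ |u| / (pi*|alpha|^{-1}) ] + 2 )
    p = math.pi / abs(alpha)
    return p * (bracket(abs(u) / p) + 2)

def rhs(u, alpha):
    # |u| + 2*pi*|alpha|^{-1}
    return abs(u) + 2.0 * math.pi / abs(alpha)

# since [x] <= x, the left-hand side never exceeds the right-hand side
for alpha in (0.1, 0.5, 1.0, 3.0):
    for u in (0.0, 0.3, 1.0, 7.5, 100.0):
        assert lhs(u, alpha) <= rhs(u, alpha) + 1e-9
```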
Because $\lambda $ is a chain-segment on $M^{\prime },$ the functions $%
q_{\tau _{j}}(u)$ are bounded on the range in which we are interested.
Further, notice that
\[
q_{\tau _{j}}\left( \pi \alpha ^{-1}\right) =\pm \pi \alpha ^{-1}\quad \text{%
for all }j.
\]
Then, passing to a subsequence, if necessary, Montel's theorem and Hurwitz's
theorem allow us to obtain a function $q(u)$ such that
\[
q(u)=\lim_{j\rightarrow \infty }q_{\tau _{j}}(u)
\]
and
\[
q\left( 0\right) =0\quad \text{and}\quad q^{\prime }\left( 0\right) \neq 0.
\]
Hence, passing to a subsequence, if necessary, there is a real number $e>0$
such that
\[
\sup_{j}\left\{ \left\| U_{\tau _{j}}\right\| ,\left\| U_{\tau
_{j}}^{-1}\right\| ,\left| \rho _{\tau _{j}}\right| ,\left| \rho _{\tau
_{j}}^{-1}\right| ,\left| r_{\tau _{j}}\right| \right\} \leq e<\infty .
\]
By the definition of the integer $m_{1},$ we have
\[
\left\{
\begin{array}{l}
\lim_{\tau \rightarrow 1}F_{k}^{\tau }\left( z,\overline{z},u\right)
=0,\quad k=4,\cdots ,m_{1}-1, \\
\lim_{\tau \rightarrow 1}F_{m_{1}}^{\tau }\left( z,\overline{z},u\right)
\neq 0.
\end{array}
\right.
\]
Then the function $F_{m_{1}+1}^{*\tau }\left( z,\overline{z},u;a_{\tau
}\right) $ may be decomposed into three parts as follows
\begin{equation}
F_{m_{1}+1}^{*\tau }\left( z,\overline{z},u;a_{\tau }\right)
=F_{m_{1}+1}^{\tau }\left( z,\overline{z},u\right) +H_{m_{1}+1}^{\tau
}\left( z,\overline{z},u;a_{\tau }\right) +L_{m_{1}+1}^{\tau }\left( z,%
\overline{z},u;a_{\tau }\right) \tag*{(4.17)} \label{second}
\end{equation}
where
\begin{enumerate}
\item[(1)] the function $H_{m_{1}+1}^{\tau }\left( z,\overline{z}%
,u;a\right) $ is determined by the function $F_{m_{1}}^{\tau }\left( z,%
\overline{z},u\right) $,
\item[(2)] the function $L_{m_{1}+1}^{\tau }\left( z,\overline{z}%
,u;a\right) $ is determined by the functions $F_{k}^{\tau }\left( z,%
\overline{z},u\right) ,$ $k\leq m_{1}-1,$
\item[(3)] the function $H_{m_{1}+1}^{\tau }\left( z,\overline{z}%
,u;a\right) $ is linear with respect to $a$ and the mapping
\[
a\longmapsto \lim_{\alpha \rightarrow 0}\lim_{\tau \rightarrow
1}H_{m_{1}+1}^{\tau }\left( z,\overline{z},u;a\right)
\]
is injective (cf. Lemma \ref{Theo1}), where $\alpha $ is the parameter of
Moser-Vitushkin normal form,
\item[(4)] the function $L_{m_{1}+1}^{\tau }\left( z,\overline{z}%
,u;a\right) $ depends polynomially on the parameter $a$ and
\[
\lim_{\alpha \rightarrow 0}\lim_{\tau \rightarrow 1}L_{m_{1}+1}^{\tau
}\left( z,\overline{z},u;a\right) =0\quad \text{for any fixed }a\in \Bbb{C}%
^{n}.
\]
\end{enumerate}
\noindent Notice that there is a real number $\varepsilon _{1}>0$ such that
the mapping
\[
a\longmapsto H_{m_{1}+1}^{\tau }\left( z,\overline{z},u;a\right)
\]
is injective for all $\left| \alpha \right| \leq \varepsilon _{1}$ and all $%
\tau \geq 1-\varepsilon _{1}$. We take a value for the parameter $\alpha $
such that $0<\left| \alpha \right| \leq \varepsilon _{1}.$
Then the equalities \ref{first} and \ref{second} yield
\begin{eqnarray*}
&&H_{m_{1}+1}^{\tau _{j}}\left( z,\overline{z},u;a_{\tau _{j}}\right)
+L_{m_{1}+1}^{\tau _{j}}\left( z,\overline{z},u;a_{\tau _{j}}\right) \\
&=&G_{m_{1}+1}^{*\tau _{j}}\left( z,\overline{z},u;U_{\tau _{j}},\rho _{\tau
_{j}},r_{\tau _{j}}\right) -F_{m_{1}+1}^{\tau _{j}}\left( z,\overline{z}%
,u\right) .
\end{eqnarray*}
By taking a smaller $\varepsilon _{1}>0,$ if necessary, the injectivity of the
mapping $a\longmapsto H_{m_{1}+1}^{\tau }\left( z,\overline{z},u;a\right) $
allows us to estimate $a_{\tau _{j}}$: there is a real
number $c_{1}>0$ such that
\[
\left| a_{\tau _{j}}\right| \leq c_{1}<\infty \quad \text{for all }\tau _{j},%
\text{ }j\in \Bbb{N}.
\]
Notice that the function $\tau \longmapsto a_{\tau }$ is continuous. Thus
there exists a real number $c>0$ such that
\[
\left| a_{\tau }\right| \leq c<\infty \quad \text{for all }\tau \in [0,1).
\]
Therefore, there exists a sufficiently small real number $\delta >0$
independent of $\tau \in [0,1)$ such that the real hypersurface $M_{\tau }$
and the chain $\gamma _{\tau }\subset M_{\tau }$ extend to
\begin{eqnarray*}
M_{\tau } &=&\phi _{\tau }^{-1}\left( \psi _{\tau }\left( M_{\tau }^{\prime
}\right) \cap B(0;\delta )\right) \\
\gamma _{\tau } &=&\phi _{\tau }^{-1}\left( \left\{ z=v=0\right\} \cap
B(0;\delta )\right)
\end{eqnarray*}
and the mappings $\varphi _{\tau }^{-1}$ extend biholomorphically on $%
B(0;\delta ).$
Then there is a sufficiently small real number $\varepsilon >0$ such that
\[
\gamma (1)\in \varphi _{1-\varepsilon }^{-1}\left( B(0;\delta )\right)
\]
so that the following curve $\Gamma $ defined by
\[
\Gamma =\gamma [0,1)\cup \varphi _{1-\varepsilon }^{-1}\left( \gamma
_{1-\varepsilon }\right)
\]
is a chain on $M$ such that
\[
\gamma [0,1]\subset \Gamma .
\]
This completes the proof.\endproof
Note that the condition \ref{U-auto} is trivially satisfied if the Levi form
on the real hypersurface $M$ is definite.
\begin{theorem}
Let $M$ be a strongly pseudoconvex analytic real hypersurface and $\gamma $
be a chain on $M.$ Let $\Gamma :(0,1)\rightarrow M$ be the maximally
extended connected open analytic curve on $M$ containing the chain $\gamma $
and $\Gamma _{0}$ be a maximal subarc of $\Gamma $ such that $\Gamma _{0}$
contains the chain $\gamma $ and $\Gamma _{0}$ is transversal to the complex
tangent hyperplane of $M$ at each point of $\Gamma _{0}.$ Then $\Gamma _{0}$
is a chain, i.e., for each point $p\in \Gamma _{0},$ there exist an open
neighborhood $U$ of the point $p$ and a biholomorphic mapping $\phi $ on $U$
such that
\[
\phi (U\cap \Gamma )\subset \left\{ z=v=0\right\}
\]
and the mapping $\phi $ translates the point $p$ to the origin and
transforms the germ $M$ at the point $p$ to normal form.
\end{theorem}
\proof
Let $M^{\prime }$ be a real hypersurface in Moser-Vitushkin normal form such
that $M^{\prime }$ is maximally extended along the $u$-curve to the interval
$(u_{-},u_{+}),$ where
\[
-\infty \leq u_{-}<0<u_{+}\leq \infty .
\]
Suppose that there is a normalizing mapping $\phi $ of $M$ to $M^{\prime }$
such that $\gamma $ is mapped by $\phi $ into the $u$-curve. Then, by
Theorem \ref{embedding}, the mapping $\phi $ is biholomorphically continued
along $\gamma $ so that
\[
\phi (\gamma )\subset (u_{-},u_{+}).
\]
By Theorem \ref{core2}, the chain $\gamma $ can be extended on $M,$ say, to
a chain $\Gamma \subset M,$ whenever an end limit of $\gamma $ exists on $M$
and the corresponding end limit of $\phi (\gamma )$ is an interior point of $%
(u_{-},u_{+}).$ By Theorem \ref{embedding}, the mapping $\phi $ is
biholomorphically continued along $\Gamma $ so that
\[
\phi (\Gamma )\subset (u_{-},u_{+}).
\]
Hence there exists a unique chain $\Gamma :(0,1)\rightarrow M$ maximally
extended from the chain $\gamma $ such that
\[
\lim_{\tau \rightarrow 0}\Gamma (\tau )\notin M\quad \text{or}\quad
\lim_{\tau \rightarrow 0}\phi (\Gamma (\tau ))\in \{u_{-},u_{+}\}
\]
and
\[
\lim_{\tau \rightarrow 1}\Gamma (\tau )\notin M\quad \text{or}\quad
\lim_{\tau \rightarrow 1}\phi (\Gamma (\tau ))\in \{u_{-},u_{+}\}.
\]
Suppose that
\[
\lim_{\tau \rightarrow 0}\Gamma (\tau )\in M\quad \text{and}\quad \lim_{\tau
\rightarrow 0}\phi (\Gamma (\tau ))\in \{u_{-},u_{+}\}.
\]
We claim that the analytic curve $\Gamma :(0,1)\rightarrow M$ is not
analytically continued over the limit
\[
q=\lim_{\tau \rightarrow 0}\Gamma (\tau )\in M
\]
transversely to the complex tangent hyperplane of $M$ at the point $q\in M.$
Otherwise, there exist an open neighborhood $U$ of the point $q$ and an
analytic curve $\lambda :(-1,1)\rightarrow M$ such that
\[
U\cap \lambda (0,1)\subset \Gamma (0,1)\quad \text{and}\quad \lambda (0)=q
\]
and $\lambda $ is transversal to the complex tangent hyperplanes of $M$ at
each point of $\lambda .$ Then there exists a sufficiently small real number $%
\varepsilon >0$ such that the analytic curve
\[
\lambda (-\varepsilon ,\varepsilon )\cup \Gamma (0,1)
\]
is a chain as well. Then $M^{\prime }$ is analytically extended along the $%
u$-curve over the point $u_{-}$ or $u_{+}.$ This contradicts the
definition of the points $u_{-}$ and $u_{+}$.
Therefore, the chain $\Gamma $ is the maximally extended connected open
analytic curve containing the chain $\gamma $ which is transversal to the
complex tangent hyperplanes of $M$ at each point of $\Gamma .$ This
completes the proof.\endproof
\section{Analytic continuation of a biholomorphic mapping}
\subsection{On a spherical real hypersurface}
\begin{theorem}[Pinchuk, Chern-Ji]
\label{conti}Let $M$ be a spherical analytic real hypersurfaces with
definite Levi form in a complex manifold, $U$ be a connected neighborhood of
a point $p\in M,$ and $\phi $ be a biholomorphic mapping such that $\phi
(U\cap M)\subset S^{2n+1}.$ Then the mapping $\phi $ continues
holomorphically along any path in $M$ as a locally biholomorphic mapping.
\end{theorem}
\proof
Suppose that the assertion is not true. Then there would exist a path $%
\gamma [0,1]$ such that the biholomorphic mapping $\phi $ at the point $%
p=\gamma (0)$ can be biholomorphically continued along every subpath $\gamma
[0,\tau ]$ with $\tau <1,$ but not along the whole path. We set $q=\gamma
(1).$ Since $M$ is spherical, every point of $M$ is umbilic. By Lemma \ref
{umbilic}, there is an open neighborhood $U_{q}$ of the point $q$ and a
biholomorphic mapping $h_{q}$ on $U_{q}$ such that
\[
h_{q}(U_{q}\cap M)\subset S^{2n+1}
\]
and we can take $\tau $ satisfying $\gamma (t)\in U_{q}\cap M$ for all $t\in
[\tau ,1]$. Then there are an open neighborhood $U$ of the point $\gamma
(\tau )$ and a unique automorphism $\varphi $ of $S^{2n+1}$ such that
\[
\phi =\varphi \circ h_{q}\quad \text{on }U\cap U_{q}.
\]
By a classical theorem of Poincar\'{e}, $\varphi $ is biholomorphic on an
open neighborhood of $S^{2n+1}.$ Thus passing to an open subset of $U_{q}$
containing $\gamma [0,1]\cap U_{q},$ if necessary, $\varphi \circ h_{q}$ is
an analytic continuation of $\phi $ on $U_{q}.$ This is a contradiction.
This completes the proof.\endproof
\begin{theorem}[Pinchuk]
\label{pcj}Let $D$ be a bounded strongly pseudoconvex domain in $\Bbb{C}%
^{n+1}$ with simply connected real-analytic boundary $\partial D$. Suppose
that $\partial D$ is a spherical analytic real hypersurface. Then there is a
biholomorphic mapping $\phi $ of $D$ onto $B^{n+1}.$
\end{theorem}
\proof
By Lemma \ref{umbilic}, $\partial D$ is locally biholomorphic to $S^{2n+1}.$
We take a point $p\in \partial D$ and an open neighborhood $U$ of $p$ such
that there is a biholomorphic mapping $\phi $ on $U$ satisfying $\phi (U\cap
\partial D)\subset S^{2n+1}.$ Then, by Theorem \ref{conti}, the mapping $%
\phi $ extends along any path on $\partial D$ as a local biholomorphic
mapping. Since $\partial D$ is simply connected, the monodromy theorem
yields a unique biholomorphic extension $\phi ,$ by keeping the same
notation, on an open neighborhood of $\partial D.$
Note that $\phi :\partial D\rightarrow S^{2n+1}$ is an open mapping because $%
\phi $ is biholomorphic on an open neighborhood of $\partial D.$ Since $%
\partial D$ is compact, the mapping $\phi :\partial D\rightarrow S^{2n+1}$
is a covering map. Further, since $S^{2n+1}$ is simply connected, the
mapping $\phi :\partial D\rightarrow S^{2n+1}$ is a simple covering map so
that there exists a biholomorphic inverse $\phi ^{-1}:S^{2n+1}\rightarrow
\partial D.$ By Hartogs extension theorem, the mappings $\phi ,\phi ^{-1}$
extend to open neighborhoods respectively of $\overline{D}$ and $\overline{%
B^{n+1}}.$ Thus the mapping $\phi $ induces a biholomorphic mapping of $D$
onto $B^{n+1}.$ This completes the proof.\endproof
\begin{theorem}
Let $D$ be a simply connected open set in a complex manifold with compact
simply connected real-analytic boundary $\partial D$ and compact closure $%
\overline{D}$. Suppose that $\partial D$ is a spherical analytic real
hypersurface. Then there is a biholomorphic mapping $\phi $ of $D$ onto $%
B^{n+1}.$
\end{theorem}
\proof
By the same argument, there is a biholomorphic mapping $\phi $ on an open
neighborhood of the boundary $\partial D$ such that $\phi :\partial
D\rightarrow S^{2n+1}$ is a simple covering map. Thus there exists a
biholomorphic inverse $\phi ^{-1}:S^{2n+1}\rightarrow \partial D.$ By
Hartogs extension theorem, the mapping $\phi ^{-1}$ extends to the open ball
$B^{n+1}$ as a local biholomorphic mapping. Since $\overline{B^{n+1}}$ and $%
\overline{D}$ are compact, the mapping $\phi ^{-1}:B^{n+1}\rightarrow D$ is
a covering mapping. Since $D$ is simply connected, $\phi
^{-1}:B^{n+1}\rightarrow D$ is a simple covering map. Thus the mapping $\phi
$ induces a biholomorphic mapping of $D$ onto $B^{n+1}.$ This completes the
proof.\endproof
Let $Q$ be a real hyperquadric in $\Bbb{CP}^{n+1}$ which is defined in a
homogeneous coordinate
\[
\left( \eta ,\zeta ^{1},\cdots ,\zeta ^{n},\xi \right) \in \Bbb{C}^{n+2}
\]
by the equation
\[
\frac{1}{2i}\left( \xi \overline{\eta }-\eta \overline{\xi }\right) =\langle
\zeta ,\zeta \rangle
\]
where
\[
\langle \zeta ,\zeta \rangle \equiv \zeta ^{1}\overline{\zeta ^{1}}+\cdots
+\zeta ^{e}\overline{\zeta ^{e}}-\cdots -\zeta ^{n}\overline{\zeta ^{n}}.
\]
Then the real hyperquadric $Q$ is given in the inhomogeneous coordinate
chart $\left\{ \eta \neq 0\right\} \simeq \Bbb{C}^{n+1}$ by the equation
\[
\frac{1}{2i}\left( w-\overline{w}\right) =\langle z,z\rangle
\]
where
\[
z=\left( \frac{\zeta ^{1}}{\eta },\cdots ,\frac{\zeta ^{n}}{\eta }\right)
,\quad w=\frac{\xi }{\eta }.
\]
\begin{lemma}[Chern-Moser]
\label{extension}Let $Q$ be a real hyperquadric in $\Bbb{CP}^{n+1}$ and $U$
be an open neighborhood of a point $p\in Q.$ Suppose that there is a
biholomorphic mapping $\phi $ on $U$ such that $\phi (U\cap Q)\subset Q.$
Then the mapping $\phi $ extends to be an automorphism of $Q$ which is
biholomorphic on an open neighborhood of $Q.$
\end{lemma}
\proof
By composing a linear mapping of $\Bbb{CP}^{n+1}$ with $\phi ,$ if
necessary, we may assume that $\phi $ has a fixed point $q\in Q.$ Further,
passing to an inhomogeneous coordinate chart, $\phi $ is a local
automorphism of the real hyperquadric $v=\langle z,z\rangle $ in $\Bbb{C}%
^{n+1}.$ By Theorem \ref{exi-uni}, the mapping $\phi $ is necessarily
a fractional linear mapping as follows:
\begin{eqnarray*}
z^{*} &=&\frac{C(z-aw)}{1+2i\langle z,a\rangle -w(r+i\langle a,a\rangle )} \\
w^{*} &=&\frac{\rho w}{1+2i\langle z,a\rangle -w(r+i\langle a,a\rangle )}.
\end{eqnarray*}
Thus the mapping $\phi $ extends to be a linear mapping in $\Bbb{CP}^{n+1}.$
This completes the proof.\endproof
\begin{theorem}
Let $M$ be a spherical analytic real hypersurface with nondefinite Levi form
in a complex manifold, $U$ be a connected neighborhood of a point $p\in M,$
and $\phi $ be a biholomorphic mapping on $U$ such that
\[
\phi (U\cap M)\subset Q
\]
where $Q$ is a real hyperquadric in $\Bbb{CP}^{n+1}.$ Then the mapping $\phi
$ continues holomorphically along any path on $M$ as a locally biholomorphic
mapping.
\end{theorem}
\proof
Suppose that the assertion is not true. Then there would exist a path $%
\gamma [0,1]$ with $p=\gamma (0)$ such that the biholomorphic mapping $\phi $
on the neighborhood $U$ of $p$ can be biholomorphically continued along every
subpath $\gamma [0,\tau ]$ with $\tau <1,$ but not along the whole path. We
set $q=\gamma (1).$ Since $M$ is spherical, by Lemma \ref{umbilic}, there is
an open neighborhood $U_{q}$ of the point $q$ and a biholomorphic mapping $h_{q}$
on $U_{q}$ such that
\[
h_{q}(U_{q}\cap M)\subset Q.
\]
We take $\tau $ satisfying $\gamma (t)\in U_{q}\cap M$ for all $t\in [\tau
,1]$. Then there are an open neighborhood $U$ of the point $\gamma (\tau )$
and a unique automorphism $\varphi $ of $Q$ such that
\[
\phi =\varphi \circ h_{q}\quad \text{on }U\cap U_{q}.
\]
By Lemma \ref{extension}, $\varphi $ is biholomorphic on an open
neighborhood of $Q.$ Then passing to an open subset of $U_{q}$ containing $%
\gamma [0,1]\cap U_{q},$ if necessary, $\varphi \circ h_{q}$ is an analytic
continuation of $\phi $ on $U_{q}.$ This is a contradiction. This completes
the proof.\endproof
\subsection{On a nonspherical real hypersurface}
\begin{lemma}
\label{continuation}Let $M,M^{\prime }$ be nonspherical analytic real
hypersurfaces and $U$ be an open neighborhood of a point $p\in M$. Suppose
that $M^{\prime }$ is compact and the local automorphism group of $M^{\prime
}$ at each point $q\in M^{\prime }$ is compact. Let $\phi $ be a
biholomorphic mapping of $M$ such that $\phi (U\cap M)\subset M^{\prime }.$
Then $\phi $ is analytically continued along any chain $\gamma $ passing
through the point $p.$
\end{lemma}
\proof
Suppose that the assertion is not true. Then there is a chain-segment $%
\gamma :[0,1]\rightarrow M$ such that $\gamma (0)=p$ and $\phi $ can be
biholomorphically continued along every subpath $\gamma [0,\tau ]$ with $\tau
<1,$ but not along the whole path.
Because $M^{\prime }$ is compact, there exists the limit
\[
q\equiv \lim_{\tau \rightarrow 1}\phi (\gamma (\tau ))\in M^{\prime }.
\]
By Lemma \ref{chain-chain}, the subarc $\phi \circ \gamma :[0,\tau
]\rightarrow M^{\prime }$ is a chain-segment for all $\tau <1.$ Then, by
Theorem \ref{core2}, there exists a chain $\Gamma ^{\prime }$ on $M^{\prime
} $ such that
\[
\lim_{\tau \rightarrow 1}\phi (\gamma [0,\tau ])\subset \Gamma ^{\prime },
\]
where the condition \ref{U-auto} in Theorem \ref{core2} is satisfied because
the local automorphism group of $M^{\prime }$ at each point $q\in M^{\prime
} $ is compact.
Without loss of generality, we may assume that $M^{\prime }$ is in
Moser-Vitushkin normal form with the chain $\Gamma ^{\prime }$ in the $u$%
-curve. Since $\gamma :[0,1]\rightarrow M$ is a chain-segment, there is a
chain $\Gamma $ on $M$ such that
\[
\gamma [0,1]\subset \Gamma .
\]
Note that $\phi (U\cap \gamma [0,1])\subset \Gamma ^{\prime }.$ Then, by
Theorem \ref{embedding}, the mapping $\phi $ is biholomorphically continued
along the chain $\Gamma .$ Since the point $\gamma (1)$ is an interior point
of $\Gamma ,$ $\phi $ is biholomorphically continued on an open neighborhood
of the point $\gamma (1).$ This is a contradiction. This completes the proof.\endproof
\begin{theorem}[Pinchuk, Ezhov-Kruzhilin-Vitushkin]
\label{nonspherical}Let $M,$ $M^{\prime }$ be nonspherical connected
analytic real hypersurfaces in complex manifolds such that $M^{\prime }$ is
compact and every local automorphism group of $M^{\prime }$ at each point is
compact. Suppose that there exist an open neighborhood $U$ of a point $p$ of
$M$ and a biholomorphic mapping $\phi $ on $U$ such that $\phi (U\cap
M)\subset M^{\prime }.$ Then the mapping $\phi $ is biholomorphically
continued along any path in $M$.
\end{theorem}
\proof
Suppose that the assertion is not true. Then there is a path $\gamma
:[0,1]\rightarrow M$ such that $\gamma (0)=p$ and the mapping $\phi $ can be
biholomorphically continued along every subpath $\gamma [0,\tau ]$ with $\tau
<1,$ but not along the whole path.
Let $V$ be an open neighborhood of the point $q=\gamma (1).$ Then, by
Theorem \ref{core1}, there are a real number $\delta >0$ and a point $x\in
V\cap M$ such that $B(q;\delta )\subset V$ and, for each given curve $\eta
:[0,1]\rightarrow B(q;\delta )\cap M,$ there is a continuous family of
chain-segments
\[
\Gamma :[0,1]\times [0,1]\rightarrow V\cap M
\]
where $\Gamma (s,\cdot ):[0,1]\rightarrow V\cap M$ is a chain-segment of $M$
for each $s\in [0,1]$ satisfying
\[
\Gamma (s,0)=q\quad \text{and}\quad \Gamma (s,1)=\eta (s)\quad \text{for all
}s\in [0,1].
\]
We take $\tau $ such that $\tau <1$ and $\gamma [\tau ,1]\subset B(q;\delta
)\cap M.$ Then there is a continuous family of chain-segments
\[
\Gamma :[\tau ,1]\times [0,1]\rightarrow V\cap M
\]
such that
\[
\Gamma (s,0)=x\quad \text{and}\quad \Gamma (s,1)=\gamma (s)\quad \text{for
all }s\in [\tau ,1].
\]
By Lemma \ref{continuation}, the germ $\phi _{\gamma _{\tau }}$ at the point
$\gamma (\tau )$ is analytically continued to a germ $\phi _{x}$ at the
point $x$ along the chain-segment $\Gamma (\tau ,\cdot ).$ Then, by Lemma
\ref{continuation}, the germ $\phi _{x}$ is analytically continued to a germ
$\phi _{\gamma _{s}}$ at each point $\gamma _{s}\in \gamma [\tau ,1]$ along
the chain-segments $\Gamma (s,\cdot ),$ $s\in [\tau ,1].$
We claim that the germs $\phi _{\gamma _{s}},$ $s\in [\tau ,1],$ are the
analytic continuations of the germ $\phi _{\gamma _{\tau }}$ at the point $%
\gamma (\tau )$ along the subarc $\gamma [\tau ,1].$ Otherwise, there would
exist a number $r,$ $\tau <r\leq 1,$ such that the germs $\phi _{\gamma
_{s}},$ $s\in [\tau ,r),$ are analytic continuations of the germ $\phi
_{\gamma _{\tau }}$ at the point $\gamma (\tau ),$ but the germ $\phi
_{\gamma _{r}}$ is not an analytic continuation of the germ $\phi _{\gamma
_{\tau }}$. On the other hand, the germ $\phi _{\gamma _{r}}$ is an analytic
continuation of the germ $\phi _{x}.$ Note that the chain-segment $\Gamma
(r,\cdot )$ is compact. Thus there is a number $\varepsilon >0$ such that
each germ $\phi _{\Gamma (r,t)},$ $0\leq t\leq 1,$ at the point $\Gamma
(r,t) $ converges absolutely and uniformly on the open ball $B(\Gamma
(r,t);\varepsilon ). $ Then we can find a number $r_{1},$ $\tau <r_{1}<r,$ such
that
\[
\Gamma (r_{1},[0,1])\subset \bigcup_{0\leq t\leq 1}B(\Gamma (r,t);\varepsilon ).
\]
Note that the germ $\phi _{\gamma _{r_{1}}}$ is an analytic continuation of $%
\phi _{x}$ along the chain-segment $\Gamma (r_{1},\cdot )$ and, at the same
time, it is an analytic continuation of $\phi _{\gamma _{r}}$ along the
subarc $\gamma [r_{1},r].$ Then, necessarily, $\phi _{\gamma _{r}}$ is an
analytic continuation of $\phi $ at the point $\gamma (\tau )$ along the
path $\gamma [\tau ,r].$ This contradiction proves our claim.
Therefore, the mapping $\phi $ is analytically continued to an open
neighborhood of the point $q=\gamma (1)$ along the path $\gamma [0,1].$ This
is a contradiction. This completes the proof.\endproof
\begin{theorem}[Pinchuk]
\label{pin}Let $D,D^{\prime }$ be bounded strongly pseudoconvex domains in $%
\Bbb{C}^{n+1}$ with simply connected real-analytic boundaries. Suppose that
there is a connected neighborhood $U$ of a point $p\in \partial D$ and a
biholomorphic mapping $\phi $ on $U$ such that $\phi (U\cap \partial
D)\subset \partial D^{\prime }.$ Then $\phi $ extends to a biholomorphic
mapping of $D$ onto $D^{\prime }.$
\end{theorem}
\proof
Suppose that $\partial D$ is a nonspherical real hypersurface. Then, by
Theorem \ref{remove2}, $\partial D^{\prime }$ is nonspherical as well. By Theorem \ref
{nonspherical}, $\phi $ analytically extends along any path on $\partial D.$
Since $\partial D$ is simply connected, by the monodromy theorem, $\phi $
analytically extends to an open neighborhood of $\partial D$ as a local
biholomorphic mapping. Since $\partial D$ is compact, $\phi :\partial
D\rightarrow \partial D^{\prime }$ is a covering map. Since $\partial
D^{\prime }$ is simply connected, $\phi :\partial D\rightarrow \partial
D^{\prime }$ is a simple covering map so that there is a biholomorphic
inverse $\phi ^{-1}:\partial D^{\prime }\rightarrow \partial D.$ Then, by
Hartogs extension theorem, $\phi ,\phi ^{-1}$ analytically extend to open
neighborhoods respectively of $\overline{D},\overline{D^{\prime }}.$
Suppose that $\partial D$ is a spherical real hypersurface. Then, by Theorem
\ref{remove2} and Lemma \ref{umbilic}, $\partial D^{\prime }$ is spherical
as well. By Theorem \ref{pcj}, the domains $D,D^{\prime }$ are both
biholomorphic to an open ball $B^{n+1}$ so that $D$ is biholomorphic to $%
D^{\prime }.$ This completes the proof.\endproof
\section{Introduction} %
Over the last decade, machine learning (ML) potentials~\cite{behl-parr07prl,bart+10prl,rupp+12prl} have proven to be a very effective tool to improve the trade-off between accuracy and speed of atomistic simulations, allowing a quantum-level description of interatomic forces at a small fraction of the typical computational cost of ab-initio calculations.
%
The combination of ML potentials and traditional atomistic simulations techniques, such as molecular dynamics, has made it possible to address difficult scientific problems in chemistry~\cite{Smith_Isayev_Roitberg_2017,Devereux2020,chmi+18nc,ross+20jctc} and materials science~\cite{behl11pccp,soss+12prb,Bernstein2019,Artrith2018,Zamani2020,chen+20nature,Zeni2019,Fronzi2020}. The possibility of predicting properties beyond the potential energy -- from NMR chemical shieldings~\cite{cuny+16jctc,paru+18ncomm} to the electron density~\cite{broc+17nc,gris+19acscs} -- points to a bright future in which first-principles quality predictions of atomistic properties can be coupled with large-scale simulations~\cite{jia2020pushing} and thorough sampling of quantum statistics and dynamics~\cite{kapi+19jctc2,kapi+20jcp}.
As ML models become ubiquitous in atomic-scale modeling, the question naturally arises of how much one can trust the predictions of a purely inductive, data-driven approach when using it on systems that are not part of the training set.
This question is particularly pressing because the regression techniques that underlie ML models are inherently interpolative, and their ability to make predictions on new systems hinges on the possibility of decomposing the target property into a sum of atom-centred contributions. Thus, a ML prediction is only reliable if all the local environments that appear in the system of interest are properly represented in the training set.
It is therefore crucial to obtain an estimate of the error and uncertainty that derive from the finite number of reference structures, and
many methodological frameworks have been proposed that yield a measure of the uncertainty in the prediction of a machine learning model.\cite{Tran_Neiswanger_Yoon_Zhang_Xing_Ulissi_2020}
Within Bayesian schemes, such as Gaussian process regression, the uncertainty quantification is naturally encoded in the regression algorithm -- although computing the error is substantially more demanding than evaluating the prediction.\cite{rasm05book}
Sub-sampling approaches constitute an alternative. The uncertainty is estimated on the basis of the spread of the predictions of an ensemble (committee) of independently trained ML models, e.g. by subsampling of the full training dataset \cite{Peterson_Christensen_Khorshidi_2017,behl15ijqc, Shapeev2020,poli-roma94aos,efron1979}.
These uncertainty quantification schemes provide qualitative information on the reliability of the ML predictions, and are widely used in the context of online and offline active learning, to identify regions of configuration space that need to be added to the training set
\cite{Shapeev_Gubaev_Tsymbalov_Podryabinkin_2020, shuaibi2020, ross+20jctc, Schran_Brezina_Marsalek_2020, Jinnouchi_Lahnsteiner_Karsai_Kresse_Bokdam_2019, Vandermause_Torrisi_Batzner_Xie_Sun_Kolpak_Kozinsky_2020,Schran2020}.
When appropriately calibrated~\cite{musi+19jctc}, committee models can further provide a \emph{quantitative} assessment of the uncertainty in %
the prediction of an ML model, which can be readily propagated to estimate the error in properties that are obtained indirectly from the ML predictions such as vibrational spectra \cite{raim+19njp}.
In this paper we consider how to best exploit the availability of \rev{machine-learning models that include an error estimation} in the context of molecular dynamics simulations, and more generally in the evaluation of thermodynamic observables.
First, we show how to construct a \emph{weighted baseline} ML scheme, in which the uncertainty is used to ensure that whenever the simulation enters an extrapolative regime, the potential falls back to a reliable (if not very accurate) baseline.
Second, we use \rev{errors computed for individual configurations} to estimate the ML uncertainty associated with static \textit{thermodynamic averages} from MD trajectories computed using a single potential. Specifically, we introduce an on-the-fly reweighting technique, which takes into account both
i) the ML uncertainty on single-configuration calculations for a given observable over a significant sample of configurations, and
ii) the distortion of the sampling probability, due to the model-dependent Boltzmann factor entering the statistical averages.
We showcase applications of these methods to several different classes of materials science and chemical systems, ranging from polypeptides, to solutions, to liquid metals, and to both structural and functional properties.
\section{Theory}\label{sec:theory}
\rev{We consider a machine-learning model that can predict, for a structure $A$, the value of a property $y(A)$ as well as its uncertainty $\sigma^2(A)$.
We focus our derivations on committee models, that are easy to implement and allow for straightforward error propagation. However, most of the results we derive can be applied to any scheme that provide a differentiable uncertainty estimate for each property prediction. }
\subsection{Committee model and single-point uncertainty estimation}
We start our discussion by summarizing the formulation of the uncertainty estimation scheme based on a calibrated committee of sub-sampled models, introduced in Ref.~\citenum{musi+19jctc}. In a nutshell, the full training set of $N$ input-observation pairs $({A} , y_{\text{ref}}({A} ))$ is sub-sampled (without replacement) into $M$ training subsets of size $N_s<N$.
$M$ models are then trained independently on this ensemble of resampled data sets, inducing a fully non-parametric estimate of the distribution $P(y|{A} )$ of the prediction $y$, given an input ${A} $. The moments of such distribution can be readily computed, so that, for instance, the first (mean value) and second (variance) moments are
\begin{align}
& \bar{y}({A} ) = \frac{1}{M} \sum_{i=1}^M y^{(i)}({A} ) \label{eq:y_committee}\\
& \sigma^2({A} ) = \frac{1}{M-1} \sum_{i=1}^M \label{eq:sigma}
\left| y^{(i)}({A} ) - \bar{y}({A} )\right|^2.
\end{align}
Here, $y^{(i)}({A} )$ is the prediction of the $i-$th model, while the mean value $\bar{y}({A} )$ will be dubbed in the following as the \textit{committee} prediction.
The advantage of this machinery is that the ensemble $\lbrace y^{(i)}({A} )\rbrace_{i=1,\ldots,M}$ of model predictions provides an immediate estimate of the single-point uncertainty $\sigma^2({A} )$, since it fully characterises the error statistics.
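As a minimal illustration, the committee statistics of Eqs.~\eqref{eq:y_committee} and \eqref{eq:sigma} reduce to a mean and an unbiased sample variance over the $M$ members; the following NumPy sketch (function and variable names are ours, not taken from the authors' code) makes this explicit:

```python
import numpy as np

def committee_stats(predictions):
    """Committee mean and variance for one (or several) inputs.

    predictions: array of shape (M,) holding the M member predictions
    y^(i)(A) for a single structure A, or (M, N) for N structures.
    """
    predictions = np.asarray(predictions, dtype=float)
    y_bar = predictions.mean(axis=0)          # committee prediction
    sigma2 = predictions.var(axis=0, ddof=1)  # 1/(M-1) normalisation
    return y_bar, sigma2

# e.g. M = 4 member predictions for a single structure
y_bar, sigma2 = committee_stats([1.0, 1.2, 0.9, 1.1])
```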
The reduced size $N_s$ of the set of input-observation pairs on which the sub-sampled models are trained implies that the conditional probability distribution $P(y_{\text{ref}}({A} )|{A} )$ may deviate from the ideal Gaussian behaviour. We assume that such deviation only affects the width of the distribution, which may be too broad or (usually) too narrow, an effect that can also be seen as a consequence of the fact that training points cannot be considered to be independent identically distributed samples. We incorporate this deviation through a linear re-scaling factor $\alpha$ of the width $\sigma$ of the distribution. We further assume that $\alpha$ is independent of ${A} $, and that any two true values $y_{\text{ref}}({A} )$ and $y_{\text{ref}}({A} ^\prime)$ are uncorrelated if ${A} \neq {A} ^\prime$, so that the predictive distribution has the following form:
\begin{equation}
\begin{split}
& P(\mathbf{y}_{\text{ref}} | \lbrace {A} \rbrace, \alpha) \\
& =
\prod_{A} \frac{1}{\sqrt{2\pi \alpha^2 \sigma^2({A} )}}
\exp\left[-\frac{\left| y_{\text{ref}}({A} ) - \bar{y}({A} )\right|^2}{2\alpha^2 \sigma^2({A} )}\right]
\end{split}
\end{equation}
The parameter $\alpha$ is then fixed by maximizing the log-likelihood of this distribution,
\begin{equation}
LL(\alpha) = \frac{1}{N_\mathrm{val}} \sum_{{A} \in\mathrm{val}} \log P(y_{\text{ref}}({A} )|{A} , \alpha)
\end{equation}
over a set of $N_\mathrm{val}$ validation configurations, giving the optimal
\begin{equation}
\alpha{^2} \equiv \frac{1}{N_\mathrm{val}} \sum_{{A} \in \mathrm{val}}
\frac{\left|y_{\text{ref}}({A} ) - \bar{y}({A} )\right|^2}{\sigma^2({A} )}. \label{eq:alpha}
\end{equation}
In practice, the explicit construction of a validation set can be avoided by means of a scheme where the validation points still belong to the training set, yet they are absent from a given number of sub-sampled models, as discussed in depth in Ref.~\citenum{musi+19jctc}.
Note that Eq.~\eqref{eq:alpha} is a biased estimator when the number of committee members $M$ is small. In Appendix~\ref{app:bias} we discuss the issue in more detail, and show that the bias can be corrected by computing
\begin{equation}
\alpha{^2} \equiv - \frac{1}{M} + \frac{M-3}{M-1} \frac{1}{N_\mathrm{val}} \sum_{{A} \in \mathrm{val}}
\frac{\left|y_{\text{ref}}({A} ) - \bar{y}({A} )\right|^2}{\sigma^2({A} )}. \label{eq:alpha-unbiased-theory}
\end{equation}
We apply this expression in the numerical demonstrations in Section~\ref{sec:results}, but assume the asymptotic $M\rightarrow\infty$ limit in the rest of the formal derivations.%
The determination of the optimal $\alpha$ also allows us to properly re-scale the predictions of the models to be consistent with Eqs. $\eqref{eq:y_committee}$ and $\eqref{eq:sigma}$ and the optimized distribution:
\begin{align}
y^{(i)}({A} ) &\leftarrow \bar{y}({A} ) + \alpha [ y^{(i)}({A} ) - \bar{y}({A} ) ].
\end{align}
The committee prediction $\bar{y}$ is invariant under rescaling, and the spread of the predictions is adjusted according to $\sigma \leftarrow \alpha \sigma$. The rescaled predictions can be used to compute arbitrarily-complicated non-linear functions of $y$, and the mean and spread of the transformed predictions are indicative of the distribution of the target quantities. In what follows, we always assume that the committee predictions have been subject to this calibration procedure.
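The bias-corrected calibration of Eq.~\eqref{eq:alpha-unbiased-theory} and the rescaling rule above can be sketched as follows (again a hypothetical NumPy implementation, with our own naming conventions):

```python
import numpy as np

def calibrate_alpha2(y_ref, y_bar, sigma2, M):
    """Bias-corrected alpha^2 estimated over the validation structures.

    y_ref, y_bar, sigma2: arrays over the N_val validation set;
    M: number of committee members.
    """
    z2 = np.mean((np.asarray(y_ref) - np.asarray(y_bar)) ** 2
                 / np.asarray(sigma2))
    return -1.0 / M + (M - 3.0) / (M - 1.0) * z2

def rescale_members(predictions, alpha):
    """Scale the member spread by alpha, leaving the mean unchanged."""
    predictions = np.asarray(predictions, dtype=float)
    y_bar = predictions.mean(axis=0)
    return y_bar + alpha * (predictions - y_bar)
```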
\subsection{Using errors for robust sampling and active learning}
Let us consider the following \textit{baselined model}
\begin{equation}
V^{(i)}({A} ) = V_b({A} ) + V_{\delta}^{(i)}({A} )
\end{equation}
where the $i-$th model potential $V^{(i)}_{\delta}$ is trained on the differences between a target (say, DFT-accurate) potential $\{V_\text{ref}({A} )\}$ and a baseline potential $\{V_b({A} )\}$.
Splitting a potential in a cheap-to-compute but inaccurate, and an accurate-but-expensive parts has been part of the molecular dynamics toolkit for a long time~\cite{tuck+92jcp,mark-mano08cpl,kapi+16jcp}, and has proven very effective in the context of machine-learning models~\cite{rama+15jctc,bart+17sa}.
Let us define the full committee potential
\begin{equation}
\bar{V}({A} ) = V_b({A} ) + \bar{V}_{\delta}({A} ),
\end{equation}
the committee average of the correction potentials
\begin{equation}
\bar{V}_{\delta}({A} ) = \frac{1}{M} \sum_{i=1}^M V_{\delta}^{(i)}({A} ),
\end{equation}
and its uncertainty
\begin{equation}
\sigma^2({A} ) = \frac{1}{M-1} \sum_{i=1}^{M} \left| V_{\delta}^{(i)} - \bar{V}_{\delta}({A} ) \right|^2,
\end{equation}
as in Eqs.~\eqref{eq:y_committee} and \eqref{eq:sigma}.
This uncertainty estimate, \rev{as well as any other similarly accurate and differentiable measure of the error,} can be used as an indication of the reliability of the ML predictions, and incorporated in an active-learning framework~\cite{li+15prl,smit+18jcp,jane+19cs,Schran_Brezina_Marsalek_2020}: during a molecular dynamics simulation, whenever the trajectory enters a region in which the model exhibits an extrapolative behaviour, the uncertainty $\sigma$ increases, and one can gather new configurations for an improved model~\cite{ross+20jctc}.
Unfortunately, trajectories entering an extrapolative region often become unstable very quickly, leading to sampling of unphysical configurations or the complete failure of the simulation.
Crucially, when using a baseline potential, one can stabilize the simulation by dynamically switching to using only $V_b$. This automatic fall-back mechanism can be realized by performing MD using the weighted-baseline potential
\begin{equation}
\begin{split}
U({A} ) &=\left[\frac{1}{\sigma_b^2}+\frac{1}{\sigma^2({A} )}\right]^{-1} \left[\frac{1}{\sigma_b^2} V_b({A} ) +\frac{1}{\sigma^2({A} )} \bar{V}({A} )\right] \\
&=V_b({A} ) + \frac{\sigma_b^2}{\sigma_b^2 + \sigma^2({A} )} \bar{V}_{\delta}({A} ), \label{eq:U_combined}
\end{split}
\end{equation}
where the baseline uncertainty $\sigma_b$ is estimated as the variance of the difference between baseline and reference
\begin{multline}
\sigma_b^2 \equiv \frac{1}{N-1} \left[ \sum_{{A} } \left| V_b({A} ) - V_\text{ref}({A} ) \right|^2 \right.\\
\left.-\frac{1}{N} \left(\sum_{{A} } V_b({A} ) - V_\text{ref}({A} )\right)^2 \right], \label{eq:sigma_b}
\end{multline}
the sum running over the full training set, and $V_\text{ref}({A} )$ being the target energy for configuration ${A} $. This definition explicitly accounts for the fact that the baseline and reference energies often differ by a large constant offset.
Eq.~\eqref{eq:U_combined} corresponds to the weighted sum of the baseline potential $V_b({A} )$ and the full committee potential $\bar{V}({A} )$, consistent with a minimization of the combined error.
The forces (and higher derivatives) can be defined straightforwardly, paying attention to the ${A} $-dependence of $\sigma^2({A} )$ when the derivatives of $U({A} )$ are taken.
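For a single structure, the weighted-baseline energy of Eq.~\eqref{eq:U_combined} and the baseline variance of Eq.~\eqref{eq:sigma_b} can be sketched as follows (a NumPy illustration with our names; as noted above, for the forces one must also differentiate $\sigma^2({A} )$ with respect to the atomic positions):

```python
import numpy as np

def baseline_sigma2(v_b, v_ref):
    """Variance of the baseline error over the training set; the
    ddof=1 sample variance removes the constant baseline-reference
    offset, as in the definition of sigma_b^2."""
    d = np.asarray(v_b, dtype=float) - np.asarray(v_ref, dtype=float)
    return d.var(ddof=1)

def weighted_baseline(v_b, v_delta_members, sigma_b2):
    """Weighted-baseline energy U(A) for one structure.

    v_b: baseline energy V_b(A);
    v_delta_members: the M committee corrections V_delta^(i)(A).
    """
    v_delta = np.asarray(v_delta_members, dtype=float)
    sigma2 = v_delta.var(ddof=1)
    w = sigma_b2 / (sigma_b2 + sigma2)  # -> 0 when the ML model is uncertain
    return v_b + w * v_delta.mean()
```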
Note also that in many cases -- including Behler-Parrinello neural networks~\cite{behl-parr07prl} and SOAP-GAP models~\cite{bart+10prl} -- the ML energy is computed as a sum of atom-centred contributions
\begin{equation}
\bar{V}_\delta({A} ) = \sum_{k\in {A} }
\bar{V}_\delta({A} _k),
\end{equation}
where ${A} _k$ indicates the environment centred on the $k$-th atom in structure ${A} $. Thus, it is possible to compute uncertainty estimates at the level of individual atomic contributions, and evaluate Eq.~\eqref{eq:U_combined} as
\begin{equation}
U(A) = V_b({A} ) + \sum_{k\in {A} } \frac{\sigma_b^2}{\sigma_b^2 + \sigma^2({A} _k)} \bar{V}_{\delta}({A} _k).
\end{equation}
This expression can be used even if the baseline does not entail a natural atom-centred decomposition, although in such a case one needs to re-define $\sigma_b$ so that it corresponds to the estimated error \emph{per atom}.
This can be beneficial when the error is not spread equally across the system, e.g. when an unexpected chemical reaction occurs in an otherwise homogeneous system.
By monitoring the weight of the ML correction one can determine whether the simulation remains largely in the low-uncertainty region, or whether it enters the extrapolative regime too frequently, requiring further training.
Finally, it is worth mentioning that a similar strategy could be used to combine multiple ML potentials with different levels of accuracy, for instance one based on short-range/two-body interactions, that is more resilient but inaccurate, and one based on a long-range and high-body-order parameterization, which is likely to be more accurate, but requires large amounts of data for training, and is therefore more likely to enter high-uncertainty regions.
\subsection{On-the-fly uncertainty of thermodynamic averages}\label{sec:ML-Uncertainty}
The machinery discussed so far paves the way for reliable estimates of the uncertainty of single-point calculations, i.e. of the value an observable quantity assumes when evaluated at a specific point in phase-space. It also allows computing the uncertainty of predictions averaged over several samples, assuming that the only source of error is that associated with the ML model of the target property~\cite{chiheb2020}.
However, the uncertainty in predictions also propagates to thermodynamic averages of target properties.
\rev{Estimating how such uncertainty propagates is particularly straightforward in the case of a committee-based estimate.}
Computing the mean of an observable $a$ over a trajectory sampling e.g. the mean potential $\bar{V}$ from a committee of $M$ potential models (PMs) $V^{(i)}$ yields
\begin{equation}
\Bar{\Bar{a}} \equiv \langle \bar{a} \rangle_{\bar{V}} = \frac{1}{M'} \sum_{j=1}^{M'} \left<a^{(j)}\right>_{\bar{V}},
\label{eq:a-bbar}
\end{equation}
where $a^{(j)}$ indicates the member of a committee of $M'$ observable models (OMs), and $\langle a\rangle_V$ the mean of an observable over the ensemble defined by the potential $V$.
When computing thermodynamic averages, one should therefore also include the uncertainty in the ensemble of configurations.
A na\"ive (but very time-consuming) way to estimate the full uncertainty relies on running $M$ simulations, each driven by the (re-scaled) force field of a specific PM, and computing the averages $\langle a ^{(j)}\rangle_{V^{(i)}}$ of the target observable $a^{(j)}$ for each OM, and finally the average
\begin{equation}
\tilde{a} \equiv \frac{1}{M M'} \sum_{i=1}^M \sum_{j=1}^{M'} \langle a^{(j)} \rangle_{V^{(i)}}
\label{eq:dbl-avg}
\end{equation}
and variance over both OMs and PMs. While trivially parallelizable, this strategy is inconvenient, as it prevents exploiting the considerable computational savings that can be achieved by computing multiple committee members over the same atomic configuration. %
The need for different trajectories can be avoided by employing an on-the-fly re-weighting strategy~\cite{torr-vall99jcp}. For a canonical distribution at temperature $T = 1/(\beta k_B)$,
\begin{equation}
\langle a^{(j)} \rangle_{V^{(i)}} \equiv \frac{1}{Z^{(i)}} \int a^{(j)}(\mathbf{q}) e^{-\beta V^{(i)}(\mathbf{q})} d\mathbf{q},\label{eq:direct}
\end{equation}
where $\mathbf{q}=(\mathbf{q}_1,\ldots,\mathbf{q}_{N_p})$ is the set of positions of the $N_p$ particles,
\begin{equation}
Z^{(i)} \equiv \int e^{-\beta V^{(i)}(\mathbf{q})} d\mathbf{q}
\end{equation}
is the configurational partition function and $V^{(i)}(\mathbf{q})$ is the potential energy of the $i-$th model.
By introducing the \textit{weights}
\begin{equation}
w^{(i)}(\mathbf{q}) \equiv e^{-\beta [V^{(i)}(\mathbf{q}) - \bar{V}(\mathbf{q})]} ,\label{eq:weights}
\end{equation}
where $\bar{V}$ is the mean committee potential energy, we find
\begin{equation}
\langle a^{(j)} \rangle_{V^{(i)}} = \frac{\int w^{(i)}(\mathbf{q}) a^{(j)}(\mathbf{q}) e^{-\beta \bar{V}(\mathbf{q})} d\mathbf{q}}{\int w^{(i)}(\mathbf{q}) e^{-\beta \bar{V}(\mathbf{q})} d\mathbf{q}}
\end{equation}
or, in shorthand notation,
\begin{equation}
\langle a^{(j)} \rangle_{V^{(i)}} = \frac{\left\langle w^{(i)} a^{(j)}\right\rangle_{\bar{V}}}{\left\langle w^{(i)}\right\rangle_{\bar{V}}}. \label{eq:reweighted}
\end{equation}
This means that, under the ergodic hypothesis, the re-weighting technique allows us to run \textit{a single trajectory} driven by the force field of the committee, and yet to obtain estimates for the averages as computed via the different models.
Thus, it is possible to compute the full uncertainty, including both the error on the OMs and the PMs, by using the reweighting formula to evaluate
\begin{equation}
\tilde{\sigma}^2 \equiv \frac{1}{MM' - 1} \sum_{i=1}^M \sum_{j=1}^{M'} \left| \langle a^{(j)} \rangle_{V^{(i)}} - \tilde{a} \right|^2 \label{eq:sigma_tilde}
\end{equation}
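The weights of Eq.~\eqref{eq:weights}, the reweighted averages of Eq.~\eqref{eq:reweighted} and the combined variance of Eq.~\eqref{eq:sigma_tilde} can be sketched as follows (a NumPy illustration with our array layouts; shifting the exponent by a constant, as done here, leaves the ratio unchanged but avoids overflow):

```python
import numpy as np

def reweighted_average(a_j, v_i, v_bar, beta):
    """<a^(j)>_{V^(i)} from one trajectory driven by bar{V}:
    w^(i)(q) = exp(-beta [V^(i)(q) - bar{V}(q)]), then <w a>/<w>."""
    h = beta * (np.asarray(v_i) - np.asarray(v_bar))
    w = np.exp(-(h - h.min()))  # constant shift: same ratio, no overflow
    return np.sum(w * np.asarray(a_j)) / np.sum(w)

def full_uncertainty(a_frames, v_frames, beta):
    """Mean and variance over all (PM, OM) pairs of models.

    a_frames: (Mp, T) samples of each observable model along the run;
    v_frames: (M, T) per-frame energies of each potential model.
    """
    v_bar = np.mean(v_frames, axis=0)
    est = np.array([[reweighted_average(a, v, v_bar, beta)
                     for v in v_frames] for a in a_frames])
    return est.mean(), est.var(ddof=1)  # 1/(M M' - 1) normalisation
```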
This reweighting approach has further important implications for molecular dynamics simulations: for instance, in on-the-fly learning it is customary to correct (re-train) the ML force field from time to time along a molecular dynamics simulation, so as to include new configurations in the training set\cite{csanyiPRL2003,li2015molecular}, an operation which can introduce systematic errors in the estimation of canonical averages, due to the different potential-energy fields along the trajectory.
By simply storing the model-dependent potential energies along the simulation alongside the corresponding configurations, one can at any time compute a set of weights based on the most recent value of the potential, to obtain averages that use the entire trajectory and yet are consistent with the most accurate model available.
Equation \eqref{eq:reweighted} is in principle exact. However, from a computational standpoint, the efficiency in sampling the probability measure of the $i-$th model through reweighting is in general lower than that of direct sampling as in Eq.~\eqref{eq:direct}, with a statistical error growing exponentially with the variance of $h^{(i)} \equiv-\ln w^{(i)} = \beta (V^{(i)} - \bar{V})$, which inevitably increases with system size.
Given that we are only interested in computing an estimate of the uncertainty, we can use an approximate (but statistically more stable) expression introduced in Ref.~\citenum{ceri+12prsa}, based on a cumulant expansion.
Assuming that $ a^{(j)} $ and $h^{(i)}$ are correlated Gaussian variates %
(all with respect to the committee phase-space probability measure), we have
\begin{equation}
\begin{split}
&\langle a^{(j)} \rangle_{V^{(i)}} \approx \langle a^{(j)} \rangle_{\bar{V}} \\ &-\beta [\langle a^{(j)} (V^{(i)} - \bar{V}) \rangle_{\bar{V}} - \langle a^{(j)} \rangle_{\bar{V}} \langle V^{(i)} - \bar{V} \rangle_{\bar{V}} ]. \label{eq:cumulant}
\end{split}
\end{equation}
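In this form the correction is simply $-\beta$ times the covariance, over the committee-driven trajectory, between the observable and the energy difference $V^{(i)}-\bar{V}$; a minimal NumPy sketch (our names, not the authors' code):

```python
import numpy as np

def cea_average(a_j, v_i, v_bar, beta):
    """Cumulant-expansion estimate of <a^(j)>_{V^(i)}: the plain
    trajectory average minus beta * cov(a^(j), V^(i) - bar{V})."""
    a_j = np.asarray(a_j, dtype=float)
    dv = np.asarray(v_i, dtype=float) - np.asarray(v_bar, dtype=float)
    return a_j.mean() - beta * (np.mean(a_j * dv) - a_j.mean() * dv.mean())
```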
\begin{figure}
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{Hydrogen-hydrogen radial pair correlation function in water. (Top) pair distribution function computed for a simulation driven by the committee average; (middle) deviations, from the plot in the top panel, of the pair distribution functions extracted from $M=4$ independent trajectories (one for each NNP, displayed in different colours); (bottom) comparison between the result from an independent trajectory driven by NNP 3 (orange), and the pair correlation obtained from the committee-driven trajectory by direct re-weighting, Eq.~\eqref{eq:reweighted} and the cumulant expansion approximation (CEA), Eq.~\eqref{eq:cumulant}.}\label{fig:water_gofr_models}
\end{figure}
In order to compare the different definitions given so far for a physical example, we consider a simple thermodynamic average, i.e.~the radial pair correlation function $g(r)$
between H atoms in water.
We refer to Sec.~\ref{sec:results} for the specific details of the simulation.
The top panel in Fig.~\ref{fig:water_gofr_models} displays $\bar{\bar{g}}(r)$ determined, as in Eq.~\eqref{eq:a-bbar}, by averaging over a significant number of atomic configurations sampled from a trajectory driven by a committee of $M=4$ models (neural network potentials, NNPs).
The middle panel displays the differences $\Delta g^{(i)}(r) = g^{(i)}(r) - \bar{\bar{g}}(r)$, with $g^{(i)}(r)$ obtained after sampling structures from separate trajectories driven by each NNP model.
In the bottom panel, we focus on one of the models, and we compare the deviation of the pair distribution function, with respect to $\bar{\bar{g}}(r)$, computed according to: an independent trajectory driven by NNP 3 (orange, same as in the central panel); the direct re-weighting of the sampling from the trajectory driven by the committee as in Eq.~\eqref{eq:reweighted} (purple); and within the cumulant expansion approximation (CEA), Eq.~\eqref{eq:cumulant} (dark green).
The match between the three curves shows that the re-weighting procedure, both in its exact form and using the CEA, is capable of reproducing the result obtained from an independent trajectory generated by a specific NNP without the need of explicitly running it.
For this example, which entails a relatively small simulation cell and low discrepancy between the committee average and the individual NNPs, there is no substantial difference between the exact and CEA reweighing.
We recommend using the CEA over the direct estimator, not only because of its improved stability and statistical efficiency, but also because the linearized form emphasizes the different sources of error associated with the single-trajectory average~\eqref{eq:a-bbar}, and has several desirable formal implications.
First, using the CEA the mean over the trajectories is consistent with the average computed over the trajectory driven by $\bar{V}$ -- whereas in general Eq.~\eqref{eq:dbl-avg} would yield a different value from~\eqref{eq:a-bbar}:
\begin{equation}
\!\tilde{a} \approx
\Bar{\Bar{a}} - \!\frac{\beta}{M}\sum_i [\langle \bar{a} (V^{(i)} - \bar{V}) \rangle_{\bar{V}} - \langle \bar{a} \rangle_{\bar{V}} \langle V^{(i)} \!-\! \bar{V} \rangle_{\bar{V}} ]= \Bar{\Bar{a}}.
\end{equation}
Second, one sees that
\begin{equation}
\tilde{\sigma}^2 \approx {\frac{M(M'-1)}{MM' - 1}} \sigma^2_a + {\frac{M'(M-1)}{MM' - 1}} \sigma^2_{aV}
\underset{M,M'\rightarrow \infty}{=}\sigma^2_a+\sigma^2_{aV} \label{eq:fullsigma2_CEA}
\end{equation}
where
\begin{equation}
\sigma^2_a \equiv \frac{1}{M'-1} \sum_{j=1}^{M'}
\left| \langle a^{(j)} \rangle_{\bar{V}} - \Bar{\Bar{a}} \right|^2 \label{eq:sigma_a}
\end{equation}
indicates the uncertainty arising from the OMs, and
\begin{equation}
\begin{split}
\sigma_{aV}^2 & \equiv \frac{1}{M'} \sum_{j=1}^{M'} {\sigma_{aV}^{2}}^{\!(j)}, \\
{\sigma_{aV}^{2}}^{\!(j)}
&\equiv {\frac{1}{M-1}} \sum_{i=1}^M \left| \langle a^{(j)} \rangle_{V^{(i)}} - \frac{1}{M} \sum_{i=1}^M \langle a^{(j)} \rangle_{V^{(i)}}\right|^2 \\
&\approx \frac{\beta^2}{M-1} \sum_{i=1}^M \left|\langle a^{(j)} ( V^{(i)} - \bar{V} ) \rangle_{\bar{V}} - \langle a^{(j)} \rangle_{\bar{V}} \langle V^{(i)} - \bar{V} \rangle_{\bar{V}} \right|^2
\end{split} \label{eq:sigma_V}
\end{equation}
indicates the uncertainty that arises due to the sampling of the different PMs.
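Putting the pieces together, the finite-$M$ combination of Eqs.~\eqref{eq:fullsigma2_CEA}--\eqref{eq:sigma_V} can be evaluated from the stored per-frame committee energies and observables; a NumPy sketch under our naming conventions:

```python
import numpy as np

def total_uncertainty(a_frames, v_frames, beta):
    """Full CEA uncertainty: OM spread plus PM-sampling spread.

    a_frames: (Mp, T) observable-model samples along the trajectory;
    v_frames: (M, T) potential-model energies along the trajectory.
    """
    a_frames = np.asarray(a_frames, dtype=float)
    v_frames = np.asarray(v_frames, dtype=float)
    Mp = a_frames.shape[0]
    M = v_frames.shape[0]
    a_means = a_frames.mean(axis=1)
    sigma2_a = a_means.var(ddof=1)          # spread over the OMs
    dv = v_frames - v_frames.mean(axis=0)   # V^(i) - bar{V}, shape (M, T)
    # cov[j, i] = <a^(j) dv^(i)> - <a^(j)><dv^(i)> over the frames
    cov = (a_frames[:, None, :] * dv[None, :, :]).mean(axis=2) \
        - a_means[:, None] * dv.mean(axis=1)[None, :]
    sigma2_aV = np.mean(beta ** 2 * (cov ** 2).sum(axis=1) / (M - 1))
    w_a = M * (Mp - 1) / (M * Mp - 1)
    w_v = Mp * (M - 1) / (M * Mp - 1)
    return w_a * sigma2_a + w_v * sigma2_aV
```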
In the general case of an uncertainty estimation that is \emph{not} based on a committee model, where only the ``best values'', $\bar{a}(\mathbf{q})$ and $\bar{V}(\mathbf{q})$, and their uncertainties, $\sigma_{\bar{a}}(\mathbf{q})$ and $\sigma_{\bar{V}}(\mathbf{q})$, are available, the reweighting technique so far described becomes inapplicable. The error-propagation formula for the uncertainty $\tilde{\sigma}^2$ on the canonical average $\langle \bar{a} \rangle_{\bar{V}}$ cannot be straightforwardly implemented either, since it requires the off-diagonal elements of the covariance matrix, and not only $\sigma^2_{\bar{a}}(\mathbf{q})$ and $\sigma^2_{\bar{V}}(\mathbf{q})$.
Nonetheless, as shown in Appendix \ref{app:unc-prop}, even in this case, a simple upper bound for $\tilde{\sigma}^2$ can be obtained:
\begin{equation}
\tilde{\sigma} \leq \langle \sigma_{\bar{a}} \rangle + \beta\left\langle \big|\langle \bar{a} \rangle - \bar{a} \big|\, \sigma_{\bar{V}} \right\rangle, \label{eq:sigma_gen}
\end{equation}
which corresponds, at least in spirit, to the results we obtain for the committee model, Eqs.~\eqref{eq:fullsigma2_CEA}, \eqref{eq:sigma_a}, and \eqref{eq:sigma_V}, and its implementation shows no hurdles.
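For definiteness, the bound of Eq.~\eqref{eq:sigma_gen} only requires trajectory averages of the stored best values and their single-point uncertainties; a minimal NumPy sketch (names are ours):

```python
import numpy as np

def uncertainty_upper_bound(a_bar, sigma_a, sigma_v, beta):
    """Upper bound on the error of <bar{a}>_{bar{V}} when only the
    per-frame best values and their uncertainties are available."""
    a_bar = np.asarray(a_bar, dtype=float)
    sigma_a = np.asarray(sigma_a, dtype=float)
    sigma_v = np.asarray(sigma_v, dtype=float)
    return sigma_a.mean() + beta * np.mean(np.abs(a_bar.mean() - a_bar) * sigma_v)
```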
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig2.pdf}
\caption{Graphical summary of the essential steps of the workflow described in Sec.~\ref{sec:theory}: training a committee of ML potentials (and possibly of other observables), internally validating and re-calibrating it through the scaling factor $\alpha$, and finally using it for uncertainty estimation in molecular dynamics and thermodynamic averages.
}
\label{fig:workflow}
\end{figure*}
\section{Applications}
\label{sec:results}
\rev{Fig.~\ref{fig:workflow} summarizes how the weighted baseline scheme, and the on-the-fly estimation of errors for statistical averages, can be integrated with a calibrated committee model, in the context of a molecular dynamics simulation.}
After the construction of a suitable database on which reference values (say of energies and forces) are computed, the database is randomly sub-sampled into $M$ smaller training sets on which a committee of $M$ ML models are trained.
Depending on the specific physical system/quantity analyzed we adopt two alternative but equally valid approaches to construct a validation set, in order to calibrate the uncertainty of the committee and estimate the re-scaling factor $\alpha$. The first strategy consists in extracting $N_\mathrm{val}$ decorrelated configurations from short committee MD trajectories, calculating forces and energies with the reference method, and employing these as the validation set. In the second strategy, instead, the ensemble of $N_\mathrm{val}$ validation structures is gathered by selecting, from the original training database, those structures that do not appear in at least $n$ of the training subsets.
Following the $\alpha$ calibration step, MD simulations are driven by the committee model.
The weighted-baseline numerical integration of
the equations of motion is based on Eq.~\eqref{eq:U_combined}, which reduces to a non-baselined model by setting $V_b = 0$.
During the MD simulation driven by the committee model, all the (re-scaled) model-dependent quantities of interest are stored for a significant set of (uncorrelated) configurations, eventually leading to re-weighting and, therefore, to uncertainty estimation of the chosen thermodynamic averages.
Any configuration encountered along the trajectory that is associated with an error higher than a set threshold can be used to improve the reference database, in an offline (or online) active learning scheme.
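In its simplest offline form, this selection step amounts to flagging stored frames whose committee error estimate exceeds a user-set threshold (a sketch; function name and threshold are ours):

```python
def flag_for_retraining(sigma, threshold):
    """Indices of stored trajectory frames whose estimated error
    sigma(A) exceeds the threshold; these frames are candidates for
    new reference calculations in an active-learning cycle."""
    return [k for k, s in enumerate(sigma) if s > threshold]
```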
In the next subsections we describe how we applied this routine to weighted baseline integration (Sec. \ref{sec:tripeptide}), as well as to compute thermodynamic averages and the related ML uncertainty for different observables in different physico-chemical environments (Secs.~\ref{sec:PDF}, \ref{sec:FES}, \ref{sec:FT-DOS}).
All the simulations are run with the molecular dynamics engine i-PI\cite{kapi+19cpc} interfaced with the massively parallel molecular dynamics code LAMMPS\cite{plim95jcp} with the n2p2 plugin \cite{singraber2019library} to evaluate the neural network potentials.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig3.pdf}
\caption{A visualization of the results of the replica-exchange MD simulation of the Phe-Gly-Phe tripeptide, using a weighted-baseline scheme.
Central scatter-plot: a set of 2,000 atomic configurations collected from all replicas is classified according to the first two principal components of their SOAP features ($x$ and $y$ axes), and the replica temperature ($z$ axis, in logarithmic scale). The SOAP representation employs a cut-off radius of 4 \AA, a basis of $n=6$ radial and $l=4$ angular functions, and a Gaussian width of 0.3 \AA. Each point corresponds to one configuration, colour-coded according to the weight of the ML correction to the baseline potential, see Eq.~\eqref{eq:U_combined}. Examples of typical configurations that are representative of the different temperatures and ML correction weights are displayed in the panels surrounding the scatter plot.}
\label{fig:tripeptide}
\end{figure*}
\subsection{Weighted baseline integration}\label{sec:tripeptide}
We begin by performing and analyzing a 120\,ps temperature replica-exchange molecular dynamics (REMD)~\cite{petraglia2016beyond} simulation of the Phe-Gly-Phe tripeptide, using the weighted baseline method.
The i-PI energy and force engine~\cite{kapi+19cpc} is used to simulate 12 Langevin-thermostatted replicas with temperatures between 300\,K and 2440\,K using a time-step of 0.5\,fs.
Baseline density-functional-based tight binding energies and forces are evaluated using the DFTB+~\cite{aradi2007dftb+} package and the DFTB3/3OB~\cite{gaus2012parametrization, gaus2014parameterization} parametrisation with a D3BJ~\cite{grimme2011effect} dispersion correction (3OB+D3BJ).
An ensemble of $M=4$ Behler-Parrinello artificial neural networks (NN)~\cite{behl-parr07prl} is then used to promote this baseline to a first-principles density-functional-theory (DFT) level of theory.
The DFT calculations are performed using the GAMESS-US~\cite{schmidt1993general, gordon2005advances} code and the PBE density functional~\cite{perdew1996generalized} with a dDsC dispersion correction~\cite{steinmann2010system, steinmann2011comprehensive, steinmann2011generalized} and the def2-TZVP basis set~\cite{Schafer1992}.
The NNs are trained to reproduce the differences between the DFTB+ baseline and the target DFT energies and forces.
The NNs differ only in the initialisation of the NN weights and the internal cross-validation splits of the reference data into 90\% training and 10\% test data.
The reference data underlying the NNs is constructed by farthest-point sampling configurations from 1.5\,ns long REMD simulations of 26 amino acids, each composed of 16 Langevin-thermostatted replicas with logarithmically-spaced temperatures between 300\,K and 1000\,K.
The resultant set of configurations is enriched with 3,380 geometry-optimised dimers from the BioFragment Database\cite{burns_biofragment_2017}.
Note that the amino acids are simulated at less than half the maximum temperature at which the tripeptide is simulated.
The uncertainties associated with the ensemble predictions are estimated using the scheme of Ref.~\citenum{musi+19jctc}, using a scaling correction of $\alpha = 1.0$, computed on the tripeptide validation data.
The uncertainty of the ML model is used, together with a baseline uncertainty of DFTB $\sigma_b = 7 \times 10^{-3}$\,meV/atom, estimated according to Eq.~\eqref{eq:sigma_b}, to build a weighted baseline model following Eq.~\eqref{eq:U_combined}.
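As a minimal illustration of this weighted-baseline combination (all function and variable names here are hypothetical, and the calibrated-variance expression is one plausible reading of the committee scheme), the combined energy could be assembled as:

```python
import numpy as np

def weighted_baseline_energy(v_baseline, v_delta_members, sigma_b, alpha=1.0):
    """Combine a baseline energy with an ensemble ML correction,
    down-weighted by the calibrated committee uncertainty:
    V = V_b + w * mean(V_delta),  w = sigma_b^2 / (sigma_b^2 + sigma^2)."""
    v_delta_members = np.asarray(v_delta_members, dtype=float)
    v_delta = v_delta_members.mean()                   # committee-mean correction
    sigma2 = (alpha ** 2) * v_delta_members.var(ddof=1)  # calibrated variance
    w = sigma_b ** 2 / (sigma_b ** 2 + sigma2)         # -> 0 when extrapolating
    return v_baseline + w * v_delta, w

# toy numbers: four members in close agreement -> weight close to one
e, w = weighted_baseline_energy(-10.0, [0.11, 0.12, 0.10, 0.13], sigma_b=0.05)
```

When the members agree, $w$ stays near one and the full correction is applied; when they diverge (extrapolation), $w\to 0$ and the simulation falls back on the baseline potential.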
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig4.pdf}
\caption{Weights for the ML correction in the weighted-baseline scheme for the Phe-Gly-Phe tripeptide discussed in the text. In the left panels the weights $w$ are displayed at different temperatures for a segment of the REMD trajectory. The rightmost panel shows the log-histogram of the occurrences of the weights at different temperatures.}
\label{fig:tripep_weights}
\end{figure*}
The results of the REMD simulation of the Phe-Gly-Phe tripeptide are portrayed in Fig.~\ref{fig:tripeptide}.
The central scatter-plot shows 2,000 atomic configurations, drawn at constant stride from all REMD target ensemble temperatures.
The configurations are classified according to the first two principal components ($x$ and $y$ axes), obtained from a principal component analysis (PCA) of their SOAP features, and temperature ($z$ axis).
Each configuration $A$ is coloured according to the weight $w(A) = \sigma_b^2/[\sigma_b^2 + \sigma^2(A)]$ of the ML correction applied to the baseline potential during the simulation (see Eq.~\eqref{eq:U_combined}).
Examples of configurations with very low ($0\leq w \leq 0.2$), modest ($0.3\leq w \leq 0.5$), and large weights ($0.6\leq w \leq 1$) are grouped at the top, bottom and left of the scatter plot, respectively.
The figure shows that at low temperature the simulation samples exclusively different conformations of the polypeptide chain, that are well-represented in the training set and that are therefore associated with low ML uncertainty and high values of $w(A)$.
At temperatures above $\approx 500$\,K, the polypeptide starts decomposing, releasing first \ce{CO2} and, at temperatures above $\approx 1000$\,K, \ce{NH3}, \ce{H2O}, as well as larger fragments. None of these highly energetic reactions are represented in the training set, which is reflected in the sharp decrease of the weight.
Upon entering the extrapolative regime, the NN correction to the baseline, $\bar{V}_\delta$, is suppressed by the vanishing weight $w$, thereby ensuring numerical stability of the simulation subject to the baseline potential.
A quantitative analysis of weight distributions is shown in Fig. \ref{fig:tripep_weights}. Higher temperatures are displayed in warmer colours. The left panels show the weights $w$ along the REMD trajectory. These values are collected in the rightmost histogram which displays, in semi-log scale, the distribution $p(w)$ of weights at different temperatures. We see that at intermediate $T$, an ``island'' at $w\approx 0.4$ -- or a peak in $p(w\approx 0.4)$ -- emerges, which corresponds to the tripeptide dissociation and the release of a \ce{CO2} molecule. At even larger temperatures the probability $p(w=0)$ grows, while the peak at $w\approx 0.4$ is levelled out by the increase in the number of low-$w$ snapshots and the $p(1)$ decreases by more than an order of magnitude due to the persistence of the extrapolative regime at $T \gg 1000$ K.
This simulation provides a compelling example of how a weighted baseline scheme allows exploring all parts of configuration space without incurring unphysical behaviour and instability due to extrapolations of the NNs -- which typically occur within the first 100\,ps of a similar REMD simulation using a non-weighted baseline correction.
Quite obviously, the configurations collected in the extrapolative regime do not reach the level of accuracy of the high-end electronic structure method, but only that afforded by the baseline potential. Nonetheless, simulations based on this scheme can be used whenever extrapolation occurs only over brief stretches of the trajectory, or when (as it is often the case) one is only interested in the low-temperature portion of a REMD simulation, with the high temperature replicas used only to accelerate sampling.
Furthermore, one can store configurations characterised by a large $\sigma(A)$ in order to add them to the training database, which simplifies greatly the implementation of online and offline active learning schemes.
\subsection{Pair distribution function}\label{sec:PDF}
\begin{figure*}
\includegraphics[width=2\columnwidth]{fig5.pdf}
\caption{Pair correlation function in water (left, middle panels) and phenol-solvated methanesulphonic acid (right panel). The committee value (blue solid line) and its uncertainty (shaded red area) as estimated from Eq.~\eqref{eq:sigma_V} are displayed.}\label{fig:water_gofr}
\end{figure*}
The radial distribution function represents a simple and insightful structural observable to test the method developed in Sec.~\ref{sec:theory} to estimate the uncertainty on thermodynamic averages.
Computationally, $g(r)$ is usually determined \textit{i)} by sampling a significant number of atomic configurations from a thermodynamic ensemble; \textit{ii)} by computing the minimum image separations $\mathbf{r}_i - \mathbf{r}_j$ of all the atomic pairs, for each sampled configuration; and \textit{iii)} by sorting these separations into a histogram $h$ whose bins extend in the interval $[r,r+\delta r]$. When the reweighting procedure is considered,
point \textit{i)} is performed by running a MD trajectory driven by the committee model alone, and the model-dependent phase-space sampling is accounted for by the weights, Eq.~\eqref{eq:weights}.
Notice that the calculation of the radial distribution function $g^{(i)}(r)$ of the $i$-th member of the ML committee depends on $i$ through the weights alone, i.e.~through the calibrated potential energy estimate for each member.
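Steps \textit{ii)} and \textit{iii)}, combined with the per-model reweighting, can be sketched as follows (illustrative names; the normalisation of $g(r)$ to the ideal-gas reference is omitted for brevity):

```python
import numpy as np

def per_model_gr(pair_dists, v_members, v_mean, beta, bins):
    """Reweighted pair-distance histograms, one per committee member.
    pair_dists: list (one entry per configuration) of arrays of pair distances.
    v_members: (M, n_conf) per-model potential energies; v_mean: committee mean.
    Returns the committee mean and spread of the reweighted histograms."""
    v_members = np.asarray(v_members, dtype=float)
    v_mean = np.asarray(v_mean, dtype=float)
    hists = []
    for v_i in v_members:
        logw = -beta * (v_i - v_mean)       # reweighting factors per configuration
        w = np.exp(logw - logw.max())       # stabilise the exponentials
        w /= w.sum()                        # normalise the weights
        h = np.zeros(len(bins) - 1)
        for d, wc in zip(pair_dists, w):
            h += wc * np.histogram(d, bins=bins)[0]
        hists.append(h)
    hists = np.asarray(hists)
    return hists.mean(axis=0), hists.std(axis=0, ddof=1)
```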
\subsubsection{Water}\label{sec:water_gofr}
A committee of $M=4$ NNP models was trained via the n2p2 code \cite{singraber2019parallel} over a dataset of 1593 64-molecule bulk liquid water structures whose total energies and full sets of interatomic-force components were computed at the revPBE0-D3 level with CP2K~\cite{matcloud18a}.
The atomic environments are described within a cutoff radius of $12.0~\text{a.u.}$ using the symmetry function sets for H atoms (27 functions) and O atoms (30 functions), as selected in Ref.~\onlinecite{mora+16pnas}. The hydrogen and oxygen atomic NNs consist of two hidden layers with 20 nodes each. We refer to Ref.~\onlinecite{chen+19pnas} for further details on the training set.
We run an $NVT$ MD trajectory, driven by the committee, at $T=300~\text{K}$ for $2~\text{ns}$ on a system of 64 water molecules inside an equilibrated cubic box of side $23.86~\text{\AA}$.
We obtain an unbiased estimate for the correction factor $\alpha = 2.1$, using the expression in Appendix~\ref{app:bias}. Note that omitting the correction for the estimator bias would lead to a substantial over-estimation of the correction factor, in this case $\alpha = 3.75$.
Figure \ref{fig:water_gofr} displays the hydrogen-hydrogen (left) and oxygen-oxygen (middle) pair distribution function $g(r)$. The ML uncertainty, computed as in Eq.~\eqref{eq:sigma_V}, is shown as a shaded area. The error on position and height of the first peak is minuscule, while a slightly larger uncertainty is predicted for the longer-range features of the O--O correlations. This analysis demonstrates, with a simple post-processing of a single trajectory, that the accuracy of the NNP is sufficient to describe quantitatively the $g(r)$ -- a useful verification of the reliability of the model.
\subsubsection{Methanesulphonic acid in phenol}\label{sec:ch3so2oh-gr}
As a second example, we consider the solvation of methanesulfonic acid (CH$_3$SO$_2$OH) in phenol (C$_6$H$_6$O), a system that was studied in Ref.~\citenum{ross+20jctc} because of its relevance to the synthesis of commodity chemicals such as hydroquinone and catechol, in which methanesulfonic acid acts as a catalyst for the reaction between \ce{H2O2} and phenol.
We use an ensemble of $M=5$ neural network (NN) machine learning potentials to simulate one acid molecule dissolved in 20 phenol molecules at $T=363$ K.
The technical details and the resulting potentials are identical to those presented in Ref.~\onlinecite{ross+20jctc}, that are available from Ref.~\citenum{matcloud20d}. Note that in the original publication the calibration factor was estimated to be $\alpha=5.8$. Using the unbiased estimator introduced here, Eq.~\eqref{eq:alpha-unbiased-theory}, yields a corrected value of $\alpha=4.1$.
An understanding of the solvation of \ce{CH3SO2OH} by phenol is a necessary preliminary step towards rationalizing the regio-selectivity of this acid in the catalytic hydroxylation of phenol to form catechol or hydroquinone.
Methanesulfonic acid acts both as a hydrogen bond acceptor through its sulfonil oxygen atoms, and as a donor through the methanesulfonic hydroxyl group.
The strength and population of hydrogen bonds can be inferred by a quantitative analysis of the pair correlation function $g(r)$ between the protonated O in the hydroxyl group of methanesulfonic acid (\ce{CH3SO2}\textbf{O}H) and the O atom in phenol (\ce{C6H5}\textbf{O}H).
We compute the pair correlation function from 16 independent MD simulation runs for a total of about 1.6~ns.
A thorough discussion of the MD integration set up and the related technical details can be found in Ref.~\onlinecite{ross+20jctc}.
The uncertainty in the $g(r)$, obtained by a CEA reweighing of the committee members as in Eq.~\eqref{eq:sigma_V} and shown as a shaded area in the right panel of Fig.~\ref{fig:water_gofr}, is considerably larger than that observed for the case of water.
This can be ascribed in part to the slightly larger test error computed for the ML potential (which is unsurprising given the considerably more complex composition), but also in part to poorer statistics due to the presence of just a single acid molecule in the simulation cell. The statistical uncertainty on the committee $g(r)$ obtained via a block analysis is indeed comparable to the one estimated by the committee reweighing.
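The block analysis mentioned above can be sketched as follows (a standard blocking estimate of the error on a correlated time series; the function name and block count are illustrative):

```python
import numpy as np

def block_error(series, n_blocks=16):
    """Standard error of the mean from non-overlapping block averages;
    blocks longer than the correlation time absorb the time correlation."""
    series = np.asarray(series, dtype=float)
    usable = (len(series) // n_blocks) * n_blocks
    blocks = series[:usable].reshape(n_blocks, -1).mean(axis=1)
    return blocks.std(ddof=1) / np.sqrt(n_blocks)
```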
Similar to what we observe for the O-O $g(r)$ in water, the uncertainty is not constant, but is largest at the minimum between the first and second coordination shell. The fact that the first coordination shell is affected by a small error is reassuring, suggesting that the geometry and population of hydrogen-bonded configuration is predicted reliably.
Overall, this example demonstrates how the estimates we introduce \rev{for the effects of the ML error on sampling} make it possible to assess the reliability of structural observables, particularly in difficult cases in which the model exhibits a substantial error, and so it is important to determine precisely whether such error does or does not (as in this case) affect the qualitative interpretation of simulations results. %
\subsection{Free energy landscapes}\label{sec:FES}
Combining ML potentials and enhanced sampling techniques makes it possible to explore computationally free-energy landscapes that involve activated events, such as chemical reactions and phase transitions. In this Section, we show how on-the-fly reweighing can be straightforwardly applied to the calculation of free-energy differences and enhanced sampling simulations.
\subsubsection{Melting point of water}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig6.pdf}
\caption{Chemical potential difference between hexagonal ice and liquid water as a function of temperature. Upper panel: the fit obtained for the trajectory driven by the committee mean.
Lower panel: individual fits for each committee model. }
%
\label{fig:melting_point}
\end{figure}
We begin by demonstrating the calculation of the free energy difference between hexagonal ice and liquid water, $\Delta\mu = \mu^{Ih} - \mu^{L}$, at 8 different temperatures,
using the interface pinning (IP) technique.\cite{Pedersen_Hummel_Kresse_Kahl_Dellago_2013}
The basic idea of IP involves performing a biased simulation in which the system is forced to retain a solid-liquid interface whose position fluctuates around an average value. This is practically achieved by including an additional pinning potential
\begin{equation}
W(A) = \frac{\kappa}{2}[Q(A) - a]^2 \label{eq:U_pinning}
\end{equation}
where $Q(A)$ is an order parameter which identifies the phase of the system (local $Q_6$, defined as in Refs.~\onlinecite{lechner2008accurate,stei+83prb}), $\kappa$ is a spring constant dictating the amplitude of interface fluctuations, and $a$ is the reference value for the collective variable (usually taken as the value of $Q$ at which half the system is in the solid phase).
The chemical potential difference at the simulation temperature $T$ can then be estimated by
\begin{equation}
\Delta\mu (T) = -\kappa (\langle Q\rangle' - a) \label{eq:Deltamu_pin}
\end{equation}
where $\langle \cdot \rangle'$ indicates $NP_z\kappa T$-ensemble averages with the additional term $W$ defined in Eq. \eqref{eq:U_pinning}.
In the present work, simulations are driven by the same committee of $M=4$ NNP models discussed in Sec.~\ref{sec:water_gofr}, using PLUMED\cite{PLUMED} to constrain the order parameter to the target value $a= 165$. A total of 336 water molecules are simulated in a supercell with an elongated side to allow probing the coexistence of the two phases, separated by the planar interface; in particular we employed an orthorhombic supercell of size $15.93 \times 13.79 \times 52.47~\text{\AA}^3$.
We compute the value of $\Delta\mu (T)$ at different temperatures, and perform a linear fit from which we determine the melting temperature $T_\text{m}$ as the intercept with the abscissa, $\Delta\mu(T_\text{m}) = 0$. We also obtain the entropy of melting per molecule, $\Delta s_\text{m} = \left.\frac{\partial \Delta \mu}{\partial T}\right|_{T_\text{m}}$, as the slope of the fit, and the latent heat of melting per molecule $\Delta h_\text{m} = T_\text{m} \Delta s_\text{m} $.
As shown in the top panel of Fig.~\ref{fig:melting_point}, even though the individual points are somewhat scattered due to statistical errors, it is possible to determine a clear linear trend resulting in $T_\text{m}=290$ K, $\Delta s_\text{m} = 0.16$ meV/K/molecule, and $\Delta h_\text{m} = 46$ meV/molecule.
It should be noted that these values deviate from those that have been computed with a similarly trained potential\cite{chen+19pnas} due to the presence of substantial finite-size effects in the present simulations, which are only meant to demonstrate the application of this uncertainty quantification approach, and not to provide size and sampling-converged values of the averages.
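The extraction of $T_\text{m}$, $\Delta s_\text{m}$ and $\Delta h_\text{m}$ from the fitted $\Delta\mu(T)$ line can be sketched as follows (the input points are toy values lying exactly on a line, not the simulation data; names are illustrative):

```python
import numpy as np

def melting_from_dmu(T, dmu):
    """Linear fit of dmu(T); following the paper's convention the slope is
    the entropy of melting, the root is Tm, and dh_m = Tm * ds_m."""
    slope, intercept = np.polyfit(T, dmu, 1)
    Tm = -intercept / slope          # Delta mu(Tm) = 0
    ds = slope                       # entropy of melting per molecule
    dh = Tm * ds                     # latent heat of melting per molecule
    return Tm, ds, dh

# toy data on an exact line with Tm = 290 K and slope 0.16 meV/K/molecule
T = np.array([260.0, 270.0, 280.0, 300.0, 310.0, 320.0])
Tm, ds, dh = melting_from_dmu(T, 0.16 * (T - 290.0))
```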
In order to estimate the uncertainty due to the MLPs, we combine Eq. \eqref{eq:Deltamu_pin} with the CEA, to compute the model-dependent chemical potential differences $\Delta\mu^{(i)}$ using
\begin{equation}
\langle Q\rangle'_{V^{(i)}} = \langle Q\rangle_{\bar{V}}' - \beta [ \langle Q (V^{(i)} - \bar{V}) \rangle'_{\bar{V}} - \langle Q \rangle'_{\bar{V}} \langle V^{(i)} - \bar{V} \rangle'_{\bar{V}}].
\end{equation}
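In code, the CEA shift above amounts to a covariance correction to the committee-mean average; a minimal sketch (illustrative names):

```python
import numpy as np

def cea_average(obs, dv, beta):
    """First-order cumulant (CEA) estimate of <obs> under V_bar + dv,
    from samples generated with V_bar:
    <obs>_i = <obs> - beta * [<obs*dv> - <obs><dv>]."""
    obs = np.asarray(obs, dtype=float)
    dv = np.asarray(dv, dtype=float)
    return obs.mean() - beta * (np.mean(obs * dv) - obs.mean() * dv.mean())
```

A constant energy offset ($dv$ independent of the configuration) leaves the average unchanged, as it should, since the covariance then vanishes.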
In line with the uncertainty propagation framework developed in Sec. \ref{sec:theory}, we compute four different fits, one for each model, and from them four different melting temperatures $T_\text{m}^{(i)}$, indicated by the coloured crosses in the lower panel of Fig.~\ref{fig:melting_point}. By taking the average and standard deviation of the model-dependent $T_\text{m}^{(i)}$, as well as the associated $\Delta s^{(i)}_\text{m}$ and $\Delta h^{(i)}_\text{m}$, we can determine the mean values and the ML uncertainty intervals, namely $\overline{T_\text{m}} =290 \pm 5$ K, $\overline{\Delta s_\text{m}} = 0.16 \pm 0.01$ meV/K/molecule, and $\overline{\Delta h_\text{m}} = 46 \pm 3$ meV/molecule.
In view of the linear nature of the CEA, the values of the molar entropy and latent heat of melting computed from the mean of the committee estimates match exactly those computed directly from the committee estimates. In principle, the two estimates $\overline{T_\text{m}}$ and $T_\text{m}$ differ, even if in this case they are equal within the confidence interval. Whenever a non-linear procedure is involved in the calculation of the property of interest, results may change based on the way the committee estimates are combined. Comparing different approaches is then a useful check to assess the robustness of the error estimation.
\newcommand{\text{s}^{\ce{O}}}{\text{s}^{\ce{O}}}
\subsubsection{Deprotonation of methanesulfonic acid}
We use the committee model discussed in Sec.~\ref{sec:ch3so2oh-gr} and the same metadynamics protocol described in Ref.~\onlinecite{ross+20jctc} to compute the free energy profile for the deprotonation of \ce{CH3SO2OH} in phenol, a key quantity to rationalize the activity of methanesulfonic acid in catalyzing the hydroxylation of phenol.
We define the free energy as a function of the coordination, $\text{s}^{\ce{O}}$, of the oxygen atoms in the acid with respect to the hydrogen atoms in the system.
The free energy at $\text{s}^{\ce{O}}$ is by definition $kT$ times the negative of the logarithm of the population fraction $p(\text{s}^{\ce{O}})$ of the configurations with a given $\text{s}^{\ce{O}}$.
To obtain an unbiased estimate of $p(\text{s}^{\ce{O}})$ from a trajectory with a time-dependent bias $\tilde{v}(t)$, we weight the configurations by $u(A(t)) = e^{\beta(\tilde{v}(t)-c(t))}$, where the time-dependent offset $c(t)$ is computed using the Iterative Trajectory Reweighting (ITRE) algorithm.\cite{Giberti_Cheng_Tribello_Ceriotti_2020}
The population fraction for the committee, $\bar{p}(\text{s}^{\ce{O}})$, is computed as the ITRE-reweighted normalized histogram of the occurrences of configurations $A$ with a given $\text{s}^{\ce{O}}(A)$:
\begin{equation}
\bar{p}(s) = \langle \delta(\text{s}^{\ce{O}}(A)-s) u(A) \rangle_{\bar{V}}
\end{equation}
where the average is over the metadynamics trajectory, and $\delta(\text{s}^{\ce{O}}(A)-s)$ selects structures with a prescribed value of the coordination number.
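The reweighted population above can be sketched as a weighted histogram (the weights $u(A)$ are assumed to be precomputed, e.g. by ITRE; names are illustrative):

```python
import numpy as np

def weighted_population(s_values, u_weights, bins):
    """Bias-corrected population density p(s): a histogram of the collective
    variable s, with each frame weighted by u = exp(beta*(v_bias - c))."""
    u_weights = np.asarray(u_weights, dtype=float)
    h, edges = np.histogram(s_values, bins=bins, weights=u_weights)
    p = h / (u_weights.sum() * np.diff(edges))  # normalise to unit integral
    return p, edges
```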
In turn, the model-dependent population can be readily obtained, through the CEA, as
\begin{equation}
\begin{split}
p^{(i)}(s) &= \bar{p}(s) - \Delta p^{(i)}(s)\\
\Delta p^{(i)}(s) &= \beta \langle
~ \delta(\text{s}^{\ce{O}}(A)-s) ~ u(A) ~ ( V^{(i)}(A) - \bar{V}(A) ) ~ \rangle_{\bar{V}}\\ & -\beta \langle
~ \delta(\text{s}^{\ce{O}}(A)-s) ~ u(A) ~ \rangle_{\bar{V}} \langle V^{(i)}(A) - \bar{V}(A) ~ \rangle_{\bar{V}}
\end{split}
\end{equation}
Finally, the uncertainty in the population, $\Delta p$, is obtained as the standard deviation of $\Delta p^{(i)}$ over the $M$ models, as in Eq.~\eqref{eq:sigma_V}.
The symmetric uncertainty on the population results in a confidence range on the free energy which is \textit{asymmetric} about $-kT\log(\bar{p})$, spanning values from $-kT\log( \bar{p} + \Delta{p} ) $ to $-kT\log(\bar{p} - \Delta{p}) $.
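Propagating the symmetric population uncertainty into this asymmetric free-energy band can be sketched as follows (illustrative names; the clip guards against $\bar{p} - \Delta p \le 0$):

```python
import numpy as np

def free_energy_band(p, dp, kT):
    """F = -kT ln(p), with an asymmetric confidence band from p +/- dp."""
    p = np.asarray(p, dtype=float)
    dp = np.asarray(dp, dtype=float)
    f = -kT * np.log(p)
    f_lo = -kT * np.log(p + dp)                          # larger p -> lower F
    f_hi = -kT * np.log(np.clip(p - dp, 1e-300, None))   # avoid log of <= 0
    return f, f_lo, f_hi
```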
As shown in Fig. \ref{fig:methanesulfonic_phenol_pKa}, the uncertainty between the models is very small around the minimum corresponding to the neutral state of the acid, but grows substantially in the deprotonated state -- which is consistent with the qualitative observation made in Ref.~\citenum{ross+20jctc} of the increase in the uncertainty on the NNP predictions for dissociated configurations, that are less represented in the training set. %
Interpreting the configurations with $\text{s}^{\ce{O}}\approx 0.5$ as the deprotonated state, the free energy cost for the dissociation of \ce{CH3SO2OH} in phenol can be estimated to be 20{\raisebox{0.5ex}{\tiny$^{+5}_{-2}$}} kJ/mol. Even though in this specific instance other errors, e.g. those due to finite-size effects and reference energetics, are likely to be comparable with that obtained from the spread of the committee members, the substantial uncertainty computed by on-the-fly reweighting underscores the importance of error estimation when using machine learning models.
\begin{figure}
\includegraphics[width=0.95\linewidth]{fig7.pdf}
\caption{Projection of the free energy along the proton transfer reaction $\text{s}^{\ce{O}}$ for a system of one methanesulfonic acid molecule dissolved in 20 phenol molecules. $\text{s}^{\ce{O}} \approx 1$ indicates the neutral state, while $\text{s}^{\ce{O}} < 1$ a deprotonated state of the acid. The shaded area represents the ML uncertainty obtained from Eq.~\eqref{eq:fullsigma2_CEA}.}\label{fig:methanesulfonic_phenol_pKa}
\end{figure}
\subsection{Finite-temperature density of states}\label{sec:FT-DOS}
As a last example, that we use to highlight the interplay between sampling and model uncertainties, we consider the finite-temperature density of states (DOS) of gallium in its metallic liquid phase.
The sampling of configurations is performed through MD simulations driven by a committee of $M=4$ NNPs, based on the potential introduced in Ref.~\citenum{zama+20am}, that is available from Ref.~\citenum{matcloud20a}.
We consider a system of 384 Ga atoms in the NVT ensemble, sampled at a temperature $T=1800$ K using a combination of a generalized Langevin\cite{ceri+10jctc} and stochastic velocity rescaling thermostats,\cite{bussi2007canonical} as implemented in i-PI. We employ a timestep of 4 fs to integrate the equations of motion for a total of 400 ps.
The DOS model is based on the framework developed in Ref.~\citenum{chiheb2020}, which we briefly summarise. For a given configuration $A$, the DOS is defined as
\begin{equation}
\text{DOS}(E, A) = \frac{2}{N_b N_\mathbf{k}} \sum_n \sum_{\mathbf{k}} \delta(E- E_n(\mathbf{k}, A)) ,
\end{equation}
where $E_n(\mathbf{k})$ is the energy for the (doubly-degenerate) $n$-th band and wavevector $\mathbf{k}$. The DOS is normalized to the number of electronic states, $N_b N_\mathbf{k}$, where $N_b$ and $N_\mathbf{k}$ are the number of bands and $\mathbf{k}$-points considered, respectively.
We adopt a ML approach based on a local-environments decomposition to predict $\text{DOS}(E,A)$, and train a committee of observable models (OM, see Sec. \ref{sec:theory}), in order to estimate a ML uncertainty. The predicted DOS of a given structure $A$, and the $j$-th model reads
\begin{equation}
\text{DOS}^{(j)}(E, A) = \sum_{k\in A} \text{LDOS}^{(j)}(E, A_k) , \quad j=1, \ldots, M'.
\end{equation}
The training set for each OM is represented by \rev{150} random structures extracted from a total of \rev{274 Ga training configurations, including mostly liquid structures at various temperatures and pressures and a few solid ones}. For this training set, we compute reference DFT calculations for the $\text{DOS}_\text{ref}(E,A)$ as the convolution of the Kohn-Sham eigenvalues $E_{\text{ref},n}(\mathbf{k})$ with a Gaussian smearing of width 0.5 eV.\cite{chiheb2020}
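The Gaussian-broadened DOS of a single configuration can be sketched as follows (the eigenvalues stand for the flattened set over bands and $\mathbf{k}$-points, with the factor of two for spin degeneracy; names are illustrative):

```python
import numpy as np

def smeared_dos(energies, eigenvalues, sigma=0.5, degeneracy=2.0):
    """DOS(E) = (degeneracy / (Nb*Nk)) * sum_n Gaussian(E - E_n; sigma),
    i.e. the delta-function definition with a finite smearing width."""
    E = np.asarray(energies, dtype=float)[:, None]
    eps = np.asarray(eigenvalues, dtype=float)[None, :]
    g = np.exp(-0.5 * ((E - eps) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return degeneracy * g.sum(axis=1) / eps.shape[1]
```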
The reference DFT calculations are performed at the level of the PBE functional~\cite{perd+96prl} via the \textsc{Quantum ESPRESSO} code,\cite{quantum-espresso-1,quantum-espresso-2} with a Monkhorst-Pack $k$-point grid that ensures a density of at least 6.5 $\mathbf{k}$-points \AA.
In order to compare DOS belonging to the different structures of the training set, we align the DOS at the Fermi level. The latter, $E_F(A, T)$, is defined as the solution of the charge-neutrality constraint $N_e = \sum_E f(E,E_F,T) \text{DOS}(E, A) $, where $f(E,E_F,T)$ is the Fermi-Dirac distribution and $N_e=2$ due to spin degeneracy.
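The charge-neutrality condition defining $E_F$ can be solved numerically, e.g. by bisection, since the integrated occupation grows monotonically with $E_F$ (a sketch with illustrative names, assuming a uniform energy grid):

```python
import numpy as np

def fermi_dirac(E, mu, kT):
    # clip the argument to avoid overflow in exp at low temperature
    return 1.0 / (1.0 + np.exp(np.clip((E - mu) / kT, -500.0, 500.0)))

def fermi_level(E_grid, dos, n_electrons, kT, lo=-20.0, hi=20.0, tol=1e-10):
    """Solve N_e = int f(E, E_F, T) DOS(E) dE for E_F by bisection."""
    dE = E_grid[1] - E_grid[0]
    count = lambda mu: np.sum(fermi_dirac(E_grid, mu, kT) * dos) * dE
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if count(mid) < n_electrons:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```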
The featurization is done using a SOAP kernel with $n=12,~l=9,~g_s=0.5,~r_c=6~\text{\AA},~c=1,~m=5,~r_0=6.0$ (the parameters follow the notation in Ref.~\citenum{chiheb2020}). Given the small training-set size, and the fact that committee predictions for sparse kernel models add negligible overhead on top of a single prediction, we use a large committee with $M'=64$ members.
According to Eqs.~\eqref{eq:fullsigma2_CEA}, \eqref{eq:sigma_a}, and \eqref{eq:sigma_V}, the total ML uncertainty $\sigma$ on $\langle \text{DOS}(E) \rangle_T${} derives both from the uncertainty on individual DOS predictions, $\sigma_a$, and from the uncertainty on the phase-space sampling associated with the committee of MLPs driving the dynamics, $\sigma_{aV}$.
The results of these calculations are displayed in Fig.~\ref{fig:DOS}: in the upper panel the average $\langle \text{DOS}(E) \rangle_T${} is reported together with its total ML uncertainty, $\sigma$, as computed from Eq.~\eqref{eq:fullsigma2_CEA}.
In the lower panel we show the individual contributions of the uncertainty on the property, $\sigma_a$, and that associated with sampling, $\sigma_{aV}$, to the total $\sigma$\rev{, together with the upper bound estimate of the uncertainty}.
The absolute error on the DOS is small, and is dominated by $\sigma_a$.
The contribution $\sigma_{aV}$ associated with sampling is sizeable, and in some energy range it dominates the uncertainty. The coupling between the potential energy and the observable property cannot be neglected. %
\rev{Notice that the upper bound given by Eq.~\eqref{eq:sigma_gen} (shaded red area) largely overestimates the uncertainty based on the committee model, where we have access to the single-configuration deviations with respect to the best (i.e. the committee) values for $\mathrm{DOS}(E,A)$ and $V(A)$, and not only to estimates of their absolute values.}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig8.pdf}
\caption{Machine-learned average density of states, $\langle \text{DOS}(E) \rangle_T$, computed for a simulation of liquid gallium at $T=1800$ K. The zero is set at the Fermi energy, to which the single-configuration DOS entering the average were aligned. The average $\langle \text{DOS}(E) \rangle_T$ (solid line) is reported together with its statistical uncertainty (shaded \rev{gray} area). \rev{The red shaded area represents the upper bound of the uncertainty, computed as in Eq.~\eqref{eq:sigma_gen}.}}
\label{fig:DOS}
\end{figure}
\section{Conclusions}
\rev{This work demonstrates how to use the uncertainty estimation of the machine-learning prediction of individual atomistic structures in the context of molecular dynamics simulations.
We focus in particular on a recently-introduced calibrated committee model to account for the uncertainty stemming from the finite number of reference structures employed in the training of the model, for which the uncertainty propagation procedure is particularly natural, but our approaches can be readily adapted to any error estimation scheme.}
First we develop an uncertainty-weighted baselined ML potential scheme, that achieves robust sampling by using the error estimator to
interpolate smoothly between a reference baseline potential (e.g., a semiempirical electronic structure method, such as DFTB) %
and a ML correction that promotes it to a higher level of theory.
Whenever the ML correction enters an extrapolative regime, the potential reverts to the less accurate but more robust reference, guaranteeing stable trajectories, and simplifying greatly the practical implementation of ML potentials for all cases in which a reasonable baseline potential is available.
This scheme has an obvious, straightforward application to online/offline active learning.
Even though we demonstrate this strategy for the overall potential, a local implementation in terms of individual atomic contributions is possible even when the baseline does not offer a natural atom-centered decomposition.
This would be particularly beneficial whenever the ML error is not equally distributed among the atoms of the system, affecting instead a rather small number of atoms, e.g., those involved into a chemical reaction.
We also show how to obtain a quantitative estimate of the machine learning uncertainty of static thermodynamic averages of physical observables, by taking into account both the ML uncertainty on single-configuration calculations, and the distortion of the sampling probability due to the Boltzmann factor entering canonical averages.
We circumvent the poor computational efficiency of statistical reweighing using a cumulant expansion approximation (CEA), a linearized version of the reweighed average which proves to be statistically more stable.\cite{ceri+12prsa}
This ML uncertainty propagation scheme proves to be applicable to several physical observables and condensed-phase systems, ranging from the pair distribution function and free-energy calculations on liquid water and methanesulfonic acid in phenol, to the finite-temperature electronic density of states of liquid gallium.
Depending on the application and the target property, the uncertainty that can be ascribed to the ML-driven sampling, or to the ML property models, can be substantial.
A scheme such as the one we propose here, which achieves uncertainty quantification at an affordable computational cost, should be applied across the board to all simulations based on data-driven schemes.
\section{Data availability}
Data supporting the findings in this paper are available from public repositories as referenced, or upon reasonable request to the authors. \rev{An open-source implementation of the methods discussed in the present paper is available in i-PI\cite{ipicode}.}
\section{Acknowledgements}
We thank F\'elix Musil for insightful discussions and Chiheb Ben Mahmoud for technical assistance on machine learning the electronic density of states. Training data for the oligopeptides model was kindly provided by Alberto Fabrizio, Raimon Fabregat and Clemence Corminboeuf.
GI, MC, VK and EAE acknowledge support by the NCCR MARVEL, funded by the Swiss National Science Foundation (SNSF).
FG and MC acknowledge funding by the Swiss National Science Foundation (Project No. 200021-182057). YZ acknowledges support for a research visit at EPFL by the Graduate School of Xiamen University.
KR was supported by an industrial grant with Solvay.
\heading{Introduction}%
Majorana fermions are intriguing objects because they are their own
antiparticles.
In condensed matter physics, Majorana fermions arise not as
elementary particles, but rather as superpositions of electrons
and holes forming the zero mode states in topological
superconducting states.
Majorana fermions were first proposed to exist in vortex cores and
on boundaries of the $p$-wave Cooper pairing systems \cite{Volovik1999,Read2000,Kitaev2001}.
More recently, they were also predicted
in conventional superconductors in the presence of strong spin-orbit
(SO) coupling and the Zeeman field
\cite{Fu2008,Lutchyn2010,Sau2010,Oreg2010,Alicea2010,Mao2012}.
In cold atom physics, SO coupling has been realized by using atom-laser
coupling \cite{Lin2009,Lin2011,Wang2012,Cheuk2012,Goldman2013}.
This progress offers an opportunity to realize and manipulate Majorana
fermions in a highly controllable manner, which has attracted
a great deal of both theoretical and experimental attention
\cite{Zhang2008,Sato2009,Zhu2011,Jiang2011,Gong2012,Seo2012,
Wei2012,Liu2012,Liu2012b,Romito2012,Kraus2013,Li2013,Mizushima2013,Qu2013}.
Experimental signatures of Majorana zero modes have been reported as
zero bias peaks in the tunneling spectroscopy of a single quantum
wire with strong SO coupling which is either coupled with
an $s$-wave superconductor through the proximity effect
\cite{Mourik2012,Deng2012,Das2012,Rokhinson2012,Finck2013,Lee2012,
Lee2014}, or is superconducting by itself \cite{Rodrigo2012,*Rodrigo2013}.
A further study of an array of quantum wires is natural
\cite{Li2013,Mizushima2013,Seroussi2014,Lutchyn2011,Stanescu2011,
Tewari2012,Diez2012,Qu2013,Kells2013},
in particular for the purpose of studying
interaction effects among edge Majorana zero
modes \cite{Fidkowski2010,Ryu2012,Qi2013,Li2013}.
Topological states in an array of parallel wires in
magnetic fields in the fractional quantum Hall regime have been
studied recently \cite{klinovaja2013,klinovaja2013a}.
Without imposing self-consistency, flat bands of Majorana zero edge
modes have been found for uniform pairing
as well as for Fulde-Ferrell-Larkin-Ovchinnikov
pairing \cite{Lutchyn2011,Stanescu2011,Tewari2012,Diez2012,Qu2013},
because under time-reversal (TR) symmetry these Majorana zero modes
do not couple.
However, the band flatness of the edge Majorana zero modes
is unstable due to interaction effects.
Li and two of the authors proposed the mechanism of spontaneous
TR symmetry breaking for the gap opening in the edge Majorana flat
bands\cite{Li2013}.
Even in the simplest case of spinless fermions
without any other interaction channels, the coupling between Majorana
zero modes and the pairing phase spontaneously generates staggered
circulating currents near the edge such that Majorana modes can
couple to each other to open the gap due to the breaking of TR symmetry.
Similar results have also been obtained recently in Refs.
[\onlinecite{Mizushima2013,Potter2013}].
The mechanism of gap opening based on spontaneous TR symmetry
breaking also occurs in the helical edge modes of
the 2D topological insulators under strong repulsive interactions,
leading to edge magnetism
\cite{Wu2006,Xu2006}.
\heading{Main results}
In this article, we investigate a coupled array of $s$-wave superconducting
chains with intra-chain SO coupling and external Zeeman field.
We consider both the proximity-induced superconductivity and the intrinsic one.
For the proximity-induced case, the array is placed on top of a bulk
superconductor, and phase coherence induces a nearly
uniform pairing distribution in the quantum chains, $\Delta_\vec{r} = \Delta$.
The bulk band structure exhibits several topologically
distinct gapped phases separated by a gapless phase.
In the gapless phase, edge Majorana zero modes interpolate between
nodes in the bulk energy spectrum.
In the topological gapped phase, they extend into a flat band
across the entire edge Brillouin zone.
On the other hand, if either the phase coherence of the bulk
superconductor is weak, or the superconductivity is intrinsic, such as
in the case of Pb nanowires \cite{Rodrigo2012,*Rodrigo2013}
or cold atom systems near Feshbach resonance \cite{Jiang2011},
then $\Delta_\vec{r}$ has to be solved self-consistently.
We find that when the bulk is in the topological gapped phase,
the phase distribution of pairing order parameters is inhomogenous
along the edge exhibiting TR symmetry breaking.
It induces edge currents and gaps out the edge Majorana zero modes except
when the chain number is odd, in which case
one Majorana zero mode survives.
If the bulk is in the gapless phase, TR breaking is generally, but not
always, observed, because Majorana modes associated with opposite winding
numbers can coexist on the same edge and can be coupled by TR-invariant
perturbations.
\begin{figure}
\includegraphics[width=\linewidth,height=0.7\linewidth]{bulkphasediagram}
\includegraphics[width=\linewidth,height=0.7\linewidth]{winding}
\caption{
($a$) Bulk phase diagram of the 2D Hamiltonian Eqs. \ref{eq:ham_band}
and \ref{eq:ham_ex} in the $\mu$-$B$ plane with $B>0$;
the diagram for $B<0$ is mirror symmetric with respect to the $B=0$ axis.
The parameter values are $t=1$, $t_\perp=0.5$, $\lambda=2$, $\Delta=0.5$.
The white solid lines enclose the gapless phase and separate the rest
into two topologically trivial gapped phases and two non-trivial
phases, respectively.
Inside the gapless phase, states in the diamond enclosed by the white
dashed lines exhibit edge modes associated with opposite winding numbers,
and those outside the diamond only exhibit edge modes associated
with same winding number.
Color scale encodes the momentum averaged winding number $r$ defined
in Eq. \ref{eq:topo_avg}.
The gapless phase shrinks as $t_\perp$ decreases,
and it is compressed into the black dashed line at $t_\perp=0$
(the single chain limit).
Points I $\sim$ IV are used in Fig. \ref{fig:ek_noscf}.
($b$) $W_{k_y}$ vs. $k_y$ and $\mu$ along
the lines $L_1\sim L_5$ in ($a$), respectively.
The white solid and dashed boundaries of the regions with
$W_{k_y}=\pm1$ indicate gap-closing points
located at $(0,k_y)$ and $(\pi,k_y)$, respectively.
\label{fig:bulkphasediagram}}
\end{figure}
\heading{Model of quantum wire array}%
Consider an array of SO coupled chains with the proximity effect
induced $s$-wave pairing along the $x$-direction, which are juxtaposed
along the $y$ direction.
The band Hamiltonian is
\begin{eqnarray}
H_0&=&-\sum_{\vec{r} \sigma} t \left(c_{\vec{r} \sigma}^\dag c_{\vec{r} +\hat{x},\sigma}
+ h.c.\right) - \mu c_{\vec{r} \sigma}^\dag c_{\vec{r} \sigma} \nonumber \\
&-&\sum_{\vec{r} } i\lambda \left(c_{\vec{r} \uparrow}^\dag c_{\vec{r} +\hat{x},\uparrow}
- c_{\vec{r} \downarrow}^\dag c_{\vec{r} +\hat{x},\downarrow}\right) + h.c.
\nonumber \\
&-&\sum_{\vec{r} \sigma} t_\perp \left( c_{\vec{r} \sigma}^\dag c_{\vec{r} +\hat{y},\sigma}
+ h.c. \right),
\label{eq:ham_band}
\end{eqnarray}
where $\vec r$ is the lattice site index; $\sigma=\uparrow\,,\,\downarrow$
labels two spin states; $t$ and $t_{\perp}$ are intra- and
inter-chain nearest neighbor hoppings, respectively,
and $\mu$ is the chemical potential.
$\lambda$ here is the SO coupling, which we choose to lie only in the $x$-direction.
This uni-directional SO coupling is a natural
setup in cold atom experiments using atom-laser interaction.\cite{Jiang2011,Qu2013}
The external field part of the Hamiltonian is
\begin{eqnarray}
H_{ex}=\sum_{\vec{r}}
\Delta_\vec{r}( c_{\vec{r} \uparrow}^\dag c_{\vec{r} \downarrow}^\dag +
h.c.)-B (c_{\vec{r} \uparrow}^\dag c_{\vec{r} \downarrow}
+ h.c.).
\label{eq:ham_ex}
\end{eqnarray}
The first term accounts for superconducting pairing,
where $\Delta_{\vec r}$ is the $s$-wave pairing on site $\vec{r}$, and
can be induced either through proximity effect or intrinsically. For
the proximity induced superconductivity, we take $\Delta_\vec{r}$ to be
spatially uniform, which is a commonly used approximation. For
intrinsic superconductivity, $\Delta_\vec{r}$ will be solved
self-consistently. The second term arises from an external Zeeman
field $B$, which can also be simulated using
atom-laser coupling\cite{Jiang2011}.
\heading{Uniform pairing}%
Let us first consider a uniform pairing $\Delta_\vec{r}=\Delta$
which can be chosen as real without loss of generality.
Under periodic boundary conditions in both $x$ and $y$-directions,
the Hamiltonian Eqs.~\ref{eq:ham_band} and \ref{eq:ham_ex}
can be written in momentum space,
\begin{eqnarray}
H=H_{band}+H_{ex}=\sum_{\vec{k}}\psi_\vec{k}^\dag h_\vec{k} \psi_\vec{k},
\end{eqnarray}
where
$\psi_\vec{k}=[c_{\vec{k}\uparrow},c_{\vec{k}\downarrow},c_{-\vec{k}\uparrow}^\dag,
c_{-\vec{k}\downarrow}^\dag]^t$,
and
\begin{eqnarray}
h_\vec{k}=T_\vec{k}\tau_3+\Lambda_\vec{k}\sigma_3-B\sigma_1\tau_3+\Delta\sigma_2\tau_2\ .
\label{eq:hk}
\end{eqnarray}
The two sets of Pauli matrices $\sigma_i$ and $\tau_i$ ($i=1,2,3$)
act in the spin and particle-hole spaces, respectively.
$T_\vec{k}$ and $\Lambda_\vec{k}$ are given by
\begin{eqnarray}
T_\vec{k}=-2t\cos k_x-2t_\perp\cos k_y-\mu ,
\end{eqnarray}
and
\begin{eqnarray}
\Lambda_\vec{k}=2\lambda\sin k_x .
\end{eqnarray}
The energy spectrum of Eq. \ref{eq:hk} is
\begin{eqnarray}
E_\vec{k}^2 = T_\vec{k}^2+\Lambda_\vec{k}^2+B^2+\Delta^2
\pm 2\sqrt{T_\vec{k}^2\Lambda_\vec{k}^2+T_\vec{k}^2B^2+B^2\Delta^2}. \nonumber \\
\end{eqnarray}
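As a quick numerical cross-check of this closed form, one can diagonalize the $4\times 4$ matrix of Eq.~\ref{eq:hk} directly. The sketch below is illustrative (the specific $(k_x,k_y)$ point and the convention of putting the particle-hole index on the outer Kronecker factor are our choices, not taken from the paper):

```python
import numpy as np

# Pauli matrices; the first kron factor below acts in particle-hole (tau)
# space, the second in spin (sigma) space.
s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1.0, -1.0])

def h_k(kx, ky, t=1.0, tp=0.5, lam=2.0, mu=-2.0, B=1.25, Delta=0.5):
    """h_k = T_k tau3 + Lambda_k sigma3 - B sigma1 tau3 + Delta sigma2 tau2."""
    T = -2*t*np.cos(kx) - 2*tp*np.cos(ky) - mu
    Lam = 2*lam*np.sin(kx)
    return (T*np.kron(s3, s0) + Lam*np.kron(s0, s3)
            - B*np.kron(s3, s1) + Delta*np.kron(s2, s2))

def spectrum_closed_form(kx, ky, t=1.0, tp=0.5, lam=2.0, mu=-2.0,
                         B=1.25, Delta=0.5):
    """Eigenvalues from E_k^2 = T^2 + Lam^2 + B^2 + Delta^2 +/- 2 sqrt(...)."""
    T = -2*t*np.cos(kx) - 2*tp*np.cos(ky) - mu
    Lam = 2*lam*np.sin(kx)
    root = np.sqrt(T**2*Lam**2 + T**2*B**2 + B**2*Delta**2)
    base = T**2 + Lam**2 + B**2 + Delta**2
    e_pos = np.sqrt(np.array([base - 2*root, base + 2*root]))
    return np.sort(np.concatenate([-e_pos, e_pos]))

ev = np.linalg.eigvalsh(h_k(0.3, 1.1))   # ascending order
assert np.allclose(ev, spectrum_closed_form(0.3, 1.1), atol=1e-10)
```

The agreement is exact because the cross terms in $h_\vec{k}^2$ reduce to a single operator squaring to $4(T_\vec{k}^2\Lambda_\vec{k}^2+T_\vec{k}^2B^2+B^2\Delta^2)$.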
Although $h_{k_x,k_y}$ does not carry a 2D topological index,
we can consider the 1D index of $h_{k_x,k_y}$ at each fixed
value of $k_y$. The Hamiltonian is invariant under both particle-hole ($\Xi$) and TR
($\Theta$) symmetries: define
\begin{eqnarray}
\Xi = \tau_1,\quad \Theta = \sigma_1\tau_3, \label{eq:pseudoTR}
\end{eqnarray}
then
\begin{eqnarray}
\Xi h_{k_x,k_y}\Xi^{-1}=-\Theta h_{k_x,k_y}
\Theta^{-1}=-h_{-k_x, k_y}^*.
\end{eqnarray}
Here both transformations satisfy
$\Theta^2=\Xi^2=1$.
We should emphasize here that $\Theta$ is \emph{not} the
physical time reversal, which should square to $-1$ for fermions
with a half-integer spin. Here $\Theta$ is called ``time reversal''
because it represents a symmetry operation which is anti-unitary and
relates $\vec{k}$ to $-\vec{k}$.
$\Theta$ and $\Xi$
can be combined into a chiral symmetry
defined as
\begin{eqnarray}
\mathcal{C}=\Xi\Theta,
\end{eqnarray}
which gives
\begin{eqnarray}
\mathcal{C}h_{k_x,k_y}\mathcal{C}^{-1}=-h_{k_x,k_y} .
\end{eqnarray}
These symmetries put $h_{k_x,k_y}$ at fixed $k_y$ in the BDI class as pointed
out in Ref. [\onlinecite{Tewari2012}], which is
characterized by a $k_y$-dependent 1D topological index denoted as $W_{k_y}$.
A unitary transformation is performed as
\begin{eqnarray}
U=\mbox{e}^{i(\pi/4)\sigma_2}\,u\,\mbox{e}^{-i(\pi/4)\tau_1},
\end{eqnarray}
where
\begin{eqnarray}
u=\frac{1}{2}(\sigma_0+\sigma_3)+\frac{1}{2}\tau_3(\sigma_0-\sigma_3).
\end{eqnarray}
It transforms $h_\vec{k}$ into an off-diagonal form
\begin{eqnarray}
U^{-1}h_\vec{k} U=\begin{bmatrix}
0&A_\vec{k} \\A_\vec{k}^\dag &0
\end{bmatrix}\ ,
\label{eq:hkoff}
\end{eqnarray}
where
\begin{eqnarray}
A_\vec{k}=\Delta\sigma_1-i(T_\vec{k}\sigma_0+\Lambda_\vec{k}\sigma_1+B\sigma_3).
\end{eqnarray}
$W_{k_y}$ is defined as the winding
number of $\det A_{\vec{k}}$ in the complex plane as $k_x$ sweeps a
$2\pi$ cycle, \emph{viz.},
\cite{Sato2011,Tewari2012}
\begin{eqnarray}
W_{k_y}&=&-\frac{i}{2\pi}\int\limits_{\mathclap{k_x=0}}^{2\pi}\frac{\mbox{d}z(\vec{k})}{z(\vec{k})}\nonumber \\
&=&\frac{1}{2}\bigl[\mbox{sgn}(M_+)-\mbox{sgn}(M_-)\bigr]
\mbox{sgn}(\lambda \Delta)\ ,\quad
\label{eq:w-def}
\end{eqnarray}
where $z(k)=\det A_k/|\det A_k|$, in which
\begin{eqnarray}
\det A_k=B^2-T_k^2-(\Delta-i\Lambda_k)^2,
\end{eqnarray}
$M_\pm(k_y)$ are related to $\det A_{k_x,k_y}$ as
\begin{eqnarray}
M_+(k_y)=\det A_{k_x=0,k_y}, \\M_-(k_y)=\det A_{k_x=\pi,k_y}.
\end{eqnarray}
$W_{k_y}=\pm 1$ requires $M_+(k_y)M_-(k_y)<0$,
in which case $h(k_y)$ is topologically nontrivial.
$W_{k_y}$ changes discontinuously when a gap-closing state
appears at a value of $k_y$ such that
$M_+(k_y)=0$ or $M_-(k_y)=0$.
The momenta $(k_x,k_y)$ of these states satisfy
$k_x=0$ or $\pi$, together with the condition
$T_\vec{k}^2+\Delta^2-B^2=0$, which determines $k_y$.
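The closed-form expression for $W_{k_y}$ can be checked against a brute-force evaluation of the winding of $\det A_\vec{k}$. The sketch below uses illustrative parameter values (our choice); it relies on $\Lambda_\vec{k}$ vanishing at $k_x=0,\pi$, so that $M_\pm$ are real:

```python
import numpy as np

P = dict(t=1.0, tp=0.5, lam=2.0, mu=-2.0, B=1.25, Delta=0.5)  # illustrative

def det_A(kx, ky, t, tp, lam, mu, B, Delta):
    """det A_k = B^2 - T_k^2 - (Delta - i Lambda_k)^2."""
    T = -2*t*np.cos(kx) - 2*tp*np.cos(ky) - mu
    Lam = 2*lam*np.sin(kx)
    return B**2 - T**2 - (Delta - 1j*Lam)**2

def winding_closed_form(ky, **p):
    """W_{k_y} = (1/2)[sgn(M_+) - sgn(M_-)] sgn(lam * Delta)."""
    Mp = det_A(0.0, ky, **p).real     # Lambda_k = 0 at kx = 0
    Mm = det_A(np.pi, ky, **p).real   # and at kx = pi
    return 0.5*(np.sign(Mp) - np.sign(Mm))*np.sign(p["lam"]*p["Delta"])

def winding_numerical(ky, nkx=4000, **p):
    """Accumulated phase of det A_k as k_x sweeps a 2*pi cycle."""
    z = det_A(np.linspace(0.0, 2*np.pi, nkx + 1), ky, **p)
    return np.sum(np.angle(z[1:]/z[:-1]))/(2*np.pi)

ky = np.pi/2
assert np.isclose(winding_numerical(ky, **P), winding_closed_form(ky, **P))
```

For these parameters the cut at $k_y=\pi/2$ gives $M_+>0$, $M_-<0$ and hence $W_{k_y}=1$, which the phase-accumulation count reproduces.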
Based on $W_{k_y}$'s behavior over the range of $[-\pi,\pi)$, we
plot the bulk phase diagram for the 2D Hamiltonian
Eqs. \ref{eq:ham_band} and \ref{eq:ham_ex} in the parameter plane
$\mu$-$B$ shown in Fig.~\ref{fig:bulkphasediagram}($a$).
The gapped phases are characterized by $k_y$-independent values of
$W$: two phases with $W=\pm 1$ are weak topological pairing states,
and the other two with $W=0$ are trivial pairing states.
For the gapless phase, a momentum averaged topological number is defined as
\begin{eqnarray}
r = \int\frac{\mathrm{d}k_y}{2\pi} W_{k_y}.
\label{eq:topo_avg}
\end{eqnarray}
The values of $W_{k_y}$ {\it vs.} $\mu$ and $k_y$ are depicted
in Fig.~\ref{fig:bulkphasediagram}($b$) along the line
cuts $L_1\sim L_5$ in Fig.~\ref{fig:bulkphasediagram}($a$).
Usually, $W_{k_y}$ changes by only 1 at a time
as $k_y$ varies, but
along line $L_3$, $W_{k_y}$ can jump directly between $1$ and $-1$
without passing through $0$, which means that two Dirac points $(0,k_y)$
and $(\pi,k_y)$ appear at the same value of $k_y$.
Note that the SO coupling $\lambda$ enters $W_{k_y}$ only through its sign (\textit{cf.}\ Eq.~\ref{eq:w-def}),
therefore the phase diagram Fig.~\ref{fig:bulkphasediagram} is independent of
$\lambda$ (up to an overall sign flip).
\footnote{The value of $\lambda$ determines the
bulk gap in topological superconducting regime.
Therefore, for a given temperature $T$, $\lambda\gg T$
is required to observe this topological superconductivity physics.}
\begin{figure}
\includegraphics[width=\linewidth,trim=0 35 0 45]{ek_noscf}
\caption{Edge spectra with open and periodic boundary conditions
along the $x$- and $y$-directions, respectively.
($a$), ($b$), ($c$), and ($d$) correspond to points I, II, III, and IV
marked in Fig. \ref{fig:bulkphasediagram} ($a$), respectively.
($a$) gapped trivial phase;
($b$) gapless phase with edge modes associated with the same winding number;
($c$) gapped weak topological phase;
($d$) gapless phase with edge modes associated with opposite winding numbers.
Parameters used are the same as those in Fig. \ref{fig:bulkphasediagram} $(a)$.
\label{fig:ek_noscf}}
\end{figure}
Next we discuss edge spectra in the above different phases.
The open boundary condition is applied along the $x$-direction.
In the topologically trivial phase shown in Fig. \ref{fig:ek_noscf}
($a$), the zero energy edge modes are absent, while they appear and
run across the entire 1D edge Brillouin zone in the gapped weak
topological pairing phase shown in Fig. \ref{fig:ek_noscf} ($c$).
In the gapless phase, flat Majorana edge modes appear in the regimes
with $W_{k_y}=\pm 1$ and terminate at the gap closing points.
\cite{Sato2011,Sau2012,Yuan2014}
These flat Majorana edge modes are lower dimensional Majorana
analogues of Fermi arcs in 3D Weyl semi-metals \cite{Wan2011,*Balents2011}.
The analogy goes further: in both cases, the gapless phase
intervenes between topologically distinct gapped phases.
The flat edge Majorana modes in the gapless phase can behave differently.
In Fig. \ref{fig:ek_noscf} ($b$), all the edge flat Majorana
modes are associated with the same value of $W_{k_y}$.
In this case, these Majorana modes on the same edge are robust
at zero energy if TR symmetry is preserved, which means that
they do not couple.
Nevertheless, TR symmetry may be spontaneously broken to gap
out these zero modes \cite{Li2013}.
On the other hand, for states inside the white dashed diamond
in Fig. \ref{fig:bulkphasediagram} ($a$), edge Majorana
modes appear with both possibilities of $W_{k_y}=\pm 1$.
In particular, in the case of $\mu=0$, the relation
$W_{k_y}=-W_{k_y+\pi}$ holds for edge Majorana modes
as shown in Fig. \ref{fig:ek_noscf} ($d$).
Majorana modes with opposite winding numbers on the same edge
can couple to each other even without TR breaking,
and thus are not topologically stable.
$r$ represents the net density of states
of zero modes in the edge Brillouin zone which are stable
under TR-preserving perturbations.
\begin{figure}
\includegraphics[width=0.5\textwidth,trim=0 50 0 30,clip]{L8pattern}
\caption{Self-consistent solutions for $\Delta_{\vec{r}}$
($a$) and supercurrent $J_{\vec{r}\br^\prime} $ ($b$).
Parameter values are $L_x=120$, $L_y=8$, $B=1.25$, $t=1$,
$t_\perp=0.5$, $\mu=-2$, $\lambda=2$, $g=5$.
Open and periodic boundary conditions are used along the $x$-direction
(vertical) and $y$-direction (horizontal), respectively.
Only the first 10 sites from the upper edge are plotted.
The distributions of $\Delta_\vec{r}$ and $J_{\vec{r}\br^\prime}$ are reflection
symmetric with respect to the center line of the system.
($a$) Direction and length of each arrow represent the
phase and amplitude of $\Delta_\vec{r}$ on site $\vec{r}$.
Its distribution is nearly uniform in the bulk but exhibits
spatial variations near the edge.
($b$) Each arrow represents $J_{\vec{r}\br^\prime}$ on bond $\vec{r}\br^\prime$,
which is prominent near the edge but vanishes in the bulk.
\label{fig:L8}}
\end{figure}
\heading{Self-consistent solution}
We now impose self-consistency on the pairing order parameter
$\Delta_{\vec{r}}$, which is necessary for the case of intrinsic pairings.
The pairing interaction is modeled as
\begin{eqnarray}
H_\Delta=-g\sum_{\vec{r}} n_{\vec{r},\uparrow}n_{\vec{r},\downarrow},
\end{eqnarray}
and the self-consistent
equation is
\begin{eqnarray}
\Delta_\vec{r}=-g\langle G| c_{\vec{r} \downarrow}c_{\vec{r} \uparrow}|G \rangle,
\end{eqnarray}
where $\langle G|... |G\rangle$ means the ground state average.
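For a rough illustration of how such a self-consistent loop can be organized, the sketch below treats a single chain ($L_y=1$; the full array adds the $t_\perp$ bonds) in the Bogoliubov-de Gennes formalism. Parameter values and sign conventions are our own illustrative choices, not the paper's 2D calculation:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.diag([1.0, -1.0]).astype(complex)

def bdg_matrix(delta, t=1.0, lam=2.0, mu=0.0, B=0.0):
    """Real-space BdG matrix of a single open chain, basis ordered as
    (c_{r up}, c_{r dn})_r followed by (c^dag_{r up}, c^dag_{r dn})_r."""
    L = len(delta)
    h = np.zeros((2*L, 2*L), dtype=complex)
    D = np.zeros((2*L, 2*L), dtype=complex)
    A = -t*s0 - 1j*lam*s3            # hopping + SO block for r -> r+1
    for r in range(L):
        h[2*r:2*r+2, 2*r:2*r+2] = -mu*s0 - B*s1
        D[2*r, 2*r+1], D[2*r+1, 2*r] = delta[r], -delta[r]
        if r + 1 < L:
            h[2*r:2*r+2, 2*r+2:2*r+4] = A
            h[2*r+2:2*r+4, 2*r:2*r+2] = A.conj().T
    return np.block([[h, D], [D.conj().T, -h.conj()]])

def solve_gap(L=24, g=5.0, max_iter=300, tol=1e-8, **kw):
    """Iterate Delta_r = -g <c_{r dn} c_{r up}> to self-consistency (T = 0)."""
    delta = 0.5*np.ones(L, dtype=complex)            # initial guess
    for _ in range(max_iter):
        E, W = np.linalg.eigh(bdg_matrix(delta, **kw))
        Wp = W[:, E > 0]                             # positive-energy modes
        # <c_{r dn} c_{r up}> = sum_{E_n > 0} w_n(r, p-dn) w_n(r, h-up)^*
        new = np.array([-g*np.vdot(Wp[2*L + 2*r], Wp[2*r + 1])
                        for r in range(L)])
        if np.max(np.abs(new - delta)) < tol:
            return new
        delta = new
    return delta
```

The ground-state pair correlation is read off from the projector onto positive-energy quasiparticle states, which avoids bookkeeping with explicit Bogoliubov coefficients.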
We have verified numerically that $\Delta_\vec{r}$ is nearly uniform inside
the bulk.
Thus the bulk shares a similar phase diagram to the case of
uniform pairing (\emph{cf.~}Fig.~\ref{fig:bulkphasediagram}),
except that the values of $\Delta$ should be self-consistently determined.
Nevertheless, near edges $\Delta_{\vec{r}}$ varies spatially in the
self-consistent solutions.
If the bulk is in the topological gapped phase, the edge Majorana
zero modes can couple with each other by breaking TR symmetry
spontaneously as shown in Ref. [\onlinecite{Li2013}].
Because of the band flatness, this effect is non-perturbative.
This gaps out the Majorana zero modes and lowers the edge energy.
The system converges to an inhomogeneous distribution of
$\arg[\Delta_{\vec{r}}]$ near the edges as shown in
Fig.~\ref{fig:L8} ($a$), even if this costs energy
by disturbing the Cooper pairing \cite{Li2013}.
This edge inhomogeneity in the pairing phase leads to an
emergent current pattern as depicted in Fig.~\ref{fig:L8} ($b$).
\begin{figure}[h!]
\includegraphics[width=0.8\linewidth]{L8-current}
\includegraphics[width=0.8\linewidth]{L8-energy}
\caption{Self-consistent solutions for coupled chains with varying
$B$-field.
Open and periodic boundary conditions are used along the $x$- and
$y$-directions, respectively.
Parameters are $L_x=120$, $L_y=8$, $t=1$, $t_\perp=0.5$, $\mu=-2$,
$\lambda=2$, and $g=5$.
In both ($a$) and ($b$), the bulk gapless phase is marked as the shaded
region, which separates the topologically trivial (on its left) and
nontrivial (on its right) gapped phases.
($a$) The bulk pairing $\Delta_{bulk}$ and the
characteristic edge current magnitude $J_{max}$
extracted as the maximal current in the system.
($b$) The energy spectra close to $E=0$.
The inset of ($b$) is for the case of $L_y=7$.
TR symmetry is spontaneously broken between the two dashed red
lines as evidenced by $J_{max}\neq 0$.
Note that at large values of $B$ the edge current vanishes,
which is an artifact of the finite length $L_x$:
the decay lengths of the edge Majorana modes are of the order
of the superconducting coherence length, which is long due to
the suppression of the pairing gap.
As a result, Majorana modes on opposite edges can hybridize and
are gapped out without breaking TR symmetry.
\label{fig:L8phasediagram}}
\end{figure}
A natural question is under what conditions TR symmetry is
spontaneously broken near edges.
We have carried out extensive numerical studies and results
of $\mu=-2$ are plotted in Fig.~\ref{fig:L8phasediagram} ($a$)
and ($b$).
TR symmetry is always broken in the topological gapped phase
such that Majorana edge fermions are pushed to midgap energies,
while TR symmetry remains unbroken in the trivial gapped phase.
The latter is easy to understand because there are no Majorana fermions
to begin with.
If the bulk is in the gapless phase (shaded area in Fig.
\ref{fig:L8phasediagram}), the situation is more complicated.
TR symmetry breaking solutions are found in most of the gapless phase.
In this regime, $|r|<1$ and thus the number of stable
Majorana modes on one edge is less than the number of chains $L_y$.
These modes are associated with the same value of
$W_{k_y}$, and thus TR symmetry breaking is needed to gap out these edge modes.
There exists a small region inside the gapless phase in which TR
symmetry is unbroken in Fig.~\ref{fig:L8phasediagram} ($a$),
which is largely due to the finite value of $L_y$.
We have checked that the TR-breaking regime extends as $L_y$
increases, and thus we expect that it will cover the
entire gapless phase in the thermodynamic limit.
On the other hand, for the case of the gapless phase with $\mu=0$
in which $r=0$ for even values of $L_y$, our calculations show
that all the Majorana modes are gapped out without developing
currents.
Instead, a bond-wave order appears at the wavevector
$k_y=\pi$ along the edge, consistent with
the fact that TR-invariant perturbations can destroy
Majorana zero-energy modes when $r=0$.
In general, we expect that TR symmetry is spontaneously
broken in the case of $r\neq 0$ in the thermodynamic limit.
However, not all Majorana edge modes have to be gapped out
in the topological gapped phase.
As shown in Fig. \ref{fig:L8phasediagram} ($b$), for the case of $L_y=8$,
all the edge modes become gapped due to TR symmetry breaking,
whereas for $L_y=7$, one Majorana mode survives at zero energy.
\footnote{The remaining Majorana mode is localized along the $x$-direction, but delocalized along the $y$-direction. A similar result can be found in Ref.~\onlinecite{Mizushima2013}.}
The reason is that breaking TR brings the system from class BDI to class
D \cite{Schnyder2008}, and the latter is characterized by
a $\mathcal{Z}_2$ index.
Physically it is because (in the infinite chain length limit)
only the Majorana modes on the same edge can be paired and gapped out,
thus beginning with $L_y$ Majorana fermions per edge, for odd $L_y$, one
of them will always remain unpaired.
In short, if TR is \emph{spontaneously} broken, only
$L_y \bmod 2$ Majorana fermions per edge will persist
at zero energy.
\heading{Discussion}%
Before closing, a few remarks are in order. (1) The
phenomenon of spontaneous TR symmetry breaking in topological
superconductors has previously been found in a spinless $p$-wave
superconductor in Ref.~\onlinecite{Li2013}. Our work extends this
observation in three ways: (a) Our results confirm that spontaneous
TR breaking also occurs in a different setup with SO
coupling and $s$-wave pairing, which is more relevant to experiments.
(b) Our model hosts a gapless phase,
wherein spontaneous TR breaking may also occur. (c) We also found a
parameter regime where Majorana modes with opposite winding numbers
can coexist. This provides another route to gap out the Majorana
modes without invoking TR breaking. (2) In this work, we only
considered SO coupling in the $x$ direction, which can be exactly
simulated in cold atom systems. However, in solid state physics,
both Rashba and Dresselhaus SO couplings will involve SO coupling
along the $y$ direction as well (unless Rashba and Dresselhaus are
of equal strength, in which case SO coupling along $y$ will vanish).
This will break TR symmetry (as defined in Eq.~\ref{eq:pseudoTR},
which is not the usual physical TR symmetry)
and bring the system from class BDI to
D. In the presence of a $y$-direction SO coupling term
($\sim\sin(k_y)\sigma_2\tau_3$), the Majorana flat bands will
develop dispersion, either connecting upper and lower bulk bands
or forming isolated mid-gap states
which may cross zero at $k_y=0$ or $\pi$, consistent with a
$\mathcal{Z}_2$ description.\cite{Qu2013,Seroussi2014} (3) Disorders
such as spatial variations of chemical potential
($\sim\sigma_0\tau_3$) and Cooper pairing amplitude
($\sim\sigma_2\tau_2$) can be added without changing any of our
conclusions (provided the disorder is not strong enough to close the
bulk gap). This is because these two terms are invariant under both
particle-hole($\Xi$) and TR($\Theta$) symmetries, hence the system
still belongs to the BDI class. (4) Finally, although we modeled the
constituent nanowires each as a 1D lattice, switching to a continuum
formulation in the chain direction should not affect the formation
of edge Majorana modes (that is, before they couple and gap out).
\cite{Sau2010,Oreg2010} Thus we expect
the edge physics obtained here
to be insensitive
to how the bulk of the chains is formulated in terms of continuum
\emph{vs.~}lattice.
\heading{Summary}
We have studied quantum wire arrays with SO coupling and $s$-wave
superconductivity in an external Zeeman field.
The relation between edge Majorana zero modes and the bulk
band structure is investigated in both topologically nontrivial gapped phase
and the gapless phase.
The coupling between Majorana modes and superfluid phases leads to
spontaneous TR symmetry breaking.
Our results have several experimental implications.
For proximity effect induced superconductivity, the number of edge
Majorana fermions in the gapless phase can be tuned by the
Zeeman field from zero all the way up to the number of chains.
This could be detected as a prominent change in the height of zero
bias peaks in tunneling spectroscopy experiments.
For the intrinsic superconductivity, edge supercurrent loops resulting
from spontaneous TR breaking will induce small magnetic moments, which
can be detected using magnetically sensitive experiments such as
nuclear magnetic resonance or neutron scattering.
The alternation of the number of persisting Majorana modes between
$1$ and $0$, in the TR-broken topological gapped phase, may also
show up in tunneling spectroscopy.
\heading{Acknowledgments}
We thank Yi Li for early collaborations, Hui Hu and D.~P.~Arovas for
helpful discussions, and D.~P.~Arovas for comments after reading a
draft of this paper.
DW and CW are supported by the NSF DMR-1105945 and AFOSR
FA9550-11-1-0067 (YIP).
ZSH is supported by NSF through grant DMR-1007028.
CW acknowledges the support from the NSF of China
under Grant No. 11328403.
{\it Note added.} Upon the completion of this work,
we became aware of the nice paper on a similar topic
\cite{Seroussi2014},
and after this work was posted, we noticed another related
work \cite{wakatsuki2014}.
\section{Introduction}
\label{sec:introduction}
An increasing number of information technologies focus on how web
users can effectively share opinions about various types of products,
services or even other users. These technologies are the basis of
several types of Web 2.0 applications such as collaborative tagging,
social bookmarking \cite{cattuto07, golder05} and, in particular,
recommender systems. Given the \textit{heterogeneity of web users}, a
major issue is how to appropriately aggregate opinions in order to
provide judgements that are useful for each individual user.
Most of these applications use collaborative filtering algorithms
which compute an index of \textit{similarity} between users or between
items, based on the ratings that users have provided on these items
\cite{goldberg92,herlocker99,montaner03a}. When a user belongs to a
community with common, shared tastes, these algorithms work well in
suggesting new items similar to the ones the users have already
rated. There are several other benefits: except providing enough
ratings, no further action is required of users; algorithms for
collaborative filtering are scalable (when similarities are computed
across items \cite{sarwar01}); and, finally, they provide some level
of personalisation. A shortcoming is that if users are looking for
items which are seldom rated by their community, the predictions are
poor -- e.g. people who have rated only travel books may not receive
very good recommendations on tools for gardening.
To cope with this, a line of research has focused on basing
recommendations for users not on their similarity, but on their
\textit{trust relations} to other users. In this context, trust is
meant to be the ``expectancy of an agent to be able to rely on some other
agent's recommendations'' \cite{marsh94,walter08-jaamas}. There has been a
body of work on ``trust webs''
\cite{abdul-rahman00,grandison00,marsh94,sabater05} and on their
application to recommender systems
\cite{golbeck05,massa06,montaner02b}. The small-world property of
social networks \cite{newman02} allows one to potentially reach a lot of
information, while trust allows one to filter out the relevant pieces
\cite{walter08-jaamas}.
include strong personalisation, no need to have a long rating history
in the system because recommendations are not based on similarity, and
the ability to receive recommendations on items different from the
ones already rated. Some limitations of the trust-based approach
concern the scalability and the fact that, in addition to their
ratings of items, users have to provide information about their level
of trust to some other users.
In this paper, we introduce a novel metric for trust in social
networks. A trust metric allows one to compute the indirect trust between
two agents in a social network which are not neighbours, based on the
direct trust between agents that are neighbours. While it is intuitive
to do this on a chain, e.g.~from user $A$ via user $B$ to user $C$,
for instance by multiplying the values of trust along the chain, it is
not a priori trivial how to proceed when a graph contains multiple,
redundant paths, cycles, or triangles (because of mathematical issues
related to uniqueness and consistency). This is a crucial issue
because these patterns all play an important role in social networks,
in particular for the diffusion of information and the build-up of
social capital \cite{wassermann94,vega-redondo07}. Some trust metrics
address these issues by reducing the direct trust graph to an acyclic
graph before applying their computation of indirect trust
\cite{golbeck05,massa06}. Other metrics use only the path of the
shortest distance or of the highest trust \cite{walter08-jaamas}. Our
trust metric takes all the paths in the graph into account and it is
well-defined on any given graph. It provides each user with
personalised trust ratings about other users in the network. Our
metric is also dynamic, i.e.~it evolves in time depending on how
useful the information received by users is to them. This makes the
metric suitable for application in recommender systems, as we will
illustrate in the remainder of the paper.
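For instance, the chain rule mentioned above can be sketched as follows (a toy illustration of the naive path-multiplication idea, not of the TrustWebRank metric itself; the trust values are invented):

```python
def chain_trust(weights):
    """Indirect trust along a single path A -> B -> ... obtained by
    multiplying the direct-trust values T_ij in [0, 1] along the chain."""
    t = 1.0
    for w in weights:
        t *= w
    return t

# A trusts B at 0.9, B trusts C at 0.8 => A's indirect trust in C
t_ac = chain_trust([0.9, 0.8])
assert abs(t_ac - 0.72) < 1e-12
```

The difficulty the metric addresses is precisely that such a rule does not generalize directly to graphs with redundant paths, cycles, or triangles.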
\section{Background and Motivation}
\label{sec:background-motivation}
Consider a scenario in which there is a social network of agents which
have trust relationships among each other. This can be described by a
graph in which the nodes represent the agents and the links represent
the trust relationships. There also is a set of objects which can be
rated by agents. Since each agent only knows a few objects, it may
want to know other agent's opinions on unknown objects. However, since
there are potentially many opinions of other agents, it needs to be
able to determine which of these are trustworthy. This implies that an
agent needs to reason about the trustworthiness of other agents
\cite{walter08-jaamas}. However, since its time and resources are
constrained, an agent can only build and maintain trust relationships
with a limited number of other agents.
\textit{Thus, if $T_{ij} \in [0,\,1]$ represents the level of direct
trust of agent $i$ towards $j$, how do we compute the indirect trust
$\tilde{T}_{kl}$ between two agents $k$ and $l$ that are not
neighbours\footnote{Variables expressing indirect trust are written
  like the corresponding ones expressing direct trust, but with a tilde
  symbol: e.g.~$T$ and $\tilde{T}$.}? }
In the following, we will describe the TrustWebRank metric for
computing indirect trust in a network with direct trust. This metric
builds on the concept of feedback centrality which assigns a
centrality score to the nodes of a network based on the centrality
scores of the node's neighbours. In other words, in feedback
centrality, the higher (or lower) the centrality score of a node's
neighbours, the higher (or lower) this node's own centrality is.
These principles can be adapted to define a metric for the
trustworthiness of agents in a social network with trust
relationships.
We briefly review PageRank, one of the most widely known and studied
feedback centrality algorithms \cite{brin98,brandes05}.
In our scenario this would mean assigning a trustworthiness score
$c_i$ to an agent $i$ that depends on the trustworthiness of its
neighbours $j$ (adapted from \cite{brandes05}):
\begin{eqnarray}
c_i = \beta \sum_{\{j : i \in N_{j}\}}{\frac{c_j}{|N_{j}|}} + (1-\beta) \quad \forall i,
\label{eq:definition_page_rank}
\end{eqnarray}
where $N_i$ is the set of neighbours of $i$, and $\beta$ is a damping
factor which is chosen around $0.8$ \cite{brin98}. In vector notation:
\begin{eqnarray}
c = \beta Pc + (1-\beta)\mathit{1},
\label{eq:eq:definition_page_rank_matrix}
\end{eqnarray}
where $P$ is a stochastic\footnote{We will always assume
row-stochastic when we state ``stochastic''; this does not imply
that the matrix is (or is not) also column-stochastic.
} transition matrix defined as
\begin{eqnarray}
P_{ij} = \left\{ \begin{array}{ll}
\frac{1}{|N_j|} & \textrm{if there exists a link from\ } j \textrm{\ to \ } i \\
0 & \textrm{otherwise.} \\
\end{array} \right.
\label{eq:eq:definition_page_rank_transition_matrix}
\end{eqnarray}
Eqs.~(\ref{eq:definition_page_rank}) and
(\ref{eq:eq:definition_page_rank_transition_matrix}) can easily be
extended to weighted graphs \cite{brandes05}. Solving Eq.
(\ref{eq:eq:definition_page_rank_matrix}) for $c$ we obtain:
\begin{eqnarray}
c & = & (I-\beta P)^{-1}(1-\beta)\mathit{1},
\label{eq:derivation_page_rank}
\end{eqnarray}
where $I$ is the identity matrix and $\mathit{1}$ is the vector
consisting of ones. Since $P$ is, by construction, stochastic, the
Perron-Frobenius theorem \cite{seneta06} implies that its largest
eigenvalue is $\lambda_{\mathrm{PF}}(P)=1$, and hence
$\lambda_{\mathrm{PF}}(\beta P)=\beta<1$. This ensures the existence
of a unique solution for $c$, which is usually computed by Jacobi
iteration.
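As an illustration, the Jacobi-style iteration for
Eq.~(\ref{eq:definition_page_rank}) can be sketched in a few lines of
Python. This is a minimal sketch, not a reference implementation; the
example graph, the tolerance, and the iteration cap are assumptions.

```python
# Jacobi-style iteration for c_i = beta * sum_{j : i in N_j} c_j/|N_j| + (1-beta).
def pagerank(links, beta=0.8, tol=1e-10, max_iter=1000):
    """links[j] = list of nodes that j links to (its out-neighbours N_j)."""
    nodes = sorted(links)
    c = {i: 1.0 for i in nodes}
    for _ in range(max_iter):
        new = {i: 1.0 - beta for i in nodes}
        for j, out in links.items():
            for i in out:
                new[i] += beta * c[j] / len(out)  # j's score split over its out-links
        if sum(abs(new[i] - c[i]) for i in nodes) < tol:
            return new
        c = new
    return c

# A small cyclic example graph: 1 -> 2, 2 -> 3, 3 -> 1 and 3 -> 2.
scores = pagerank({1: [2], 2: [3], 3: [1, 2]})
```

For this small graph the scores sum to the number of nodes, and node
$2$, which has two in-links, receives the highest score.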
The result of applying this algorithm to a graph is a vector which
gives a score of the trustworthiness $c_i$ for each node $i$ in the
graph. Note that this is a \textit{global} metric, i.e.~there is one
score for each agent. It has been observed in the literature that, for
recommender systems, such metrics are often not appropriate and that
\textit{local} metrics, which are \textit{personalised} for each agent
(``how trustworthy is agent $i$ from the perspective of agent $j$''),
are required \cite{massa06}. EigenTrust, for example, is a
PageRank-inspired, global trust metric \cite{kamvar03}.
\section{A Novel Trust Metric}
\label{sec:novel-trust-metric}
\subsection{From Centrality to Trust}
\label{sec:from-centrality-to-trust}
Proceeding in analogy to PageRank and using the principles of feedback
centrality to construct a personalised metric for trust, one could
define the indirect trust of agent $i$ to $j$ as the indirect trust of
the neighbour agents $k$ of agent $i$ to agent $j$, weighted by the
trust of agent $i$ towards these neighbour agents $k$. Let $T$ be the
trust matrix, where $T_{ij} \in [0,1]$ reflects the \textit{direct}
trust from agent $i$ to agent $j$ ($T_{ij}=0$ if there is no link
between agent $i$ and agent $j$). $S$ is the stochastic matrix
\begin{eqnarray}
S_{ij} = \frac{T_{ij}}{\sum_{k \in N_{i}}T_{ik}},
\label{eq:definition_normalized_direct_trust}
\end{eqnarray}
where $N_{i}$ is the set of neighbours of agent $i$. $S$ is a
normalisation of $T$. We define $\tilde{T}_{ij}$ to be the
\textit{indirect} trustworthiness score from $i$ to $j$:
\begin{eqnarray}
\tilde{T}_{ij} = \sum_{k \in N_i}{S_{ik}\tilde{T}_{kj}} \quad \forall i,j.
\label{eq:definition_mole_tidal_trust}
\end{eqnarray}
This allows us to estimate the trust between any two agents $i$ and
$j$: if there is a link between $i$ and $j$, $T_{ij}$ reflects the
trust between them; if there is no link between $i$ and $j$, $\tilde
T_{ij}$ reflects the trust between them. Notice that this definition
is similar to the approaches used in \cite{golbeck05,massa06}. In
matrix notation, this is the recursive definition
\begin{eqnarray}
\tilde{T} = S\tilde{T}.
\label{eq:eq:definition_mole_tidal_trust_matrix}
\end{eqnarray}
Notice that this approach has several limitations:
\textit{1) Uniqueness of the solution}: Let $\tilde{v}_{*j}$ be one
column of $\tilde{T}$, i.e.~the vector that expresses how much agent
$j$ is trusted by other agents. Then, Eq.
(\ref{eq:eq:definition_mole_tidal_trust_matrix}) gives
\begin{eqnarray}
\tilde{v}_{*j}=S\tilde{v}_{*j} \quad \forall j.
\label{eq:eq:definition_mole_tidal_trust_vector}
\end{eqnarray}
If $S$ is acyclic \cite{seneta06} (i.e.~the underlying graph is acyclic),
then there is a unique solution of
Eq.~(\ref{eq:eq:definition_mole_tidal_trust_vector}). If $S$ is not
acyclic, it can be either primitive or non-primitive \cite{horn90}. If
$S$ is primitive (and stochastic), there is a unique solution of
Eq.~(\ref{eq:eq:definition_mole_tidal_trust_vector}), a vector with
all components being identical \cite{seneta06}. This would imply that
all agents $i$ would trust agent $j$ equally, which is obviously not
desirable. If $S$ is not primitive, there are multiple solutions for
Eq.~(\ref{eq:eq:definition_mole_tidal_trust_vector}), which also is
not desirable.
One way of dealing with this could be to make $S$ acyclic, for example
by constructing a tree with a breadth-first search (BFS) from a chosen
node, as for example \cite{golbeck05,massa06} do. The BFS selects one
node as a root, and from there on, explores the neighbours of the
nodes, proceeding in levels $1, 2, 3, \ldots$ and removing links within
a level and links from level $k$ to level $l$ where $l<k$ at each
step. However, this entails further limitations:
Social networks are characterised by a high clustering coefficient
\cite{wassermann94,newman02,vega-redondo07}. By making the underlying
graph of a social network acyclic, one removes the links within each
level and the links from levels $k$ to $l$ where $l < k$, thus
making the clustering coefficient $0$. This implies that, subsequent
to this procedure, the trust metric will not be able to differentiate
well between regions of high clustering (thus, possibly high trust)
and regions with lower clustering (thus, possibly lower trust) as on
the original graph.
Further, depending on which node is chosen as the root of the BFS, the
acyclic graph will be different. This is not a problem in a
decentralised scenario, when the computation is spread over many
nodes. In this case, each node computes its own set of
$\tilde{v}_{*j}$ by being root of its own breadth-first
exploration. However, this is a problem in a centralised scenario,
where such an approach is not scalable and also not mathematically
tractable: as a result of a BFS rooted at each $i$, the computation
uses a different matrix $T$ for each node.
\textit{2) Combination of direct and indirect trust}: The metric
defined in Eq.~(\ref{eq:definition_mole_tidal_trust}) is not able to
account properly for the following situation: consider an agent $i$
that trusts a neighbour agent $j$ with intermediate level of trust,
e.g.~$T_{ij} \approx 0.5$, because it does not yet know this agent
well. If many of the other neighbours of agent $i$ trust agent $j$,
this should increase the trust between agent $i$ and $j$. This does
not happen with the current definition of trust.
\textit{3) Normalisation of trust}: another property, resulting from
Eq.~(\ref{eq:definition_normalized_direct_trust}), is that the normalisation
removes knowledge from the system. If an agent $i$ trusts $n$
neighbours equally, it does not matter whether it trusts them a lot or
a little in $[0,1]$ -- the normalisation would assign the same value
of trust of $\frac{1}{n}$ to each of the neighbours. Then, during
propagation, only the relative trust compared to other neighbours is
considered. Equally, suppose that an agent $i$ has just one neighbour
agent $j$ -- no matter whether $i$ trusts $j$ highly or lowly, in each
case the normalisation would cause the trust from $i$ to $j$ to be
$1$. The normalisation is necessary, however, to have values of direct
and indirect trust which are in the same range.
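This loss of information can be made concrete in a few lines of
Python (purely illustrative numbers):

```python
# Row-normalisation S_ij = T_ij / sum_k T_ik discards absolute trust levels.
def normalise(row):
    total = sum(row)
    return [t / total for t in row]

high = normalise([0.9, 0.9, 0.9])  # agent trusts 3 neighbours a lot
low = normalise([0.1, 0.1, 0.1])   # agent trusts 3 neighbours a little
# Both rows collapse to roughly 1/3 per neighbour: the absolute
# levels 0.9 vs 0.1 can no longer be distinguished after normalisation.
```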
\subsection{The TrustWebRank Metric}
\label{sec:trustwebrank}
Thus, given these limitations, can we modify
Eq.~(\ref{eq:definition_mole_tidal_trust}) in such a way that the
following requirements are met?
\textbf{Requirement 1}: The solution of the equation over graphs with
cycles is unique, but not trivial.
\textbf{Requirement 2}: The range of indirect trust is the same as for
direct trust, i.e.~$[0,1]$, so that direct and indirect trust can be
compared.
\textbf{Requirement 3}: In the metric, direct trust ``adds on'' to
indirect trust (capturing the fact that it complements it).
One possibility to address these issues is the following: we compute
the indirect value of trust between two agents $i$ and $j$ based on
the direct trust between them, if there is any, but also based on the
trust that the neighbours of $i$ have in $j$:
\begin{eqnarray}
\tilde{T}_{ij}=S_{ij}+\beta\sum_{k \in N_i}{S_{ik}\tilde{T}_{kj}} \quad \forall i,j,
\label{eq:definition_indirect_trust}
\end{eqnarray}
where $\beta \in [0,1)$.
Now, in matrix form Eq.~(\ref{eq:definition_indirect_trust}) is
\begin{eqnarray}
\tilde{T} & = & S + \beta S \tilde{T},
\label{eq:eq:definition_indirect_trust_matrix}
\end{eqnarray}
and, using elementary algebra, we can derive
\begin{eqnarray}
\tilde{T} & = & (I - \beta S)^{-1}S.
\label{eq:derivation_indirect_trust}
\end{eqnarray}
There exists a unique, non-trivial solution to
Eq.~(\ref{eq:derivation_indirect_trust}) if
$\lambda_{\mathrm{PF}}(\beta S)<1$ \cite{horn90}. Since $S$ is
stochastic, i.e.~$\lambda_{\mathrm{PF}}(S)=1$, and $\beta \in [0,1)$,
it follows that $\lambda_{\mathrm{PF}}(\beta S)<1$ (Requirement 1).
The parameter $\beta$ has a similar role as the damping factor in
PageRank in Eq.~(\ref{eq:definition_page_rank}): given $\beta \in
[0,1)$, the impact of agents far away in the social network is
discounted. This can be seen more clearly when expressing $(I-\beta
S)^{-1}$ as a geometric sum in
Eq.~(\ref{eq:derivation_indirect_trust}) \cite{horn90}:
\begin{eqnarray}
\tilde{T}=(I-\beta S)^{-1}S=\sum_{k=0}^{\infty}{(\beta S)^kS}=S+\beta S^2+\beta^2 S^3+\ldots
\label{eq:role_of_beta}
\end{eqnarray}
The $k$th power of the adjacency matrix of a graph gives the number of
walks of length $k$ between any two nodes in the graph. Similarly, the
$k$th power of the matrix $S$ gives the sum of the products of the
weights along all walks of length $k$ in the underlying graph of
$S$. In Eq.~(\ref{eq:role_of_beta}), the higher the length of the
walks, the stronger the discount (since $\beta<1$). As in PageRank, a
reasonable value of $\beta$ turns out to be around $0.75$ to $0.85$
(see Section \ref{sec:empirical-validation}). Note that, in general,
$\tilde{T}_{ij} \notin [0,1]$. We can normalise it to
\begin{eqnarray}
\tilde{S}_{ij}=\frac{\tilde{T}_{ij}}{\sum_{k \in N_i}{\tilde{T}_{ik}}}
\label{eq:definition_normalized_indirect_trust}
\end{eqnarray}
to ensure the comparability of values of direct and indirect trust
(Requirement 2).
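A direct way to compute $\tilde{T}$ is to truncate the geometric
series of Eq.~(\ref{eq:role_of_beta}) and then apply the
normalisation of
Eq.~(\ref{eq:definition_normalized_indirect_trust}). The following is
a sketch in plain Python; the three-agent cycle is an illustrative
assumption, not taken from any dataset.

```python
# Approximate T~ = S + beta S^2 + beta^2 S^3 + ... by truncating the
# geometric series, then normalise each row over the direct neighbours.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def indirect_trust(S, beta=0.8, terms=200):
    power = [row[:] for row in S]   # holds S^(k+1)
    total = [row[:] for row in S]   # accumulates the series
    coef = 1.0
    for _ in range(terms - 1):
        power = matmul(power, S)
        coef *= beta
        for i in range(len(S)):
            for j in range(len(S)):
                total[i][j] += coef * power[i][j]
    return total

def normalise_rows(T_tilde, S):
    """S~_ij = T~_ij / sum of T~_ik over the direct neighbours k of i."""
    result = []
    for i, row in enumerate(T_tilde):
        denom = sum(row[k] for k in range(len(row)) if S[i][k] > 0)
        result.append([x / denom if denom else 0.0 for x in row])
    return result

# Directed 3-cycle of agents: 0 trusts 1, 1 trusts 2, 2 trusts 0.
S = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]
T_tilde = indirect_trust(S)
S_tilde = normalise_rows(T_tilde, S)
```

On the $3$-cycle, walks from agent $0$ to agent $1$ have lengths
$1, 4, 7, \ldots$, so $\tilde{T}_{01}=\sum_m \beta^{3m}=1/(1-\beta^3)$,
which the truncated series reproduces up to a negligible error.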
Furthermore, if agents $i$ and $j$ are not neighbours, the indirect trust
of $i$ to $j$ is entirely based on how much the neighbours of $i$
trust $j$. However, if agent $i$ has a neighbour $j$, the indirect
trust of $i$ to $j$ will also incorporate how much the other
neighbours of agent $i$ trust or do not trust agent $j$ (Requirement
3).
The definition of Eqs.~(\ref{eq:definition_indirect_trust}) and
(\ref{eq:eq:definition_indirect_trust_matrix}) naturally takes the
real structure of a social network into account without needing to
prune any link. Unlike what would happen during the conversion of
the underlying graph to a tree using a BFS, the algorithm preserves
the links which, in a social network, lead to a high clustering
coefficient, and are not negligible when reasoning about the social
network itself \cite{wassermann94,newman02,vega-redondo07}.
When dealing with huge graphs, however, inverting a matrix as required
by Eq.~(\ref{eq:derivation_indirect_trust}) poses an issue of
computation time and memory. Yet, instead of inverting a matrix or
computing eigenvectors, it is possible to use an iterative method
\cite{brandes05} as follows:
\begin{eqnarray}
\tilde{T}_{ij}^{(k+1)}=S_{ij} + \beta \sum_{l \in N_i} S_{il} \tilde{T}_{lj}^{(k)} \quad \forall i,j.
\label{eq:iterative_computation_indirect_trust}
\end{eqnarray}
At each step $k$, one only needs the neighbourhood $N_i$ of a given
agent $i$, as well as access to the matrix of $\tilde{T}^{(k-1)}$
computed at the previous step $k-1$. Notice that now we are computing
a \textit{matrix} while, with the centrality, e.g.~in PageRank, we
were computing a \textit{vector}. This is natural since the centrality
is one value per agent (it is a global notion), while trust is a value
per pair of agents (it is a local, personalised notion). Therefore
computing trust ($\sim O(N^2)$) is inherently more expensive than
computing centrality ($\sim O(N)$). However, do we really need to
compute indirect trust among all agents? In fact, for a given agent
$i$, computing the trust towards a well-chosen set of other agents $j$
will be sufficient, as the trust towards agents far away in
the network will be damped out anyway. So, the trust computation
rather scales as $\sim O(mN)$, where $m$ is the number of
other agents $j$ to consider for each agent $i$.
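A sketch of this iteration with sparse dictionaries follows; the toy
graph, the sweep cap, and the tolerance are assumptions made for
illustration only.

```python
# Sparse fixed-point iteration of
#   T~^(k+1)_ij = S_ij + beta * sum_{l in N_i} S_il * T~^(k)_lj,
# storing only nonzero entries per agent.
def iterate_trust(S, beta=0.8, sweeps=500, tol=1e-12):
    Tt = {i: dict(row) for i, row in S.items()}  # start from T~^(0) = S
    for _ in range(sweeps):
        new = {}
        delta = 0.0
        for i, row in S.items():
            acc = dict(row)                      # the S_ij term
            for l, s_il in row.items():          # l runs over N_i
                for j, t_lj in Tt.get(l, {}).items():
                    acc[j] = acc.get(j, 0.0) + beta * s_il * t_lj
            new[i] = acc
            for j in set(acc) | set(Tt[i]):
                delta = max(delta, abs(acc.get(j, 0.0) - Tt[i].get(j, 0.0)))
        Tt = new
        if delta < tol:
            break
    return Tt

# A directed 3-cycle: 0 trusts 1, 1 trusts 2, 2 trusts 0.
S = {0: {1: 1.0}, 1: {2: 1.0}, 2: {0: 1.0}}
Tt = iterate_trust(S)
```

Each sweep only touches the neighbourhoods $N_i$ and the entries of
the previous iterate, which is exactly the access pattern assumed by
Eq.~(\ref{eq:iterative_computation_indirect_trust}).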
\section{An Application of the Metric}
\label{sec:application-of-the-metric}
So far, we have described a trust metric which allows us to compute a
measure of trust between two agents which are not necessarily
neighbours in a social network. We will now construct a simple model
which applies this metric in the context of a \textit{recommender
system}. The purpose is to show how it is possible to compute
predictions of how much an agent $i$ would like a particular object $o$
(say a book, CD, or movie) based on how other agents $j$ liked that item
combined with how much $i$ trusts $j$.
\subsection{A Simple Model}
\label{sec:simple-model}
Suppose we have a system of agents embedded in a social network,
defined by a graph and associated to an adjacency matrix $A$. Each
agent $i$ keeps track of its trust relationships to neighbours
$j$. These are reflected in the matrix of direct trust $T$. Obviously,
$T_{ij}>0$ only if $A_{ij}=1$. For the moment, we take the network to
be described by a random graph \cite{erdos59,bollobas85} in which each
agent has, on average, degree~$d$.
Let each agent $i$ be characterised by a profile $\pi_{i}$. The
profile expresses which ratings an agent would give to all possible
objects; however, agents only know a subset of their ratings on
objects. Given an object $o$, $r_i^o \in \{-1,1\}$ is the rating of
agent $i$ on object $o$. If an agent is willing to share all its
opinions with other agents, then the set of all of its ratings
corresponds to its profile; however, there may be agents which are not
willing (because they want to keep their secrets) or able (because
they simply do not know particular objects) to share ratings. This can
be captured by a parameter $\eta$ which reflects the probability that an
agent shares -- i.e.~signals -- its rating with other agents. E.g.,~a
value of $\eta=0.1$ would imply that, on average, at each time step
10\% of the agents share their ratings with the other
agents. At the moment, $\eta$ is the same value for all agents, but it
could also be set differently for each agent $i$ or even for each pair
of agents $i$ and
$j$.
If an agent $i$ is not willing or able to share its rating for an
object $o$, the system computes a prediction $p_i^o$ as follows:
\begin{eqnarray}
p_i^o=\sum_{j \in N_i}{\tilde{S}_{ij}r_j^o},
\label{eq:definition_prediction}
\end{eqnarray}
so $p_i^o \in [-1,1]$, since $\sum_{j \in N_i}\tilde{S}_{ij}=1$ and
$r_j^o \in \{-1,1\}$. In vector notation,
\begin{eqnarray}
p = \tilde{S}r,
\label{eq:eq:definition_prediction_matrix}
\end{eqnarray}
i.e.~the prediction for an agent $i$ is the sum of the ratings of all
neighbours $j$ weighted by the indirect, normalised trust that agent
$i$ has in these neighbours $j$.
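The prediction step then reduces to a weighted sum; a sketch with
illustrative values, where the weights are assumed to be the
normalised indirect trust of agent $i$:

```python
# p_i^o = sum_j S~_ij * r_j^o over the neighbours j of agent i.
def predict(trust_row, ratings):
    """trust_row: neighbour -> normalised indirect trust (sums to 1);
       ratings:   neighbour -> rating in {-1, +1}."""
    return sum(w * ratings[j] for j, w in trust_row.items())

p = predict({1: 0.7, 2: 0.3}, {1: +1, 2: -1})  # approx. 0.7 - 0.3 = 0.4
```

Since the weights sum to one and the ratings lie in $\{-1,+1\}$, the
prediction always lies in $[-1,1]$, as stated above.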
Note that this bears resemblance to Collaborative Filtering (CF)
\cite{goldberg92,herlocker99} in which the prediction for an agent $i$
is also computed as a weighted sum of the ratings of all neighbours
$j$ (not neighbours in a graph-theoretic sense, but neighbours in
terms of similarity of ratings). The more similar a neighbour, the
more influential its rating will be for the prediction. In our case,
making a prediction based on the ratings of the trusted neighbours
implies that we make the assumption that agents who are connected by
trust have similar mind-sets. Notice that this does not imply that
they have rated the same items -- for example, one user could
appreciate the knowledge of another user in gardening, even though his
own domain is travel books. Thus, unlike the similarity that could be
computed e.g.~by Pearson correlation, this notion of similarity
extends not just across rated items, but rather is an ``expected''
similarity reflecting a similar mind-set of two agents.
\subsection{Trust Dynamics}
\label{sec:trust_dynamics}
So far, we have a static model which, based on the trust web of a
particular agent $i$ and the ratings $r_j^o$ of its neighbours $j$, is
able to compute predictions $p_i^o$ for that agent. We now would like
to model the evolution of the trust network over time in the sense
that, based on the quality of a particular recommendation, agent $i$
can update its trust to its neighbours $j$. This adds a time dimension
to the model and requires a mechanism to update the trust between
neighbours. This can be done by adding a utility function: agents
experience a utility by using the ratings or predictions of neighbours
and then the trust update is coupled with the utility experienced. We
define each agent $i$ to experience a utility $u_{ij}(t)$ by following
the recommendation from each neighbour $j$ at time $t$ as follows:
\begin{eqnarray}
u_{ij}(t) = \left\{ \begin{array}{ll}
1 - |r_i^o(t)-r_j^o(t)| & \textrm{if\ } j \textrm{\ signals to\ } i \\
1 - |r_i^o(t)-p_j^o(t)| & \textrm{otherwise}. \\
\end{array}\right.
\label{eq:definition_utility}
\end{eqnarray}
Note that $u_{ij}(t) \in [-1,1]$. If the neighbour $j$ signals to
agent $i$, it knows the rating $r_j^o(t)$; otherwise, it only knows a
prediction $p_j^o(t)$. The closer the recommendation of agent $j$ is
to the rating of agent $i$, the greater the agents' similarity, and
thus the higher the utility $u_{ij}(t)$ that agent $i$ experiences
from the recommendation of agent $j$ at step
$t$. Note that because of the level of cooperation $\eta$ -- which
affects whether agent $j$ signals to $i$ -- the utility takes into
account not only similarity \cite{ziegler06}, but also cooperation
between agents. Based on the utility, agent $i$ can update the trust
towards its neighbour agents $j$. We distinguish four cases, based on
the \textit{sign} and the \textit{magnitude} of the utility:
\begin{itemize}
\item If the sign is positive, this means that the rating or
prediction of a neighbour was good; if it is negative, it means that
the rating or prediction was bad.
\item If the magnitude is large, the neighbour had a lot of trust in
the rating/prediction of its own neighbours; if it is small, the
neighbour had little trust in the rating/prediction of its own
neighbours.
\end{itemize}
This leads us to the following definition of how an agent $i$ updates
its trust to agent $j$ from time $t$ to $t+1$:
\begin{eqnarray}
\breve{T}_{ij}(t+1) = \left\{ \begin{array}{l}
\gamma T_{ij}(t) + (1-\gamma) |u_{ij}(t)| \\
\hspace{1ex} \textrm{if\ } u_{ij}(t) > u_{thr} \textrm{\ or\ } -u_{thr} \leq u_{ij}(t) \leq 0 \\
\gamma T_{ij}(t) - (1-\gamma) |u_{ij}(t)| \\
\hspace{1ex} \textrm{if\ } u_{ij}(t) < -u_{thr} \textrm{\ or\ } 0 < u_{ij}(t) \leq u_{thr}
\end{array}\right.
\label{eq:trust_update}
\end{eqnarray}
where we take $u_{thr}=0.5$ and $\gamma \in [0,1]$ is a parameter that
controls the relative weights of the current history of trust between
two agents, $T_{ij}(t)$, and of the current utility, $u_{ij}(t)$. For
$\gamma>0.5$, this gives the history of trust more weight than the
current utility. In the analysis and simulations (next section), we
found that $\gamma=0.75$ is a reasonable value. Since $\breve{T}_{ij}
\in [-1,1]$, but we want $T_{ij} \in [0,1]$, we cap it to $[0,1]$:
\begin{eqnarray}
T_{ij}(t+1)=\max(0,\min(1,\breve{T}_{ij}(t+1))).
\label{eq:trust_update_minmax}
\end{eqnarray}
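The update and capping rules transcribe directly into code; a sketch
with $u_{thr}=0.5$ and $\gamma=0.75$ as in the text, and illustrative
input values:

```python
# Trust update: the utility moves trust up when it is strongly positive
# (u > u_thr) or weakly negative (-u_thr <= u <= 0), down otherwise;
# the result is capped to [0, 1].
def update_trust(t_ij, u_ij, gamma=0.75, u_thr=0.5):
    if u_ij > u_thr or -u_thr <= u_ij <= 0:
        t_new = gamma * t_ij + (1 - gamma) * abs(u_ij)
    else:
        t_new = gamma * t_ij - (1 - gamma) * abs(u_ij)
    return max(0.0, min(1.0, t_new))

t1 = update_trust(0.5, 0.9)   # strongly positive utility: trust rises
t2 = update_trust(0.5, 0.3)   # weakly positive utility: trust falls
t3 = update_trust(0.0, -0.9)  # strongly negative utility, capped at 0
```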
\begin{figure*}[t]
\centering
\includegraphics[width=0.23\textwidth]{model-illustration-1}
\includegraphics[width=0.23\textwidth]{model-illustration-2}
\includegraphics[width=0.23\textwidth]{model-illustration-3}
\includegraphics[width=0.23\textwidth]{model-illustration-5}
\caption{Social network of agents and trust build-up over time in
case of a fraction of agents not signalling as well as cycles in
the underlying network: there are two profiles, red and blue,
indicated by the cores of the node. Only square nodes are
signalling; e.g., nodes 2, 3, and 4 are not signalling. There are
two cycles from 4 to 7 and 8, respectively, to 9 and then to
4. After a few steps, the nodes learn which other nodes to trust.}
\label{fig:model-illustration}
\end{figure*}
As an example, the effects of these dynamics are illustrated in Figure
\ref{fig:model-illustration}: this is an example of a network of
agents having two profiles (red and blue). Some nodes are signalling
(squares), others are not (circles). The network contains cycles. At
$t=1$, the agents are just connected, the trust between all agents is
equal to zero. At $t=2$, agent 3 and agent 4 have received
recommendations from agents 5 and 6, and from agents 7 and 8,
respectively. Since agent 3 (4) has the same profile as agents 5 and 6
(7 and 8), namely red (blue), it perceives a high positive utility
from the recommendation and thus increases its trust to the
recommending agents. At $t=3$, the system can now provide a
recommendation to agent 2, even though agents 3 and 4 are not
signalling their own rating. Since agent 2 has the same profile as
agent 3, trust between these two agents increases. Agent 2 perceives a
high negative utility from the recommendation of agent 4, thus its
trust remains zero. At the same time, the links from 3 to 5 and 6
are reinforced. The same happens in the cycles. These mechanisms continue
and we see that at $t=5$, paths of trust have developed between agents
of the same profile. Although agent 1 has no agents of its profile
that are signalling in one or two levels of distance, it is still able
to discover a path to two agents of its profile that are signalling
further away in the network.
\subsection{Analysis of the Model}
\label{sec:analysis-model}
In this section we derive a self-consistent equation for the matrix of
trust which allows us to investigate the dynamics of trust. We analyse
the case of a population of agents with only two opposite profiles
(see Section~\ref{sec:simple-model}) which provide ratings on objects
as $+1$ or $-1$, respectively.
We want to compute the expected value of trust at the equilibrium of
the dynamics defined in Eqs.~(\ref{eq:trust_update})
and~(\ref{eq:trust_update_minmax}). We do so by a mean-field
approximation in which we replace the utility $u_{ij}(t)$ in
Eq.~(\ref{eq:trust_update}) with the expected utility over time,
denoted by $u_{ij}:=E(u_{ij}(t))$ (without the time dependency). We
impose $T_{ij}(t)=T_{ij}(t+1)$ at the equilibrium, obtaining
\begin{eqnarray}
T_{ij}=\max(\min(u_{ij},1),0),
\label{eq:equilibrium_T_1}
\end{eqnarray}
which requires us to estimate $u_{ij}$. Given the definition of
$u_{ij}(t)$ in Eq.~(\ref{eq:definition_utility}) and the fact that
agents signal a rating with probability $\eta$ and they do not with
probability $1-\eta$, it follows that the expected utility $u_{ij}$ is
\begin{eqnarray}
u_{ij}= \eta (1 - |\pi_i-\pi_j| ) +(1-\eta)( 1 - |\pi_i-\sum_{k} \tilde S_{jk} \pi_{k}|).
\label{eq:expected_utility}
\end{eqnarray}
Since we are considering the simple case in which agents signal
faithfully, the expected rating provided by an agent $j$ coincides
with its profile: $E(r_j^o) = \pi_j$. We can thus express the expected
prediction for agent $j$ as $E(p_j^o) = \sum_{k} \tilde S_{jk}
\pi_{k}$. In future work, we will also consider more complicated
cases, e.g.~including non-faithful (selfish or malicious)
behaviour. Substituting into Eq.~(\ref{eq:equilibrium_T_1}), we get:
\begin{multline}
T_{ij} = \max(0,\min(1,\eta (1 - |\pi_i-\pi_j| ) \\
+ (1-\eta)( 1 - |\pi_i-\sum_{k} \tilde S_{jk} \pi_{k}|))).
\label{eq:equilibrium_T_3-tmp}
\end{multline}
Since the profiles $\pi$ are given, $T$ is a function of $\tilde
S$. Notice that by combining
Eqs.~(\ref{eq:role_of_beta}), (\ref{eq:definition_normalized_indirect_trust}) and (\ref{eq:definition_normalized_direct_trust}),
we can express $\tilde S_{jk}$ in terms of the components $T_{jk}$,
$(T^2)_{jk}$, $(T^3)_{jk}, \ldots$ as well as $T_{jl}$, $(T^2)_{jl}$,
$(T^3)_{jl}, \ldots$ where $l$ are the other neighbours of $j$:
\begin{eqnarray}
\tilde S_{jk}
&=& \frac{T_{jk} + \beta (T^2)_{jk} + \beta^2 (T^3)_{jk} + \dots}{\sum_{l}\left(T_{jl} + \beta (T^2)_{jl} + \beta^2 (T^3)_{jl} + \dots\right)}.
\label{eq:tildeS(T)}
\end{eqnarray}
It follows that we can express the value of trust $T_{ij}$ between
any pair of agents in terms of the value of trust among the other
pairs. This leads to a self-consistent equation for $T$, where the
only parameters are the initial values of trust $T(0)$, the
probability to signal, $\eta$, the discount factor along the walks of
the graphs, $\beta$, and the profiles of the agents, $\pi$:
\begin{eqnarray}
T_{ij}= f(T,T(0),\eta,\beta,\pi)\ \forall i,j.
\label{eq:equilibrium_T_3}
\end{eqnarray}
Notice that Eq.~(\ref{eq:equilibrium_T_3}) is obtained without any
assumption on the structure of the network that is reflected in $T$.
One is, of course, interested in the fixed points of Eq.
(\ref{eq:equilibrium_T_3}), their stability and whether they are
attained by the dynamics. On the one hand, it is trivial to check that
the matrix $T$ with $T_{ij}=1$ among agents with the same profile and
$T_{ij}=0$ among agents with opposite profile is a fixed point of
Eq.~(\ref{eq:equilibrium_T_3}). Denote this configuration as
$\{T^+=1,T^-=0\}$. On the other hand, the configuration with trust
equal to zero among all pairs, $\{T^{+,-}=0\}$, is not a fixed point.
In the next section, we find, by means of computer simulations, that
the system, starting from a configuration with no trust among the
agents, $\{T^{+,-}=0\}$, always evolves to a configuration in which
agents with similar profile trust each other $\{T^+=1,T^-=0\}$. This
is true even if agents do not signal all the time (i.e. $\eta < 1$).
A formal investigation of the stability of all the fixed points of
Eq.~(\ref{eq:equilibrium_T_3}) will be performed in future work.
\subsection{Simulations}
\label{sec:simulations}
We carried out simulations on a population
of $500$ agents. We considered two opposite profiles with ratings on
objects as $+1$ or $-1$. The agents are connected in a random graph
\cite{erdos59,bollobas85}. Initially, $T_{ij}=0 \quad \forall i,j$,
i.e.~the agents have to learn who to trust. We varied the average
degree $d$ of each agent, as well as the level of cooperation $\eta$
in the system. The following figures illustrate the system behaviour
over $50$ steps; all results were averaged over $100$ runs.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{trust_d_fixed_07_n_variable}
\vspace{-1.5ex}
\caption{Trust between agents of the same profile over time, for a
fixed average degree of agents but variable level of cooperation.}
\label{fig:trust-d-n-time}
\end{figure}
Figure~\ref{fig:trust-d-n-time} illustrates the average trust between
agents of the same profile over time: the average degree of agents is
fixed, $d=7$, and the level of cooperation $\eta$ is variable, ranging
from $0.01$ to $0.25$ in steps of $0.01$. The average trust between
agents of the same profile converges to $1$ for almost all $\eta$. For
larger $\eta$, this process takes place much faster than for smaller
$\eta$. Given a sufficient level of cooperation in the system, the
agents develop trust to the agents that have the same
profile. Furthermore (this is not shown in the figure), agents of
opposite profiles do not develop trust between each other.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{trust_d_n_05}
\includegraphics[width=0.4\textwidth]{trust_d_n_10}
\caption{Trust between agents of the same profile as a function of
level of cooperation and average degree of agents at $t=5$ (left),
and $t=10$ (right).}
\label{fig:trust-d-n}
\end{figure}
Figure~\ref{fig:trust-d-n} illustrates the trust between agents of the
same profile as a function of the level of cooperation and the average
degree of agents at $t=5$, and $t=10$. Initially, at $t=0$, agents
still have to learn who to trust (and the whole figure would be blue,
corresponding to zero trust between everyone). At $t=5$, trust is
already developing; for larger average degrees of agents $d$ as well
as for larger levels of cooperation $\eta$, this happens faster. At
$t=10$, trust between agents of the same profile has developed for an
average degree of agents $d>5$ and a level of cooperation $\eta>0.05$.
The obvious consequence of the evolution of trust is that predictions
tend to match the profiles. We test this by measuring the performance
of the system. Let the performance be defined as the sum of the
products of the utility and the trust between all pairs of agents $i$
and $j$:
\begin{eqnarray}
\label{eq:performance}
\Phi = \frac{1}{n}\sum_{i}\sum_{j}u_{ij}\frac{T_{ij}}{\sum_{k}T_{ik}},
\end{eqnarray}
where $n$ is the number of agents, in our case $n=500$. Agents
are exposed to ratings which lead to both positive or negative
utility. By building trust, they give more weight to the positive
utility and less weight to the negative utility. Therefore, this
measures ``how well agents use their trust''.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{performance_d_variable_n_fixed_0-10}
\vspace{-1.5ex}
\caption{Performance over time, for a variable average degree of
agents, but a fixed level of cooperation.}
\label{fig:performance-d-n-time}
\end{figure}
Figure~\ref{fig:performance-d-n-time} illustrates the performance over
time: again, the average degree of agents is fixed, $d=7$, and the
level of cooperation $\eta$ is variable, ranging from $0.01$ to $0.25$
in steps of $0.01$. The performance converges to $1$ for almost all
$\eta$. The similarity to Figure~\ref{fig:trust-d-n-time} is due to the
fact that agents who have developed trust to other agents of the same
profile are provided with good recommendations from their neighbours;
thus, these agents perceive high utility which leads to high
performance.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{performance_d_n_01}
\includegraphics[width=0.4\textwidth]{performance_d_n_05}
\caption{Performance as a function of level of cooperation and
average degree of agents at $t=1$ (left) and at $t=5$ (right).}
\label{fig:performance-d-n}
\end{figure}
Finally, Figure~\ref{fig:performance-d-n} illustrates the performance
as a function of the level of cooperation and the average degree of
agents at $t=1$ and at $t=5$. Again, just as the trust between agents
of the same profile increases in Figure~\ref{fig:trust-d-n}, the
performance increases with increasing average degree of agents and
level of cooperation. One might wonder how, at $t=1$, the performance
can already be nonzero -- this is due to the fact that there are only
two opposite profiles; this implies that half of the neighbours of an
agent are of the same profile and, as soon as an agent has developed
some trust to one of these neighbours, it will benefit from their
recommendations which, again, drives the performance up.
\subsection{Empirical Validation}
\label{sec:empirical-validation}
To support the analytical approximations of the model and the results
of the computer simulations, we empirically tested the performance of
a recommender system using our TrustWebRank (TW) metric against one
using a standard Collaborative Filtering (CF) approach, similarly to
what has been done in \cite{massa07}. We crawled Epinions.com, an
on-line platform which allows consumers to read and write reviews
about products. The unique feature of Epinions is that users can also
form a ``web-of-trust'' and specify other users that they trust with
respect to their reviews. The crawling was performed in mid-2007 and
led to a dataset of 60,918 users with 896,969 reviews on 223,687
products and with 518,505 relationships. We cleaned this dataset and
removed users that either had not written any reviews or had no
relationships to other users because no reasonable validation can be
done with these users. Furthermore, we focus on the largest strongly
connected component (SCC) because a) there is only one large SCC and
many small SCCs (1-3 users) and b) membership in this SCC can be seen
as a proxy for having a properly formed web of trust. Having applied
this procedure, we are left with 29,478 users, 731,220 reviews on
201,674 products, and 471,888 relationships. The data sparsity is
99.9877\%, and ratings range from 1 star (min) to 5
stars (max). There is a bias to review favourably, as 75\% of the
ratings are either 4 or 5 stars and only 25\% are 1, 2
or 3 stars -- probably because users are more likely to spend time to
write a review when they like a product.
We split the reviews into a training set $R_{\mathrm{Training}}$ and a
test set $R_{\mathrm{Test}}$. We then compare the performance of TW
and CF by training the algorithms on $R_{\mathrm{Training}}$ and
testing with $R_{\mathrm{Test}}$. TW, in general, has comparable
performance to CF, and performs better in particular situations, as we
will describe in the following. The complete empirical validation
will, together with some statistical analyses of the Epinions
community, be reported on in a separate paper
\cite{walter09-epinions}.
\textbf{Mean Absolute Error}: the mean absolute error (MAE) is defined
as
\begin{eqnarray}
e_{\mathrm{MAE}} = \frac{1}{|R_{\mathrm{Test}}|} \sum_{R_{\mathrm{Test}}}|r_i^o-p_i^o|.
\label{eq:mae}
\end{eqnarray}
Figure \ref{fig:mae} shows the MAE of TW for changing $\beta$ and
CF. Depending on the value of $\beta$, TW performs (marginally) better
than CF. There is an optimal $\beta_{\mathrm{opt}} \approx 0.8$.
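As a concrete illustration (not part of the system itself), the MAE of Eq.~(\ref{eq:mae}) amounts to the following computation; function and variable names are ours:

```python
def mean_absolute_error(ratings, predictions):
    """MAE of Eq. (mae): average absolute deviation between actual
    ratings r_i and predicted ratings p_i over the test set."""
    pairs = list(zip(ratings, predictions))
    return sum(abs(r - p) for r, p in pairs) / len(pairs)

# toy example on a 1-5 star scale
print(mean_absolute_error([4, 5, 3], [4, 4, 5]))  # -> 1.0
```

Dividing the result by the width of the rating scale normalises it to $[0,1]$, as in Figure~\ref{fig:mae}.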
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\textwidth]{error_absolute_against_beta_scc}
\caption{Mean Absolute Error of TW (blue/circles) against $\beta$
and CF (red/squares). The MAE is normalised to a scale in $[0,1]$,
i.e.~it reflects percentages.}
\label{fig:mae}
\end{figure}
However, the fact that most ratings are 4 or 5 limits the meaning of
the MAE as a measure of performance. Indeed, predictions based on the
Simple Average (SA) of ratings on a product, a global algorithm which
is not personalised for users, outperform both TW and CF:
$e_{\mathrm{MAE}}(SA)=0.21$. Similar results were found in
\cite{massa07} using a different dataset of Epinions (from 2003). An
explanation for this is that reviews are very homogeneous and almost
all ratings are positive. Other datasets, such as the commonly used
MovieLens dataset, are more heterogeneous and SA performs worse than
CF on such datasets. Unfortunately, at the moment, Epinions is the
only available dataset which combines rating data and a social network
-- and which is thus suitable to test the performance of TW.
\textbf{Coverage}: coverage measures the percentage of elements that
can be predicted from the training set. Both TW and CF cannot compute
predictions for all elements in the test set. For example, if there is
no similar or trusted user who has rated a particular product, CF or
TW are not able to compute a prediction for that product. CF was able
to compute 41.65\
75.11\
with CF. The reason for this is that TW is able to reach a large
neighbourhood even when the neighbourhood based on co-ratings, as in
CF, is small.
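Coverage can be sketched as follows, assuming a hypothetical `predict` interface that returns `None` whenever no prediction is possible (names and interface are our illustrative choices):

```python
def coverage(test_pairs, predict):
    """Fraction of (user, product) pairs in the test set for which the
    algorithm can produce a prediction (predict returns None otherwise)."""
    n_predicted = sum(1 for u, i in test_pairs if predict(u, i) is not None)
    return n_predicted / len(test_pairs)

# toy predictor that only covers products 1 and 2
predict = lambda user, item: 4.0 if item in (1, 2) else None
print(coverage([(0, 1), (0, 2), (0, 3), (0, 4)], predict))  # -> 0.5
```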
\textbf{Top-N Set Overlap}: as noted, the value of ratings in Epinions
does not seem to carry a lot of meaning -- probably because people
tend to rely more on the text of reviews than on the
rating. Therefore, it makes sense to compare the performance based on
the ability to predict the subset of products rated by a user. We
define the following measures of overlap between sets:
\begin{eqnarray}
o_i^{N} = \frac{|P_i \cap R^{N}|}{\min(|P_i|,N)} \quad \mathrm{and} \quad o_{X,i}^{N} = \frac{|P_i \cap R_{X,i}^{N}|}{\min(|P_i|,N)}
\label{eq:overlap-user}
\end{eqnarray}
where $P_i$ is the set of products rated by a user $i$; $R^{N}$ is the
set of the $N$ most rated products overall in the system; $X$ denotes
either CF or TW and thus $R_{CF,i}^{N}$ and $R_{TW,i}^{N}$ are the
sets of the $N$ most rated products in the neighbourhood of a user $i$
constructed by CF and TW. Note that $R^{N}$ is a global set which is
the same for all users $i$. Thus, $o_i^N$ is the counterpart of
$e_{\mathrm{MAE}}(SA)$ in this context. $R_{CF,i}^{N}$ and
$R_{TW,i}^{N}$ are personalised sets which depend on the neighbourhood
of user $i$ and thus are different for any two users. We define the
average overlap across all users as $O^N$, $O^N_{CF}$, and
$O^{N}_{TW}$. For $N=100$, we obtain $O^{N} \approx 0.0819$,
$O_{CF}^{N} \approx 0.2526$ and $O_{TW}^{N} \approx 0.1724$. Since a
larger overlap signifies a better prediction, the larger the values,
the better the performance. This implies that the global measure
$O^N$ performs worse than both $O_{CF}^N$ and $O_{TW}^N$. In
addition, CF performs better than TW. However, it should be emphasised
that this measure is obviously biased in favour of CF: by definition,
$P_i \cap R^N_{CF,i} \neq \emptyset$. In contrast, $P_i \cap
R^N_{TW,i}$ can be empty, as a user does not necessarily declare trust
to people who have rated the same items. Still, TW performs
significantly better than the global measure $O^N$.
This illustrates the difficulty of comparing the performance of TW
with that of CF. In fact, the most appropriate way to measure performance would be
based on user-provided feedback subsequent to having followed a
recommendation.
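The set-overlap measures of Eq.~(\ref{eq:overlap-user}) amount to the following computation (a sketch; names are ours, and the top-N list stands for either the global, the CF-based or the TW-based set):

```python
def topn_overlap(rated_products, top_n_list, N):
    """Overlap o^N of Eq. (overlap-user): |P & R^N| / min(|P|, N), where
    P is the set of products rated by a user and R^N a top-N list."""
    P = set(rated_products)
    R_N = set(top_n_list[:N])
    return len(P & R_N) / min(len(P), N)

# toy example: 2 of the user's 3 rated products appear in the top-4 list
print(topn_overlap([1, 2, 3], [2, 3, 9, 10], N=4))  # -> 2/3
```

Averaging this quantity over all users gives $O^N$, $O^N_{CF}$, and $O^N_{TW}$.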
In conclusion, we found that TW and CF have comparable performance. TW
seems mostly useful for recommendations of items different from those
a user has already rated -- e.g.~recommendations on travel books for
people usually interested in tools for gardening.
\section{Extensions and Conclusion}
\label{sec:extensions-conclusion}
We introduced a novel metric for computing indirect trust in social
networks. We derived this metric from feedback centrality measures in
graphs and illustrated how it addresses some limitations of other
trust metrics; most importantly, that it takes cycles in the
underlying graph into account. We constructed a simple model of a
recommender system that makes use of our metric and showed how
indirect trust can be used to generate recommendations. We performed
analytical approximations and computer simulations to characterise the
system behaviour. Finally, we also tested the model by validating it
with empirical data of an Internet community devoted to product
reviews.
Some extensions to this model could involve changing the trust
dynamics:
\textit{Trust update as a slow-positive, fast-negative dynamics.} It
has been observed in the literature that trust follows a
slow-positive, fast-negative dynamics
\cite{abdul-rahman00,grandison00,marsh94,sabater05,walter08-jaamas}. This
means that trust builds up slowly, but gets torn down quickly and this
behaviour could be implemented by modifying
Eq.~(\ref{eq:trust_update}).
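A minimal sketch of such an asymmetric rule is given below. It is a hypothetical modification, not the system's actual Eq.~(\ref{eq:trust_update}): the exponential-smoothing form and the rates `gamma_slow`/`gamma_fast` are our illustrative choices.

```python
def update_trust(trust, feedback, gamma_slow=0.05, gamma_fast=0.5):
    """Slow-positive, fast-negative trust dynamics: trust in [0, 1]
    climbs slowly on positive feedback but is torn down quickly on
    negative feedback (asymmetric exponential smoothing)."""
    gamma = gamma_slow if feedback >= trust else gamma_fast
    return trust + gamma * (feedback - trust)

print(update_trust(0.5, 1.0))  # slow rise toward 1
print(update_trust(0.5, 0.0))  # fast drop toward 0
```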
\textit{Coupling the utility with the level of cooperation $\eta$.}
In real applications, if, initially, the utility for users is zero,
then nobody will signal and this is a fixed point -- and a social
dilemma \cite{hardin68}. Thus, we could couple the probability of
signalling to the utility and investigate how to make the system
escape from this undesirable fixed point.
With this work, we have shown that incorporating this novel trust
metric in recommender systems is a promising and viable approach.
\bibliographystyle{acm}
|
2,869,038,157,082 | arxiv | \section{Introduction}
Chern Simons gauge theories \cite{Deser:1981wh,Deser:1982vy} have a wide range of interesting applications in both high energy and condensed matter physics. In the former case, they appear upon dimensionally reducing finite temperature four dimensional field theories with fermions \cite{Redlich:1984md}. In condensed matter physics, they are used to model the quantum Hall effect \cite{Frohlich:1990xz,Frohlich:1991wb,PhysRevB.80.205319}.
In an accompanying paper, we show that a $2+1$ dimensional $\rm{U(1)}_{\scriptscriptstyle \cal Z} \times \rm{U(1)}_{\scriptscriptstyle \cal A}$ Chern Simons theory exhibits superconductivity to arbitrarily high temperatures. This is achieved by turning on a constant $\rm{U(1)}_{\scriptscriptstyle \cal A}$ magnetic field that spontaneously breaks the $\rm{U(1)}_{\scriptscriptstyle \cal Z}$ symmetry \cite{3dsuperconductor}. It is well known that the vacuum manifold of spontaneously broken gauge symmetries can have non-trivial topology \cite{Kibble:1976sj}. In particular, breaking ${\rm U(1)} \to 1$ gives rise to topological vortices \cite{Abrikosov:1956sx,Nielsen:1973cs}. This work is devoted to the study of the non-perturbative sector of the $\rm{U(1)}_{\scriptscriptstyle \cal Z} \times \rm{U(1)}_{\scriptscriptstyle \cal A}$ theory in a constant $\rm{U(1)}_{\scriptscriptstyle \cal A}$ magnetic field background, which is crucial to understand the nature of the superconducting vacuum at high temperatures. In fact, at low temperatures these vortices cannot alter the vacuum structure since it is expensive to produce them. However, at high enough temperatures they may proliferate and change the properties of the superconducting vacuum. This is the celebrated Berezinsky-Kosterlitz-Thouless (BKT) transition \cite{Berezinsky:1970fr,Kosterlitz:1973xp} that occurs in 2 dimensional systems.
The vortices that we describe in this work share the main characteristics of the vortices that we have studied in a previous work \cite{Anber:2015kxa}. The most striking property of both vortices is that they exhibit long range interactions since $\rm{U(1)}_{\scriptscriptstyle \cal A}$ remains unbroken in the infrared (see our accompanying work \cite{3dsuperconductor} and also \cite{Anber:2015kxa} for a detailed discussion). Therefore, a system of a vortex and an antivortex will be logarithmically confined. As we mentioned, the vortices in this work exist in a constant $\rm{U(1)}_{\scriptscriptstyle \cal A}$ magnetic field background. Interestingly, we find that the magnetic field is depleted near the core, hence we call them diamagnetic vortices. In addition, they break $C$ and $P$ symmetries unlike their cousins studied in Ref.~\cite{Anber:2015kxa}.
The plan of this paper is as follows. In Sec.~\ref{sec:symmetry breaking}, we describe the action of the $\rm{U(1)}_{\scriptscriptstyle \cal Z} \times \rm{U(1)}_{\scriptscriptstyle \cal A}$ Chern Simons gauge theory and summarize some basic properties of the constant $\rm{U(1)}_{\scriptscriptstyle \cal A}$ magnetic field solution. The perturbative vacuum structure of the theory is studied in great detail in our accompanying paper \cite{3dsuperconductor}. We study the topological vortex solution by making use of Nielsen-Olesen-like Ans\"atze in Sec.~\ref{sec:vortex}. We also obtain the asymptotic behavior of the solution in both the near-core and large-radius limits and show that the diamagnetic vortices break both $C$ and $P$ symmetries. Next, we numerically integrate the equations of motion to obtain the full solution for various values of the winding number. In Sec.~\ref{sec:properties}, we calculate the flux, charge and energy of the vortices. In Sec.~\ref{sec:bkt}, we sketch the essential elements of the BKT phase transition and derive the critical temperature. We conclude with a discussion of our results in Sec.~\ref{sec:discussion}.
\section{Spontaneous Symmetry Breaking by $\rm{\bf U(1)}_{\scriptscriptstyle \bf \cal A}$ Magnetic Field}
\label{sec:symmetry breaking}
Before describing the vortex solutions in this theory, we first summarize our results in Ref.~\cite{3dsuperconductor} for completeness. We consider the $\rm{U(1)}_{\scriptscriptstyle \cal Z} \times \rm{U(1)}_{\scriptscriptstyle \cal A}$ Chern-Simons theory with the action \cite{Anber:2015kxa,3dsuperconductor}
\begin{eqnarray} \begin{aligned} \label{action}
S &= \int d^3x \biggl [-\frac{1}{4} \mathcal{F}_{\mu\nu} \mathcal{F}^{\mu \nu} -\frac{1}{4} \mathcal{Z}_{\mu\nu} \mathcal{Z}^{\mu \nu}+ \mu_1 \epsilon^{\mu \nu \alpha} \mathcal{F}_{\mu\nu} \mathcal{Z}_{\alpha}~~~~~ \\
& \hskip 0.5cm + \frac{\mu_2}{2} \epsilon^{\mu \nu \alpha} \mathcal{Z}_{\mu\nu} \mathcal{Z}_{\alpha} + |D_\mu \varphi|^{2} -m^2|\varphi|^{2} \biggr ] \,,
\label{the main action of the paper}
\end{aligned}
\end{eqnarray}
where $\mathcal{F}_{\mu \nu} = \partial_\mu \mathcal{A}_\nu - \partial_\nu \mathcal{A}_\mu$, $\mathcal{Z}_{\mu \nu} = \partial_\mu \mathcal{Z}_\nu - \partial_\nu \mathcal{Z}_\mu$, $D_\mu = \partial_\mu - i e \mathcal{Z}_\mu$ and $m^2>0$. The Chern Simons coupling constants $\mu_1$ and $\mu_2$, and mass parameter $m$ have mass dimension $M$, whereas the gauge coupling constant $e$ has mass dimension $M^{1/2}$. We use natural units $c=1$, $\hbar =1$, set $k_B =1$, $\epsilon^{012} =1$, and use the metric $\eta_{\mu\nu} = {\rm diag}(1, -1,-1)$ in what follows\footnote{In this paper, we drop the $\lambda |\varphi|^4/4$ term in the action as this does not change the physics of the system, but introduces some trivial corrections. For a detailed discussion, see Ref.~\cite{3dsuperconductor}.}.
The equations of motion read
\begin{eqnarray} \begin{aligned} \label{field equations}
&\partial_\beta \mathcal{F}^{\beta \sigma} + \mu_1 \epsilon^{\beta \alpha \sigma} \mathcal{Z}_{\beta \alpha} = 0 \, , \\
&\partial_\beta \mathcal{Z}^{\beta \sigma} + \mu_1 \epsilon^{\beta \alpha \sigma} \mathcal{F}_{\beta \alpha} + \mu_2 \epsilon^{\beta \alpha \sigma} \mathcal{Z}_{\beta \alpha} +j^\sigma=0 \, , \\
&D_\beta D^{\beta} \varphi +m^2 \varphi=0 \, ,
\end{aligned}
\end{eqnarray}
where
\begin{equation}\label{current}
j^{\sigma} = i e \bigl [ \varphi^{*} D^{\sigma} \varphi - (D^{\sigma} \varphi)^{*} \varphi \bigr ] \, .
\end{equation}
There exists a solution to the field equations (\ref{field equations}) of the form
\begin{equation}\label{const soln}
B_{\cal A} = B \, , ~~~~~{\cal Z}_b^0(\varphi) = -\frac{\mu_1 B}{e^2 |\varphi|^2} \, ,
\end{equation}
where $B$ is a constant. An effective potential for the Higgs field in this background can be defined as
\begin{equation}\label{V_eff B}
V_{\rm eff}(\varphi, B) = \frac{\mu_1^2 B^2}{e^2|\varphi|^2} +m^2 |\varphi|^2
\, .
\end{equation}
The Higgs potential (\ref{V_eff B}) has a minimum at
\begin{equation}\label{B minima}
\varphi_0 = \pm \sqrt{\frac{\mu_1 |B|}{e m}} \, ,
\end{equation}
at which the background ${\cal Z}_\mu$ field takes the value
\begin{equation} \label{Z_0 background}
{\cal Z}_b^0(\varphi_0) = -\frac{m}{e} \frac{|B|}{B} \, .
\end{equation}
In what follows, we will take $B>0$ without loss of generality. Thus, even though the Higgs potential does not have a tachyonic mass parameter, the presence of a constant $\rm{U(1)}_{\scriptscriptstyle \cal A}$ magnetic field breaks the $\rm{U(1)}_{\scriptscriptstyle \cal Z}$ symmetry by turning on the term $(\mu_1^2 B^2)/(e^2|\varphi|^2)$ in the effective Higgs potential given by \eref{V_eff B}. This suggests that the vacuum manifold of $\varphi$, $\cal{M}$, has non-trivial topology, namely $\pi_1({\cal M}) = {\mathbb Z}$. Therefore, there exists a vortex solution, which we describe in the next section.
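The minimum (\ref{B minima}) is easily checked numerically. The sketch below, with the illustrative parameter values $e=m=B=1$, $\mu_1=1/4$ used in our figures, minimizes the effective potential (\ref{V_eff B}) and compares with the closed-form result:

```python
import numpy as np
from scipy.optimize import minimize_scalar

e, m, mu1, B = 1.0, 1.0, 0.25, 1.0   # illustrative parameter values

def V_eff(phi):
    # effective Higgs potential, Eq. (V_eff B)
    return mu1**2 * B**2 / (e**2 * phi**2) + m**2 * phi**2

res = minimize_scalar(V_eff, bounds=(1e-3, 10.0), method="bounded")
phi0 = np.sqrt(mu1 * abs(B) / (e * m))   # closed form, Eq. (B minima)
print(res.x, phi0)   # both ~ 0.5 for these parameters
```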
\section{Topological Vortex Solution}
\label{sec:vortex}
We are interested in cylindrically symmetric vortex solutions, and thus, we consider Nielsen-Olesen-like Ans\"atze for the Higgs and gauge fields \cite{Anber:2015kxa}
\begin{eqnarray}\label{ansatz}
\begin{aligned}
\varphi &= \varphi_0 f(r) e^{i n \theta}\,, \quad
\mathcal{Z}_i = -\epsilon^{i j} x_j \frac{Z(r)}{e r^2} \,, \\
\mathcal{Z}_0 &= e Z_0 (r) \,,\quad ~~~
\mathcal{A}_i = -\epsilon^{i j} x_j \frac{A(r)}{e r^2} \,, \\
\mathcal{A}_0 &= e A_0 (r) \, .
\end{aligned}
\end{eqnarray}
Note that we choose the above Ans\"atze such that all the profile functions $f(r), Z(r), Z_0(r), A(r)$ and $ A_0(r)$ are dimensionless. Upon plugging \eref{ansatz} into the equations of motion (\ref{field equations}), we obtain the following equations for the profile functions:
\begin{eqnarray}
\begin{aligned}
\label{the equations of profile functs}
&f'' + \frac{f'}{r} - (n-Z)^{2} \frac{f}{r^2} + e^4 Z_0^{2} f - m^2 f = 0 \, ,~~\\
&Z'' -\frac{Z'}{r} + 2e^2 \varphi_0^2 f^2 (n-Z) - 2e^2 r (\mu_1 A_0' + \mu_2 Z_0') = 0 \, ,~~~ \\
& Z_0'' +\frac{ Z_0'}{r} - 2e^2 \varphi_0^2 f^2 Z_0 - \frac{2}{e^2 r} (\mu_1 A' +\mu_2 Z') =0 \\
&A'' - \frac{A'}{r} - 2 \mu_1 e^2 r Z_0' = 0 \, , \\
& A_0'' + \frac{ A_0'}{r} - \frac{2\mu_1}{e^2 r} Z' = 0 \, ,
\end{aligned}
\end{eqnarray}
where $r$ is the dimensionful radius.
The last two equations can be integrated to yield the first order equations:
\begin{equation}\label{integrated equations}
A' = 2\mu_1 e^2 r Z_0 + {\cal D}_1 r \, , \qquad A_0' = \frac{2\mu_1}{e^2 r}Z + \frac{{\cal D}_2}{r} \, .
\end{equation}
We set ${\cal D}_1 = Be + 2 \mu_1 m /e^2$ in order to meet the requirement that the magnetic field very far from the core radius\footnote{For simplicity, we postpone the requirement that the magnetic flux has to be conserved for vortices, and construct the solution using an asymptotic magnetic field $B$ that has to be corrected later using the flux conservation (see Sec.~\ref{subsec:flux conservation}).} is $B$. In addition, the vortex solution must be well-behaved at the core which selects ${\cal D}_2 =0$.
\subsection{The Near-Core and Asymptotic Behavior}
At small $r$, the profile functions can be expanded in positive powers of $r$ and can be solved order by order to satisfy the equations of motion (\ref{the equations of profile functs}). This way, we can fix all the expansion coefficients except the five parameters $f_1,~ z_2,~ z_{00},~a_2,~ a_{00}$. To the leading order in $r$, we find
\begin{eqnarray}
\begin{aligned}
\label{small r}
f(r) \simeq& f_1 r^{|n|}+{\cal O}(r^3) \, , \\
Z(r) \simeq& z_2 r^2 + {\cal O}(r^4) \, , \\
Z_0(r) \simeq& z_{00} + \frac{a_2 \mu_1 + z_2 \mu_2}{e^2} r^2+ {\cal O}(r^4) \, , \\
A(r) \simeq& a_2 r^2 + {\cal O}(r^4) \, , \\
A_0(r) \simeq& a_{00}+\frac{z_2 \mu_1}{e^2} r^2 +{\cal O}(r^4) \, .
\end{aligned}
\end{eqnarray}
Similarly, at large $r$, the profile functions can be expanded in inverse powers of $r$ (see Ref.~\cite{Anber:2015kxa}). To the leading order we find:
\begin{eqnarray}
\begin{aligned}
\label{large r}
f^\infty(r) &\simeq f(\infty) - \frac{ \mu_1^2 n^2}{B e (B e + 2m \mu_1)} \frac{1}{r^2} + {\cal O}(1/r^4) \, , \\
Z^\infty(r) &\simeq Z(\infty) -\frac{4 m \mu_1 n^2 \left(n \mu_1^2 + m \mu_2 \right)}{(B e+2 m \mu_1)^3} \frac{1}{r^2} + {\cal O}(1/r^4) \, , \\
Z_0^\infty(r) &\simeq Z_0(\infty) -\frac{2\mu_1^2 m n^2}{e^2 (B e+2 m \mu_1)^2}\frac{1}{r^2} + {\cal O}(1/r^4) \, , ~~~~~\\
A^\infty(r) &\simeq A(\infty) +\frac{2 \mu_1^3 n^2 }{(Be + 2 m \mu_1)^4} \Bigg[ 3 \mu_1^2 m \left(n^2+4\right) \\
&\hskip-1cm + \frac{B^2 e^2}{m}+ 6 B e \mu_1 + 4 \mu_2 m^2 n + \frac{8 \mu_1^3 m^2}{B e}
\Bigg] \frac{1}{r^2} + {\cal O}(1/r^4) \, ,~~ \\
A_0^\infty(r) &\simeq A_0(\infty)+\frac{4 \mu_1^2 m n^2 \left(n \mu_1^2+ m\mu_2 \right)}{e^2 (B e+2 m \mu_1)^3} \frac{1}{r^2} + {\cal O}(1/r^4)\,, \\
\end{aligned}
\end{eqnarray}
and the asymptotic values of the profile functions are
\begin{eqnarray}\label{profile asymptotic}
\begin{aligned}
&f(\infty) = 1 \, , ~~ Z(\infty) = \frac{e^2 \varphi_0^2 n}{e^2 \varphi_0^2 + 2 \mu_1^2} \, , ~~ Z_0(\infty) = - \frac{m}{e^2} \, , ~~~~~\\
&A(\infty) = {\cal C}_0 + \frac{1}{2} B e r^2 - \frac{4 e \mu_1^4 \varphi_0^2 n^2 }{B (e^2 \varphi_0^2 + 2 \mu_1^2)^2} \ln \frac{e^2 r}{{\cal C}_1} \, ,\\
&A_0(\infty) = \frac{2 \mu_1 \varphi_0^2 n}{e^2 \varphi_0^2 + 2 \mu_1^2} \ln \frac{e^2 r}{{\cal C}_2} \, ,
\end{aligned}
\end{eqnarray}
where $\varphi_0$ is given by \eref{B minima}. We note that the asymptotic values of the profile functions $Z(\infty)$ and $A_0(\infty)$ in \eref{profile asymptotic} match those of our previous work \cite{Anber:2015kxa} upon substituting $\varphi_0 = v$, where $v$ is the vacuum expectation value of the $\varphi$ field in the $V(\varphi) = -m^2 \varphi^2+\lambda \varphi^{4}/4$ potential with $m^2>0$.
\subsection{C and P Violation in Diamagnetic Vortices}
We can readily understand most of the key physical properties of these diamagnetic vortices without the full numerical solution that will be discussed in the next section. First of all, we recall that the parity ($P$) and charge conjugation ($C$) operators in $2+1$ D act on the position vector, $x^\mu = \left(x^0, x^1,x^2\right)$, and fields ${\cal A}^\mu(x^\mu) = \left({\cal A}^0, {\cal A}^1,{\cal A}^2\right) (x^\mu)$ and $\varphi(x^\mu)$ as follows (see, e.g., the appendix of Ref.~\cite{Affleck:1982as}):
\begin{eqnarray}\label{P transform}
\begin{aligned}
P: x^{\mu} &\to \bar x^{\mu}= \left(x^0, -x^1,x^2\right)\, ,\\
P: {\cal A}^{\mu} (x^\mu) &\to \bar {\cal A}^{\mu} (\bar x^{\mu}) = \left({\cal A}^0, -{\cal A}^1,{\cal A}^2\right)(\bar x^{\mu}) \, , \\
P: \varphi (x^\mu) &\to \bar \varphi (\bar x^{\mu}) = \varphi (\bar x^{ \mu}) \, ,
\end{aligned}
\end{eqnarray}
and
\begin{eqnarray}\label{C transform}
\begin{aligned}
C: x^{\mu} & \to x^\mu \, , \\
C: {\cal A}^{\mu} (x^\mu) &\to - {\cal A}^{\mu} (x^\mu) \, , \\
C: \varphi (x^\mu) &\to \varphi^* (x^{ \mu}) \, .
\end{aligned}
\end{eqnarray}
From the transformation properties given in Eqs.~(\ref{P transform}) and (\ref{C transform}), it is easy to check that magnetic field $B$ is odd under both $C$ and $P$, but even under $CP$. Hence, $B$ is a pseudoscalar. The electric field ${\bm E} = \left(E^1, E^2 \right)$ transforms under $P$ and $C$ as
\begin{eqnarray}
\begin{aligned}
P: {\bm E} &\to \left(-E^1, E^2 \right) \, ,\\
C: {\bm E} &\to - {\bm E} \, .
\end{aligned}
\end{eqnarray}
Under $C$ and $P$, the vortex with winding number $n$ transforms as
\begin{eqnarray}
C: \varphi(r, \theta) \to \left(\varphi_0 f_n(r) e^{i n \theta}\right)^* = \varphi_0 f_{n}(r) e^{-i n \theta} \, , \\
P: \varphi(r, \theta) \to \varphi_0 f_n(r) e^{i n (-\theta)} = \varphi_0 f_{n}(r) e^{-i n \theta} \, ,
\end{eqnarray}
where we used \eref{P transform} and the fact that $\theta = \tan^{-1} (x_2/x_1)$. For vortices that conserve $C$ or $P$, the $C$ or $P$ operations would simply correspond to flipping the sign of the winding number $n$, i.e., sending the vortex to an antivortex, since $f_n=f_{-n}$ in that case. However, from the asymptotic behavior of the diamagnetic vortices given in \eref{large r}, we can see that the profile functions $Z(r)$, $A(r)$, and $A_0(r)$ have both even and odd terms in $n$, e.g., $f_n\neq f_{-n}$ (this is clear for $A^\infty (r)$ at ${\cal O}(1/r^2)$, while for the rest of the profile functions one has to go to higher orders to see it). Therefore, under $C$ and $P$, a vortex does not transform into an antivortex, or vice versa. In other words, the diamagnetic vortex solution breaks the $C$ and $P$ symmetries while preserving $CP$. This is not surprising, as this solution only exists in the presence of a constant magnetic field $B$ that is odd under $C$ and $P$. The explicit differences between vortex and antivortex solutions of various winding numbers are shown in Fig.~\ref{Fig:profilen}.
\subsection{Numerical Solution}
Numerical results were obtained via a shooting method as implemented in \cite{Burnier:2005he}. For winding number $n=1$ the profile functions are shown in Fig.~\ref{Fig:profile}.
We see that the profile functions satisfy both the small and large $r$ asymptotic behaviors given in Eqs.~(\ref{small r}) and (\ref{large r}). For instance, at large distance $r$, the Higgs field goes to its vacuum expectation value $\varphi_0$, i.e., $f(r)\to 1$, and the ${\cal Z}$ condensate goes to its background value given in \eref{Z_0 background}, i.e., $Z_0(r)\to-m/e^2$. Because of the magnetic field background (\ref{const soln}), the function $A(r)$ grows like $e Br^2/2$, and thus, we only display $A(r)-e Br^2/2$, which corresponds to the vortex contribution to $A(r)$.
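The coupled system (\ref{the equations of profile functs}) requires the full machinery of \cite{Burnier:2005he}. To illustrate the boundary-value strategy in a self-contained way, the sketch below instead solves a single, decoupled, global-vortex-like toy profile equation $f'' + f'/r - n^2 f/r^2 + (1-f^2)f = 0$, treating the near-core coefficient $f_1$ of an expansion of the type (\ref{small r}) as an unknown shooting parameter; this toy equation and all numerical values are illustrative choices, not the system solved for the figures.

```python
import numpy as np
from scipy.integrate import solve_bvp

n = 1                     # winding number
r0, R = 1e-2, 15.0        # near-core cutoff and IR cutoff

def rhs(r, y, p):
    # y[0] = f, y[1] = f'; decoupled toy profile equation
    f, fp = y
    return np.vstack([fp, -fp / r + n**2 * f / r**2 - (1.0 - f**2) * f])

def bc(ya, yb, p):
    f1 = p[0]             # near-core coefficient: f(r) ~ f1 * r^|n|
    return np.array([ya[0] - f1 * r0**n,
                     ya[1] - n * f1 * r0**(n - 1),
                     yb[0] - 1.0])        # f -> 1 at large r

r = np.linspace(r0, R, 400)
guess = np.vstack([np.tanh(r), 1.0 / np.cosh(r)**2])
sol = solve_bvp(rhs, bc, r, guess, p=[0.5], tol=1e-5)
print(sol.status, sol.p[0])   # 0 on convergence; f1 of order 0.6
```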
\begin{figure}[h]
\includegraphics[width=88mm]{profile.pdf}
\caption{Profile functions for $n=1$, $e=1$, $m=1$, $B=1$ and $\mu_1=\mu_2=1/4$ against radial distance $r$ in units of $e^2$.}
\label{Fig:profile}
\end{figure}
As the solution is neither symmetric nor antisymmetric under charge conjugation $C$ or parity $P$, the profiles for negative winding numbers differ from those for positive winding numbers. We compare diamagnetic vortex solutions with various positive and negative winding numbers $n=\pm1, \pm 2,\pm 3$ in Fig.~\ref{Fig:profilen}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=88mm]{profilen1.pdf}
\includegraphics[width=88mm]{profilen2.pdf}
\includegraphics[width=88mm]{profilen3.pdf}
\caption{Profile functions for $n=\pm1$ (top), $n=\pm 2$ (middle), $n=\pm 3$ (bottom) against radial distance $r$ in units of $e^2$. The profile functions for $n>0$ are shown as thick lines, whereas the profiles for $n<0$ are shown as thin lines. We used the parameters: $e=1$, $m=1$, $B=1$ and $\mu_1=\mu_2=1/4$. Note that $Z$ (and $A_0$ not shown here) are negative for $n<0$. Here we switched the sign of $Z$ for $n<0$ for easy comparison with $n>0$.}
\label{Fig:profilen}
\end{center}
\end{figure}
We also provide more precise checks of the asymptotic behavior of the profile functions $f(r)$ and $A(r)$ in Fig.~\ref{Fig:large_r2}. Note that, unlike other known vortex solutions in similar models where the profile function $f$ is monotonic, here $f$ first overshoots its asymptotic value $1$, then turns around and finally reaches $1$ from below.
\begin{figure}[h]
\begin{center}
\includegraphics[width=87mm]{f_asymptotic}
\includegraphics[width=87mm]{B_asymptotic}
\caption{Detailed behavior of the scalar profile functions $f$ (top) and $A$ (bottom). Thick (thin) lines correspond to $n>0$ ($n<0$). The thin dotted lines underline the asymptotic behavior derived in \eref{large r}. At large $r$, $1-f(r)$ goes to zero as $1/r^2$, i.e., $r^2[1-f(r)]$ goes to a constant (top). The magnetic field also converges towards its asymptotic value as $1/r^2$ (bottom).}
\label{Fig:large_r2}
\end{center}
\end{figure}
\section{Physical Properties of Diamagnetic Vortices}
\label{sec:properties}
Using the profile functions given by \eref{ansatz}, we can calculate the electric and magnetic fields of the diamagnetic vortices as:
\begin{eqnarray} \label{elmag}
\begin{aligned}
E_\mathcal{Z} &= e Z_0' \, , \qquad B_{\mathcal{Z}} = \frac{1}{2} \epsilon^{0ij}Z_{ij} = \frac{Z'}{e r} \, , \\
E_\mathcal{A} &= e A_0' \, , \qquad B_{\mathcal{A}} = \frac{1}{2} \epsilon^{0ij}F_{ij} = \frac{A'}{e r} \, ,
\end{aligned}
\end{eqnarray}
The kinetic term for the Higgs field can similarly be found as
\begin{equation}\label{D phi}
|{\bf D} \varphi|^2 = \varphi_0^2 f'^2 + \varphi_0^2 (n-Z)^2 \frac{f^2}{r^2} \, .
\end{equation}
The electric and magnetic fields are shown in Figs.~\ref{Fig:fields} and \ref{Fig:fields2}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=76mm]{Fieldsn1}
\includegraphics[width=76mm]{Fieldsn2}
\includegraphics[width=76mm]{Fieldsn3}
\caption{Electric and magnetic fields ($B_\mathcal{A}$ is shown in Fig.~\ref{Fig:fields2}). The plots in the top, middle and bottom panels correspond to $n=\pm1$, $n=\pm2$ and $n=\pm 3$, respectively. Positive (negative) winding numbers are shown as thick (thin) lines. The total $\rm{U(1)}_{\scriptscriptstyle \cal A}$ charge of the vortex is $Q_\mathcal{A}=2 B \mu_1 n/(B e^2 + 2 m e \mu_1)$ from \eref{A charge}, and we can see that $E_\mathcal{A}$ electric field asymptotically reaches $Q_\mathcal{A}/r$ at large $r$. Note also that for $n<0$, $E_\mathcal{A}$ and $B_\mathcal{Z}$ are negative. We show the absolute value of these fields for visually clear comparison. }
\label{Fig:fields}
\end{center}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[width=82mm]{Bfield}
\caption{Magnetic field $B_\mathcal{A}$ for different winding numbers $n$. The radial dependence of $B_{\cal A}$ is shown as thick (thin) lines for positive (negative) $n$. At large radius, $B_\mathcal{A}$ approaches $1$ monotonically, consistent with our assumption of a constant magnetic field background. In the core of the vortex the magnetic field is depleted for all values of $n$.}
\label{Fig:fields2}
\end{center}
\end{figure}
\subsection{Flux, Charge and Energy}
We calculated all the physical properties of similar vortices with the potential $V(\varphi) = -m^2 \varphi^2+\lambda \varphi^{4}/4$ in Ref.~\cite{Anber:2015kxa}. It is a straightforward exercise to obtain the flux, charge and energy of the diamagnetic vortices.
The ${\cal Z}_\mu$ magnetic flux of a diamagnetic vortex is
\begin{eqnarray} \label{phiBz}
\Phi_{B_\mathcal{Z}}^{\rm v} = \frac{2\pi}{e} Z(\infty)
= \frac{2 \pi n}{e} \frac{ e^2 \varphi_0^2 }{ e^2 \varphi_0^2+2\mu_1^2} \,.
\end{eqnarray}
Note that for the creation of a pair of a vortex and antivortex, the ${\cal Z}_\mu$ flux is automatically conserved as it is proportional to the winding number $n$.
The long range $\rm{U(1)}_{\scriptscriptstyle \cal A}$ charge can be found from Gauss's law
\begin{eqnarray} \label{A charge}
Q_\mathcal{A} &=& \oint_{S^{1}_{\infty}} d{\bm \ell} \cdot {\bm E}_{\cal A} = 2\pi r e A_0' \nonumber \\
&=& \frac{4 \pi n}{e}\frac{e^2 \varphi_0^2 \mu_1}{ e^2 \varphi_0^2+2\mu_1^2} \, .
\end{eqnarray}
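Comparing the two expressions above, the charge and the ${\cal Z}_\mu$ flux are tied together as $Q_{\cal A} = 2\mu_1 \Phi_{B_\mathcal{Z}}^{\rm v}$, which a quick numerical check confirms (Python sketch with the illustrative parameters used in the figures):

```python
import numpy as np

e, m, mu1, B, n = 1.0, 1.0, 0.25, 1.0, 1      # illustrative parameters
phi0_sq = mu1 * abs(B) / (e * m)              # phi_0^2 from Eq. (B minima)
denom = e**2 * phi0_sq + 2 * mu1**2

Phi_Z = (2 * np.pi * n / e) * e**2 * phi0_sq / denom         # Eq. (phiBz)
Q_A = (4 * np.pi * n / e) * e**2 * phi0_sq * mu1 / denom     # Eq. (A charge)
print(Phi_Z, Q_A, Q_A / Phi_Z)   # the ratio equals 2*mu1
```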
The Hamiltonian density is given by
\begin{eqnarray} \begin{aligned} \label{H}
\mathcal{H} &= \frac{1}{2} \left( E_{\mathcal{Z}}^{2} + B_{\mathcal{Z}}^{2} + E_{\mathcal{A}}^{2} + B_{\mathcal{A}}^{2} \right) + e^4 Z_0^2 |\varphi|^2 \\
&\hskip 0.5cm + |{\bf D} \varphi|^2 + m^2 |\varphi|^2 \, ,
\end{aligned}
\end{eqnarray}
which can be expressed in terms of the profile functions using Eqs.~(\ref{elmag}) and (\ref{D phi}) as
\begin{eqnarray}\label{hamiltonian}
\begin{aligned}
\mathcal{H} =&\frac{1}{2}\biggl [ e^2 Z_0'^2 + \frac{Z'^2}{e^2 r^2} + e^2 A_0'^2 +\frac{A'^2}{e^2 r^2} + 2 e^4 \varphi_0^2 Z_0^2 f^2~~~~~~~\\
& +2 \varphi_0^2 (n-Z)^2 \frac{f^2}{r^2} + 2 \varphi_0^2 f'^2 +2 m^2 \varphi_0^2 f^2 \biggr] \, .
\end{aligned}
\end{eqnarray}
Integrating each term over $\mathbb{R}^2$, the total energy of a vortex reads
\begin{eqnarray} \label{Ev}
\mathcal{E} = \mathcal{E}_{\rm c}+\mathcal{E}_{\rm \scriptscriptstyle IR} \,,
\end{eqnarray}
where $\mathcal{E}_{\rm c}$ is the vortex core energy that we compute numerically, as shown in Fig.~\ref{core energy}, and $\mathcal{E}_{\rm \scriptscriptstyle IR}$ is the infrared part of the energy.
Imposing an IR cutoff at radial distance $r = L$ we find
\begin{eqnarray} \label{vortex energy}
\mathcal{E}_{\rm \scriptscriptstyle IR}&=& 2\pi \left(\frac{1}{4} B^2 + \frac{B \mu_1 m}{e}\right) L^2\notag\\&&+\frac{4 \pi \mu_1^2 n^2}{e^2} \frac{Be - 2 m \mu_1}{Be + 2 m \mu_1} \ln \frac{L}{r_{\rm c}} \, ,
\end{eqnarray}
where the first term is the IR contribution from the background magnetic field and ${\cal Z}_0$ condensate. Note also that the last term suggests that energy of a vortex has a negative contribution. This might seem puzzling at first as negative energy would be naively associated with some sort of instability in the ground state of the system. However, we emphasize at this point that we have not made use of the magnetic flux conservation yet, and thus, the magnetic field $B$ is not the {\it true} background field. Next, we calculate the value of the asymptotic magnetic field value, which upon substituting in \eref{vortex energy} yields a finite and positive vortex energy.
\begin{figure}[]
\begin{center}
\includegraphics[width=87mm]{Energy_density_asymptotic}
\caption{The energy density of vortices (thick lines) and anti-vortices (thin lines) for $n=\pm 1\, , \pm 2\, , \pm 3$. It is clear that vortices and anti-vortices have different core energies, which is expected since the vortex solution breaks the $C$ and $P$ symmetries.}
\label{core energy}
\end{center}
\end{figure}
\subsection{Conservation of Magnetic Flux}
\label{subsec:flux conservation}
For a self consistent vortex solution, we also need to take into account the fact that the magnetic flux is a conserved quantity. Using the Bianchi identity
\begin{eqnarray}
\begin{aligned}
\epsilon^{\alpha \mu\nu} \partial_\alpha {\cal F}_{\mu\nu} &=0 \, , \\
\end{aligned}
\end{eqnarray}
integrating over a surface ${\mathbb R}^2$, and making use of Stokes' theorem, we obtain
\begin{eqnarray}
\begin{aligned}
\frac{\partial \Phi_{B_{\cal A}}}{\partial t} &= - \oint d {\bm \ell} \cdot {\bm E}_{\cal A} \, , \\
\end{aligned}
\end{eqnarray}
where
\begin{eqnarray}
\Phi_{B_{\cal A}} = \int_{{\mathbb R}^2} dS~B_{\cal A} \, ,
\end{eqnarray}
is the magnetic flux, and ${\bm E_{\cal A}}$ is the electric field. Thus, assuming a manifold with electric fields perpendicular to its boundary, i.e., $\oint d {\bm \ell} \cdot {\bm E}_{\cal A}=0$, the magnetic flux must be conserved. The flux conservation also applies to all compact $2$ dimensional manifolds without boundaries.
Suppose that we start with a constant background magnetic field $B_{\cal A} = \bar B$ on ${\mathbb R}^2$ whose flux is equal to
\begin{eqnarray} \label{flux bar B}
\Phi_{B_{\cal A}} = \int_0^{2\pi} d\phi \int_{0}^{L} dr~r~ \bar B = \pi L^2 \bar B \, ,
\end{eqnarray}
where $L$ is an IR cutoff characterizing the system size and $r_c$ is the core radius\footnote{Notice that there is also core magnetic flux which should be determined numerically but it can be absorbed in the redefinition of $r_c$.}. As we can see from Fig.~\ref{Fig:fields2}, the magnetic field strength decreases near the vortices, hence the name diamagnetic. The magnetic flux of a vortex can be obtained analytically from the asymptotic value of the ${\cal A}_\mu$ field in \eref{profile asymptotic}:
\begin{eqnarray} \label{flux B}
\Phi_{B_{\cal A}} &=& \int_0^{2\pi} d\theta \int_{0}^{L} dr~r~ \frac{A'}{er} \nonumber \\
&=& \pi L^2 B - \frac{8 \pi \mu_1^3 m n^2 }{e (Be + 2 \mu_1 m)^2} \ln \frac{L}{r_c} + {\cal O} (1/L) \, .~~~
\end{eqnarray}
Requiring that the total magnetic flux in the system upon creating a vortex or antivortex is conserved, and noting that the logarithmically divergent piece is negative for both the vortex and antivortex, we have a relation between the asymptotic value of the vortex magnetic field $B$ and the {\it initial} background magnetic field $\bar B$:
\begin{eqnarray}
\bar B = B - \frac{8 \mu_1^3 m n^2 }{e (Be + 2 \mu_1 m)^2} \frac{\ln L/r_c}{L^2} \, .
\end{eqnarray}
Solving $B$ in terms of $\bar B$ to leading order in inverse powers of $L$, we obtain
\begin{eqnarray} \label{bar B}
B \simeq \bar B + \frac{8 \mu_1^3 m n^2 }{e (\bar Be + 2 \mu_1 m)^2} \frac{\ln L/r_c}{L^2} \, .
\end{eqnarray}
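As a numerical sanity check of this inversion (illustrative parameter values in arbitrary units; not part of the derivation), one can verify that \eref{bar B} reproduces the exact flux-conservation relation up to corrections of order $\left(\ln(L/r_c)/L^2\right)^2$:

```python
import math

# Illustrative parameters (arbitrary units); n is the winding number.
mu1, m, e, n = 1.0, 1.0, 1.0, 1
L, rc = 1.0e4, 1.0
log_term = math.log(L / rc) / L**2

def correction(B):
    # 8 mu1^3 m n^2 / (e (B e + 2 mu1 m)^2): coefficient of ln(L/rc)/L^2
    return 8 * mu1**3 * m * n**2 / (e * (B * e + 2 * mu1 * m)**2)

# Start from a chosen asymptotic field B and obtain the initial field bar_B
# from the exact flux-conservation relation.
B_true = 1.0
bar_B = B_true - correction(B_true) * log_term

# Leading-order inversion: B ~ bar_B + correction(bar_B) * ln(L/rc)/L^2
B_approx = bar_B + correction(bar_B) * log_term

rel_err = abs(B_approx - B_true) / B_true
print(rel_err)  # residual is O((ln(L/rc)/L^2)^2), far below the shift itself
```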
Since our diamagnetic vortices lower the magnetic field $B_{\cal A}$ in their core [see \eref{profile asymptotic} and Fig.~\ref{Fig:fields2}], the background magnetic field has to increase from $\bar B$ to $B$ given by \eref{bar B}, as required by magnetic flux conservation. This correction will be crucial for a vortex--antivortex pair to have a finite positive energy, as we show in the next section (notice also that $B\rightarrow \bar B$ as $L \rightarrow \infty$).
Next, we impose the condition of flux conservation by using \eref{bar B} to find the IR energy
\begin{eqnarray} \nonumber
\mathcal{E}_{\rm \scriptscriptstyle IR}
&\approx& 2\pi \left(\frac{1}{4} \bar B^2 + \frac{\bar B \mu_1 m}{e}\right) L^2 \notag\\&&+\frac{4 \pi \mu_1^2 n^2}{e^2} \frac{\bar B e}{\bar B e + 2 m \mu_1} \ln \frac{L}{r_{\rm c}}\,. \label{energy ir}
\end{eqnarray}
Thus, by subtracting the background energy we obtain the IR energy of a single vortex:
\begin{eqnarray}
\mathcal{E}_{\rm \scriptscriptstyle IR}^{\rm v} \approx\frac{4 \pi \mu_1^2 n^2}{e^2} \frac{\bar B e}{\bar B e + 2 m \mu_1} \ln \frac{L}{r_{\rm c}}\, ,
\end{eqnarray}
which is now positive.
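The cancellation that produces this positive coefficient can be checked numerically. The following sketch (illustrative parameter values, arbitrary units) evaluates \eref{vortex energy} at the shifted field of \eref{bar B} and subtracts the background energy:

```python
import math

# Illustrative parameters (arbitrary units); winding number n = 1.
mu1, m, e, n = 1.0, 1.0, 1.0, 1
L, rc = 1.0e4, 1.0
bar_B = 1.0
log_L = math.log(L / rc)

# Flux-conservation shift of the asymptotic field, Eq. (bar B).
B = bar_B + 8 * mu1**3 * m * n**2 / (e * (bar_B * e + 2 * mu1 * m)**2) * log_L / L**2

def E_IR(Bf):
    # IR energy, Eq. (vortex energy), evaluated at field Bf.
    area = 2 * math.pi * (Bf**2 / 4 + Bf * mu1 * m / e) * L**2
    log_part = (4 * math.pi * mu1**2 * n**2 / e**2
                * (Bf * e - 2 * m * mu1) / (Bf * e + 2 * m * mu1) * log_L)
    return area + log_part

background = 2 * math.pi * (bar_B**2 / 4 + bar_B * mu1 * m / e) * L**2
E_vortex = E_IR(B) - background

# Predicted single-vortex IR energy: manifestly positive log coefficient.
E_pred = (4 * math.pi * mu1**2 * n**2 / e**2
          * bar_B * e / (bar_B * e + 2 * m * mu1) * log_L)
print(E_vortex, E_pred)
```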
\subsection{Interaction between Two Diamagnetic Vortices}
The dynamics of a vortex-antivortex system can be studied by calculating the interaction energy, similar to what we have done in Ref.~\cite{Anber:2015kxa}. To this end, we use the following approximate field configurations for the Higgs and gauge fields assuming that the two vortices are separated by a distance $R$ larger than the core radii of the pair, i.e., $R \gg r_c$:
\begin{eqnarray} \begin{aligned} \label{2vortex}
&\varphi \cong \varphi_0 e^{i n \theta_1({\bf x}- {\bf x_1}) - i n \theta_2({\bf x}-{\bf x_2})} \\ &\quad\quad\quad\times \left[1+ \frac{f_2}{|{\bf x} - {\bf x_1}|^2} + \frac{f_2}{|{\bf x} - {\bf x_2}|^2} \right]\, , \\
&\mathcal{Z}_0 \cong -\frac{m}{e} + \frac{\bar z_2}{|{\bf x} - {\bf x_1}|^2} + \frac{\bar z_2}{|{\bf x} - {\bf x_2}|^2} \, , \\
&\mathcal{Z}_i \cong -\frac{Q_\mathcal{A}}{4 \pi \mu_1} \epsilon_{i j} \left[ \frac{({\bf x} - {\bf x_1})_j}{|{\bf x} - {\bf x_1}|^2} - \frac{({\bf x} - {\bf x_2})_j}{|{\bf x} - {\bf x_2}|^2} \right] \, , \\
&\mathcal{A}_0 \cong \frac{Q_\mathcal{A}}{2\pi} {\rm ln} \frac{|{\bf x} - {\bf x_1}|}{|{\bf x} - {\bf x_2}|} \, , \\
&\mathcal{A}_i \cong - \epsilon_{i j} \left[ \frac{n \mathcal{C}_2}{e} + a_L \ln \frac{|{\bf x} - {\bf x_1}|}{r_c} \right] \frac{({\bf x} - {\bf x_1})_j}{|{\bf x} - {\bf x_1}|^2} \\
&- \epsilon_{i j} \left[ -\frac{n \mathcal{C}_2}{e} + a_L \ln \frac{|{\bf x} - {\bf x_2}|}{r_c} \right] \frac{({\bf x} - {\bf x_2})_j}{|{\bf x} - {\bf x_2}|^2} - \epsilon_{i j} {\bf x}_j \frac{1}{2} B\,,
\end{aligned}
\end{eqnarray}
where
\begin{eqnarray}
\begin{aligned}
f_2 &= - \frac{ \mu_1^2 n^2}{B e (B e + 2m \mu_1)}\,,\\
\bar z_2 &= -\frac{2\mu_1^2 m n^2}{e (B e+2 m \mu_1)^2}\,,\\
a_L &= - \frac{4 \mu_1^4 \varphi_0^2 n^2 }{B (e^2 \varphi_0^2 + 2 \mu_1^2)^2} \, .
\end{aligned}
\end{eqnarray}
Substituting the above expressions in Eq. (\ref{hamiltonian}), imposing the condition of magnetic flux conservation and using $B \rightarrow \bar B$ from \eref{bar B} for a pair of a vortex and an antivortex [note that this doubles the logarithmic term in \eref{bar B}], we obtain the total energy of a pair of interacting vortices:
\begin{eqnarray}
\mathcal{E}_{\rm \scriptscriptstyle IR}^{\rm pair} &\approx&
\frac{8 \pi \mu_1^2 n^2}{e^2} \frac{\bar B e}{\bar B e + 2 m \mu_1} \ln \frac{R}{r_{\rm c}}\,.
\end{eqnarray}
As was explained in Ref.~\cite{Anber:2015kxa}, the energy receives contributions from two parts: the electrostatic energy of the long-range $U(1)_{\cal A}$ field and the Goldstone background.
\section{Berezinsky-Kosterlitz-Thouless Transition}
\label{sec:bkt}
In the previous sections, we studied the diamagnetic vortices appearing in $\rm{U(1)}_{\scriptscriptstyle \cal Z} \times \rm{U(1)}_{\scriptscriptstyle \cal A}$ Chern-Simons theory in the background of a constant $\rm{U(1)}_{\scriptscriptstyle \cal A}$ magnetic field and ${\cal Z}_0$ condensate. Since these vortices are heavy (they cost core energy), they cannot alter the low-energy description of our system, which is simply a massless $\rm{U(1)}_{\scriptscriptstyle \cal A}$ theory.
At any finite temperature, these vortices are suppressed by a Boltzmann factor $e^{- \mathcal{E}_{\rm c}/T}$, where $\mathcal{E}_{\rm c}$ is the core energy. Hence, it is expensive to produce them at zero or low temperature. However, at some critical temperature, $T_c$, it becomes entropically favorable to produce them, and hence they will proliferate. Since these vortices are charged under $\rm{U(1)}_{\scriptscriptstyle \cal A}$, their proliferation will break the latter group. This is the celebrated Berezinsky-Kosterlitz-Thouless (BKT) transition \cite{Berezinsky:1970fr,Kosterlitz:1973xp}, which can be understood by studying the partition function of a Coulomb gas of vortices.
To this end, let us consider a gas of positively and negatively charged objects interacting via a Coulomb-like potential. The grand partition function of the gas is given by
\begin{eqnarray}\nonumber
Z&=&\sum_{N_+,N_-=0}^\infty\frac{\left(\frac{\xi_+}{\varepsilon^2}\right)^{ N_+}\left(\frac{\xi_-}{\varepsilon^2}\right)^{ N_-}}{N_+!N_-!}\int \prod_{i=1}^{N_+} d^2 r^a_i\int \prod_{j=1}^{N_-} d^2 r_j^b\\
\nonumber
&&\times \exp\left[\kappa^2\sum_{i> j} \log\frac{| \pmb r^a_i- \pmb r^a_j|}{\varepsilon}+\kappa^2\sum_{i> j} \log\frac{| \pmb r^b_i- \pmb r^b_j|}{\varepsilon}\right.\\
&&\left.-\kappa^2\sum_{i, j} \log\frac{| \pmb r^a_i- \pmb r^b_j|}{\varepsilon} \right]\,,
\end{eqnarray}
where
\begin{eqnarray}
\kappa^2=\frac{8\pi \mu_1^2}{e^2 T}\frac{\bar Be}{\bar Be+2m\mu_1}\,,
\end{eqnarray}
$\varepsilon$ is the UV cutoff length, and $\xi_+$ and $\xi_-$ are the fugacities (recall that the fugacity is $e^{-\mathcal{E}_{\rm c}/T}$) of the positive and negative charges, respectively. The coordinates $\pmb r^a$ label the positively charged objects, while $\pmb r^b$ label the negative ones. Next, we impose charge neutrality on the two-dimensional system by demanding that $N_+=N_-=N$. Hence, the factor $\xi_+^{ N_+}\xi_-^{ N_-}$ becomes $\left(\xi_+\xi_-\right)^N\equiv \xi^{2N}$, where we defined $\xi^2\equiv \xi_+\xi_-$. Thus, the partition function reduces to
\begin{eqnarray}\nonumber
Z&=&\sum_{q=\pm1}\sum_{N=0}^\infty\frac{\left(\frac{\xi}{\varepsilon^2}\right)^{2N}}
{(N!)^2}\int \prod_{i=1}^{2N} d^2 r_i\\
\label{resulting Z}
&&\times \exp\left[\kappa^2\sum_{i> j} q_iq_j \log\frac{| \pmb r_i- \pmb r_j|}{\varepsilon}\right]\,,
\end{eqnarray}
such that only neutral configurations of charges are used in computing $Z$.
Then, one uses the partition function (\ref{resulting Z}) to derive renormalization group equations (RGEs) for $\xi$ and $\kappa$. The RGE for $\xi$ can be obtained by considering a single neutral pair of charges \cite{KardarBook}:
\begin{eqnarray}\nonumber
Z^{(1)}&=&1+\frac{\xi^2}{\varepsilon^4}\int d^2r_1 d^2r_2 e^{-\kappa^2\log\frac{|\pmb r_1-\pmb r_2|}{\varepsilon}}\\
\label{single pair in Z}
&=&1+\xi^2(\varepsilon)\left(\frac{L}{\varepsilon}\right)^{4-\kappa^2}\,,
\end{eqnarray}
where $L$ is the system size, and $\xi^2(\varepsilon)$ indicates that the fugacity is being calculated given the UV cutoff $\varepsilon$. The RG invariance of the contribution (\ref{single pair in Z}) to the partition function demands that
\begin{eqnarray}\label{rescaling}
\xi^2(\varepsilon)\left(\frac{L}{\varepsilon}\right)^{4-\kappa^2}=\xi^2\left(e^b\varepsilon\right)\left(\frac{L}{e^b\varepsilon}\right)^{4-\kappa^2}
\end{eqnarray}
under the rescaling $\varepsilon\rightarrow e^b\varepsilon$. Differentiating (\ref{rescaling}) with respect to $b$ at $b=0$, we obtain the RG equation for $\xi$:
\begin{eqnarray}
\frac{d\xi}{db}=\left(2-\frac{\kappa^2}{2}\right)\xi\,.
\end{eqnarray}
This equation tells us that the fugacity remains irrelevant at large $\kappa$ or at low temperature. However, at small values of $\kappa$ or at high temperature the fugacity becomes relevant: the vortices proliferate indicating a phase transition. This is the BKT phase transition which takes place at $\kappa=2$ or
\begin{eqnarray}
T_c=\frac{2\pi \mu_1^2 \bar B}{\bar B e^2+2m e\mu_1}\,.
\end{eqnarray}
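A quick numerical check (illustrative parameter values) confirms that $T_c$ is precisely the temperature at which $\kappa=2$, i.e., where the coefficient $2-\kappa^2/2$ in the RG equation for the fugacity changes sign:

```python
import math

# Illustrative parameters (arbitrary units).
mu1, m, e, bar_B = 1.3, 0.7, 0.9, 2.1

def kappa_sq(T):
    # kappa^2 = (8 pi mu1^2 / (e^2 T)) * bar_B e / (bar_B e + 2 m mu1)
    return (8 * math.pi * mu1**2 / (e**2 * T)) * bar_B * e / (bar_B * e + 2 * m * mu1)

# Critical temperature from the text: T_c = 2 pi mu1^2 bar_B / (bar_B e^2 + 2 m e mu1)
T_c = 2 * math.pi * mu1**2 * bar_B / (bar_B * e**2 + 2 * m * e * mu1)

# kappa(T_c) equals 2 up to rounding; below T_c the fugacity is irrelevant.
print(math.sqrt(kappa_sq(T_c)))
```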
\section{Discussion}
\label{sec:discussion}
In this work, we studied the non-perturbative sector of the $\rm{U(1)}_{\scriptscriptstyle \cal Z} \times \rm{U(1)}_{\scriptscriptstyle \cal A}$ Chern-Simons gauge theory in the background of a $\rm{U(1)}_{\scriptscriptstyle \cal A}$ magnetic field, whose detailed structure is described in our accompanying paper \cite{3dsuperconductor}. This theory admits topological vortex solutions with novel properties that we investigated by analytic and numerical techniques.
First of all, these vortices exhibit long-range interactions since they are charged under the unbroken $\rm{U(1)}_{\scriptscriptstyle \cal A}$. Besides, they deplete the $\rm{U(1)}_{\scriptscriptstyle \cal A}$ magnetic field near their cores. This can be understood by noting that the $\rm{U(1)}_{\scriptscriptstyle \cal A}$ gauge field ${\cal A}_\mu$ becomes topologically massive in the near-core region. In this regard, they behave as a diamagnetic material. As was discussed in Sec.~\ref{sec:properties}, the vortex solution breaks the $C$ and $P$ symmetries, while preserving $CP$. This is also not surprising, as the background magnetic field $B$ is odd under $C$ and $P$ and even under $CP$.
As is shown in our work \cite{3dsuperconductor}, the $\rm{U(1)}_{\scriptscriptstyle \cal Z} \times \rm{U(1)}_{\scriptscriptstyle \cal A}$ Chern-Simons gauge theory exhibits superconductivity. The role of the condensate is played by the complex scalar field $\varphi$, which develops a vacuum expectation value because of the external $\rm{U(1)}_{\scriptscriptstyle \cal A}$ magnetic field, and thus breaks the $\rm{U(1)}_{\scriptscriptstyle \cal Z}$ symmetry spontaneously. Naively, one would expect that the superconductivity may be ruined at finite temperature because the $\rm{U(1)}_{\scriptscriptstyle \cal Z}$ symmetry is perturbatively restored. However, we showed in Ref.~\cite{3dsuperconductor} that this is not the case. The superconducting vacuum might still be in danger because of the non-perturbative sector of the theory, namely the proliferation of the diamagnetic vortices. In Sec.~\ref{sec:bkt}, we studied the BKT phase transition due to these vortices, and we found that the critical temperature is $T_c= 2\pi \mu_1^2 \bar B/(\bar B e^2+2m e\mu_1)$. For temperatures $T > T_c$, diamagnetic vortices proliferate, resulting in a more complicated ground state. However, the BKT transition can be postponed by taking $\mu_1 \gg \bar B e/2m$; in this limit $T_c \simeq \pi \mu_1 \bar B/(m e)$. Hence, by increasing $\bar B$ while keeping the hierarchy $\mu_1 \gg \bar B e/2m$, we can postpone the BKT phase transition to arbitrarily high temperatures. In this regard, we can achieve superconductivity at all temperatures \cite{3dsuperconductor}.
\acknowledgements
This work was supported by the Swiss National Science Foundation. Y.B. is supported by the grant PZ00P2-142524.
\bibliographystyle{apsrev4-1}
\input{Bvortex.bbl}
\end{document} |
\section{Introduction}
With services like Amazon's Elastic MapReduce
and Microsoft's HDInsight
offering large-scale distributed cloud computing environments, computation in the cloud is becoming increasingly more available. Such services allow for computation on large volumes of data to be performed without the large investment in local computing resources. However, where the data that is processed is sensitive, such as financial or medical data, then uploading such data in its raw form to such a third-party service becomes problematic.
To take advantage of these cloud services, we require a means to process the data securely on such a platform. We designate such a computation, \emph{secure computation in the cloud} (SCC). SCC should not expose input or output data to any other party, including the cloud service provider. Furthermore, the details of the computation should not allow any other party to deduce its inputs and outputs. Cryptography seems the natural approach to this problem.
However, it should be noted that van Dijk and Juels \cite{vandijk2010impossibility} show that cryptography alone cannot realise secure \emph{multi-party} computation in the cloud, where the parties jointly compute a function over their inputs while keeping their own inputs private. Since our approach is via homomorphic encryption, we will restrict our attention to what we will call \emph{secure single-party computation in the cloud} (SSCC).
\emph{Homomorphic encryption} (HE) seems to offer a solution to the SSCC problem. First defined by Rivest et al. \cite{rivest1978data} in 1978, HE allows a function to be computed on encrypted inputs without ever decrypting the inputs. Suppose we wish to compute the function $f$ on inputs $x_1,x_2,\ldots,x_n$; then, under HE, $\textmyfont{Dec}(f'(x_1',x_2',\ldots,x_n'))=f(x_1,x_2,\ldots,x_n)$, where $x_1',\ldots,x_n'$ are the encryptions of $x_1,\ldots,x_n$, $f'$ is the equivalent of $f$ in the ciphertext space, and $\textmyfont{Dec}$ is the decryption function. One can easily see that HE would satisfy some of the requirements for secure computation in the cloud. A \emph{somewhat HE} scheme (SWHE) is an HE scheme which is homomorphic for only a limited class of inputs and functions.
\emph{Fully HE} (FHE) is an HE scheme that is homomorphic for all $f$. This was first realised by Gentry in 2009 \cite{gentry2009fully}, and appears to be the ideal HE scheme. However, despite the clear advantages of FHE, and many significant advances \cite{brakerski2011efficient,brakerski2011ring,brakerski2012leveled,brakerski2012fully}, it remains largely impractical. The two implementations of recent FHE schemes, HELib \cite{halevi2015bootstrapping} and FHEW \cite{ducas2015bootstrapping}, both perform very poorly in comparison with operations on unencrypted data, in their running time and space requirements. It is reported that a HELib implementation of the AES-128 circuit processed inputs in just over four minutes \cite{halevi2015helib}. Similarly, FHEW processed a single homomorphic NAND operation followed by a re-encryption in 0.69s, using 2.2GB of RAM. The paper~\cite{naehrig2011can} attempted to assess the practicality of one of the underlying SWHE schemes~\cite{brakerski2011ring}, but reached no positive conclusion.
Therefore, we take the view in this paper that only SWHE is currently of practical interest. Our goal is to develop new SWHE schemes which are practically useful, and which we have implemented, though we conclude the paper by showing that our ideas can be used to develop a (fairly impractical) FHE scheme.
\subsection{Scenario}
As introduced above, our work concerns secure single-party computation in the cloud. In our scenario, a secure client wishes to compute a function on a large volume of data. This function could be searching or sorting the data, computing an arithmetic function of numeric data, or any other operation. For the most part, we consider here the case where the client wishes to perform arithmetic computations on numeric data. This data might be the numeric fields within a record, and the non-numeric fields would be treated differently.
The client delegates the computation to the cloud. However, while the data is in the cloud, it could be subject to snooping, including by the cloud provider. The client does not wish to expose the input data, or the output of the computation, to possible snooping in the cloud. A snooper here will be a party who may observe the data and the computation in the cloud, but cannot, or does not, change the data or insert spurious data. (In our setting data modification would amount to pointless vandalism.) The snooping could be casual, simply displaying an uninvited interest, or malicious, intending to use the data for the attacker's own purposes.
To obtain the required data privacy, the client's function will be computed homomorphically, on an encrypted version of the data. The client encrypts the source data using a secret key and uploads the encrypted data to the cloud, along with a homomorphic equivalent of the target computation. The cloud environment performs the homomorphic computation on the encrypted data. The result of the homomorphic computation is then returned to the client, who decrypts it using the secret key, and obtains the output of the original computation.
In this scenario, we observe that the source data is never exposed in the cloud, but encryptions of the source data are. A snooper may observe the computation of the equivalent homomorphic function in the cloud environment. As a result, they may be able to deduce what operations are performed on the data, even though they do not know the inputs. A snooper may also be able to inspect the (encrypted) working data generated by the cloud computation, and even perform side computations of their own on this data. However, snoopers have no access to the secret key, so cannot make encryptions of their own to deduce the secret key.
\subsection{Definitions and Notation}
$x \xleftarrow{\$} S$ denotes a value $x$ chosen uniformly at random from the discrete set $S$.
$\textmyfont{KeyGen} : \mathcal{S} \rightarrow \mathcal{K}$ denotes the key generation function operating on the security parameter space $\mathcal{S}$ and whose range is the secret key space $\mathcal{K}$.
$\textmyfont{Enc} : \mathcal{M} \times \mathcal{K} \rightarrow \mathcal{C}$ denotes the symmetric encryption function operating on the plaintext space $\mathcal{M}$ and the secret key space $\mathcal{K}$ and whose range is the ciphertext space $\mathcal{C}$.
$\textmyfont{Dec} : \mathcal{C} \times \mathcal{K} \rightarrow \mathcal{M}$ denotes the symmetric decryption function operating on the ciphertext space $\mathcal{C}$ and the secret key space $\mathcal{K}$ and whose range is the plaintext space $\mathcal{M}$.
$\textmyfont{Add} : \mathcal{C} \times \mathcal{C} \rightarrow \mathcal{C}$ denotes the homomorphic addition function whose domain is $\mathcal{C}^2$ and whose range is $\mathcal{C}$.
$\textmyfont{Mult} : \mathcal{C} \times \mathcal{C} \rightarrow \mathcal{C}$ denotes the homomorphic multiplication function whose domain is $\mathcal{C}^2$ and whose range is $\mathcal{C}$.
$m,m_1,m_2,\ldots$ denote plaintext values. Similarly, $c,c_1,c_2,\ldots$ denote ciphertext values.
If $k^*=\binom{k+1}{2}$, $\mbf{v}_{\star}=[v_1\ v_2\ \ldots\ v_{k^*}]^T$ denotes a $k^*$-vector which augments the $k$-vector $\mbf{v}=[v_1\ v_2\ \ldots\ v_k]^T$ by appending elements $v_i = f_i(v_1,\dots,v_k)$ $(i \in [k+1,k^*])$, for a linear function $f_i$. (All vectors are column vectors throughout.)
$\mbf{e}_i$ denotes the $i$th unit vector $(i=1,2,\ldots)$, with size determined by the context.
$[x,y]$ denotes the integers between $x$ and $y$ inclusive.
$[x,y)$ denotes $[x,y]\setminus\{y\}$.
$\log$ denotes $\log_e$ and $\lg$ denotes $\log_2$.
If $\lambda$ is a security parameter, ``with high probability'' will mean with probability $1-2^{-\epsilon\lambda}$, for some constant $\epsilon>0$.
Polynomial time or space means time or space polynomial in the security parameter~$\lambda$.
An \emph{arithmetic circuit} $\Phi$ over the ring $\mathsf{R}$ in variables $X = \{x_1,\ldots,x_n\}$ is a directed acyclic graph with every vertex (\emph{gate}) having in-degree either two or zero. Every vertex of in-degree 0 is labelled either by a variable in $X$ or by an element of $\mathsf{R}$. Each other vertex in $\Phi$ has in-degree two and is labelled by either $\times$ or $+$. Every vertex of out-degree 0 in $\Phi$ computes a polynomial in $\mathsf{R}[X]$ in the obvious manner. We refer to the directed edges in the acyclic graph as \emph{wires}. The \emph{depth} of $\Phi$ is the length of the longest directed path in it. (See~\cite{hrubevs2011arithmetic}.)
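As an illustration of these definitions, the following is a minimal evaluator (our own encoding, for illustration only) for arithmetic circuits presented in topological order, which also tracks the depth of each gate:

```python
def eval_circuit(gates, env):
    """Evaluate an arithmetic circuit over a ring.

    gates: list in topological order; each gate is one of
      ('var', name), ('const', value), ('+', i, j), ('*', i, j),
    where i, j index earlier gates.  Returns (values, depths).
    """
    values, depths = [], []
    for g in gates:
        if g[0] == 'var':
            values.append(env[g[1]]); depths.append(0)
        elif g[0] == 'const':
            values.append(g[1]); depths.append(0)
        else:
            _, i, j = g
            v = values[i] + values[j] if g[0] == '+' else values[i] * values[j]
            values.append(v)
            depths.append(max(depths[i], depths[j]) + 1)
    return values, depths

# The circuit computing x*y + 3*x over the integers: depth 2.
gates = [('var', 'x'), ('var', 'y'), ('const', 3),
         ('*', 0, 1),   # x*y
         ('*', 2, 0),   # 3*x
         ('+', 3, 4)]   # x*y + 3*x
vals, deps = eval_circuit(gates, {'x': 5, 'y': 7})
print(vals[-1], deps[-1])  # 50 2
```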
A \emph{Boolean circuit} $\Phi$ is defined similarly to an arithmetic circuit.
Every vertex of in-degree 0 is labelled either by a variable in $X$ or an element of $\{0,1\}$. Each other vertex in $\Phi$ has in-degree two and is labelled by a binary Boolean function. Then every vertex of out-degree 0 in $\Phi$ computes some Boolean function of the inputs. Note that any finite computation can be represented as a Boolean circuit. (See~\cite{vollmer1999intro}.)
\subsection{Formal Model of Scenario}\label{sec:formalmodel}
We have $n$ integer inputs $m_1, m_2, \ldots, m_n$ distributed in $[0,M)$ according to a probability distribution $\mathcal{D}$. If $X$ is a random integer sampled from $\mathcal{D}$, let $\Pr[X=i]=\xi_i$, for $i\in[0,M)$. We will consider three measures of the \emph{entropy} of $X$, measured in bits:\\[0.5ex]
\begin{tabular}{lr@{}l}
Shannon entropy & $H_1(X)$ = &\ $-\sum_{i=0}^{M-1}\xi_i \lg \xi_i$,\\[0.5ex]
Collision entropy & $H_2(X)$ = &\ $-\lg \big(\sum_{i=0}^{M-1}\xi_i^2\big)$,\\[0.5ex]
Min entropy & $H_\infty(X)$ = &\ $-\lg \big(\max_{i=0}^{M-1}\xi_i\big)$.
\end{tabular}\\[0.5ex]
It is known that $H_1(X)\geq H_2(X)\geq H_\infty(X)$, with equality if and only if $X$ has the uniform distribution on $[0,M)$, in which case all three are $\lg M$. We will denote $H_\infty(X)$ by $\rho$, so it also follows that $H_1(X),H_2(X)\geq\rho$. We use the term ``entropy'' without qualification to mean min entropy, $H_\infty(X)$. Note that $H_\infty(X)=\rho\geq\lg M$ implies $\xi_i\leq 2^{-\rho}$, $i\in[0,M)$, and that $M\geq 2^\rho$.
We wish to compute a polynomial $P$ of degree $d$ on these inputs. A secure client $A$ selects an instance $\mathcal{E}_K$ of the encryption algorithm $\mathcal{E}$ using the secret parameter set $K$. $A$ encrypts the $n$ inputs by computing $c_i = \mathcal{E}_K(m_i)$, for $i \in [1,n]$. $A$ uploads $c_1, c_2, \ldots, c_n$ and $P'$ to the cloud computing environment, where $P'$ is the homomorphic equivalent of $P$ in the ciphertext space. The cloud computing environment computes $P'(c_1, c_2, \ldots, c_n)$. $A$ retrieves $P'(c_1, c_2, \ldots, c_n)$ from the cloud, and computes \[P(m_1, m_2, \ldots, m_n) = {\mathcal{E}_K}^{-1}(P'(c_1, c_2, \ldots, c_n)).\]
A snooper is only able to inspect $c_1, c_2, \ldots, c_n$, the function $P'$, the computation of $P'(c_1, c_2, \ldots, c_n)$, including subcomputations and working data, and $P'(c_1, c_2, \ldots, c_n)$ itself.
Our encryption schemes are essentially symmetric key encryption, though there is no key distribution problem. The public parameters of our schemes are exposed to the cloud, but they do not provide an encryption oracle.
This model is clearly susceptible to certain attacks. We consider ciphertext only, brute force, and cryptanalytic attacks. To avoid cryptanalytic attacks, we must choose the parameters of the system carefully. Here, a brute force attack will mean guessing the plaintext associated with a ciphertext. In our encryption schemes, it will be true that a guess can be verified. Since $\xi_i\leq 2^{-\rho}$ for $i\in[0,M)$, the expected number $\mu$ of guesses before making a correct guess satisfies $\mu\geq2^{\rho}$. Massey~\cite{massey1994guessing} gave a corresponding result in terms of the Shannon entropy $H_1(X)$.
It follows similarly that the probability of any correct guess in $2^{\rho/2}$ guesses is at most $2^{-\rho/2}$. This bound holds if we need only to guess any one of $n$ inputs, $m_1,m_2,\ldots,m_n$, even if these inputs are not independent. Therefore, if $\rho$ is large enough, a brute force attack is infeasible.
Recall that, in our model, known plaintext attack (KPA) is possible only by brute force, and not through being given a sample of plaintext, ciphertext pairs.
We do not regard chosen plaintext attack (CPA) or chosen ciphertext attack (CCA) as being relevant to our model. Since $\mathcal{E}_K$ is never exposed in the cloud, there is no realistic analogue of an encryption or decryption oracle, as required by these attacks. Of course, in public key encryption, an encryption algorithm is publicly available as part of the system, so CPA must be forestalled. We note that, following \cite{bellare1997concrete}, it is common in studying symmetric key encryption to suppose that defence to CPA or CCA is necessary. While this may provide a stronger notion of security, it seems hard to justify. Both~\cite{bellare2005intro} and~\cite{boneh2015graduate} provide examples which are intended to justify this convention. However, these examples are unconvincing, and seem to have little practical importance. Nevertheless, since it is not difficult to do so, we show that the ``N'' variants of our HE schemes below resist CPA.
We note that observation of the function $P'$, which closely resembles $P$, might leak some information about its inputs. However, we assume that this information is far too weak to threaten the security of the system. This assumption seems universal in the existing literature on HE. However, ``garbled circuits''~\cite{bellare2012yao,goldreich1987play} are a possible solution to this problem, if the threat is significant.
Finally, we note that our model of SSCC is very similar to the model of \emph{private single-client computing}, presented in \cite{vandijk2010impossibility} along with an example application.
\subsection{Our Results}
We describe novel practical HE schemes for the encryption of integers, to be employed in a SSCC system inspired by CryptDB \cite{popa2011cryptdb}. CryptDB is an HE scheme where encryption depends on the operation to be performed. CryptDB encrypts integers using the Paillier cryptosystem \cite{paillier1999} which allows for homomorphic addition. Similar systems (\cite{tetali2013mrcrypt,stephen2014practical}) use Paillier and ElGamal \cite{elgamal1985} to support addition and multiplication, respectively. The ``unpadded'' versions of these schemes are used, which may not be secure under CPA~\cite{goldwasser1984prob}, reducing any possible advantage of a public-key system. However, these schemes do not support both addition and multiplication. To perform an inner product, say, requires re-encrypting the data once the multiplications have been performed so that the additions can then be performed. In a SSCC system, this would require shipping the data back to the initiator for re-encryption, creating a significant overhead. To avoid this problem, we aim for an HE scheme for integers supporting both addition and multiplication.
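As background, the additive homomorphism of Paillier that CryptDB-style systems rely on can be sketched in a few lines. This is a toy implementation with key sizes far too small for any real use:

```python
import math, random

# Toy Paillier cryptosystem (illustration only; keys far too small for security).
p, q = 2003, 2011                 # distinct primes with gcd(pq, (p-1)(q-1)) = 1
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
# mu = L(g^lam mod n^2)^{-1} mod n, where L(u) = (u - 1) / n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:    # r must be a unit mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts (mod n).
c1, c2 = enc(42), enc(17)
print(dec((c1 * c2) % n2))  # 59
```

Exponentiating a ciphertext likewise multiplies the plaintext by a known constant, but there is no way to multiply two encrypted values together, which is precisely the limitation discussed above.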
Our HE scheme over the integers is inspired by the SWHE scheme of van Dijk et al.~\cite{vandijk2010fully} (which we denote DGHVS) that is used as the basis for their public-key system (denoted as DGHV in \cite{coron2011fully}). As in their system, we add multiples of integers to the plaintext to produce a ciphertext. However, DGHVS supports only arithmetic mod~2, and we generalise this to larger moduli.
In the section above, we showed that the input data must have sufficient entropy to negate brute force attacks. If the data lacks sufficient entropy, we will introduce more entropy in two ways. The first is to add random ``noise'' of sufficient entropy to the ciphertext to ``mask'' the plaintext. This approach is employed in DGHV and in sections \ref{sec:he1n} and \ref{sec:he2n}. In our schemes we add a random multiple (from 0 to $\kappa$) of a large integer, $\kappa$, to the ciphertext, such that $m_i<\kappa$, for all $i\in[1,n]$. If the entropy of the original data was $\rho$, once transformed it is $\rho + \lg \kappa$. Therefore, if $\kappa$ is large enough we can ensure that our data has sufficient entropy. However, there is a downside. To prevent the noise term from growing so large that the ciphertext can no longer be decrypted successfully, we are restricted to computing polynomials of low enough degree.
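The masking idea can be sketched as follows (this illustrates only the noise term, not the full scheme, which also involves a secret modulus): sums and products of values masked by random multiples of $\kappa$ are still the true results plus multiples of $\kappa$, so reduction mod $\kappa$ recovers them as long as the true result stays below $\kappa$.

```python
import random

kappa = 2**64            # large masking integer; all messages are < kappa

def mask(m):
    # add a random multiple (from 0 to kappa) of kappa:
    # entropy grows by about lg(kappa) bits
    return m + random.randrange(kappa + 1) * kappa

m1, m2 = 1234, 5678
a, b = mask(m1), mask(m2)

# a + b = (m1 + m2) + (multiple of kappa), and
# a * b = m1*m2 + (multiple of kappa), so reduction mod kappa
# recovers the result while it remains below kappa.
print((a + b) % kappa, (a * b) % kappa)  # 6912 7006652
```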
The other technique will be to increase the dimension of the ciphertext. We represent the ciphertext as a $k$-vector, where each element is a linear function of the plaintext as in DGHVS. Addition and multiplication of ciphertexts are simple transformations of the ciphertexts using vector and matrix algebra. The basic case $k=1$ is described in section \ref{sec:he1}. Then we can increase the entropy $k$-fold by creating a $k$-vector ciphertext. This is because we need to guess $k$ plaintexts to successfully break the system. Assuming that the inputs $m_1,m_2,\ldots,m_n$ are chosen independently from $\mathcal{D}$, and the entropy of inputs is $\rho$, then the entropy of a $k$-tuple $(m_1,m_2,\ldots,m_k)$ is $k\rho$. Thus the $k$-vectors effectively have entropy $k \rho$. If $k$ is chosen large enough, there will be sufficient entropy to prevent brute force attack. Note that the assumption of independence among $m_1,m_2,\ldots,m_n$ can easily be relaxed, to allow some correlation, but we will not discuss the details here. The upside is that some cryptanalytic methods applicable in the case $k=1$ do not seem to generalise even to $k=2$. The downside is that ciphertexts are $k$ times larger, and each homomorphic multiplication requires $\Omega(k^3)$ time and space, in comparison with the case $k=1$. For very large $k$, this probably renders the methods impractical. Therefore, we consider the case $k=2$ in some detail in section~\ref{sec:he2}, before describing the general case in section~\ref{sec:hek}.
Our work here only aims to support integer arithmetic. Other operations, like sorting, require different HE schemes, which we will consider elsewhere. In the integer arithmetic case, a system for computing low-degree polynomials seems to suffice for most practical applications. (See~\cite{naehrig2011can}). To this end, we will consider practically implementable values for the parameters of the cryptosystems.
\subsection{Related Work}
FHE schemes start by devising a SWHE scheme which supports only homomorphic addition and multiplication, so the computation is to evaluate an arithmetic circuit on encryptions to which random ``noise'' has been added. A general computation is represented as an arithmetised Boolean circuit~\cite{babai1991arithmetization}. As ciphertexts are added and multiplied during the computation, the ``noise'' grows until there comes a point where the plaintext cannot be uniquely recovered from the ciphertext. Therefore the arithmetic circuit must be of limited depth, to prevent this ``noise'' growing too large. If the circuit is sufficiently shallow, this SWHE scheme can be used in its own right, as e.g.\ by Boneh et al. \cite{boneh2013pdq}.
The SWHE system is transformed into an FHE scheme by a process of re-encryption where, once the noise grows too large, the ciphertext is re-encrypted, thereby allowing computation to proceed. This re-encryption is performed homomorphically, using a circuit shallow enough that the noise does not grow too large. This circuit necessarily contains information about the private key, which must be suitably hidden. This re-encryption reduces the noise in the ciphertext, and allows circuits of arbitrary depth to be computed homomorphically.
Our scheme is inspired by that of van Dijk et al. \cite{vandijk2010fully}. In their paper they produce an FHE scheme over the integers, where a simple SWHE scheme is ``bootstrapped'' to FHE. van Dijk et al. take a simple symmetric scheme where a plaintext bit $m$ is encrypted as $c = m+2r+pq$, where the secret key $p$ is an odd $\eta$-bit integer from the interval $[2^{\eta-1},2^\eta)$, and $r$ and $q$ are integers chosen randomly from an interval such that $2r < p/2$. The ciphertext $c$ is decrypted by the calculation $(c \bmod p) \bmod 2$. Our scheme HE1N below may be regarded as a generalisation of this.
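The symmetric DGHV scheme just described is simple enough to sketch in a few lines of Python. The parameter sizes below are purely illustrative assumptions (far too small to be secure), and the noise bound is deliberately kept tiny so that one homomorphic multiplication still decrypts correctly.

```python
import random

def dghv_keygen(eta=27):
    # Secret key: an odd eta-bit integer p in [2^(eta-1), 2^eta).
    return random.randrange(2**(eta - 1) + 1, 2**eta, 2)

def dghv_encrypt(m, p):
    # c = m + 2r + pq, with the noise 2r kept small enough that
    # (c mod p) mod 2 == m even after one multiplication.
    q = random.randrange(1, 2**10)
    r = random.randrange(0, 8)
    return m + 2 * r + p * q

def dghv_decrypt(c, p):
    return (c % p) % 2

p = dghv_keygen()
c0, c1 = dghv_encrypt(0, p), dghv_encrypt(1, p)
assert dghv_decrypt(c0 + c1, p) == 1   # homomorphic XOR of the bits
assert dghv_decrypt(c0 * c1, p) == 0   # homomorphic AND of the bits
```

Note that with the noise bounded by $2r\leq 14$ here, the product of two ciphertexts carries noise at most $15^2<p$, so a single multiplication decrypts correctly; deeper circuits would require larger $p$.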
van Dijk et al. transform their symmetric scheme into a public key scheme. A public key $\langle x_0,x_1,\ldots,x_\tau \rangle$ is constructed where each $x_i$ is a near multiple of $p$ of the form $pq+r$ where $q$ and $r$ are random integers chosen from a prescribed interval. To encrypt a message a subset $S$ of $x_i$ from the public key are chosen and the ciphertext is calculated as $c=m+2r+2\sum_{i \in S} x_i \mod x_0$. The ciphertext is decrypted as previously described. We could extend our HE$k$N schemes here to a public key variant, using a similar device. However, we will not do so, since public key systems appear to have little application to our model.
van Dijk et al. ``bootstrap'' their public key system to an FHE scheme, using Gentry's approach~\cite{gentry2009fully}. In this case, the bootstrapping is done by homomorphically simulating a division by $p$, thus obtaining an encryption of $c \bmod p$ which can be used to continue the computation. Our FHE proposal below is based on different principles.
In \cite{coron2011fully}, Coron et al. reduce the size of the public key by using a similar but alternative encryption scheme. In this scheme, $p$ is a prime in the specified interval, $x_0$ is an exact multiple of $p$ and the sum term in the ciphertext is quadratic rather than linear.
The above FHE schemes
represent a major theoretical achievement. However, they appear impractical for computations on large data sets, in terms of both running time and storage requirements.
Therefore, the direction of our work is similar to~\cite{naehrig2011can}. The authors implement the SWHE scheme from~\cite{brakerski2011ring}. However, they give results only for degree two polynomials. Our schemes seem capable of computing somewhat higher degree polynomials for practical key and ciphertext sizes.
Recent work on \emph{functional encryption} \cite{goldwasser2013how,goldwasser2013reusable} should also be noted.
While these results are of great theoretical interest, the scenario where such schemes might be applied is rather different from our model. Also, the methods of~\cite{goldwasser2013how,goldwasser2013reusable} do not seem likely to be of practical interest in the foreseeable future.
The symmetric MORE scheme \cite{kipnis2012efficient} and its derivative \cite{xiao2012efficient} use linear transformations, as does our scheme HE$k$, though in a different way. These systems have been shown~\cite{vizar2015} to be insecure against KPA, at least as originally proposed. However, whether KPA is practically relevant in this context is moot.
We also note the work of Cheon et al. \cite{cheon2015crt}. They use the Chinese Remainder Theorem (CRT) in an FHE system. We make use of the CRT in our scheme HE2NCRT below (section \ref{sec:he2ncrt}). However, our construction differs significantly from theirs.
We should note that the encryption of the Boolean circuits in our fully homomorphic system (section \ref{sec:fhe}) has similarities to Yao's ``garbled circuits'' \cite{bellare2012yao,goldreich1987play}.
\subsection{Roadmap}
We present our initial homomorphic scheme in section \ref{sec:inithom} in two variants, HE1 and HE1N. HE1 (section \ref{sec:he1}) is suitable for integers distributed with sufficient entropy. HE1N (section \ref{sec:he1n}) deals with integers not distributed with sufficient entropy, by adding an additional ``noise'' term.
Section \ref{sec:dimension} describes a further two variants, HE2 and HE2N, which increase the entropy of the plaintext by adding a dimension to the ciphertexts, which are 2-vectors. Again, HE2 (section \ref{sec:he2}) deals with integers of sufficient entropy, HE2N (section \ref{sec:he2n}) with integers without the required entropy. We describe this in some detail, since it appears to be practically useful, and is the simplest version of our general scheme.
In section \ref{sec:genk}, we generalise HE2 and HE2N from 2-vectors to $k$-vectors, for arbitrary $k$, in the scheme HE$k$, with noisy variant HE$k$N. These schemes may also be practical for small enough $k$.
In section \ref{sec:he2ncrt}, we present an extension of HE2N, HE2NCRT, which uses the CRT to distribute the computation.
In section \ref{sec:fhe}, we discuss how HE$k$ can be transformed into an FHE scheme for large enough $k$, though the resulting scheme seems only to be of theoretical interest.
In section~\ref{sec:results} we describe extensive experimentation with the schemes, and finally, in section \ref{sec:concfurther}, we give our conclusions.
\section{Initial Homomorphic Scheme}
\label{sec:inithom}
In this section we present details of our initial SWHE schemes over the integers.
\subsection{Sufficient Entropy (HE1)}
\label{sec:he1}
We have integer inputs $m_1, m_2, \ldots, m_n \in [0,M)$. (Negative integers can be handled as in van Dijk et al.~\cite{vandijk2010fully}, by taking residues in $[-(p-1)/2,(p-1)/2)$, rather than $[0,p)$.) We wish to compute a polynomial $P$ of degree $d$ in these inputs. The inputs are distributed with entropy $\rho$, where $\rho$ is large enough, as discussed in section~\ref{sec:formalmodel} above. Our HE scheme is the system $(\textmyfont{KeyGen},\textmyfont{Enc},\textmyfont{Dec},\textmyfont{Add},\textmyfont{Mult})$.
Let $\lambda$ be a large enough security parameter, measured in bits. Let $p$ and $q$ be suitably large distinct primes such that $p\in[2^{\lambda-1},2^\lambda]$, and $q\in[2^{\eta-1},2^\eta]$, where $\eta\approx\lambda^2/\rho - \lambda$. Here $\lambda$ must be large enough to negate direct factorisation of $pq$ (see~\cite{kleinjung2010factor}), and the relative values of $p$ and $q$ are chosen to negate Coppersmith's attack \cite{coppersmith1997small}. We will also require $p > (n+1)^dM^d$ to ensure that $P(m_1,m_2,\ldots,m_n) < p$, so that the result of the computation can be successfully decrypted. (In many applications, a smaller value of $p$ may suffice.) Our function $\textmyfont{KeyGen}$ will randomly select $p$ and $q$ according to these bounds. Then $p$ is the private symmetric key for the system and $pq$ is the modulus for arithmetic performed by $\textmyfont{Add}$ and $\textmyfont{Mult}$. $pq$ is a public parameter of the system. We assume that the entropy $\rho\gg\lg\lambda$, so that a brute force attack cannot be carried out in polynomial time.
We can easily set the parameters to practical values. If $n\approx\sqrt{M}$ and $M\approx2^\rho$, then we may take $\lambda \approx 3d\rho/2$ and $\eta\approx 3d\lambda/2 - \lambda$ (see appendix~\ref{app:bounds}). For example, if $\rho=32$ and $d=4$, we can take any $\lambda > 192$, $\eta > 960$.
We encrypt a plaintext integer $m$ as
\begin{align*}
\textmyfont{Enc}(m,p) &= m + r p\ \bmod\ pq
\end{align*}
where $r\xleftarrow{\$}[1,q)$.
We decrypt the ciphertext $c$ by
\begin{align*}
\textmyfont{Dec}(c,p) &= c\ \bmod\ p
\end{align*}
The sum modulo $pq$ of two ciphertexts, $c = m + rp$ and $c' = m' + r'p$, is
\begin{align*}
\textmyfont{Add}(c,c')= c+c'\ \bmod\ pq\ =\ m+m' + (r+r')p.
\end{align*}
This decrypts to $m+m'$, provided $m+m'<p$.
The product modulo $pq$ of two ciphertexts, $c = m + rp$ and $c' = m' + r'p$, is
\begin{align*}
\textmyfont{Mult}(c,c')&= cc' \mod{pq}\\ &= mm' + (rm'+r'm+rr'p)p,
\end{align*}
which decrypts to $mm'$, provided $mm'<p$.
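The whole of HE1 fits in a short Python sketch. The fixed Mersenne primes below stand in for the random $\lambda$-bit and $\eta$-bit primes chosen by $\textmyfont{KeyGen}$; they are an illustrative assumption only, far too small (and far too guessable) for security.

```python
import random

# Toy stand-ins for the primes chosen by KeyGen (illustrative only).
p = 2**31 - 1            # secret key; must exceed every value P(m_1,...,m_n)
q = 2**61 - 1
N = p * q                # public modulus

def enc(m):
    r = random.randrange(1, q)
    return (m + r * p) % N

def dec(c):
    return c % p

# Homomorphic operations are ordinary modular arithmetic on ciphertexts.
a, b = enc(12345), enc(678)
assert dec((a + b) % N) == 12345 + 678
assert dec((a * b) % N) == 12345 * 678
assert dec((a * a + b) % N) == 12345**2 + 678   # a degree-2 polynomial
```

The last assertion illustrates the constraint $p > P(m_1,\ldots,m_n)$: all intermediate plaintext values stay below $p$, so reduction modulo $p$ recovers them exactly.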
Security of the system is provided by the \emph{partial approximate common divisor problem} (PACDP), first posed by Howgrave-Graham~\cite{howgrave2001approx}, which can be formulated \cite{chen2012faster,cohn2012approx} as:
\begin{definition} \textit{(Partial approximate common divisor problem.)} Suppose we are given one input $x_0=pr_0$ and $n$ inputs $x_i=pr_i + m_i$, $i\in[1,n]$. We have a bound $B$ such that $|m_i| < B$ for all $i$. Under what conditions on the variables, $r_i$ and $m_i$, and the bound $B$, can an algorithm be found that can uniquely determine $p$ in a time which is polynomial in the total bit length of the numbers involved?
\end{definition}
A straightforward attack on this problem is by brute force. Consider $x_1$. Assuming that $m_1$ is sampled from $\mathcal{D}$, having entropy $\rho$, we successively try values for $m_1$ and compute $\gcd(x_0,x_1-m_1)$ in polynomial time until we find a divisor that is large enough to recover $p$. Then we can recover $m_i$ as $(x_i \bmod p)$ for $i\in[2,n]$. As discussed in section~\ref{sec:formalmodel}, the search requires $2^{\rho}$ $\gcd$ operations in expectation.
Several attempts have been made to solve the PACDP~\cite{howgrave2001approx,cohn2012approx,chen2012faster}, resulting in theoretically faster algorithms for some cases of the problem. However, our parameters for $p$ and $q$ are chosen to negate the attacks of~\cite{howgrave2001approx,cohn2012approx}. The paper~\cite{chen2012faster} gives an algorithm requiring only $\sqrt{M}$ polynomial time operations in the special case that $\mathcal{D}$ is the uniform distribution on $[0,M)$,
and hence $\rho=\lg M$. No algorithm running in time subexponential in $\rho$ is known for this problem in the worst case. Therefore, if $\rho$ is large enough, the encryption should be secure.
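The brute force $\gcd$ attack just described can be sketched as follows. The 8-bit plaintext entropy is a deliberate assumption of the sketch, chosen so small that the search succeeds almost instantly; at a realistic $\rho$ the same loop would need $2^{\rho}$ iterations.

```python
import math
import random

p = 2**31 - 1                                  # hidden common divisor (prime)
x0 = p * random.randrange(2**20, 2**21)        # an exact multiple of p
m1 = random.randrange(0, 2**8)                 # low-entropy plaintext: 8 bits
x1 = p * random.randrange(2**20, 2**21) + m1   # a near-multiple of p

# Try every candidate plaintext; only the correct guess makes x1 - guess an
# exact multiple of p, so it alone yields a gcd of at least p.
best_guess = max(range(2**8), key=lambda g: math.gcd(x0, x1 - g))

assert best_guess == m1
assert math.gcd(x0, x1 - best_guess) % p == 0   # p recovered
```

For a wrong guess the $\gcd$ divides the cofactor of $x_0$, which is below $2^{21}$ here, so the correct guess is the unique maximiser.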
In actuality, our system is a special case of PACDP because we use the residues of the approximate prime multiples modulo a distinct semiprime modulus. A semiprime is a natural number that is the product of two prime numbers. A distinct semiprime is a semiprime where the prime factors are distinct. We denote this case of PACDP as the \emph{semiprime partial approximate common divisor problem} (SPACDP). Although it is a restriction, there is no reason to believe that this is any easier than PACDP.
\begin{definition} \textit{(Semiprime factorisation problem.)} Given a semiprime $s$, the product of primes $p$ and $q$, can $p$ and $q$ be determined in polynomial time?
\end{definition}
The computational complexity of this problem, which lies at the heart of the widely-used RSA cryptosystem, is open, other than for quantum computing, which currently remains impractical. We will show that breaking HE1 is equivalent to semiprime factorisation. Therefore, our scheme is at least as secure as unpadded RSA~\cite{rivest1978method}.
\begin{restatable}{theorem}{factorise}
\label{thm:1}
An attack against HE1 is successful in polynomial time if and only if we can factorise a distinct semiprime in polynomial time.
\end{restatable}
There is a variant of brute force attack on this system, which we will call a \emph{collision attack}. Suppose we have a pair of equal plaintexts $m_1=m_2$. Then the difference between their encryptions $(c_1-c_2)$ is an encryption of $0$, and the scheme is subject to KPA. In fact, if we have $n$ plaintexts $m_1,m_2,\ldots,m_n$, and there exist $i,j\in[1,n]$ with $m_i=m_j$, the product $\Pi_{1\leq i<j\leq n}(c_j-c_i)$ is an encryption of $0$. However, if there is sufficient entropy, this attack is not possible.\vspace{-2ex}
\begin{restatable}{lemma}{collision}
\label{lem:1}
If the inputs $m$ have entropy $\rho$ then, for any two independent inputs $m_1,m_2$, $\Pr(m_1=m_2)\leq 2^{-\rho}$.
\end{restatable}
Thus, if we have $n$ inputs, $m_1,m_2,\ldots,m_n$ the probability that there exist $i,j\in[1,n]$ with $m_i=m_j$ is at most $\binom{n}{2} 2^{-\rho}$. If $n<2^{-\rho/3}$, this probability is at most $2^{-\rho/3}$, smaller than any inverse polynomial in $\lambda$. Hence, for large enough $\lambda$, collision attack is infeasible.
A similar collision attack can be made against the schemes described below. We will not discuss the details, since they are almost identical to those above.
\subsection{Insufficient Entropy (HE1N)}
\label{sec:he1n}
Suppose now that the integer inputs $m_i, i\in[1,n],$ are distributed with entropy $\rho$, where $\rho$ is not large enough to negate a brute force guessing attack. Therefore, we increase the entropy of the plaintext by adding an additional ``noise'' term to the ciphertext. This will be a random multiple $s\kappa$ of an integer $\kappa$, with $s\in[0,\kappa)$, chosen so that the entropy $\rho'=\rho + \lg \kappa$ is large enough to negate a brute force guessing attack. We also require $\kappa > (n+1)^d M^d$, so that $P(m_1,m_2,\ldots,m_n) < \kappa$. As a result of the extra linear term in the ciphertext, we compute $P(m_1,\ldots,m_n,\kappa)$ instead. We can easily retrieve $P(m_1,\ldots,m_n)$ from $P(m_1,\ldots,m_n,\kappa)$. $\textmyfont{KeyGen}$ now chooses $p$ and $q$ as in HE1, but with $\eta = \lambda^2/\rho' - \lambda,$ and $p>(n+1)^d (M+\kappa^2)^d$ so that
\[ P(m_1+s_1\kappa,m_2+s_2\kappa,\ldots,m_n+s_n\kappa) < p,\] when $s_1,s_2,\ldots,s_n\in[0,\kappa)$. The secret key, \textmyfont{sk}, is now $(\kappa,p)$.
We can set these parameters to practical values. If we assume $M \approx 2^\rho$ and large enough $n$, as in section \ref{sec:he1}, then we may take $\lg{\kappa} > d (\lg{n} + \rho)$, $\rho'=\rho+\lg\kappa$, $\lambda > d( \lg{n} + 2 \lg{\kappa})$. Then, for example, if $d=3$, $\lg{n}=16$, $\rho=8$, then $\lg{\kappa}>72$, $\rho'=80$, $\lambda > 480$, $\eta > 2400$. In the extreme case that the inputs are bits, so $\rho=1$, and $d=3$, $\lg{n}=16$, then we can take $\lg{\kappa} \approx 51$ and $\rho'\approx52$, and we have $\lambda > 354$, $\eta > 2056$, which is only 15\% smaller than for $\rho=8$.
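The parameter arithmetic above can be checked mechanically. The helper below, an illustrative sketch only, evaluates the stated bounds at their boundary values (rounding $\eta$ up), and reproduces the two worked examples in the text.

```python
import math

def he1n_params(d, lg_n, rho):
    """Boundary values of the HE1N parameter bounds (the scheme requires
    strict inequalities, so these are minima rather than secure choices)."""
    lg_kappa = d * (lg_n + rho)          # from lg(kappa) > d(lg n + rho)
    rho_prime = rho + lg_kappa           # effective entropy after adding noise
    lam = d * (lg_n + 2 * lg_kappa)      # from lambda > d(lg n + 2 lg kappa)
    eta = math.ceil(lam**2 / rho_prime) - lam
    return lg_kappa, rho_prime, lam, eta

# The example with d = 3, lg n = 16, rho = 8 ...
assert he1n_params(3, 16, 8) == (72, 80, 480, 2400)
# ... and the extreme single-bit case rho = 1.
assert he1n_params(3, 16, 1) == (51, 52, 354, 2056)
```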
We encrypt a plaintext, $m$, as
\[ \textmyfont{Enc}(m,\textrm{\textmyfont{sk}}) = m+ s\kappa+rp \mod{pq},\]
where $r\xleftarrow{\$}[1,q)$ and $s \xleftarrow{\$} [0,\kappa)$. We decrypt a ciphertext, $c$, as
\[ \textmyfont{Dec}(c,\textrm{\textmyfont{sk}}) = (c \bmod p) \bmod \kappa. \]
Addition and multiplication of ciphertexts is as above.
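HE1N can be sketched in the same toy setting as the HE1 sketch. The prime and $\kappa$ sizes are illustrative assumptions only; they are chosen so that a degree-2 polynomial of the noisy values $m+s\kappa$ stays below $p$, and the result stays below $\kappa$.

```python
import random

kappa = 1009             # must exceed any computed value P(m_1,...,m_n)
p = 2**61 - 1            # secret prime: exceeds P evaluated at m + s*kappa
q = 2**31 - 1
N = p * q

def enc(m):
    s = random.randrange(0, kappa)      # the added "noise" multiple of kappa
    r = random.randrange(1, q)
    return (m + s * kappa + r * p) % N

def dec(c):
    return (c % p) % kappa

# Even single-bit plaintexts are masked by the random multiple of kappa.
b0, b1, b2 = enc(0), enc(1), enc(1)
assert dec((b1 * b2 + b0) % N) == 1     # evaluates m1*m2 + m0
assert dec((b0 + b1 + b2) % N) == 2
```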
The use of random noise gives the encryption the following ``indistinguishability'' property, which implies that the system satisfies IND-CPA \cite{bellare1998relations,bellare1997concrete}.
\begin{restatable}{theorem}{ind}
\label{thm:2}
For any encryption $c$, $c\bmod\kappa$ is polynomial time indistinguishable from the uniform distribution on $[0,\kappa)$. Thus HE1N satisfies IND-CPA, under the assumption that SPACDP is not polynomial time solvable.\vspace{-2ex}
\end{restatable}
\section{Adding a dimension}
\label{sec:dimension}
In this section we discuss adding an additional dimension to the ciphertext, which becomes a 2-vector. In both schemes presented below, HE2 and HE2N, we add a further vector term, with two further secret parameters. The two schemes presented below have a constant factor overhead for arithmetic operations. An addition operation in the plaintext space requires two additions in the ciphertext space, and a multiplication in the plaintext space requires nine multiplications and four additions in the ciphertext space.
\subsection{Sufficient entropy (HE2)}
\label{sec:he2}
As with HE1, it is assumed that the inputs $m_i\ (i\in[1,n])$ are of sufficient entropy. $p$ and $q$ are chosen by $\textmyfont{KeyGen}$ according to the bounds given in section~\ref{sec:he1}. $\textmyfont{KeyGen}$ also sets $\mbf{a}= [a_1\ a_2]^T$, where $a_i\xleftarrow{\$} [1,pq)$ $(i\in [1,2])$ such that $a_1, a_2, a_1-a_2\neq 0$ ($\bmod\ p$ and $\bmod\ q$). The secret key \textmyfont{sk}\ is $(p,\mbf{a})$ and the public parameters are $pq$ and $R$. $R$ is the re-encryption matrix, which is detailed below.
The condition $a_1, a_2, a_1-a_2\neq 0$, $(\bmod~p$,~$\bmod~q)$ fails with exponentially small probability $3(1/p+1/q)$. Thus, $a_1$ and $a_2$ are indistinguishable in polynomial time from $a_1,a_2\xleftarrow{\$}[0,pq)$.
\subsubsection*{Encryption}
We encrypt a plaintext integer $m$ as the 2-vector $\mbf{c}$,
\begin{align*}
\mbf{c} =\textmyfont{Enc}(m,\textrm{\textmyfont{sk}})= (m+rp)\mbf{1} + s\mbf{a} \mod {pq} ,
\end{align*}
where $\mbf{1}= [1\ \,1]^T$, $r\xleftarrow{\$} [0,q)$, and $s\xleftarrow{\$} [0,pq)$. We construct $\mbf{c}_{\star}$, where $c_3=f(c_1,c_2)$ for a given linear function $f$. We will use $f(c_1,c_2)=2c_1-c_2$, though we only require $f(c_1,c_2)\neq c_1,c_2$. Therefore, $c_3= (m+rp) + sa_3 \mod pq$, for $a_3=2a_1-a_2$.
\begin{restatable}{theorem}{hetworandom}
\label{thm:3}
The encryption scheme produces ciphertexts with components which are random integers modulo $pq$.
\end{restatable}
Note, however, that the components of the ciphertexts are correlated, and this is a vulnerability. We discuss this later in this section (``Cryptanalysis'').
\subsubsection*{Decryption}
To decrypt, we eliminate $s$ from $\mbf{c}$ (modulo $p$), giving
\begin{align*}
\textmyfont{Dec}(\mbf{c},\textrm{\textmyfont{sk}})= \mbf{\gamma}^T \mbf{c} \mod p,
\end{align*}
where $\mbf{\gamma}^T= (a_{2}-a_{1})^{-1}[a_2~\,-a_1]$. We call $\mbf{\gamma}$ the \emph{decryption vector}.
\subsubsection*{Addition}
We define the addition operation on ciphertexts as the vector sum modulo $pq$ of the two ciphertext vectors $\mbf{c}$ and $\mbf{c'}$,
\begin{align*}
\textmyfont{Add}(\mbf{c},\mbf{c'})=\mbf{c}+\mbf{c'} \mod{pq}.
\end{align*}
Therefore, if inputs $m,m'$ encrypt as $(m+rp)\mbf{1}+s\mbf{a}$, $(m'+r'p)\mbf{1}+s'\mbf{a}$, the sum is:
\begin{align*}
\mbf{c}+\mbf{c'}=(m+m'+(r+r')p)\mbf{1}+(s+s')\mbf{a}.
\end{align*}
which is a valid encryption of $m+m'$.
\subsubsection*{Multiplication}
Consider the Hadamard product modulo $pq$, $\mbf{c}_{\star} \circ \mbf{c}_{\star}'$, of the two augmented ciphertext vectors $\mbf{c_{\star}}$ and $\mbf{c_{\star}}'$:
\begin{align*}
\mbf{z}_{\star}=\mbf{c}_{\star} \circ \mbf{c}_{\star}' = \begin{bmatrix}
c_1 c_1' \\ c_2 c_2'\\c_3c_3' \end{bmatrix} \mod {pq}
\end{align*}
Therefore, if inputs $m,m'$ are encrypted as $(m+rp)\mbf{1}+s\mbf{a}$, $(m'+r'p)\mbf{1}+s'\mbf{a}$, we first calculate
\begin{align*}
\mbf{z}_{\star} &= (m+rp)(m'+r'p)\mbf{1}_{\star}+[(m+rp)s'+(m'+r'p)s]\mbf{a}_{\star}\\
&+ss'\mbf{a}_{\star}^{\circ2}
=(mm'+r_1p)\mbf{1}_{\star}+s_1\mbf{a}_{\star}+ss'\mbf{a}_{\star}^{\circ2}\ \ \mod {pq},
\end{align*}
where $r_1=mr'+m'r+rr'p$, $s_1=(m+rp)s'+(m'+r'p)s$, and $\mbf{a}_{\star}^{\circ2}=[a_1^2\ \ a_2^2\ \ a_3^2]^T$.
As we can see, $\mbf{z}_{\star}$ is not a valid encryption of $mm'$. We need to re-encrypt this product to eliminate the $\mbf{a}_{\star}^{\circ2}$ term.
We achieve this by multiplying $\mbf{z}_{\star}$ by $R$, a $2\times3$ matrix, \[\left[\begin{array}{c@{\quad}c@{\quad}c}
1-2\alpha_1 & \alpha_{1} & \alpha_1 \\
-2\alpha_2 & \alpha_{2}+1 & \alpha_2
\end{array}\right],\]
where $\alpha_1$ and $\alpha_2$ are parameters to be decided.
It is easy to check that $R\mbf1_{\star}=\mbf1$ and $R\mbf a_{\star}=\mbf a$, independently of $a_1,a_2$. Now
\begin{align*}
(R\mbf a_{\star}^{\circ2})_1&=(1-2\alpha_1)a_1^2+\alpha_1a_2^2+\alpha_1(2a_1-a_2)^2\\
&=a_1^2+\alpha_1((2a_1-a_2)^2+a_2^2-2a_1^2)\\
&=a_1^2+2\alpha_1(a_2-a_1)^2 \\
(R\mbf a_{\star}^{\circ2})_2&=-2\alpha_2a_1^2+(\alpha_2+1)a_2^2+\alpha_2(2a_1-a_2)^2\\
&=a_2^2+\alpha_2((2a_1-a_2)^2+a_2^2-2a_1^2)\\
&=a_2^2+2\alpha_2(a_2-a_1)^2
\end{align*}
Let $\beta=2(a_2-a_1)^2$. Since $p$ and $q$ are odd, and $a_2-a_1\neq 0$ both $\bmod\ p$ and $\bmod\ q$, the inverse $\beta^{-1}\bmod pq$ exists. Therefore, if we set
\begin{equation}\label{eq:160}
\alpha_1= \beta^{-1}(\sigma a_1+\varrho p-a_1^2),\qquad\alpha_2= \beta^{-1}(\sigma a_2+\varrho p-a_2^2),
\end{equation}
where $\varrho \xleftarrow{\$} [0,q)$ and $\sigma \xleftarrow{\$} [0,pq)$, then we obtain the identity
\[R\mbf a_{\star}^{\circ2} = \varrho p \mbf{1} + \sigma \mbf{a}.\]
Observe that $\alpha_1,\alpha_2$ are public, but give only two equations for the four parameters of the system $a_1,a_2,\sigma,\varrho p$. These equations are quadratic $\bmod\ pq$, and solving them is as hard as semiprime factorisation in the worst case~\cite{rabin1979dsp}.
We re-encrypt by applying $R$ to $\mbf{z}_{\star}$, i.e. $\mbf{z}' = R \mbf{z}_{\star}$, so
\begin{align*}
\mbf{z}'&= (mm'+r_1p)R\mbf{1}+s_1R\mbf{a}+ss'R\mbf{a}^{\circ2}\\
&= (mm'+r_1p)\mbf{1}+s_1\mbf{a}+ss'(\sigma\mbf{a}+\varrho p\mbf{1})\\
&=(mm'+r_2p)\mbf{1}+(s_1+\sigma ss')\mbf{a}\\
&=(mm'+r_2p)\mbf{1}+s_2\mbf{a}\quad\pmod {pq}
\end{align*}
for some integers $r_2,s_2$. So $\mbf{z}'$ is a valid encryption of $mm'$.
Therefore, the homomorphic equivalent of a multiplication operation is defined as
\begin{align*}
\textmyfont{Mult}(\mbf{c},\mbf{c}') = \mbf{c} \cdot \mbf{c}' = R(\mbf{c_{\star}}\circ\mbf{c'_{\star}})\pmod{pq},
\end{align*}
where $\cdot$ is a product on $\mathbb{Z}_{pq}^2$ and $\mbf{c_{\star}}\circ\mbf{c'_{\star}}$ is the Hadamard product modulo $pq$ of the two extended ciphertext vectors $\mbf{c_{\star}}$ and $\mbf{c'_{\star}}$. Thus, the public parameters of the system are the modulus $pq$ and the re-encryption matrix $R$, i.e. $(pq,R)$.
Observe that, independently of $\mbf a$,
\[R\mbf{c_{\star}}=(m+rp)R\mbf{1_{\star}}+sR\mbf{a_{\star}}=(m+rp)\mbf{1}+s\mbf{a}=\mbf{c},\]
for any ciphertext $\mbf{c}$. Hence re-encrypting a ciphertext gives the identity operation, and discloses no information.
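The full HE2 scheme (key generation, encryption, decryption, addition, and multiplication with re-encryption by $R$) fits in a short Python sketch. The fixed Mersenne primes are an illustrative assumption, far too small to be secure; the modular inverses rely on Python 3.8's three-argument pow.

```python
import random

p, q = 2**31 - 1, 2**61 - 1    # toy stand-ins for the primes from KeyGen
N = p * q

# Secret vector a = (a1, a2) with a1, a2, a1 - a2 nonzero mod p and mod q.
while True:
    a1, a2 = random.randrange(1, N), random.randrange(1, N)
    if all(x % p and x % q for x in (a1, a2, a1 - a2)):
        break

# Public re-encryption matrix R, built from alpha1, alpha2 as in the text.
beta_inv = pow(2 * (a2 - a1) ** 2, -1, N)
rho_, sigma = random.randrange(0, q), random.randrange(0, N)
alpha1 = beta_inv * (sigma * a1 + rho_ * p - a1 * a1) % N
alpha2 = beta_inv * (sigma * a2 + rho_ * p - a2 * a2) % N
R = [[(1 - 2 * alpha1) % N, alpha1, alpha1],
     [(-2 * alpha2) % N, (alpha2 + 1) % N, alpha2]]

def enc(m):
    r, s = random.randrange(0, q), random.randrange(0, N)
    return [(m + r * p + s * a1) % N, (m + r * p + s * a2) % N]

def dec(c):
    g = pow(a2 - a1, -1, p)              # gamma = g * (a2, -a1) mod p
    return g * (a2 * c[0] - a1 * c[1]) % p

def add(c, d):
    return [(c[0] + d[0]) % N, (c[1] + d[1]) % N]

def mult(c, d):
    cs = c + [(2 * c[0] - c[1]) % N]     # augmented 3-vectors: c3 = 2c1 - c2
    ds = d + [(2 * d[0] - d[1]) % N]
    z = [cs[i] * ds[i] % N for i in range(3)]          # Hadamard product
    return [sum(R[i][j] * z[j] for j in range(3)) % N  # re-encrypt with R
            for i in range(2)]

x, y = enc(1234), enc(56)
assert dec(add(x, y)) == 1290
assert dec(mult(x, y)) == 1234 * 56
```

Note that only $pq$ and $R$ appear on the evaluator's side of $\textmyfont{Add}$ and $\textmyfont{Mult}$; the secrets $p,a_1,a_2$ are needed only inside $\textmyfont{Enc}$ and $\textmyfont{Dec}$.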
\subsubsection*{Hardness}
We can show that this system is at least as hard as SPACDP. In fact,
\begin{restatable}{theorem}{hardness}
\label{thm:4}
SPACDP is of equivalent complexity to the special case of HE2 where $\delta =a_2-a_1$ ($0<\delta <q$) is known.
\end{restatable}
Observe that, without knowing the parameter $\delta=a_2-a_1$, HE2 cannot be reduced to SPACDP in this way. Thus HE2 is seemingly more secure than HE1.
\subsubsection*{Cryptanalysis}
Each new ciphertext $\mbf{c}$ introduces two new unknowns $r,s$ and two equations for $c_1,c_2$. Thus we gain no additional information from a new ciphertext. However, if we can guess $m,\,m'$ for any two ciphertexts $\mbf{c},\mbf{c}'$, we can determine
\begin{align*}
(c_1-m)=rp+sa_1&,\qquad(c_2-m)=rp+sa_2,\\(c'_1-m')=r'p+s'a_1&,\qquad(c'_2-m')=r'p+s'a_2,
\end{align*}
so
\begin{align*}
(c_1-m)&(c'_2-m')-(c_2-m)(c'_1-m')\\=\,&(a_2-a_1)(rs'-r's)p \pmod{pq}
\end{align*}
Since $a_2\neq a_1$, and $sr'\neq s'r$ with high probability, this is a nonzero multiple of $p$, $\nu p$ say. We may assume $\nu<q$, so $p=\gcd(\nu p, pq)$.
We can now solve the linear system $\gamma^T[\mbf{c}\ \,\mbf{c}']=[m\ \,m']\mod{p}$ to recover the decryption vector. This effectively breaks the system, since we can now decrypt an arbitrary ciphertext. We could proceed further, and
attempt to infer $a_1$ and $a_2$, but we will not do so.
Note that to break this system, we need to guess two plaintexts, as opposed to one in HE1. The entropy of a pair $(m,m')$ is $2\rho$, so we have effectively squared the number of guesses needed to break the system relative to HE1. So HE2 can tolerate somewhat smaller entropy than HE1. We note further that HE2 does not seem immediately vulnerable to other attacks on HE1~\cite{howgrave2001approx,cohn2012approx,chen2012faster}.
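The known-plaintext attack above can be sketched directly (under the same illustrative toy primes as before): with two correctly guessed plaintexts, a single $\gcd$ recovers $p$ with overwhelming probability.

```python
import math
import random

p, q = 2**31 - 1, 2**61 - 1    # toy parameters; p is the secret to recover
N = p * q
a1, a2 = random.randrange(1, N), random.randrange(1, N)

def enc(m):
    r, s = random.randrange(1, q), random.randrange(1, N)
    return [(m + r * p + s * a1) % N, (m + r * p + s * a2) % N]

m1, m2 = 41, 97                # plaintexts the attacker has guessed correctly
c1, c2 = enc(m1), enc(m2)

# (c1_1 - m1)(c2_2 - m2) - (c1_2 - m1)(c2_1 - m2) = (a2 - a1)(rs' - r's)p,
# a nonzero multiple of p with high probability.
D = ((c1[0] - m1) * (c2[1] - m2) - (c1[1] - m1) * (c2[0] - m2)) % N
recovered = math.gcd(D, N)
assert recovered == p          # holds with overwhelming probability
```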
\subsection{Insufficient entropy (HE2N)}
\label{sec:he2n}
In this section we extend HE1N above (section \ref{sec:he1n}) to two dimensions. $\textmyfont{KeyGen}$ chooses $p,q$ and $\kappa$ according to the bounds given in section \ref{sec:he1n} and $\mbf{1}$, $\mbf{a}$ are as in section \ref{sec:he2}. The secret key is $(\kappa,p,\mbf{a})$, and the public parameters are $pq$ and $R$, as defined in section \ref{sec:he2}.
We encrypt a plaintext integer $m \in [0,M)$ as a 2-vector~$\mbf{c}$,
\begin{align*}
\textmyfont{Enc}(m,\textrm{\textmyfont{sk}}) = \mbf{c} =(m+rp+s\kappa)\mbf{1} + t\mbf{a} \mod {pq},
\end{align*}
where $r$ is as in section~\ref{sec:he2}, $s\xleftarrow{\$}[0,\kappa)$, and $t\xleftarrow{\$} [0,pq)$.
We decrypt a ciphertext $\mbf{c}$ by
\begin{align*}
\textmyfont{Dec}(\mbf{c},\textrm{\textmyfont{sk}})=(\mbf{\gamma}^T \mbf{c} \mod p)\mod \kappa,
\end{align*}
where $\mbf{\gamma}^T$ is defined as in section \ref{sec:he2}.
Addition and multiplication of ciphertexts are defined as in section \ref{sec:he2}.
Finally, we note that HE2N satisfies Theorem~\ref{thm:2}.
\section{Generalisation to \lowercase{\textit{\large k}} dimensions}
\label{sec:genk}
In this section we generalise HE2 and HE2N to $k$-vectors. HE1 and HE1N are the cases for $k=1$ and HE2 and HE2N are the cases for $k=2$.
\subsection{Sufficient entropy (\textbf{\large HE$k$})}
\label{sec:hek}
We now generalise HE2 to $k$ dimensions. $\textmyfont{KeyGen}$ randomly chooses $p$ and $q$ according to the bounds given in section \ref{sec:he2}. $\textmyfont{KeyGen}$ sets $\mbf{a}_j \xleftarrow{\$} [1,pq)^k$, $\forall j\in[1,k-1]$. The secret key, \textmyfont{sk}, is $(p,\mbf{a}_1,\ldots,\mbf{a}_{k-1})$, and the public parameters are $pq$ and $R$. Again, $R$ is detailed below.
With regard to computational overhead, the number of arithmetic operations per plaintext multiplication is $O(k^3)$, and the space requirement per ciphertext is $O(k)$, by comparison with HE1.
\subsubsection*{Encryption}
A plaintext, $m \in [0,M)$, is enciphered as
\[\textmyfont{Enc}(m,\textrm{\textmyfont{sk}})=\mbf{c}= (m+rp)\mbf{1}+\sum_{j=1}^{k-1}s_j\mbf{a}_j \mod{pq} \] where $\mbf{c}$ is a $k$-vector, $r\xleftarrow{\$} [0,q)$, and $\forall j, s_j \xleftarrow{\$} [0,pq)$. Let $\mbf{a}_0=\mbf{1}$, and $A_k=[\mbf{a}_0\ \mbf{a}_1\ \ldots\ \mbf{a}_{k-1}]$. We wish the columns of $A_k$ to form a basis for $\mathbb{Z}^k_{pq}$. We will show that they do so with high probability. In the unlikely event that they do not, we generate new vectors until they do.
\begin{restatable}{lemma}{lemten}
\label{lem:10}
$\Pr(\mbf{a}_0,\mbf{a}_1,\ldots,\mbf{a}_{k-1}$ do not form a basis$)\leq(k-1)(1/p+1/q)$.
\end{restatable}
We extend our definition of an augmented vector, $\mbf{v}_\star$, for a $k$-vector, $\mbf{v}$, such that $\mbf{v}_\star$ is a $\binom{k+1}{2}$-vector, with components $v_i$ ($1\leq i \leq k$) followed by $2v_i-v_j$ ($1\leq i<j\leq k$).
In general, for $\ell>k$, $v_\ell=2v_i-v_j$, where $\ell=\binom{i}{2}+k+j-1$. Note that $\mbf{v}_\star=U_k\mbf{v}$ for a $\binom{k+1}{2}\times k$ matrix with entries $0,\pm 1,2$, and whose first $k$ rows form the $k\times k$ identity matrix $I_k$.
Note that $\mbf{v}_\star=U_k\mbf{v}$ implies that $\mbf{1}_\star$ is the $\binom{k+1}{2}$ vector of 1's, and that $\star$ is a linear mapping, i.e. $(r_1\mbf{v}_1+r_2\mbf{v}_2)_\star=r_1\mbf{v}_{1\star}+r_2\mbf{v}_{2\star}$.
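For concreteness, the augmentation map $\mbf{v}\mapsto\mbf{v}_\star$ can be sketched as follows. The ordering of the pairs $(i,j)$ used here, iterating $j$ within $i$, is one natural choice and is an assumption of the sketch.

```python
def augment(v, modulus):
    """Return v_star: the k components of v, followed by 2*v[i] - v[j]
    for each pair i < j, giving a vector of length k*(k+1)//2."""
    k = len(v)
    star = list(v)
    for i in range(k):
        for j in range(i + 1, k):
            star.append((2 * v[i] - v[j]) % modulus)
    return star

# The all-ones vector maps to the all-ones augmented vector: 2*1 - 1 = 1.
assert augment([1] * 3, 100) == [1] * 6
assert augment([10, 20, 30], 1000) == [10, 20, 30, 0, 990, 10]
```

Linearity of the map (it is multiplication by the fixed matrix $U_k$) is evident from the fact that each output component is a fixed integer combination of the inputs.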
\subsubsection*{Decryption}
\[\textmyfont{Dec}(\mbf{c},\textrm{\textmyfont{sk}})= \mbf{\gamma}^T\hspace*{1pt}\mbf{c}\mod p.\]
where $\mbf{\gamma}^T=(A_k^{-1})_1$ is the first row of $A_k^{-1}$. We call $\mbf{\gamma}$ the \emph{decryption vector}, as in HE2.
\subsubsection*{Addition}
Addition of ciphertexts is the vector sum of the ciphertext vectors as with HE2.
\subsubsection*{Multiplication}
Consider the Hadamard product of two augmented ciphertext vectors, $\mbf{c}_\star\circ\mbf{c}'_\star$. For notational brevity, let $\tilde{m}=m+rp$.
\begin{align*}
\mbf{c}_\star\circ\mbf{c}'_\star\ &=\ \big(\tilde{m}\mbf{1}_\star+\sum_{j=1}^{k-1}s_j\mbf{a}_{\star j}\big)\circ\big(\tilde{m}'\mbf{1}_\star+\sum_{j=1}^{k-1}s'_j\mbf{a}_{\star j}\big)\\
&=\ \tilde{m}\tilde{m}'\mbf{1}_\star+\sum_{j=1}^{k-1}(\tilde{m}s'_j+\tilde{m}'s_j)\mbf{a}_{\star j}
+\sum_{j=1}^{k-1}s_js'_j\mbf{a}_{\star j}\circ\mbf{a}_{\star j}\\ &+
\sum_{1\leq i< j\leq k-1}(s_is'_j+s_i's_j)\mbf{a}_{\star i}\circ\mbf{a}_{\star j},
\end{align*}
since $\mbf{1}_\star\circ\mbf{v}_\star=\mbf{v}_\star$ for any $\mbf{v}$. There are $\binom{k}{2}$ product vectors, which we must eliminate using the re-encryption matrix, $R$.
The re-encryption matrix, $R$, is $k\times\binom{k+1}{2}$. We require that $R\mbf{v}_\star=\mbf{v}$, for all $\mbf{v}$.
\begin{restatable}{lemma}{lemtwenty}
\label{lem:20}
Let $A_{\star k}=[\mbf{a}_{\star 0}\ \mbf{a}_{\star 1}\ \ldots\ \mbf{a}_{\star ,k-1}]$, where the columns of $A_k$ form a basis for $\mathbb{Z}^k_{pq}$. If $RA_{\star k}=A_k$, then $R\mbf{v}_\star=\mbf{v}$ for all $\mbf{v}\in\mathbb{Z}^k_{pq}$.
\end{restatable}
The condition $RA_{\star k}=A_k$ can be written more simply, since it is $RU_kA_k=A_k$. Postmultiplying by $A_k^{-1}$ gives
$RU_k=I_k$.
Since $RA_{\star k}=A_k$, we have
\begin{align*}
R(\mbf{c}_\star\circ\mbf{c}'_\star)\
&=\ (mm'+\hat{r}p)\mbf{1}+\sum_{j=1}^{k-1}\hat{s}_j\mbf{a}_{j}\\ &\hspace*{1cm}+
\sum_{1\leq i\leq j\leq k-1}\hat{s}_{ij}R(\mbf{a}_{\star i}\circ\mbf{a}_{\star j}),
\end{align*}
where $\hat{r}$, $\hat{s}_j$ and $\hat{s}_{ij}$ ($1\leq i\leq j\leq k-1$) are some integers.
There are $k(\binom{k+1}{2}-k)=k\binom{k}{2}$ undetermined parameters $R_{i\ell}$, $1\leq i\leq k$, $k < \ell \leq \binom{k+1}{2}$. We now determine these by setting
\begin{equation}\label{equ:10}
R(\mbf{a}_{\star i}\circ\mbf{a}_{\star j})\ =\ \varrho_{ij}p\hspace*{1pt}\mbf{1}+\sum_{l=1}^{k-1}\sigma_{ijl}\hspace*{1pt}\mbf{a}_l
\end{equation}
Thus we have $k\binom{k}{2}$ new unknowns, the $\varrho$'s and $\sigma$'s, and $k\binom{k}{2}$ linear equations for the $k\binom{k}{2}$ unassigned $R_{i\ell}$'s.
Let $A^{\circ2}_{\star k}$ be the $\binom{k+1}{2}\times\binom{k+1}{2}$ matrix with columns $\mbf{a}_{\star i}\circ\mbf{a}_{\star j}$ ($0\leq i\leq j < k$), and let $C_k$ be the $k\times\binom{k}{2}$ matrix with columns $\varrho_{ij}p\hspace*{1pt}\mbf{1}+\sum_{l=1}^{k-1}\sigma_{ijl}\hspace*{1pt}\mbf{a}_l$ ($0<i\leq j<k$). Then the equations for the $R_{i\ell}$ can be written as
\begin{equation}\label{equ:20}
RA^{\circ2}_{\star k}\ =\ \left[A_k \mid C_k\right],
\end{equation}
\end{equation}
giving $k\binom{k+1}{2}$ linear equations for the $k\binom{k+1}{2}$ $R_{i\ell}$'s in terms of quadratic functions of the $k(k-1)$ $a_{ij}$'s ($1\leq i\leq k, 1\leq j\leq k-1$), which are undetermined. Thus the system has $k(k-1)$ parameters that cannot be deduced from $R$.
The system of equations~\eqref{equ:20} has a solution provided that $A^{\circ2}_{\star k}$ has an inverse $\bmod\ pq$. We prove that this is true with high probability. Again, in the unlikely event that this is not true, we generate new vectors $\mbf{a}_1,\ldots,\mbf{a}_{k-1}$ until it is.
\begin{restatable}{theorem}{thmten}
\label{thm:10}
$A^{\circ2}_{\star k}\mbox{ has no inverse\,} \bmod{pq}$ with probability at most $(k^2-1)(1/p+1/q)$.
\end{restatable}
Note that Theorem~\ref{thm:10} subsumes Lemma~\ref{lem:10}, since the first $k$ columns of $A^{\circ2}_{\star k}$ contain $A_k$ as a submatrix, and must be linearly independent.
Each $\mbf{c}$ introduces $k$ new parameters
$rp,s_1,\ldots,s_{k-1}$ and $k$ equations, so the number of undetermined parameters is always $k(k-1)$.
\subsubsection*{Cryptanalysis}\label{HEk cryptanalysis}
Note that $p$ can still be determined if we know $m_i$ for $k$ ciphertexts. Then let
\[ C=[\mbf{c}_1-m_1\mbf{1}\ \ldots\ \mbf{c}_k-m_k\mbf{1}],\quad A_k=[\mbf{1}\ \mbf{a}_1\ \ldots\ \mbf{a}_{k-1}]\]
and let
\[ W=\left[\begin{array}{c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c}
r_{1}p & r_{2}p & \ldots & r_{k}p \\
s_{1,1} & s_{2,1} & \ldots & s_{k,1} \\
\vdots & & & \vdots \\
s_{1,k-1} & s_{2,k-1} & \ldots & s_{k,k-1}
\end{array}\right],\]
\[W'=\left[\begin{array}{c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c}
r_{1} & r_{2} & \ldots & r_{k} \\
s_{1,1} & s_{2,1} & \ldots & s_{k,1} \\
\vdots & & & \vdots \\
s_{1,k-1} & s_{2,k-1} & \ldots & s_{k,k-1}
\end{array}\right],\]
where $r_i,s_{ij}$ refer to $\mbf{c}_i$. Then $C=A_kW$, and so $\det C=\det A_k\det W$. Note that $\det W=p\det W'$, so $\det C$ is a multiple of $p$. Now $\det C$ can be determined in $O(k^3)$ time and, if it is nonzero, $p$ can be recovered as $\gcd(\det C,pq)$.
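As a sanity check, this attack can be simulated in a few lines of Python. The sketch below (toy parameters, $k=3$; helper names such as \texttt{encrypt} and \texttt{det3} are ours, not part of the scheme) builds $k$ HE$k$-style ciphertexts with known plaintexts and recovers $p$ as $\gcd(\det C,pq)$:

```python
import random
from math import gcd

random.seed(7)

# Toy parameters -- far too small for real security, chosen only to
# illustrate the algebra. p is secret, pq is the public modulus.
p, q, k = 104729, 104723, 3
pq = p * q

# Secret basis vectors a_1, ..., a_{k-1} (the all-ones vector is a_0).
a = [[random.randrange(1, pq) for _ in range(k)] for _ in range(k - 1)]

def encrypt(m):
    """HEk-style encryption: c = (m + r*p)*1 + sum_j s_j * a_j (mod pq)."""
    r = random.randrange(q)
    s = [random.randrange(pq) for _ in range(k - 1)]
    return [(m + r * p + sum(s[j] * a[j][i] for j in range(k - 1))) % pq
            for i in range(k)]

def det3(M):
    """Determinant of a 3x3 integer matrix, by the cofactor formula."""
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) = M
    return a1 * (b2 * c3 - b3 * c2) - b1 * (a2 * c3 - a3 * c2) \
        + c1 * (a2 * b3 - a3 * b2)

# The attacker knows k plaintext/ciphertext pairs.
msgs = [random.randrange(1000) for _ in range(k)]
cts = [encrypt(m) for m in msgs]

# Column i of C is c_i - m_i * 1; det C is then a multiple of p, so
# gcd(det C, pq) reveals p with high probability.
C = [[(cts[i][row] - msgs[i]) % pq for i in range(k)] for row in range(k)]
recovered = gcd(abs(det3(C)), pq)
print(recovered == p)   # True unless an unlikely bad event occurs
```

With cryptographic parameter sizes the same computation applies unchanged; only the determinant arithmetic grows.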
\begin{restatable}{lemma}{lemthirty}
\label{lem:30}
$\Pr(\det C = 0 \bmod\, pq)\leq (2k-1)(1/p+1/q)$.
\end{restatable}
Once we have recovered $p$, we can use the known $m_i$ to determine the decryption vector $\mbf{\gamma}$, by solving linear equations. Let \[C_0 = [\mbf{c}_1\ \mbf{c}_2\ \ldots\ \mbf{c}_k],\quad \mbf{m}^T = [m_1\ m_2\ \ldots\ m_k].\vspace{-2ex}\]
\begin{restatable}{lemma}{lemthirtyfive}
\label{lem:35}
$\Pr(\det C_0 = 0 \bmod\, pq)\leq (2k-1)(1/p+1/q)$.
\end{restatable}
Thus, with high probability, we can solve the system \[ \mbf{\gamma}^TC_0= \mbf{m}^T\quad\mod{p}\] uniquely, to recover $\mbf{\gamma}$ and enable decryption of an arbitrary ciphertext. However, encryption of messages is not possible, since we gain little information about $\mbf{a}_1,\ldots,\mbf{a}_k$. Note also that, if we determined $p$ by some means other than using $k$ known plaintexts, it is not clear how to recover $\mbf{\gamma}$.
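Continuing the toy simulation, once $p$ is known, $\mbf{\gamma}$ can be recovered by solving $\mbf{\gamma}^TC_0=\mbf{m}^T \bmod p$ with Gaussian elimination over $\mathrm{GF}(p)$. The sketch below (illustrative names and parameters; Python~3.8+ for \texttt{pow(x, -1, p)}) then decrypts a fresh ciphertext:

```python
import random

random.seed(11)

# Toy parameters (illustration only). Assume p has already been
# recovered, e.g. by the gcd-of-determinant attack.
p, q, k = 104729, 104723, 3
pq = p * q

a = [[random.randrange(1, pq) for _ in range(k)] for _ in range(k - 1)]

def encrypt(m):
    r = random.randrange(q)
    s = [random.randrange(pq) for _ in range(k - 1)]
    return [(m + r * p + sum(s[j] * a[j][i] for j in range(k - 1))) % pq
            for i in range(k)]

def solve_mod_p(A, b, p):
    """Solve A x = b over GF(p) by Gauss-Jordan elimination."""
    n = len(b)
    M = [[A[r][c] % p for c in range(n)] + [b[r] % p] for r in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])  # A invertible
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)                      # Python >= 3.8
        M[col] = [x * inv % p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][c] - f * M[col][c]) % p for c in range(n + 1)]
    return [M[r][n] for r in range(n)]

# k known plaintext/ciphertext pairs give gamma^T C_0 = m^T (mod p),
# i.e. C_0^T gamma = m.
msgs = [random.randrange(1000) for _ in range(k)]
cts = [encrypt(m) for m in msgs]
C0T = [[cts[i][row] for row in range(k)] for i in range(k)]  # rows c_i^T
gamma = solve_mod_p(C0T, msgs, p)

# gamma now decrypts arbitrary ciphertexts (plaintexts must be < p).
fresh = 42424
dec = sum(g * c for g, c in zip(gamma, encrypt(fresh))) % p
print(dec == fresh)
```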
To break this system, we need to guess $k$ plaintexts. The entropy of a $k$-tuple of plaintexts $(m_1,m_2,\ldots,m_k)$ is $k\rho$, so effectively we need $\mu^k$ guesses, where $\mu$ is the number of guesses needed to break HE1. So HE$k$ can tolerate much smaller entropy than HE1, provided $k$ is large enough. If $k$ is sufficiently large, the scheme appears secure without adding noise, though it does not have the other advantages of adding noise. We discuss this further in section~\ref{sec:fhe}.
\subsubsection*{Fixing an insecurity for $k>2$}
The decryption vector for HE$k$ is $\mbf{\gamma}^T=(A_k^{-1})_1$. Note that $\mbf{\gamma}^T\mbf{1}=1$ and $\mbf{\gamma}^T\mbf{a}_i=0$ ($i\in[1,k-1]$), since $\mbf{\gamma}^T\mbf{a}_i=I_{1i}$ ($i\in[0,k-1]$).
The equations
\begin{equation}\label{equ:100}
R(\mbf{a}_{\star i}\circ\mbf{a}_{\star j})\ =\ p\varrho_{ij}\hspace*{1pt}\mbf{1}+\sum_{l=1}^{k-1}\sigma_{ijl}\hspace*{1pt}\mbf{a}_l,
\end{equation}
define a product $\cdot$ on $\mathbb{Z}^k_{pq}$ so that
$\mbf{c}\cdot\mbf{c}'=R(\mbf{c}_{\star}\circ\mbf{c}'_{\star})$. This product is linear, commutative and distributive, since $R$ and $\star$ are linear operators, and $\circ$ is commutative and distributive. So we have an algebra $\mcl{A}_k$, with unit element $\mbf{1}$ \cite{schafer1966introduction}. The $\varrho_{ij},\sigma_{ijl}$ ($i,j,l\in[1,k-1])$ are the \emph{structure constants} of the algebra. In general, $\mcl{A}_k$ will not be associative, i.e. we can have
\begin{align*}
R(R(\mbf{c}_{1\star}\circ\mbf{c}_{2\star})_{\star}\circ\mbf{c}_{3\star})&=(\mbf{c}_1\cdot\mbf{c}_2)\cdot\mbf{c}_3\\
\neq\mbf{c}_1\cdot(\mbf{c}_2\cdot\mbf{c}_3)&=R(\mbf{c}_{1\star}\circ R(\mbf{c}_{2\star}\circ\mbf{c}_{3\star})_{\star}).
\end{align*}
This leads to the following potential insecurity. We must have
\begin{equation}\label{eq:110}
\mbf{\gamma}^T((\mbf{c}_1\cdot\mbf{c}_2)\cdot\mbf{c}_3)\ =\ \mbf{\gamma}^T(\mbf{c}_1\cdot(\mbf{c}_2\cdot\mbf{c}_3))\quad\pmod p,
\end{equation}
in order to have correct decryption. The \emph{associator} for $\mcl{A}_k$~is
\begin{align*}
[\mbf{c}_i,\mbf{c}_j,\mbf{c}_l]\ &= \mbf{c}_i\cdot(\mbf{c}_j\cdot\mbf{c}_l)-(\mbf{c}_i\cdot\mbf{c}_j)\cdot\mbf{c}_l\\ &=rp\mbf{1}+\sum_{t=1}^{k-1}s_{t}\hspace*{1pt}\mbf{a}_t\ \, \pmod {pq}.
\end{align*}
Thus $[\mbf{c}_i,\mbf{c}_j,\mbf{c}_l]$ is an encryption of $0$. If we can find $k$ such associators from $\mbf{c}_1,\ldots,\mbf{c}_n$ which violate~\eqref{eq:110}, then with high probability we will have $k$ linearly independent associators. We can use these to make a collision attack on HE$k$, in a similar way to that described in section~\ref{sec:he1}. We use the $\gcd$ method to determine $p$, and then $\mbf\gamma$, as described in section~\ref{HEk cryptanalysis}. In fact, all we need is that~\eqref{eq:110} holds for every associator. That is, for all $\mbf{c}_1, \mbf{c}_2, \mbf{c}_3$, we need
\begin{equation*}
\mbf{\gamma}^T((\mbf{c}_1\cdot\mbf{c}_2)\cdot\mbf{c}_3)\ =\ \mbf{\gamma}^T(\mbf{c}_1\cdot(\mbf{c}_2\cdot\mbf{c}_3))\quad\pmod {pq},
\end{equation*}
or, equivalently, using the CRT,
\begin{equation}\label{eq:120}
\mbf{\gamma}^T((\mbf{c}_1\cdot\mbf{c}_2)\cdot\mbf{c}_3)\ =\ \mbf{\gamma}^T(\mbf{c}_1\cdot(\mbf{c}_2\cdot\mbf{c}_3))\quad\pmod {q}.
\end{equation}
By linearity, it follows that~\eqref{eq:120} holds if and only if it holds for all basis elements, excluding the identity. That is, for all $i,j,l\in[1,k-1]$, we need
\begin{equation}\label{eq:140}
\mbf{\gamma}^T(\mbf{a}_i\cdot(\mbf{a}_j\cdot\mbf{a}_l))\ =\ \mbf{\gamma}^T((\mbf{a}_i\cdot\mbf{a}_j)\cdot\mbf{a}_l)\quad\pmod {q}.
\end{equation}
The associator for $\mcl{A}_k$ is
\begin{align*}
[\mbf{a}_i,\mbf{a}_j,\mbf{a}_l]\ &= \mbf{a}_i\cdot(\mbf{a}_j\cdot\mbf{a}_l)-(\mbf{a}_i\cdot\mbf{a}_j)\cdot\mbf{a}_l\\ &=rp\mbf{1}+\sum_{t=1}^{k-1}s_{t}\hspace*{1pt}\mbf{a}_t\ \, \pmod {pq},
\end{align*}
for some integers $r,s_1,\ldots,s_{k-1}$, and so $\mbf{\gamma}^T[\mbf{a}_i,\mbf{a}_j,\mbf{a}_l]=rp$.
If $\mcl{A}_k$ is associative, the problem does not arise, since \eqref{eq:140} will be satisfied automatically. Associativity holds for $k\leq2$, since all we have to check is that $\mbf{a}\cdot(\mbf{a}\cdot\mbf{a})=(\mbf{a}\cdot\mbf{a})\cdot\mbf{a}$, which is true by commutativity. Thus HE$k$ with $k\leq2$ cannot be attacked in this way.
Requiring associativity in $\mcl{A}_k$ would overconstrain the system, since it imposes $k\binom{k+1}{2}$ equations on the $k\binom{k+1}{2}$ structure constants. We have only $k(k-1)$ undetermined parameters, so this is too much. But all we need is that \eqref{eq:140} holds. We have the following.
\begin{restatable}{lemma}{lemoneforty}
\label{lem:140}
\eqref{eq:140} holds if and only if
\[ \ts\sum_{t=1}^{k-1}\sigma_{jlt}\varrho_{it}=\ts\sum_{t=1}^{k-1}\sigma_{ijt}\varrho_{lt}\pmod{q},\ \forall i,j,l\in[1,k-1].\]
\end{restatable}
There are several ways to ensure that~\eqref{eq:140} holds. We will do so by giving the $\varrho_{ij}$ a multiplicative structure.
\begin{restatable}{lemma}{lemonefifty}
\label{lem:150}
Let $\tau,\varrho_i\xleftarrow{\$}[0,q)$ ($i\in[1,k-1]$), let $\varrho_{ij}=\varrho_i\varrho_j \mod{q}$, and let the $\sigma_{ijl}$ satisfy $\sum_{l=1}^{k-1}\sigma_{ijl}\hspace*{1pt}\varrho_l=\tau\varrho_i\varrho_j\pmod{q}$ for all $i,j\in[1,k-1]$. Then, for all $i,j,l\in[1,k-1]$, $\mbf{\gamma}^T(\mbf{a}_i\cdot(\mbf{a}_j\cdot\mbf{a}_l))=\tau\varrho_i\varrho_j\varrho_l\mod{q}$, the symmetry of which implies that~\eqref{eq:140} holds.
\end{restatable}
Thus the conditions of Lemma~\ref{lem:150} are sufficient to remove the insecurity. The price is that we now have $(k-1)\binom{k}{2}+(k-1)+k(k-1)=(k+1)\binom{k}{2}+k-1$ parameters and $k\binom{k}{2}$ equations. There are $\binom{k}{2}+(k-1)=(k+2)(k-1)/2$ independent parameters. This is fewer than the original $k(k-1)$, but remains $\Omega(k^2)$.
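The sufficiency argument behind Lemma~\ref{lem:150} can be checked numerically: with $\varrho_{ij}=\varrho_i\varrho_j$ and $\sum_l\sigma_{ijl}\varrho_l=\tau\varrho_i\varrho_j$, both sides of the condition in Lemma~\ref{lem:140} reduce to $\tau\varrho_i\varrho_j\varrho_l$. A toy Python check (0-based indices, our own variable names):

```python
import random

random.seed(3)

# Toy check of the multiplicative structure-constant construction:
# rho_ij = rho_i*rho_j and sum_l sigma_ijl*rho_l = tau*rho_i*rho_j (mod q)
# together imply the symmetry condition of Lemma 140.
q, k = 101, 5
rho = [random.randrange(1, q) for _ in range(k - 1)]
tau = random.randrange(1, q)

# One easy way to satisfy the constraint: put all the weight on l = 0.
inv0 = pow(rho[0], -1, q)
sigma = [[[0] * (k - 1) for _ in range(k - 1)] for _ in range(k - 1)]
for i in range(k - 1):
    for j in range(k - 1):
        sigma[i][j][0] = tau * rho[i] * rho[j] * inv0 % q

# Check: sum_t sigma_jlt*rho_i*rho_t == sum_t sigma_ijt*rho_l*rho_t (mod q)
ok = all(
    sum(sigma[j][l][t] * rho[i] * rho[t] for t in range(k - 1)) % q
    == sum(sigma[i][j][t] * rho[l] * rho[t] for t in range(k - 1)) % q
    for i in range(k - 1) for j in range(k - 1) for l in range(k - 1))
print(ok)
```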
\subsection{Insufficient entropy (\textbf{\large HE$k$N})}
\label{sec:hekn}
In this section, we generalise HE2N to $k$ dimensions. $\textmyfont{KeyGen}$ randomly chooses $\kappa$, $p$ and $q$ according to the bounds given in section \ref{sec:he2n} and, for all $j$, sets $\mbf{a}_j$ as in section \ref{sec:hek}. The secret key, \textmyfont{sk}, is ($\kappa$, $p$, $\mbf{a}_1$, $\ldots$, $\mbf{a}_{k-1}$), and the public parameters are $pq$ and $R$, where $R$ is as given in section \ref{sec:hek}. Note that, as a result of adding the ``noise'' term, the defence against non-associativity is not required.
A plaintext, $m \in [0,M]$, is enciphered as
\[\textmyfont{Enc}(m,\textrm{\textmyfont{sk}})=\mbf{c}= (m+rp+s\kappa)\mbf{1}+\sum_{j=1}^{k-1}t_j\mbf{a}_j \pmod{pq} \] where $r,s$ are as in section \ref{sec:he2n}, and $t_j \xleftarrow{\$} [0,pq)$ $\forall j\in[1,k)$.
A ciphertext is deciphered by
\[\textmyfont{Dec}(\mbf{c},\textrm{\textmyfont{sk}})= (\mbf{\gamma}^T\hspace*{1pt}\mbf{c}\mod p) \mod \kappa,\]
where $\mbf{\gamma}^T$ is defined as in section \ref{sec:hek}.
Addition and multiplication of ciphertexts are as in section~\ref{sec:hek}.
The effective entropy of HE$k$N is $\rho'=k(\rho + \lg \kappa)$. Thus, as we increase $k$, the ``noise'' term can be made smaller while still providing the requisite level of entropy.
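A minimal round trip for HE$k$N can be sketched as follows, using $k=2$ so that $\mbf{\gamma}^T=(a_2-a_1)^{-1}[a_2\ -a_1]$ is explicit (toy parameter sizes, chosen only so that the noise stays below $p$; all names are ours):

```python
import random

random.seed(5)

# HEkN round trip for k = 2 (i.e. HE2N), illustrative parameters only.
# Real parameters must satisfy the bounds of the scheme; here we only
# need m + s*kappa (and sums of such terms) to stay below p.
kappa = 1 << 10
p, q = 2147483647, 2147483629          # two primes near 2^31
pq = p * q

# Secret vector a with distinct components mod p and mod q (KeyGen).
while True:
    a = [random.randrange(1, pq) for _ in range(2)]
    if (a[0] - a[1]) % p and (a[0] - a[1]) % q:
        break

# gamma^T = (a_2 - a_1)^{-1} [a_2  -a_1]  (mod p), as in HE2.
inv = pow((a[1] - a[0]) % p, -1, p)
gamma = [a[1] * inv % p, -a[0] * inv % p]

def encrypt(m):
    r, s, t = (random.randrange(q), random.randrange(kappa),
               random.randrange(pq))
    return [(m + r * p + s * kappa + t * ai) % pq for ai in a]

def decrypt(c):
    return (gamma[0] * c[0] + gamma[1] * c[1]) % p % kappa

m1, m2 = 123, 456
c1, c2 = encrypt(m1), encrypt(m2)
csum = [(x + y) % pq for x, y in zip(c1, c2)]   # homomorphic addition

print(decrypt(c1), decrypt(csum))   # 123 579
```

Homomorphic multiplication is omitted, since it requires the tensor $R$.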
Clearly HE$k$N also inherits the conclusions of Theorem~\ref{thm:2}.
\section{An extension of HE2N using the CRT (HE2NCRT)}
\label{sec:he2ncrt}
As an interesting aside, we extend HE2N (section \ref{sec:he2n}) using a technique inspired by CRT secret sharing, so that we compute the final result modulo a product of primes $\prod_{j=1}^K p_j$ rather than modulo $p$, where $K$ is the number of primes.
In this scheme, we distribute the computation. We have $K$ processors. Each processor computes arithmetic on ciphertexts modulo $p_jq_j$, where $p_j,q_j$ are suitable primes. Also, each processor only receives the $j$th ciphertext vector of an integer. Addition and multiplication of ciphertexts is as defined in section \ref{sec:he2}, except that it is performed modulo $p_jq_j$.
This serves two purposes. The first is to be able to handle larger moduli by dividing the computation into subcomputations on smaller moduli. The second is to mitigate against exposure of the secret key $p$ in the system presented in section \ref{sec:he2n}, by not distributing the modulus $pq$ to each processor. Instead, we distribute $p_jq_j$ to the $j$th processor, for $j \in [1,K]$. This allows us to partition the computation into subcomputations, encrypted using different parameters. Thus, should an attacker compromise one subcomputation, they may gain no knowledge of other subcomputations.
\subsubsection*{Key Generation}
$\textmyfont{KeyGen}$ randomly chooses $\kappa$ as in section \ref{sec:he1n}. For all $j \in [1,K]$, it randomly chooses a prime $p_j$ satisfying $2^{\lambda-1}<p_j<2^\lambda$ and \[\Pi = \prod\limits_{j=1}^K p_j > (n+1)^d (M+\kappa^2)^d.\]
It also randomly chooses $q_j$, $j \in [1,K]$, as for $q$ in section \ref{sec:he1}. Finally, it sets $\mbf{a}_j = [a_{j1}\ a_{j2}]^T$, where $a_{ji}\xleftarrow{\$} [1,p_jq_j)$ $(j \in[1,K], i\in [1,2])$, such that $a_{j1}\neq a_{j2} \bmod p_j$ and $a_{j1}\neq a_{j2} \bmod q_j$.
The secret key, \textmyfont{sk}, is $(\kappa,p_1,\ldots,p_K,\mbf{a}_1,\ldots,\mbf{a}_K)$, and the public parameters are $p_jq_j$ $(j\in[1,K])$ and $R_j$ $(j\in[1,K])$, where each $R_j$ is defined as $R$ in section \ref{sec:he2}.
\subsubsection*{Encryption}
We encrypt an integer, $m_i$ ($i\in[1,n]$), as the set of $K$ 2-vectors, $\mbf{c}_{ij}$,
\begin{align*}
\mbf{c}_{ij} = (m_i + r_{ij} p_j + s_i\kappa)\mbf{1} + t_{ij} \mbf{a}_j \bmod p_j q_j \ (j \in [1,K]),
\end{align*}
where $r_{ij}\xleftarrow{\$} [0,q_j)$, $s_i\xleftarrow{\$}[0,\kappa)$, and $t_{ij} \xleftarrow{\$}$ $[0,p_jq_j)$ $(i\in[1,n], j \in [1,K])$.
\subsubsection*{Decryption}
To decrypt, we first decrypt the $j$th ciphertext of the computational result $\mbf{c}_j$ as in section~\ref{sec:he2}, to give
\begin{align*}
P_j= \mbf{\gamma}_j^T \mbf{c}_j \mod {p_j},
\end{align*}
where $P_j$ is the residue of $P(m_1,m_2,\ldots,m_n,\kappa) \mod p_j$ and $\mbf{\gamma}_j^T=(a_{j2}-a_{j1})^{-1}[a_{j2}\ -a_{j1}]$.
We then use the Chinese Remainder Theorem to compute the plaintext as \[P(m_1,m_2,\dots,m_n)=\bigg(\sum\limits_{j=1}^K P_jM_j \mu_j \mod \Pi\bigg)\mod \kappa,\] where $M_j = \Pi/p_j$ and $\mu_j=M_j^{-1} \bmod{p_j}$.
\subsubsection*{Addition and Multiplication}
Addition of ciphertexts is performed as in section \ref{sec:he2}. Multiplication of ciphertexts on processor $j$ is now \[ \textmyfont{Mult}(\mbf{c}_j,\mbf{c}'_j)=R_j (\mbf{c}_{j \star} \circ \mbf{c}'_{j \star}).\]
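The pipeline can be sketched end to end for $K=2$ and homomorphic addition (toy parameter sizes; the KeyGen condition modulo $q_j$ is omitted, since only the inverse modulo $p_j$ is needed for decryption):

```python
import random

random.seed(9)

# HE2NCRT sketch: K = 2 "processors", homomorphic addition only.
K, kappa = 2, 1 << 10
ps = [2147483647, 2147483629]     # secret primes p_j (near 2^31)
qs = [2147483587, 2147483563]     # q_j; only p_j*q_j is public
Pi = ps[0] * ps[1]

keys = []                          # per-processor (a, gamma, p_j, p_j*q_j)
for p, q in zip(ps, qs):
    n = p * q
    while True:
        a = [random.randrange(1, n) for _ in range(2)]
        if (a[1] - a[0]) % p:
            break
    inv = pow((a[1] - a[0]) % p, -1, p)
    gamma = [a[1] * inv % p, -a[0] * inv % p]
    keys.append((a, gamma, p, n))

def encrypt(m):
    """One 2-vector ciphertext per processor; the noise s is shared."""
    s = random.randrange(kappa)
    out = []
    for a, _, p, n in keys:
        r, t = random.randrange(n // p), random.randrange(n)
        out.append([(m + r * p + s * kappa + t * ai) % n for ai in a])
    return out

msgs = [100, 200, 300]
cts = [encrypt(m) for m in msgs]

# Each processor sums its own ciphertext shares.
sums = [[sum(ct[j][i] for ct in cts) % keys[j][3] for i in range(2)]
        for j in range(K)]

# Decryption: per-processor residues P_j, then CRT, then strip the noise.
res = 0
for j, (a, gamma, p, n) in enumerate(keys):
    Pj = (gamma[0] * sums[j][0] + gamma[1] * sums[j][1]) % p
    Mj = Pi // p
    res += Pj * Mj * pow(Mj, -1, p)
print(res % Pi % kappa)   # 600
```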
Clearly HE$k$N could be extended to HE$k$NCRT in a similar way, but we will not discuss the details here.
\section{Fully Homomorphic System}
\label{sec:fhe}
We return to HE$k$, presented above in section \ref{sec:hek}. We will show that, for large enough $k$, this can be used to create an FHE system.
We may use HE$k$ to evaluate an arithmetic circuit homomorphically, where $\mathsf{R}=\mathbb{Z}_{pq}$. However, this system is only somewhat homomorphic. If the computational result grows larger than $p$, we are unable to successfully decrypt it. This restricts us to arithmetic circuits of bounded depth to avoid the blow up. To make it fully homomorphic, we consider Boolean circuits.
Typically, we would use the Boolean functions AND, OR and NOT in the Boolean circuit. However, we may use fewer functions: any Boolean circuit may be represented using only NAND gates \cite{scharle1965}. Recall that the indegree of any gate in the circuit is always $2$, but the outdegree is arbitrary. The inputs to each gate are bits, as are the outputs. We will denote the set of inputs to the circuit by $I\subseteq V$, and the set of outputs by $O\subseteq V$. The inputs have indegree 0, and the outputs have outdegree 0, but we regard the inputs as having indegree 1, and the outputs as having outdegree 1, with wires from and to the external environment $\Lambda$.
Note that constant input bits can easily be eliminated from the circuit, so we assume there are none, to avoid an obvious KPA. Even so, if we represent the bit values $0,1$ by encrypting known constants $\alpha_0,\,\alpha_1$, the HE$k$ system is open to a simple KPA. For any ciphertext $\mbf{c}$, we can compute $\mbf{c}'=(\mbf{c}-\alpha_0\mbf{1})\cdot(\mbf{c}-\alpha_1\mbf{1})$.
Then $\mbf{c}'$ is an encryption of~$0$. By repeating this on $k$ ciphertexts, we can obtain $k$ linearly independent encryptions of zero with high probability. Once we have done this, we can determine $p$ and $\mbf\gamma$ as in section~\ref{HEk cryptanalysis}. The problem, of course, is that we have not increased the entropy of the input data.
Therefore, we must add noise to the ciphertexts, but we will do this so as to ensure that the noise does not grow with the depth of the circuit. On each wire $e\in E$, we will represent the bit value $b_e\in\{0, 1\}$ by $w_e\in\{\omega_{0e},\omega_{1e}\}$, where $\omega_{0e}=2s_{0,e}$ and $\omega_{1e}=1+2s_{1,e}$, with $s_{0,e},s_{1,e}\xleftarrow{\$}[0,\kappa)$. Thus $b_e= w_e \bmod\ 2$, and the noise has entropy $\lg\kappa$. The value of $\kappa$ is chosen as large as possible such that we can correctly evaluate any polynomial of degree 2 in two variables. For each input $i\in I$, we represent the input bit value $b_i$ similarly, by $x_i\in\{\omega_{0i},\omega_{1i}\}$. The inputs and the wires in the circuit are encrypted using HE$k$, the inputs directly and the other wires indirectly as described below. As discussed in section \ref{sec:hek}, we need $k$ known plaintexts to break HE$k$. The plaintexts are the encrypted bits $w_e \bmod\ 2$, so a brute force attack requires guessing at least $2^k$ bits. So, by setting $k\gg \log \lambda$, a brute force attack on the system requires time superpolynomial in the security parameter $\lambda$.
An input $i\in I$ has a wire $(\Lambda,i)$ on which the (encrypted) input value $x_i$ is stored. For any wire $e=(i,v)$ from input $i$, we have a linear function $L(x)=a+bx$, which converts the plaintext input value $x\in\{\alpha_0,\alpha_1\}$ to the wire value $w\in\{\gamma_0,\gamma_1\}$. (We suppress the wire labels $e$ when they are clear from the context.) This requires
\[ a=(\alpha_1-\alpha_0)^{-1}(\alpha_1\gamma_0-\alpha_0\gamma_1),\quad b=(\alpha_1-\alpha_0)^{-1}(\gamma_1-\gamma_0).\]
The encrypted coefficients of this function are stored as data for the wire $e$.
Note that all computations are $\bmod\ pq$, and the required inverses exist because the quantities being inverted are nonzero and smaller in magnitude than both $p$ and $q$, so they are coprime to $pq$.
For each output wire $e=(v,v')$ of a NAND gate $v$, we have a quadratic function $Q(x,y)=a+bx+cy+dxy$, which converts the values on the input wires of the gate, $x\in\{\alpha_0,\alpha_1\}$, $y\in\{\beta_0,\beta_1\}$, to the wire value $w\in\{\gamma_0,\gamma_1\}$. This requires
\begin{align*}
a=\gamma_1+\alpha_0\beta_0\vartheta,\ \ b=-\beta_0\vartheta,\ \ c = -\alpha_0\vartheta,\ \ d = \vartheta,
\end{align*}
where $\vartheta=\big((\alpha_1-\alpha_0)(\beta_1-\beta_0)\big)^{-1}(\gamma_0-\gamma_1)$, so that $Q(x,y)=\gamma_1+\vartheta(x-\alpha_0)(y-\beta_0)$ equals $\gamma_0$ when $(x,y)=(\alpha_1,\beta_1)$ and $\gamma_1$ otherwise, as NAND requires. Again, the encrypted coefficients of this function are stored as data for the wire $e$.
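The interpolation can be checked numerically. The sketch below builds a quadratic $Q(x,y)=\gamma_1+\vartheta\hspace*{1pt}(x-\alpha_0)(y-\beta_0)$, which equals $\gamma_0$ exactly when both inputs carry bit 1, and verifies the NAND truth table on random noisy encodings (toy parameters, our own variable names):

```python
import random

random.seed(2)

# Interpolating a NAND gate by a quadratic Q(x, y) = a + bx + cy + dxy
# over Z_pq, with noisy wire encodings: even values encode 0, odd 1.
p, q, kappa = 2147483647, 2147483629, 1 << 8
pq = p * q

def wire_pair():
    """Random encodings (omega_0, omega_1) = (2*s0, 1 + 2*s1)."""
    return 2 * random.randrange(kappa), 1 + 2 * random.randrange(kappa)

(a0, a1), (b0, b1), (g0, g1) = wire_pair(), wire_pair(), wire_pair()

# Q(x, y) = g1 + theta*(x - a0)*(y - b0): equals g0 at (a1, b1) and g1
# on the other three input pairs -- exactly the NAND truth table.
theta = pow((a1 - a0) * (b1 - b0), -1, pq) * (g0 - g1) % pq
A = (g1 + a0 * b0 * theta) % pq
B = -b0 * theta % pq
C = -a0 * theta % pq
D = theta

def Q(x, y):
    return (A + B * x + C * y + D * x * y) % pq

for xb, xv in ((0, a0), (1, a1)):
    for yb, yv in ((0, b0), (1, b1)):
        want = g0 if (xb and yb) else g1   # NAND(xb, yb) -> encoding
        assert Q(xv, yv) == want % pq
print("NAND truth table verified")
```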
For each output gate $v\in O$, we decrypt the value $w\in \{\gamma_0,\gamma_1\}$ computed by its (unique) output wire $(v,\Lambda)$. Then the output bit is $w \bmod\, 2$.
Thus we replace the logical operations of the Boolean circuit by evaluation of low degree polynomials. For simplicity, we have chosen to use only NAND gates, but we can represent any binary Boolean function by a quadratic polynomial in the way described above. Since the quadratic polynomials are encrypted in our system, they conceal the binary Boolean function they represent. Thus the circuit can be ``garbled''~\cite{bellare2012yao,goldreich1987play}, to minimise inference about the inputs and outputs of the circuit from its structure.
However, there is a price to be paid for controlling the noise. The encrypted circuit is not securely reusable with the same values $\omega_{0e},\omega_{1e}$ for $w_e$. Suppose we can observe the encrypted value on wire $e$ three times, giving ciphertexts $\mbf{c}_1,\mbf{c}_2,\mbf{c}_3$. Two of these are encryptions of the same value $2s_{0,e}$ or $1+2s_{1,e}$. Thus $(\mbf{c}_1-\mbf{c}_2)\cdot(\mbf{c}_1-\mbf{c}_3)\cdot(\mbf{c}_2-\mbf{c}_3)$ is an encryption of $0$.
By doing this for $k$ wires, we can break the system. This is essentially the collision attack described in section~\ref{sec:inithom}.
Some reuse of the encrypted circuit is possible by using multiple values on the wires, and higher degree polynomials for the gates. However, we will not consider this refinement, since the idea seems to have little practical interest.
\section{Experimental Results}
\label{sec:results}
\begin{table*}[!thp]
\centering
\begin{adjustbox}{width=\textwidth,totalheight=8in,keepaspectratio}
\begin{tabular}{llllllllll}
\toprule
Alg.& \multicolumn{3}{c}{Parameters} & \multicolumn{2}{c}{Encryption} & \multicolumn{3}{c}{MR Job} & Decrypt(ms) \\
& $d$ & $\rho$ & $\rho'$ & Init(s) & Enc($\mu$s) & Exec(s) & Prod($\mu$s) & Sum($\mu$s) & \\
\midrule
HE1 & 2 & 32 & n/a & 0.12 & 13.52 & 23.82 & 54.41 & 9.06 & 0.21 \\
HE1 & 2 & 64 & n/a & 0.12 & 16.24 & 23.85 & 60.38 & 8.04 & 0.49 \\
HE1 & 2 & 128 & n/a & 0.15 & 25.73 & 23.77 & 84.69 & 8.43 & 0.28 \\
HE1 & 3 & 32 & n/a & 0.17 & 22.98 & 23.65 & 87.75 & 11.46 & 0.35 \\
HE1 & 3 & 64 & n/a & 0.19 & 34.63 & 24.72 & 95.68 & 12.37 & 0.45 \\
HE1 & 3 & 128 & n/a & 0.42 & 54.83 & 26.05 & 196.71 & 14.07 & 0.55 \\
HE1 & 4 & 32 & n/a & 0.28 & 43.36 & 24.48 & 108.72 & 13.75 & 0.5 \\
HE1 & 4 & 64 & n/a & 0.53 & 58.85 & 26.41 & 227.44 & 15.85 & 3.59 \\
HE1 & 4 & 128 & n/a & 1.36 & 104.95 & 28.33 & 484.95 & 16.92 & 5.67 \\
HE1N & 2 & 1 & 32 & 0.22 & 32.99 & 22.94 & 88.38 & 8.53 & 3.35 \\
HE1N & 2 & 1 & 64 & 0.39 & 52.63 & 26.24 & 168.54 & 12.39 & 3.56 \\
HE1N & 2 & 1 & 128 & 1.2 & 89.01 & 26.18 & 226.2 & 13.16 & 8.1 \\
HE1N & 2 & 8 & 32 & 0.6 & 57.88 & 25.9 & 177.36 & 11.17 & 7.18 \\
HE1N & 2 & 8 & 64 & 0.32 & 43.93 & 26.53 & 96.78 & 12.18 & 2.27 \\
HE1N & 2 & 8 & 128 & 1.13 & 78.11 & 24.42 & 212.75 & 11.07 & 8.4 \\
HE1N & 2 & 16 & 64 & 0.33 & 53.97 & 27.15 & 168 & 13.67 & 4.47 \\
HE1N & 2 & 16 & 128 & 0.63 & 68.73 & 25.22 & 194.42 & 11.01 & 7.65 \\
HE1N & 3 & 1 & 32 & 8.54 & 183.19 & 24.24 & 522.07 & 12.06 & 9.09 \\
HE1N & 3 & 1 & 64 & 3.67 & 125 & 29.49 & 467.36 & 18.22 & 11.43 \\
HE1N & 3 & 1 & 128 & 27.84 & 313.76 & 26.94 & 1235.77 & 15.04 & 11.75 \\
HE1N & 3 & 8 & 32 & 115 & 462.45 & 32.61 & 1556.17 & 21.11 & 19.79 \\
HE1N & 3 & 8 & 64 & 9.75 & 180.08 & 25.87 & 500.62 & 15.03 & 10.39 \\
HE1N & 3 & 8 & 128 & 36.05 & 259.15 & 30.1 & 836.27 & 20.68 & 11.45 \\
HE1N & 3 & 16 & 64 & 30.96 & 378.99 & 28.24 & 1338.33 & 15.51 & 13.3 \\
HE1N & 3 & 16 & 128 & 8.13 & 226.32 & 27.92 & 621.95 & 18.01 & 10.89 \\
HE2 & 2 & 32 & n/a & 0.16 & 85.79 & 26.82 & 305.52 & 11.68 & 4.83 \\
HE2 & 2 & 64 & n/a & 0.17 & 95.92 & 29.71 & 354.79 & 16.9 & 3.26 \\
HE2 & 2 & 128 & n/a & 0.22 & 132.53 & 32.84 & 540.78 & 22.83 & 4.92 \\
HE2 & 3 & 32 & n/a & 0.23 & 130.3 & 31.18 & 513.93 & 23.77 & 6.52 \\
HE2 & 3 & 64 & n/a & 0.29 & 145.62 & 32.84 & 615.9 & 24.61 & 6.3 \\
HE2 & 3 & 128 & n/a & 0.52 & 249.47 & 29.54 & 1443.82 & 16.56 & 18.34 \\
HE2 & 4 & 32 & n/a & 0.39 & 175.63 & 29.5 & 733.23 & 20.69 & 6.01 \\
HE2 & 4 & 64 & n/a & 0.7 & 255.3 & 29.55 & 1578.39 & 18.29 & 16.24 \\
HE2 & 4 & 128 & n/a & 2.7 & 465.51 & 37.47 & 2943.91 & 22.15 & 15.41 \\
HE2N & 2 & 1 & 32 & 0.27 & 147.83 & 29.74 & 571.94 & 16.58 & 5.66 \\
HE2N & 2 & 1 & 64 & 0.43 & 202.74 & 33.36 & 1291.68 & 18.3 & 13.23 \\
HE2N & 2 & 1 & 128 & 1.58 & 354.19 & 33.76 & 1977.51 & 17.13 & 12.46 \\
HE2N & 2 & 8 & 32 & 0.59 & 234.83 & 31.42 & 1413.31 & 15.21 & 14.92 \\
HE2N & 2 & 8 & 64 & 0.33 & 163.78 & 27.42 & 635.64 & 13.6 & 6.18 \\
HE2N & 2 & 8 & 128 & 0.9 & 307.68 & 36.32 & 1850.83 & 21.71 & 15.79 \\
HE2N & 2 & 16 & 64 & 0.42 & 208.1 & 29.96 & 1230.56 & 13.41 & 13.16 \\
HE2N & 2 & 16 & 128 & 0.73 & 274.48 & 30.82 & 1585.1 & 14.85 & 15.04 \\
HE2N & 3 & 1 & 32 & 5.72 & 651.1 & 36.49 & 3438.96 & 18.67 & 19.05 \\
HE2N & 3 & 1 & 64 & 4.45 & 477.52 & 35.33 & 3073.46 & 18.75 & 19.77 \\
HE2N & 3 & 1 & 128 & 26.83 & 1192.79 & 43.23 & 6416.43 & 22.48 & 25.12 \\
HE2N & 3 & 8 & 32 & 87.38 & 1658.36 & 49.63 & 8139.19 & 23.71 & 27.24 \\
HE2N & 3 & 8 & 64 & 5.21 & 607.75 & 36.54 & 3337.1 & 22.28 & 17.39 \\
HE2N & 3 & 8 & 128 & 17.14 & 945.64 & 40.49 & 4620.69 & 25.91 & 22.41 \\
HE2N & 3 & 16 & 64 & 39.19 & 1368.18 & 44.88 & 7005.7 & 24.1 & 28.3 \\
HE2N & 3 & 16 & 128 & 11.39 & 774.07 & 36.05 & 3845.1 & 20.29 & 20.74 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{Timings for each experimental configuration. \emph{Init} is the initialisation time for the encryption algorithm, \emph{Enc} is the mean time to encrypt a single integer, \emph{Exec} is the MR job execution time, \emph{Prod} is the mean time to homomorphically compute the product of two encrypted integers, \emph{Sum} is the mean time to homomorphically compute the sum of two encrypted integers, and \emph{Decrypt} is the time to decrypt the final result.}
\label{results}
\end{table*}
HE1, HE1N, HE2, and HE2N have been implemented in pure unoptimised Java using the JScience mathematics library \cite{jscience2014}. Secure pseudo-random numbers are generated using the ISAAC algorithm \cite{isaac2008}. The ISAAC cipher is seeded using the Linux {\small\texttt{/dev/random}} source. This prevents the weakness in ISAAC shown by Aumasson~\cite{aumasson2006isaac}.
We devised a simple evaluation experiment to generate a fixed number (24,000) of encrypted inputs and then perform a homomorphic inner product on those inputs using a Hadoop MapReduce (MR) algorithm. On the secure client side, the MR input is generated as pseudo-random $\rho$-bit integers which are encrypted and written to a file with $d$ inputs per line, where $d$ is the degree of the polynomial to be computed. In addition, the unencrypted result of the computation is computed so that it may be checked against the decrypted result of the homomorphic computation. On the Hadoop cluster side, each mapper processes a line of input by homomorphically multiplying together each input on a line and outputs this product. A single reducer homomorphically sums these products. The MR algorithm divides the input file so that each mapper receives an equal number of lines of input, thereby ensuring maximum parallelisation. Finally, on the secure client side, the MR output is decrypted.
Our test environment consisted of a single secure client (an Ubuntu Linux VM with 16GB RAM) and a Hadoop 2.7.3 cluster running in a heterogeneous OpenNebula cloud. The Hadoop cluster consisted of 17 Linux VMs, one master and 16 slaves, each allocated 2GB of RAM. Each experimental configuration of algorithm, polynomial degree ($d$), integer size ($\rho$), and effective entropy of inputs after adding ``noise'' ($\rho'$, for the `N' variant algorithms only), was executed 10 times. The mean results are tabulated in Table \ref{results}.
Our results compare extremely favourably with Table 2 of \cite{naehrig2011can}. For encryption, our results are, in the best case, 1000 times faster than those presented there and, in the worst case, 10 times faster. For decryption, our results are comparable. However, to decrypt our results we take the modulus modulo a large prime, rather than modulo 2 as in \cite{naehrig2011can}, which is obviously less efficient. For homomorphic sums and products, our algorithms perform approximately 100 times faster. \cite{naehrig2011can} only provides experimental data for computing degree 2 polynomials, whereas we have provided experimental results for the computation of higher degree polynomials.
Similarly, compared with figure 13 of \cite{popa2011cryptdb}, our encryption times for a 32-bit integer are considerably faster. While a time for computing a homomorphic sum on a column is given in figure 12, it is unclear how many rows exist in their test database. Nevertheless, our results for computing homomorphic sums compare favourably with those given. It should be noted that CryptDB \cite{popa2011cryptdb} only supports homomorphic sums and is incapable of computing an inner product. Therefore, we only compare the homomorphic sum timings.
Table 1 of \cite{stephen2014practical} is unclear on whether the values are aggregate timings or the timing per operation. Even assuming that they are aggregate values, our results are approximately 100 times faster than those presented for homomorphic sum and product operations. We also note that Crypsis \cite{stephen2014practical} uses two different encryption schemes for integers, ElGamal \cite{elgamal1985} and Paillier \cite{paillier1999}, which only support addition or multiplication but not both. No discussion of computation of an inner product is made in \cite{stephen2014practical} but we expect that the timings would be considerably worse as data encrypted using ElGamal to compute the products would have to be shipped back to the secure client to be re-encrypted using Paillier so that the final inner product could be computed.
We note that there are some apparent anomalies in the data. JScience implements arbitrary precision integers as an array of Java \texttt{long} (64-bit) integers to store the bit representation of an integer. It may be the case that this underlying representation is optimal for some of our test configurations and suboptimal for others, causing unexpected results. Another possibility is that the unexpected results may be as a result of JVM heap increases and garbage collection which may have been more prevalent in certain test configurations.
\section{Conclusion}
\label{sec:concfurther}
In this paper we have presented several new homomorphic encryption schemes intended for use in a practical SSCC system. We envisage that the majority of computation on integer big data, outside of scientific computing, will be computing low degree polynomials on integers, or fixed-point decimals which can be converted to integers. Our somewhat homomorphic schemes are perfectly suited to these types of computation.
As they are only somewhat homomorphic, each of these schemes has a concern that the computational result will grow bigger than the secret modulus. In the case of the ``noise'' variants, we also have to consider the noise term growing large. So, as they stand, these schemes can only compute polynomials of suitably bounded degree.
A further concern is that the ciphertext space is much larger than the plaintext space, as a result of adding multiples of large primes to the plaintext. However, we have shown that parameter values exist which make the system practical for computing low degree polynomials. Similar schemes \cite{vandijk2010fully,coron2011fully} produce ciphertexts infeasibly larger than the corresponding plaintext, which is a single bit. Even the practical CryptDB \cite{popa2011cryptdb}, which is not fully homomorphic, enciphers a 32-bit integer as a 2048-bit ciphertext. Our schemes will produce ciphertexts of similar size if high security is required. However, if the security is only intended to prevent casual snooping, rather than a determined cryptographic attack, the ciphertext size can be reduced, and the blow-up may be acceptable. Observe that the parameters of the system change for each computation, so a sustained attack must constantly re-learn these parameters. Of course, if the attacker is able to export data for off-line cryptanalysis, only high security suffices.
We have also presented a hierarchy of systems, HE$k$, with increasing levels of security. These seem to be of practical interest for small $k>2$, but seem impractical for large $k$.
Finally, we presented a fully homomorphic scheme based on HE$k$ for large enough $k$, which seems of purely theoretical interest. The scheme is capable of computing an arbitrary depth Boolean circuit without employing the techniques used in other fully homomorphic systems \cite{gentry2009fully,brakerski2012leveled,brakerski2012fully}.
We have implemented and evaluated the HE1, HE1N, HE2 and HE2N schemes as part of an SSCC system, as discussed in section \ref{sec:results}. Our results are extremely favourable when compared with \cite{naehrig2011can,popa2011cryptdb,stephen2014practical}: so much so that our MapReduce job execution times remain low even when using the largest set of parameters for HE2N. We believe that this demonstrates the suitability of our schemes for the encryption of integers in cloud computations.
\printbibliography
\clearpage
\section{\label{introduction}Introduction}
The Standard Model (SM) of particle physics \cite{Weinberg:1967tq,Glashow:1961tr,Salam:1968rm}
has successfully passed the numerous experimental tests performed so far. The recent observation
of the Higgs particle \cite{Higgs:1964ia} at the LHC \cite{CMS:2012,ATLAS:2012} also seems
to confirm the mechanism of spontaneous symmetry breaking, which is responsible for masses of
the known gauge bosons and fermions. On the other hand, we know that the SM is not
complete. Firstly, it does not provide a viable dark matter candidate. Secondly, it predicts
that the active neutrinos are strictly massless, which contradicts the results of neutrino
oscillation experiments. A simple yet elegant way to generate
small but nonzero neutrino masses is to add three right-handed
Majorana neutrinos to the model:
\begin{align}
\label{lagrangian}
\mathscr{L}=\mathscr{L}_{SM}&+{\textstyle\frac12}\bar \majneutrino_i
\bigl(i\slashed{\partial} - M_i\bigr)\majneutrino_i\nonumber\\
&- \yu_{\alpha i}\bar \leptdoublet_\alpha {\tilde \higgs} P_R \majneutrino_i
-\yu^\dagger_{i\alpha} \bar \majneutrino_i {\tilde \higgs}^\dagger P_L \leptdoublet_\alpha\kend
\end{align}
where $\majneutrino_i=\majneutrino^c_i$ are the heavy Majorana fields, $\leptdoublet_\alpha$ are the
lepton doublets and $\tilde \higgs\equiv i\sigma_2 \higgs^*$ is the conjugate of the Higgs doublet.
After the electroweak symmetry breaking the active neutrinos receive naturally small masses through
the type-I seesaw mechanism. This scenario has even more far-reaching consequences as it can
explain another beyond-the-SM observation, the baryon asymmetry of the universe.
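For orientation, the resulting light-neutrino mass matrix takes the standard type-I seesaw form (quoted here for completeness, up to sign and normalization conventions; $v\simeq 246$ GeV denotes the Higgs vacuum expectation value):
\begin{equation*}
(m_\nu)_{\alpha\beta}\ \simeq\ -\frac{v^2}{2}\sum_i \yu_{\alpha i}\,M_i^{-1}\,\yu_{\beta i}\kend
\end{equation*}
so that the smallness of the light masses is explained by the heavy scales $M_i$ rather than by tiny Yukawa couplings.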
The Majorana mass term in \eqref{lagrangian} violates lepton number. In the early
Universe a decay of the Majorana neutrino into a lepton-Higgs pair increases the total lepton number of
the Universe by one unit, and a decay into the corresponding antiparticles decreases the total lepton
number by one unit. If there is \CP-violation then, on average, the number of leptons produced in those
decays is not equal to the number of antileptons and a net lepton asymmetry is produced. It is also known
that, whereas the difference of the lepton and baryon numbers is conserved in the Standard Model, any
other linear combination of them is not \cite{Kuzmin:1985mm}. This implies that the lepton asymmetry
produced by the Majorana neutrinos is partially converted to the baryon asymmetry \cite{Fukugita:1986hr}.
This mechanism, which is referred to as baryogenesis via leptogenesis, naturally explains the observed
baryon asymmetry of the Universe. For a more detailed review of leptogenesis see e.g.
\cite{Buchmuller:2004nz,Davidson:pr2008,Drewes:2013gca}.
The state-of-the-art analysis of the asymmetry generation uses Boltzmann equations with the decay
and scattering amplitudes calculated in vacuum. Their applicability in the hot and expanding early
universe is questionable and can be cross-checked using a first-principle approach based on the use of
non-equilibrium quantum field theory. One of the most important processes for the generation of the asymmetry
is the decay of the Majorana neutrino. Thermal effects enhancing \CP-violation in the decay
have been studied in \cite{Garny:2009rv,Garny:2009qn,Beneke:2010wd,Garbrecht:2010sz,Garny:2010nj}.
The role of the flavor effects has been addressed in \cite{Beneke:2010dz}. A first-principle
analysis of the asymmetry generation in the very interesting regime of resonant leptogenesis
has been presented in \cite{Garny:2011hg} and \cite{Garbrecht:2011aw}. The effect of
next-to-leading order corrections from the gauge interactions of lepton and
Higgs doublets on the production and decay rate of right-handed neutrinos at finite temperature
has been recently studied in \cite{Garbrecht:2013gd,Garbrecht:2013bia}.
The asymmetry generated in the Majorana decay is partially washed out by the inverse decay and scattering processes.
The latter can be classified into two categories. The first category includes $\Delta L=2$
scattering processes mediated by the Majorana neutrinos. A first-principle analysis of such processes
free of the notorious double-counting problem has been presented in \cite{Frossard:2012pc}. The second
category includes $\Delta L=1$ decay and scattering processes mediated by the Higgs. The latter processes
are also known to play an important role in the asymmetry generation and are addressed in the present paper.
The outline of the paper is as follows. In \sect\ref{CanonicalApproach} we briefly review the
canonical approach to the analysis of the $\Delta L=1$ processes and derive the corresponding
amplitudes and reduced cross-sections. In \sect\ref{NEQFTapproach} we derive quantum-generalized
Boltzmann equations for the lepton asymmetry, calculate the effective amplitudes of the Higgs-mediated
scattering processes and compare them with the canonical ones. The obtained Boltzmann equations are
used in \sect\ref{RateEquations} to derive a simple system of rate equations for the total lepton
asymmetry. In \sect\ref{Numerics} we present a numerical comparison of the corresponding reaction
densities with the ones obtained using the canonical approach. A summary of the results is presented
in \sect\ref{Summary}.
\section{\label{CanonicalApproach}Conventional approach}
In the scenario of thermal leptogenesis the lepton asymmetry is generated in the lepton number
and \CP-violating decays of the heavy Majorana neutrinos.
\begin{figure}[h!]
\includegraphics[width=0.95\columnwidth]{thermal_leptogenesis_one_loop_corrections}
\caption{\label{treevertexself}Tree-level, one-loop self-energy and one-loop vertex
contributions to the decay of the heavy Majorana neutrino.}
\end{figure}
The corresponding \CP-violating parameters receive contributions from the interference of the
tree-level amplitude with the vertex \cite{Fukugita:1986hr,Garny:2009rv} and self-energy
\cite{Flanz:1994yx,Covi:1996wh,PhysRevD.56.5431,Pilaftsis:2003gt,Garny:2009qn}
amplitudes, see \fig\ref{treevertexself}. The contribution of the loop diagrams
can be accounted for by effective Yukawa couplings \cite{Pilaftsis:2003gt}. If
thermal masses of the SM particles are negligible, they are given by:
\begin{subequations}
\label{effective couplings}
\begin{align}
h_{+,\alpha i} &\equiv h_{\alpha i} -i h_{\alpha j} (h^\dagger h)^*_{ji}\, g_{ij}\kend\\
h_{-,\alpha i} &\equiv h^*_{\alpha i} -i h^*_{\alpha j} (h^\dagger h)_{ji}\, g_{ij}\kend
\end{align}
\end{subequations}
where the loop-function $g_{ij}$ is defined as
\begin{align}
\label{LoopFunction}
g_{ij}\equiv &\frac{1}{16\pi} \frac{M_i M_j}{M^2_i-M^2_j}\nonumber\\
+ &\frac{1}{16\pi}\frac{M_j}{M_i}\biggl[1-\biggl(1+\frac{M_j^2}{M_i^2}\biggr)
\ln \biggl(1+\frac{M_i^2}{M_j^2}\biggr)\biggr]\dend
\end{align}
Note that this expression is valid only for on-shell final states. The first term in
\eqref{LoopFunction} is related to the self-energy and
the second term to the vertex contribution. This expression is applicable for
a mildly or strongly hierarchical mass spectrum of the Majorana neutrinos.
\begin{figure}[h!]
\includegraphics[width=0.27\columnwidth]{thermal_leptogenesis_hierarchical}
\caption{\label{hierarchicalloop} Effective one-loop diagram for the self-energy and
vertex contributions to the decay of the lightest Majorana neutrino for a strongly
hierarchical mass spectrum.}
\end{figure}
In both cases most of the asymmetry is typically generated by the lightest Majorana
neutrino, whereas the asymmetry generated by the heavier ones is almost completely
washed out.
For a strongly hierarchical mass spectrum, $M_i\ll M_j$, the intermediate
Majorana line in \figs\ref{treevertexself}.b and \ref{treevertexself}.c
contracts to a point, see \fig\ref{hierarchicalloop}, and the structure of
the self-energy and vertex contributions is the same. In this limit:
\begin{align}
\label{LoopFunctionHierarchical}
g_{ij}\approx -\frac{3}{32\pi}\frac{M_i}{M_j}\dend
\end{align}
Note that in this approximation the loop integral leading to \eqref{LoopFunctionHierarchical}
depends only on the momentum of the initial state and is independent of the momenta
of the final states. This implies in particular that this expression can also be used for
off-shell final states.
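The hierarchical limit \eqref{LoopFunctionHierarchical} is easy to cross-check numerically. The following Python sketch (not part of the derivation; the mass values are arbitrary illustrations) evaluates \eqref{LoopFunction} and confirms that for $M_i\ll M_j$ the deviation from $-\frac{3}{32\pi}M_i/M_j$ is suppressed by $(M_i/M_j)^2$:

```python
import math

def g(Mi, Mj):
    """Loop function g_ij: self-energy (first) plus vertex (second) term."""
    self_energy = (1.0 / (16 * math.pi)) * Mi * Mj / (Mi**2 - Mj**2)
    vertex = (1.0 / (16 * math.pi)) * (Mj / Mi) * (
        1 - (1 + Mj**2 / Mi**2) * math.log1p(Mi**2 / Mj**2))
    return self_energy + vertex

# strongly hierarchical spectrum M_i << M_j (illustrative values, GeV)
Mi, Mj = 1.0e10, 1.0e13
exact = g(Mi, Mj)
approx = -3.0 / (32 * math.pi) * Mi / Mj
rel_dev = abs(exact / approx - 1)   # suppressed by (Mi/Mj)^2
```

Note the use of `log1p` rather than `log(1 + x)`: for $M_i^2/M_j^2\ll 1$ the logarithm must be evaluated accurately, since the leading terms in the bracket cancel.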
Using the effective couplings \eqref{effective couplings} we find for the decay amplitudes
(squared) \cite{Pilaftsis:2003gt,Frossard:2012pc}:
\begin{subequations}
\label{EffAmplMajDecay}
\begin{align}
\EffAmplitude{}{\majneutrino_i\rightarrow\lepton \higgs}
& = g_\majneutrino g_w (h_+^\dagger h^{\vphantom{\dagger}}_+)_{ii} (\mommaj\momlep)\kend\\
\EffAmplitude{}{\majneutrino_i\rightarrow\bar\lepton\bar\higgs}
& = g_\majneutrino g_w (h_-^\dagger h^{\vphantom{\dagger}}_-)_{ii} (\mommaj\momlep)\kend
\end{align}
\end{subequations}
where we have summed over flavors of the leptons in the final state as well as over
the Majorana spin ($g_\majneutrino=2$) and the $SU(2)_L$ group ($g_w=2$) degrees of
freedom. Here $\mommaj$ and $\momlep$ are momenta of the heavy neutrino and
lepton, respectively. The decay amplitudes \eqref{EffAmplMajDecay} can be traded for
the total decay amplitude and \CP-violating parameter:
\begin{subequations}
\label{AmplsqAndEpsDef}
\begin{align}
\EffAmplitude{}{\majneutrino_i} & \equiv
\EffAmplitude{}{\majneutrino_i \rightarrow \lepton\higgs}+
\EffAmplitude{}{\majneutrino_i \rightarrow \bar\lepton\bar\higgs}\kend\\
\label{EpsilonUnintegrated}
\epsilon_i & \equiv
\frac{\EffAmplitude{}{\majneutrino_i \rightarrow \lepton\higgs}-
\EffAmplitude{}{\majneutrino_i \rightarrow \bar\lepton\bar\higgs}}{
\EffAmplitude{}{\majneutrino_i \rightarrow \lepton\higgs}+
\EffAmplitude{}{\majneutrino_i \rightarrow \bar\lepton\bar\higgs}}\dend
\end{align}
\end{subequations}
Combining \eqref{EffAmplMajDecay} and \eqref{AmplsqAndEpsDef} we then find for the
(unflavored) \CP-violating parameter:
\begin{align}
\label{epsilon vacuum}
\epsilon^{vac}_i\approx \frac{\Im(h^\dagger h)^2_{ij}}{(h^\dagger h)_{ii}}\times 2g_{ij}\kend
\quad j\neq i \dend
\end{align}
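For orientation, \eqref{epsilon vacuum} can be evaluated for a toy Yukawa matrix. All numbers below are hypothetical (chosen only to illustrate the formula, not fitted to neutrino data), and $g_{12}$ is taken in the hierarchical limit \eqref{LoopFunctionHierarchical}:

```python
import math

# toy Yukawa matrix h[alpha][i], alpha = lepton flavor, i = heavy neutrino
# (hypothetical complex values for illustration only)
h = [[1.0e-3 + 2.0e-4j, 3.0e-4 - 1.0e-4j],
     [5.0e-4 - 2.0e-4j, 1.0e-3 + 1.0e-4j],
     [2.0e-4 + 1.0e-4j, 7.0e-4 - 3.0e-4j]]
M1, M2 = 1.0e10, 1.0e13          # strongly hierarchical masses

def hdag_h(i, j):
    """Entry (i, j) of h^dagger h in heavy-neutrino flavor space."""
    return sum(h[a][i].conjugate() * h[a][j] for a in range(3))

g12 = -3.0 / (32 * math.pi) * M1 / M2    # hierarchical limit of g_12
eps1 = (hdag_h(0, 1)**2).imag / hdag_h(0, 0).real * 2 * g12
```

With couplings of this size the resulting $\epsilon^{vac}_1$ is tiny, as expected for a strongly hierarchical spectrum where $g_{12}$ is suppressed by $M_1/M_2$.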
The asymmetry generated by the Majorana decay is partially washed out by the inverse
decay and scattering processes violating lepton number. An important role is played
by the $\Delta L=2$ scattering processes mediated by the heavy Majorana neutrinos
\cite{Pilaftsis:2003gt,Plumacher:1997ru,Frossard:2012pc}. In addition, there are
$\Delta L=1$ scattering processes mediated by the Higgs doublet with quarks (or the
gauge bosons) in the initial and final states \cite{Pilaftsis:2003gt,Plumacher:1997ru},
see \fig\ref{fig:NQlt} and \fig\ref{fig:NlQt}. The Higgs coupling to the top quark is considerably
larger than its couplings to the other quarks. For this reason we do not consider
the latter here. The corresponding Lagrangian reads:
\begin{align}
\label{toplagrangian}
\mathscr{L}_{SM}\supset -\yuq \bar \Q \tilde \higgs P_R \topq
-\yuq^* \bar \topq P_L {\tilde \higgs}^\dagger \Q\kend
\end{align}
where $\Q$ and $\topq$ are the $SU(2)_L$ doublet and singlet of the third quark generation.
\begin{figure}[h!]
\includegraphics[width=0.95\columnwidth]{topscattering_TS_NQ_lt}
\caption{\label{fig:NQlt} Tree-level, self-energy and vertex contributions to the
scattering processes $\majneutrino_i \Q\rightarrow \lepton \topq$. Similar diagrams for the scattering
process $\majneutrino_i \bar \topq \rightarrow \lepton \bar \Q$ are obtained by replacing $\Q$ with
$\bar \topq$ and $\topq$ with $\bar \Q$ as well as inverting the direction of the arrows.}
\end{figure}
The $\Delta L=1$ processes are also \CP-violating. The \CP-violation is generated by the same self-energy and
vertex diagrams. Strictly speaking, since the Higgs is no longer on-shell the effective couplings
\eqref{effective couplings} are not applicable in this case.
\begin{figure}[h!]
\includegraphics[width=0.95\columnwidth]{topscattering_TS_Nl_Qt}
\caption{\label{fig:NlQt} Tree-level, self-energy and vertex contributions to the scattering
processes $\majneutrino_i \bar \lepton \rightarrow \bar \Q \topq$.}
\end{figure}
On the other hand, for a strongly hierarchical mass spectrum the intermediate Majorana lines in
\fig\ref{fig:NQlt} and \fig\ref{fig:NlQt} again contract to a point and the momenta of the Higgs and
lepton play no role. In other words, for a strongly hierarchical mass spectrum we can still use the
effective couplings \eqref{effective couplings} supplemented with \eqref{LoopFunctionHierarchical}
to calculate the \CP-violating scattering amplitudes.
Summing over flavors and colors of the quarks and leptons in the initial and final states
as well as over the corresponding $SU(2)_L$ and spin degrees of freedom we find for the
amplitude of $\majneutrino_i\Q\rightarrow\lepton \topq$ scattering:
\begin{align}
\label{NQLtAmplitude}
\EffAmplitude{}{\majneutrino_i\Q\rightarrow\lepton \topq} =
\EffAmplitude{}{\majneutrino_i\rightarrow\lepton \higgs} \times
\pHiggs{2}{T}(\momlep-\mommaj)\times
\EffAmplitude{}{\higgs \Q\rightarrow \topq}\kend
\end{align}
where $\pHiggs{}{T}(\momhig)\approx 1/(\momhig^2-m_\higgs^2)$ is the Feynman (or time-ordered)
propagator\footnote{In the kinematic region of interest the decay width term in the Feynman
propagator of the Higgs plays no role and can be neglected.} of the intermediate Higgs and
we have defined
\begin{align}
\label{VacuumEffDecAmpl}
\EffAmplitude{}{\higgs \Q\rightarrow \topq} & =
2 g_s \yuqSqu (\momQ \momtop) \dend
\end{align}
Here $g_s=3$ is the $SU(3)_C$ factor, and $p_\topq$ and $p_\Q$ are the momenta of
the singlet and the doublet respectively. For the charge-con\-ju\-ga\-te process we find
an expression similar to \eqref{NQLtAmplitude}. As can be inferred from \eqref{VacuumEffDecAmpl}
in this work we neglect \CP-violation in the quark sector, which is known to be small. Defining
the \CP-violating parameter in scattering as
\begin{align}
\label{defCPSE}
\epsilon_{X\rightarrow Y} \equiv \frac{\EffAmplitude{}{X\rightarrow Y}-\EffAmplitude{}{\bar X\rightarrow \bar Y}}{
\EffAmplitude{}{X\rightarrow Y}+\EffAmplitude{}{\bar X\rightarrow \bar Y}}\kend
\end{align}
we then obtain for $\epsilon_{\majneutrino_i\Q\rightarrow \lepton \topq}$ the same expression
as for the Majorana decay, see \eqref{epsilon vacuum}. In the same approximation the amplitude and \CP-violating
parameter for $\majneutrino_i\bar\topq \rightarrow \lepton \bar\Q$ scattering coincide
with those for the $\majneutrino_i\Q\rightarrow \lepton \topq$ process.
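The factorized structure of \eqref{NQLtAmplitude} translates directly into a short numerical routine. The sketch below (couplings, thermal Higgs mass and momenta are all illustrative assumptions) assembles the tree-level amplitude from its three factors, with the Feynman propagator approximated by its real denominator as in the footnote above:

```python
def dot(p, q):
    """Minkowski product with signature (+,-,-,-)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

# hypothetical couplings and effective Higgs mass (illustrative numbers)
hh_ii = 1.0e-6         # (h^dagger h)_{ii}
lam_t = 1.0            # top Yukawa |lambda_t|
m_phi = 150.0          # effective Higgs mass, GeV
gN, gw, gs = 2, 2, 3   # spin, SU(2)_L and SU(3)_C factors

def amp_NQ_to_lt(q, p, pQ, pt):
    """Tree-level |M|^2 for N_i Q -> l t in the factorized form:
    (N_i -> l phi piece) x |Higgs propagator|^2 x (phi Q -> t piece)."""
    decay = gN * gw * hh_ii * dot(q, p)
    prop2 = 1.0 / (dot(p, p) - 2 * dot(p, q) + dot(q, q) - m_phi**2)**2
    higgs = 2 * gs * lam_t**2 * dot(pQ, pt)
    return decay * prop2 * higgs
```

The propagator denominator is $(\momlep-\mommaj)^2-m_\higgs^2$ expanded as $p^2-2p\cdot q+q^2-m_\higgs^2$; for physical momenta all three factors are positive, so the amplitude squared is as well.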
Proceeding in a similar way we find for the scattering amplitude of the
$\majneutrino_i \bar \lepton \rightarrow \bar \Q \topq$ process:
\begin{align}
\label{NLQtAmplitude}
\EffAmplitude{}{\majneutrino_i \bar \lepton \rightarrow \bar \Q \topq} =
\EffAmplitude{}{\majneutrino_i\bar\lepton\rightarrow\higgs} \times
\pHiggs{2}{T}(\momlep+\mommaj)\times
\EffAmplitude{}{\higgs\rightarrow \bar\Q\topq}\kend
\end{align}
where $\EffAmplitude{}{\higgs\rightarrow \bar\Q\topq}=\EffAmplitude{}{\higgs\Q\rightarrow \topq}$
because we neglect \CP-vi\-o\-la\-tion in the quark sector. Furthermore, for a strongly hierarchical mass spectrum
$\EffAmplitude{}{\majneutrino_i\bar\lepton\rightarrow\higgs}=\EffAmplitude{}{\majneutrino_i\rightarrow\lepton\higgs}$.
The resulting expression for the \CP-violating parameter then coincides with \eqref{epsilon vacuum}.
If the lepton and both quarks are in the final state then instead of a scattering process we deal
with a three-body Majorana decay, see \fig\ref{fig:Ampl_N_lQt}.
\begin{figure}[h!]
\includegraphics[width=0.95\columnwidth]{topscattering_TS_N_lQt}
\caption{\label{fig:Ampl_N_lQt} Tree-level, self-energy and vertex contributions to the
amplitude of the three-body decay processes $\majneutrino_i\rightarrow \lepton \bar \Q \topq$.}
\end{figure}
In complete analogy with the scattering processes we can write its amplitude in the form:
\begin{align}
\label{NtoLQtAmplitude}
\EffAmplitude{}{\majneutrino_i \rightarrow \lepton \bar \Q \topq} =
\EffAmplitude{}{\majneutrino_i\rightarrow\lepton\higgs} \times
\pHiggs{2}{T}(\momQ+\momtop)\times
\EffAmplitude{}{\higgs\rightarrow \bar\Q\topq}\dend
\end{align}
Evidently, the \CP-violating parameter for this process coincides with that for the
two-body Majorana decay.
To compute the generated lepton asymmetry, the conventional approach uses the
generalized Boltzmann equation for the total lepton abundance, $Y_L\equiv n_L/s$, with
$s$ being the comoving en\-tro\-py density \cite{Kolb:1990vq}. In the Friedmann-Robertson-Walker (FRW) universe
the contribution of the Higgs-mediated processes to the right-hand side of
the Boltzmann equation simplifies to:
\begin{align}
\label{ConvBltzmnEq}
\frac{s\cal{H}}{z}&\frac{d Y_L}{dz}=\ldots\nonumber\\
&-\sum_i\int \dpi{\majneutrino_i \lepton}{\Q\bar\topq}{\mommaj\momlep}{\momQ\momtop}
(2\pi)^4\delta(\mommaj+\momlep-\momQ-\momtop)
\nonumber\\
&\times \bigl[\EffAmplitude{}{\majneutrino_i\lepton\rightarrow\Q \bar\topq}
\f{\majneutrino_i}{}\f{\lepton}{}\qstatff{\Q}{}\qstatff{\bar\topq}{}-{\rm inverse}\bigr]
\nonumber\\
&+\sum_i\int \dpi{\majneutrino_i \Q}{\lepton\topq}{\mommaj\momQ}{\momlep\momtop}
(2\pi)^4\delta(\mommaj+\momQ-\momlep-\momtop)
\nonumber\\
&\times \bigl[ \EffAmplitude{}{\majneutrino_i\Q\rightarrow\lepton\topq}
\f{\majneutrino_i}{}\f{\Q}{}\qstatff{\lepton}{}\qstatff{\topq}{}-{\rm inverse}\bigr]
\nonumber\\
&+\sum_i\int \dpi{\majneutrino_i \bar\topq}{\lepton\bar\Q}{\mommaj\momtop}{\momlep\momQ}
(2\pi)^4\delta(\mommaj+\momtop-\momlep-\momQ)
\nonumber\\
&\times \bigl[ \EffAmplitude{}{\majneutrino_i\bar\topq\rightarrow\lepton\bar\Q}
\f{\majneutrino_i}{}\f{\bar\topq}{}\qstatff{\lepton}{}\qstatff{\bar\Q}{}-{\rm inverse}\bigr]
\nonumber\\
&+\sum_i\int \dpi{\majneutrino_i}{\lepton\bar\Q \topq}{\mommaj}{\momlep\momQ\momtop}
(2\pi)^4\delta(\mommaj-\momlep-\momQ-\momtop)
\nonumber\\
&\times \bigl[ \EffAmplitude{}{\majneutrino_i\rightarrow\lepton\bar\Q\topq}
\f{\majneutrino_i}{}\qstatff{\lepton}{}\qstatff{\bar\Q}{}\qstatff{\topq}{}-{\rm inverse}\bigr]
\nonumber\\
& - {\rm CP\,\, conjugate\,\,processes}\kend
\end{align}
where we have introduced the dimensionless inverse temperature $z=M_1/T$, the Hubble
rate ${\cal H}=H{|_{T=M_1}}$, and $\dpi{ab \dotso}{ij \dotso}{p_a p_b \dotso}{p_i p_j \dotso}$
stands for the product of the invariant phase space elements, $\lorentzd{p}{a}\equiv d^3p/[(2\pi)^3 2E_p]$.
Note that to ensure vanishing of the asymmetry in thermal equilibrium one should also include \CP-violating
$2\leftrightarrow 3$ processes \cite{Davidson:pr2008}. Since there is no need for that in the
non-equilibrium QFT approach we will not consider these processes here.
\section{\label{NEQFTapproach}Non-equilibrium QFT approach}
The formalism of non-equilibrium quantum field theory provides a powerful tool for the description
of out-of-equilibrium quantum fields and is therefore well suited for the analysis of leptogenesis.
In this section we briefly review results obtained in \cite{Frossard:2012pc} and introduce notation
that will be used in the rest of the paper. As has been argued in \cite{Frossard:2012pc}, the equation
of motion for the lepton asymmetry can be derived by considering the divergence of the lepton current.
In the FRW Universe $j_L^\mu=(n_L,\vec{0})$ and therefore it is related to the total lepton abundance by:
\begin{align}
{\cal D}_{\mu} j_L^\mu = \frac{s{\cal H}}{z}\frac{d Y_L}{dz}\dend
\end{align}
Using the formalism of non-equilibrium quantum field theory one can express it through propagators
and self-energies of leptons. After a transformation to the Wig\-ner space we obtain \cite{Frossard:2012pc}:
\begin{align}
\label{lepcurwig}
{\cal D}_\mu j_L^\mu (t,p)=g_w\int \! \lorentzdd{\momlep} \, \tr\bigl[ & \sLeptmat{}{<}(t,p) \pLeptmat{}{>} (t,p)
\nonumber \\ &- \sLeptmat{}{>}(t,p) \pLeptmat{}{<}(t,p) \bigr] \kend
\end{align}
where $\lorentzdd{\momlep}\equiv {d^4\momlep}/{(2\pi)^4}$ and the hats denote matrices
in flavor space. In \eqref{lepcurwig} we have taken into account that the $SU(2)_L$ symmetry
is unbroken at the epoch of leptogenesis. As a consequence, the $SU(2)_L$ structure of the
propagator is trivial, $\pLept{\alpha\beta}{ab}=\delta_{ab}\pLept{\alpha\beta}{}$, and
summation over the $SU(2)_L$ components simply results in the overall factor $g_w=2$. Furthermore,
in this work we restrict ourselves to the analysis of unflavored leptogenesis. Therefore, the
lepton propagator can be approximated by $\pLept{\alpha \beta }{}=\delta^{\alpha\beta} \pLept{}{}$.
Similar relation also holds for the lepton self-energy. Then the equation for the divergence
of the lepton current takes the form:
\begin{align}
\label{MasterEquationWigner}
{\cal D}_\mu j_L^\mu & = g_w \int\limits_0^\infty\frac{dp^0}{2\pi} \int \frac{d^3\momlep}{(2\pi)^3}\\
&\times \tr \bigl[ \bigl(\sLept{}{<}\pLept{}{>} - \sLept{}{>}\pLept{}{<}\bigr)
-\bigl(\sLeptcp{}{<} \pLeptcp{}{>}-\sLeptcp{}{>} \pLeptcp{}{<} \bigr)\bigr]\kend\nonumber
\end{align}
where $\sLept{}{}\equiv \sLept{\alpha\alpha}{}$ and we have suppressed the argument $(t,p)$ of the
two-point functions. Note that the trace in \eqref{MasterEquationWigner}
acts now in spinor space only.
To convert the integration over positive and negative frequencies into the integration over
positive frequencies only we have introduced in \eqref{MasterEquationWigner} \CP-conjugate
two-point functions and self-energies which are denoted by the bar. According to the extended
quasiparticle approximation (eQP) \cite{CondMatPhys2006_9_473,PhysRevC.48.1034,JPhys2006_35_110}
the Wightman propagators can be split into on- and off-shell parts:
\begin{align}
\label{plepeqp}
\pLept{}{\gtrless}=\pLeptEQP{}{\gtrless}-{\textstyle\frac{1}{2}}\bigl(\pLept{}{R} \sLept{}{\gtrless} \pLept{}{R}
+ \pLept{}{A} \sLept{}{\gtrless} \pLept{}{A}\bigr) \dend
\end{align}
The off-shell parts of the lepton propagators exactly cancel out in the lepton current as they
are lepton number conserving. On the other hand, as we will see later, the off-shell part of the
Higgs two-point functions is crucial for a correct description of the scattering processes.
The on-shell part of the Wightman propagators is related to the eQP spectral function and
one-particle distribution function $f_\lepton$ by the Kadanoff-Baym (KB) ansatz:
\begin{align}
\label{eqpspectral}
\pLeptEQP{}{>} &=\left(1-f_\lepton \right)\pLeptEQP{}{\rho}
\kend \quad \pLeptEQP{}{<} =-f_\lepton\, \pLeptEQP{}{\rho}
\kend
\end{align}
where
\begin{align}
\label{LeptonEQP}
\pLeptEQP{}{\rho}=-\frac{1}{2}\bigl(\pLept{}{R} \sLept{}{\rho}
\pLept{}{R}+\pLept{}{A}\sLept{}{\rho}\pLept{}{A}\bigr) \dend
\end{align}
In the limit of vanishing width the eQP spectral function $\pLeptEQP{}{\rho}$ approaches
the Dirac delta-function \cite{Frossard:2012pc},
\begin{align}
\label{leprhotilde}
\pLeptEQP{}{\rho} \approx (2\pi)\,\sign(\momlep^0) \delta(\momlep^2-m_\lepton^2) &
P_L \slashed{\momlep} P_R \nonumber\\
&\equiv \pLeptsc{}{\rho}(p) P_L \slashed{\momlep} P_R \kend
\end{align}
where we have extracted the `scalar' part $\pLeptsc{}{\rho}$ for notational convenience.
In \eqref{leprhotilde} we have approximately taken the gauge interactions into account in
the form of effective masses of the leptons. Note that we will not attempt
a fully consistent inclusion of the gauge interactions here. In this approximation the
spectral function is \CP-symmetric. This implies that the spectral properties, in particular
the masses, of the particles and antiparticles are the same.
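The effective thermal masses are not specified in the text. For orientation, a commonly used leading-order choice (quoted here as an assumption from the thermal field theory literature, with rough illustrative coupling values) is $m_\lepton^2=(\frac{3}{32}g_2^2+\frac{1}{32}g_Y^2)T^2$ for the lepton doublet and $m_\higgs^2=(\frac{3}{16}g_2^2+\frac{1}{16}g_Y^2+\frac14 y_t^2+\frac12\lambda)T^2$ for the Higgs:

```python
import math

# leading-order thermal masses (standard hard-thermal-loop values,
# quoted as an assumption; couplings are rough illustrations at high T)
g2, gY = 0.55, 0.45   # SU(2)_L and U(1)_Y gauge couplings
yt, lam = 0.6, 0.15   # top Yukawa and Higgs self-coupling

def m_lepton(T):
    """Effective thermal mass of the lepton doublet."""
    return math.sqrt(3.0/32.0 * g2**2 + 1.0/32.0 * gY**2) * T

def m_higgs(T):
    """Effective thermal mass of the Higgs doublet."""
    return math.sqrt(3.0/16.0 * g2**2 + 1.0/16.0 * gY**2
                     + 1.0/4.0 * yt**2 + 1.0/2.0 * lam) * T
```

Both masses scale linearly with the temperature, so at the epoch of leptogenesis ($T\sim M_1$) they are sizable fractions of $T$ and cannot always be neglected in the phase-space integrals.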
To evaluate the right-hand side of \eqref{MasterEquationWigner} we need to specify the form
of the lepton self-energy. It can be obtained by functional differentiation of the
2PI effective action with respect to the lepton propagator. Loosely speaking, this means
that the self-energies are obtained by cutting one line of the 2PI contributions to the
effective action.
\begin{figure}[h!]
\includegraphics[width=0.7\columnwidth]{setting_sun}\\[2mm]
\includegraphics[width=0.7\columnwidth]{pie}
\caption{\label{fig:2PI contributions}Two- and three-loop contributions to the 2PI
effective action and the corresponding contributions to the lepton self-energy.}
\end{figure}
The two- and three-loop contributions are presented in \fig\ref{fig:2PI contributions}(a)
and \fig\ref{fig:2PI contributions}(c).
The one-loop contribution to the lepton self-energy, see \fig\ref{fig:2PI contributions} (b),
is given by \cite{Frossard:2012pc}:
\begin{align}
\label{Sigma1WT}
\sLept{(1)}{\gtrless}(t,\momlep)
= & -\int \lorentzdd{\mommaj \momhig }(2\pi)^4
\deltafour{\momlep+\momhig-\mommaj}\nonumber\\
& \times
(\yu^\dagger\yu)_{ji} \, P_R \pMaj{ij}{\gtrless}(t,\mommaj) P_L \pHiggs{}{\lessgtr}(t,\momhig)\kend
\end{align}
where $\pMaj{}{}$ and $\pHiggs{}{}$ denote the Majorana and Higgs propagators respectively, and
$\lorentzdd{\mommaj\momhig }\equiv \lorentzdd{ \mommaj}\lorentzdd{\momhig} $.
The expression for the two-loop contribution, see \fig\ref{fig:2PI contributions} (d), is
rather lengthy. Here we will only need a part of it:
\begin{align}
\label{SEDMT}
&\sLept{(2)}{\gtrless}(t,\momlep) = \int \lorentzdd{\mommaj \momhig}\
(2\pi)^4\delta(\momlep+\momhig-\mommaj) \\
&\times\big[ (\yu^\dagger \yu)_{in}(\yu^\dagger \yu)_{jm}
\varLambda_{mn}(t,q,k)P_L C \pMaj{ij}{\gtrless}(t,\mommaj)
P_L\pHiggs{}{\lessgtr}(t,\momhig)\nonumber\\
&+ (\yu^\dagger \yu)_{ni} (\yu^\dagger \yu)_{mj}
P_R \pMaj{ji}{\gtrless}(t,\mommaj) C P_R V_{nm}(t,\mommaj,\momhig)
\pHiggs{}{\lessgtr}(t,\momhig)\big]\nonumber \kend
\end{align}
where we have introduced two functions containing loop corrections:
\begin{align}
\label{LC1MT}
&\varLambda_{mn}(t,\mommaj,\momhig) \equiv \int \lorentzdd{k_1 k_2 k_3}\nonumber\\
& \times (2\pi)^4\delta(\mommaj+k_1+k_2)\ (2\pi)^4 \delta(\momhig+k_2-k_3)\nonumber\\
& \times \bigl[ P_R \pMaj{mn}{R}(t,-k_3) C P_R \pLept{T}{F}(t,k_2)\pHiggs{}{A}(t,k_1)\nonumber\\
& +\ P_R \pMaj{mn}{F}(t,-k_3) C P_R \pLept{T}{R}(t,k_2) \pHiggs{}{A}(t,k_1)\nonumber\\
& +\ P_R \pMaj{mn}{R}(t,-k_3) C P_R \pLept{T}{A}(t,k_2) \pHiggs{}{F}(t,k_1) \bigr]\kend
\end{align}
and $V_{nm}(t,q,k)\equiv P\, \varLambda^\dagger_{nm}(t,q,k)\, P$ to shorten the notation.
Here $P=\gamma^0$ is the parity conjugation operator. The remaining terms of the two-loop
self-energy can be found in \cite{Frossard:2012pc}. As has been demonstrated in
the same reference, \CP-conjugates of the above self-energies can be obtained by
replacing the Yukawa couplings by the complex conjugated ones and the propagators by
the \CP-conjugated ones.
Comparing \eqref{Sigma1WT} and \eqref{SEDMT} we see that the two self-energies have a
very similar structure. First, the integration is over momenta of the Higgs and Majorana
neutrino and the delta-function contains the same combination of the momenta. Second, they
both include one Wightman propagator of the Higgs field and one Wightman propagator of the
Majorana field. These can be interpreted as cut-propagators which describe on-shell particles
created from or absorbed by the plasma \cite{Carrington:2004tm}. The retarded and advanced
propagators can be associated with the off-shell intermediate states. We therefore conclude
that the two self-energies describe \CP-violating decay of the heavy neutrino into a
lepton-Higgs pair. Note that this interpretation only holds for the ``particle'' part of the
eQP ansatz. The inclusion of the off-shell part of the Higgs Wightman propagator gives rise
to the Higgs mediated scattering processes and three-body decay, see section \ref{HiggsMediatedScattering}.
To evaluate \eqref{Sigma1WT} and \eqref{SEDMT} we need to know the form of the Higgs and
Majorana propagators. For the Higgs field we will adopt in this section a leading-order
approximation:
\begin{align}
\label{KBHiggs}
\pHiggs{}{>}=(1+f_\higgs)\pHiggs{}{\rho}\kend\quad
\pHiggs{}{<}=f_\higgs\pHiggs{}{\rho}\kend
\end{align}
and a simple quasiparticle approximation for the spectral function,
\begin{align}
\label{QPforHiggs}
\pHiggs{}{\rho}(t,\momhig)=\,(2\pi)\,\sign(\momhig^0)\,\delta(\momhig^2-m_\higgs^2)\kend
\end{align}
where $m_\higgs$ is the effective thermal Higgs mass. Close to thermal equilibrium the full resummed
Majorana propagator is given by \cite{Frossard:2012pc}:
\begin{align}
\label{eQPequ}
\pMajmat{}{\gtrless} & =\HatMatTheta{}{R} \bigl[\,\pMajEQPmat{\gtrless}\nonumber\\
&-\pMajdiagmat{}{R} \sMajmat{'}{\gtrless}\pMajdiagmat{}{A}
- {\textstyle\frac{1}{2}}\bigl(\pMajdiagmat{}{R} \sMajmat{d}{\gtrless}\pMajdiagmat{}{R}+
\pMajdiagmat{}{A} \sMajmat{d}{\gtrless}\pMajdiagmat{}{A} \bigr)
\bigr]\HatMatTheta{}{A}\kend
\end{align}
where $\sMajmat{d}{}$ and $\sMajmat{'}{}$ denote the diagonal and off-diagonal components
of the Majorana self-energy respectively, $\pMajdiagmat{}{R}$ and $\pMajdiagmat{}{A}$ are given by
\begin{align}
\label{diagRA}
\pMajdiagmat{}{R(A)}=-\bigl( \slashed{\mommaj} -\hat{M} - \sMajmat{d}{R(A)}\bigr)^{-1}\kend
\end{align}
and we have introduced
\begin{align}
\HatMatTheta{}{R}\equiv \bigl(\mathds{1}+\pMajdiagmat{}{R}\sMajmat{'}{R} \bigr)^{-1}\kend\quad
\HatMatTheta{}{A}\equiv \bigl(\mathds{1}+\sMajmat{'}{A} \pMajdiagmat{}{A} \bigr)^{-1}\kend
\end{align}
to shorten the notation. The first term in the square brackets of \eqref{eQPequ} describes (inverse)
decay of the Majorana neutrino, whereas the remaining three terms describe two-body scattering
processes mediated by the Majorana neutrino. For the ``particle'' part of the eQP diagonal Wightman
propagators of the Majorana neutrino one can use the KB approximation:
\begin{align}
\label{eqpmaj}
\pMajEQP{nn}{>}=(1-f_{N_n}) \pMajEQP{nn}{\rho}\,,\quad
\pMajEQP{nn}{<}=-f_{N_n} \pMajEQP{nn}{\rho}\kend
\end{align}
with the spectral function given by an expression analogous to \eqref{LeptonEQP}.
Substituting \eqref{diagRA} we find in the limit of small decay width:
\begin{align}
\pMajEQP{nn}{\rho} =(2\pi)\,\sign(\mommaj^0)\delta(\mommaj^2-M^2_n) & (\slashed{\mommaj}+M_n)\nonumber\\
& \equiv \pMajEQPsc{nn}{\rho}(\slashed{\mommaj}+M_n) \dend
\end{align}
Inserting \eqref{Sigma1WT} and \eqref{SEDMT} into
the divergence of the lepton current \eqref{MasterEquationWigner} and integrating over
the frequencies we then obtain an expression that strongly resembles the Boltzmann equation:
\begin{align}
\label{asymLtree}
\frac{s{\cal H}}{z}\frac{d Y_L}{dz} & = \sum_i \int
\dpi{\majneutrino_i}{\lepton\higgs}{q}{pk}\nonumber \\
& \times \bigl[\EffAmpl{}{\majneutrino_i}{\lepton\higgs}
\F{\majneutrino_i}{\lepton\higgs}{\mommaj}{\momlep \momhig}
- \EffAmpl{}{\majneutrino_i}{\bar\lepton\bar\higgs}
\F{\majneutrino_i }{\bar\lepton\bar\higgs}{\mommaj}{\momlep \momhig} \bigr]\kend
\end{align}
where we have introduced
\begin{align}
\label{eqn:definition of F}
&\F{ab ..}{ij ..}{p_a p_b ..}{p_i p_j ..} \equiv
(2\pi)^4 \delta(p_a+p_b+\ldots -p_i-p_j-\ldots)\nonumber\\
&\hspace{10mm} \times\bigl[\f{a}{p_a} \f{b}{p_b} \ldots (1\pm \f{i}{p_i})(1\pm \f{j}{p_j}) \ldots \nonumber\\
&\hspace{10mm} - \f{i}{p_i} \f{j}{p_j} \ldots (1\pm \f{a}{p_a})(1\pm \f{b}{p_b}) \ldots \bigr] \kend
\end{align}
with the plus (minus) sign corresponding to bosons (fermions). Note that $\F{ab ..}{ij ..}{p_a p_b ..}{p_i p_j ..}$
vanishes in equilibrium due to detailed balance. This implies that in accordance with the third
Sakharov condition \cite{Sakharov:1967dj} no asymmetry is generated in equilibrium. In the
Kadanoff-Baym formalism this result is obtained automatically and no need for the real
intermediate state subtraction arises.
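The vanishing of \eqref{eqn:definition of F} in equilibrium can be verified directly for the decay channel $\majneutrino\rightarrow\lepton\higgs$ (fermion $\rightarrow$ fermion + boson): with Fermi-Dirac and Bose-Einstein distributions at zero chemical potential the gain and loss terms cancel exactly whenever energy is conserved. A minimal numerical check (the energies are arbitrary, in units of $T$):

```python
import math

def f_FD(E, T):
    """Fermi-Dirac distribution, zero chemical potential."""
    return 1.0 / (math.exp(E / T) + 1.0)

def f_BE(E, T):
    """Bose-Einstein distribution, zero chemical potential."""
    return 1.0 / (math.exp(E / T) - 1.0)

def F_decay(EN, El, Ephi, T):
    """Statistical factor of F for N -> l phi (delta function stripped):
    gain term minus loss term."""
    gain = f_FD(EN, T) * (1 - f_FD(El, T)) * (1 + f_BE(Ephi, T))
    loss = f_FD(El, T) * f_BE(Ephi, T) * (1 - f_FD(EN, T))
    return gain - loss

T = 1.0
El, Ephi = 0.7, 1.9
balance = F_decay(El + Ephi, El, Ephi, T)  # vanishes up to rounding
```

The cancellation follows from the identities $1-f_{FD}(E)=e^{E/T}f_{FD}(E)$ and $1+f_{BE}(E)=e^{E/T}f_{BE}(E)$: both terms reduce to $f_{\majneutrino}f_\lepton f_\higgs\, e^{(E_\lepton+E_\higgs)/T}$ when $E_{\majneutrino}=E_\lepton+E_\higgs$.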
The effective decay amplitudes $\EffAmplitude{}{}$ are given by a sum of the tree-level, one-loop self-energy and
one-loop vertex contributions. The first two:
\begin{subequations}
\label{MajoranaSelfEnAmplitudes}
\begin{align}
\EffAmpl{T}{\majneutrino_i}{\lepton\higgs}&+\EffAmpl{S}{\majneutrino_i}{\lepton\higgs}
\equiv g_w{\textstyle\sum}_{mn}(h^\dagger h)_{mn}\nonumber\\
&\times\tr[\MatTheta{ni}{R}(\mommaj)(\slashed{\mommaj}+M_i)
\MatTheta{im}{A}(\mommaj)P_L\slashed{\momlep}P_R\,]\kend\\
\EffAmpl{T}{\majneutrino_i}{\bar\lepton\bar\higgs}&
+\EffAmpl{S}{\majneutrino_i}{\bar\lepton\bar\higgs}\equiv
g_w {\textstyle\sum}_{mn}(h^\dagger h)^*_{mn}\nonumber\\
&\times\tr[\MatThetacp{ni}{R}(\mommaj)(\slashed{\mommaj}+M_i)
\MatThetacp{im}{A}(\mommaj)P_L\slashed{\momlep}P_R\,]\kend
\end{align}
\end{subequations}
emerge from the one-loop lepton self-energy \eqref{Sigma1WT}. The third one:
\begin{subequations}
\label{MajoranaVertexAmplitudes}
\begin{align}
\EffAmpl{V}{N_i}{\lepton \higgs} \equiv & -g_w(h^{\dagger}h)_{ij}^2\
M_i\,\tr\bigl[\varLambda_{jj}(q,k) C P_L\slashed pP_R\bigr]\nonumber\\
&-g_w(h^{\dagger}h)_{ji}^{2}\,M_i\, \tr\bigr[ C V_{jj}(q,k)P_L\slashed p P_R\bigr]\kend\\
\EffAmpl{V}{N_i}{\bar \lepton \bar \higgs}
\equiv & -g_w(h^{\dagger}h)_{ij}^2\ M_i\,
\tr\bigr[ C V_{jj}(q,k)P_L\slashed p P_R\bigr]\nonumber\\
&-g_w(h^{\dagger}h)_{ji}^{2}\,M_i\, \tr\bigl[\varLambda_{jj}(q,k)
C P_L\slashed pP_R\bigr]\kend
\end{align}
\end{subequations}
is generated by the two-loop lepton self-energy \eqref{SEDMT}.
Substituting \eqref{MajoranaSelfEnAmplitudes} and \eqref{MajoranaVertexAmplitudes} into
\eqref{AmplsqAndEpsDef} we find to leading order in the couplings that the total decay
amplitude summed over the Majorana spin degrees of freedom is given by
$\EffAmplitude{}{\majneutrino_i}=2g_\majneutrino g_w (h^\dagger h)_{ii}(\momlep\mommaj)$.
The self-energy \CP-violating parameter reads \cite{Frossard:2012pc}:
\begin{align}
\label{CPparamSEdecay}
\epsilon_i^{S}&\approx
-\sum_{j\neq i} \frac{\Im(\yu^\dagger \yu)_{ij}^2 }{(\yu^\dagger \yu)_{ii}(\yu^\dagger \yu)_{jj}}
\frac{M_i\Gamma_j}{M_j^2} \frac{\momlep L_S}{\mommaj\momlep} \cdot M_j^2 \pMajdiagsc{jj}{h}(\mommaj)
\kend
\end{align}
where the `scalar' part of the diagonal hermitian Majorana propagator is given by \cite{Frossard:2012pc}:
\begin{align}
\label{DiagonalHermitian}
\pMajdiagsc{jj}{h}(q)&\equiv {\textstyle\frac12}\bigl[\pMajdiagsc{jj}{R}(q)+\pMajdiagsc{jj}{A}(q)\bigr]\nonumber\\
&\approx -\frac{q^2-M_j^2}{(q^2-M_j^2)^2+(\Gamma_j/M_j\cdot qL_S)^2}\dend
\end{align}
It describes the intermediate Majorana neutrino line in \fig\ref{treevertexself}.b. Note that
\eqref{CPparamSEdecay} has been obtained assuming a hierarchical mass spectrum of the heavy
neutrinos and is not applicable for a quasidegenerate spectrum. For positive $\mommaj^0$ and
$\mommaj^2$ the self-energy loop function $L_S$ is given by \cite{Frossard:2012pc}:
\begin{align}
\label{eqn:Lrho}
L^\mu_{S}=16\pi \int \lorentzd{\higgs \lepton}{\momhig_1 \momlep_1} &
(2\pi)^4 \deltafour{\mommaj-\momhig_1-\momlep_1}\,
\momlep^\mu_1\nonumber\\
&\times \bigl[1+\f{\higgs}{\momhig_1}-\f{\lepton}{\momlep_1}\bigr]\dend
\end{align}
Simplifying \eqref{MajoranaVertexAmplitudes} we find for the vertex \CP-violating parameter \cite{Frossard:2012pc}:
\begin{align}
\label{MajoranaCPVertex}
\epsilon^V_i &= -\frac12\sum \frac{\Im\,(h^\dagger h)_{ij}^2}{(h^\dagger h)_{ii}(h^\dagger h)_{jj}}
\frac{M_i\Gamma_j}{M_j^2}\frac{\momlep L_V}{\mommaj \momlep}\dend
\end{align}
The vertex loop function is given by:
\begin{align}
L^\mu_V&(q,p) = 16\pi\, M_j^2\int \lorentzdd{\mommaj_1 \momlep_1 \momhig_1} \\
&\times (2\pi)^4 \delta(q+\momhig_1+\momlep_1) (2\pi)^4 \delta(\mommaj-\momlep+\momlep_1-\mommaj_1)\,
\momlep_1^\mu \nonumber\\
&\times \bigl[ \pHiggs{}{\rho}(\momhig_1) \pLeptsc{}{F}(\momlep_1) \pMajdiagsc{jj}{h}(\mommaj_1)
+ \pHiggs{}{F}(\momhig_1) \pLeptsc{}{\rho}(\momlep_1) \pMajdiagsc{jj}{h}(\mommaj_1)\nonumber\\
& - \pHiggs{}{h}(\momhig_1) \pLeptsc{}{\rho}(\momlep_1) \pMajdiagsc{jj}{F}(\mommaj_1)
+ \pHiggs{}{h}(\momhig_1) \pLeptsc{}{F}(\momlep_1) \pMajdiagsc{jj}{\rho}(\mommaj_1)
\nonumber\\
& + \pHiggs{}{\rho}(\momhig_1) \pLeptsc{}{h}(\momlep_1) \pMajdiagsc{jj}{F}(\mommaj_1)
+ \pHiggs{}{F}(\momhig_1) \pLeptsc{}{h}(\momlep_1) \pMajdiagsc{jj}{\rho}(\mommaj_1)\bigr]\nonumber\kend
\end{align}
where $\pMajdiagsc{}{F}=(\pMajdiagsc{}{>}+\pMajdiagsc{}{<})/2$ is the `scalar' part of
the corresponding statistical propagator of the heavy neutrino. For the lepton and Higgs fields
the definitions are similar. The three lines in the square brackets in the expression for
$L^\mu_V$ above correspond to different
cuts through two of the three internal lines of the vertex loop. The first line corresponds to
cutting the Higgs and lepton propagators and can be simplified to \cite{Garny:2010nj}:
\begin{align}
\label{cutlphi}
\momlep L^{\lepton \higgs}_V& (\mommaj,\momlep) = 16\pi \int \lorentzd{\higgs \lepton}{\momhig_1 \momlep_1}
(2\pi)^4\delta(\mommaj-\momlep_1-\momhig_1) \nonumber\\
&\times (\momlep \momlep_1) \bigl[1+\f{\higgs}{\momhig_1}-\f{\lepton}{\momlep_1}\bigr]
\frac{M_j^2}{M_j^2-(\mommaj-\momlep_1-\momlep)^2} \dend
\end{align}
The other two are cuts through the Majorana and lepton lines and through the Majorana and Higgs
lines, respectively \cite{Garbrecht:2010sz}. For the second cut we obtain:
\begin{align}
\label{cutNl}
\momlep L^{\majneutrino_j\lepton}_V (\mommaj,\momlep)& =16\pi\int
\lorentzd{\majneutrino_j \lepton}{\mommaj_1\momlep_1}
(2\pi)^4\delta(\mommaj-\momlep+\momlep_1-\mommaj_1) \nonumber\\
& \times(\momlep \momlep_1)\,\bigl[\f{\majneutrino_j}{\mommaj_1}-\f{\lepton}{\momlep_1}\bigr]
\frac{M_j^2}{m_\higgs^2-(\mommaj+\momlep_1)^2} \nonumber \\
&+16\pi\int \lorentzd{\majneutrino_j}{\mommaj_1}\lorentzd{\lepton}{\momlep_1}
(2\pi)^4\delta(\mommaj-\momlep-\momlep_1+\mommaj_1)\nonumber\\
&\times(\momlep \momlep_1)\,\bigl[\f{\majneutrino_j}{\mommaj_1}-\f{\lepton}{\momlep_1}\bigr]
\frac{M_j^2}{m_\higgs^2-(\mommaj-\momlep_1)^2} \kend
\end{align}
whereas the contribution of the third cut is given by:
\begin{align}
\label{cutNphi}
\momlep L^{\majneutrino_j\higgs}_V & (\mommaj,\momlep) =16\pi\int
\lorentzd{\majneutrino_j \higgs }{\mommaj_1 \momhig_1}
(2\pi)^4\delta(\mommaj_1-\momlep-\momhig_1)\nonumber \\
&\times (\momlep\mommaj+\momlep\momhig_1)\bigl[\f{\higgs}{\momhig_1}+\f{\majneutrino_j}{\mommaj_1}\bigr]
\frac{M_j^2}{m_\lepton^2-(\mommaj+\momhig_1)^2} \kend
\end{align}
where we have assumed $M_i<M_j$ so that the (inverse) decay $\majneutrino_i\leftrightarrow \majneutrino_j
\lepton \lepton$ is kinematically forbidden. In \eqref{cutNl} the second term vanishes for
the decay process $N_i \leftrightarrow \lepton \higgs$ but gives a non-zero contribution for
the scattering processes, see section \ref{HiggsMediatedScattering}.
If the intermediate Majorana neutrino is much heavier than the decaying one, the last two cuts
are strongly Boltzmann-suppressed. Furthermore, comparing \eqref{eqn:Lrho} and \eqref{cutlphi} we observe that
in this case $\momlep L_V \approx \momlep L_S$. In the same approximation we
can also neglect the `regulator' term in the denominator of \eqref{DiagonalHermitian}.
The two contributions to the \CP-violating parameter then have the same structure and their
sum can be written in the form:
\begin{align}
\epsilon_i=\epsilon_i^{vac}\,\frac{\momlep L_S}{\mommaj \momlep}\dend
\end{align}
In the vacuum limit $L^\mu_S=\mommaj^\mu$ and we recover \eqref{epsilon vacuum}.
At finite temperatures the \CP-violating parameter is moderately enhanced by the
medium effects \cite{Frossard:2012pc}.
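As a quick illustration of this enhancement: for a Majorana neutrino at rest with respect to the
plasma and massless decay products, both final-state energies in \eqref{eqn:Lrho} equal $M_i/2$,
so the integral collapses to $L_S^0=M_i\bigl[1+f_\higgs(M_i/2)-f_\lepton(M_i/2)\bigr]$. The sketch
below evaluates this factor numerically, assuming equilibrium Bose--Einstein and Fermi--Dirac
distributions and neglecting thermal masses and chemical potentials:

```python
import math

def enhancement(z):
    """L_S^0 / M_i = 1 + f_B(M_i/2) - f_F(M_i/2) for a Majorana neutrino at
    rest in the plasma frame with massless decay products; z = M_i / T."""
    x = z / 2.0  # E/T for each massless decay product
    f_b = 1.0 / (math.exp(x) - 1.0)  # Bose-Einstein (Higgs)
    f_f = 1.0 / (math.exp(x) + 1.0)  # Fermi-Dirac (lepton)
    return 1.0 + f_b - f_f

for z in (1.0, 3.0, 10.0):
    print(z, enhancement(z))  # approaches the vacuum value 1 for z >> 1
```

At $z\sim 1$ the factor exceeds $2$, while for $z\gg 1$ it tends to unity, in line with the
moderate medium enhancement quoted above.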
\section{\label{HiggsMediatedScattering} Higgs mediated scattering}
In the previous section we approximated the full resummed Higgs propagator by the
leading-order expressions \eqref{KBHiggs} and \eqref{QPforHiggs}. In this section we
will use a more accurate eQP approximation. As we will see, it allows one to describe
Higgs-mediated $\Delta L=1$ two-body scattering and three-body decay processes.
Similarly to \eqref{plepeqp}, the extended quasiparticle approximation for the Higgs
propagator reads:
\begin{equation}
\label{eqphiggs}
\pHiggs{}{\gtrless}=\pHiggsEQP{}{\gtrless}-{\textstyle\frac{1}{2}}
\left( \pHiggs{2}{R}+\pHiggs{2}{A}\right)\sHiggs{}{\gtrless}\dend
\end{equation}
Its graphic interpretation is presented in \fig\ref{fig:eQPHiggs}.
\begin{figure}[h!]
\includegraphics[width=0.95\columnwidth]{eQP_Higgs}
\caption{\label{fig:eQPHiggs}Schematic representation of the eQP approximation for the Higgs field.}
\end{figure}
For the first term on
the right-hand side of \eqref{eqphiggs} we can again use approximations \eqref{KBHiggs} and
\eqref{QPforHiggs}. To analyze the second term we have to specify the Higgs self-energy.
At one-loop level it reads:
\begin{align}
\label{higgsse}
\sHiggs{}{\gtrless}(t,\momhig)=&g_s \yuqSqu \int
\lorentzdd{\momQ \momtop} (2\pi)^4 \delta(\momhig-\momtop+\momQ) \nonumber\\
& \times \tr\bigl[ \pQ{}{\lessgtr}(t,\momQ)P_R \pTopq{}{\gtrless} (t,\momtop) P_L\bigr] \kend
\end{align}
see \app\ref{HiggsSE} for more details. As is evident from \eqref{higgsse}, here we limit
our analysis to the contributions generated by the quarks of the third generation. Let us note that in the SM the gauge
contribution to the Higgs self-energy is of the same order of magnitude and should not be
neglected in a fully consistent approximation. Using the KB ansatz for the eQP
propagators of the quarks with effective thermal masses:
\begin{subequations}
\label{eqpquark}
\begin{align}
\pTopqEQP{}{>}&=(1-f_\topq)\pTopqEQP{}{\rho} \kend \quad \pTopqEQP{}{<}=-f_\topq \pTopqEQP{}{\rho}\kend \\
\pQEQP{}{>}&=(1-f_\Q)\pQEQP{}{\rho} \kend \quad \pQEQP{}{<}=-f_\Q \pQEQP{}{\rho}\kend
\end{align}
\end{subequations}
with
\begin{subequations}
\label{spectralEQPquarks}
\begin{align}
\pQEQP{}{\rho}&=(2\pi)\sign(\momQ^0) \delta(\momQ^2-m_\Q^2 )P_L\momQslash P_R\nonumber\\
&\equiv \pQsc{}{\rho} P_L\momQslash P_R \kend \\
\pTopqEQP{}{\rho}&=(2\pi)\sign(\momtop^0) \delta(\momtop^2-m_\topq^2 )P_R\momtopslash P_L\nonumber\\
&\equiv \pTopqsc{}{\rho} P_R\momtopslash P_L\kend
\end{align}
\end{subequations}
and neglecting their off-shell parts, which are lepton number conserving, we can write the
Higgs self-energy in the form:
\begin{subequations}
\label{higgssesimple}
\begin{align}
\sHiggs{}{>}(t,\momhig)&=-2g_s \yuqSqu \int \! \!
\lorentzdd{\momQ \momtop} (2\pi)^4 \delta(\momhig+\momQ-\momtop)
\nonumber \\ & \times \f{\Q}{} \qstatff{\topq}{} (\momQ\momtop)
\pQsc{}{\rho}(\momQ) \pTopqsc{}{\rho}(\momtop) \kend \\
\sHiggs{}{<}(t,\momhig)&=-2g_s \yuqSqu \int \!
\! \lorentzdd{\momQ \momtop} (2\pi)^4 \delta(\momhig+\momQ-\momtop)
\nonumber \\ & \times \qstatff{\Q}{}\f{\topq}{} (\momQ\momtop)
\pQsc{}{\rho}(\momQ) \pTopqsc{}{\rho}(\momtop) \kend
\end{align}
\end{subequations}
Substituting the one-loop lepton self-energy \eqref{Sigma1WT} with the Higgs propagator
given by \eqref{eqphiggs} into the divergence of the lepton current \eqref{MasterEquationWigner},
we obtain:
\begin{align}
\label{LeptCurrSelfEn}
\frac{s{\cal H}}{z}\frac{d Y_L}{dz} & = \sum \int \lorentzdd{\mommaj\momQ\momlep\momtop}
(2\pi)^4 \delta(\mommaj+\momQ-\momlep-\momtop) \nonumber\\
&\times \pMajEQPsc{ii}{\rho}(\mommaj) \pLeptsc{}{\rho}(\momlep) \pQsc{}{\rho}(\momQ)
\pTopqsc{}{\rho}(\momtop)\nonumber\\
&\times \EffAmplitude{}{\majneutrino_i\rightarrow \lepton\higgs}(\mommaj,\momlep) \pHiggs{2}{R+A}(\momtop-\momQ)
\EffAmplitude{}{\higgs\Q\rightarrow\topq}(\momtop,\momQ) \nonumber \\
&\times\bigl[\f{N_i}{\mommaj} \f{\Q}{\momQ} (1-\f{\lepton}{\momlep}) (1-\f{\topq}{\momtop}) \nonumber\\
&\hspace{18mm}-\f{\lepton}{\momlep} \f{\topq}{\momtop}(1-\f{N_i}{\mommaj})(1-\f{\Q}{\momQ})\bigr] \kend
\end{align}
where we have introduced a combination of the retarded and advanced propagators,
\begin{align}
\label{DeltaRA}
\pHiggs{2}{R+A}(\momhig)\equiv
{\textstyle\frac12}\bigl[\pHiggs{2}{R}(\momhig)+\pHiggs{2}{A}(\momhig) \bigr] \kend
\end{align}
which describes the intermediate Higgs line in \fig\ref{fig:NQlt} and \fig\ref{fig:NlQt}.
Note that in \eqref{LeptCurrSelfEn} the momenta are not restricted to the mass shell.
In particular, the zeroth components of the momenta can have either sign. Due to the
Dirac-deltas in the spectral functions the frequency integration is trivial. Each of
the spectral functions can be decomposed into a sum of two delta-functions, one with
positive and one with negative frequency, leading to $2^4$ terms. These different terms
correspond to $1 \leftrightarrow 3$
(inverse) decays, $2 \leftrightarrow 2$ scatterings and to (unphysical) $0\leftrightarrow 4$
processes. An additional constraint comes from the delta-function ensuring energy conservation.
In the regime $M_i>m_\lepton + m_\Q+ m_\topq$ only $8$ terms satisfy energy conservation.
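This counting can be reproduced with a short enumeration. The sketch below uses illustrative
masses obeying $M_i>m_\lepton+m_\Q+m_\topq$, together with the observation that a
$2\leftrightarrow 2$ channel is always open at sufficiently high center-of-mass energy, whereas a
$1\leftrightarrow 3$ channel requires the single particle to be heavier than the other three
combined:

```python
from itertools import product

# Illustrative masses (arbitrary units) with M_N > m_l + m_Q + m_t
masses = {"N": 10.0, "l": 0.5, "Q": 0.5, "t": 1.0}

def is_open(initial, final):
    """0 <-> 4 'processes' are unphysical; 1 <-> 3 channels require the
    single particle to outweigh the other three; 2 <-> 2 scatterings are
    open at sufficiently high center-of-mass energy."""
    if not initial or not final:
        return False
    for single, rest in ((initial, final), (final, initial)):
        if len(single) == 1 and masses[single[0]] <= sum(masses[p] for p in rest):
            return False
    return True

# q and the Q momentum enter the delta-function with a plus sign, the l and t
# momenta with a minus sign; a flipped frequency moves a particle to the
# other side of the reaction.
names, sides = ["N", "Q", "l", "t"], [+1, +1, -1, -1]
count = 0
for signs in product([+1, -1], repeat=4):
    initial = [n for n, sg, sd in zip(names, signs, sides) if sg * sd > 0]
    final = [n for n in names if n not in initial]
    if is_open(initial, final):
        count += 1
print(count)  # 8 of the 2^4 = 16 terms survive
```

The eight surviving assignments are exactly the processes listed in the equation that follows.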
Using the relation
\begin{align}
\label{dispneg}
1 \pm \f{a}{}(t,-p)= \mp \f{\bar{a}}{} (t,p) \kend
\end{align}
where $f_{\bar{a}}$ denotes the distribution function of the antiparticles, we can then
recast \eqref{LeptCurrSelfEn} in the form:
\begin{align}
\label{lepcurSE}
\frac{s{\cal H}}{z}\frac{d Y_L}{dz} &= \ldots
\sum_i \int \! \! \dpi{N_i \lepton}{ \Q \topq }{\mommaj \momlep}{ \momQ \momtop}
\\
\times \Bigl(&\left[\F{N_i \Q}{\lepton \topq}{\mommaj \momQ}{ \momlep \momtop}
\EffAmpl{}{N_i \Q}{\lepton \topq}
-\F{N_i \bar \Q}{\bar \lepton \bar \topq}{\mommaj \momQ}{ \momlep \momtop}
\EffAmpl{}{N_i \bar\Q}{\lepton \bar\topq}
\right] \nonumber \\
+& \left[\F{N_i \bar \topq}{\lepton \bar \Q}{\mommaj \momtop}{ \momlep \momQ}
\EffAmpl{}{N_i \bar \topq}{\lepton \bar \Q}
-\F{N_i \topq}{\bar \lepton \Q}{\mommaj \momtop}{ \momlep \momQ}
\EffAmpl{}{N_i \topq}{\bar \lepton \Q}
\right] \nonumber \\
+& \left[
\F{N_i \bar \lepton}{\bar \Q \topq}{\mommaj \momlep}{ \momQ \momtop}
\EffAmpl{}{N_i \bar \lepton}{\bar \Q \topq}
-\F{N_i \lepton}{ \Q \bar\topq}{\mommaj \momlep}{ \momQ \momtop}
\EffAmpl{}{N_i \lepton}{ \Q \bar\topq} \right] \nonumber \\
+&\left[
\F{N_i}{\lepton \bar \Q \topq}{\mommaj}{\momlep \momQ \momtop}
\EffAmpl{}{N_i}{\lepton \bar \Q \topq}
-\F{N_i}{\bar \lepton \Q \bar \topq}{\mommaj}{\momlep \momQ \momtop}
\EffAmpl{}{N_i}{\bar \lepton \Q \bar \topq}
\right]
\Bigr) \nonumber\dend
\end{align}
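The relation \eqref{dispneg} can be checked directly for equilibrium distributions, recalling
that the antiparticle distribution $f_{\bar a}$ carries the opposite chemical potential (a
numerical sketch):

```python
import math

def f_bose(E, mu, T):
    """Bose-Einstein distribution with chemical potential."""
    return 1.0 / (math.exp((E - mu) / T) - 1.0)

def f_fermi(E, mu, T):
    """Fermi-Dirac distribution with chemical potential."""
    return 1.0 / (math.exp((E - mu) / T) + 1.0)

E, mu, T = 1.3, 0.2, 1.0
# Bosons (upper sign): 1 + f(-E) = -f_bar(E), with f_bar taken at mu -> -mu
lhs_b = 1.0 + f_bose(-E, mu, T)
rhs_b = -f_bose(E, -mu, T)
# Fermions (lower sign): 1 - f(-E) = +f_bar(E)
lhs_f = 1.0 - f_fermi(-E, mu, T)
rhs_f = f_fermi(E, -mu, T)
print(lhs_b - rhs_b, lhs_f - rhs_f)  # both differences vanish
```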
The effective scattering amplitudes in \eqref{lepcurSE} correspond to different assignments
for the sign of the four-momenta in \eqref{LeptCurrSelfEn}, reflecting the usual crossing symmetry.
For the tree-level and self-energy contributions to the effective scattering and decay amplitudes
we obtain:
\begin{subequations}
\label{EffAmplfact}
\begin{align}
\EffAmpl{T+S}{N_i \Q}{\lepton \topq}&= \EffAmpl{T+S}{N_i \bar \topq}{\lepton \bar \Q}\nonumber\\
&=\EffAmpl{T+S}{N_i}{\lepton \higgs}\Delta^2_{R+A}(\momtop-\momQ)\EffAmpl{}{\higgs\Q}{\topq} \kend \\
\EffAmpl{T+S}{N_i \bar \lepton}{\bar \Q \topq}&=
\EffAmpl{T+S}{N_i\bar \lepton}{ \higgs}\Delta^2_{R+A}(\momtop+\momQ)\EffAmpl{}{\higgs}{\bar\Q\topq}\kend\\
\EffAmpl{T+S}{N_i }{\lepton \bar \Q \topq} & =
\EffAmpl{T+S}{N_i}{\lepton \higgs}\Delta^2_{R+A}(\momtop+\momQ)\EffAmpl{}{\higgs}{\bar\Q\topq}\kend
\end{align}
\end{subequations}
and similar expressions for the \CP-conjugate ones. Note that $\EffAmpl{T+S}{N_i}{\lepton \higgs}$ and
$\EffAmpl{T+S}{N_i\bar \lepton}{ \higgs}$ are given
by the same expression since the \CP-violating loop term in \eqref{MajoranaSelfEnAmplitudes}
depends only on the momentum of the Majorana neutrino. In vacuum these scattering amplitudes
reduce to \eqref{NQLtAmplitude} and \eqref{NLQtAmplitude} respectively
but with the Feynman propagator $\pHiggs{2}{T}$ replaced by $\pHiggs{2}{R+A}$. In the latter
the contribution of the real intermediate state is subtracted by construction \cite{Frossard:2012pc}.
However, in the regime $m_\higgs<m_\Q+m_\topq$ the intermediate Higgs cannot be on-shell such that
the vacuum and in-medium amplitudes become numerically equal.
Since the amplitudes $\EffAmpl{}{\higgs}{\bar\Q\topq}$ and $\EffAmpl{}{\higgs\Q}{\topq}$ factorize
in \eqref{EffAmplfact} and are \CP-conserving, the self-energy \CP-violating parameter in these
processes is the same as in the Majorana decay, see \eqref{CPparamSEdecay}. However,
since the decay and scattering processes have different kinematics the averaged decay and
scattering \CP-violating parameters are not identical.
Next we consider the two-loop lepton self-energy \eqref{SEDMT}. Proceeding as above we
find for the divergence of the lepton current an expression of the form \eqref{lepcurSE} with
the amplitudes given by:
\begin{subequations}
\label{EffAmplvertex}
\begin{align}
\EffAmpl{V}{\majneutrino_i \Q}{\lepton \topq}&= \EffAmpl{V}{\majneutrino_i \bar \topq}{\lepton \bar \Q}\nonumber\\
&= \EffAmpl{V}{\majneutrino_i}{\lepton\higgs}\Delta^2_{R+A}(\momtop-\momQ)\EffAmpl{}{\higgs\Q}{\topq}\kend \\
\EffAmpl{V}{\majneutrino_i \bar \lepton}{\bar \Q \topq}&=\EffAmpl{V}{N_i \bar \lepton}{\higgs} \Delta^2_{R+A}(\momtop+\momQ)
\EffAmpl{}{\higgs}{\bar\Q\topq} \kend \\
\EffAmpl{V}{\majneutrino_i}{\lepton \bar \Q \topq} &= \EffAmpl{V}{N_i}{\lepton \higgs} \Delta^2_{R+A}(\momtop+\momQ)
\EffAmpl{}{\higgs}{\bar\Q\topq} \dend
\end{align}
\end{subequations}
Since the vertex contribution to the Majorana decay amplitude depends on the momentum of
the Higgs, the amplitude $\EffAmpl{V}{N_i}{\lepton \higgs}$ does not coincide with
$\EffAmpl{V}{N_i \bar \lepton}{\higgs}$ and we can define two inequivalent vertex \CP-violating
parameters \cite{Nardi:2007jp}. For the scattering processes
$\majneutrino_i \Q \leftrightarrow \lepton \topq$ and $\majneutrino_i \bar\topq \leftrightarrow \lepton \bar \Q$
as well as for the three-body decay $\majneutrino_i \leftrightarrow \lepton \bar \Q \topq$,
the \CP-violating parameter coincides with \eqref{MajoranaCPVertex} with the contributions
of the three possible cuts given by \eqref{cutlphi}, \eqref{cutNl} and \eqref{cutNphi}
respectively. For the $\majneutrino_i \bar \lepton \leftrightarrow \bar \Q \topq$ process,
the \CP-violating parameter still has the form \eqref{MajoranaCPVertex}, but since the
lepton is in the initial state the loop integral must be evaluated at $(\mommaj,-\momlep)$
instead of $(\mommaj,\momlep)$. For the first cut we obtain:
\begin{align}
\label{cutlphi1}
\momlep L^{\lepton \higgs}_V& (\mommaj,\momlep) = 16\pi \int
\lorentzd{\higgs \lepton}{\momhig_1 \momlep_1}
(2\pi)^4\delta(\mommaj-\momlep_1-\momhig_1) \nonumber\\
&\times (\momlep \momlep_1) \bigl[1+\f{\higgs}{\momhig_1}-\f{\lepton}{\momlep_1}\bigr]
\frac{M_j^2}{M_j^2-(\mommaj-\momlep_1+\momlep)^2} \dend
\end{align}
Contributions of the second and third cuts are given by:
\begin{align}
\label{cutNl1}
\momlep L^{\majneutrino_j\lepton}_V (\mommaj,\momlep)& =16\pi\int
\lorentzd{\majneutrino_j \lepton}{\mommaj_1 \momlep_1}
(2\pi)^4\delta(\mommaj+\momlep-\momlep_1-\mommaj_1) \nonumber\\
& \times(\momlep \momlep_1)\,\bigl[1-\f{\majneutrino_j}{\mommaj_1}-\f{\lepton}{\momlep_1}\bigr]
\frac{M_j^2}{(\mommaj-\momlep_1)^2-m_\higgs^2} \nonumber \\
&-16\pi\int \lorentzd{\majneutrino_j \lepton }{\mommaj_1\momlep_1}
(2\pi)^4\delta(\mommaj+\momlep+\momlep_1-\mommaj_1)\nonumber\\
&\times(\momlep \momlep_1)\,\bigl[\f{\majneutrino_j}{\mommaj_1}-\f{\lepton}{\momlep_1}\bigr]
\frac{M_j^2}{(\mommaj+\momlep_1)^2-m_\higgs^2} \kend
\end{align}
and by
\begin{align}
\label{cutNphi1}
\momlep L^{\majneutrino_j\higgs}_V & (\mommaj,\momlep) =16\pi\int
\lorentzd{\majneutrino_j \higgs}{\mommaj_1 \momhig_1}
(2\pi)^4\delta(\mommaj_1-\momlep-\momhig_1)\nonumber \\
&\times (\momlep\mommaj-\momlep\momhig_1)\bigl[\f{\higgs}{\momhig_1}+\f{\majneutrino_j}{\mommaj_1}\bigr]
\frac{M_j^2}{(\mommaj-\momhig_1)^2-m_\lepton^2} \kend
\end{align}
respectively. As follows from \eqref{cutlphi1} and \eqref{cutNl1}, the \CP-violating parameter
in the $\majneutrino_i \bar \lepton \leftrightarrow \bar \Q \topq$ scattering receives two
vacuum contributions \cite{Nardi:2007jp}. One is the usual cut through $\lepton \higgs$,
and the second one is given by the first term in the cut through $N_j \lepton$. The kinematics of the
second contribution corresponds to $N_i \lepton \leftrightarrow N_j \lepton$ $t$-channel
scattering and therefore requires the center-of-mass energy $\sqrt{s}$, with $s=(\mommaj+\momlep)^2$, to
exceed the sum of the final-state masses, $M_j+m_\lepton$, so that the contribution of this
term to the reaction density is suppressed for a hierarchical mass spectrum.
\section{\label{RateEquations} Rate equations}
Solving a system of Boltzmann equations in general requires the use of numerical codes capable
of treating large systems of stiff differential equations for the different momentum modes. This
is a difficult task if one wants to study a wide range of model parameters. A commonly employed
simplification is to approximate the Boltzmann equations by the corresponding system of `rate
equations' for the abundances $Y_a$. In \cite{HahnWoernle:2009qn} it was shown that the two
approaches, the Boltzmann and the rate equations, give approximately equal results for the final
asymmetry, up to corrections of order 10\%.
Starting from a quantum Boltzmann equation of the type \eqref{lepcurSE} we derive here the rate
equation for the lepton asymmetry which includes the (usually neglected) quantum statistical
factors. In our analysis we closely follow \cite{Frossard:2012pc}. The contribution of
various processes to the generation of the lepton asymmetry can be represented in the form:
\begin{align}
\label{genBoltz}
\mathcal{D}_\mu j^\mu & = \sum_{i, \{a\},\{j\}} \int \dpi{N_i a b \dotso}{ j k }{\mommaj
p_a p_b \dotso}{ p_j p_k \dotso} \nonumber\\
&\times \bigl[ \F{N_i a b \dotso}{j k \dotso}{q p_a p_b
\dotso}{p_j p_k \dotso} \EffAmpl{}{N_i a b \dotso}{j k \dotso}\nonumber\\
&\hspace{15mm} -\F{N_i \bar a \bar b \dotso}{\bar j \bar k \dotso}{q p_a p_b \dotso}{p_j p_k \dotso} \EffAmpl{}{N_i \bar a
\bar b \dotso}{\bar j \bar k \dotso}\bigr] \kend
\end{align}
compare with \eqref{lepcurSE}, where the sum runs over all possible particle states
with $\lepton \in \{j\}$ or $\bar{\lepton} \in \{a\}$. We assume
that the SM particles are maintained in kinetic equilibrium by the fast gauge interactions. This
means that their distribution function takes the form:
\begin{align}
f_a(t,E_a)=\bigl(e^{\frac{E_a-\mu_a}{T}} \mp 1\bigr)^{-1}\kend
\end{align}
with a time- (or temperature-) dependent chemical potential $\mu_a=\mu_a(t)$. Here the
upper (lower) sign corresponds to bosons (fermions). It is also useful to define the equilibrium
distribution function,
\begin{equation}
f_a^{eq}=\bigl(e^{E_a/T}\mp 1\bigr)^{-1} \dend
\end{equation}
The fast SM interactions relate chemical potentials of the leptons, quarks and the Higgs, such
that only one of them is independent. We can therefore express the chemical potential of the quarks
as a function of the lepton chemical potential \cite{Buchmuller:2005eh,Kartavtsev:2005rs,PhysRevD.42.3344},
\begin{equation}
\mu_\topq=\frac{5}{21}\mu_\lepton\equiv c_{\topq \lepton} \mu_\lepton \kend \quad \mu_\Q=
-\frac{1}{3}\mu_\lepton\equiv c_{\Q \lepton} \mu_\lepton \dend
\end{equation}
The chemical potentials of the antiparticles satisfy $\mu_{\bar a}=-\mu_a$. The lepton chemical potential
is related to the abundance by:
\begin{align}
\frac{\mu_\lepton}{T} \approx c_\lepton \frac{Y_L}{2Y_\lepton^{eq}} \kend
\end{align}
where $c_\lepton$ depends on the thermal mass of the lepton. For $m_\lepton/T \approx 0.2$ it can
be very well approximated by the zero mass limit, $c_\lepton \approx 9 \zeta(3)/\pi^2 \approx 1.1$.
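The zero-mass value $9\,\zeta(3)/\pi^2$, with $\zeta$ the Riemann zeta function, can be checked
in a couple of lines:

```python
import math

# zeta(3) via its rapidly converging defining series
zeta3 = sum(1.0 / n**3 for n in range(1, 2000))
c_l = 9.0 * zeta3 / math.pi**2
print(c_l)  # approximately 1.096
```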
Using the identity $1 \pm f_a=e^{(E_a-\mu_a)/T} f_a$ and energy conservation we can rewrite
the combinations of distribution functions appearing in \eqref{genBoltz} as:
\begin{align}
\label{Fsimplif}
&\F{N_i a b \dotso}{j k \dotso}{q p_a p_b \dotso}{p_j p_k \dotso}= \\
& (2\pi)^4\delta\left(q+{\textstyle\sum}_a p_a-{\textstyle\sum}_j p_j\right)
\frac{\prod_a f_a \prod_j(1\pm f_j)}{1-f_{N_i}^{eq}} \nonumber \\
&\times \bigl[ f_{N_i}-f_{N_i}^{eq}-f_{N_i}^{eq}(1-f_{N_i})\{ e^{\sum_j\mu_j/T-\sum_a \mu_a/T}-1\}\bigr]\kend \nonumber
\end{align}
where we have suppressed the momentum argument in the distribution functions for notational
convenience. We can then expand \eqref{Fsimplif} in the small chemical potential $\mu_a$.
Assuming the Majorana neutrino to be close to equilibrium, $f_{N_i}-f_{N_i}^{eq}\sim \mathcal{O}(\mu_a)$,
we see that the term in the square bracket in \eqref{Fsimplif} is already of the first order in the chemical
potential. We can therefore replace the distribution functions in the second line of \eqref{Fsimplif}
by the equilibrium ones,
\begin{align}
\frac{\prod_a f_a^{eq} \prod_j(1\pm f_j^{eq})}{1-f_{N_i}^{eq}}=&\prod_a f_a^{eq} \prod_j(1 \pm f_j^{eq})\nonumber \\
&+\prod_j f_j^{eq} \prod_a (1\pm f_a^{eq} ) \kend
\end{align}
and expand the exponential to first order in the chemical potential. Since we assume small
departure from equilibrium the Majorana distribution function that multiplies the chemical
potential should also be replaced by the equilibrium one. The corresponding equation for
the antiparticles is obtained from the above equation by replacing $\mu_a\rightarrow -\mu_a$.
The last step is to assume that the Majorana distribution function is proportional to its equilibrium value,
\begin{align}
f_{N_i} \approx \frac{Y_{N_i}(t)}{Y_{N_i}^{eq}(t)}f_{N_i}^{eq} \dend
\end{align}
Putting everything together we get the conventional form of the rate equation,
\begin{align}
\label{rateeq}
\frac{s{\cal H}}{z}\frac{d Y_L}{dz}&=
\sum_{i, \{a\},\{j\}}
\left[\langle \epsilon^{N_i a b \dotso}_{j k \dotso}
\gamma^{N_i a b \dotso}_{ j k \dotso}\rangle
\left(\frac{Y_{N_i}}{Y_{N_i}^{eq}}-1\right) \right.\nonumber \\
&-\left.\langle \gamma^{N_i a b \dotso}_{j k \dotso} \rangle
c_\lepton c_{a b \dotso \leftrightarrow j k \dotso}\frac{Y_L}{2Y_\lepton^{eq}}\right] \kend
\end{align}
where we have defined the production and washout reaction densities:
\begin{subequations}
\label{LeptonReactDens}
\begin{align}
\label{reaceps}
&\langle \epsilon^{N_i a b \dotso}_{j k \dotso}
\gamma^{N_i a b \dotso}_{j k \dotso}\rangle \equiv \nonumber \\
&\equiv \int \dpi{N_i a b \dotso}{j k \dotso }{\mommaj p_a p_b \dotso}{ p_j p_k \dotso}
(2\pi)^4 \delta\left(q+{\textstyle\sum}_a p_a-{\textstyle\sum}_j p_j\right) \nonumber\\
&\times \epsilon_{N_i a b \dotso \rightarrow j k \dotso} \left(\EffAmplitude{}{N_i a b \dotso \leftrightarrow j k \dotso}+
\EffAmplitude{}{N_i \bar a \bar b \dotso \leftrightarrow \bar j \bar k \dotso}\right) f_{N_i}^{eq} \nonumber \\
&\times \Bigl( \prod_a f_a^{eq} \prod_j(1 \pm f_j^{eq})+\prod_j f_j^{eq}
\prod_a (1\pm f_a^{eq} ) \Bigr)\kend \\
\label{WashoutScattering}
&\langle \gamma^{N_i a b \dotso}_{j k \dotso} \rangle \big|_W \equiv\nonumber\\
& \equiv \int
\dpi{N_i a b \dotso}{ j k \dotso}{\mommaj p_a p_b \dotso}{ p_j p_k \dotso}
(2\pi)^4\delta\left(q+{\textstyle\sum}_a p_a-{\textstyle\sum}_j p_j\right)\nonumber \\
& \times \left(\EffAmplitude{}{N_i a b \dotso \leftrightarrow j k \dotso}+
\EffAmplitude{}{N_i \bar a \bar b \dotso \leftrightarrow \bar j \bar k \dotso}\right)
f_{N_i}^{eq}(1-f_{N_i}^{eq}) \nonumber \\ &
\times \Bigl( \prod_a f_a^{eq}
\prod_j(1 \pm f_j^{eq})+\prod_j f_j^{eq} \prod_a (1\pm f_a^{eq} ) \Bigr)\kend
\end{align}
\end{subequations}
and the numerical factor,
\begin{equation}
\label{defcc}
c_{a b \dotso \leftrightarrow j k \dotso}\equiv\frac{\sum_j \mu_j-\sum_a \mu_a}{\mu_\lepton}\dend
\end{equation}
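For example, for the $\majneutrino_i \Q\leftrightarrow\lepton\topq$ channel ($a=\Q$,
$\{j,k\}=\{\lepton,\topq\}$, with the Majorana neutrino carrying no chemical potential) the
relations $\mu_\topq=\tfrac{5}{21}\mu_\lepton$ and $\mu_\Q=-\tfrac13\mu_\lepton$ give a simple
rational number (a quick check):

```python
from fractions import Fraction

c_tl = Fraction(5, 21)   # mu_t / mu_l
c_Ql = Fraction(-1, 3)   # mu_Q / mu_l

# c_{Q <-> l t} = (mu_l + mu_t - mu_Q) / mu_l
c = 1 + c_tl - c_Ql
print(c)  # 11/7
```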
Equation \eqref{rateeq} must be supplemented by an equation for the heavy neutrino abundance,
\begin{align}
\label{majneutab}
\frac{s{\cal H}}{z}&\frac{d Y_{N_i}}{dz}= -\sum_{\{a\},\{j\}} \langle \gamma^{N_i a b \dotso}_{j k \dotso}
\rangle \big|_P \left(\frac{Y_{N_i}}{Y_{N_i}^{eq}}-1\right) \kend
\end{align}
with the reaction density given by an expression similar to \eqref{WashoutScattering}:
\begin{align}
\label{MajReactDens}
&\langle \gamma^{N_i a b \dotso}_{j k \dotso} \rangle \big|_P \equiv\nonumber\\
& \equiv \int
\dpi{N_i a b \dotso}{ j k \dotso}{\mommaj p_a p_b \dotso}{ p_j p_k \dotso}
(2\pi)^4\delta\left(q+{\textstyle\sum}_a p_a-{\textstyle\sum}_j p_j\right)\nonumber \\
& \times \left(\EffAmplitude{}{N_i a b \dotso \leftrightarrow j k \dotso}+
\EffAmplitude{}{N_i \bar a \bar b \dotso \leftrightarrow \bar j \bar k \dotso}\right)
f_{N_i}^{eq} \nonumber \\ &
\times \Bigl( \prod_a f_a^{eq}
\prod_j(1 \pm f_j^{eq})+\prod_j f_j^{eq} \prod_a (1\pm f_a^{eq} ) \Bigr)\dend
\end{align}
Note that these expressions are valid for two-body scattering processes with a Majorana
neutrino in the initial or final state as well as for Majorana (inverse) decay processes.
If the quantum-statistical corrections are neglected, i.e. if the $1\pm \f{}{}$ terms are
replaced by unity and the equilibrium fermionic and bosonic distributions are approximated
by the Maxwell-Boltzmann one, then \eqref{WashoutScattering} and \eqref{MajReactDens} are
equal. For the case of a $2\leftrightarrow 2$ scattering process they read:
\begin{align}
\label{WashoutScatteringMB}
\langle \gamma^{N_i a}_{j k} \rangle & \equiv
\int
\dpi{N_i a}{ j k }{\mommaj p_a}{ p_j p_k} (2\pi)^4\delta (q+p_a-p_j-p_k)\nonumber \\
& \times \bigl(\EffAmplitude{}{N_i a \leftrightarrow j k}+
\EffAmplitude{}{N_i \bar a \leftrightarrow \bar j \bar k}\bigr)
f_{N_i}^{eq} f_a^{eq}\dend
\end{align}
Part of the integrations in \eqref{WashoutScatteringMB} can be performed analytically and we obtain:
\begin{align}
\label{ReactDensCanon}
\langle \gamma^{N_i a}_{j k}\rangle & \approx \frac{T}{64\pi^4}\int\limits_{s_{min}}^\infty ds
\sqrt{s}K_1\left(\frac{\sqrt{s}}{T}\right) \hat{\sigma}^{N_i a }_{j k }(s)\dend
\end{align}
Here $s_{min}=(M_i+m_a)^2$ (assuming $M_i+m_a>m_j+m_k$) and $\hat \sigma(s)$ is the so-called reduced cross-section:
\begin{align}
\label{eqn:reduced cross section definition main text}
\hat \sigma^{N_i a }_{j k } \equiv \frac{1}{8\pi}\int\limits_0^{2\pi}
\frac{d\varphi_{ai}}{2\pi}
\int\limits_{t^-}^{t^+} \frac{dt}{s}\,
\bigl(\EffAmplitude{}{N_i a \leftrightarrow j k }{} +
\EffAmplitude{}{N_i \bar a \leftrightarrow \bar j \bar k }{}\bigr)
\kend
\end{align}
where $s$ and $t$ are the usual Mandelstam variables. The integration limits are given by
\cite{Frossard:2012pc}:
\begin{align}
\label{eqn: t range}
t^\pm&=M_i^2+m_j^2\nonumber\\
&-\frac{s}{2}\bigl[(1+M_i^2/s-m_a^2/s)(1+m_j^2/s-m^2_k/s)\nonumber\\
&\mp \lambda^\frac12(1,M_i^2/s,m_a^2/s)\lambda^\frac12(1,m_j^2/s,m_k^2/s)\bigr]\kend
\end{align}
where $\lambda(x,y,z)\equiv x^2+y^2+z^2-2xy-2xz-2yz$ is the usual kinematical function.
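The limits \eqref{eqn: t range} and their massless-SM simplification can be verified with a few
lines (a sketch with illustrative numbers):

```python
import math

def kallen(x, y, z):
    """Kallen kinematical function lambda(x, y, z)."""
    return x*x + y*y + z*z - 2*(x*y + x*z + y*z)

def t_limits(s, Mi, ma, mj, mk):
    """Integration limits t^+/- for the process N_i a -> j k."""
    A = (1 + Mi**2/s - ma**2/s) * (1 + mj**2/s - mk**2/s)
    B = math.sqrt(kallen(1, Mi**2/s, ma**2/s) * kallen(1, mj**2/s, mk**2/s))
    tp = Mi**2 + mj**2 - s/2 * (A - B)
    tm = Mi**2 + mj**2 - s/2 * (A + B)
    return tp, tm

# Massless SM states: the limits reduce to t^+ = 0, t^- = -(s - M_i^2)
Mi, s = 1.0, 4.0
tp, tm = t_limits(s, Mi, 0.0, 0.0, 0.0)
print(tp, tm)  # 0.0 and -3.0
```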
If effective thermal masses of the SM particles are neglected then the integration limits
simplify to $t^+=0$ and $t^-=-(s-M_i^2)$. Integrating \eqref{NQLtAmplitude} and
\eqref{NLQtAmplitude} over $t$ and neglecting the effective masses of the initial and final
lepton and quarks we obtain the standard expressions (see, e.g. \cite{Luty:1992un,Plumacher:1998ex})
for the reduced cross-sections of the Higgs-mediated scattering processes:
\begin{subequations}
\label{VacuumReducedCrs}
\begin{align}
\label{NQLtCanRedAmpl}
\hat{\sigma}^{N_i \Q}_{\lepton\topq}&=\hat{\sigma}^{N_i \bar\topq}_{\lepton\bar\Q}=\frac{g_w g_s}{4\pi}
(\yu^\dagger \yu)_{ii}\, \yuqSqu\,\frac{x-a_i}{x}\\
&\times\left[\frac{x-2a_i+2a_\higgs}{x-a_i+a_\higgs}
+\frac{a_i-2a_\higgs}{x-a_i}\ln\left(\frac{x-a_i+a_\higgs}{a_\higgs}\right)\right]\kend\nonumber\\
\label{NLQtCanRedAmpl}
\hat{\sigma}^{N_i \bar\lepton}_{\bar\Q\topq}&=\frac{g_w g_s}{4\pi}
(\yu^\dagger \yu)_{ii}\yuqSqu \frac{(x-a_i)^2}{(x-a_\higgs)^2}\kend
\end{align}
\end{subequations}
where we have replaced $s$ by $x\equiv s/M_1^2$ and introduced dimensionless quantities
$a_i\equiv M_i^2/M_1^2$ and $a_\higgs\equiv m_\higgs^2/M_1^2$. Combined with \eqref{ReactDensCanon},
expressions \eqref{VacuumReducedCrs} give the conventional reaction densities of the
Higgs-mediated scattering processes.
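As an illustration of how \eqref{ReactDensCanon} and \eqref{VacuumReducedCrs} combine, the
sketch below evaluates the $\majneutrino_1\bar\lepton\leftrightarrow\bar\Q\topq$ reaction
density as a function of $z=M_1/T$ for massless external SM states; the coupling and
$a_\higgs$ values are illustrative placeholders, not fitted parameters:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# Illustrative couplings and mass ratio (placeholders, not fitted values)
gw, gs = 2.0, 6.0          # internal degrees of freedom
hh, yt2 = 1e-6, 1.0        # (h^dagger h)_11 and |y_t|^2
ah = 1e-4                  # a_phi = m_phi^2 / M_1^2

def sigma_hat_NLQt(x):
    """Reduced cross-section for N_1 lbar <-> Qbar t with x = s / M_1^2."""
    return gw * gs / (4 * np.pi) * hh * yt2 * (x - 1.0)**2 / (x - ah)**2

def gamma(z):
    """<gamma> / M_1^4 as a function of z = M_1 / T; after substituting
    s = x M_1^2 the reaction-density integral becomes
    (1 / (64 pi^4 z)) * int dx sqrt(x) K_1(z sqrt(x)) sigma_hat(x)."""
    integrand = lambda x: np.sqrt(x) * kv(1, z * np.sqrt(x)) * sigma_hat_NLQt(x)
    return quad(integrand, 1.0, np.inf)[0] / (64 * np.pi**4 * z)

print(gamma(1.0), gamma(10.0))  # Boltzmann suppression at large z (low T)
```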
Since in the conventional approach the \CP-violating parameter is calculated in vacuum,
it is momentum-independent and can therefore be taken out of the integral. The \CP-violating
reaction densities are thus proportional to the washout ones:
\begin{align}
\langle \epsilon^{N_i a}_{jk} \gamma^{N_i a}_{jk}\rangle&=\epsilon_i^{vac}\langle \gamma^{N_i a}_{jk}\rangle\kend
\end{align}
where we have again assumed a strongly hierarchical mass spectrum of the heavy neutrinos.
When the medium corrections are taken into account the \CP-violating parameter depends on the
momenta of the initial and final states and this simple relation is violated.
\section{\label{Numerics}Numerical results}
To illustrate the effect of the quantum-statistical corrections and effective thermal
masses we present in this section ratios of the reaction densities to the conventional ones
assuming a strongly hierarchical mass spectrum of the Majorana neutrinos.
Let us first consider the scattering processes.
Ratios of the improved reaction densities to the conventional ones are presented
in \fig\ref{RatioWashoutReactDens}.
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{NQandLt}
\caption{\label{RatioWashoutReactDens} Ratios of the scattering reaction densities
obtained taking into account the thermal masses (dashed lines) and the thermal
masses plus quantum-statistical effects (solid lines) to the conventional ones.
The thick solid lines correspond to \eqref{WashoutScattering} whereas the thin
ones to \eqref{MajReactDens}.}
\end{figure}
Note that the Majorana (as well as the quark) Yukawa couplings cancel out in these ratios
and for this reason we do not specify them here. The dashed lines show the ratio of the
reaction density computed using \eqref{ReactDensCanon}--\eqref{eqn: t range}, i.e. taking
into account the effective thermal masses but neglecting the quantum-statistical corrections,
to the conventional ones. For the $\majneutrino_i\bar\lepton\leftrightarrow \bar\Q\topq$
process (dashed red line) the effective
masses decrease the available phase space and lead to a suppression of
the reaction density in the whole range of temperatures. Note that the ratio does not
approach unity at low temperatures. Qualitatively this behavior can be understood from
\eqref{WashoutScatteringMB}. Let us assume for a moment that the scattering amplitude is
momentum-independent. The reaction density at low temperatures can then be estimated by
evaluating the distribution functions at the average momenta $\langle p_i\rangle$ and
$\langle p_a\rangle\sim 3T$. In the ratio of the reaction densities the Majorana
distribution function cancels out and
\begin{align*}
\frac{\langle \gamma^{X}_{Y}\rangle_{MB,m\neq 0}}{\langle \gamma^{X}_{Y}\rangle_{MB,m=0}}
&\approx \frac{\exp(-E_a/T)}{\exp(-\langle p_a\rangle/T)} \approx \exp\bigl(-m_a^2/(2\langle p_a\rangle T)\bigr)\dend
\end{align*}
A more accurate estimate for the ratio of $\langle \gamma^{X}_{Y}\rangle_{MB,m\neq 0}$ and
$\langle \gamma^{X}_{Y}\rangle_{MB,m=0}$ is $\sim \exp(-m_a^2/T^2)$. Since to a good
approximation $m_a \propto T$ we conclude that this ratio is a constant smaller than unity.
In other words, despite the fact that at low temperatures the quark masses become small compared to the
Majorana mass, this ratio is \textit{not} expected to approach unity as the temperature decreases. Note also
that (in very good agreement with the numerical cross-check) this ratio does not depend on the masses
of the final states. Of course, the momentum dependence of the scattering amplitude somewhat changes the low-temperature
behavior of the reaction density. Interestingly enough, for the $\majneutrino_i\Q\leftrightarrow\lepton\topq$
process the inclusion of the thermal masses actually enhances the reaction density at high temperatures
(dashed blue line). This occurs because the induced increase of the amplitude turns out to be larger
than the phase-space suppression. At low temperatures the effective masses become negligible in the
scattering amplitude but still play an important role in the kinematics. As a result, the ratio becomes
smaller than unity and continues to decrease as the temperature decreases. Let us note that for a
(moderately) strong washout regime most of the asymmetry is typically produced at $z \lesssim 10$
and the low-temperature behavior of the reaction densities does not affect the generation of the
asymmetry. Since all particles in the initial and final states are fermions the quantum-statistical
effects further suppress the reaction densities (solid blue and red lines) and render the ratio of the
improved and conventional reaction densities smaller than unity for both $\majneutrino_i\Q\leftrightarrow\lepton\topq$
and $\majneutrino_i\bar \lepton\leftrightarrow \bar \Q\topq$ in the whole range of temperatures.
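The temperature independence of the low-temperature estimate above can be made explicit with a short numerical sketch (Python; the thermal-mass coefficient $m_a=0.4\,T$ and the average momentum $\langle p_a\rangle = 3T$ are illustrative values, not fitted numbers):

```python
import math

def mb_ratio(T, c_mass=0.4, p_avg_over_T=3.0):
    """Maxwell-Boltzmann estimate of <gamma>_{m!=0}/<gamma>_{m=0},
    with an illustrative thermal mass m_a = c_mass*T and <p_a> ~ 3T."""
    m_a = c_mass * T
    p_avg = p_avg_over_T * T
    return math.exp(-m_a**2 / (2.0 * p_avg * T))

# Since m_a is proportional to T, the suppression factor is a
# temperature-independent constant smaller than unity:
ratios = [mb_ratio(T) for T in (0.1, 1.0, 10.0, 100.0)]
assert all(abs(r - ratios[0]) < 1e-12 for r in ratios)
assert ratios[0] < 1.0
```

Since $m_a\propto T$, the exponent $-m_a^2/(2\langle p_a\rangle T)$ is a pure number, which is precisely why the ratio does not approach unity at low temperatures.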
Ratios of the improved \CP-violating scattering reaction densities to the conventional ones are
presented in \fig\ref{RatioCPReactDens}.
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{NQandLtCP}
\caption{\label{RatioCPReactDens} Ratio of the \CP-violating reaction densities to the
ones computed using Boltzmann statistics and neglecting the thermal masses of the initial
and final states.}
\end{figure}
For both scattering processes the improved \CP-violating reaction densities are enhanced at high
temperatures. This is explained by the enhancement of the \CP-violation in the Majorana decay observed
in \cite{Garny:2010nj,Frossard:2012pc}. At intermediate temperatures the relative enhancement of the \CP-violating
parameters gets smaller and is overcompensated by the effective mass and Fermi-statistics induced
suppression of the washout reaction densities that we have observed in \fig\ref{RatioWashoutReactDens}.
The low-temperature behavior is somewhat different for the two scattering processes. For the
$N_i \bar\lepton \leftrightarrow \bar\Q \topq$ process the effective mass and quantum-statistical
effects get smaller in both the (unintegrated) \CP-violating parameter and the washout reaction
density, such that the ratio of the \CP-violating reaction density to the
conventional one slowly approaches a constant value.
On the other hand, for the $N_i \Q\leftrightarrow\lepton \topq$ process the suppression of the
washout reaction density induced by the effective masses of the initial and final states that
we observed in \fig\ref{RatioWashoutReactDens} also leads to a suppression of the \CP-violating
reaction density that gets stronger at low temperatures.
Next we consider the three-body decay.
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{NtoLQt}
\caption{\label{RatioDecayReactDens} Ratio of the $\majneutrino_i \leftrightarrow \lepton \bar\Q\topq$
decay reaction density obtained taking into account effective thermal masses and quan\-tum-statistical
effects to the conventional one computed taking into account only the effective thermal masses of the
final and intermediate states. The thick solid line corresponds to \eqref{LeptonReactDens} whereas
the thin one to \eqref{MajReactDens}.}
\end{figure}
Neglecting the quantum-statistical effects and using the vacuum
approximation for the $\majneutrino_i \leftrightarrow \lepton \bar\Q\topq$ decay amplitude in
\eqref{LeptonReactDens} and \eqref{MajReactDens} we recover the conventional expression for the decay
reaction density:
\begin{align}
\label{DecayReactDensCanon}
\langle \gamma^{N_i}_{\lepton\bar\Q\topq}\rangle & \approx
\frac{g_N}{2\pi^2}T M_i^2 \Gamma_{N_i\rightarrow \lepton\bar\Q\topq}
K_1(M_i/T)\dend
\end{align}
Note that it is important to retain the effective thermal masses of the quarks in the calculation
of $\Gamma_{N_i\rightarrow \lepton\bar\Q\topq}$. The four-momentum of the intermediate
Higgs, see \fig\ref{fig:Ampl_N_lQt}, varies in the range $(m_\Q+m_\topq)^2\leqslant \momhig^2 \leqslant(M_i-m_\lepton)^2$.
The relation $m_\higgs < m_\Q+m_\topq$, which is fulfilled in the SM, ensures that the intermediate
Higgs remains off-shell and prevents a singularity in the Higgs propagator.
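For orientation, \eqref{DecayReactDensCanon} can be evaluated with elementary tools. The sketch below (Python) implements $K_1$ via its integral representation $K_1(z)=\int_0^\infty e^{-z\cosh t}\cosh t\,dt$ and checks the expected Boltzmann suppression of the reaction density at $T\ll M_i$; the values of $M_i$ and $\Gamma_{N_i\rightarrow \lepton\bar\Q\topq}$ are placeholders, not numbers used in the paper.

```python
import math

def bessel_k1(z, n=4000, t_max=30.0):
    """K_1(z) from its integral representation (trapezoidal rule)."""
    h = t_max / n
    total = 0.5 * (math.exp(-z) +
                   math.cosh(t_max) * math.exp(-z * math.cosh(t_max)))
    for i in range(1, n):
        t = i * h
        total += math.cosh(t) * math.exp(-z * math.cosh(t))
    return total * h

def decay_reaction_density(T, M_i, Gamma, g_N=2.0):
    """Conventional decay reaction density, eq. (DecayReactDensCanon)."""
    return g_N / (2.0 * math.pi**2) * T * M_i**2 * Gamma * bessel_k1(M_i / T)

# sanity check of the K1 implementation against K1(1) ~ 0.60191
assert abs(bessel_k1(1.0) - 0.6019) < 1e-3
# Boltzmann suppression: the density falls steeply once T drops below M_i
M_i, Gamma = 1.0, 1e-3  # illustrative values only
assert decay_reaction_density(0.1, M_i, Gamma) < decay_reaction_density(1.0, M_i, Gamma)
```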
The ratio of the reaction density computed taking into account the quantum-statistical effects
and effective masses to the one computed taking into account only the effective masses is
presented in \fig\ref{RatioDecayReactDens}.
Note that since $m_\Q\approx m_\topq \approx 0.4\, T$
and $m_\lepton \approx 0.2\, T$ this three-body decay is kinematically allowed only at $T \lesssim M_i$.
As one would expect, at high temperatures the fermionic nature of the initial and final states
leads to a suppression as compared to the Boltzmann approximation. At low temperatures the
quantum-statistical effects play no role and the ratio slowly approaches unity.
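The kinematic threshold quoted above is simple arithmetic: with $m_\Q \approx m_\topq \approx 0.4\, T$ and $m_\lepton \approx 0.2\, T$ the total final-state mass is $(0.4+0.4+0.2)\,T = T$, so the decay is open only for $T\lesssim M_i$. A minimal sketch (Python; the coefficients are the approximate values quoted in the text):

```python
def three_body_open(M_i, T, c_Q=0.4, c_t=0.4, c_l=0.2):
    """Is N_i -> l qbar t kinematically allowed at temperature T?
    Thermal masses are modeled as m = c*T, with coefficients from the text."""
    return M_i > (c_l + c_Q + c_t) * T

M_i = 1.0  # arbitrary units
assert three_body_open(M_i, T=0.5 * M_i)      # T < M_i: decay open
assert not three_body_open(M_i, T=2.0 * M_i)  # T > M_i: decay closed
```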
The ratio of the \CP-violating reaction density for the $N_i \leftrightarrow \lepton\bar\Q\topq$ process
is presented in \fig\ref{RatioCPDecayReactDens}.
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{NtoLQtCP}
\caption{\label{RatioCPDecayReactDens} Ratio of the \CP-violating reaction density of the
$\majneutrino_i \leftrightarrow \lepton \bar\Q\topq$ process obtained taking into account
effective thermal masses and quantum-statistical effects to the ones
computed taking into account only the effective thermal masses.}
\end{figure}
At high and intermediate temperatures the medium
enhancement of the (unintegrated) \CP-violating parameter is overcompensated by the suppression of
the washout decay reaction density that we have observed in \fig\ref{RatioDecayReactDens}.
At low temperatures the effective mass and quantum-statistical effects get smaller in both
the (unintegrated) \CP-violating parameter and the washout reaction density, such that the
\CP-violating reaction density slowly approaches the conventional one.
To conclude this section we present the ratio of the three-body decay and $2\leftrightarrow 2$
scattering processes to the reaction density of $\majneutrino_i \leftrightarrow \lepton\higgs$
process, see \fig\ref{ScatToDecay}.
\begin{figure}[ht]
\includegraphics[width=0.95\columnwidth]{ScatToDecay}
\caption{\label{ScatToDecay} Ratio of the washout scattering and three-body decay reaction densities
to the reaction density of $\majneutrino_i \leftrightarrow \lepton\higgs$ process. The dashed lines
denote the ratios of the conventional reaction densities, the thin solid lines the ratios
computed taking into account only the effective masses in all the reaction densities, and finally the thick solid lines the
ratios computed taking into account the effective masses and quantum-statistical corrections
in all the reaction densities. The reaction density $\bigl\langle \gamma^{N_i}_{\ell\phi} \bigr\rangle_{W}$
is computed using \eqref{EffAmplMajDecay} and the definition \eqref{WashoutScattering}, see also \cite{Frossard:2012pc}.}
\end{figure}
As can be inferred from this plot, the three-body decay is subdominant in the whole range of
temperatures and can be safely neglected. The inclusion of the effective masses has a very
similar effect on the two-body decay and scattering reaction densities, so that the ratios
of the two are almost unchanged as compared to those computed in the massless approximation.
The inclusion of the quantum-statistical corrections has a stronger effect on the scattering
processes such that the ratio of the reaction densities is smaller than the ratio of the
conventional ones. Let us also note that the scattering processes are very important at high
temperatures but become subdominant at low temperatures.
\section{\label{Summary}Summary}
In this work we have studied $\Delta L=1$ decay and scattering processes mediated by the Higgs
with quarks in the initial and final states using the formalism of non-equilibrium quantum field
theory.
Starting from the Kadanoff-Baym equations for the lepton propagator we have derived the
corresponding qu\-a\-ntum-corrected Boltzmann and rate equations for the total lepton asymmetry.
As compared to the canonical ones the latter are free of the notorious double-counting
problem and ensure that the asymmetry automatically vanishes in thermal equilibrium. To
compute the collision term we have taken into account one- and two-loop contributions
to the lepton self-energy and used the extended quasiparticle approximation for the Higgs
propagator. The impact of the SM gauge interactions on the collision term has been approximately
taken into account in the form of effective thermal masses of the Higgs, leptons and quarks.
We find that the inclusion of the effective masses and quantum-statistical terms
suppresses the washout reaction densities of the decay and scattering processes with
respect to the conventional ones, where these effects are neglected, in the whole
relevant range of temperatures. For the $N_i \bar\lepton \leftrightarrow \bar\Q \topq$ process
the ratio of the improved and conventional washout reaction densities slowly approaches a constant
value close to unity at low temperatures. Interestingly enough, for the $N_i \Q\leftrightarrow \lepton \topq$
process this ratio decreases even at low temperatures. Finally, for the $N_i\leftrightarrow \lepton\bar\Q \topq$
process the ratio slowly approaches unity at low temperatures. As far as the \CP-violating reaction densities
are concerned, we find that for the scattering processes the ratio of the improved and
the conventional ones is greater than unity at high temperatures but is smaller than
unity at intermediate and low temperatures because of the thermal masses and
quantum-statistical effects. For the three-body decay this ratio is smaller than
unity in the whole relevant range of temperatures.
We expect that the effects studied here can induce a $\mathcal{O}(10 \%)$ correction to the total
generated asymmetry. For a detailed phenomenological analysis it is necessary to include
further phenomena such as flavour effects and processes with gauge bosons in the initial
and final states.
\subsection*{Acknowledgements}
\noindent The work of A.K. has been supported by the German Science Foundation (DFG)
under Grant KA-3274/1-1 ``Systematic analysis of baryogenesis in non-equilibrium quantum
field theory''. T.F. acknowledges support by the IMPRS-PTFS. We thank A.~Hohenegger for
useful discussions.
\begin{appendix}
\section{\label{HiggsSE}Higgs self-energy}
The top quark contribution to the Higgs self-energy is derived from the 2PI effective action. At
one-loop the contribution of the top quark is given by:
\begin{align}
i\Gamma_2 = g_s \yuqSqu \int_\mathcal{C} \! d^4u d^4v &
\tr \left[ \pQ{}{ba}(v,u)P_R \pTopq{}{}(u,v) \right] \nonumber \\
&\times \epsilon^*_{bc} \pHiggs{}{cd}(v,u) \epsilon_{da}^T\kend
\end{align}
where the factor $g_s=3$ comes from the summation over color indices and $\epsilon=i\sigma_2$.
In a $SU(2)_L$ symmetric state the Higgs and lepton propagators are proportional to the identity
in the $SU(2)_L$ space, and so is the Higgs self-energy,
\begin{align}
\sHiggs{}{ab}&(x,y)\equiv \sHiggs{}{}(x,y) \delta_{ab}=\frac{\delta\, i\Gamma_2 }{\delta\pHiggs{}{ba}(y,x)} \nonumber \\
&=g_s \yuqSqu \tr \left[ \pQ{}{}(y,x)P_R \pTopq{}{}(x,y)P_L \right] \delta_{ab} \dend
\end{align}
Its Wightman components are given by,
\begin{align}
\sHiggs{}{\gtrless}(x,y)=g_s \yuqSqu \tr
\bigl[ \pQ{}{\lessgtr}(y,x)P_R \pTopq{}{\gtrless}(x,y)P_L \bigr]\dend
\end{align}
Finally, performing a Wigner transform of the above equation, we find,
\begin{align}
\sHiggs{}{\gtrless}(t,k)&=g_s\yuqSqu \int
\lorentzdd{\momQ \momtop} (2\pi)^4 \delta(k-\momtop+\momQ)\nonumber \\
& \times \tr \bigl[\pQ{}{\lessgtr}(t,\momQ)P_R \pTopq{}{\gtrless}(t,\momtop)P_L \bigr]\dend
\end{align}
\section{\label{kinematics}Reaction density of \texorpdfstring{$1\rightarrow 3$}{1->3} decay}
For $\majneutrino_i\rightarrow \lepton \bar \Q \topq$ decay the general expression \eqref{MajReactDens}
takes the form:
\begin{align}
\bigl\langle \gamma^{N_i}_{\lepton \bar \Q \topq}\bigr\rangle & =
\int \dpi{N_i }{\lepton \bar \Q \topq}{\mommaj}{\momlep\momQ\momtop} (2\pi)^4 \delta(\mommaj-\momlep-\momQ-\momtop)\\
&\times\EffAmplitude{}{\majneutrino_i\rightarrow \lepton\higgs}\times \pHiggs{2}{R+A}(\momQ+\momtop)\times
\EffAmplitude{}{\higgs\rightarrow \bar\Q\topq}\nonumber\\
& \times \f{\majneutrino_i}{eq}\bigl[
\qstatff{\lepton}{eq}\qstatff{\bar\Q}{eq}\qstatff{\topq}{eq}+\f{\lepton}{eq}\f{\bar\Q}{eq}\f{\topq}{eq}
\bigr]\kend\nonumber
\end{align}
where we have used the explicit form of the decay amplitude \eqref{NtoLQtAmplitude}. To reduce it to a
form suitable for the numerical analysis we insert an identity:
\begin{align}
1=\int ds \int d^4 \momhig\, \delta(\momQ+\momtop-\momhig)\delta_+(\momhig^2-s)\kend
\end{align}
where $\momhig$ is the four-momentum of the intermediate Higgs. Approximating furthermore $\pHiggs{2}{R+A}$
by $\pHiggs{2}{T}$ we can rewrite the reaction density in the form:
\begin{align}
\label{NtoLQtReactDensPrel}
\bigl\langle \gamma^{N_i}_{\lepton \bar \Q \topq}\bigr\rangle & =
\int \dpi{N_i}{}{\mommaj}{} \,\f{\majneutrino_i}{eq}\,\int\frac{ds}{2\pi} \pHiggs{2}{T}(s)\\
&\times \int \dpi{}{\lepton \higgs}{}{\momlep\momhig} \,
(2\pi)^4 \delta(\mommaj-\momlep-\momhig) \EffAmplitude{}{\majneutrino_i\rightarrow \lepton\higgs}\nonumber\\
&\times \int \dpi{}{\bar \Q \topq}{}{\momQ\momtop} (2\pi)^4 \delta(\momhig-\momQ-\momtop)\,
\EffAmplitude{}{\higgs\rightarrow \bar\Q\topq}\nonumber\\
& \times \bigl[
\qstatff{\lepton}{eq}\qstatff{\bar\Q}{eq}\qstatff{\topq}{eq}+\f{\lepton}{eq}\f{\bar\Q}{eq}\f{\topq}{eq}
\bigr]\dend\nonumber
\end{align}
Note that in the regime $m_\higgs<m_\Q+m_\topq$ realized in the considered
case the Higgs is always off-shell and its width can be neglected in $\pHiggs{}{T}$. For the
second line in \eqref{NtoLQtReactDensPrel} we can use
\cite{Frossard:2012pc}:
\begin{align}
\label{ellphiintegral}
\int \dpi{}{\lepton \higgs}{}{\momlep\momhig} (2\pi)^4& \delta(\mommaj-\momhig-\momlep)\nonumber\\
& \rightarrow \frac{1}{8\pi\,|\vec \mommaj|} \int\limits_{E^{-}_\momlep}^{E^{+}_\momlep}dE_\momlep
\int\limits_0^{2\pi}\frac{d\varphi}{2\pi}\dend
\end{align}
The integration limits are given by
\begin{align}
\label{eqn: decay integration limits}
E^\pm_\momlep =
{\textstyle\frac12}\bigl[E_\mommaj(1+x_\lepton-x_\higgs)
\pm |\vec \mommaj| \lambda^\frac12(1,x_\lepton,x_\higgs)\bigr]\kend
\end{align}
where $x_\lepton\equiv m_\lepton^2/M_i^2$, $x_\higgs\equiv s/M_i^2$ and
$\lambda(x,y,z)\equiv x^2+y^2+z^2-2xy-2xz-2yz$ is the usual kinematical function.
For the third line we can use a similar expression with $x_\Q=m_\Q^2/s$ and
$x_\topq=m_\topq^2/s$.
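The limits \eqref{eqn: decay integration limits} can be coded directly. The sketch below (Python) checks two elementary properties: $E^-_\momlep \leqslant E^+_\momlep$, and in the Majorana rest frame ($|\vec\mommaj|=0$) both limits collapse to the fixed two-body energy $E_\momlep = \tfrac{M_i}{2}(1+x_\lepton-x_\higgs)$. The numerical mass values are arbitrary illustrations.

```python
import math

def kallen(x, y, z):
    """Usual kinematical function lambda(x,y,z)."""
    return x*x + y*y + z*z - 2*x*y - 2*x*z - 2*y*z

def lepton_energy_limits(E_q, q3, M_i, m_l, s):
    """Limits E^{+-}_l of eq. (decay integration limits), with
    x_l = m_l^2/M_i^2 and x_phi = s/M_i^2."""
    x_l, x_phi = m_l**2 / M_i**2, s / M_i**2
    lam = math.sqrt(kallen(1.0, x_l, x_phi))
    e_plus = 0.5 * (E_q * (1.0 + x_l - x_phi) + q3 * lam)
    e_minus = 0.5 * (E_q * (1.0 + x_l - x_phi) - q3 * lam)
    return e_minus, e_plus

M_i, m_l, s = 10.0, 1.0, 4.0  # illustrative values
# moving frame: a nonzero integration window
lo, hi = lepton_energy_limits(12.0, math.sqrt(12.0**2 - M_i**2), M_i, m_l, s)
assert lo < hi
# rest frame: both limits collapse to the two-body decay energy
lo0, hi0 = lepton_energy_limits(M_i, 0.0, M_i, m_l, s)
assert abs(lo0 - hi0) < 1e-12
assert abs(lo0 - 0.5 * M_i * (1.0 + m_l**2/M_i**2 - s/M_i**2)) < 1e-12
```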
Expressed in terms of the integration variables the amplitudes take the form:
\begin{subequations}
\begin{align}
\EffAmplitude{}{\majneutrino_i\rightarrow\lepton \higgs}
& = g_w (h^\dagger h)_{ii} (M_i^2+m_\lepton^2-s)\kend\\
\EffAmplitude{}{\higgs \rightarrow \bar \Q\topq} & =
g_s \yuqSqu (s-m_\Q^2-m_\topq^2)\dend
\end{align}
\end{subequations}
Since they do not depend on the angles between the quarks and leptons the integration over
$\varphi$ is trivial and the reaction density takes the form:
\begin{align}
\label{NtoLQtReactDens}
\bigl\langle \gamma^{N_i}_{\lepton \bar \Q \topq}\bigr\rangle & =
\int \dpi{N_i}{}{\mommaj}{} \,\f{\majneutrino_i}{eq}\,\int\frac{ds}{2\pi} \pHiggs{2}{T}(s)\\
&\times \int^{E^+_\momlep}_{E^-_\momlep} \frac{dE_\momlep}{8\pi |\vec{\mommaj}|} \,
\EffAmplitude{}{\majneutrino_i\rightarrow \lepton\higgs}
\int^{E^+_\momQ}_{E^-_\momQ} \frac{dE_\momQ}{8\pi |\vec{\momhig}|} \,
\EffAmplitude{}{\higgs \rightarrow \bar \Q\topq}
\nonumber\\
& \times \bigl[
\qstatff{\lepton}{eq}\qstatff{\bar\Q}{eq}\qstatff{\topq}{eq}+\f{\lepton}{eq}\f{\bar\Q}{eq}\f{\topq}{eq}
\bigr]\dend\nonumber
\end{align}
The three-momentum of the intermediate Higgs is given by $|\vec{\momhig}|=(E_\momhig^2-s)^\frac12$
and $E_\momhig=E_\mommaj-E_\momlep$. Note that if we neglect the quantum-statistical factors
in \eqref{NtoLQtReactDens} the reaction density takes the standard form.
\section{Majorana spectral self-energy}
We compute here the Majorana spectral self-energy. In a \CP-symmetric medium it reads \cite{Frossard:2012pc}:
\begin{align}
\sMaj{ij}{\rho} &=-{\frac{g_w}{16\pi}}\bigl[(h^\dagger h)_{ij}P_L+(h^\dagger h)^*_{ij} P_R\,\bigr] L_{S} \kend
\end{align}
where we have defined the loop function $L_S(q)$,
\begin{align}
\label{defLrhoh}
L_{S}(q)=&16\pi\int \! \! \lorentzdd{\momlep \momhig} (2\pi)^4\delta(\mommaj-\momlep-\momhig)\slashed{\momlep} \nonumber \\
&\times \big[ \pHiggs{}{F}(\momhig) \pLeptsc{}{\rho(h)}(\momlep)+\pHiggs{}{\rho(h)}(\momhig) \pLeptsc{}{F}(\momlep)\big] \dend
\end{align}
Using the eQP for the Higgs, see \eqref{eqphiggs}, one
can split the function $L_S$ into a decay part, identical to the one computed in \cite{Frossard:2012pc},
\begin{align}
\label{Lrhodecay}
L^d_S( \mommaj)=16\pi{\int} &\lorentzd{\lepton \higgs}{ \momlep \momhig}
\Ftilde{(N_i)}{\lepton \higgs}{\mommaj}{\momlep \momhig}\slashed{\momlep} \kend
\end{align}
where we have assumed $q^0>0$, and defined
\begin{align}
\label{defFtilde}
&\Ftilde{(a)b \dotso}{i j \dotso}{p_a p_b \dotso}{p_i p_j \dotso} \equiv
(2\pi)^4 \delta(p_a+p_b+\dotso -p_i-p_j-\dotso) \nonumber \\
&\hspace{25mm}\times\left[f_b^{p_b}\dotso(1 \pm f_i^{p_i})(1 \pm f_j^{p_j})\right. \dotso \nonumber\\
&\hspace{32mm}+\left.f_i^{p_i} f_j^{p_j} \dotso(1 \pm f_b^{p_b})\dotso \right] \kend
\end{align}
and a scattering part,
\begin{align}
\label{defLrhotop}
L_S^s&(\mommaj)=16\pi \int \lorentzdd{\momlep \momQ \momtop} (2\pi)^4
\delta(\mommaj+\momQ-\momlep-\momtop) \nonumber \\
& \times \pLeptsc{}{\rho}(\momlep) \pQsc{}{\rho}(\momQ) \pTopq{}{\rho}(\momtop)
\pHiggs{2}{R+A}(\momtop-\momQ) \EffAmplitude{}{\higgs\bar \topq\rightarrow \bar \Q}\slashed{\momlep} \nonumber \\
&\times \bigl[ f_\Q^\momQ (1-f_\lepton^\momlep) (1-f_\topq^\momtop)+
f_\lepton^\momlep f_\topq^\momtop (1-f_\Q^\momQ)\bigr]\dend
\end{align}
Performing the frequency integration as explained above, see \eqref{lepcurSE}, we can
rewrite \eqref{defLrhotop} as a sum of four terms, corresponding to the three scattering
and one three-body decay process as well as their \CP-conjugates. Assuming $q^0>0$ we obtain:
\begin{align}
\label{lrhosimple}
L_S^s(\mommaj)&=16 \pi \int \! \dpi{\lepton}{ \Q \topq }{\momlep}{ \momQ \momtop}\nonumber\\
&\times \bigl[ \Ftilde{(N_i)\Q}{\lepton \topq}{\mommaj \momQ}{\momlep \momtop}
\pHiggs{2}{R+A}(\momtop-\momQ)\EffAmplitude{}{\higgs\Q\rightarrow \topq}
\nonumber\\
&+\Ftilde{(N_i) \bar \topq}{\lepton \bar \Q}{\mommaj \momtop}{\momlep \momQ}
\pHiggs{2}{R+A}(\momtop-\momQ)\EffAmplitude{}{\higgs\bar \topq\rightarrow \bar \Q}
\nonumber \\
&+\Ftilde{(N_i)\bar \lepton}{\bar \Q \topq}{\mommaj \momlep}{\momQ \momtop}
\pHiggs{2}{R+A}(\momtop+\momQ)\EffAmplitude{}{\higgs\rightarrow\bar \Q \topq} \nonumber\\
&+\Ftilde{(N_i)}{\lepton \bar \Q \topq}{\mommaj }{\momlep \momQ \momtop}
\pHiggs{2}{R+A}(\momtop+\momQ)\EffAmplitude{}{\higgs\Q\rightarrow \topq} \bigr]\slashed{\momlep}\dend
\end{align}
In the regime $m_\higgs<m_\Q+m_\topq$ the intermediate Higgs cannot be on-shell. Therefore one
can neglect the Higgs width in $\pHiggs{2}{R+A}$ and approximate it by $\pHiggs{2}{T}$.
As can be inferred from the definition \eqref{defFtilde} for the scattering terms $\tilde {\cal F}$
vanishes in vacuum, whereas for the decay term it does not. Due to Lorentz covariance
in vacuum both $L_S^d$ and $L_S^s$ must be proportional to the four-vector $q$. Using
\eqref{ellphiintegral} and \eqref{eqn: decay integration limits} we find that the coefficient
of proportionality is equal to unity for the decay contribution, i.e. $L_S^d=q$, if thermal masses
of the Higgs and leptons are neglected. Using \eqref{NtoLQtReactDens} we find that for the scattering
contribution the coefficient of proportionality reads:
\begin{align}
\label{Lsvacuum}
\frac{g_s \yuqSqu}{16\pi^2}& \int_{(m_\Q+m_\topq)^2}^{(M_i-m_\lepton)^2}
\frac{ds}{M_i^2}\lambda^\frac12(1,x_\Q,x_\topq)\lambda^\frac12(1,x_\lepton,x_\higgs)\nonumber\\
&\times \frac{(s-m_\topq^2-m_\Q^2)(M_i^2+m_\lepton^2-s)}{(s-m_\higgs^2)^2}\dend
\end{align}
Note that since we have omitted the Higgs decay width, this expression is convergent only
if $m_\higgs< m_\Q+m_\topq$. The vacuum result \eqref{Lsvacuum} provides also a very good approximation for
nonzero temperatures provided that $M/T\gg 1$. The thermal masses of the quarks then ensure
that the Higgs remains off-shell and therefore that \eqref{Lsvacuum} is finite. It is
important to note that due to the temperature dependence of the effective masses the
coefficient \eqref{Lsvacuum} is temperature-dependent as well. A numerical analysis shows
that it grows as the temperature decreases.
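Both statements --- finiteness of \eqref{Lsvacuum} for $m_\higgs<m_\Q+m_\topq$ and growth of the coefficient as the thermal masses shrink --- can be verified with a simple Simpson-rule quadrature (Python; the overall coupling prefactor is dropped and the mass values are purely illustrative, not taken from the text):

```python
import math

def kallen(x, y, z):
    return x*x + y*y + z*z - 2*x*y - 2*x*z - 2*y*z

def L_s_coeff(M, m_l, m_Q, m_t, m_phi, n=2000):
    """Simpson-rule evaluation of the s-integral in eq. (Lsvacuum),
    with the coupling prefactor dropped; finite only if m_phi < m_Q + m_t."""
    s_lo, s_hi = (m_Q + m_t)**2, (M - m_l)**2
    def f(s):
        x_Q, x_t = m_Q**2 / s, m_t**2 / s
        x_l, x_phi = m_l**2 / M**2, s / M**2
        return (math.sqrt(max(kallen(1.0, x_Q, x_t), 0.0))
                * math.sqrt(max(kallen(1.0, x_l, x_phi), 0.0))
                * (s - m_t**2 - m_Q**2) * (M**2 + m_l**2 - s)
                / (s - m_phi**2)**2) / M**2
    h = (s_hi - s_lo) / n
    total = f(s_lo) + f(s_hi)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(s_lo + i * h)
    return total * h / 3.0

M = 10.0  # illustrative Majorana mass
big = L_s_coeff(M, m_l=0.2, m_Q=0.4, m_t=0.4, m_phi=0.3)
small = L_s_coeff(M, m_l=0.1, m_Q=0.2, m_t=0.2, m_phi=0.15)
assert big > 0.0 and small > 0.0
assert small > big  # the coefficient grows as the thermal masses decrease
```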
Using $L_S$ we can calculate the in-medium \CP-violating parameter in $\majneutrino_i \leftrightarrow
\lepton \higgs$ process. For a hierarchical mass spectrum \cite{Frossard:2012pc}:
\begin{align}
\epsilon=\epsilon^{vac}_0\frac{\momlep L_S}{\mommaj \momlep}\kend
\end{align}
where $\epsilon^{vac}_0$ denotes the vacuum \CP-violating parameter calculated neglecting contributions
of the Higgs-mediated processes, i.e. using only $L_S^d$. As has been mentioned above, if thermal masses
of the Higgs and leptons are neglected then $L_S^d=q$ in vacuum and we recover $\epsilon=\epsilon_0^{vac}$.
Once the contribution of the Higgs-mediated processes is taken into account $\epsilon^{vac}\neq \epsilon^{vac}_0$.
To estimate the size of the corrections induced by \eqref{lrhosimple} we plot the ratio of thermally
averaged \CP-violating parameter, $\langle \epsilon\rangle\equiv \langle \epsilon \gamma^D_N \rangle
/ \langle \gamma^D_N \rangle$, to $\epsilon_0^{vac}$, see \fig\ref{NLH_CP}.
\begin{figure}[h!]
\includegraphics[width=0.95\columnwidth]{NLH_CP}
\caption{\label{NLH_CP}Ratio of the thermally averaged \CP-violating parameter to the one calculated
in vacuum neglecting the contribution of the Higgs-mediated processes. The blue line corresponds to
\eqref{Lrhodecay}, whereas the red lines to the sum of \eqref{Lrhodecay} and \eqref{lrhosimple}.
The dashed red line is obtained by omitting the contribution of the three-body decay in \eqref{lrhosimple}.}
\end{figure}
Note that we have neglected thermal masses of the final-state Higgs and lepton in the numerics. The blue
line corresponds to the \CP-violating parameter computed using \eqref{Lrhodecay}. In agreement with the above discussion
the ratio reaches unity at low temperatures. The dashed red line corresponds to the \CP-violating parameter
computed using the sum of \eqref{Lrhodecay} and the \textit{scattering} (lines two to four) contributions
to \eqref{lrhosimple}. As expected, at high temperatures we observe an enhancement of the ratio, whereas
at low temperatures it reaches unity. The solid red line is obtained by considering the sum of \eqref{Lrhodecay}
and all of the terms in \eqref{lrhosimple}. Since the three-body process is kinematically suppressed at high
temperatures, the dashed and solid lines overlap for $z\lesssim 1$. At lower temperatures the quantum-statistical
effects are small. However, in agreement with the discussion below \eqref{Lsvacuum}, the effective thermal
masses of the Higgs and quarks lead to a slow rise of the ratio at low temperatures.
\end{appendix}
\input{higgsmediatedscattering.bbl}
\end{document}
\section{Introduction}
The exploration of the internal structure of nuclei is a fascinating task, which identifies transverse momentum dependent (TMD) distributions as one of its most powerful tools. Transverse momentum dependent factorization theorems present a consistent description of double-inclusive processes, such as Drell-Yan/vector/scalar boson production (DY)\cite{Collins:2011zzd,GarciaEchevarria:2011rb} and semi-inclusive deep inelastic scattering (SIDIS)\cite{Echevarria:2014rua,Collins:2011zzd,Bacchetta:2006tn}, in the regime of small transverse momentum. Within the TMD factorization approach, the information on hadron structure is encoded in TMD parton distribution functions (TMDPDFs) and TMD fragmentation functions (TMDFFs). The presence of the transverse scale allows one to resolve the internal structure of the hadron in more detail than collinear parton distributions. Many polarization phenomena, which are subleading in collinear factorization, are described by the leading order TMD factorization. In this work, we study the Sivers function \cite{Efremov:1981sh,Sivers:1989cc}, which describes the correlation between the transverse momentum of an unpolarized parton and the polarization vector of the hadron.
The Sivers function is an essential part of the single-spin asymmetry (SSA) phenomenon. Experimentally, SSA has been measured in SIDIS at Hermes \cite{Airapetian:2009ae}, COMPASS~\cite{Alekseev:2008aa,Adolph:2012sp} and JLab~\cite{Qian:2011py}, and in Drell-Yan at RHIC \cite{Adamczyk:2015gyk,Dilks:2016ufy,Bok:2018}. Its measurement is also planned for the future Electron-Ion Collider (EIC)\cite{Accardi:2012qut}. SSA has also been the object of intensive phenomenological analysis, see e.g.~\cite{Anselmino:2005ea,Kang:2009bp,Echevarria:2014xaa,Anselmino:2016uie,Martin:2017yms,Boglione:2018dqd}. The resulting predictions differ substantially among these studies owing to TMD evolution \cite{Aschenauer:2015ndk}, which shows the importance of a correct treatment of the perturbatively calculable parts of QCD. In the literature, there are several available calculations of the SSA in perturbative QCD. The leading order (LO) (and partially the next-to-leading order (NLO)) calculations for the SSA were performed in many works \cite{Boer:2003cm,Ji:2006ub,Ji:2006vf,Koike:2007dg,Kang:2011mr,Sun:2013hua,Dai:2014ala}. In principle, following these works it is possible to obtain the perturbative expression for the Sivers function at NLO (however, different schemes are used for different parts of the calculation, see discussion in sec.~\ref{sec:discussion}). Therefore, the SSA and the Sivers function are probably among the most renowned and intensively studied polarized TMD quantities.
Although the TMD distributions are genuine non-perturbative functions that should be extracted from data, they can be evaluated in a model-independent way in terms of collinear distributions in the limit of large-$q_T$ \cite{Collins:1984kg}, or small-$b$ in the position space. This procedure is called ``matching'' and typically it serves as an initial input for the non-perturbative model of the TMD distributions, see e.g.~\cite{Echevarria:2014xaa,Scimemi:2017etj,Bacchetta:2017gcc}. The matching greatly increases the agreement with data~\cite{Scimemi:2017etj}. From the theory side, the matching procedure consists in the selection of the leading term in the light-cone operator product expansion (OPE) for the TMD operators \cite{Echevarria:2016scs,Gutierrez-Reyes:2017glx}. Alternatively, the matching can be obtained by taking the small-$q_T$ limit of collinear factorization~\cite{Sun:2013hua,Dai:2014ala}, which, however, is not always possible~\cite{Bacchetta:2008xw}.
Only a few TMD distributions of leading-dynamical twist match the twist-2 collinear distributions. These are the unpolarized, helicity and transversity TMDPDFs and TMDFFs. The matching coefficients for these distributions are known uniformly at the next-to-leading order (NLO)~\cite{Collins:2011zzd,GarciaEchevarria:2011rb,Bacchetta:2013pqa,Echevarria:2015uaa,Gutierrez-Reyes:2017glx} and some are known at NNLO~\cite{Echevarria:2015usa,Echevarria:2016scs,Gutierrez-Reyes:2018iod}. The remaining TMD distributions match twist-3 collinear distributions (apart from the pretzelosity, which is apparently of twist-4~\cite{Gutierrez-Reyes:2018iod,Chai:2018mwx}). The knowledge of the matching for these distributions is very poor: the quark TMDPDFs are all known at LO \cite{Boer:2003cm,Ji:2006ub,Kang:2011mr,Kanazawa:2015ajw,Scimemi:2018mmi} and only the Sivers function is known at NLO \cite{Sun:2013hua,Dai:2014ala} (however, see discussion in sec.~\ref{sec:results}). The matching for some of the quark TMDFFs, such as the Collins function, is known at LO \cite{Kanazawa:2015ajw}. The matching for the majority of gluon TMD distributions is unknown.
The importance of computing the perturbative part of a TMD distribution in order to achieve agreement between theory and experiment has already been demonstrated in \cite{Scimemi:2017etj} for the unpolarized case. Depending on the experimental conditions, the measured data can be sensitive to various aspects of the theory, such as power corrections in the evolution \cite{Scimemi:2016ffw}, power corrections \cite{Balitsky:2017gis}, small-$x$ effects in the evolution~\cite{Balitsky:2015qba} and many others. The full control of all of these sources of non-perturbative physics requires an accurate setting of the perturbative scales, as provided, for instance, by the $\zeta$-prescription of~\cite{Scimemi:2018xaf}.
In this work, we perform a complete NLO computation of the Sivers function starting from its operator definition and performing a light-cone OPE in a background field~\cite{DeWitt:1967ub}. To the best of our knowledge, this approach is used for the description of the TMD operator for the first time, despite the fact that it is a standard tool in higher-twist calculations, see e.g. \cite{Balitsky:1987bk,Balitsky:2016dgz}. This technique grants an unprecedented control of the operator structure and allows a very general treatment of twist-3 distributions. Therefore, the result obtained in this work is also interesting for broader studies. For the first time, we demonstrate how the TMD renormalization (ultraviolet and rapidity renormalization~\cite{Vladimirov:2017ksc}) is organized at the operator level. We also articulate the role of the gauge links and their direction and show (at the level of operators) the famous sign-change between the DY and SIDIS definitions of the Sivers function~\cite{Collins:2002kn}. Motivated by these considerations, we provide a detailed and pedagogical explanation of the calculation method, which is a major target of this article. For that aim, the Sivers function represents an ideal case, because one can cross-check the calculation with other methods already used in the literature. We anticipate that our results only partially agree with the results present in the literature; however, the origin of the discrepancy is clear.
The article is organized as follows. Sec.~\ref{sec:SSA} is a general introduction to SSA in the TMD factorization approach. Here we collect the expressions for SSA structure functions and describe the role of the Sivers function and its collinear matching. In sec.~\ref{sec:TMDdefs} we introduce and describe in detail the operator that defines the Sivers function. Its renormalization properties are discussed in sec.~\ref{sec:evol+ren}. Sec.~\ref{sec:evol+ren} is devoted to the detailed derivation of the OPE at LO. We discuss separately the evaluation in regular (sec.~\ref{sec:LO_reg}) and light-cone (sec.~\ref{sec:LO_sing}) gauges.
The NLO evaluation is presented in sec.~\ref{sec:OPENLO}. We give a pedagogical introduction to the background-field method in secs.~\ref{sec:intro-to-back}-\ref{sec:background}. The details of the NLO evaluation of the diagrams are given in sec.~\ref{sec:eval}.
In secs.~\ref{sec:rapidity_div}-\ref{sec:backrenormalization} we discuss the appearance of rapidity divergences and their renormalization.
The difference between the evaluation of the DY and SIDIS operators is discussed in sec.~\ref{sec:DY-SIDIS-difference}.
Extra details on the calculation are given in appendices~\ref{app:exampleDiag}, where we present a step-by-step calculation of a diagram, and \ref{app:diag-by-diag_OP}, where we give the diagram-by-diagram expressions for the OPE.
The collinear distributions are defined in sec.~\ref{sec:def-collinear}. Additional details of the parametrization definition are given in appendix~\ref{app:tensor-decomposition}.
The transition from operators to distributions is discussed in sec.~\ref{sec:TOdistr1} and the collection of diagram-by-diagram expressions can be found in appendix~\ref{app:diag-by-diag:matching}. The final result of the calculation is given in sec.~\ref{sec:results}. The discussion and comparison with earlier calculations is given in sec.~\ref{sec:discussion}.
\section{Sivers effect and TMD factorization}
\label{sec:SSA}
TMD distributions are defined by a large set of parameters: collinear momentum fraction $x$, transverse distance $\vec b$ (or transverse momentum $\vec p_T$), polarization, parton flavor $f$, the type of hadron $h$, ultraviolet and rapidity renormalization scales ($\mu$ and $\zeta$) and the defining process (DY or DIS). An explicit designation of all these parameters would lead to a heavy notation such as
\begin{eqnarray}\nonumber
f^\perp_{1T,q\leftarrow h;\text{DY}}(x,\vec b;\mu,\zeta),
\end{eqnarray}
which should be read as the Sivers function for a quark $q$ with momentum fraction $x$ at the transverse parameter $\vec b$, produced by a hadron $h$ in DY kinematics and measured at the scales $\mu$ and $\zeta$. Most of this information is not needed in perturbative calculations and in the following \textit{we skip the unnecessary parts of the notation}, e.g. the renormalization scales are usually dropped. We also distinguish the momentum- and coordinate-space TMD distributions only by their arguments. In the rest of this section we show how the Sivers function arises in the SIDIS and DY cross sections.
\subsection{Sivers function in SIDIS}
Semi-inclusive deep inelastic scattering (SIDIS) is a common name for the set of processes
\begin{eqnarray}
l(l)+N(P)\to l(l')+h(P_h)+X,
\end{eqnarray}
where $l(l')$ is a lepton, $N$ is a nucleon target and $h$ is the produced hadron. The TMD factorization is applicable in the regime $|\vec P_h|\ll Q$, where $Q^2=-(l-l')^2$ is the hard scale of the scattering and $\vec P_h$ is the transverse component of the momentum $P_h$. In the following, we use the bold font notation for the transverse components of vectors.
In the case of an unpolarized lepton beam, an unpolarized produced hadron $h$ and a transversely polarized target $N$, the cross-section for SIDIS contains three azimuthal structures: the so-called Sivers effect (proportional to $\sin(\phi_h-\phi_s)$), the Collins effect (proportional to $\sin(\phi_h+\phi_s)$) and the $\sin(3\phi_h-\phi_s)$ asymmetry. The structure functions corresponding to these effects within TMD factorization can be found e.g. in~\cite{Bacchetta:2006tn,Vogelsang:2005cs,Anselmino:2005ea}. The structure function for the Sivers effect is denoted by $F_{UT}^{\sin(\phi_h-\phi_s)}$. Within TMD factorization it reads \cite{Bacchetta:2006tn}
\begin{eqnarray}\label{SF:SIDIS_momentum}
F_{UT}^{\sin(\phi_h-\phi_s)}(x,z,Q,\vec P_h)&=&-xH_{\text{DIS}}(Q,\mu)\sum_f e_f^2 \int d^2\vec p d^2\vec k\delta^{(2)}\(\vec p-\vec k-\frac{\vec P_h}{z}\)
\\\nonumber &&\times
\frac{\vec P_h\cdot \vec p}{M|\vec P_h|}f_{1T;f\leftarrow N;\text{DIS}}^\perp(x,\vec p;\mu,\zeta_1)D_{1;f\to h}(z,\vec k;\mu,\zeta_2)+O\(\frac{\vec P_h^2}{z^2 Q^2}\),
\end{eqnarray}
where the variables $x$ and $z$ are the momentum fractions of partons and $M$ is the hadron mass. The functions $D_1$ and $f_{1T}^\perp$ are unpolarized and Sivers TMD distributions. The factorization scale $\mu$ is typically chosen to be of order $Q$. The scales of soft exchanges (rapidity factorization) $\zeta_{1,2}$ satisfy $\zeta_1\zeta_2=Q^4$.
The TMD factorization is naturally formulated in position space, where the Fourier convolution in eq.~(\ref{SF:SIDIS_momentum}) turns into a product of functions. In position space the structure function reads
\begin{eqnarray}
F_{UT}^{\sin(\phi_h-\phi_s)}(x,z,Q,\vec P_h)&=&ixMH_{\text{DIS}}(Q,\mu) \sum_f e_f^2 \int \frac{d^2\vec b}{(2\pi)^2} e^{i(\vec b \vec P_h)/z}
\\\nonumber &&\times
\frac{\vec P_h\cdot \vec b}{|\vec P_h|}f_{1T;f\leftarrow N;\text{DIS}}^\perp(x,\vec b;\mu,\zeta_1)D_{1;f\to h}(z,\vec b;\mu,\zeta_2)+O\(\frac{\vec P_h^2}{z^2 Q^2}\).
\end{eqnarray}
The functions $D_1$ and $f_{1T}^\perp$ depend only on the length of the vector $\vec b$, not on its direction, so one can further simplify the angular dependence \cite{Boer:2011xd,Scimemi:2018mmi}
\begin{eqnarray}\label{SF:SIDIS_bessel}
F_{UT}^{\sin(\phi_h-\phi_s)}(x,z,Q,\vec P_h)&=&-xMH_{\text{DIS}}(Q,\mu) \sum_f e_f^2 \int_0^\infty \frac{d|\vec b|}{2\pi} |\vec b|^2 J_1\(\frac{|\vec b||\vec P_h|}{z}\)
\\\nonumber &&\times
f_{1T;f\leftarrow N;\text{DIS}}^\perp(x,\vec b;\mu,\zeta_1)D_{1;f\to h}(z,\vec b;\mu,\zeta_2)+O\(\frac{\vec P_h^2}{z^2 Q^2}\),
\end{eqnarray}
where $J_1$ is the Bessel function of the first kind. Eq.~(\ref{SF:SIDIS_bessel}) is the usual starting point for the parametrization of the Sivers effect in TMD factorization.
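The angular integration that turns the two-dimensional Fourier transform into the $J_1$-weighted radial integral above rests on the identity $\int_0^{2\pi}d\phi\, e^{iz\cos\phi}\cos\phi=2\pi i J_1(z)$. As a quick numerical cross-check (not part of the original derivation), one can verify this identity directly:

```python
import numpy as np
from scipy.special import j1
from scipy.integrate import quad

def angular_integral(z):
    """Compute int_0^{2pi} exp(i z cos(phi)) cos(phi) dphi numerically."""
    re = quad(lambda phi: np.cos(z * np.cos(phi)) * np.cos(phi), 0, 2 * np.pi)[0]
    im = quad(lambda phi: np.sin(z * np.cos(phi)) * np.cos(phi), 0, 2 * np.pi)[0]
    return complex(re, im)

for z in (0.5, 1.7, 4.0):
    lhs = angular_integral(z)
    rhs = 2j * np.pi * j1(z)   # purely imaginary: 2*pi*i*J1(z)
    assert abs(lhs - rhs) < 1e-8
```

The real part of the integral vanishes, which is why the Bessel-transformed structure function acquires the overall factor shown in eq.~(\ref{SF:SIDIS_bessel}).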
\subsection{Sivers function in DY}
The Sivers effect also appears in the Drell-Yan/vector boson production process
\begin{eqnarray}
h_a(P_a)+h_b(P_b)\to Z/\gamma^*(q)+X \to l(l)+\bar l(l')+X,
\end{eqnarray}
where one of the initial hadrons is polarized~\cite{Boer:1999mm,Anselmino:2002pd,Efremov:2004tp,Vogelsang:2005cs}. In general, one refers to the structure function $F_{TU}^1$ when the hadron $h_a$ is polarized and $F_{UT}^1$ when the hadron $h_b$ is polarized. The structure function $F_{TU}^1$ in TMD factorization (i.e. for $q_T\ll Q$) reads \cite{Arnold:2008kf}
\begin{eqnarray}\label{SF:DY_momentum}
F_{TU}^1(Q,\vec q_T)&=&\frac{-H_{\text{DY}}(Q,\mu)}{N_c}\sum_f e_f^2 \int d^2\vec k_{a} d^2\vec k_{b}\delta^{(2)}\(\vec q_T-\vec k_{a}-\vec k_{b}\)
\\\nonumber &&\times
\frac{\vec q_T\cdot \vec k_{a}}{M|\vec q_T|}f_{1T;f\leftarrow h_a;\text{DY}}^\perp(x_a,\vec k_a;\mu,\zeta_1)f_{1;\bar f\leftarrow h_b}(x_b,\vec k_b;\mu,\zeta_2)+O\(\frac{\vec q_T^2}{Q^2}\),
\end{eqnarray}
where $Q^2=(l+l')^2$ is the hard scale of the process, $x_{a,b}$ are the momentum fractions of the partons, $\vec q_T$ is the transverse component of $q=l+l'$ with respect to the hadron momenta,
and $f_1$ is the unpolarized TMD distribution. The factorization scales are defined similarly to the SIDIS case, i.e. $\mu\sim Q$ and $\zeta_1\zeta_2=Q^4$. Under interchange of the polarized hadron ($h_a\leftrightarrow h_b$) the structure function transforms as $F_{UT}^1=-F_{TU}^1$.
The structure functions can also be written in the form
\begin{eqnarray}
F_{TU}^1(Q,\vec q_T)&=&
\frac{iMH_{\text{DY}}(Q,\mu)}{N_c}\sum_f e_f^2 \int \frac{d^2\vec b}{(2\pi)^2}e^{i(\vec b \vec q_T)}
\frac{\vec q_T\cdot \vec b}{|\vec q_T|}
\\\nonumber && \times
f_{1T;f\leftarrow h_a;\text{DY}}^\perp(x_a,\vec b;\mu,\zeta_1)f_{1;\bar f\leftarrow h_b}(x_b,\vec b;\mu,\zeta_2)+O\(\frac{\vec q_T^2}{Q^2}\)\ ,
\end{eqnarray}
and
\begin{eqnarray}\label{SF:DY_bessel}
F_{TU}^1(Q,\vec q_T)&=&
\frac{-MH_{\text{DY}}(Q,\mu)}{N_c}\sum_f e_f^2 \int_0^\infty \frac{d|\vec b|}{2\pi}|\vec b|^2 J_1(|\vec b||\vec q_T|)
\\\nonumber &&
\times f_{1T;f\leftarrow h_a;\text{DY}}^\perp(x_a,\vec b;\mu,\zeta_1)f_{1;\bar f\leftarrow h_b}(x_b,\vec b;\mu,\zeta_2)+O\(\frac{\vec q_T^2}{Q^2}\),
\end{eqnarray}
where we have integrated out the angular dependence.
The Sivers functions in SIDIS, eq.~(\ref{SF:SIDIS_momentum}), and DY, eq.~(\ref{SF:DY_momentum}), carry different labels that specify the processes. These functions have different operator definitions (see sec.~\ref{sec:TMDdefs}). However, in practice, the process dependence reduces to a simple sign change \cite{Brodsky:2002cx, Brodsky:2002rv, Collins:2002kn, Boer:2003cm}
\begin{eqnarray}\label{SF:DY<->DIS}
f_{1T;f\leftarrow h_a;\text{DY}}^\perp(x,\vec b;\mu,\zeta)=-f_{1T;f\leftarrow h_a;\text{DIS}}^\perp(x,\vec b;\mu,\zeta).
\end{eqnarray}
In the following, we demonstrate the origin of the sign-change at the level of OPE.
\subsection{TMD evolution and operator product power expansion}
The practical application of TMD factorization relies on the concept of TMD evolution, which allows one to relate structure functions at different values of $Q$.
Here, we should stress that a TMD distribution is an involved non-perturbative function. In fact, in addition to the non-perturbative structure of the TMD distribution itself (which involves the dependence on the variables $x$ and $\vec b$), TMD factorization also contains a non-perturbative part in the evolution factor (which depends only on $\vec b$). An efficient implementation of the TMD approach should be able to disentangle these non-perturbative contributions. The parametrization and extraction of three non-perturbative functions (two TMD distributions and the evolution kernel) of two variables would be a hopeless task if TMD factorization did not allow us to separate the problem into pieces.
First of all, the TMD evolution is governed by the two scales $(\mu,\zeta)$ and is process independent. It factors the non-perturbative evolution effects out into an evolution factor which is strictly universal for all structure functions and all TMD-factorizable processes. Nonetheless, the TMD evolution still non-trivially affects the ($x$, $\vec b$) dependence of the distribution, which must be modeled as a function of two variables. To simplify this procedure one can use any available information that restricts the functional form of the TMD distribution. In particular, at small values of $|\vec b|$ a TMD distribution can be related to collinear distributions in a model-independent way in perturbation theory. Such a relation has the general form provided by the OPE
\begin{eqnarray}\label{small-$b$}
f(x,\vec b)=C_1(x,\mathbf{L}_\mu)\otimes f_1(x)+\vec b^2 C_2(x,\mathbf{L}_\mu)\otimes f_2(x)+...,
\end{eqnarray}
where $C_i$ are perturbatively calculable Wilson coefficient functions which depend on $\vec b$ only logarithmically via $\mathbf{L}_\mu$ (to be defined in eq.~(\ref{def:Lmu})), $f_i$ are collinear distributions of increasing twist and $\otimes$ is an integral convolution in the variable $x$. This expansion is valid only in a certain range of $\vec b$, say $|\vec b| < R$, where $R$ is some matching scale. For values of $\vec b$ larger than $R$ the TMD distribution is completely non-perturbative. In fact, as the value of $|\vec b|$ approaches $R$, the contribution of higher-order terms in the small-$b$ expansion becomes more important. However, our knowledge of the corresponding higher-twist distributions is very limited.
Thus, it is of practical convenience to use only the first term of the small-$b$ expansion in eq.~(\ref{small-$b$}) and replace the rest by a generic non-perturbative function, i.e. \begin{eqnarray}\label{model}
f(x,\vec b)=C_1(x,\mathbf{L}_\mu)\otimes f_1(x)f_{NP}(x,\vec b).
\end{eqnarray}
The practical success of such an ansatz can be easily understood by noticing that the main contribution to the Fourier integrals in eqs.~(\ref{SF:SIDIS_bessel}, \ref{SF:DY_bessel}) comes from the small-$b$ region. Therefore, we can expect the function $f_{NP}$ to have a simple behavior in $x$ and $\vec b$, which is indeed confirmed by phenomenological applications of this formula. The details of the modeling procedure based on eq.~(\ref{model}) differ between approaches, but the core picture described here remains unchanged.
The small-$b$ matching is an essential part of modern TMD phenomenology. In ref.~\cite{Scimemi:2017etj} a comparison of different orders of the matching with experimental results has been performed. It has been shown that the NLO matching is essential for the predictive power of the approach. The NNLO matching provides further improvements and can be necessary for the description of the most precise experiments.
The achievable precision can also be affected by the choice of scales in the matching. Let us also mention that in~\cite{Scimemi:2018xaf} it was shown that the small-$b$ matching and the TMD evolution can be disentangled using the $\zeta$-prescription, which is not entirely possible in other formulations. The $\zeta$-prescription allows one to use different perturbative orders for the TMD evolution and the small-$b$ matching. This means that the modeling of the TMD distribution through eq.~(\ref{model}) is completely separated from its evolution (that is, the scale choice does not mix up non-perturbative pieces of different origin). This fact is extremely useful for phenomenology, since it allows one to use the highest known order of the evolution \cite{Vladimirov:2016dll} in combination with polarized observables whose higher perturbative orders are unknown. The universal non-perturbative part of the evolution can be extracted from the most precise data (such as Z-boson production at the LHC) \cite{Bertone:2019TOBE}.
Let us conclude this section by recalling that the hard coefficient functions $H_{\text{DIS}}$ and $H_{\text{DY}}$ within TMD factorization are given by the quark form factor evaluated in different analytic regions. At NLO they differ only by a $\pi^2$-term,
\begin{eqnarray}
H_{\text{DIS}}(Q,\mu)&=&|C_V(Q^2,\mu^2)|^2=1+2a_s C_F\(-\mathbf{l}_{Q^2}^2-3\mathbf{l}_{Q^2}-8+\frac{\pi^2}{6}\)+O(a_s^2),
\\
H_{\text{DY}}(Q,\mu)&=&|C_V(-Q^2,\mu^2)|^2=1+2a_s C_F\(-\mathbf{l}_{Q^2}^2-3\mathbf{l}_{Q^2}-8+\frac{7\pi^2}{6}\)+O(a_s^2),
\end{eqnarray}
where $\mathbf{l}_{Q^2}=\ln(\mu^2/Q^2)$ and $a_s=g^2/(4\pi)^2$. The NNLO and NNNLO expressions can be found in \cite{Gehrmann:2010ue}.
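Since the logarithmic terms are identical, the two hard functions differ at this order by the constant $2a_sC_F\pi^2$, independently of the choice of $\mu$. A minimal numerical sketch of the one-loop expressions above (the values of $a_s$ and $\mu/Q$ are arbitrary illustrations):

```python
import math

CF = 4.0 / 3.0  # quark Casimir

def H_DIS(l, a_s):
    """One-loop SIDIS hard function; l = ln(mu^2/Q^2)."""
    return 1.0 + 2.0 * a_s * CF * (-l**2 - 3.0 * l - 8.0 + math.pi**2 / 6.0)

def H_DY(l, a_s):
    """One-loop Drell-Yan hard function; differs from H_DIS only in the pi^2 term."""
    return 1.0 + 2.0 * a_s * CF * (-l**2 - 3.0 * l - 8.0 + 7.0 * math.pi**2 / 6.0)

a_s = 0.01  # arbitrary illustrative coupling
for l in (0.0, 0.5, -1.2):  # several choices of mu relative to Q
    assert abs(H_DY(l, a_s) - H_DIS(l, a_s) - 2.0 * a_s * CF * math.pi**2) < 1e-12
```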
\section{Operator definitions for unpolarized and Sivers TMD distributions}
\label{sec:definitions}
In this section, we introduce and review the main properties of TMD distributions.
\subsection{Definition of TMD distributions}
\label{sec:TMDdefs}
Throughout the article we use the standard notation for the light-cone decomposition of a vector
\begin{eqnarray}
v^\mu=v^+ \bar n^\mu+v^- n^\mu+v_T^\mu,
\end{eqnarray}
where $v^+=(nv)$, $v^-=(\bar n v)$ and $v_T$ is the transverse component $(v_Tn)=(v_T\bar n)=0$. The vectors $n$ and $\bar n$ are light-like
\begin{eqnarray}\label{def:n}
n^2=\bar n^2=0,\qquad (n\bar n)=1.
\end{eqnarray}
Their particular definition is related to the factorization frame of the scattering process. The transverse parts (with respect to the vectors $n$ and $\bar n$) of the metric and Levi-Civita tensors are
\begin{eqnarray}
g_T^{\mu\nu}=g^{\mu\nu}-\frac{n^\mu \bar n^\nu+\bar n^\mu n^\nu}{(n\bar n)}, \qquad \epsilon_T^{\mu\nu}=\frac{n_\alpha \bar n_\beta}{(n\bar n)}\epsilon^{\alpha\beta\mu\nu},
\end{eqnarray}
where $\epsilon^{\mu\nu\rho\sigma}$ is taken in the Bjorken convention ($\epsilon_{0123}=-\epsilon^{0123}=1$). In four dimensions (with $n$ and $\bar n$ lying in the $(0,3)$ plane) both tensors have only two non-zero components, $g^{11}_T=g_T^{22}=-1$ and $\epsilon_T^{12}=-\epsilon_T^{21}=1$.
Since the transverse subspace is Euclidean, the scalar product of transverse vectors is negative, $v_T^2<0$. In the following, we adopt the bold font notation to designate the Euclidean scalar product of transverse vectors, i.e. $\vec b^2=-b^2>0$, when it is convenient.
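The quoted component values can be verified directly. In the sketch below, placing $n$ along the $-z$ and $\bar n$ along the $+z$ light-cone direction is our illustrative choice (swapping $n\leftrightarrow\bar n$ flips the sign of $\epsilon_T$); the $(+,-,-,-)$ metric and $\epsilon^{0123}=-1$ follow the conventions in the text:

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation via inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric; upper-index form is numerically identical
eps = np.zeros((4, 4, 4, 4))           # Levi-Civita with eps^{0123} = -1
for p in itertools.permutations(range(4)):
    eps[p] = -perm_sign(p)

# light-cone vectors (upper indices); illustrative assignment along the z-axis
n    = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)
nbar = np.array([1.0, 0.0, 0.0,  1.0]) / np.sqrt(2)
assert abs(n @ g @ n) < 1e-14 and abs(nbar @ g @ nbar) < 1e-14  # light-like
assert abs(n @ g @ nbar - 1.0) < 1e-14                          # (n nbar) = 1

gT = g - (np.outer(n, nbar) + np.outer(nbar, n))                # g_T^{mu nu}
n_lo, nbar_lo = g @ n, g @ nbar
epsT = np.einsum('a,b,abmn->mn', n_lo, nbar_lo, eps)            # eps_T^{mu nu}

assert abs(gT[1, 1] + 1) < 1e-14 and abs(gT[2, 2] + 1) < 1e-14
assert abs(gT[0, 0]) < 1e-14 and abs(gT[3, 3]) < 1e-14
assert abs(epsT[1, 2] - 1) < 1e-14 and abs(epsT[2, 1] + 1) < 1e-14
```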
Using this notation,
the transverse momentum dependent parton distribution functions (TMDPDFs) for an \textit{unpolarized quark} are defined by the matrix element~\cite{Tangerman:1994eh,Collins:2011zzd,GarciaEchevarria:2011rb}
\begin{eqnarray}\label{def:TMDPDF_Qop}
\Phi_{q\leftarrow h}^{[\gamma^+]}(x,\vec b)&=&\int\!\frac{dz}{2\pi}e^{-ixzp^+}\!\langle p,S|\bar T\{\bar q\(zn+\vec b\)[z n+\vec b,\pm\infty n+\vec b]\}\gamma^+T\{
[\pm\infty n,0] q(0)\}|p,S\rangle,
\end{eqnarray}
where $[a,b]$ are Wilson lines defined in eq.~(\ref{def:WilsonLine}). The notation $\pm \infty n$ distinguishes the TMD distributions that appear in different processes: the TMD distributions entering SIDIS have Wilson lines pointing to $+\infty n$, while in Drell-Yan they point to $-\infty n$, as in fig.~\ref{fig:contours}. The Wilson lines within the TMD operator are along the light-like direction $n$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{DY-DIS-contours.eps}
\end{center}
\caption{\label{fig:contours} Illustration for the definition of TMD operators in DY and SIDIS. The Wilson lines (shown by dashed lines) are oriented along past (DY) or future (SIDIS) light cone direction. At light-cone infinities the Wilson lines are connected by transverse gauge links (not shown).}
\end{figure}
The matrix element in eq.~(\ref{def:TMDPDF_Qop}) for a \textit{polarized hadron} is parametrized by two independent functions~\cite{Boer:2011xd,Scimemi:2018mmi}
\begin{eqnarray}\label{param:TMDv}
\Phi_{q\leftarrow h}^{[\gamma^+]}(x,\vec b)&=&f_1(x,\vec b)+i\epsilon_T^{\mu\nu} b_\mu s_{T\nu} M f_{1T}^\perp(x,\vec b),
\end{eqnarray}
where $M$ is the mass of the hadron and $s_T$ is the transverse part of the hadron spin-vector $S$, i.e. $s^\mu_T=g_T^{\mu\nu}S_\nu$. The function $f_1$ is the \textit{unpolarized} TMDPDF, which measures the unpolarized quark distribution in an unpolarized hadron. The function $f_{1T}^\perp$ is known as the \textit{Sivers function}, which measures the unpolarized quark distribution in a polarized hadron.
The parametrization of eq.~(\ref{param:TMDv}) is given in position space. The distributions in momentum space are defined in the usual manner
\begin{eqnarray}\label{def:p<->b}
\Phi_{q\leftarrow h}^{[\gamma^+]}(x,\vec p)=\int \frac{d^2 \vec b}{(2\pi)^2}e^{+i(\vec b\vec p)}\Phi^{[\gamma^+]}_{q\leftarrow h}(x,\vec b),
\end{eqnarray}
where the scalar product $(\vec b\vec p)$ is Euclidean. Correspondingly, the momentum-space parametrization reads \cite{Goeke:2005hb,Bacchetta:2006tn}
\begin{eqnarray}\label{def:gammaP_momentum}
\Phi_{q\leftarrow h}^{[\gamma^+]}(x,\vec p)&=& f_1(x,\vec p)-\frac{\epsilon_T^{\mu\nu}p_{\mu}s_{T\nu}}{M}f_{1T}^\perp(x,\vec p).
\end{eqnarray}
Some explicit relations among particular TMDPDFs can be found in the appendix of ref.~\cite{Scimemi:2018mmi}. These relations are used to relate structure functions in momentum and coordinate representations in sec.~\ref{sec:SSA}.
The anti-quark TMD distribution is defined as
\begin{eqnarray}\label{def:TMDPDF_aQop}
\Phi_{\bar q\leftarrow h}^{[\gamma^+]}(x,\vec b)&=&\int\frac{dz}{2\pi}e^{-ixz(pn)}\langle p,S|\mathrm{Tr}\(\gamma^+\bar T\{
[\pm\infty n,0] q(0)\} T\{\bar q\(zn+\vec b\)[z n+\vec b,\pm\infty n+\vec b]\}\)|p,S\rangle.\nonumber \\
\end{eqnarray}
Using charge-conjugation, one can relate the quark and anti-quark TMD distributions~\cite{Tangerman:1994eh},
\begin{eqnarray}
\Phi_{q\leftarrow h}^{[\gamma^+]}(x,\vec b)=-\(\Phi_{\bar q\leftarrow h}^{[\gamma^+]}(-x,\vec b)\)^*,
\end{eqnarray}
from which it follows
\begin{eqnarray}
f_{1;q\leftarrow h}(x,\vec b)&=&-f_{1;\bar q\leftarrow h}(-x,\vec b),
\\
f_{1T;q\leftarrow h}^\perp(x,\vec b)&=&f_{1T;\bar q\leftarrow h}^\perp(-x,\vec b).
\end{eqnarray}
Therefore, in the following we associate the anti-quark distributions with negative values of $x$ and define the TMD distributions in the range $-1<x<1$ as
\begin{eqnarray}\label{def:unpol_allX}
f_{1;q\leftarrow h}(x,\vec b)&=&\theta(x)f_{1;q\leftarrow h}(x,\vec b)-\theta(-x)f_{1;\bar q\leftarrow h}(-x,\vec b),
\\\label{def:sivers_allX}
f_{1T;q\leftarrow h}^\perp(x,\vec b)&=&\theta(x)f_{1T;q\leftarrow h}^\perp(x,\vec b)+\theta(-x)f_{1T;\bar q\leftarrow h}^\perp(-x,\vec b).
\end{eqnarray}
The small-$b$ expansion (often called small-$b$ matching or collinear matching) represents a TMD distribution as a series of collinear distributions with perturbative Wilson coefficients in the vicinity of $\vec b=0$, as in eq.~(\ref{small-$b$}). For instance, the leading term of the small-$b$ expansion for the unpolarized TMD distribution is expressed in terms of the unpolarized collinear PDF $f_1(x)$
\begin{eqnarray}
f_{1,q\leftarrow h}(x,\vec b;\mu,\zeta)&=&\sum_f\int_x^1 \frac{dy}{y}C_{1;q\leftarrow f}(y,\vec b,\mu,\zeta)f_{1,f\leftarrow h}\(\frac{x}{y},\mu\)+O(\vec b^2),
\end{eqnarray}
where the sum over $f$ runs over gluons, quarks and antiquarks of all flavors. The coefficient function $C$ is a perturbative Wilson coefficient, which depends on $\vec b$ logarithmically. Its leading term is $\delta(1-y)$ and the perturbative corrections are known up to NNLO~\cite{Echevarria:2015usa}. The power corrections (as in eq.~(\ref{small-$b$})) contain collinear distributions of twist-2 and twist-4 and are currently unknown.
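The structure of this Mellin convolution can be made concrete: at LO, $C=\delta(1-y)$ and the TMD distribution collapses to the collinear PDF, while higher orders add an integral over $x<y<1$. The sketch below uses an invented regular $O(a_s)$ kernel and a toy PDF shape, purely to show how the convolution is evaluated (the true one-loop kernel contains plus-distributions and is not reproduced here):

```python
from scipy.integrate import quad

CF = 4.0 / 3.0

def f1_pdf(x):
    """Toy collinear PDF shape (illustrative, not a fit)."""
    return x**-0.5 * (1.0 - x)**3

def kernel_nlo(y):
    """Invented regular O(a_s) part of the Wilson coefficient (NOT the true kernel)."""
    return 2.0 * CF * (1.0 - y)

def tmd_small_b(x, a_s):
    """f(x,b) = int_x^1 dy/y C(y) f1(x/y), with C(y) = delta(1-y) + a_s*kernel_nlo(y)."""
    lo = f1_pdf(x)  # the delta(1-y) term collapses the integral
    nlo = quad(lambda y: kernel_nlo(y) * f1_pdf(x / y) / y, x, 1.0)[0]
    return lo + a_s * nlo

assert tmd_small_b(0.3, 0.0) == f1_pdf(0.3)   # LO: convolution reduces to the PDF
assert tmd_small_b(0.3, 0.1) > f1_pdf(0.3)    # positive toy correction adds to it
```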
The expression for the small-$b$ matching of the Sivers function is
\begin{eqnarray}
f_{1T;q\leftarrow h}^{\perp}(x,\vec b;\mu,\zeta)=\sum_f C_{1T;q\leftarrow f}^\perp(x_1,x_2,x_3,\vec b,\mu,\zeta)\otimes T_{f\to h}(x_1,x_2,x_3,\mu)+O(\vec b^2),
\end{eqnarray}
where $T$ are the collinear distributions of twist-3, to be defined in secs.~\ref{sec:coll-quark} and \ref{sec:coll-gluon}.
The symbol $\otimes$ denotes an integral convolution in the variables $x_{1,2,3}$. At leading order the coefficient function is known to be $\pm \pi \delta(x_1+x_2+x_3)\delta(x_2)\delta(x-x_3)$~\cite{Boer:2003cm,Ji:2006ub,Kang:2011mr,Scimemi:2018mmi} (we also re-derive it in the next section). The status of the NLO expressions is more intricate. In principle, the quark-to-quark part can be found in~\cite{Sun:2013hua}, where it has been extracted from the computation of the cross-section made in \cite{Ji:2006ub,Ji:2006vf,Koike:2007dg}. However, the computations made in~\cite{Ji:2006ub,Ji:2006vf,Koike:2007dg} miss certain parts and for this reason are partially incorrect (see the extended discussion in \cite{Braun:2009mi}). The quark-to-gluon part is evaluated in~\cite{Dai:2014ala}; however, the authors use a scheme different from the standard one for twist-2 computations. We return to this discussion in sec.~\ref{sec:results}.
\subsection{Evolution and renormalization}
\label{sec:evol+ren}
Renormalized TMD distributions, unlike ordinary parton distributions, depend on a pair of scales. This is a consequence of the TMD factorization procedure, which decouples the hard-scattering factorization and the factorization of the soft-gluon exchanges~\cite{Collins:2011zzd,Echevarria:2012js,Chiu:2012ir,Vladimirov:2017ksc}. As a result, the evolution of a TMD distribution is given by a pair of equations
\begin{eqnarray}\label{evol:gamma}
\mu^2 \frac{d}{d\mu^2}\Phi_{f\leftarrow h}(x,\vec b;\mu,\zeta)&=&\frac{\gamma_F^f(\mu,\zeta)}{2}\Phi_{f\leftarrow h}(x,\vec b;\mu,\zeta),
\\\label{evol:D}
\zeta\frac{d}{d\zeta}\Phi_{f\leftarrow h}(x,\vec b;\mu,\zeta)&=&-\mathcal{D}^f(\mu,\vec b)\Phi_{f\leftarrow h}(x,\vec b;\mu,\zeta),
\end{eqnarray}
where $\gamma_F$ and $\mathcal{D}$ are, respectively, the ultraviolet (UV) and rapidity anomalous dimensions. Eqs.~(\ref{evol:gamma},~\ref{evol:D}) are independent of the polarization and of the TMD structure. The double-scale nature of factorization and evolution also opens unique possibilities for the phenomenological implementation of TMD distributions. In particular, it allows a universal scale-independent definition of a TMD distribution~\cite{Scimemi:2018xaf}.
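The consistency of this pair of equations requires the cross-derivative (integrability) condition $\zeta\, d\gamma_F/d\zeta = -2\mu^2\, d\mathcal{D}/d\mu^2$, which makes the evolution between two points of the $(\mu,\zeta)$ plane path independent. A toy numerical sketch with a constant cusp anomalous dimension (an illustration only, not the QCD expressions):

```python
from scipy.integrate import quad

Gamma0 = 1.0  # toy constant cusp anomalous dimension (illustration only)

def gammaF(lnmu2, lnzeta):
    """Toy UV anomalous dimension: gamma_F = Gamma0 * ln(mu^2/zeta)."""
    return Gamma0 * (lnmu2 - lnzeta)

def D(lnmu2):
    """Toy rapidity anomalous dimension: D = (Gamma0/2) * ln(mu^2), b-dependence dropped."""
    return 0.5 * Gamma0 * lnmu2

def ln_evolution(Li, Lf, zi, zf, mu_first):
    """ln of the evolution factor along a two-segment path in (ln mu^2, ln zeta)."""
    if mu_first:  # (Li, zi) -> (Lf, zi) -> (Lf, zf)
        return quad(lambda L: 0.5 * gammaF(L, zi), Li, Lf)[0] - D(Lf) * (zf - zi)
    else:         # (Li, zi) -> (Li, zf) -> (Lf, zf)
        return -D(Li) * (zf - zi) + quad(lambda L: 0.5 * gammaF(L, zf), Li, Lf)[0]

pathA = ln_evolution(0.0, 3.0, 0.0, 5.0, True)
pathB = ln_evolution(0.0, 3.0, 0.0, 5.0, False)
assert abs(pathA - pathB) < 1e-10  # path independence from the integrability condition
```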
At the operator level the double-scale nature of the evolution is reflected by the presence of two types of divergences, namely UV and rapidity divergences. Both divergences are to be renormalized. The UV renormalization factor is known as the TMD renormalization factor $Z_f^{\text{TMD}}$ and can be extracted from the UV renormalization of the quark (or gluon) vertex attached to the (light-like) Wilson line. The rapidity renormalization is made through the rapidity renormalization factor $R_f$ (for the proof of the multiplicativity of the rapidity-divergence renormalization, see ref.~\cite{Vladimirov:2017ksc}). It is compulsory that \textit{both renormalizations are made at the operator level} and thus do not depend on the hadron states.
The renormalized TMD operator $\mathcal{U}_f$ that defines the physical TMD distribution reads
\begin{eqnarray}\label{def:renormalization}
\mathcal{U}_f(x,\vec b;\mu,\zeta)&=&Z_i^{-1}(\mu)Z_f^{TMD}\(\frac{\mu^2}{\zeta}\)R_f\(\vec b;\mu,\zeta\)\mathcal{U}_f^{bare}(x,\vec b),
\end{eqnarray}
where we explicitly write the scaling variables for each factor. In eq.~(\ref{def:renormalization}) $Z_i$ is the renormalization constant of the field wave function ($Z_2$ for the quark field and $Z_3$ for the gluon field). The TMD operators $\mathcal{U}$ relevant for this work are defined later in eqs.~(\ref{def:TMDop_DY},~\ref{def:TMDop_DIS}).
Both renormalizations are scheme dependent. We use the conventional $\overline{\text{MS}}$-scheme together with the dimensional regularization for the UV divergences.
For the rapidity renormalization we use the conventional scheme~\cite{Becher:2010tm,Collins:2011zzd,GarciaEchevarria:2011rb,Chiu:2012ir,Vladimirov:2017ksc}, which is fixed by the requirement that no remnants of the soft factor contribute to the hard scattering. Apart from this, one should take care of the overlap between the collinear and soft modes in the factorization of the cross sections, which is rapidity-regulator dependent. This is resolved in the $\delta$-regulator scheme, where the rapidity renormalization factor is given by the inverse square root of the TMD soft factor, $R=1/\sqrt{S}$, see ref.~\cite{Echevarria:2015byo}.
This regulator has been already used several times in higher order calculations, see refs. \cite{Echevarria:2015usa,Echevarria:2015byo,Echevarria:2016scs,Gutierrez-Reyes:2018iod}.
The particular expression depends on the order of application of the renormalization factors. In this work, we fix the order as in eq.~(\ref{def:renormalization}) and use the $\delta$-regularization, whose definition is given in sec.~\ref{sec:rapidity_div}. The rapidity renormalization factor in the $\overline{\text{MS}}$-scheme then reads \cite{Vladimirov:2017ksc}
\begin{eqnarray}\label{renorm:R_1loop}
R_q(\vec b;\mu,\zeta)&=&1+2a_s C_F \mathbf{B}^\epsilon\mu^{2\epsilon}e^{-\epsilon\gamma_E}\Gamma(-\epsilon)\(\ln\(\mathbf{B} \delta^2\frac{\zeta}{(p^+)^2}\)-\psi(-\epsilon)+\gamma_E\)+O(a_s^2),
\end{eqnarray}
where $\mathbf{B}=\vec b^2/4$ and $a_s=g^2/(4\pi)^2$. The UV renormalization constant is \cite{Echevarria:2016scs}
\begin{eqnarray}\label{renorm:Z_1loop}
Z_2^{-1}Z_q^{TMD}\(\frac{\mu^2}{\zeta}\)&=&\(1-C_F\frac{a_s}{\epsilon}+\mathcal{O}(a_s^2)\)^{-1}\[1-2a_sC_F\(\frac{1}{\epsilon^2}+\frac{2+\ln(\mu^2/\zeta)}{\epsilon}\)+O(a_s^2)\]
\\\nonumber &=&1-a_sC_F\(\frac{2}{\epsilon^2}+\frac{3+2\ln(\mu^2/\zeta)}{\epsilon}\)+O(a_s^2).
\end{eqnarray}
Here, we list only the one-loop renormalization constants for the quark operator, since they are the only ones required in the present calculation. The gluon case, as well as the two-loop expressions, can be found in ref.~\cite{Echevarria:2016scs}.
We emphasize that the rapidity renormalization factor depends on the boost-invariant combination of scales $\delta/p^+$ \cite{Echevarria:2012js}
(here, $\delta$ regularizes the rapidity divergences in the $n$-direction and thus transforms as $p^+$ under Lorentz boosts). Such a combination appears in the factorization of
the cross sections of DY and SIDIS and when splitting
the soft factor into parts with rapidity divergences associated with the different TMD distributions~\cite{GarciaEchevarria:2011rb}. In the course of the factorization procedure, the accompanying TMD distribution (e.g. $D_1$ in eq.~(\ref{SF:SIDIS_bessel}) or $f_1$ in eq.~(\ref{SF:DY_bessel})) gets the rapidity renormalization factor with the argument $(\delta^-/p^-)$ and $\bar \zeta$, where $\delta^-$ regularizes the rapidity divergences in the $\bar n$-direction. The values of $p^+$ and $p^-$ are arbitrary; however, they dictate the values of $\zeta$ and $\bar \zeta$, since $\zeta \bar \zeta=(2p^+p^-)^2$. The standard and convenient choice of scales is $\zeta \bar \zeta=Q^4$, where $Q$ is the only physical hard scale appearing in the reference processes.
This choice determines the values of $p^+$ and $p^-$ as the momenta of the partons that couple to the test current, see also sec.~\ref{sec:rapidity_div}. For an extended discussion see sec.~6.1.1 in ref.~\cite{Vladimirov:2017ksc} and also refs.~\cite{Echevarria:2012js,Chiu:2012ir}.
\section{Light-cone OPE at leading order}
\label{sec:LO}
In this section we present the operators that enter the definition of the Sivers function and their small-$b$ limit at LO, recovering the results of \cite{Scimemi:2018mmi}.
The notation for the operators established in this section is the one used in the NLO computation.
\subsection{Light-cone OPE in a regular gauge}
\label{sec:LO_reg}
Let us denote the operator that defines the TMD distributions in the DY case as
\begin{eqnarray}\label{def:TMDop_DY}
\mathcal{U}_{\text{DY}}^{\gamma^+}(z_1,z_2,\vec b)&=&\bar T\{\bar q(z_1 n+\vec b)[z_1 n+\vec b,-\infty n+\vec b]\}\,\gamma^+ T\{
[-\infty n-\vec b,z_2 n-\vec b] q(z_2n -\vec b)\},
\nonumber \\
\end{eqnarray}
where the Wilson lines are defined as
\begin{eqnarray}\label{def:WilsonLine}
[a_1 n+\vec b,a_2n+\vec b]&=& P\exp\(ig\int_{a_2}^{a_1} d\sigma n^\mu A_\mu(\sigma n+\vec b)\).
\end{eqnarray}
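Numerically, the path-ordered exponential can be approximated by an ordered product of short-segment exponentials. The sketch below uses a toy SU(2)-valued field profile (entirely invented, for illustration) and checks two defining properties of the Wilson line: composition of adjacent segments and unitarity:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices as Hermitian su(2) generators (toy stand-in for su(3))
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g_c = 0.7  # toy coupling

def A(s):
    """Invented smooth profile for the projected gauge field n^mu A_mu(s n)."""
    return (np.sin(s) * s1 + np.exp(-s**2) * s3) / 2

def wilson_line(a2, a1, steps=2000):
    """[a1 n, a2 n] = P exp(i g int_{a2}^{a1} ds A(s)); later points multiply on the left."""
    U = np.eye(2, dtype=complex)
    ds = (a1 - a2) / steps
    for k in range(steps):
        U = expm(1j * g_c * ds * A(a2 + (k + 0.5) * ds)) @ U
    return U

U_full = wilson_line(-2.0, 2.0)
U_right, U_left = wilson_line(-2.0, 0.0), wilson_line(0.0, 2.0)
assert np.allclose(U_full, U_left @ U_right, atol=1e-4)              # [2,-2]=[2,0][0,-2]
assert np.allclose(U_full @ U_full.conj().T, np.eye(2), atol=1e-10)  # unitarity
```

The composition property is what allows the half-infinite segments of Wilson lines to cancel pairwise in the leading term of the small-$b$ expansion below.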
The operator that defines the TMD distributions in the SIDIS case reads
\begin{eqnarray}\label{def:TMDop_DIS}
\mathcal{U}_{\text{DIS}}^{\gamma^+}(z_1,z_2,\vec b)&=&\bar T\{\bar q(z_1 n+\vec b)[z_1 n+\vec b,+\infty n+\vec b]\}\,\gamma^+
T\{[+\infty n-\vec b,z_2 n-\vec b] q(z_2 n -\vec b)\}.\nonumber \\
\end{eqnarray}
Generally, the links which connect the end points of the Wilson lines in the distant transverse plane must be added in both operators (for DY and for SIDIS) \cite{Belitsky:2002sm,Idilbi:2010im}. Here, we omit them for simplicity, assuming that some regular gauge (e.g. a covariant gauge) is in use. In non-singular gauges the gauge field vanishes at infinity, $A_\mu(\pm \infty n)=0$, and the contribution of the distant gauge links vanishes. The case of singular gauges is discussed in the following section.
We point out that, for convenience of calculation and presentation, the operators in eqs.~(\ref{def:TMDop_DY},~\ref{def:TMDop_DIS}) are defined differently in comparison to the original operator in eq.~(\ref{def:TMDPDF_Qop}). In particular, we double the transverse distance between the fields and write it in a symmetric form. Also, the operators in eqs.~(\ref{def:TMDop_DY},~\ref{def:TMDop_DIS}) are defined for arbitrary light-cone positions $z_1$ and $z_2$, although the definition of a TMD distribution depends only on the difference of these points. Such a generalization does not complicate the calculation; moreover, it allows us to cross-check certain results. These modifications are undone in the last step of the calculation, see eq.~(\ref{main-fourier}). Note that the operators in eqs.~(\ref{def:TMDop_DY},~\ref{def:TMDop_DIS}) define generalized transverse momentum distributions (GTMDs) and thus the obtained OPE can be applied in GTMD kinematics as well.
It is straightforward to check that the spatial separations between any pair of fields in the operators defined in eqs.~(\ref{def:TMDop_DY},~\ref{def:TMDop_DIS}) are space-like\footnote{There is a single exception: the fields of the anti-quark operator and the attached Wilson line have light-like separations but are anti-time-ordered. However, the reordering of the operator can be performed in the light-cone gauge, where the gauge links vanish. A detailed discussion of the ordering properties of quasi-partonic operators can be found in ref.~\cite{Jaffe:1983hp}.}. For that reason we can replace the $T$- and $\bar T$-orderings by a single $T$-ordering. This significantly simplifies the calculation, and in the following we do not show the symbol of $T$-ordering explicitly, but assume that each operator is $T$-ordered. The possibility of reordering the fields is not a general feature; e.g. the TMD operators for fragmentation functions do not allow this simplification and thus their properties are drastically different.
At LO in perturbation theory one can treat the fields as classical fields, i.e. omit their interaction properties. In this approximation, the small-$b$ expansion is just the Taylor expansion at $\vec b=0$. Expanding $\mathcal{U}$ in $\vec b$ up to linear terms we obtain
\begin{eqnarray}\label{U_Taylor}
\mathcal{U}^{\gamma^+}(z_1,z_2,\vec b)=\mathcal{U}^{\gamma^+}(z_1,z_2,\vec 0)+b^\mu \frac{\partial}{\partial b^\mu} \mathcal{U}^{\gamma^+}(z_1,z_2,\vec b)\Big|_{\vec b=0}+O(\vec b^2).
\end{eqnarray}
The leading term is the same for DY and SIDIS cases
\begin{eqnarray}
\mathcal{U}_{\text{DY}}^{\gamma^+}(z_1,z_2,\vec 0)&=&\mathcal{U}_{\text{DIS}}^{\gamma^+}(z_1,z_2,\vec 0)=\bar q(z_1 n)[z_1 n,z_2 n]\gamma^+ q(z_2 n).
\end{eqnarray}
Note that the half-infinite segments of Wilson lines compensate each other due to the unitarity of the Wilson line and the resulting operator is spatially compact.
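This compensation can be made explicit: for segments lying on the same light ray, the composition and unitarity properties of Wilson lines give
\begin{eqnarray}
[z_1 n,\mp\infty n][\mp\infty n,z_2 n]=[z_1 n,z_2 n],
\end{eqnarray}
where the upper (lower) sign corresponds to the DY (DIS) operator.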
The derivative term in eq.~(\ref{U_Taylor}) is different for different kinematics
\begin{eqnarray}\label{U_DY_der_0}
\frac{\partial}{\partial b^\mu} \mathcal{U}_{\text{DY}}^{\gamma^+}(z_1,z_2,\vec b)\Big|_{\vec b=0}&=&
\bar q(z_1 n)[z_1 n,-\infty n](\overleftarrow{\partial_{T\mu}}-\overrightarrow{\partial_{T\mu}})\gamma^+ [-\infty n,z_2 n] q(z_2n),
\\\label{U_DIS_der_0}
\frac{\partial}{\partial b^\mu} \mathcal{U}_{\text{DIS}}^{\gamma^+}(z_1,z_2,\vec b)\Big|_{\vec b=0}&=&
\bar q(z_1 n)[z_1 n,+\infty n](\overleftarrow{\partial_{T\mu}}-\overrightarrow{\partial_{T\mu}})\gamma^+ [+\infty n,z_2 n] q(z_2n).
\end{eqnarray}
Here, the derivative prevents the compensation of the infinite segments of Wilson lines. Acting with the derivative explicitly, we obtain
\begin{eqnarray}\label{U_DY_der}
\frac{\partial}{\partial b^\mu} \mathcal{U}_{\text{DY}}^{\gamma^+}(z_1,z_2,\vec b)\Big|_{\vec b=0}&=&
\bar q(z_1n)\(\overleftarrow{D_\mu}[z_1n,z_2n]-[z_1n,z_2n]\overrightarrow{D_\mu}\)\gamma^+ q(z_2n)
\\\nonumber &&+ig\(\int_{-\infty}^{z_1} +\int_{-\infty}^{z_2}\)d\tau~
\bar q(z_1n)[z_1n,\tau n]\gamma^+ F_{\mu+}(\tau n)[\tau n,z_2n]q(z_2n),
\\
\label{U_DIS_der}
\frac{\partial}{\partial b^\mu} \mathcal{U}_{\text{DIS}}^{\gamma^+}(z_1,z_2,\vec b)\Big|_{\vec b=0}&=&
\bar q(z_1n)\(\overleftarrow{D_\mu}[z_1n,z_2n]-[z_1n,z_2n]\overrightarrow{D_\mu}\)\gamma^+ q(z_2n)
\\\nonumber &&-ig\(\int^{\infty}_{z_1} +\int^{\infty}_{z_2}\)d\tau~
\bar q(z_1n)[z_1n,\tau n]\gamma^+ F_{\mu+}(\tau n)[\tau n,z_2n]q(z_2n).
\end{eqnarray}
where the covariant derivative and the field-strength tensor are defined as usual
\begin{eqnarray}\label{def:DandF}
\overrightarrow{D}_\mu=\overrightarrow{\partial}_\mu-igA_\mu,\qquad
\overleftarrow{D}_\mu=\overleftarrow{\partial}_\mu+igA_\mu,\qquad F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu-ig[A_\mu,A_\nu].
\end{eqnarray}
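Let us indicate how eq.~(\ref{U_DY_der}) follows from eq.~(\ref{U_DY_der_0}). The transverse derivative of a half-infinite gauge link reads
\begin{eqnarray}
\frac{\partial}{\partial b^\mu}[z n+\vec b,-\infty n+\vec b]\Big|_{\vec b=0}=ig\int_{-\infty}^{z}d\tau~[z n,\tau n]\,(\partial_\mu A_+)(\tau n)\,[\tau n,-\infty n].
\end{eqnarray}
Writing $\partial_\mu A_+=F_{\mu+}+D_+A_\mu$ and noting that $[z n,\tau n](D_+A_\mu)(\tau n)[\tau n,-\infty n]$ is a total derivative in $\tau$, the $D_+A_\mu$ part reduces to boundary terms: the term at $\tau=z$ combines with the partial derivatives acting on the quark fields into the covariant derivatives of eq.~(\ref{U_DY_der}), while the term at $\tau=-\infty$ vanishes in a regular gauge. The $F_{\mu+}$ part produces the integral terms. The SIDIS case, eq.~(\ref{U_DIS_der}), is obtained in the same way with $-\infty\to+\infty$.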
The operators which contribute to each order of the small-$b$ expansion have different geometrical twists\footnote{By the term geometrical twist we refer to the standard definition of the twist as ``dimension minus spin'' of the operator. This definition is formulated for a local operator, but it can be naturally extended to light-cone operators, understood as generating functions for local operators.}. In particular, the first term in eq.~(\ref{U_DY_der}) is a mixture of twist-2 and twist-3 operators, while the second term is a pure twist-3 operator (the same holds for eq.~(\ref{U_DIS_der})). The procedure of separation of the different twist contributions is explained in detail in \cite{Scimemi:2018mmi}. In the present paper, we skip this discussion because the Sivers function contains only the contribution of the geometrical twist-3 operator. Indeed, comparing the results for DY in eq.~(\ref{U_DY_der}) and SIDIS in eq.~(\ref{U_DIS_der}) kinematics, we observe that the first terms are the same, while the last terms differ. Therefore, already at this stage it is clear that the Sivers function is made of the operators from the last terms, i.e. the pure twist-3 operator.
\subsection{Light-cone OPE in the light-cone gauge}
\label{sec:LO_sing}
Before entering a detailed description of the background field method it is convenient to formulate the derivation of the small-$b$ limit of the TMD functions at LO in the light-cone gauge. This gauge will then be used in the following to describe the background fields.
The definition of TMD operators is gauge invariant. In order to demonstrate this explicitly, let us restore the formal structure of gauge links in eq.~(\ref{def:TMDop_DY},~\ref{def:TMDop_DIS}). We have
\begin{eqnarray}\label{def:TMDop_DY_full}
&&\mathcal{U}_{\text{DY}}^{\gamma^+}(z_1,z_2,\vec b)=
\\&&\nonumber\quad
\bar q(z_1 n+\vec b)[z_1 n+\vec b,-\infty n+\vec b][-\infty n+\vec b,
-\infty n-\vec b][-\infty n-\vec b,z_2 n-\vec b]
\,\gamma^+ \,q(z_2n -\vec b),
\\
\label{def:TMDop_DIS_full}
&&\mathcal{U}_{\text{DIS}}^{\gamma^+}(z_1,z_2,\vec b)=
\\&&\nonumber\quad
\bar q(z_1 n+\vec b)[z_1 n+\vec b,+\infty n+\vec b][+\infty n+\vec b,
+\infty n-\vec b][+\infty n-\vec b,z_2 n-\vec b]
\,\gamma^+ \,q(z_2n -\vec b).
\end{eqnarray}
Notice that, in order to write eq.~(\ref{def:TMDop_DY_full},~\ref{def:TMDop_DIS_full}), we have explicitly used the fact that the T-ordering can be removed. In the absence of such an assumption the finite-distance transverse link must be replaced by two half-infinite links~\cite{Belitsky:2002sm}.
The light-cone gauge is defined by the condition
\begin{eqnarray}\label{gauge:A+=0}
n^\mu A_\mu(x)=A_+(x)=0.
\end{eqnarray}
The application of this condition removes the contribution of the gauge links along the vector $n$ in the TMD operator, i.e. $[z n+\vec b,\pm\infty n+\vec b]=1$ and $[\pm\infty n-\vec b,-z n-\vec b]=1$. However, the status of the transverse gauge links is unresolved. This reflects the known fact that the gauge fixing condition (\ref{gauge:A+=0}) does not fix the gauge entirely but should be supplemented by an additional boundary condition. There are two convenient choices of boundary conditions in our case\footnote{The names selected here could be misleading, since the limit is taken along the light cone rather than along the time axis. Also, the boundary condition on the field itself is stronger than necessary: the condition on $g_T^{\mu\nu}A_\nu$ could be replaced by a weaker one on $\partial_\mu g_T^{\mu\nu}A_\nu$, as shown in~\cite{Chirilli:2015fza}. Nonetheless, for our purposes the conditions in eq.~(\ref{gauge:ret},~\ref{gauge:adv}) are sufficient.}
\begin{eqnarray}\label{gauge:ret}
\text{retarded:}&& g_T^{\mu\nu}A_\nu(-\infty n)=0,
\\\label{gauge:adv}
\text{advanced:}&& g_T^{\mu\nu}A_\nu(+\infty n)=0.
\end{eqnarray}
Clearly, each of these boundary conditions is advantageous in particular kinematics. Accordingly, we apply \textit{the retarded boundary condition for the DY operator}. That is, the transverse link at $-\infty n$ vanishes,
\begin{eqnarray}\label{def:TMDop_DY_LC}
\mathcal{U}_{\text{DY}}^{\gamma^+}(z_1,z_2,\vec b)=
\bar q(z_1 n+\vec b) \,\gamma^+ \,q(z_2n -\vec b),\qquad \text{in the retarded light-cone gauge.}
\end{eqnarray}
Whereas \textit{for the SIDIS operator we apply the advanced boundary condition}. That is, the transverse link at $+\infty n$ vanishes,
\begin{eqnarray}\label{def:TMDop_DIS_LC}
\mathcal{U}_{\text{DIS}}^{\gamma^+}(z_1,z_2,\vec b)=
\bar q(z_1 n+\vec b) \,\gamma^+ \,q(z_2n -\vec b),\qquad \text{in the advanced light-cone gauge.}
\end{eqnarray}
Thus, the operators have the same expression, each in its own gauge. In order to recover the structure of the gauge links (and hence to obtain the explicitly gauge-invariant operators), we can make a gauge transformation of the operator and subsequently replace each gauge-transformation factor by a Wilson line along the vector $n$ pointing to the selected boundary.
The OPE in the light-cone gauge has a compact form. The leading term of eq.~(\ref{U_Taylor}) is
\begin{eqnarray}
\mathcal{U}_{\text{DY}/\text{DIS}}^{\gamma^+}(z_1,z_2,\vec 0)&=&\bar q(z_1 n)\,\gamma^+\,q(z_2 n).
\end{eqnarray}
The expression for the derivative of the operator is also independent of the underlying kinematics (compare to eq.~(\ref{U_DY_der_0},~\ref{U_DIS_der_0}))
\begin{eqnarray}\label{U_der_LC}
\frac{\partial}{\partial b^\mu} \mathcal{U}_{\text{DY}/\text{DIS}}^{\gamma^+}(z_1,z_2,\vec b)\Big|_{\vec b=0}&=&
\bar q(z_1 n)(\overleftarrow{\partial_{T\mu}}-\overrightarrow{\partial_{T\mu}})\gamma^+ q(z_2n),
\end{eqnarray}
and in fact, it already gives the final expression of the correction linear in $\vec b$ in the light-cone gauge.
Let us show how the results for the LO OPE in eq.~(\ref{U_DY_der},~\ref{U_DIS_der}) are recovered starting from eq.~(\ref{U_der_LC}).
One starts by rewriting eq.~(\ref{U_der_LC}) in an explicitly gauge-invariant form. For this purpose we replace the partial derivatives in eq.~(\ref{U_der_LC}) with covariant derivatives, see eq.~(\ref{def:DandF}), by adding (and subtracting) the appropriate gluon fields
\begin{eqnarray}\label{U_der_LC+}
\frac{\partial}{\partial b^\mu} \mathcal{U}_{\text{DY}/\text{DIS}}^{\gamma^+}(z_1,z_2,\vec b)\Big|_{\vec b=0}&=&
\bar q(z_1 n)(\overleftarrow{D_{\mu}}-\overrightarrow{D_{\mu}}-ig A_\mu(z_1n)-ig A_\mu(z_2n))\gamma^+ q(z_2n).
\end{eqnarray}
To proceed further, we have to recall the boundary condition in use, written in the form
\begin{eqnarray}\label{gauge:A=F_ret}
A^\mu(x)&=&-\int_{-\infty}^0 d\sigma~F^{\mu+}(\sigma n+x),\qquad \text{in the retarded light-cone gauge,}
\\\label{gauge:A=F_adv}
A^\mu(x)&=&\int^{\infty}_0 d\sigma~F^{\mu+}(\sigma n+x),~~~\qquad \text{in the advanced light-cone gauge,}
\end{eqnarray}
where $x$ is an arbitrary point. Substituting these expressions into eq.~(\ref{U_der_LC+}) we arrive at eq.~(\ref{U_DY_der},~\ref{U_DIS_der}).
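These relations can be checked directly: in the gauge $A_+=0$ one has $F^{\mu+}(\sigma n+x)=-\frac{d}{d\sigma}A^\mu(\sigma n+x)$, and therefore
\begin{eqnarray}
-\int_{-\infty}^0 d\sigma~F^{\mu+}(\sigma n+x)=\int_{-\infty}^0 d\sigma~\frac{d}{d\sigma}A^\mu(\sigma n+x)=A^\mu(x)-A^\mu(-\infty n+x),
\end{eqnarray}
which reproduces eq.~(\ref{gauge:A=F_ret}) once the retarded boundary condition eq.~(\ref{gauge:ret}) is imposed. Eq.~(\ref{gauge:A=F_adv}) follows analogously from the advanced condition eq.~(\ref{gauge:adv}).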
\subsection{Light-cone OPE for the gluon TMD operator}
\label{sec:LO_gluon}
The small-$b$ OPE at NLO contains both quark and gluon collinear operators. The gluon operators that appear in a quark TMD are those that would appear in the small-$b$ OPE for the gluon TMD operator. Since this expansion for gluons has never been considered in the literature, we briefly describe it here.
We define the gluon TMD operator as (compare to eq.~(\ref{def:TMDop_DY},~\ref{def:TMDop_DIS}))
\begin{eqnarray}\label{def:TMDop_DY_gluon}
\mathcal{G}_{\text{DY}}^{\mu\nu}(z_1,z_2,\vec b)&=&F^{\mu+}(z_1 n+\vec b)[z_1 n+\vec b,-\infty n+\vec b]
[-\infty n-\vec b,z_2 n-\vec b] F^{\nu+}(z_2n -\vec b),
\\
\label{def:TMDop_DIS_gluon}
\mathcal{G}_{\text{DIS}}^{\mu\nu}(z_1,z_2,\vec b)&=&F^{\mu+}(z_1 n+\vec b)[z_1 n+\vec b,+\infty n+\vec b]
[+\infty n-\vec b,z_2 n-\vec b] F^{\nu+}(z_2n -\vec b),
\end{eqnarray}
where the Wilson lines are in the adjoint representation, i.e. the contraction of the color indices\footnote{This is the only color structure that appears at the leading power of TMD factorization. The so-called dipole TMD distributions, which couple to oppositely directed Wilson lines in the fundamental representation, do not appear in the factorization of SIDIS or DY processes.} is $F^A(z_1)[..]^{AB}F^B(z_2)$. The parametrization of the corresponding TMD matrix elements can be found e.g. in~\cite{Echevarria:2015uaa}.
The evaluation of the light-cone OPE for the gluon operators is completely analogous to the one performed in sec.~\ref{sec:LO_reg}. The only difference is that the quark fields are replaced by $F^{\mu+}$ and the covariant derivatives act in the adjoint representation. We obtain the following analog of eq.~(\ref{U_DY_der},~\ref{U_DIS_der})
\begin{eqnarray}\label{G_DY_der}
\frac{\partial}{\partial b^\rho} \mathcal{G}_{\text{DY}}^{\mu\nu}(z_1,z_2,\vec b)\Big|_{\vec b=0}&=&
F^{\mu+}(z_1n)\(\overleftarrow{D_\rho}[z_1n,z_2n]-[z_1n,z_2n]\overrightarrow{D_\rho}\)F^{\nu+}(z_2n)
\\\nonumber &&+ig\(\int_{-\infty}^{z_1} +\int_{-\infty}^{z_2}\)d\tau~
F^{\mu+}(z_1n)[z_1n,\tau n]F_{\rho+}(\tau n)[\tau n,z_2n]F^{\nu+}(z_2n),
\\
\label{G_DIS_der}
\frac{\partial}{\partial b^\rho} \mathcal{G}_{\text{DIS}}^{\mu\nu}(z_1,z_2,\vec b)\Big|_{\vec b=0}&=&
F^{\mu+}(z_1n)\(\overleftarrow{D_\rho}[z_1n,z_2n]-[z_1n,z_2n]\overrightarrow{D_\rho}\)F^{\nu+}(z_2n)
\\\nonumber &&-ig\(\int^{\infty}_{z_1} +\int^{\infty}_{z_2}\)d\tau~
F^{\mu+}(z_1n)[z_1n,\tau n] F_{\rho+}(\tau n)[\tau n,z_2n]F^{\nu+}(z_2n),
\end{eqnarray}
where the covariant derivatives are in the adjoint representation. As in the quark case, the only operators which contribute to the Sivers function are those in the second lines of these equations.
\section{Light-cone OPE at next-to-leading order}
\label{sec:OPENLO}
The object of this section is the calculation of the OPE for $\mathcal{U}$ up to terms linear in $\vec b$ at NLO in perturbation theory.
The OPE is realized when $\vec b^2\ll \Lambda^{-2}$ and takes the form
\begin{eqnarray}\label{OPE_gen4}
\mathcal{U}(z,\vec b)&=&\sum_n C^{\text{tw-2}}_n(z,\mathbf{L}_\mu,a_s(\mu))\otimes \mathcal{O}^{\text{tw2}}_{n}(z;\mu)
\\\nonumber &&+
b_\nu \sum_n C^{\text{tw-3}}_n(z,\mathbf{L}_\mu,a_s(\mu))\otimes \mathcal{O}^{\nu,\text{tw3}}_{n}(z;\mu)+O(\vec b^2),
\end{eqnarray}
where the $C$'s are coefficient functions, which depend on $\vec b^2$ logarithmically, $n$ enumerates all available operators at this order, and $\otimes$ is some integral convolution in the variables $z$. Here, we also introduce the notation for the coupling constant, $a_s=g^2/(4\pi)^2$, and for the logarithm combination that typically enters perturbative calculations
\begin{eqnarray}\label{def:Lmu}
\mathbf{L}_\mu=\ln\(\frac{\mu^2 \vec b^2}{4 e^{-2\gamma_E}}\).
\end{eqnarray}
The variable $\mu$ represents the scale of OPE.
The complexity of the computation of the OPE increases drastically when passing from LO to NLO in perturbative QCD. In the latter case one cannot omit the field interactions, as is done in the ordinary Taylor expansion of eq.~(\ref{U_Taylor}). Due to the propagation of fields between different points, eq.~(\ref{U_Taylor}) is modified in the presence of interactions, which can pick up additional fields from the vacuum. Moreover, the OPE with interacting fields contains all possible operators with the correct (as prescribed by the theory) quantum numbers.
An additional difficulty in the present calculation is that only a few computational methods have been tested on higher-twist operators. For twist-2 TMD operators the matching procedure is simple, because in the OPE a TMD is in one-to-one correspondence with the on-shell matrix elements over collinear-parton states.
In the case of higher-twist operators, matrix elements over collinear partons alone are not suitable for obtaining the matching coefficients, since a transverse component of momentum is needed to carry the operator indices. It can also happen that a matrix element over collinear partons is not infrared-safe and requires an additional regularization with a (specific) separation of pole contributions, see e.g.~\cite{Ji:2006vf,Chen:2017lvx}. These problems are solved using off-shell matrix elements, which is significantly more complicated, due to the fact that the higher-twist operators mix with each other via the QCD equations of motion and that off-shell colored states are not generally gauge invariant. The best method to evaluate the coefficient functions at higher twist turns out to be the \textit{background-field method}. At the diagram level, the method is equivalent to the evaluation of a generic matrix element, with the main difference that the result of the calculation is given explicitly in operator form. The method allows one to keep track of gauge properties and significantly simplifies the processing of the equations of motion. Altogether, these properties make the background-field method very effective for higher-twist calculations. In the following we concentrate on this method, for which we provide a brief general introduction in sec.~\ref{sec:intro-to-back}. The details of the calculation are given in sec.~\ref{sec:background}-\ref{sec:eval}. The treatment of rapidity divergences and renormalization needs a special discussion, which is provided in sec.~\ref{sec:rapidity_div}-\ref{sec:backrenormalization}. The whole computation is done for the DY case; the passage to the SIDIS case does not present particular difficulties and the comparison of the two cases is provided in sec.~\ref{sec:DY-SIDIS-difference}.
\subsection{OPE in background field method}
\label{sec:intro-to-back}
The background-field method is founded on the idea of mode separation. The operator matrix element between states $S_1$ and $S_2$ is defined as
\begin{eqnarray}\label{gen:func_int1}
\langle S_1|\mathcal{U}|S_2\rangle = \int \mathcal D \Phi ~\Psi^*_{S_1}[\Phi] \,\mathcal{U}[\Phi] \,\Psi_{S_2}[\Phi]\,e^{i \mathcal{S}[\Phi]},
\end{eqnarray}
where the letter $\Phi$ represents any QCD field $\{\bar q, q,A_\mu\}$, $\Psi_S$ is the wave function of the state $S$ and $\mathcal{S}$ is the action of QCD. Let us split the fields into the ``fast'' and ``slow'' (or ``short-correlated'' and ``long-correlated'' in position space terminology) components, as
\begin{eqnarray}
\Phi(x) = \varphi(x;\mu)+\phi(x;\mu).
\end{eqnarray}
Here, the ``fast'' modes $\phi$ have momenta $p>\mu$, while the ``slow'' modes have momenta $p<\mu$. The (factorization) scale $\mu$ is not explicitly defined, but it is large enough to guarantee the convergence of the perturbative series. In the following we omit the argument $\mu$ of the fields. We postulate that the physical states (hadrons) are built from the ``slow'' components, i.e. $\Psi_S[\Phi]=\Psi_S[\varphi]$, so that eq.~(\ref{gen:func_int1}) turns into
\begin{eqnarray}
\langle S_1|\mathcal{U}|S_2\rangle = \int \mathcal D \varphi\, \mathcal D \phi ~\Psi^*_{S_1}[\varphi] \,\mathcal{U}[\varphi+\phi](x) \,\Psi_{S_2}[\varphi]\,e^{i \mathcal{S}[\varphi+\phi]}.
\end{eqnarray}
In this expression the integral over ``fast'' components can be evaluated and the expression for observables has the following effective form
\begin{eqnarray}\label{gen:slow_op}
\langle S_1|\mathcal{U}|S_2\rangle = \int \mathcal D \varphi\, \Psi^*_{S_1}[\varphi] \,\widetilde{\mathcal{U}}[\varphi](x) \,\Psi_{S_2}[\varphi]\,e^{i \mathcal{S}[\varphi]},
\end{eqnarray}
where
\begin{eqnarray}
\widetilde{\mathcal{U}}[\varphi](x)=\int \mathcal D \phi ~\mathcal{U}[\varphi+\phi](x) ~e^{i \mathcal{S}[\varphi+\phi]-i \mathcal{S}[\varphi]}.
\end{eqnarray}
The mode separation then assumes that the ``slow'' fields can be treated as free fields at distances $\sim x^2$. This hypothesis is typical for effective field theories (see for instance \cite{Beneke:2002ph,Bauer:2000yr,Bauer:2001yt} for the application of similar concepts in soft-collinear effective theory (SCET) or \cite{Balitsky:2016dgz} for TMD factorization at small-$x$).
One can interpret the construction in eq.~(\ref{gen:slow_op}) as an evaluation of the perturbative QCD fields in a general parton background, which gives the method its name. After the integration over the ``fast'' fields in eq.~(\ref{gen:slow_op}), the resulting effective operator is expanded using the free-theory twist expansion, as it was done in sec.~\ref{sec:LO}. It is important to realize that in the background-field calculation the result is gauge-invariant and satisfies the QCD equations of motion at each step of the evaluation (even for each diagram). The result is also universal, that is, it is valid for all states (we do not even specify them), and thus we can operate only with the fields $\varphi$. Essentially, the background-field method is concentrated in a single definition, eq.~(\ref{gen:slow_op}).
The background-field method is an essential tool of modern small-$x$ calculations. In this case the separation of kinematic modes is based on the strong ordering in rapidity, which is a distinctive feature of the small-$x$ kinematics. To define the different modes one has to introduce a rapidity cutoff parameter $\sigma$, which separates ``fast'' ($p^+<\sigma$) and ``slow'' ($p^+>\sigma$) fields based on the value of the longitudinal component of the momentum $p^+$. Instead of the twist expansion, the calculation of the functional integral over ``fast'' fields (\ref{gen:slow_op}) is performed in the so-called shock-wave approximation. Since the procedure of separation of modes is quite general, the method can incorporate different kinematic regimes, as has been recently demonstrated in~\cite{Balitsky:2015qba, Balitsky:2016dgz}.
\subsection{QCD in background field}
\label{sec:background}
The QCD Lagrangian reads
\begin{eqnarray}
\mathcal{L}=\bar q(i\fnot \!D)q-\frac{1}{4}F_{\mu\nu}^a F^{\mu\nu}_a+\text{gauge fixing},
\end{eqnarray}
where the covariant derivative and $F_{\mu\nu}$ are defined in eq.~(\ref{def:DandF}). Following the mode separation, we split the fields as $A_\mu\to A_\mu+B_\mu$ and $q\to q+\psi$, where $\psi$ and $B_\mu$ are ``fast'' fields and $q$ and $A_\mu$ are ``slow'' (background) fields. The separation of modes in the main body of the Lagrangian is straightforward, but the gauge fixing term should be considered with caution. A particularly convenient feature of the background-field method is the possibility to choose different classes of gauge fixing for the different modes. A detailed discussion of gauge fixing in QCD with the background-field method is given in \cite{Abbott:1980hw,Abbott:1981ke}.
We choose the most convenient combination of gauges for our task. For ``fast'' components we use the background-field gauge,
\begin{eqnarray}\label{gauge:back}
(\partial_\mu \delta^{AC}+g f^{ABC}A_\mu^B)B^{\mu,C}=D_\mu[A]B^\mu=0\ ,
\end{eqnarray}
which is the analog of covariant gauge fixing in the usual QCD perturbation theory.
In particular, the propagator has the familiar form
\begin{eqnarray}\label{gauga:alpha}
\contraction{}{B}{\,(x)~}{B}
B_\mu^A(x)B_\nu^B(0)&=&\int \frac{d^dk}{(2\pi)^d}e^{-ikx}\frac{-i \delta^{AB}}{k^2+i0}\(g^{\mu\nu}-(1-\alpha) \frac{k^\mu k^\nu}{k^2+i0}\),
\end{eqnarray}
where $\alpha$ is a free gauge parameter. For the background fields we use the light-cone gauge, eq.~(\ref{gauge:A+=0}), with the retarded boundary condition, eq.~(\ref{gauge:ret}), for DY operators and the advanced boundary condition, eq.~(\ref{gauge:adv}), for SIDIS operators.
In the background-field formulation, the Lagrangian of QCD splits into three parts
\begin{eqnarray}
\mathcal{L}=\mathcal{L}[q,A]+\mathcal{L}[\psi,B]+\delta \mathcal{L},
\end{eqnarray}
where the first two terms are the usual QCD Lagrangians built from the particular modes and the last term is the ``fast-slow'' mode interaction,
\begin{eqnarray}\label{QCD:deltaL}
\delta \mathcal{L}=g\(\bar q \fnot B\psi+\bar \psi \fnot B q+\bar \psi \fnot A \psi\)+\delta \mathcal{L}_{ABB}+\delta \mathcal{L}_{AABB}+\delta \mathcal{L}_{ABBB},
\end{eqnarray}
where $\delta \mathcal{L}_{ABB}$ ($\delta \mathcal{L}_{ABBB}$) is the interaction of a single field $A_\mu$ with two (three) fields $B_\mu$ and $\delta \mathcal{L}_{AABB}$ is the interaction of two fields $A_\mu$ with two fields $B_\mu$.
These terms depend on the gauge fixing condition. For our calculation we need only the $\delta \mathcal{L}_{ABB}$ interaction, which reads
\begin{eqnarray}
\delta\mathcal{L}_{ABB}&=&-gf^{ABC} A_\mu^A (\partial_\alpha B^B_\beta)B_\gamma^C\(2g^{\mu\beta} g^{\alpha \gamma}-g^{\mu \alpha}g^{\beta \gamma}-\frac{1+\alpha}{\alpha} g^{\mu\gamma}g^{\alpha \beta}\).
\end{eqnarray}
The rest of the terms can be found in \cite{Abbott:1980hw}. In the following, we consider the case $\alpha=1$, which corresponds to the ``Feynman-gauge version'' of the background gauge.
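For reference, at $\alpha=1$ the vertex takes the form
\begin{eqnarray}
\delta\mathcal{L}_{ABB}\big|_{\alpha=1}&=&-gf^{ABC} A_\mu^A (\partial_\alpha B^B_\beta)B_\gamma^C\(2g^{\mu\beta} g^{\alpha \gamma}-g^{\mu \alpha}g^{\beta \gamma}-2 g^{\mu\gamma}g^{\alpha \beta}\),
\end{eqnarray}
which is the expression used in the evaluation of the diagrams below.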
\subsection{Evaluation of diagrams}
\label{sec:eval}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.99\textwidth]{NullDiagrams.eps}
\caption{\label{fig:0} Examples of diagrams that vanish in our scheme of calculation. Diagrams (1) and (2) vanish due to $A_+=0$. Diagram (3) is proportional to $1-\alpha$ and vanishes at $\alpha=1$. Diagrams (4) and (5) vanish since the dimensionally regularized loop integral does not have a scale. The bold lines denote the propagators of quantum fields. The thin lines with bubbles are background fields. The double dashed lines are Wilson lines and the crosses show that they point to light-cone infinity.}
\end{center}
\end{figure}
We would like to evaluate the effective operator in eq.~(\ref{gen:slow_op}) up to twist-3 corrections, at order $a_s$. The computation proceeds by expanding the interaction part of the exponent in eq.~(\ref{gen:slow_op}) and integrating out the ``fast'' modes by the Gaussian integration formula, i.e. we obtain the Feynman diagrams with the background fields as external sources. The divergences of the loop integrals are regularized
by dimensional regularization and the $\delta$-regulator as in \cite{Echevarria:2016scs,Echevarria:2015usa,Echevarria:2015byo}, which allows us to use the renormalization factors of eq.~(\ref{renorm:R_1loop},~\ref{renorm:Z_1loop}).
In summary, the calculation follows this path:
\begin{itemize}
\item The dynamical fields are in background gauge, eq.~(\ref{gauge:back}) with the parameter $\alpha =1$, eq.~(\ref{gauga:alpha}).
\item The background fields are in light-cone gauge, eq.~(\ref{gauge:A+=0}) with the retarded eq.~(\ref{gauge:A=F_ret}) (advanced eq.~(\ref{gauge:A=F_adv})) boundary condition for DY (SIDIS) operator.
\item The UV and collinear divergences are regularized by the dimensional regularization with $d=4-2\epsilon$. We use the conventional $\overline{\text{MS}}$ scheme with $(e^{-\gamma_E}/4\pi)^\epsilon$ factor for each $a_s=g^2/(4\pi)^2$.
\item The rapidity divergences are regularized by $\delta$-regularization, defined in \cite{Echevarria:2016scs}. See detailed discussion in sec.~\ref{sec:rapidity_div}.
\end{itemize}
Within this scheme many diagrams vanish. Some examples of null diagrams are shown in fig.~\ref{fig:0}. More specifically, we have the following cases of vanishing diagrams:
\textit{(i)} The diagrams with the background field coupled directly, or through a sub-graph, to the Wilson lines, such as diagrams (1) and (2) in fig.~\ref{fig:0}. They vanish due to the light-cone gauge fixing, $A_+=0$. \textit{(ii)} The diagrams with a ``Wilson-lines reducible subgraph'', such as diagram (3) in fig.~\ref{fig:0}. They are proportional to $1-\alpha$ and thus vanish at $\alpha=1$. \textit{(iii)} The diagrams without interaction of fields at different transverse positions (i.e. with $\vec b$ and $-\vec b$), such as diagrams (4) and (5) in fig.~\ref{fig:0}. They are zero in dimensional regularization, since the loop integrals in such diagrams are scaleless.
The remaining contributions are conveniently ordered with respect to the number of background fields. Since the number of fields in the operator is less than or equal to the twist of the operator, only the diagrams with two or three background fields contribute at the considered power of the OPE. There are 6 non-vanishing diagrams at this order (4 of them have charge-conjugated partners). The diagrams with two quark fields are shown in fig.~\ref{fig:2point}. The diagrams with two quark fields and a gluon field are shown in fig.~\ref{fig:3point}. There are also diagrams (with two and three fields) that mix the quark operator with the gluon operator, as in fig.~\ref{fig:QG}. In principle, there could also be diagrams with more gluon insertions, which are to be combined with a single gluon insertion into a gauge-invariant combination $F_{\mu\nu}$ (with both indices transverse). However, we recall that only $F_{\mu+}$ contributes to operators of twist-3, and in the light-cone gauge $F_{\mu+}=-\partial_+A_\mu$. Thus, such diagrams should not be considered at twist-3 accuracy.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\textwidth]{2PointDiagrams.eps}
\caption{\label{fig:2point} The non-vanishing diagrams with two insertions of background fields. The bold lines denote the propagators of quantum fields. The thin lines with bubbles are background fields. The double dashed lines are Wilson lines and crosses show that they are pointing to light-cone infinity.}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{3PointDiagrams.eps}
\caption{\label{fig:3point} The non-vanishing diagrams with three insertions of background fields. The bold lines denote the propagators of quantum fields. The thin lines with bubbles are background fields. The double dashed lines are Wilson lines and crosses show that they are pointing to light-cone infinity.}
\end{center}
\end{figure}
The computation of the diagrams is almost elementary. Let us show here the evaluation of the simplest diagram, diagram A. A similar evaluation (with the only difference in the path of the Wilson lines) is presented in~\cite{Balitsky:1987bk}, which allows an instructive comparison. Also, in ref.~\cite{Vladimirov:2014aja} diagram A (and diagram B) has been calculated in momentum space for all values of $\vec b$, which allows us to match the scheme factors. Importantly, diagram A plays a special role in TMD physics, since it is the only diagram which has rapidity divergences, as discussed in the next section. In appendix~\ref{app:exampleDiag} we also present a detailed explanation of the computation technique for one of the most difficult diagrams (diagram E).
The diagram A comes from the following contraction of fields in eq.~(\ref{gen:slow_op})
\begin{eqnarray}\label{diagA:0}
&&\widetilde{\mathcal{U}}_A=
\\\nonumber &&\qquad
\contraction[6pt]{
\Big\{\bar q(z_1 n+\vec b)\Big[ig \int_{-\infty}^{z_1} d\sigma n^\mu t^A B^A_\mu(n \sigma+\vec b)\Big]\gamma^+}{\psi}{(z_2 n-\vec b)\Big\}\Big(ig\int d^d y}{ \bar \psi}
\contraction[8pt]{\Big\{\bar q(z_1 n+\vec b)\Big[ig \int_{-\infty}^{z_1} d\sigma n^\mu t^A }{B^A_\mu}{(n \sigma+\vec b)\Big]\gamma^+\psi(z_2 n-\vec b)\Big\}\Big(ig\int d^d y \bar \psi(y)}{\fnot B}
\Big\{\bar q(z_1 n+\vec b)\Big[ig \int_{-\infty}^{z_1} d\sigma n^\mu t^A B^A_\mu(n \sigma+\vec b)\Big]\gamma^+\psi(z_2 n-\vec b)\Big\}\Big(ig\int d^d y \bar \psi(y)\fnot B(y)q(y)\Big),
\end{eqnarray}
where the factor in the square brackets is a part of the Wilson line and the factor in the round brackets is a part of $\delta \mathcal{L}$ (see eq.~(\ref{QCD:deltaL})). Note that here we consider the DY operator, which dictates the integration limits over $\sigma$. The propagators in dimensional regularization (with $d=4-2\epsilon$) are
\begin{eqnarray}
\contraction{}{\psi_i}{(x)}{\bar \psi}\psi_i(x)\bar \psi_j(0)&=&\frac{\Gamma(2-\epsilon)}{2\pi^{d/2}}\frac{i\fnot x_{ij}}{(-x^2+i0)^{2-\epsilon}},
\\
\contraction{}{B}{\,(x)~}{B}
B_\mu^a(x)B_\nu^b(0)&=&\frac{\Gamma(1-\epsilon)}{4\pi^{d/2}}\frac{-g_{\mu\nu}\delta^{ab}}{(-x^2+i0)^{1-\epsilon}},
\end{eqnarray}
where the gluon propagator is taken with $\alpha=1$. Explicitly, the diagram reads
\begin{eqnarray}
\widetilde{\mathcal{U}}_A&=&
-ig^2C_F\frac{\Gamma(2-\epsilon)\Gamma(1-\epsilon)}{8 \pi^d}
\\\nonumber &&\int_{-\infty}^{z_1} d\sigma \int d^d y\,\bar q(z_1 n+\vec b)\frac{2\gamma^+ y^+}{(-(y-nz_2+\vec b)^2+i0)^{2-\epsilon}(-(y-n \sigma-\vec b)^2+i0)^{1-\epsilon}} q(y),
\end{eqnarray}
where we have simplified gamma- and color-algebra.
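The two denominators are combined with the standard Feynman parameterization
\begin{eqnarray}
\frac{1}{A^{2-\epsilon}B^{1-\epsilon}}=\frac{\Gamma(3-2\epsilon)}{\Gamma(2-\epsilon)\Gamma(1-\epsilon)}\int_0^1 d\alpha\,\frac{\bar\alpha^{1-\epsilon}\alpha^{-\epsilon}}{(\bar \alpha A+\alpha B)^{3-2\epsilon}},\qquad \bar\alpha=1-\alpha,
\end{eqnarray}
with $A=-(y-nz_2+\vec b)^2+i0$ and $B=-(y-n \sigma-\vec b)^2+i0$, which explains the origin of the $\Gamma(3-2\epsilon)$ prefactor and of the weights $\alpha^{-\epsilon}\bar\alpha^{1-\epsilon}$ in the following expression.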
To proceed further, we combine the propagators with the usual Feynman trick, introducing a single Feynman parameter $\alpha$. The resulting denominator is $(-y^2+2y^+(\sigma \alpha +(1-\alpha)z_2)-2(yb)(1-2\alpha)+\vec b^2)$. We diagonalize it by the shift $y^\mu \to y^\mu+n^\mu(\alpha \sigma+(1-\alpha)z_2)-(1-2\alpha)b^\mu$ and obtain
\begin{eqnarray}\label{diagA:2}
\widetilde{\mathcal{U}}_A&=&-ig^2C_F\frac{\Gamma(3-2\epsilon)}{4 \pi^d}
\\\nonumber &&
\int_{-\infty}^{z_1} d\sigma
\int d^d y\int_0^1 d\alpha \bar q(z_1n+\vec b)\frac{\gamma^+ y^+ \alpha^{-\epsilon}\bar \alpha^{1-\epsilon}}{(-y^2+4 \alpha \bar \alpha \vec b^2+i0)^{3-2\epsilon}} q(y+nz_{2\sigma}^{\alpha}-(1-2\alpha)\vec b),
\end{eqnarray}
where $\vec b^2=-b^2>0$, $\bar \alpha=1-\alpha$ and $z_{2\sigma}^\alpha=z_2 \bar \alpha+\sigma \alpha $. Starting from here we use the following notation
\begin{eqnarray}
z_{ij}^\alpha=z_i \bar \alpha+z_j \alpha,\qquad \bar \alpha =1-\alpha.
\end{eqnarray}
If the index $i$ (or $j$) is replaced by $\sigma$, then $z_i$ (or $z_j$) is replaced by $\sigma$.
In order to evaluate the integral over $y$, we recall that the background field is a classical field and expressions of the form of eq.~(\ref{diagA:2}) should be understood as generating functions for the whole tower of twist operators. Therefore, \textit{we are allowed to perform the twist expansion under the loop-integral sign.} In the considered case, we make the Taylor expansion at $y^\mu=0$, $q(y+x)=(1+y^\mu\partial_\mu+y^\mu y^\nu/2\, \partial_\mu\partial_\nu+...)\, q(x)$. The loop integration can be performed for each term in the series. The necessary loop integral reads
\begin{eqnarray}
\int d^d y\,\frac{y^{\mu_1}...y^{\mu_{2n}}}{(-y^2+X+i0)^{3-2\epsilon}}&=&-i\pi^{d/2}\frac{\Gamma(1-\epsilon-n)}{\Gamma(3-2\epsilon)}\frac{(-1)^n g_s^{\mu_1...\mu_{2n}}}{2^n X^{1-\epsilon-n}},
\end{eqnarray}
where $g_s$ is a completely symmetric composition of metric tensors. For an odd number of indices the loop-integral is zero.
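The $n=0$ case of this master integral can be cross-checked against the textbook result: writing $(-y^2+X+i0)^{-(3-2\epsilon)}$ in terms of $(y^2-X-i0)^{-(3-2\epsilon)}$ and Wick-rotating, one obtains
\begin{eqnarray}\nonumber
\int d^d y\frac{1}{(-y^2+X+i0)^{3-2\epsilon}}=-i\pi^{d/2}\frac{\Gamma(1-\epsilon)}{\Gamma(3-2\epsilon)}X^{\epsilon-1},
\end{eqnarray}
since $\Gamma(a-d/2)=\Gamma(1-\epsilon)$ for $a=3-2\epsilon$ and $d=4-2\epsilon$. The tensor integrals with $2n$ indices then follow by the standard symmetrization, producing the factors $(-1)^n g_s^{\mu_1...\mu_{2n}}/2^n$ and shifting $\Gamma(1-\epsilon)\to\Gamma(1-\epsilon-n)$.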
Metric tensors produced by the loop-integration can contract derivatives, vectors $b^\mu$ and $n^\mu$. Each term in the series should be sorted with respect to its twist. The rule of thumb is that each transverse derivative increases the twist of an operator, while the light-cone derivative does not. Thus, higher-derivative terms can be dropped. Alternatively, one can count the powers of the vector $\vec b$. In the current calculation, we evaluate up to terms linear in $\vec b$. Note that, strictly speaking, we should also expand the fields in powers of $\vec b$, but this does not affect the diagram evaluation and can be postponed until a later stage.
The expression in eq.~(\ref{diagA:2}) has a very simple numerator, which is linear in $y$. So, only odd terms of the Taylor series contribute. Moreover, already the second term in the expansion, the one with three derivatives $\sim y^\mu y^\nu y^\rho \partial_\mu \partial_\nu\partial_\rho q/3!$, vanishes after contraction. Indeed, it generates $\partial_+\partial^2 q$, which is at least twist-4 (on top of that, this contribution is proportional to $\vec b^2$). Therefore, we consider only the single-derivative term of the series and obtain
\begin{eqnarray}\label{diagA:3}
\widetilde{\mathcal{U}}_A&=&
2a_sC_F\Gamma(-\epsilon) \vec b^{2\epsilon}
\int_{-\infty}^{z_1} d\sigma
\int_0^1 d\alpha \,\bar \alpha~\bar q(nz_1+\vec b)\gamma^+ \overrightarrow{\partial_+}q(n z^\alpha_{2\sigma}-(1-2\alpha)\vec b)+O(\vec b^2\partial^2q).~~
\end{eqnarray}
Charge-conjugated diagrams can be evaluated independently, or obtained from the direct diagrams by reversing the order of the field arguments and replacing $z_1\leftrightarrow z_2$. E.g. the diagram A$^*$ reads
\begin{eqnarray}
\widetilde{\mathcal{U}}_{A^*}&=&2a_sC_F\vec b^{2\epsilon} \Gamma(-\epsilon)
\int_{-\infty}^{z_2} d\sigma
\int_0^1 d\alpha\,\bar\alpha ~\bar q(z_{1\sigma}^\alpha n+(1-2\alpha)\vec b)\overleftarrow{\partial_+}\gamma^+ q(z_2n-\vec b)+O(\vec b^2\partial^2\bar q).~~
\end{eqnarray}
These expressions contain rapidity divergences, which are discussed in the next section. All other diagrams are evaluated similarly.
The expression for the diagram $A$ in SIDIS kinematics is almost identical to the DY case. The only modification is the lower limit of the integration over $\sigma$ in eq.~(\ref{diagA:0}), which must be changed to $(+\infty)$ for the SIDIS case. Such a replacement does not affect the evaluation of the diagram, and thus the analog of eq.~(\ref{diagA:3}) in SIDIS kinematics is obtained by replacing $(-\infty)$ by $(+\infty)$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\textwidth]{QGDiagrams.eps}
\caption{\label{fig:QG} The non-vanishing diagrams that mix quark and gluon operators. The bold lines denote the propagators of quantum fields. The thin lines with bubbles are background fields. The double dashed lines are Wilson lines and crosses show that they are pointing to light-cone infinity.}
\end{center}
\end{figure}
\subsection{Treatment of rapidity divergences}
\label{sec:rapidity_div}
The rapidity divergences appear due to the localization of the gluon field in the transverse plane at light-cone infinity \cite{Vladimirov:2017ksc}.
There are three diagrams that have interactions with a Wilson line and are thus potentially rapidity divergent.
These are diagrams A, C and D. However, according to the general counting rule \cite{Vladimirov:2017ksc}, only diagram A is rapidity divergent. In this section, we demonstrate how rapidity divergences arise in the background-field calculation.
The fact that diagram A is rapidity divergent is well-known. It has been calculated in numerous works, see e.g. the discussions in refs.~\cite{Echevarria:2015usa,Collins:2011zzd,GarciaEchevarria:2011rb,Gutierrez-Reyes:2017glx,Vladimirov:2014aja}. In all these works, the diagrams have been calculated in momentum space, where the loop-integral
is explicitly divergent. In our case, the loop-integral in diagram A has been evaluated without any problem; however, as we demonstrate shortly, the result of the integral in eq.~(\ref{diagA:3}) is ambiguous, and the resolution of this ambiguity gives rise to the rapidity divergence.
The ambiguity in diagram A is hidden in the argument of the quark field. Indeed, its value at point $(\alpha,\sigma)=(0,-\infty)$ depends on the path used to approach this point. In particular, we find
\begin{eqnarray}
\lim_{\alpha\to 0}\lim_{\sigma\to -\infty} q(n z^\alpha_{2\sigma})&=&
q(-\infty)=0,
\\
\lim_{\sigma\to -\infty}\lim_{\alpha\to 0}q(n z^\alpha_{2\sigma})&=&
q(z_2),
\end{eqnarray}
and the integrations over $\sigma$ and $\alpha$ do not commute in the vicinity of $(0,-\infty)$.
In order to resolve the ambiguity, the dependence on $\alpha$ and $\sigma$ should be separated. Let us rewrite eq.~(\ref{diagA:3}) as
\begin{eqnarray}\label{diagA:4}
\widetilde{\mathcal{U}}_A&=&
2a_sC_F\Gamma(-\epsilon) \vec b^{2\epsilon}
\int_{-\infty}^{z_1} d\sigma
\int_0^1 d\alpha \,\frac{\bar \alpha}{\alpha}~\bar q(nz_1)\gamma^+ \frac{\partial}{\partial \sigma} q(n z^\alpha_{2\sigma}),
\end{eqnarray}
where we set $\vec b$ in the arguments of the fields to $\vec 0$ for demonstration purposes (the presence of $\vec b$ in the arguments does not change the treatment of the rapidity divergence, and we restore it at the end of the section). In eq.~(\ref{diagA:4}) the ambiguity at $(0,-\infty)$ is enforced by the divergence of the integrand at $\alpha\to 0$. We isolate the ambiguous part of the diagram by splitting the integration into two parts
\begin{eqnarray}
\widetilde{\mathcal{U}}_A=\widetilde{\mathcal{U}}^{\text{reg}}_A+\widetilde{\mathcal{U}}^{\text{sing}}_A,
\end{eqnarray}
where
\begin{eqnarray}\label{diagA:reg1}
\widetilde{\mathcal{U}}^{\text{reg}}_A&=&2a_sC_F\Gamma(-\epsilon) \vec b^{2\epsilon}
\int_{z_2}^{z_1} d\sigma
\int_0^1 d\alpha \,\frac{\bar \alpha}{\alpha}~\bar q(nz_1)\gamma^+ \frac{\partial}{\partial \sigma} q(n z^\alpha_{2\sigma}),
\\\label{diagA:sing1}
\widetilde{\mathcal{U}}^{\text{sing}}_A&=&2a_sC_F\Gamma(-\epsilon) \vec b^{2\epsilon}
\int_{-\infty}^{z_2} d\sigma
\int_0^1 d\alpha \,\frac{\bar \alpha}{\alpha}~\bar q(nz_1)\gamma^+ \frac{\partial}{\partial \sigma} q(n z^\alpha_{2\sigma}).
\end{eqnarray}
The regular part does not contain the problematic point and thus the order of integration is irrelevant. Taking the integral over $\sigma$ by parts, we obtain
\begin{eqnarray}
\widetilde{\mathcal{U}}^{\text{reg}}_A&=&2a_sC_F\Gamma(-\epsilon) \vec b^{2\epsilon}
\int_0^1 d\alpha \,\frac{\bar \alpha}{\alpha}~\Big[\bar q(nz_1)\gamma^+ q(n z^\alpha_{21})-\bar q(nz_1)\gamma^+ q(n z_{2})\Big].
\end{eqnarray}
This expression is regular at $\alpha\to 0$, since $z_{21}^{\alpha=0}=z_2$, and it is the position-space form of the well-known ``plus''-distribution.
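To see the plus-distribution structure explicitly, one may (as a sketch) take the forward matrix element of this expression using the parametrization of eq.~(\ref{def:f1}) of sec.~\ref{sec:coll-quark}. Since $z_1-z_{21}^\alpha=\bar\alpha(z_1-z_2)$, the bracket produces
\begin{eqnarray}\nonumber
\int_0^1 d\alpha\,\frac{\bar\alpha}{\alpha}\int dx\Big(e^{ix\bar\alpha(z_1-z_2)p^+}-e^{ix(z_1-z_2)p^+}\Big)f_1(x),
\end{eqnarray}
where the difference of exponentials vanishes linearly at $\alpha\to 0$, so the $\alpha$-integral is finite, exactly as for the usual $(\ldots)_+$ prescription in momentum space.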
To evaluate the singular part we introduce a regulator. Here, we use the $\delta$-regularization, which consists in the following modification of the Wilson line
\begin{eqnarray}
P\exp\(ig \int_{-\infty}^z d\sigma A_+(n\sigma+x)\) \to P\exp\(ig \int_{-\infty}^z d\sigma A_+(n\sigma+x)e^{-\delta |\sigma|}\) ,
\end{eqnarray}
where $\delta >0$. Such a modification breaks gauge invariance by power corrections; therefore, only the limit $\delta \to 0$ is gauge invariant. For a detailed discussion of this issue we refer to \cite{Echevarria:2015byo}. In $\delta$-regularization the interaction vertex with the Wilson line, as in eq.~(\ref{diagA:0}), receives a factor $e^{\sigma \delta}$, which passes through the whole calculation untouched and appears in the integrand of eq.~(\ref{diagA:sing1}). With such a factor the ambiguity is resolved, because the integrand vanishes at $\sigma\to-\infty$ irrespective of the value of $\alpha$. In order to evaluate it, we make the change of variable $\tau=\alpha(\sigma-z_2)$ and obtain
\begin{eqnarray}\label{diagA:sing2}
\widetilde{\mathcal{U}}^{\text{sing}}_A&=&2a_sC_F\Gamma(-\epsilon) \vec b^{2\epsilon}
\int_{-\infty}^{0} d\tau
\int_0^1 d\alpha \,e^{\delta \frac{\tau}{\alpha}}\frac{\bar \alpha}{\alpha}~\bar q(nz_1)\gamma^+ \frac{\partial}{\partial \tau} q(n (z_2+\tau)).
\end{eqnarray}
The integral over $\alpha$ is singular in the limit $\delta\to 0$
\begin{eqnarray}
\int_0^1 d\alpha \,e^{\delta \frac{\tau}{\alpha}}\frac{\bar \alpha}{\alpha}\sim \ln\delta.
\end{eqnarray}
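This estimate can be made precise. For $\tau<0$, setting $a=-\delta\tau>0$ and using the exponential integral $E_1(a)=\int_a^\infty dt\, e^{-t}/t$, one finds in closed form
\begin{eqnarray}\nonumber
\int_0^1 d\alpha \,e^{\delta \frac{\tau}{\alpha}}\frac{\bar \alpha}{\alpha}=(1+a)E_1(a)-e^{-a}=-\ln(-\delta\tau)-\gamma_E-1+O(\delta\ln\delta),
\end{eqnarray}
which makes the logarithm of $\delta$ explicit.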
The logarithm of $\delta$ represents the rapidity singularity. In order to evaluate the expression (\ref{diagA:sing2}) explicitly, we rewrite
\begin{eqnarray}
q(n (z_2+\tau))=e^{i\tau (n\cdot\hat{p}_q)}q(n z_2),
\end{eqnarray}
where $(\hat{p}_q)_\mu=-i\overrightarrow{\partial_\mu}$ is the momentum operator acting on the quark field. Then the integral (\ref{diagA:sing2}) can be taken formally
\begin{eqnarray}
\int_{-\infty}^{0} d\tau\int_0^1 d\alpha e^{\delta \frac{\tau}{\alpha}}\frac{\bar \alpha}{\alpha}\frac{\partial}{\partial \tau}e^{i\tau (n\cdot\hat{p}_q)}
&=&-1+\(1-\frac{i\delta}{(n\cdot\hat{p}_q)}\)\ln\(\frac{\delta+i(n\cdot\hat{p}_q)}{\delta}\)
\\\nonumber &=&-1-\ln\(\frac{\delta}{i(n\cdot\hat{p}_q)}\)+O(\delta).
\end{eqnarray}
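As an intermediate step of this computation, the $\tau$-integral can be taken first. Using $\partial_\tau e^{i\tau(n\cdot\hat{p}_q)}=i(n\cdot\hat{p}_q)e^{i\tau(n\cdot\hat{p}_q)}$, one has
\begin{eqnarray}\nonumber
\int_{-\infty}^{0} d\tau\, e^{\delta\frac{\tau}{\alpha}}\frac{\partial}{\partial \tau}e^{i\tau (n\cdot\hat{p}_q)}=\frac{i\alpha (n\cdot\hat{p}_q)}{\delta+i\alpha(n\cdot\hat{p}_q)},
\end{eqnarray}
after which the remaining $\alpha$-integral of $\bar\alpha/\alpha$ times this expression is elementary and reproduces the result above.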
The singular part of the diagram A is
\begin{eqnarray}\label{diagA:sing3}
\widetilde{\mathcal{U}}^{\text{sing}}_A&=&2a_sC_F\Gamma(-\epsilon) \vec b^{2\epsilon}\(-1-\ln\(\frac{\delta}{i(n\cdot\hat{p}_q)}\)\)\bar q(nz_1)\gamma^+ q(n z_2).
\end{eqnarray}
This expression coincides literally (including the imaginary part) with the calculation of the rapidity divergent part in $\delta$-regularization in momentum space \cite{Vladimirov:2014aja,Echevarria:2016scs}.
The same method can be used when the positions of the fields are shifted by $\vec b$. The results for the diagrams A and A$^*$ can be written in the form
\begin{eqnarray}\label{diagA:final}
\widetilde{\mathcal{U}}_A&=&2a_sC_F\Gamma(-\epsilon) \vec b^{2\epsilon}\Bigg\{\int_0^1 d\alpha \frac{\bar \alpha}{\alpha} \Big[\mathcal{U}^{\gamma^+}(z_1,z_{21}^\alpha;\bar \alpha \vec b)-\mathcal{U}^{\gamma^+}(z_1,z_{2};\vec b)\Big]
\\\nonumber && \qquad\qquad\qquad\qquad\qquad\qquad-\(1+\ln\(\frac{\delta}{i(n\cdot\hat{p}_q)}\)\)\mathcal{U}^{\gamma^+}(z_1,z_{2};\vec b)\Bigg\}+O(\vec b^2\partial^2 q),
\\\label{diagA:final*}
\widetilde{\mathcal{U}}_{A^*}&=&2a_sC_F\Gamma(-\epsilon) \vec b^{2\epsilon}\Bigg\{\int_0^1 d\alpha \frac{\bar \alpha}{\alpha} \Big[\mathcal{U}^{\gamma^+}(z_{12}^\alpha,z_{2};\bar \alpha \vec b)-\mathcal{U}^{\gamma^+}(z_1,z_{2};\vec b)\Big]
\\\nonumber &&\qquad\qquad\qquad\qquad\qquad\qquad -\(1+\ln\(\frac{\delta}{i(n\cdot\hat{p}_{\bar q})}\)\)\mathcal{U}^{\gamma^+}(z_1,z_{2};\vec b)\Bigg\}+O(\vec b^2\partial^2 q),
\end{eqnarray}
where $(\hat{p}_{\bar q})_\mu=-i\overleftarrow{\partial_\mu}$ is the momentum operator acting on the anti-quark field. Note that we have added a total shift $\sim \alpha \vec b$ to the first operators to make the expression more compact. Including such a shift does not affect the expression for the TMD distribution, since it is proportional to the difference between the momenta of the initial and final states. While this difference vanishes for TMD distributions, it does not for generalized TMD distributions (GTMDs).
\subsection{Renormalization}
\label{sec:backrenormalization}
Performing the evaluation of all the other diagrams in a similar manner (see an explicit example for diagram E in appendix \ref{app:exampleDiag}), we obtain the OPE for the bare TMD operator, which can schematically be written as
\begin{eqnarray}\label{bare_OPE}
\widetilde{\mathcal{U}}(z_1,z_2;\vec b)&=&\sum_{i}\Big[1_i+a_s\Gamma(-\epsilon)\vec b^{2\epsilon} \tilde C_i^{\text{tw2}}+O(a_s^2)\Big]\otimes \mathcal{O}_{i,\text{tw2}}(z_1,z_2)
\\\nonumber &&+b_\mu\sum_{i}\bigg[
1_i+a_s\Gamma(-\epsilon)\vec b^{2\epsilon} \tilde C_i^{\text{tw3}}+O(a_s^2)\bigg]\otimes \mathcal{O}^\mu_{i,\text{tw3}}(z_1,z_2)
+O(\vec b^2),
\end{eqnarray}
where the index $i$ enumerates all operators that enter the expression, $\otimes$ is some integral convolution in the light-cone position variables $z$, and $1_i=1 (0)$ for the operators that contribute (do not contribute) at LO. The coefficients $\tilde C$ depend on $\epsilon$, $\delta$ and the light-cone positions $z_{1,2}$; the dependence on $\vec b$ is concentrated entirely in the factors $\vec b^{2\epsilon}$. The explicit form of each term in eq.~(\ref{bare_OPE}) is rather lengthy. We present it diagram-by-diagram (since there is practically no simplification in the diagram sum) in appendix \ref{app:diag-by-diag}.
The bare OPE in eq.~(\ref{bare_OPE}) requires renormalization as in eq.~(\ref{def:renormalization}), i.e. both sides of eq.~(\ref{bare_OPE}) are to be multiplied by $Z^{-1}_2Z_q^{TMD}R_q$, whose LO expressions are given in eqs.~(\ref{renorm:R_1loop}) and~(\ref{renorm:Z_1loop}). We recall that this renormalization is universal, in the sense that it is common for all terms of the small-$b$ expansion and for the various Lorentz structures of the TMD operator. An example of this universality is already provided by diagram A, discussed in the previous section. Indeed, according to eqs.~(\ref{diagA:final},~\ref{diagA:final*}), the rapidity divergence enters the expression multiplying the bare TMD operator $\mathcal{U}(z_1,z_2;\vec b)$. In other words, we can extract the rapidity divergent terms from eq.~(\ref{bare_OPE}) and write it as
\begin{eqnarray}
\widetilde{\mathcal{U}}(z_1,z_2;\vec b)&=&\Big[
1-2 a_s C_F \Gamma(-\epsilon)\vec b^{2\epsilon} \ln\(\frac{\delta^2}{(p^+)^2}\)\Big]\mathcal{U}(z_1,z_2;\vec b)+a_s(\text{rapidity finite terms}),
\end{eqnarray}
where $p^+$ is the momentum of the parton\footnote{In the GTMD case, initial and final partons have different momenta. We cannot specify which momentum appears in the soft factor in the absence of a process and a factorization theorem which would fix the kinematic scales. Nonetheless, in any case, the rapidity divergences are renormalized by the factor $R_q$, possibly leaving extra terms of the form $\ln(p_{q}^+/p_{\bar q}^+)$.}. Multiplying it by $R_q$, given in eq.~(\ref{renorm:R_1loop}), \textit{the logarithm of $\delta$ cancels for all terms of the small-$b$ expansion to all orders of $\epsilon$}. To the best of our knowledge, this is the first explicit demonstration of the renormalization of rapidity divergences of the TMD operator at higher twists.
The renormalization of eq.~(\ref{bare_OPE}) makes this expression finite. However, the coefficients $\tilde C$ contain singularities in $\epsilon$. These are collinear singularities and are compensated by the UV behavior of the light-cone operators. To remove them explicitly we replace the bare operators on the r.h.s. by the renormalized operators, $\mathcal{O}^{bare}=Z^{-1}\otimes \mathcal{O}^R(\mu)$. The factor $Z^{-1}$, convoluted with the coefficient function, removes the remaining poles in $\epsilon$.
Concluding, the renormalized expression for small-$b$ OPE has the form
\begin{eqnarray}\label{renorm:final}
\widetilde{\mathcal{U}}(z_1,z_2;\vec b;\mu,\zeta)&=&\sum_{i}\Big[1_i+a_s(\mu)C_i^{\text{tw2}}(\mu,\zeta)+O(a_s^2)\Big]\otimes \mathcal{O}_{i,\text{tw2}}(z_1,z_2;\mu)
\\\nonumber &&\qquad +b_\mu\sum_i\big[
1_i+a_s(\mu)C_i^{\text{tw3}}(\mu,\zeta)+O(a_s^2)\big]\otimes \mathcal{O}^\mu_{i,\text{tw3}}(z_1,z_2;\mu)
+O(\vec b^2),
\end{eqnarray}
where the operators are renormalized at the scales $\mu$ and $\zeta$, and for simplicity we have set the renormalization scale of the light-cone operators to be the same as for the TMD operator. The expressions for the coefficient functions at NLO for any twist can be written as
\begin{eqnarray}\label{renorm:final-final}
C_i^{\text{tw-n}}(\mu,\zeta)&=&\Bigg\{\Gamma(-\epsilon)\vec b^{2\epsilon}\mu^{2\epsilon}e^{-\epsilon \gamma_E} \Big[\tilde C_i^{\text{tw-n}}+2 C_F\(\ln\(\vec b^2\delta^2\frac{\zeta}{(p^+)^2}\)-\psi(-\epsilon)+\gamma_E\)\Big]
\\\nonumber &&\qquad\qquad\qquad\qquad\qquad\qquad\qquad
-C_F\(\frac{2}{\epsilon^2}+\frac{3+2\ln(\mu^2/\zeta)}{\epsilon}\)\Bigg\}_{\epsilon-\text{finite}},
\end{eqnarray}
where the rapidity divergences in $\tilde C_i^{\text{tw-n}}$ are explicitly canceled and we have expressed the renormalization factors in the $\overline{\text{MS}}$-scheme, see eqs.~(\ref{renorm:R_1loop},~\ref{renorm:Z_1loop}). With this formula it is straightforward to obtain the coefficient functions of the small-$b$ OPE in coordinate space.
However, these expressions are of little practical use, since one usually operates in terms of momentum fractions $x$ and the corresponding collinear distributions. The transition to the distributions and the corresponding expressions are discussed in sec.~\ref{sec:TOdistr}.
\subsection{Difference in the evaluation of DY and SIDIS operators}
\label{sec:DY-SIDIS-difference}
The operators for the DY and SIDIS initiated TMD distributions differ by the geometry of the Wilson lines. This difference affects the calculation in two aspects. The first one is the explicit expression for the diagrams that have an interaction with the Wilson line, such as diagrams A, C and E. The second one is the preferred boundary condition for the gauge fixing of the background field: the retarded one for DY-type operators, eq.~(\ref{gauge:ret}), and the advanced one for SIDIS-type operators, eq.~(\ref{gauge:adv}). Let us note that the boundary conditions do not influence the evaluation of the diagrams, but rather the procedure of rewriting the expressions in terms of gauge-invariant operators, see eqs.~(\ref{gauge:A=F_ret},~\ref{gauge:A=F_adv}).
In both cases the only difference between the expressions for DY and SIDIS kinematics is the sign of the infinity in the integration limits. That is, a term contributing to the OPE for the DY operator has the form
\begin{eqnarray}
\text{DY}:\qquad \int_{-\infty}^{z_i}d\sigma~ ...~ F^{\mu +}(\sigma),
\end{eqnarray}
whereas the same term in the OPE for SIDIS operator is
\begin{eqnarray}
\text{SIDIS}:\qquad \int_{+\infty}^{z_i}d\sigma~ ...~ F^{\mu +}(\sigma).
\end{eqnarray}
Here, the dots indicate various compositions of fields, functions and integrals that do not change. Such a structure is already evident at tree level, as one finds comparing eq.~(\ref{U_DY_der}) and eq.~(\ref{U_DIS_der}). As we will see, in terms of distributions this difference results in a different global sign of the coefficient functions.
\section{Definition of collinear distributions}
\label{sec:def-collinear}
In order to proceed further we need to evaluate the hadronic matrix elements of the OPE. This procedure is scheme dependent in the following sense: our computation is made in dimensional regularization, and after the renormalization procedure the expressions are finite for $\epsilon\to 0$.
Nonetheless, the finite part of the results depends on $\epsilon$ and, moreover, the expressions so obtained have a tensor structure which also depends on the number of dimensions. Thus, in order to completely define the scheme, we should specify the order of operations with respect to the limit $\epsilon\to 0$.
There are two major options. The first one consists in setting $\epsilon\to 0$ before the evaluation of matrix elements (i.e. at the level of operators) and defining the distributions in 4 dimensions. The second one is to \textit{define the distributions in $d$-dimensions and to perform the limit $\epsilon\to0$ after the evaluation of matrix elements}. Both schemes have advantages and disadvantages. In fact, this problem has not been accurately addressed in the TMD-related literature. Checking the traditional calculations of the TMD matching at twist-2 \cite{Collins:2011zzd,Echevarria:2016scs,Gutierrez-Reyes:2017glx,Aybat:2011zv}, we conclude that the second scheme is used in all these cases. Therefore, to be consistent with earlier calculations, \textit{we use the second scheme.}
Nonetheless, we have also performed the calculation in the first scheme and we have found that for the Sivers function differences appear only in the quark-gluon mixing diagrams. These differences are $\epsilon$-suppressed, and thus the expression for the NLO matching coefficient is the same in both schemes. In appendix \ref{app:diag-by-diag:matching} we present the expressions for the diagrams with an explicit designation of the origin of $\epsilon$, which allows one to re-derive the complete result.
In the rest of this section we define the twist-2 and twist-3 collinear distributions and evaluate the hadronic matrix element of the small-$b$ OPE obtained in the previous section.
\subsection{Quark distributions}
\label{sec:coll-quark}
The forward matrix elements of the light-cone operators are parametrized by collinear distributions, or parton distribution functions (PDFs). For this work we need the forward matrix elements of twist-2 and twist-3 operators only. We start by discussing the required quark distributions; the gluon distributions are treated in the next section.
There are three quark operators contributing to the OPE of the Sivers function,
\begin{eqnarray}\label{distr:O2}
\mathcal{O}_{\gamma^+}(z_1,z_2)&=&\bar q(z_1 n)[z_1 n,z_2 n]\gamma^+ q(z_2 n),
\\\label{distr:T1}
\mathcal{T}^{\mu}_{\gamma^+}(z_1,z_2,z_3)&=&g\bar q(z_1n)[z_1n,z_2n]\gamma^+ F^{\mu+}(z_2n)[z_2n,z_3n]q(z_3n),
\\\label{distr:T2}
\mathcal{T}^{\nu}_{\gamma^+\gamma_T^{\nu\mu}}(z_1,z_2,z_3)&=&g\bar q(z_1n)[z_1n,z_2n]\gamma^+\gamma^{\nu\mu}_T F^{\nu+}(z_2n)[z_2n,z_3n]q(z_3n),
\end{eqnarray}
where
\begin{eqnarray}
\gamma_T^{\mu\nu}=g_T^{\mu\mu'}g_T^{\nu\nu'}\frac{\gamma_{\mu'}\gamma_{\nu'}-\gamma_{\nu'} \gamma_{\mu'}}{2}.
\end{eqnarray}
The operator in eq.~(\ref{distr:O2}) is twist-2, whereas the operators in eq.~(\ref{distr:T1},~\ref{distr:T2}) are twist-3. We emphasize that all indices appearing in eq.~(\ref{distr:T1},~\ref{distr:T2}) are transverse.
The forward matrix element depends only on the distances between the fields, but not on their absolute positions. A shift of the common position can be written as a total derivative of the operator, which corresponds to a momentum transfer between the initial and final states. This is a consequence of the quantum-mechanical definition of the momentum operator:
\begin{eqnarray}\label{dO=0}
\langle p_1|\partial_\mu\{O\}|p_2\rangle= i(p_2-p_1)_\mu \langle p_1|O|p_2\rangle,
\end{eqnarray}
where $O$ is any operator. This allows one to move each term of the OPE to a convenient position and to drop terms with total derivatives, which significantly simplifies the evaluation. To resolve the total-derivative terms one should consider non-forward kinematics, which defines GTMD distributions and generalized parton distributions. In the following, we consider each operator at a convenient point.
The standard unpolarized PDF comes from the forward matrix element of $\mathcal{O}_{\gamma^+}$,
\begin{eqnarray}\label{def:f1}
\langle p,S|\mathcal{O}_{\gamma^+}(z_1,z_2)|p,S\rangle=2p^+ \int dx e^{ix(z_1-z_2)p^+} f_1(x).
\end{eqnarray}
The PDF is non-zero for $-1<x<1$ and
\begin{eqnarray}
f_1(x)=\theta(x)q(x)-\theta(-x)\bar q(-x),
\end{eqnarray}
where $q(x)$ and $\bar q (x)$ are the quark and anti-quark parton densities in the infinite momentum frame.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\textwidth]{DomainInterpretation.eps}
\end{center}
\caption{\label{fig:domain} The support of the twist-3 functions, drawn in barycentric coordinates, $x_1+x_2+x_3=0$. The diagrams demonstrate the interpretation of the distributions in terms of emission/absorption of partons by the hadron. The red dashed line is the line on which the Qiu-Sterman distributions are defined.}
\end{figure}
The definition of the twist-3 PDFs is more cumbersome, since they depend on two momentum fractions $x_i$ and their interpretation differs between the domains of these variables. The notation simplifies considerably if one writes the twist-3 distributions as functions of three momentum fractions $x_{1,2,3}$.
Each momentum fraction is the Fourier conjugate of the corresponding coordinate $z_{1,2,3}$. We define
\begin{eqnarray}\label{def:T}
\langle p,S|\mathcal{T}^{\mu}_{\gamma^+}(z_1,z_2,z_3)|p,S\rangle
&=& 2\tilde s^\mu (p^+)^2 M\int [dx]e^{-ip^+(x_1z_1+x_2z_2+x_3z_3)}T(x_1,x_2,x_3),
\\\label{def:DeltaT}
\langle p,S|\mathcal{T}^{\nu}_{\gamma^+\gamma_T^{\nu\mu}}(z_1,z_2,z_3)|p,S\rangle
&=& -2\tilde s^\mu (p^+)^2 M\int [dx]e^{-ip^+(x_1z_1+x_2z_2+x_3z_3)}\Delta T(x_1,x_2,x_3),
\end{eqnarray}
where $M$ is the mass of the hadron and the integral measure is defined as
\begin{eqnarray}\label{def:[dx]}
\int [dx]f(x_1,x_2,x_3)=\int_{-1}^1 dx_1dx_2dx_3 \delta(x_1+x_2+x_3)f(x_1,x_2,x_3).
\end{eqnarray}
Such an integral measure automatically takes into account the independence of the forward matrix element from the total shift, eq.~(\ref{dO=0}).
The functions of three variables $T(x_1,x_2,x_3)$ have several symmetry properties. It is natural to consider them as functions defined on the hyperplane $x_1+x_2+x_3=0$, since only this domain contributes to the forward matrix element. The domain can be split into six regions, corresponding to the different signs of the variables $x_i$, see fig.~\ref{fig:domain}. Each of these regions has a different interpretation in parton language: depending on the sign of $x_i$, the corresponding parton is either emitted ($x_i>0$) or absorbed ($x_i<0$) by the hadron \cite{Jaffe:1983hp}, as shown schematically in fig.~\ref{fig:domain}.
The functions $T$ and $\Delta T$ are not independent and mix under evolution. In ref.~\cite{Braun:2009mi} it is shown that there exist combinations of $T$ and $\Delta T$ which evolve autonomously, but we do not use them in this work.
The definitions in eqs.~(\ref{def:T},~\ref{def:DeltaT}) are understood in $d$-dimensions. That is, $\tilde s^\mu$ is some vector that turns into $\tilde s^\mu=\epsilon_T^{\mu\nu}s_\nu$ when $\epsilon\to 0$. The definition of the non-perturbative functions $T$ and $\Delta T$ coincides\footnote{To compare the definitions that we have used, consider the 4-dimensional relation $\gamma^+\gamma^{\mu\nu}_T=-i\epsilon^{\mu\nu}_T\gamma^+\gamma^5$.} with the one made in \cite{Scimemi:2018mmi}. It also coincides (up to a factor $M$) with the definition given in \cite{Braun:2009mi}. The articles \cite{Ji:2006vf,Koike:2007dg,Kang:2008ey,Kang:2011mr,Sun:2013hua} use a less convenient two-variable definition, which is related to the three-variable definition by (here we compare to \cite{Kang:2008ey})
\begin{eqnarray}\label{def:T(3)->T(2)}
\tilde{\mathcal{T}}_{q,F}(x,x+x_2)&=&MT(-x-x_2,x_2,x),\qquad
\tilde{\mathcal{T}}_{\Delta q,F}(x,x+x_2)=M\Delta T(-x-x_2,x_2,x).
\end{eqnarray}
Using time-reversal and hermiticity, one can show that the functions $T$ and $\Delta T$ are real and obey the properties
\begin{eqnarray}\label{quark:symT}
T(x_1,x_2,x_3)&=&T(-x_3,-x_2,-x_1),
\\\label{quark:symdT}
\Delta T(x_1,x_2,x_3)&=&-\Delta T(-x_3,-x_2,-x_1).
\end{eqnarray}
These properties are central in the following calculation. They represent the simple statement that the gluon is a neutral particle.
In barycentric coordinates the time-reversal transformation turns the picture upside down, as shown in fig.~\ref{fig:transformation}.
Therefore, the function $T$ ($\Delta T$) is (anti)symmetric with respect to the horizontal line $x_2=0$ (the red dashed line in fig.~\ref{fig:domain}).
PDFs defined on this line are known as Qiu-Sterman distributions.
They play a special role in TMD physics, since they provide the LO matching, as shown in the next sections.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\textwidth]{DomainTransformation.eps}
\end{center}
\caption{\label{fig:transformation} Illustration of the transformations of the barycentric coordinates. From left to right: original, time-inversion, permutation of variables, cyclic permutation of variables.}
\end{figure}
\subsection{Gluon distributions}
\label{sec:coll-gluon}
The gluon operators of twist-2 and twist-3 are
\begin{eqnarray}\label{distr:Omunu}
\mathcal{O}^{\mu\nu}(z_1,z_2)&=&F^{\mu+}(z_1n)[z_1n,z_2n]F^{\nu+}(z_2n),
\\\label{distr:T+}
\mathcal{T}^{\mu\nu\rho}_+(z_1,z_2,z_3)&=&igf^{ABC}F^{A;\mu+}(z_1n)F^{B;\nu+}(z_2n)F^{C;\rho+}(z_3n),
\\\label{distr:T-}
\mathcal{T}^{\mu\nu\rho}_-(z_1,z_2,z_3)&=&gd^{ABC}F^{A;\mu+}(z_1n)F^{B;\nu+}(z_2n)F^{C;\rho+}(z_3n),
\end{eqnarray}
where $f^{ABC}$ and $d^{ABC}$ are the anti-symmetric and symmetric structure constants of the gauge group, respectively. In the definitions (\ref{distr:T+},~\ref{distr:T-}) we have dropped the Wilson lines for simplicity\footnote{ The complete expression with Wilson lines is
\begin{eqnarray}\nonumber
\mathcal{T}^{\mu\nu\rho}_+(z_1,z_2,z_3)=gF^{A';\mu+}(z_1n)F^{B';\nu+}(z_2n)F^{C';\rho+}(z_3n)[z_1n,rn]^{A'A}[z_2n,rn]^{B'B}[z_3n,rn]^{C'C}if^{ABC},
\end{eqnarray}
and analogously for $\mathcal{T}^{\mu\nu\rho}_-$. The expression is independent of $r$, thanks to the Jacobi identity.}.
The forward matrix element is parametrized by
\begin{eqnarray}\label{distr:tensor-decomposition}
\langle p,S|\mathcal{O}^{\mu\nu}(z_1,z_2)|p,S\rangle&=& (p^+)^2\int dx e^{i(z_1-z_2)xp^+}\,x\,\Big(\frac{g_T^{\mu\nu}}{2(1-\epsilon)}g(x)+\lambda\frac{a^{\mu\nu}}{2}\Delta g(x)\Big),
\end{eqnarray}
where $\lambda$ is the hadron helicity and $a^{\mu\nu}$ is an antisymmetric tensor such that
\begin{eqnarray}
\lim_{\epsilon\to 0}a^{\mu\nu}=\epsilon_T^{\mu\nu}.
\end{eqnarray}
Generally, the decomposition (\ref{distr:tensor-decomposition}) should additionally contain a symmetric-traceless component. The corresponding distribution, however, vanishes in forward kinematics. The distributions $g(x)$ and $\Delta g(x)$ are the conventional unpolarized and polarized gluon distributions.
There is no standard parametrization for the twist-3 gluon operators. Here we introduce a parameterization that is convenient for our calculation. It is different from (but equivalent to) other parameterizations used e.g. in \cite{Braun:2009mi,Kang:2008ey,Beppu:2010qn,Dai:2014ala,Chen:2016dnp,Chen:2017lvx}. The main difference is that we use two distributions with different properties, instead of a single one. We have
\begin{eqnarray}\label{distr:gluons}
\langle p,S|\mathcal{T}^{\mu\nu\rho}_\pm(z_1,z_2,z_3)|p,S\rangle &=&-(p^+)^3M\int [dx]e^{-ip^+(x_1z_1+x_2z_2+x_3z_3)}
\\\nonumber &&
\times \Big(
\frac{\tilde s^\mu g_T^{\nu\rho}+\tilde s^\nu g_T^{\mu\rho}+\tilde s^\rho g_T^{\mu\nu}}{2(2-\epsilon)}G_\pm(x_1,x_2,x_3)
\\\nonumber &&+\frac{\tilde s^{\nu}g_T^{\mu\rho}Y_\pm(x_1,x_2,x_3) \mp \tilde s^\mu g_T^{\nu\rho}Y_{\pm}(x_2,x_1,x_3)
\mp \tilde s^\rho g_T^{\mu\nu}Y_\pm(x_1,x_3,x_2)}{1-2\epsilon}
\Big).
\end{eqnarray}
The overall minus sign is chosen in order to have a simple relation to the distributions defined in \cite{Braun:2009mi,Kang:2008ey}. The foundation for this parameterization is discussed in appendix \ref{app:tensor-decomposition}. Despite its cumbersome appearance, this parameterization has some natural properties that significantly simplify the calculation. Time-reversal and hermiticity imply that
\begin{eqnarray}\label{distr:gluon-reverse}
G_\pm(x_1,x_2,x_3)=G_\pm(-x_3,-x_2,-x_1),\qquad Y_\pm(x_1,x_2,x_3)=Y_\pm(-x_3,-x_2,-x_1) ,
\end{eqnarray}
which reflects the fact that the gluon is a neutral particle and thus the ``anti-gluon'' distribution is equal to the ``gluon'' one.
Due to the permutation properties of the operators, the distributions are highly symmetric. Namely, the distribution $G_-$ ($G_+$) is (anti-)symmetric with respect to the permutation of any pair of arguments
\begin{eqnarray}\label{distr:gluon-anti}
G_\pm(x_1,x_2,x_3)=\mp G_\pm(x_2,x_1,x_3)=\mp G_\pm(x_1,x_3,x_2).
\end{eqnarray}
The distribution $Y_-$ ($Y_+$) is (anti-)symmetric with respect to the permutation of $x_1$ and $x_3$,
\begin{eqnarray}\label{distr:gluon-anti2}
Y_\pm(x_1,x_2,x_3)=\mp Y_\pm(x_3,x_2,x_1).
\end{eqnarray}
Additionally, the distributions $Y_\pm$ obey a cyclic rule
\begin{eqnarray}\label{distr:gluon-cyclic}
Y_\pm(x_1,x_2,x_3)+Y_\pm(x_2,x_3,x_1)+Y_\pm(x_3,x_1,x_2)=0.
\end{eqnarray}
The graphical representation of these transformations in barycentric coordinates is shown in fig.~\ref{fig:transformation}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{GYplus.eps}
\end{center}
\caption{\label{fig:GYplus} The values of the functions $G_\pm$ and $Y_\pm$ in the whole domain are defined by their values in the red segments. The values in the other segments are obtained by turning/reflecting the values with respect to the edges and multiplying by the factor shown within the segment.}
\end{figure}
The symmetry properties in eqs.~(\ref{distr:gluon-reverse}-\ref{distr:gluon-cyclic}) significantly restrict the functional form of the distributions. In particular, the functions $G_\pm$ are entirely defined by their values in the region $0<x_1/2<-x_2<x_1$, whereas the functions $Y_\pm$ are defined by their values in the region $0<x_1/2<-x_2<2x_1$. Graphically these relations are demonstrated in fig.~\ref{fig:GYplus}.
The functions $G$ and $Y$ mix under evolution. In many aspects they are similar to the functions $T$ and $\Delta T$ of the quark case. Nonetheless, the parametrization given here grants many simplifications during the calculation, because each of the structures in eq.~(\ref{distr:gluons}) belongs to an irreducible representation of the Lorentz group. For that reason these structures enter the dimensionally regularized expression with different $\epsilon$-dependent factors.
The relation of the functions $G_\pm$ and $Y_\pm$ to the functions used in \cite{Braun:2009mi} is
\begin{eqnarray}
T_{3F}^\pm(x_1,x_2,x_3)=G_\pm(x_1,x_2,x_3)+Y_\pm(x_1,x_2,x_3).
\end{eqnarray}
It is important to note that this comparison is made at $\epsilon=0$; at $\epsilon\neq 0$ such a comparison is impossible. The inverse relation is
\begin{eqnarray}\label{relationG-T}
G_\pm(x_1,x_2,x_3)&=&\frac{T_{3F}^\pm(x_1,x_2,x_3)-T_{3F}^\pm(x_2,x_1,x_3)-T_{3F}^\pm(x_1,x_3,x_2)}{3},
\\\label{relationY-T}
Y_\pm(x_1,x_2,x_3)&=&\frac{2T_{3F}^\pm(x_1,x_2,x_3)+T_{3F}^\pm(x_2,x_1,x_3)+T_{3F}^\pm(x_1,x_3,x_2)}{3}.
\end{eqnarray}
Therefore, our basis is equivalent to a decomposition of a general 3-variable function into antisymmetric and cyclic components. The reduction of three-variable notation used here and in \cite{Braun:2009mi} to the two-variable notation used in \cite{Kang:2008ey,Chen:2016dnp,Chen:2017lvx} is the same as for quarks in eq.~(\ref{def:T(3)->T(2)}).
In~\cite{Beppu:2010qn,Dai:2014ala} a different notation is used, which again can be related to our functions at $\epsilon \to 0$. For a detailed comparison we refer to the discussion in~\cite{Beppu:2010qn}.
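As a quick consistency check of the decomposition in eqs.~(\ref{relationG-T},~\ref{relationY-T}), one can verify symbolically that the two components sum back to the original function, i.e.\ that the change of basis is invertible. The following sketch (an illustration added here, not part of the original derivation) uses a generic three-variable function as a stand-in for $T_{3F}^\pm$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
T = sp.Function('T')  # generic stand-in for T_{3F}^{+/-}(x1, x2, x3)

# eqs. (relationG-T) and (relationY-T): G and Y in terms of T
G = (T(x1, x2, x3) - T(x2, x1, x3) - T(x1, x3, x2)) / 3
Y = (2*T(x1, x2, x3) + T(x2, x1, x3) + T(x1, x3, x2)) / 3

# the inverse relation T = G + Y holds identically,
# so the pair (G, Y) carries the same information as T
assert sp.simplify(G + Y - T(x1, x2, x3)) == 0
```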
\section{Small-$b$ expansion for unpolarized and Sivers distributions}
\label{sec:TOdistr}
Having at hand the parametrization of the matrix elements, we can obtain the matching coefficients of the TMD distributions onto collinear distributions. The standard protocol to achieve this is the following. We derive the TMD distribution using the operators $\mathcal{U}$ (compare eq.~(\ref{def:TMDPDF_Qop}) and eq.~(\ref{def:TMDop_DY})),
\begin{eqnarray}\label{main-fourier}
\Phi^{[\gamma^+]}_{q\leftarrow h}(x,\vec b)=\int \frac{dz}{2\pi}e^{-2ixzp^+}\langle p,S|\mathcal{U}^{\gamma^+}\(z,-z;\frac{\vec b}{2}\)|p,S\rangle.
\end{eqnarray}
Next, we substitute the expression for the OPE, eq.~(\ref{renorm:final}), into the matrix element and evaluate the Fourier transform using the parameterization of the collinear matrix elements. In this way we obtain the small-$b$ expansion of the TMD distribution $\Phi^{[\gamma^+]}$. Collecting all terms with the appropriate Lorentz structures, eq.~(\ref{param:TMDv}), we obtain the small-$b$ expansion of the individual TMD distributions, in our case the unpolarized and Sivers distributions. The procedure is rather straightforward and can be performed for each diagram independently. In sec.~\ref{sec:TOdistr1} we give several comments on its evaluation, while the final result is presented in sec.~\ref{sec:results}. The results for individual diagrams are presented in appendix \ref{app:diag-by-diag:matching}.
\subsection{From operators to distributions and tree level results}
\label{sec:TOdistr1}
The tree-level order of the OPE is given in eq.~(\ref{U_Taylor}). Applying the transformation in eq.~(\ref{main-fourier}) and using the definitions in eqs.~(\ref{def:f1},~\ref{def:T}) we obtain\footnote{When evaluating the matrix element one should also consider the matrix element of the first term in eq.~(\ref{U_DY_der}). For the unpolarized operator this matrix element is zero. The proof can be found in \cite{Scimemi:2018mmi}.}
\begin{eqnarray}
&&\Phi^{[\gamma^+]}_{q\leftarrow h;\text{DY}}(x,\vec b)=f_1(x)
\\\nonumber && +i\tilde s_\mu b^\mu (p^+)^2M
\int \frac{dz}{2\pi}e^{-2ixzp^+} \(\int_{-\infty}^z+\int_{-\infty}^{-z}\)d\tau \int [dx]e^{-ip^+(x_1z+x_2\tau-x_3 z)}T(x_1,x_2,x_3).
\end{eqnarray}
To evaluate the second line we use the following trick. We consider the two integrals over $\tau$ separately and change the variables $x_{1,2,3}\to-x_{3,2,1}$, $\tau\to-\tau$ in the second one. The integrand is invariant under this transformation due to the property in eq.~(\ref{quark:symT}), while the limits of integration change to $(z,+\infty)$. As a result, the two integrals over $\tau$ can be combined into a single integral from $-\infty$ to $+\infty$,
\begin{eqnarray}\label{distr:11}
&&\Phi^{[\gamma^+]}_{q\leftarrow h}(x,\vec b)=f_1(x)
\\\nonumber && +i\tilde s_\mu b^\mu (p^+)^2M
\int \frac{dz}{2\pi}e^{-2ixzp^+} \int_{-\infty}^\infty d\tau \int [dx]e^{-ip^+(x_1z+x_2\tau-x_3 z)}T(x_1,x_2,x_3).
\end{eqnarray}
Let us stress that \textit{the dependence on the intermediate gluon position $\tau$ disappears}. This property holds for all diagrams and allows one to combine seemingly cumbersome expressions into simple ones. It is a consequence of time-reversal symmetry. Therefore, to observe such a cancellation, one should combine a diagram with its conjugate, i.e. the dependence on the intermediate point cancels in the combinations of diagrams $A$ and $A^*$, $C$ and $C^*$, $E$ and $E^*$, $D$ and $D^*$. The remaining diagrams are self-conjugate.
The time-reversal symmetry is also responsible for the different relative sign in the matching of the DY and SIDIS operators. Indeed, since the integrands are symmetric under time-reversal, the intermediate point cancels and the only thing that matters is a common global sign. This sign is necessarily different between the DY and SIDIS expressions, due to the different boundary conditions holding in the two cases. In other words, all gluon fields in the DY case are connected to $-\infty$ and the corresponding integrals are $\int_{-\infty}$, whereas for SIDIS they are connected to $+\infty$ and the corresponding integrals are $\int_{+\infty}=-\int^{+\infty}$. In this way, we observe the well-known relation
\begin{eqnarray}
C_{1T;\text{DY}}^\perp(x_1,x_2,x_3,\vec b)=-C_{1T;\text{DIS}}^\perp(x_1,x_2,x_3,\vec b),
\end{eqnarray}
i.e. the matching (Wilson) coefficient of the Sivers function has a different sign in DY and SIDIS. This observation agrees with the time-reversal property of the Sivers distribution
\begin{eqnarray}
f^\perp_{1T;\text{DY}}(x,\vec b)=-f^\perp_{1T;\text{DIS}}(x,\vec b),
\end{eqnarray}
observed long ago \cite{Collins:2002kn}.
Coming back to eq.~(\ref{distr:11}), the integrals over $\tau$ and $z$ decouple and both produce a $\delta$-function. We obtain
\begin{eqnarray}
&&\Phi^{[\gamma^+]}_{q\leftarrow h}(x,\vec b)=f_1(x) +i\pi\tilde s_\mu b^\mu M \int [dx]\delta(x_2)\delta(x-x_3)T(x_1,x_2,x_3).
\end{eqnarray}
Using the delta-function in the definition of $[dx]$ in eq.~(\ref{def:[dx]}), the integrals over $x$'s can be evaluated,
\begin{eqnarray}
&&\Phi^{[\gamma^+]}_{q\leftarrow h}(x,\vec b)=f_1(x) +i\pi\tilde s_\mu b^\mu M T(-x,0,x)+O(a_s)+O(\vec b^2).
\end{eqnarray}
This expression gives the leading order matching for unpolarized and Sivers TMD distributions in eq.~(\ref{param:TMDv})
\begin{eqnarray}
f_1(x,\vec b)&=&f_1(x)+O(a_s)+O(\vec b^2),
\\
f_{1T}^\perp(x,\vec b)&=&\pm\pi T(-x,0,x)+O(a_s)+O(\vec b^2),
\end{eqnarray}
where the $+$ sign is for the DY operator and the $-$ sign for the SIDIS operator. The same procedure, with minimal modifications, can be applied to each term of the OPE at higher orders as well. In appendix \ref{app:diag-by-diag:matching} we present the expressions for each diagram at NLO; the corresponding final result is given in the next section.
The $T$ and $\Delta T$ distributions defined on the line $x_2=0$ are generally known as Efremov-Teryaev-Qiu-Sterman (ETQS) distributions \cite{Efremov:1983eb,Qiu:1991pp}. In the next section, we write explicitly the evolution equation for these functions in eq.~(\ref{result:tw3Evolution}). Here, we just recall that \textit{the ETQS functions are not autonomous}, meaning that their evolution involves the values of these functions in the full domain of $x_{1,2,3}$. However, we have found that the finite part\footnote{\label{foot1}Following common terminology, we call $C(\mathbf{L}_\mu=0)$ the finite part of the coefficient function $C(\mathbf{L}_\mu)$, whereas $C(\mathbf{L}_\mu)-C(\mathbf{L}_\mu=0)$ is called the logarithmic part.} of the small-$b$ matching coefficient involves only ETQS functions.
The line $x_2=0$ plays a special role in the matching of TMD distributions, as shown in red in fig.~\ref{fig:domain}. In the parton picture the distributions defined on this line can be interpreted as ``gluonless''. Indeed, while the quarks are normally emitted and absorbed by a hadron (as in the usual twist-2 distributions), here the gluon is in an ``intermediate state'', neither emitted nor absorbed, but smoothly distributed over all space. This picture also supports the interpretation of the variables $x$ as parton momenta measured as fractions of the hadron momentum. In such a momentum picture, the line $x_2=0$ corresponds to a gluon with null energy.
The symmetry properties of the distributions allow some simplification along the line $x_2=0$. In particular, the $\Delta T$ function (which in principle appears when $x_2\neq 0$) does not explicitly contribute to the matching due to eq.~(\ref{quark:symdT})
\begin{eqnarray}
\Delta T(-x,0,x)=0,
\end{eqnarray}
but it will appear in the evolution of the ETQS functions, as we show in the next section.
Due to their antisymmetry properties, the functions $G_\pm$ with one vanishing argument can be expressed in terms of ETQS distributions
\begin{eqnarray}
G_\pm(-x,0,x)=\mp G_\pm(x,0,-x)=\mp G_\pm(-x,x,0)=\mp G_\pm(0,-x,x).
\end{eqnarray}
The functions $Y_\pm$ at $x_i=0$ can also be expressed via ETQS distributions, but with a different rule
\begin{eqnarray}
Y_\pm(-x,x,0)=\mp Y_\pm(x,-x,0)=\mp Y_\pm(0,x,-x)=-\frac{Y_\pm(-x,0,x)}{2}.
\end{eqnarray}
The application of these rules significantly simplifies the calculation.
\subsection{Results at NLO}
\label{sec:results}
The NLO matching of the Sivers TMD distribution at small-$b$ reads
\begin{eqnarray}\label{result:Sivers}
&&f_{1T;q\leftarrow h;\text{DY}}^\perp(x,\vec b;\mu,\zeta)=\pi T(-x,0,x)+\pi a_s(\mu)\Big\{
\\\nonumber && \quad-2\mathbf{L}_\mu P \otimes T+C_F\(-\mathbf{L}_\mu^2+2\mathbf{l}_\zeta \mathbf{L}_\mu+3\mathbf{L}_\mu-\frac{\pi^2}{6}\)T(-x,0,x)
\\\nonumber &&\quad +
\int d\xi \int_0^1 dy \delta(x-y\xi)\Big[\(C_F-\frac{C_A}{2}\)2\bar y T(-\xi,0,\xi)+\frac{3 y \bar y }{2}\frac{G_+(-\xi,0,\xi)+G_-(-\xi,0,\xi)}{\xi}\Big]\Big\}
\\\nonumber && \hspace{0.7\textwidth} +O(a_s^2)+O(\vec b^2),
\end{eqnarray}
where on the right hand side all distributions are defined at the scale $\mu$, $\bar y=1-y$ and
\begin{eqnarray}
\mathbf{l}_\zeta=\ln\(\frac{\mu^2}{\zeta}\).
\end{eqnarray}
Eq.~(\ref{result:Sivers}) is written for the DY definition of the TMD distribution. In the case of the SIDIS definition the factor $\pi$ in the first line should be replaced by $-\pi$.
The symbol $P\otimes T$ represents the evolution kernel for the function $T(x_1,x_2,x_3)$ on the $x_2=0$ line. It reads
\begin{eqnarray}\label{result:tw3Evolution}
&&\mu^2 \frac{d}{d\mu^2}T(-x,0,x)=2a_s(\mu)P\otimes T
=2a_s \int d\xi \int_0^1 dy \delta(x-y\xi)\Bigg\{
\\\nonumber &&\quad \(C_F-\frac{C_A}{2}\)\Big[\(\frac{1+y^2}{1-y}\)_+T(-\xi,0,\xi)+(2y-1)_+T(-x,\xi,x-\xi)-\Delta T(-x,\xi,x-\xi)\Big]
\\\nonumber &&\quad +\frac{C_A}{2}\Big[\(\frac{1+y}{1-y}\)_+T(-x,x-\xi,\xi)+\Delta T(-x,x-\xi,\xi)\Big]
\\\nonumber && \quad +\frac{1-2y\bar y}{4}\frac{G_+(-\xi,0,\xi)+Y_+(-\xi,0,\xi)+G_-(-\xi,0,\xi)+Y_-(-\xi,0,\xi)}{\xi}
\Bigg\},
\end{eqnarray}
where the plus-distribution is defined as usual
\begin{eqnarray}
\(f(y)\)_+=f(y)-\delta(\bar y)\int_0^1 dy' f(y').
\end{eqnarray}
Note that the gluon part is regular for $\xi\to 0$, since the functions $G_\pm$ and $Y_\pm$ vanish at $x_{1,2,3}=0$.
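To illustrate how the plus-prescription renders the $y\to1$ endpoint integrable (a numerical sketch added for illustration, not part of the derivation), note that the definition above implies $\int_0^1 (f(y))_+\,\varphi(y)\,dy=\int_0^1 f(y)\,(\varphi(y)-\varphi(1))\,dy$ for a smooth test function $\varphi$. For the kernel $f(y)=(1+y^2)/(1-y)$ and $\varphi(y)=y^2$ the subtracted integrand reduces to $-(1+y)(1+y^2)$, with exact integral $-25/12$:

```python
def plus_convolution(f, phi, n=200_000):
    """Evaluate int_0^1 (f(y))_+ phi(y) dy via the subtracted form
    int_0^1 f(y) (phi(y) - phi(1)) dy, using the midpoint rule
    (midpoints never touch the y = 1 singularity)."""
    h = 1.0 / n
    return sum(f((i + 0.5) * h) * (phi((i + 0.5) * h) - phi(1.0)) * h
               for i in range(n))

f = lambda y: (1 + y**2) / (1 - y)   # endpoint-singular kernel weight
phi = lambda y: y**2                 # smooth test function

# exact value: int_0^1 -(1+y)(1+y^2) dy = -25/12
assert abs(plus_convolution(f, phi) + 25/12) < 1e-6

# with a constant test function the plus-distribution integrates to zero
assert plus_convolution(f, lambda y: 1.0) == 0.0
```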
In eqs.~(\ref{result:Sivers},~\ref{result:tw3Evolution}) the integrals over $y$ and $\xi$ together with $\delta(x-y\xi)$ reproduce the Mellin convolution. This convolution appears naturally during the calculation and is defined for the whole range of $x$, $(-1<x<1)$ (recall that the anti-quark TMD distributions are given by the values at $x<0$, see the definition in eq.~(\ref{def:sivers_allX})). It should be understood literally
\begin{eqnarray}
\int d\xi\int_0^1 dy \delta(x-y\xi)f(y)g(\xi)=\left\{
\begin{array}{ll}
\displaystyle \int_{x}^1 \frac{d\xi}{\xi}f\(\frac{x}{\xi}\)g(\xi),&\qquad x>0,
\\
\displaystyle \int_{|x|}^1 \frac{d\xi}{\xi}f\(\frac{|x|}{\xi}\)g(-\xi),&\qquad x<0.
\end{array}\right.
\end{eqnarray}
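As a cross-check of the two branches (a numerical sketch for illustration only, not taken from the paper), one can implement the convolution directly; for $f=g=1$ and $x>0$ it must reduce to $\int_x^1 d\xi/\xi=-\ln x$, and the $x<0$ branch gives the same value whenever $g$ is even:

```python
import math

def mellin_convolution(f, g, x, n=100_000):
    """int dxi int_0^1 dy delta(x - y*xi) f(y) g(xi), written out as the two
    explicit branches: xi runs over (|x|, 1) and g is probed at +xi or -xi
    depending on the sign of x (midpoint rule)."""
    lo = abs(x)
    h = (1.0 - lo) / n
    total = 0.0
    for i in range(n):
        xi = lo + (i + 0.5) * h
        total += f(lo / xi) * g(xi if x > 0 else -xi) / xi * h
    return total

x = 0.3
res = mellin_convolution(lambda y: 1.0, lambda xi: 1.0, x)
assert abs(res - (-math.log(x))) < 1e-8   # equals int_x^1 dxi/xi

# the anti-quark (x < 0) branch probes g(-xi); for g = 1 the value coincides
assert abs(mellin_convolution(lambda y: 1.0, lambda xi: 1.0, -x) - res) < 1e-12
```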
\subsection{Discussion and comparison with earlier calculations}
\label{sec:discussion}
The evolution kernel in eq.~(\ref{result:tw3Evolution}) agrees with the known results in~\cite{Braun:2009mi,Kang:2012em}. Also, the matching of the twist-2 part coincides with earlier works exactly, i.e. as a whole function of $\epsilon$. Altogether this provides a very strong check of the whole procedure and of our results.
It is instructive to compare eq.~(\ref{result:Sivers}) to the small-$b$ expansion of the unpolarized TMD distribution, which we have also reevaluated in this work to provide an additional cross-check.
Following the notation of this work, it reads \cite{Collins:2011zzd,GarciaEchevarria:2011rb,Echevarria:2016scs,Vladimirov:2014aja}
\begin{eqnarray}\label{result:unpol}
&&f_{1}(x,\vec b;\mu,\zeta)=f_1(x)+a_s(\mu)\Big\{
-2\mathbf{L}_\mu P \otimes f_1+C_F\(-\mathbf{L}_\mu^2+2\mathbf{l}_\zeta \mathbf{L}_\mu+3\mathbf{L}_\mu-\frac{\pi^2}{6}\)f_1(x)
\\\nonumber &&\quad +
\int d\xi \int_0^1 dy \delta(x-y\xi)\Big[C_F 2\bar y f_1(\xi)+2 y \bar yg(\xi)\Big]\Big\}+O(a_s^2)+O(\vec b^2),
\end{eqnarray}
where the evolution kernel is
\begin{eqnarray}\label{result:unpol_EVOL}
\mu^2 \frac{d}{d\mu^2}f_1(x)&=&2a_s(\mu)P\otimes f_1
\\\nonumber &=&2a_s \int d\xi \int_0^1 dy \delta(x-y\xi)\Big\{C_F\(\frac{1+y^2}{1-y}\)_+f_1(\xi)+\frac{1-2y\bar y}{2}g(\xi)\Big\}.
\end{eqnarray}
One can see that eqs.~(\ref{result:Sivers}) and (\ref{result:unpol}) have a very similar structure and, more precisely, \textit{the finite parts\footnoteref{foot1} of these expressions have the same $y$-behavior}. It is possible that this fact indicates some hidden correspondence, to be understood in the future.
Let us note that our calculation scheme (namely, the definition of the distributions in $d$ dimensions, as discussed in sec.~\ref{sec:def-collinear}) affects only the quark-from-gluon terms. In appendix \ref{app:diag-by-diag:matching} we present these mixing diagrams with an explicit designation of the $\epsilon$'s from different sources. We have found that the scheme dependence enters the expressions via factors $\sim \epsilon/(1-\tilde \epsilon)$, where $\epsilon$ is the parameter of dimensional regularization and $\tilde \epsilon$ is the parameter of the $d$-dimensional definition of the distributions. Therefore, the present choice of scheme influences only the $\epsilon$-suppressed terms of the final expression and thus can contribute only from NNLO. Let us mention that the same observation (namely, the suppression of the details of the $d$-dimensional definition in the NLO coefficient function) also holds in the case of the helicity distribution, which contains a $\gamma^5$-matrix, see ref.~\cite{Gutierrez-Reyes:2017glx}.
The expressions for the coefficient functions in eqs.~(\ref{result:Sivers}-\ref{result:unpol}) are given for a general scale setting $(\mu,\zeta)$. For practical applications, it is convenient to use the $\zeta$-prescription \cite{Scimemi:2017etj,Scimemi:2018xaf}, where a TMD distribution is defined on the line $\zeta=\zeta(\mu)$. This line depends on certain boundary conditions that can be uniquely fixed and which define the so-called \textit{optimal TMD distribution}, see the detailed discussion in \cite{Scimemi:2018xaf}. The line $\zeta_\mu$ is universal for all TMD distributions and on this line the expression for the coefficient function simplifies. Namely, in eqs.~(\ref{result:Sivers},~\ref{result:unpol}) one should set
\begin{eqnarray}
\text{in $\zeta$-prescription:}\qquad -\mathbf{L}_\mu^2+2\mathbf{l}_\zeta \mathbf{L}_\mu+3\mathbf{L}_\mu \to 0.
\end{eqnarray}
It is easy to see that in the $\zeta$-prescription the TMD distribution is (naively) independent of the scale $\mu$.
The matching coefficient of the Sivers function is scattered across different works in the literature: the quark-to-quark part has been deduced in~\cite{Sun:2013hua} and the quark-to-gluon part has been evaluated in~\cite{Dai:2014ala}. In both references the derivation of the matching coefficient has been made indirectly, by refactorizing the factorized cross-section for SSA with the help of the known matching of the unpolarized TMD distribution. In our approach we evaluate the Sivers function directly, which grants us better control over factors and schemes. Let us compare and comment on these works one by one.
In~\cite{Sun:2013hua} the quark-from-quark part of the matching (the first term in the square brackets in eq.~(\ref{result:Sivers})) is derived.
A comparison with this work shows a disagreement in the logarithmic part\footnoteref{foot1}, but an agreement in the finite part (i.e. compare eq.~(\ref{result:tw3Evolution}) with eq.~(12) of~\cite{Sun:2013hua}). The origin of this difference is clear. The calculation of ref.~\cite{Sun:2013hua} is based on the fixed-order calculation of SSA made in~\cite{Ji:2006ub,Koike:2007dg}. The latter considers only gluon-pole contributions and misses a quark-pole contribution, which roughly corresponds to our diagrams D (see the detailed discussion in~\cite{Schafer:2012ra,Braun:2009mi,Kang:2012em}) and which, in turn, contributes only to the logarithmic part of the matching coefficient, i.e. the second line of eq.~(\ref{result:Sivers}).
In~\cite{Dai:2014ala} the quark-to-gluon matching has been calculated. The result is presented using the functions $N(x_1,x_2)$ and $O(x_1,x_2)$, which can be related to a combination of the functions $G$ and $Y$, similarly to eqs.~(\ref{relationG-T},~\ref{relationY-T}) (for a comparison of the definitions of these functions see~\cite{Beppu:2010qn}). In particular, $G_+(-x,0,x)+Y_+(-x,0,x)\simeq N(x,x)-N(x,0)$ and $G_-(-x,0,x)+Y_-(-x,0,x)\simeq O(x,x)-O(x,0)$. Using these relations and comparing with eq.~(44) of~\cite{Dai:2014ala} we find complete agreement in the logarithmic part (which is expected, since it is given by the evolution kernel), but disagreement in the finite part. We claim that this disagreement is the result of a different parametrization of the gluon PDF used in~\cite{Dai:2014ala}. Indeed, according to eq.~(39) of \cite{Dai:2014ala}, its authors define the PDF in $d$ dimensions, but they do not decompose the tensors into irreducible representations, and therefore the $\epsilon$-dependent prefactors of the PDFs are different.
In fact, the method of ref.~\cite{Dai:2014ala} could be inconsistent beyond LO. Indeed, the parameterization of the twist-3 matrix element used in~\cite{Dai:2014ala} is based on the 4-dimensional relation (see also~\cite{Beppu:2010qn})
\begin{eqnarray}\label{gepsilon=0}
g^{\mu\nu}\epsilon^{\alpha\beta\rho\delta}=
g^{\mu\alpha}\epsilon^{\nu\beta\rho\delta}+
g^{\mu\beta}\epsilon^{\alpha\nu\rho\delta}+
g^{\mu\rho}\epsilon^{\alpha\beta\nu\delta}+
g^{\mu\delta}\epsilon^{\alpha\beta\rho\nu},
\end{eqnarray}
which is used to reduce the number of degrees of freedom. In $d$ dimensions the relation in eq.~(\ref{gepsilon=0}) is not valid. Instead, one has to use the decomposition into irreducible components (see the discussion in appendix~\ref{app:tensor-decomposition}), as is done in this work. In order to consistently use the parameterization based on eq.~(\ref{gepsilon=0}), the limit $\epsilon\to 0$ must be taken prior to the application of the parameterization, i.e. the first approach discussed in the introduction to sec.~\ref{sec:def-collinear}. On the contrary, the authors of~\cite{Dai:2014ala} have used a 4-dimensional parametrization within a $d$-dimensional calculation. There is no apparent contradiction at the one-loop level; however, inconsistencies can appear at higher perturbative orders.
\section{Conclusion}
We have derived the matching of the Sivers function onto collinear distributions at NLO. The final result is given in eq.~(\ref{result:Sivers}), for both the quark-to-quark and quark-to-gluon channels. It can be compared to the known calculations piece by piece: the logarithmic part agrees with the evolution kernel derived in~\cite{Braun:2009mi,Kang:2012em}, the finite quark-to-quark part agrees with the one derived in~\cite{Sun:2013hua}, and the finite quark-to-gluon part is in disagreement with~\cite{Dai:2014ala}.
In sec.~\ref{sec:discussion} we argue that the disagreement between our calculation and the calculation made in~\cite{Dai:2014ala} is due to the difference in calculation schemes.
The peculiarities of our calculation scheme are given at the beginning of sec.~\ref{sec:eval} and sec.~\ref{sec:def-collinear}. We also argue that our calculation scheme is equivalent to the scheme commonly used for twist-2 TMD matching, which we confirm by comparing the twist-2 part of our calculation, eq.~(\ref{result:unpol}).
In contrast to all previous evaluations of the Sivers function, we do not consider any particular process but derive it directly from the definition of the TMD operator. The evaluation presented here is in many aspects novel, especially for the TMD community. Our calculation is made at the level of operators within the background field method, which provides the most complete type of calculation, and in the text we have described many of its details. In particular, for the first time, we explicitly demonstrate the appearance of rapidity divergences at the operator level (sec.~\ref{sec:rapidity_div}) and explicitly demonstrate their renormalization at all twists of the collinear OPE (sec.~\ref{sec:backrenormalization}). We also demonstrate the appearance of the famous sign flip between the Sivers functions defined for DY and SIDIS, eq.~(\ref{SF:DY<->DIS}).
The method outlined in this work can also be used for the evaluation of the other leading-order distributions which match onto collinear twist-3 operators. All intermediate results of the calculation are presented in the appendix. Since the calculation is made at the level of operators, it contains the complete information on the small-$b$ OPE. In particular, it can be used to write down the matching of GTMD distributions onto GPDs. Also, many diagrams can be used without recalculation for other polarizations. We expect that this line of research will give new results in the near future, before the advent of the Electron Ion Collider (EIC).
\acknowledgments A.V. gratefully acknowledges V.~Braun and A.~Manashov for numerous stimulating discussions and help in clarifying several aspects of higher twist calculus.
I.S. is supported by the Spanish MECD grant FPA2016-75654-C2-2-P. A.T. is grateful to J.W. Qiu and W. Vogelsang for valuable discussions and is supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC02-98CH10886 and in part by the US DOE Transverse Momentum Dependent (TMD) Topical Theory Collaboration.
\section{Introduction}
\label{sec:intro}
It is a well-known fact that statistical mechanics systems such as lattice spin models may exhibit critical behavior by undergoing a second order phase transition
if some external parameters such as the temperature are opportunely tuned. This is the case, for example,
of the lattice Ising model, which has a diverging correlation length at a special critical temperature $T_c$
and which falls into the same universality class as a single-component $\phi^4$ field theory.
It is less known, however, that many of the same systems still exhibit critical behavior even in the presence of some
probability-driven dilution of the lattice sites.
A physical situation in which dilution is important happens when there are impurities in the system,
for example non-magnetic sites in a ferromagnetic lattice, and these impurities are distributed on the lattice according
to some probability.
One could correctly argue that, if not many impurities are present, the diluted system
must behave similarly to the pure (non-diluted) one.
The truth is actually more surprising:
for sufficiently low concentrations and, to some extent,
independently of the distribution of impurities, the system undergoes a \emph{different}
second order phase transition with a lower $T_c$ if the specific-heat exponent $\alpha$ is positive \cite{Harris:1974}.
Under the assumption that the system's impurities change on a timescale
that is much longer than the time needed to reach equilibrium, the randomness
introduced as a result must be taken into account through quenched averages
over the disorder.
In practice, this means that the averages over the disorder are performed at the very end
of any correlator computation, and
the difference between the pure and diluted systems is quantified by the final quenching.
Much of the previous work on the effect of disorder has concentrated
on the dilution of the spins of the lattice Ising model, assuming a Gaussian uncorrelated distribution
of the impurities.
In this case, the difference between the critical exponents of pure and diluted systems
has been firmly established by field-theoretical and lattice methods (see refs. in \cite{Pelissetto:2000ek}).
Less is known on what happens if other spin systems, such as the tricritical Ising model,
are faced with random dilution, or, alternatively,
if the distribution of impurities is changed to one that depends on more parameters.
One might wonder whether both of the above changes might lead to \emph{multicritical} behavior
of the diluted system, essentially by introducing new parameters to be tuned to criticality.
A simple answer to this question can be achieved by classifying and discussing
critical field-theoretical models representing diluted systems by means of renormalization group (RG) methods.
On the practical side, the effect of dilutions can be taken into account by introducing $N$ non-interacting
replicas of the system and integrating over the distribution of impurities
(this \emph{subterfuge} is the so-called replica method) \cite{cardy1996scaling}.
The net effect of the procedure is that, after integration, the copies of the system become interacting
and, for a randomness that modifies the energy-per-site operator, the symmetry of the replicated system is enhanced
to the discrete group $H_N$ (the symmetries of the hypercube), compared to the symmetry of the pure Ising system.
An interesting aspect of the procedure is that quenched averages can be computed
exactly like traditional path-integral averages if the limit $N\to 0$ is taken.
Therefore, the last step in quenching requires an analytic continuation of the results for arbitrary $N$.
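The continuation behind this limit is the textbook replica identity $\overline{\ln Z}=\lim_{N\to0}(\overline{Z^N}-1)/N$, which lets one trade quenched averages of $\ln Z$ for moments $\overline{Z^N}$ computed at integer $N$. The limit itself can be checked symbolically (a minimal illustration, independent of the specific models discussed in this paper):

```python
import sympy as sp

Z, N = sp.symbols('Z N', positive=True)

# replica identity: (Z**N - 1)/N  ->  ln Z  as  N -> 0,
# the analytic continuation underlying the quenched average
replica_limit = sp.limit((Z**N - 1) / N, N, 0)
assert sp.simplify(replica_limit - sp.log(Z)) == 0
```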
It is worth sidestepping for a moment to discuss the interpretation of the analytic continuation in $N$
from a conformal field theory (CFT) point of view. First recall that, rather generally, critical points of statistical systems
can be associated to conformal field theories. This implies that the simple space- and scale-invariance
of the critical system (invariance under the action of the dilatation operator)
is actually enhanced by the full group of conformal transformations. At the critical point CFT methods can be used
to determine various quantities of interest, including the scaling dimensions of operators, which
are in correspondence with critical exponents of the system.
This is certainly true for arbitrary natural-valued $N$, for which there are, for example, $N$ CFT primary operators
with maximum scaling dimension besides the identity operator, related by the symmetry $H_N$.
It is less trivial to visualize the limit $N\to 0$, because this would actually correspond to no fields at all.
Amazingly, the limit can be formalized, with surprising results. First of all, it is important to stress that
the operator content can be classified for arbitrary $N$ adopting the representation theory of the group $H_N$.
This allows one to work with general $N$ at any step of the computation, without limitations from considering
specific integer values. Then, upon continuation in $N$, some operators of the spectrum, which are generally distinguished as realizing irreducible representations of $H_N$, have degenerate scaling dimensions for
$N\to 0$. This happens because, in the limit, the dilatation operator no longer commutes with
the action of the group $H_N$, which has the effect of producing a so-called logarithmic CFT (log-CFT).
In practice, the energy operator and a tensor operator of rank two have colliding scaling dimensions for $N\to 0$,
implying that their correlators are not diagonal anymore, and the limit produces a specific structure
which ultimately results in a logarithmic correction to the traditional power law structure of the CFT correlators \cite{Gurarie:1993}.
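Schematically, the mechanism behind these logarithms is standard (the identity below is a generic illustration rather than a statement specific to the $H_N$ spectrum): when the correlators of two operators with colliding scaling dimensions are subtracted, the difference of power laws degenerates into a derivative,

```latex
\begin{eqnarray}
\lim_{\Delta_2\to\Delta_1}
\frac{x^{-2\Delta_1}-x^{-2\Delta_2}}{\Delta_1-\Delta_2}
=\frac{\partial}{\partial \Delta}\,x^{-2\Delta}\Big|_{\Delta=\Delta_1}
=-2\, x^{-2\Delta_1}\ln x \,,
\end{eqnarray}
```

so suitably normalized combinations of the nearly degenerate operators develop logarithmic prefactors on top of the usual power laws.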
An outstanding result is that these logarithmic contributions introduce new universal coefficients
and can be related to meaningful observables in terms of products and ratios of quenched averages \cite{Cardy:1999, Cardy:2013}.
In the light of the existence of new universal quantities,
the computation of the logarithmic effects in the CFT spectrum gives a brand new
arena in which it is interesting to compare RG- and CFT-based results.
The challenge is to understand from an RG point of view all the intricacies of the log-CFT
approach, but also to prove once more the validity of the RG approach by showing
that it can easily accommodate the necessary analytic continuation in $N$.
This last step is of course greatly helped by the use of representation theory of the group $H_N$.
Using RG, quantities relevant for the log-CFT framework could be initially computed by means of some
perturbative expansion such as the $\epsilon$-expansion, and possibly improved in the future
by non-perturbative methods.
In this paper, we discuss several multicritical generalizations of the diluted spin system universality class,
understood as the $N\to 0$ limit of a replicated field theory. We do this by investigating critical Ginzburg-Landau-like
Hamiltonians with $H_N$ symmetry and by constraining them to be coming from a replicated system.
The critical models have associated upper critical dimensions, which can be deduced on the basis of dimensional analysis, so we study the first interesting possibilities, before the size of the computations gets out of hand.
All the nitty-gritty details of the computations that are necessary to obtain the results presented in this paper
will be presented in a future paper, in which we will also address the general case with arbitrary $N$ \cite{bcz-to-appear}.
The rest of the paper is structured as follows:
we pedagogically review the basic steps of using replicas to treat disorder, make the case for the multicritical generalizations by enhancing the possible distributions for the disorder in Sect.~\ref{sec:hypercubicandreplica},
we briefly discuss the connection with the formalism of log-CFT highlighting the simplest logarithmic observable
in Sect.~\ref{sec:logarithms},
and we provide some further notion on criticality and renormalization group methods in Sect.~\ref{sec:method}.
The explicit results are presented in the subsequent sections:
In Sects.~\ref{sec:dc3}~to~\ref{sec:dc10/3} we discuss three multicritical generalizations
of the randomly diluted Ising model.
For each model, we pay special attention to the computation of the relevant scaling dimensions
in the language of log-CFT.
In App.~\ref{app:betas}, we list the most important RG quantities, which are needed to obtain our results.
In App.~\ref{sec:dc4}, we review the $\phi^4$-like model in $d=4-\epsilon$ in order to make
our discussion complete (in passing, we collect several quantities which are otherwise scattered in the literature).
\section{Hypercubic and replica}
\label{sec:hypercubicandreplica}
The influence of quenched frozen-in structural disorder on the critical behavior of
a magnetic model
can be accounted for by diluting the system with non-magnetic impurities randomly distributed over the original lattice.
For a first concrete example, consider an Ising ferromagnet in which a fraction of the spins is replaced by vacancies; in this form of site-dilution, the disorder is explicitly present in the Hamiltonian in the form of random variables $m_i$, equal to $0$ if the site $i$ is vacant and $1$ if it is occupied,
\begin{equation}\label{eq:microHamiltonian}
\mathcal{H}[\{m_i\}] = - J \sum_{\langle i, j \rangle} m_i m_j s_i s_j \,,
\end{equation}
and the spins $s_i=\pm 1$ are the degrees of freedom. For a second example, consider a random bond dilution in which randomness is moved from the lattice sites to the coupling constants resulting in the Hamiltonian
$\mathcal{H}[\{J_{ij}\}] = -\sum_{\langle i j \rangle} J_{ij}s_is_j$. The disorder is completely specified by a probability distribution function, $P[\{m_i\}]$ in the case of site-dilution
and $P[\{J_{ij}\}]$ in the case of bond
randomness.\footnote{%
For completeness, we mention that bond and site randomness are not the only types of disorder that can be encountered experimentally.
Random fields that couple linearly to the magnetic moments can also be considered.
These are described by the so-called random field Ising model (RFIM),
which displays a form of disorder that is essentially different from the aforementioned ones,
mostly because the external random field can break the $\mathbb{Z}_2$ symmetry
w.r.t.\ spin flip.
RFIM is a very interesting and not yet fully understood model,
and we point the interested reader to the excellent reviews \cite{Nattermann, Dotsenko3, Natterman2}.
}
In either case,
it is natural to expect that the presence of impurities
manifests itself by lowering the critical temperature at which the phase transition to an ordered phase occurs.
Sufficiently close to the critical point of the pure theory, at which an effective continuum description is possible,\footnote{%
For example by means of a Hubbard-Stratonovich transformation that allows one to sum over the spins.}
quenched disorder can be described in terms of small random spatial fluctuations of the effective temperature.
Namely, we define the action of the diluted model as
\begin{equation}
S[\phi] = S_0[\phi] + \int {\rm d}^dx \, m(x) E_1(x) \,,
\end{equation}
in which $S_0[\phi]$ is the action of the pure magnetic model,
and $m(x)$ is a random variable coupled to the energy density operator $E_1=\phi^2$.
The energy operator has an additional label for reasons that are clarified in a moment.
Intuitively, physical quantities like the free energy or correlation functions should not depend on a specific realization of the disorder, i.e.\ they should be \emph{self-averaging} over the realizations of the disorder according to its distribution function. Phrased differently, self-averaging quantities are such that, in the thermodynamic limit, they are equal to their quenched expectation value \cite{cardy1996scaling, mezard1987spin}.
The quenched average is greatly simplified by replicating the field into $N$ non-interacting copies $\phi \to \phi_a=\{\phi_1,\phi_2,\dots,\phi_N\}$, and then integrating over the disorder.
The rationale behind this procedure stems from the following identity
\begin{equation}
\log Z = \lim_{N \to 0} \frac{Z^{N}-1}{N}\,,
\end{equation}
which recasts the difficult problem of averaging the logarithm of the partition function into the easier one of averaging the partition function of $N$ independent copies or \emph{replicas} of the original system.
Evidently, analytic continuation to $N\to 0$ is required for full exploitation of the method.\footnote{%
An alternative to this approach would be the strategy in which the same limit is reached by having a finite number
of bosonic and fermionic degrees of freedom which cancel out. This leads to structures with manifest supersymmetry \cite{Cardy:1985}.}
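As a quick sanity check, the replica identity can be verified symbolically; the following is a minimal sketch using sympy (the variable names are ours):

```python
import sympy as sp

Z, n = sp.symbols('Z N', positive=True)

# the replica identity: log Z = lim_{N -> 0} (Z^N - 1)/N
assert sp.limit((Z**n - 1)/n, n, 0) == sp.log(Z)

# equivalently, Z^N = 1 + N log Z + O(N^2), so the limit picks out log Z
expansion = sp.series(Z**n, n, 0, 2).removeO()
assert sp.simplify(expansion - (1 + n*sp.log(Z))) == 0
```

The identity holds realization by realization, so it survives averaging over the disorder by linearity.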
Notice that, like the field, local pure operators are replicated too; for example, the energy density operators of the replicas are $E_a \equiv (\phi_a)^2$.
For the purpose of integrating over disorder,
let us consider a generic random variable $X$ with probability distribution function (p.d.f.) $P(X)$;
the average of a function $f(X)$ is
expressed as $\overline{f} \equiv \int {\rm{d}}X P(X) f(X)$.
One can construct the generating function
\begin{equation}\label{eq:cumulantsexp}
G(\xi) \equiv \overline{\mathrm{e} ^ {\xi X}}
= \mathrm{exp} \left({\sum_{j=1} ^\infty \frac{\xi^j}{j!} \kappa_j} \right) \,,
\end{equation}
in which $\kappa_j$ are the cumulants of the probability distribution $P(X)$.
We can treat the disorder in the replicated quenched-averaged partition function by performing the same cumulant expansion with the correspondence $X\leftrightarrow m(x)$ and $\xi \leftrightarrow -\sum_a E_a(x)$, namely
\begin{equation}\label{eq:avgZ}
\begin{split}
\overline{Z^N}
& = \int ({\rm d}m) P[ m(x) ] \int ({\rm{d}}\phi_a) \mathrm{e}^{-\sum_{a} \left[ S_{0,a} + \int {\rm{d}}^d x ~ m(x) E_a(x)\right]} \\
& = \int ({\rm{d}}\phi_a) \mathrm{e}^{-\sum_{a} \left[ S_{0,a} + \kappa_1 \int {\rm{d}}^d x ~ E_a(x)\right]
+ \frac{1}{2!} \kappa_2 \int {\rm{d}}^d x ~ \sum_{a , b} E_a(x) E_b(x) + \dots}\,,
\end{split}
\end{equation}
in which we only assumed that the impurities are correlated only locally, i.e.\ that the disorder is delta-correlated in space.
The process of averaging over the quenched disorder results in the different replicas becoming \emph{interacting}
and the strength and form of their interactions depend on the details of the probability distribution.
For later purpose, it is interesting to show the truncated form of the replicated action $S_R$,
which retains all the terms up to order $\phi^6$. This is given by
\begin{align}\label{eq:replicated-action}
S_R[\phi] = & \sum_a S_{0,a} + \int {\rm{d}}^d x \Bigl\{ \kappa_1 \sum_a E_a(x)
- \frac{\kappa_2}{2!} \sum_{a,b} E_a(x)E_b(x) + \frac{\kappa_3}{3!} \sum_{a,b,c} E_a(x) E_b(x) E_c(x) \Bigr\} \notag\\
= & \int {\rm{d}}^d x ~ \Bigl\{
\sum_a \Bigl[ \frac{1}{2} (\nabla \phi_a)^2 + \mu^2 E_a(x) + g_1 E_a^2(x) + w_1 E_a^3(x)\Bigr] \notag \\
& + g_2 ~ \sum_{a \neq b} E_a(x) E_b(x) + w_2 \sum_{a \neq b} E_a^2(x)E_b(x) + w_3 \sum_{a\neq b \neq c} E_a(x) E_b(x) E_c(x) \Bigr\} \,,
\end{align}
in which in the last step we explicitly separated the decoupled replica parts from the coupled ones,
and we also introduced effective couplings $\{\mu, g_{1,2}, w_{1,2,3}\}$.
The effective couplings are combinations of the pure ones and of the cumulants,
for example $\mu^2$ is the sum of $\kappa_1$ and of the analog term present in $S_{0,a}$, which
we have not shown for brevity.
For concreteness one can consider the traditional example \cite{Grinstein:1976} of a regular lattice in which each site is occupied by a spin variable with probability $p$ or it is empty with probability $1-p$. This situation corresponds to the following bi-modal distribution function
\begin{equation}\label{eq:bimodalpdf}
P[m(x)] = \prod_x \left[ p~\delta_{m(x),1} + (1-p)~\delta_{m(x),0} \right]\,.
\end{equation}
It is straightforward, in this case, to compute the first cumulants and show that they are nonzero for almost all values of $p$. In the cases of the microscopic Hamiltonian \eqref{eq:microHamiltonian} and site dilution \eqref{eq:bimodalpdf}, it is possible to construct the effective Hamiltonian corresponding
to the replicated action \eqref{eq:replicated-action}, in which the correspondence
between the microscopic parameters of the lattice model and those of the continuum model is explicit, see Ref.~\cite{Grinstein:1976}.
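For the bi-modal distribution \eqref{eq:bimodalpdf}, the occupation variable at each site is a Bernoulli variable, and its first cumulants follow directly from the generating function \eqref{eq:cumulantsexp}; a minimal sympy sketch (our notation):

```python
import sympy as sp

p, xi = sp.symbols('p xi')

# cumulant generating function log E[exp(xi*m)] for m=1 w.p. p, m=0 w.p. 1-p
G = sp.log(p*sp.exp(xi) + 1 - p)
kappa = [G.diff(xi, j).subs(xi, 0) for j in (1, 2, 3)]

assert sp.simplify(kappa[0] - p) == 0                    # kappa_1: mean
assert sp.simplify(kappa[1] - p*(1 - p)) == 0            # kappa_2: variance
assert sp.simplify(kappa[2] - p*(1 - p)*(1 - 2*p)) == 0  # kappa_3
```

For generic $0<p<1$ the cumulants are nonzero ($\kappa_3$ vanishes only at $p=1/2$), consistent with the statement that essentially all interactions of the replicated action are generated.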
Notice that, even if for some probability distribution functions some terms might not be generated by the random variable (e.g.\ those related to odd cumulants for a symmetric p.d.f.), the renormalization group transformation
will generically produce an action of the form \eqref{eq:replicated-action} after the first few iterations.
The replicated action is invariant under the group $S_N$ of permutations of the replicas, and each replica retains the original Ising-like $\mathbb{Z}_2$ symmetry, therefore the overall symmetry group characterizing the replicated action $S_R$ is the hypercubic point group $H_N \simeq (\mathbb{Z}_2)^N \rtimes S_N$.
In other words, the system has now the symmetries of the hypercubic model.
Quenched averages can actually be constructed with $S_R[\phi]$ and arbitrary $N$,
but they involve correlators normalized by $N$-powers of the partition functions.
The limit $N\to 0$, however, gives quenched correlators as standardly normalized correlators of $S_R$.
In the following the strategy will be to consider the theory for arbitrary analytically continued $N$,
and perform the limit as the very last step.
It is important to mention here that, contrary to pure systems for which the ground state is unique,
random systems might have many more local minima of the energy.
Therefore, the standard RG approach may fail to take into account
the several contributing configurations of the replicated action \eqref{eq:replicated-action},
which may also cause the spontaneous breaking of the replica symmetry.
This problem, which is very well-known in the spin-glass community, motivated some authors \cite{Dotsenko1, Dotsenko2} to reconsider the RG study of randomly diluted spin systems
in terms of Parisi's replica symmetry breaking (RSB) scheme,
which has been specifically developed to deal with systems exhibiting
several local minima configurations. Further investigations, however,
proved that the replica-symmetric FP solution in $d=3$
is stable against the switching on of possible replica symmetry breaking terms \cite{Prudnikov1}.
Even a simple distribution such as \eqref{eq:bimodalpdf} turns on \emph{all} cumulants and therefore,
in principle, all possible interactions of the replicated action \eqref{eq:replicated-action}.
We use this observation to argue that multicritical generalizations of the hypercubic model, as hinted
at by our formula for the replicated action, are not only interesting in their own right,
but important for the sake of constructing models of quenched average with arbitrary
interactions and distributions of the impurities.
Not much is known about these multicritical generalizations;
for example, the possibility of RSB in the multicritical case has, to the best of our knowledge,
never been investigated.
Starting from Sect.~\ref{sec:dc3},
we characterize and discuss the first few interesting examples of critical and multicritical
models with hypercubic symmetry.
\section{Logarithms and logarithmic CFTs}
\label{sec:logarithms}
It is now natural to make contact with a language more akin to that of CFT.
In the previous section, we established the possibility to work with arbitrary $N$ before continuing to the limit $N\to 0$,
which is
quite advantageous when it comes to constructing interesting observables, because we can
organize the field multiplets in terms of irreducible representations of the hypercubic group.
Here we concentrate on observables built with two copies of $\phi$ and no derivatives,
but a much more general discussion is possible \cite{Cardy:2013,Vichi:2016}.
The study of the irreducible representations of $H_N$ gives three distinct operator multiplets at the quadratic level
for arbitrary $N$
\begin{equation}\label{eq:irreps}
S = \sum_a \phi_a^2 \,, \qquad\qquad X_{ab} = \phi_a^2- \phi_b^2\,, \qquad\qquad Y_{ab} =\phi_a\phi_b\,,
\end{equation}
in which $S$ is obviously the singlet, $X$ is an antisymmetric two-tensor, and $Y$ is a symmetric one.
The naive expectation is that, for arbitrary values of $N$, the dilatation operator ${\cal D}$ should commute with the
action of the elements of the group $H_N$, implying that a scaling operator ${\cal O}(x)$
can carry a label for ${\cal D}$ (a.k.a.\ the scaling dimension) as well as an irreducible representation of $H_N$.
From the CFT point of view, the corresponding states would be of the form $\left|\Delta,R\right>\simeq{\cal O}\left|0\right>$ with $R$ ranging over the irreducible representations, for example $R=S,X,Y,\dots$ as in \eqref{eq:irreps}.\footnote{%
One can adopt radial quantization to make the connection between states and operators more formal.}
A general property, that we confirm repeatedly in the next sections, is that the scaling dimensions
of the operators $S$ and $X$ degenerate in the limit $N\to 0$. One way to understand this is that
the copies can be thought of as becoming completely indistinguishable in the limit.
To clarify the connection with the previous notation and to make contact with previous literature
(modulo some normalization), we make the first important identification:
the singlet $S$ is the energy operator of the replicated system up to an overall constant,
so we define $E\equiv\frac{1}{N}S=\frac{1}{N}\sum_a E_a$.
The second identification is only slightly less trivial. Given that the tensor $X$ has two indices,
we first take the trace over one index to construct a vector $\tilde{E}_a\equiv \frac{1}{N}\sum_b X_{ab}=E_a-E$.
This vector has the same scaling dimension as $X$ and, by construction, is traceless $\sum_a \tilde{E}_a=0$ \cite{Cardy:1999}.
The two point correlators of the operators $E$ and $\tilde{E}$ can be simplified using replica symmetry,
that is, the symmetry of the subgroup $S_N$ acting on the labels $a=1,\dots,N$. In particular, the correlators can easily be simplified
in terms of those of only two distinguished copies $a=1,2$, which is the minimum number of necessary copies.
The result is
\begin{eqnarray} \label{eq:replica-correlators}
\langle E(x) E(0)\rangle &=& \frac{1}{N} \langle E_1(x) E_1(0)\rangle + \frac{N-1}{N}\langle E_1(x) E_2(0)\rangle \,,\\
\langle \tilde{E}(x) \tilde{E}(0)\rangle &=& \frac{N-1}{N} \langle E_1(x) E_1(0)\rangle -\frac{N-1}{N}\langle E_1(x) E_2(0)\rangle \,.
\end{eqnarray}
The interesting aspect of this manipulation is that the correlators on the right hand side
can be used to give the quenched average of both connected and disconnected energy-energy correlation functions,
which are defined as
\begin{eqnarray} \label{eq:quenched-correlators}
\overline{\langle E(x) E(0)\rangle} =\lim_{N\to 0} \langle E_1(x) E_1(0)\rangle\,, &\qquad \quad&
\overline{\langle E(x) \rangle \langle E(0)\rangle} =\lim_{N\to 0}\langle E_1(x) E_2(0)\rangle
\end{eqnarray}
and are observables of the diluted system.
We thus solve for the correlators on the r.h.s.\ of \eqref{eq:replica-correlators} inside \eqref{eq:quenched-correlators},
to obtain in the limit two very important observables:
\begin{equation}
\begin{split}
&\overline{\langle E(x) E(0)\rangle} -\overline{\langle E(x) \rangle \langle E(0)\rangle}
= \lim_{N\to 0} \frac{N}{N-1} \langle \tilde{E}(x) \tilde{E}(0)\rangle \,,\\
&\overline{\langle E(x) \rangle \langle E(0)\rangle}
= \lim_{N\to 0} \Bigl\{\langle E(x) E(0)\rangle -\frac{1}{N-1}\langle \tilde{E}(x) \tilde{E}(0)\rangle \Bigr\}\,.
\end{split}
\end{equation}
In manipulating these expressions, we assume that correlators can have poles at $N=0$, but are otherwise regular.
In fact, the first line is nonzero only if the correlator of $\tilde{E}$ is singular in the limit,
which has been confirmed in practice by CFT methods.
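The inversion of the linear system \eqref{eq:replica-correlators} can be cross-checked symbolically; a minimal sketch with sympy, in which the names c11 and c12 for $\langle E_1(x) E_1(0)\rangle$ and $\langle E_1(x) E_2(0)\rangle$ are ours:

```python
import sympy as sp

N, c11, c12 = sp.symbols('N c11 c12')

EE   = c11/N + (N - 1)/N*c12     # <E E> from eq. (replica-correlators)
EtEt = (N - 1)/N*(c11 - c12)     # <Etilde Etilde>

# exact inversion, valid for any N
assert sp.simplify(c11 - (EE + EtEt)) == 0
assert sp.simplify(c12 - (EE - EtEt/(N - 1))) == 0

# the connected combination, exact in N
assert sp.simplify((c11 - c12) - N/(N - 1)*EtEt) == 0
```

Note that the first relation, $\langle E_1 E_1\rangle = \langle EE\rangle + \langle\tilde{E}\tilde{E}\rangle$, holds exactly and not only in the limit.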
Now we use the scaling properties of the hypercubic model at criticality for general $N$
to deduce the scaling form of the correlators
\begin{eqnarray}
\langle E(x) E(0)\rangle \sim \frac{1}{N} A(N) \left|x\right|^{-2\Delta_E} \,, &\quad &
\langle \tilde{E}(x) \tilde{E}(0)\rangle \sim \frac{N-1}{N} \tilde{A}(N) \left|x\right|^{-2\Delta_{\tilde{E}}}\,,
\end{eqnarray}
in which $\Delta_E$ and $\Delta_{\tilde{E}}$ are the $N$-dependent scaling dimensions of the operators $E$
and $\tilde{E}$ respectively.
The structure of the overall coefficients can be inferred from the analysis of the replicated correlators:
the functions $A(N)$ and $\tilde{A}(N)$ are regular in the limit $N\to 0$.
Evidently, the normalization of the correlators is singular in the limit, which can be proven using CFT methods and requiring consistency.
The presence of these poles is actually very important from the point of view of CFT,
because they generate the most interesting aspects of the quenched limit, as we shall see briefly.
Recall now that the two operators become degenerate in the limit $N\to 0$:
consistency of the correlators requires that $A(0)=\tilde{A}(0)$ and $A'(0)=\tilde{A}'(0)$.\footnote{%
Actually, the second condition is only for convenience and, if relaxed,
the following conclusions remain almost unaltered.
}
The scaling dimensions coincide in the limit too, implying $\Delta_{\tilde{E}}|_{N=0}=\Delta_{E}|_{N=0}$,
therefore we argue on general grounds that there is a quantity $\Delta'_{E}$ such that
$\Delta_{\tilde{E}}-\Delta_{E}= N \Delta'_{E} +{\cal O}(N^2)$.
Notice that $\Delta'_{E}$ is a difference of the scaling dimensions, rather than a derivative, despite the notation.
It can be defined operatively as
\begin{equation}
\Delta'_{E} \equiv \lim_{N\to 0} \frac{\Delta_{\tilde{E}}-\Delta_{E}}{N}\,.
\end{equation}
Now we insert the scaling limit of the correlators in the quenched averages and take the limit $N\to 0$ according
to our definitions, using $\left|x\right|^{-2\Delta_{\tilde{E}}} = \left|x\right|^{-2\Delta_{E}}\bigl(1 - 2 N \Delta'_{E} \log\left|x\right| + {\cal O}(N^2)\bigr)$. We find
\begin{equation}\label{eq:quenched-correlators-log}
\begin{split}
&\overline{\langle E(x) E(0)\rangle} -\overline{\langle E(x) \rangle \langle E(0)\rangle}
\sim A(0) \left|x\right|^{-2\Delta_{E}} \,,\\
&\overline{\langle E(x) \rangle \langle E(0)\rangle}
\sim 2 A(0) \Delta'_{E} \log\left|x\right| \left|x\right|^{-2\Delta_{E}} \,,
\end{split}
\end{equation}
in which the scaling dimensions on the r.h.s.\ are implicitly evaluated at $N=0$
so as not to overburden the notation. From now on, quantities are understood at $N=0$, therefore
$\Delta_E|_{N=0}= \Delta_E$ and $\Delta_{\tilde{E}}|_{N=0}= \Delta_{\tilde{E}}$.
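The appearance of the logarithm can be reproduced symbolically by feeding the scaling forms into the exact inversion of the replica correlators; a sketch with sympy, writing L for $\log|x|$, dDelta for $\Delta'_E$, and expanding $A(N)\simeq \tilde{A}(N) \simeq A_0 + A_1 N$ (our notation):

```python
import sympy as sp

N, A0, A1, D, Dp, L = sp.symbols('N A0 A1 Delta dDelta L')

# scaling forms: |x|^(-2 Delta) = exp(-2 Delta L), Delta_Etilde = Delta + N*dDelta
EE   = (A0 + A1*N)/N*sp.exp(-2*D*L)
EtEt = (N - 1)/N*(A0 + A1*N)*sp.exp(-2*(D + N*Dp)*L)

c11 = EE + EtEt             # <E_1 E_1>: quenched energy-energy correlator
c12 = EE - EtEt/(N - 1)     # <E_1 E_2>: quenched disconnected part

connected    = sp.limit(c11 - c12, N, 0)
disconnected = sp.limit(c12, N, 0)

# the 1/N poles cancel; the disconnected part picks up a log|x|
assert sp.simplify(connected - A0*sp.exp(-2*D*L)) == 0
assert sp.simplify(disconnected - 2*A0*Dp*L*sp.exp(-2*D*L)) == 0
```

The check makes explicit that the logarithm comes entirely from the mismatch $\Delta_{\tilde{E}}-\Delta_E = N\Delta'_E$ combined with the $1/N$ poles of the normalizations.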
The surprising feature of \eqref{eq:quenched-correlators-log} is that the quenched average
of the disconnected part of the energy-energy correlation function displays a logarithmic behavior at criticality
in apparent violation of basic CFT properties!
What is happening here is that the operators are not eigenstates of the action of dilatation at (and only at) $N=0$.
More technically, the operators arrange in a two-by-two Jordan cell which is triangular and cannot be diagonalized
(in general cases with more degenerate operators the cell can have higher rank). Manipulations of the Jordan cell
result in a so-called logarithmic pair of operators \cite{Gurarie:1993, Flohr:2001zs, Gaberdiel:2001tr, Creutzig:2013hma, Gurarie:2013tma}.
An interesting conclusion, which is relevant to the renormalization group point of view,
comes from recalling that
the quantities $\Delta_{\tilde{E}}$ and $\Delta_{E}$ are universal functions of $N$,
therefore we have that $\Delta'_{E}$ is also universal.
Consequently, one is tempted to look for an observable to measure $\Delta'_{E}$
and the logarithmic behavior, which is easily found by measuring the ratio of the correlators at criticality
\begin{eqnarray}
\frac{\overline{\langle E(x) \rangle \langle E(0)\rangle}}{\overline{\langle E(x) E(0)\rangle} -\overline{\langle E(x) \rangle \langle E(0)\rangle}}
\sim 2 \Delta'_{E} \log\left|x\right| \,.
\end{eqnarray}
Similar observables can be constructed for other logarithmic CFTs, including percolation,
loop-erased random walks, and self-avoiding walks. Logarithmic behavior is very common among CFTs
which can be described by some parameter, such as $N$, that takes a \emph{special} value through analytic continuation.
Having mentioned that the quantities $\Delta_{E}$, $\Delta_{\tilde{E}}$ and $\Delta'_{E}$ are universal,
it is clear that they can be computed by some renormalization group method. In fact, they are already known
to varying degrees of accuracy and in several schemes for $\phi^4$-type theories, for example.
It is important to stress, however, that our discussion is certainly not limited to $\phi^4$-type theories of the Ising universality class, but potentially includes any possible quenched generalization of critical and multicritical
magnetic systems.
Furthermore, the quantity $\Delta'_{E}$ is in general much less known, if known at all.
A traditional starting point for any renormalization group based computation is
perturbation theory and the $\epsilon$-expansion,
which we adopt in the next sections to give several estimates of the above universal quantities.
\section{Renormalization group and critical properties}
\label{sec:method}
The truncation of the cumulant expansion \eqref{eq:cumulantsexp} to order $\kappa_n$ results in a model
with $\phi^{2n}$-type interactions, hence multicritical. Simple dimensional analysis shows that the highest-order
interaction has upper critical dimension $d_{c} = \frac{2n}{n-1}$.
For example, for $n=2$ the interaction is $\phi^{4}$-type, which has upper critical dimension $d_c=4$. More generally,
higher order interactions have smaller $d_c$ and accumulate to $d_c\to 2$ for increasing $n$.
These critical dimensions are the same as those of single-field multicritical models, which are known to
interpolate to the multicritical CFTs of Zamolodchikov in $d=2$ \cite{zamolodchikov:1986}.
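The counting of upper critical dimensions amounts to a one-line marginality condition: with $[\phi]=\frac{d-2}{2}$, a $\phi^{2n}$ coupling is dimensionless when $n(d-2)=d$. A sympy sketch:

```python
import sympy as sp

d, n = sp.symbols('d n', positive=True)

# marginality condition for a phi^(2n) interaction: 2n * (d-2)/2 = d
dc = sp.solve(sp.Eq(n*(d - 2), d), d)[0]

assert sp.simplify(dc - 2*n/(n - 1)) == 0
assert dc.subs(n, 2) == 4                     # phi^4
assert dc.subs(n, 3) == 3                     # phi^6
assert dc.subs(n, 4) == sp.Rational(8, 3)     # phi^8
assert sp.limit(dc, n, sp.oo) == 2            # accumulation at d = 2
```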
From our point of view, following ideas discussed in Refs.~\cite{Codello:2017a, Codello:2017qek, Codello:2019}, we take the possible values of $d_c$, and therefore the
label $n$, as an input for the classification of the multicritical generalizations. In fact,
at $d=d_c$ it is possible to construct a meaningful perturbative series in terms of the marginal couplings,
and at $d=d_c-\epsilon$ it is possible to determine universal quantities, such as critical exponents,
in the $\epsilon$-expansion. In the next sections we concentrate on the first few values $d_c=4,3,\frac{8}{3}$.
We also discuss a model with $d_c=\frac{10}{3}$, which we explain in more detail in the corresponding section.
For each $d_c$ there are very general forms of the renormalization group flows $\beta_V$ and $\beta_Z$,
respectively of the effective potential $V$ and of the wavefunction renormalization $Z$, which are determined
to a certain loop order. The loop order at which each multicritical RG flow is known varies considerably among the models, so we specify it in each corresponding section; within our selection we consistently
have results between three and four loops.
In practice, the beta functional $\beta_V$ is used to determine the beta functions $\beta_i$ of the couplings,
in which the label $i$ runs among all couplings of $V$,
while $\beta_Z$ determines the anomalous dimension of the fields' multiplet.
The beta functions have an increasing number of fixed points of order $\epsilon$ as the number
of couplings grows, so in the following we select the ones that we deem most important.
Using the fixed points, one can translate the perturbative series in the couplings into $\epsilon$-series.
The spectrum of the relevant operators of the theory is given by the eigendeformations
of the stability matrix $M_{ij} \equiv \partial_j \beta_i$, whose eigenvalues $\theta_a$ are related to the scaling dimensions $\Delta_a$
of scaling operators (eigenvectors) by the relation $\theta_a = d - \Delta_a$.\footnote{%
Here we are referring to $\Delta_a$ as being the same as the CFT scaling dimension, notice however
that this relation is slightly modified if the corresponding operator is a CFT descendant \cite{Codello:2017a}.
This will not concern the results of this paper.
}
In dealing with the quadratic operators \eqref{eq:irreps}, we find it more convenient to renormalize them as composites.
To achieve this within a functional scheme, it is sufficient to operate the replacement
$V \to V + J_R\cdot R$ in $\beta_V$, which gives the RG running of the composite operators
if $R=S,X,Y$ runs through the irreducible representations and $J_R$ are sources with the correct number of indices
and symmetries. We use the linear order in $J_R$ of this substitution to determine the critical exponents and scaling dimensions of $R$, but in principle one could go beyond and compute the operator product expansion
of pairs of operators \cite{Pagani:2020ejb}.
The first most important field-theoretical critical exponent is the anomalous dimension $\eta$,
which is obtained by diagonalizing $\beta_Z$. In all even models, the symmetry $H_N$ is enough to
constrain all fields to have the same anomalous dimension.
The second most important field-theoretical critical exponent is $\nu$, which controls the scaling
of the correlation length and is estimated as the inverse of the RG eigenvalue of the quadratic singlet in $V$.
From \eqref{eq:irreps}, we expect three different exponents for general $N$, and correspondingly three different
inverse quantities that we denote $\nu_R$ for $R=S,X,Y$.
Obviously, $\nu=\nu_S$, which we can verify easily in all computations.
As discussed in Sect.~\ref{sec:logarithms}, we have that $\nu_S=\nu_X$ while $\nu_S\neq \nu_Y $ in the limit $N\to 0$.
This has an interesting consequence as we are about to see.
As discussed in Ref.~\cite{Wiese:2020}, the critical exponent of the operator $X$ of \eqref{eq:irreps}
is also the fractal dimension of the propagator lines of the model, therefore
$ d_f = \frac{1}{\nu_X}$. Using again the fact that the operators $S$ and $X$ degenerate in the limit,
we can establish that
\begin{equation}
\begin{split}
d_f= \frac{1}{\nu}
\end{split}
\end{equation}
for all the random models that we are about to discuss.
Notice that this result replicates similar statements found in Ref.~\cite{Wiese:2020} in the same limit, but for different
symmetry content.
In fact, in that paper the limit $N\to 0$ of the $O(N)$ model is considered, but, since $O(N)$
is the maximal symmetry of a model with $N$ scalars \cite{Osborn:2017ucf} and since $H_N\subset O(N)$,
our equations contain the $O(N)$ case as a special case and the two limits $N\to 0$ coincide.
The limit $N\to 0$ of the $O(N)$ model is interesting because it corresponds to the continuum
theory behind a self-avoiding walk, i.e.\ a random walk that cannot cross its own path (or, in other words,
whose line has exactly $N=0$ loops) \cite{DeGennes1972}.
The operators $X$ and $Y$ of \eqref{eq:irreps} are also interesting when discussing the breaking of $H_N$ symmetry.
One can imagine a situation in which the model is at the critical temperature, but it is also deformed by some component
of either $X$ or $Y$, which breaks explicitly $H_N$ (therefore the correlation length is still finite).
Using scaling analysis, it is easy to show (see again Ref.~\cite{Wiese:2020} for the $O(N)$ example)
that, depending on the irreducible deformation,
the model crosses over to a phase with a smaller symmetry group, $H_N\to G$,
and that the crossover is critical for small coupling, which means close to the original critical point.
The exponents controlling these transformations, which go by the name of \emph{crossover exponents},
are estimated as the ratios of $\nu$s
\begin{equation}
\begin{split}
\phi_X = \nu/\nu_X \,, \qquad\qquad\qquad \phi_Y = \nu /\nu_Y\,.
\end{split}
\end{equation}
As before, we notice that for $N\to 0$ we have that $\phi_X=1$, which applies to all models described in the following
and therefore will not be repeated.
Instead, $\phi_Y $ is an independent exponent, which we give explicitly later even though it can easily be
derived from the operator scaling dimensions $\phi_Y =\frac{d-\Delta_Y}{d-\Delta_S}$.
The quadratic deformation $Y$ induces a crossover to a (broken symmetry) phase described by the so-called Klein four-group $G = \mathbb{Z}_2 \rtimes S_2 \simeq \mathbb{K}_4 $ and which could be associated to the critical content of two coupled Ising models.
The final quantities that we compute are rooted in both RG and CFT formalisms. The $A$-function is
the scalar function from which one can derive the RG flow as a gradient for all the even models. If $g_a$ are all the couplings and $\beta_a$ the corresponding beta functions, then $A$ is derived implicitly from
\begin{equation}
\begin{split}
\beta_a = \sum_b h_{ab} \frac{\partial A}{\partial g_b} \,.
\end{split}
\end{equation}
In the rest of the paper we assume that the metric $h_{ab}$ in the space of the couplings is flat,
which is consistent up to next-to-leading order.\footnote{%
To be precise, the metric is flat only for a specific choice of couplings.
In the case $d_c=4$, the most general quartic potential
is $V(\phi)= \sum\lambda_{ijkl}\phi_i\phi_j\phi_k\phi_l$, with couplings $\lambda_{ijkl}$ which are fully symmetric tensors. Their beta functions $\beta_{ijkl}$ can be derived from the function $A$ as $\beta_{ijkl}= \frac{\partial A}{\partial \lambda_{ijkl}}$, thus we choose the metric to be flat in the coordinates $\lambda_{ijkl}$. If we were to specialize
the same formula to the couplings of \eqref{eq:replicated-action}, we would generally find a nontrivial metric \cite{Osborn:2017ucf}.
}
Naively, $A$ counts the number of degrees of freedom of the model. Since $N\to 0$ corresponds to a situation
with no fields, we have that in our cases $A \sim N$, implying that the quantity of interest is instead
\begin{equation}
\begin{split}
a \equiv \lim_{N\to0} \frac{A}{N}\,,
\end{split}
\end{equation}
for which we stress the similarity with $\Delta'_E$ of Sect.~\ref{sec:logarithms}.
Finally, the constant $C_T$ appears when expanding the correlator involving the trace of the stress-energy tensor, $T$,
and two copies of the field $\phi_i$ for $d=4$. Loosely speaking
\begin{equation}
\begin{split}
C_T \sim \langle \phi_i \phi_i T\rangle\,,
\end{split}
\end{equation}
as shown in Ref.~\cite{Dey:2016}, in which the relation with the marginal couplings of $V$ has also been derived.
We have that $C_T \sim N$, like $A$, so again the interesting result comes from the limit
\begin{equation}
\begin{split}
c_T \equiv \lim_{N\to0} \frac{C_T}{N}\,.
\end{split}
\end{equation}
For both $a$ and $c_T$, we can think of their definitions as the uppercase quantities
normalized by their free-theory counterparts
\begin{equation}
\begin{split}
a \equiv \lim_{N\to0} \frac{A}{A_{\rm free}}\,, \qquad \qquad c_T \equiv \lim_{N\to0} \frac{C_T}{C_{T\, {\rm free}}}\,.
\end{split}
\end{equation}
Of course, the free theory that we refer to here has $N$ free scalar fields,
and not just one as in the normalization given in Ref.~\cite{Dey:2016}.
We finally notice that at LO the following relation holds
\begin{equation}
A= -\frac{3 \epsilon}{2} \sum_a \eta_a =- \frac{3 \epsilon}{2}N \eta \,,
\end{equation}
from which it directly follows that $a = -\frac{3\epsilon}{2} \eta$.
\section{\texorpdfstring{Tricritical theory in $d=3-\epsilon$}{Tricritical theory in 3-epsilon}}
\label{sec:dc3}
The renormalization group analysis in $d=3-\epsilon$ dimensions determines the properties of tricritical fixed points \cite{Stephen1973, Lewis1978}.
This is certainly true for a single component field $\phi$ with $\mathbb{Z}_2$-symmetry,
for which there exists a tricritical point which
requires the simultaneous fine-tuning at criticality of two relevant operators, besides
the external magnetic field, hence the name ``tricritical''.
In the generalization to $H_N$ symmetry and the randomly diluted case, however, the fixed points emerging in $d=3-\epsilon$ dimensions might require the fine-tuning at criticality of more than two parameters and, strictly speaking, one should regard them as multicritical ones. That being said, following \cite{Stephen1976} and in order to make clear contact
with the corresponding pure theory, we refer to the fixed points emerging at $d_c=3$ in the limit $N\to 0$
as tricritical ones.
Truncating the cumulant expansion at third order, we are left with the following marginal potential
\begin{equation}\label{eq:marginalV3}
\begin{split}
V(\phi_a) &=
w_1 \sum_a E_a^3(x) + w_2 \sum_{a \neq b} E_a^2(x)E_b(x) + w_3 \sum_{a\neq b \neq c} E_a(x) E_b(x) E_c(x) \\
&=
\upsilon_1\Bigl(\sum_a \phi_a^2\Bigr)^3 + \upsilon_2 \sum_a \phi_a^2 \sum_b \phi_b^4 + \upsilon_3 \sum_a \phi_a^6
\,.
\end{split}
\end{equation}
We have introduced a second parametrization that makes explicit the form of the interaction
in terms of the basic fields and shows that the $O(N)$ limit is reached when $\upsilon_2=\upsilon_3=0$.
The linear relations between the couplings $\upsilon_i$ and $w_i$ are straightforward to obtain, and in the following we only need $w_1=(\upsilon_1+\upsilon_2+\upsilon_3)$.
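Since the second parametrization indicates $E_a=\phi_a^2$, the linear map between the $w_i$ and the $\upsilon_i$ can be checked symbolically. The following sketch (ours, not from the paper; it assumes that the sums over unequal indices run over ordered tuples of pairwise-distinct replica indices, and checks the identity at $N=4$, the map itself being $N$-independent) verifies both the map and the relation $w_1=\upsilon_1+\upsilon_2+\upsilon_3$:

```python
# Symbolic check (ours) of the w_i <-> upsilon_i map for the tricritical
# potential, assuming E_a = phi_a^2 and ordered sums over distinct indices.
import itertools

import sympy as sp

N = 4
phi = sp.symbols(f"phi1:{N + 1}")
w1, w2, w3 = sp.symbols("w1 w2 w3")
E = [p**2 for p in phi]

# First parametrization: replica sums over pairwise-distinct (ordered) indices.
V_w = (w1 * sum(e**3 for e in E)
       + w2 * sum(E[a]**2 * E[b]
                  for a, b in itertools.permutations(range(N), 2))
       + w3 * sum(E[a] * E[b] * E[c]
                  for a, b, c in itertools.permutations(range(N), 3)))

# Second parametrization in terms of the basic fields.
s1 = sum(p**2 for p in phi)
s2 = sum(p**4 for p in phi)
s3 = sum(p**6 for p in phi)
u1, u2, u3 = w3, w2 - 3*w3, w1 - w2 + 2*w3   # candidate linear map
V_u = u1 * s1**3 + u2 * s1 * s2 + u3 * s3

assert sp.expand(V_w - V_u) == 0             # the two forms agree identically
assert sp.simplify(u1 + u2 + u3 - w1) == 0   # w1 = u1 + u2 + u3, as used in the text
```

Under this index convention one finds $\upsilon_1=w_3$, $\upsilon_2=w_2-3w_3$, $\upsilon_3=w_1-w_2+2w_3$, whose sum is indeed $w_1$.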
We computed the beta functions of the RG flow for \eqref{eq:marginalV3}; they can be found in \eqref{eq:betasystem-dc3}.
Apart from the trivial Gaussian fixed point, the theory admits the tricritical pure Ising fixed point \cite{Lewis1978} with coordinates
$\{ \upsilon_1^{\star} = 0 \,, \upsilon_2^{\star} = 0 \,, \upsilon_3^{\star} = \frac{3}{10}\epsilon \}$
and LO anomalous dimension given by $\eta = \epsilon^2/500$.
Then there is the tricritical $O(N)$ fixed point in the limit $N\to 0$ with coordinates
$\{ \upsilon_1^{\star} = \frac{15}{44} \epsilon \,, \upsilon_2^{\star} = 0 \,, \upsilon_3^{\star} = 0 \}$, which
corresponds to the tricritical SAW universality class and has anomalous dimension given by $\eta = \epsilon^2/726$;
it has been linked to the so-called Flory $\theta$-point of very long polymer chains \cite{DeGennes1972,DeGennes1975, Duplantier1982}.
Finally, there are three further fixed points that display genuine $H_N$ symmetry in the limit $N\to 0$;
therefore, there are three potentially equivalent candidates for the random tricritical model.
Using the experience gained from the standard critical case, which is reviewed in App.~\ref{sec:dc4}, we apply a rationale that boils this list down to a single interesting fixed point.
Since we are interested in phase transitions that can be of second order in the pure system, a first physical requirement is stability, i.e.,
$w_1^\star = (\upsilon_1^\star+\upsilon_2^\star+\upsilon_3^\star)>0$,
which ensures that \eqref{eq:marginalV3} is bounded from below
(an analogous condition holds for the case $d_c=4$).
A second physical requirement is that the theory is perturbatively unitary:
this implies that the anomalous dimension $\eta$ must be positive\footnote{%
The anomalous dimension is \emph{not} perturbatively positive for the standard diluted model in $d=4-\epsilon$, as shown in App.~\ref{sec:dc4}. However, it \emph{is} positive for $d=3$,
implying that it must change sign through a nonperturbative mechanism. This is very important for the comparison with lattice simulations, which clearly display $\eta>0$.
We elect the positivity of the anomalous dimension as a physical requirement for this reason.
}
and that the spectrum of RG deformations is \emph{real}.
At NLO, corresponding to four loops, this leaves us with only the following fixed point
\begin{equation}
\begin{split}
\upsilon_1^{\star} & = 0.427688 ~\epsilon + 3.6999 ~\epsilon ^2 \,,\\
\upsilon_2^{\star} & = -0.164358 ~\epsilon -2.49894 ~\epsilon ^2 \,, \\
\upsilon_3^{\star} & = 0.084626 ~\epsilon +1.26481 ~\epsilon ^2 \,.
\end{split}
\end{equation}
The above solution can be determined analytically, and at the leading order
it comes from solving three quadratic equations in three variables; the solution, however,
is enormous, so we prefer to report it numerically.
The critical exponents $\eta$ and $\nu$ for this fixed point are given by
\begin{equation}
\begin{split}
\eta = 0.00137842 ~\epsilon^2\,, \qquad \qquad
\nu = \frac{1}{2} + 0.00551369 ~\epsilon ^2 \,,
\end{split}
\end{equation}
while the scaling dimensions associated with the irreps \eqref{eq:irreps} read
\begin{equation}
\begin{split}
\Delta_E & = 1 - \epsilon + 0.0220548 ~\epsilon ^2 \,,\qquad \qquad
\Delta_Y = 1 - \epsilon + 0.0222901 ~\epsilon ^2 \,,\\
\Delta'_E & = -0.00832891 ~\epsilon ^2 \,.
\end{split}
\end{equation}
The $a$-function in this case takes the negative value
\begin{equation}
a^\star = -0.0206763 ~\epsilon ^3.
\end{equation}
We conclude by giving the fractal dimension of propagator lines and the nontrivial crossover exponent
\begin{equation}
\begin{split}
d_f = 2 - 0.0220548 ~\epsilon ^2 \,, \qquad \qquad
\phi_Y = 1 - 0.000117671 ~\epsilon ^2\,.
\end{split}
\end{equation}
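As a consistency check (ours, not from the paper), the values quoted above satisfy the standard scaling relations $1/\nu = d-\Delta_E$, $d_f = d-\Delta_E$ and $\phi_Y = \nu\,(d-\Delta_Y)$ at $d=3-\epsilon$, to the quoted order in $\epsilon$. A quick numerical verification with sympy:

```python
# Numerical cross-check (ours) of the quoted exponents against the standard
# scaling relations 1/nu = d - Delta_E, d_f = d - Delta_E, phi_Y = nu (d - Delta_Y).
import sympy as sp

eps = sp.symbols("eps", positive=True)
d = 3 - eps
Delta_E = 1 - eps + 0.0220548 * eps**2
Delta_Y = 1 - eps + 0.0222901 * eps**2
nu = sp.Rational(1, 2) + 0.00551369 * eps**2

# 1/nu and d_f both reduce to d - Delta_E = 2 - 0.0220548 eps^2 at this order.
inv_nu = sp.series(1 / nu, eps, 0, 3).removeO()
assert abs(sp.expand(inv_nu - (d - Delta_E)).coeff(eps, 2)) < 1e-5
assert sp.expand((d - Delta_E) - (2 - 0.0220548 * eps**2)) == 0  # = d_f

# Crossover exponent: nu*(d - Delta_Y) = 1 - 0.000117671 eps^2 + O(eps^4).
phi_Y = sp.expand(nu * (d - Delta_Y))
assert abs(phi_Y.coeff(eps, 2) + 0.000117671) < 1e-6
```

The near-cancellation between $\nu\,\Delta_Y$ and the $\epsilon^2$ term of $\nu$ explains the smallness of the $\phi_Y$ correction.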
To conclude, we briefly discuss some properties of the two other fixed points of \eqref{eq:marginalV3} that have been left out of the discussion.
One of them lies in the unphysical region $w_1<0$ and thus is not positive definite.
The other one has a complex conjugate pair of eigenvalues in the stability matrix (and therefore has \emph{complex} spectrum).
We discarded both of them accordingly, even though they might have some physical application.
For another analysis of the random tricritical model we refer to \cite{Stephen1976}.
\section{\texorpdfstring{Tetracritical theory in $d=\frac{8}{3}-\epsilon$}{Tetracritical theory in 8/3-epsilon}}
\label{sec:dc8/3}
The next in line among the possible generalizations comes from the $\epsilon$-expansion below $d_c=\frac{8}{3}$,
which determines the properties of tetracritical fixed points.
We proceed by truncating the cumulant expansion to the fourth order, obtaining the following marginal potential
\begin{eqnarray}\label{eq:marginalV8/3}
V(\phi_a) & =& u_1 \sum_a E_a^4(x) + u_2 \sum_{a \neq b} E_a^2(x)E_b^2(x) + u_3 \sum_{a \neq b} E_a^3(x) E_b(x)
\nonumber\\
&& + u_4 \sum_{a \neq b \neq c} E^2_a(x)E_b(x)E_c(x) + u_5 \sum_{a \neq b \neq c \neq l} E_a(x) E_b(x) E_c(x) E_l(x) \\
&=& \rho_1 \Bigl(\sum_a \phi_a^2\Bigr)^4 + \rho_2 \Bigl(\sum_a \phi_a^2\Bigr)^2 \sum_b \phi_b^4
+ \rho_3 \sum_a \phi_a^2 \sum_b \phi_b^6 + \rho_4 \Bigl(\sum_a \phi_a^4 \Bigr)^2
+ \rho_5 \sum_a \phi_a^8
\nonumber
\,.
\end{eqnarray}
As in the previous sections, the second parametrization in terms of the couplings $\rho_i$ conveniently
highlights the $O(N)$ limit as the case in which only $\rho_1\neq 0$.
The beta functions of the RG flow for \eqref{eq:marginalV8/3} are given in \eqref{eq:betasystem-dc8/3}.
The theory admits several nontrivial fixed points. First, we have the tetracritical pure Ising fixed point with coordinates
$\{\rho_1^{\star} = 0 \,, ~ \rho_2^{\star} = 0 \,, ~ \rho_3^{\star} = 0 \,, ~ \rho_4^{\star} = 0 \,, ~ \rho_5^{\star} = \frac{3}{70} \epsilon \}$,
and LO anomalous dimension given by $\eta = \frac{9}{85750} \epsilon^2$, which can be checked against Ref.~\cite{Zinati:2019gct}.
Then, we have the tetracritical $O(N)$ fixed point in the limit $N\to 0$ with coordinates $\{ \rho_1^{\star} = \frac{105}{2144} \epsilon\,,
~\rho_2^{\star} = 0 \,, ~ \rho_3^{\star} = 0 \,, ~ \rho_4^{\star} = 0 \,, ~ \rho_5^{\star} = 0 \}$
and anomalous dimension $\eta = \frac{9}{143648} \epsilon^2$,
corresponding to the tetracritical SAW universality.
We then have several nontrivial fixed points with true hypercubic symmetry.
The requirement that $u_1 ^{\star} \equiv ( \rho_1^\star + \rho_2^\star + \rho_3^\star + \rho_4^\star + \rho_5^\star)>0$
leaves us with seven possible candidates.
Among these, only three have a positive anomalous dimension $\eta$, and only one has a completely real spectrum.
These features are in common with the random Ising model in $d=3$
(see the appropriate comment in Sect.~\ref{sec:dc4}) and with the tricritical example of the previous section. The corresponding LO fixed point coordinates are determined at three loops as
\begin{equation}
\{ \rho_1^{\star} = 0.009405 ~\epsilon \,, ~ \rho_2^{\star} = 0.060199 ~\epsilon \,, ~ \rho_3^{\star} = 0.002977 ~\epsilon\,, ~ \rho_4^{\star} = 0.022737 ~\epsilon \,, ~ \rho_5^{\star} = -0.048088 ~\epsilon \} \,.
\end{equation}
The anomalous dimension is given by $\eta = 0.0000615196 ~\epsilon^2$. In principle,
even the two discarded fixed points might also be interesting;
they have a complex-conjugate pair of critical exponents in the stability matrix, but a positive anomalous dimension.
We chose to limit our discussion to fixed points with real spectrum for clarity,
assuming that it is a physically meaningful requirement based on the analogy with the previous models.
\section{\texorpdfstring{Multicritical theory in $d=\frac{10}{3}-\epsilon$}{Multicritical theory in 10/3-epsilon}}
\label{sec:dc10/3}
In the previous sections, we exclusively considered even interactions of the fields
and each model can be regarded as a generalization of the tower of multicritical models $\phi^{2n}$
by Zamolodchikov \cite{zamolodchikov:1986, itzykson1991sft, ODwyer:2007}.
A way around this limitation is to include a singlet field $\sigma$ in the multiplet of fields,
$\phi_a \to (\phi_a,\sigma)$.
The presence of a singlet allows us to construct interactions with an odd number of fields.
For the single-component case, these interactions
also correspond to a tower of multicritical models $\phi^{2n+1}$,
albeit much less well known than their even counterparts \cite{Zambelli:2016,Codello:2017}.
The simplest possible odd theory with $H_N$ symmetry
is the one with a cubic interaction and upper critical dimension $d_c=6$,
however, the potential is constrained to be $V\sim \sum_a\sigma (\phi_a)^2$, and therefore it actually has
enhanced $O(N)$ symmetry. This is essentially the same theory considered in Ref.~\cite{Giombi:2019},
when attempting to construct the Wilson-Fisher $O(N)$ model above the upper critical dimension $d_c=4$.
The result is a non-unitary theory as clarified recently in \cite{Giombi:2019}.
For our construction, we need a true $H_N$ symmetry, which can be achieved by
considering at least a theory with quintic interaction and upper critical dimension $d_c=\frac{10}{3}$.
The critical potential is
\begin{equation}\label{eq:marginalV103}
\begin{split}
V(\phi_a,\sigma) &= v_1 \sigma^5 + v_2 \sigma^3 \sum_a E_a + v_3 \sigma\sum_{a\neq b}E_a E_b
+ v_4 \sigma\sum_a E_a^2 \\
&=
\kappa_1 \sigma^5 + \kappa_2 \sigma^3 \sum_a \phi_a^2 + \kappa_3 \sigma \Bigl(\sum_a \phi_a^2\Bigr)^2 + \kappa_4 \sigma \sum_a \phi_a^4
\,,
\end{split}
\end{equation}
and it can accommodate a true $H_N$ symmetry thanks to the monomial coupled to $v_3$ in the first parametrization
or, equivalently, $\kappa_4$ in the second.
For this reason, a nonzero $v_3$ is a signature of genuine hypercubic symmetry,
instead of the larger group $O(N)$.
If we were to follow the same logic of the construction of Ref.~\cite{Giombi:2019},
then the resulting theory could be interpreted as an attempt to promote the
model with hypercubic symmetry and $d_c=3$ above its upper critical dimension.
For general $N$, this model generalizes the single-field case presented in detail in Ref.~\cite{Codello:2017}.
The corresponding RG flow can be read off from the equations presented in Ref.~\cite{Codello:2017}
by appropriately adding the field indices, which can be done in a unique way at the leading order.
We also adopt the same normalization to avoid factors of $4\pi$.
The result of the RG analysis is that there are several fixed points, most of which are genuinely
complex, having nonzero real and imaginary parts.
As in the single field case, however,
there are some purely imaginary solutions. In fact, purely imaginary solutions are protected
by the parity and time-reversal pseudo-symmetry
\begin{equation*}
{\cal PT}: V(\phi_a,\sigma) \to V^*(\phi_a,-\sigma)\,,
\end{equation*}
in which we denoted complex-conjugation of the potential with the asterisk.
The same happens for the Lee-Yang model \cite{Bender:2012ea, Bender:2013qp}.
We deduce that the resulting models are non-unitary, but they are still interesting
because the boundedness of the spectrum is guaranteed by the action of ${\cal PT}$.
Non-unitarity results in $\eta<0$, so for this section we relax the requirement of positive
anomalous dimension.
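The protection mechanism can be made explicit with a short symbolic check (ours, not from the paper): for purely imaginary couplings every monomial of \eqref{eq:marginalV103} is odd in $\sigma$, so the potential satisfies $V^*(\phi_a,-\sigma)=V(\phi_a,\sigma)$:

```python
# Symbolic check (ours): the quintic potential with purely imaginary couplings
# is invariant under PT: V(phi, sigma) -> V*(phi, -sigma).
import sympy as sp

N = 3
phi = sp.symbols(f"phi1:{N + 1}", real=True)
sigma = sp.symbols("sigma", real=True)
r1, r2, r3, r4 = sp.symbols("r1:5", real=True)         # real magnitudes
k1, k2, k3, k4 = sp.I*r1, sp.I*r2, sp.I*r3, sp.I*r4    # purely imaginary couplings

S2 = sum(p**2 for p in phi)
S4 = sum(p**4 for p in phi)
V = k1*sigma**5 + k2*sigma**3*S2 + k3*sigma*S2**2 + k4*sigma*S4

PT_V = sp.conjugate(V).subs(sigma, -sigma)   # V*(phi, -sigma)
assert sp.expand(PT_V - V) == 0              # PT-invariant for imaginary couplings
```

Complex conjugation flips the sign of each (imaginary) coupling, and $\sigma\to-\sigma$ flips it back, since every term is odd in $\sigma$.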
An interesting aspect of the analysis of the model with odd interactions
is related to the generalization of the quadratic composite operators \eqref{eq:irreps}.
Having a new singlet field $\sigma$ at our disposal, it is clear that it is possible to add a new
quadratic singlet to \eqref{eq:irreps}, which we denote $S'=\sigma^2$.
In general, the renormalization process will mix the singlets $S$ and $S'$ because they carry the same
labels for arbitrary values of $N$.
However, this notion clashes with the statement that the singlet $S$ and the tensor $X$ have degenerate scaling dimensions on the basis of $H_N$ symmetry in the limit $N\to 0$,
which was verified multiple times in the previous sections.
This conundrum is solved by noticing that, precisely in the limit $N\to 0$, the mixing matrix of
$S$ and $S'$ is diagonal and they become separate scaling operators,
which is thus consistent with $S$ alone becoming degenerate with $X$.
In the limit $N\to 0$, we thus have two distinct critical exponents $\nu_\phi$ and $\nu_\sigma$
from $S$ and $S'$ respectively,
as well as two distinct anomalous dimensions $\eta_\phi$ and $\eta_\sigma$.
The RG flow of \eqref{eq:marginalV103} is given in \eqref{eq:betasystem-dc10/3}.
We find two fixed points that fit all the criteria and that we therefore consider worth reporting.
The coordinates of the first fixed point are
\begin{equation*}
\{ \kappa_1^\star = \frac{3}{\sqrt{229}} \,i ~\epsilon^{1/2} ,
~ \kappa_2^\star = 0.948238 \,i ~\epsilon^{1/2},
~ \kappa_3^\star = -0.630458 \,i ~\epsilon^{1/2},
~ \kappa_4^\star = 0.775543 ~ \,i ~\epsilon^{1/2} \} \,,
\end{equation*}
and its associated critical exponents are
\begin{align}
\eta_\phi & = -0.00048373 ~\epsilon \,, &\nu_\phi & = \frac{1}{2} + 0.00196926 ~\epsilon \,, \\
\eta_\sigma & = -\frac{3}{1145} ~\epsilon \,, &\nu_\sigma & = \frac{1}{2}- \frac{33}{4580} ~\epsilon \,.
\end{align}
The coordinates of the second fixed point are
\begin{equation*}
\{ \kappa_1^\star = \frac{3}{\sqrt{229}} \,i ~\epsilon^{1/2} ,
~ \kappa_2^\star = -0.390662 \,i ~\epsilon^{1/2} ,
~ \kappa_3^\star = -0.575971 \,i ~\epsilon^{1/2} ,
~ \kappa_4^\star = -0.0131081 \,i ~\epsilon^{1/2} \} \,,
\end{equation*}
and the corresponding critical exponents are
\begin{align}
\eta_\phi & = -0.00410786 ~\epsilon \,, &\nu_\phi & = \frac{1}{2} - 0.00822047 ~\epsilon \,, \\
\eta_\sigma & =-\frac{3}{1145} ~\epsilon \,, &\nu_\sigma & = \frac{1}{2}- \frac{33}{4580} ~\epsilon \,.
\end{align}
We notice that the critical exponents
associated to the singlet $\sigma$ are identical at the two fixed points and could be given as simple fractions.
This happens through the combination of a few facts: first, the beta function of $v_1$ is decoupled from the rest of the system in the limit $N\to0$, resulting in $\beta_{\kappa_1}=-\frac{3}{2}\epsilon\, \kappa_1 -\frac{229}{6}(\kappa_1)^3$.
Second, both the anomalous dimension $\eta_\sigma=\frac{(\kappa_1)^2}{15}$ and the scaling exponent $\frac{1}{\nu_\sigma}=2-\frac{11}{15}(\kappa_1)^2$ of $\sigma^2$ only depend on $\kappa_1$ in the limit.
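Both statements can be verified exactly from the quoted $\beta_{\kappa_1}$, $\eta_\sigma$ and $1/\nu_\sigma$; a short symbolic check (ours) reproduces the coordinate $\kappa_1^\star$ and the simple fractions above:

```python
# Exact verification (ours) that the quoted kappa_1 fixed point reproduces
# eta_sigma = -3 eps/1145 and nu_sigma = 1/2 - 33 eps/4580.
import sympy as sp

eps = sp.symbols("epsilon", positive=True)
k = sp.symbols("kappa1")
beta = -sp.Rational(3, 2)*eps*k - sp.Rational(229, 6)*k**3

# The quoted fixed-point coordinate kappa_1* = 3 i eps^(1/2)/sqrt(229):
k_star = 3*sp.I*sp.sqrt(eps)/sp.sqrt(229)
assert sp.simplify(beta.subs(k, k_star)) == 0

k_sq = sp.simplify(k_star**2)                 # (kappa_1*)^2 = -9 eps/229
eta_sigma = k_sq / 15                         # eta_sigma = kappa_1^2/15
assert sp.simplify(eta_sigma + sp.Rational(3, 1145)*eps) == 0

inv_nu = 2 - sp.Rational(11, 15)*k_sq         # 1/nu_sigma = 2 - (11/15) kappa_1^2
nu_series = sp.series(1/inv_nu, eps, 0, 2).removeO()
assert sp.simplify(nu_series - (sp.Rational(1, 2) - sp.Rational(33, 4580)*eps)) == 0
```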
\section{Conclusions}
\label{sec:conclusions}
There are two main messages that we wanted to deliver through this paper:
the first one is that there are several, possibly infinitely many, multicritical generalizations of the hypercubic model.
These generalizations are relevant if one desires to approach the problem
of describing quenched averages of pure magnetic systems with arbitrary
criticality order or arbitrary distributions of the disorder.
We have argued in favor of the existence of such generalizations after a summary
of the replica approach to describing disorder, in which we highlighted the role that a nontrivial distribution
of the disorder has in generating higher order terms for the replicated action.
While most of the previous literature has concentrated on Gaussian noise,
possibly invoking universality to justify that all distributions should essentially fall into the same universality class,
we believe that a sufficiently complex distribution might trigger a multicritical behavior in the random system.
If this is correct, our findings suggest that such type of multicritical behaviors can be classified.
For this purpose, we have used perturbation theory and the $\epsilon$-expansion as classification tools,
determining the upper critical dimension indirectly from the choice of the marginal interactions that control
the perturbative series.
Given that almost all the new upper critical dimensions that we have studied are below $d=3$,
we also believe that our findings might be most relevant for systems with two physical dimensions
and therefore could have interesting implications in the context of low-dimensional physics and
two-dimensional CFT.
The second main message is that some conformal data and the scaling of very interesting observables are accessible by renormalization group methods, including less-known universal coefficients that are related to logarithmic corrections to CFT. These corrections are not logarithmic in the sense of mean-field, but instead literally display logarithms as part of the CFT correlators.
We have discussed in a few simple steps how to access some quantities that are relevant for the study of CFTs
and log-CFTs by a combination of group representation theory and renormalization group methods, making an explicit connection with earlier work by Cardy.
Our hope is to open a new avenue for comparing and combining results coming from RG and CFT,
since both methods might have advantages and disadvantages.
For example, it is very simple to handle the analytic continuation
in the number $N$ of replicas through the methods of this paper.
An interesting aspect for a generalization, in this regard,
would be to promote the functional approach to a nonperturbative method of the RG,
such as the Wetterich equation \cite{Wetterich:1992yh}.
How to properly continue the fully functional form of the local potential approximation
to $N\to 0$ is, however, presently unknown to us,
as it would imply having a function with zero arguments.
It is still possible to continue the couplings' beta functions as done
in Ref.~\cite{PhysRevB.65.140402} for the hypercubic model
and in Ref.~\cite{Zinati:2017hdy} for the comparable Potts model.
In passing, we have mentioned several other fixed points and related critical models, besides the limit $N\to 0$
of the hypercubic.
Our hope is to have given a complete accounting of the interesting fixed points
appearing in the limit of ``zero fields''.
Among these, we have seen several multicritical generalizations
of the self-avoiding walk, which arise as the limit $N\to 0$ of models with $O(N)$ symmetry.
This is an important point because, especially for some of the models that we have considered,
the RG diagram contains many new fixed points
which are either complex (thus maybe related to some complex CFT) or non-unitary
(in the sense that they do not satisfy the unitarity bound perturbatively)
and some rationale must be used to infer which are the important ones.
For brevity, we have made a selection of the interesting candidate fixed points
generalizing the hypercubic behavior in the limit and decided not to dive too deeply
into the study of all the others. It is still possible, however,
that the points that we have not discussed in detail have some interesting physical meaning,
so we hope that our work sparks some interest in them as well.
\paragraph{Acknowledgements.}
We are grateful to A.~Stergiou for suggesting some important references to us
and to E.~Vicari for drawing our attention to the $N\to 0$ limit of the hypercubic model.
For performing the computations of this paper we relied heavily on the \emph{Mathematica} packages \cite{xact-package,xperm-package} and \cite{Nutma:2013zea}. RBAZ acknowledges the support from the French ANR through the project NeqFluids (grant ANR-18-CE92-0019).\enlargethispage{\baselineskip}
\section{Introduction}
\label{Sec_introduction01}
\subsection{General introduction}
This work studies a flow-constrained randomized shortest paths \cite{Kivimaki-2012,Saerens-2008,Yen-08K} formulation of the relative entropy-regularized, or randomized, optimal transport problem on a graph with multiple inputs and outputs having fixed marginal probabilities (or margins, providing fixed unit input and output flows), studied in \cite{Guex-2016,Guex-2019}. This last work extended the relative entropy-regularized optimal transport problem (see the recent work \cite{Cuturi2013}, but also \cite{Erlander-1990,Kapur-1992,Wilson-1970}) to a graph structure. Intuitively, it aims to carry goods from some input nodes to output nodes with least expected cost while maintaining a prescribed level of relative entropy of the path probabilities connecting inputs to outputs. In this problem, the input flows (proportion of goods carried from input nodes) and output flows (proportion of goods delivered to output nodes) are constrained to be equal to some predefined values (marginal probabilities), which are not defined in the standard \textbf{randomized shortest paths} (RSP, \cite{Kivimaki-2012,Saerens-2008,Yen-08K}) and \textbf{bag-of-paths} (BoP, \cite{Francoisse-2017,Mantrach-2009}) models. The introduced model will therefore be called the \textbf{margin-constrained bag-of-paths} model throughout the paper, in order to remain consistent with \cite{Guex-2019}.
The introduced algorithm solving this problem provides an optimal randomized policy balancing exploitation and exploration through a simple iterative algorithm inspired by \cite{Courtain-2020}. Similarly to the standard randomized shortest paths and bag-of-paths frameworks\footnote{The main difference between the BoP and the RSP formalism is that, for the BoP, all possible paths in the network are considered \cite{Francoisse-2017,Mantrach-2009}, whereas only source-target paths connecting two nodes of interest are considered in the RSP \cite{Kivimaki-2012,Saerens-2008,Yen-08K}. The RSP therefore avoids the need for defining prior distributions on source and on target nodes because there is only one single source and target.}, the model is controlled by a parameter $\theta$ in such a way that, when $\theta$ goes to infinity, it approximates the optimal, lowest-cost solution to the transportation problem. Conversely, when $\theta$ is close to zero, the solution approaches a random walk behavior (provided a priori by the user) in terms of relative entropy (also called Kullback-Leibler divergence). Thus, when varying $\theta$, the model interpolates between an optimal (exploitation) and a random (exploration) behavior.
The main idea, in comparison with the previous work (mainly \cite{Guex-2016,Guex-2019}), is the following. The relative entropy-regularized optimal transport on a graph problem studied in \cite{Guex-2019} is based on a BoP formulation where the set of all paths between the source and target nodes is considered. On the contrary, in the present work, the problem is rephrased within an RSP formalism (\cite{Saerens-2008}; inspired by \cite{Akamatsu-1996}) only considering the set of paths from one single source supernode to one single target supernode, both added to the original network.
This rephrasing into a source-target RSP problem allows us to easily define capacity constraints on edge flows, as explained in \cite{Courtain-2020}, which in turn allows us to reformulate the margin-constrained BoP problem on a graph as a capacity-constrained RSP problem, which has its own merits, for instance allowing additional flow capacity constraints. As an application, the margin-constrained bag-of-paths surprisal distance measure between nodes \cite{Guex-2019} is derived within the new formalism.
To summarize, the predominant benefit here lies in the fact that this new formulation (in comparison to previous formulations \cite{Guex-2016,Guex-2019} solving the same problem) can easily accommodate flow capacity constraints, which frequently appear in real-world applications \cite{Ahuja-1993}. Indeed, because the problem is transformed into a RSP single source-single target framework, it suffices to follow the procedure introduced in \cite{Courtain-2020} for dealing with both the margin and the capacity constraints. The new algorithm therefore solves the relative entropy-regularized minimum expected cost flow with capacity constraints problem on a graph. It is, however, slower than the dedicated algorithm developed in \cite{Guex-2019} so that it should only be used when capacity constraints are indeed present.
In addition to the introduction of this new algorithm, we also conducted an extensive experimental comparison between the newly introduced model and other state-of-the-art models on graph-based semi-supervised classification problems. This experimental comparison was left as future work in \cite{Guex-2019}, where a structural distance measure between pairs of nodes was defined. The results of this comparison suggest that the distance measure derived from the margin-constrained BoP model is clearly competitive with respect to the other investigated distance measures.
In order to avoid redundancy, the present paper will focus on the derivation of the new model and the experimental comparison. For a comprehensive discussion and related work concerning the margin-constrained BoP problem on a graph, as well as some illustrative examples, see \cite{Guex-2016,Guex-2019}.
\subsection{Main contributions and content}
The main contributions of this work are
\begin{itemize}
\item The introduction of a new algorithm for solving the relative entropy-regularized optimal transport (in terms of minimum expected cost flow) on a graph problem by considering edge flow constraints.
\item An experimental comparison between the distance measure derived from the studied model and other state-of-the-art distance measures between nodes on a graph on semi-supervised classification problems.
\end{itemize}
The next section presents the notation and some preliminaries. Section \ref{Sec_optimal_transport01} develops the new RSP-based algorithm relying on flow constraints. Then, Section \ref{Sec_Experimental_Comparison01} details the experimental comparison and its results. Section \ref{Sec_Conclusion01} presents the conclusions of the work.
\section{Notation, problem statement, and preliminaries}
\label{Sec_notation_problem_statement01}
\subsection{Notation}
\label{Subsec_notation01}
Assume a directed, strongly connected, and weighted graph $G = \{\mathcal{V},\mathcal{E}\}$ containing $(n-2)$ nodes\footnote{See later for the justification; an extended graph with two additional nodes, and thus $n$ nodes in total, will be defined later in Subsection \ref{Subsec_extended_graph01}.}, in which we have to carry goods from a predefined set of input nodes $\mathcal{I}n$ to a set of output nodes $\mathcal{O}ut$. The user specifies the proportions of the non-negative, continuous \textbf{flow} $\sigma_{i}^{\mathrm{in}}$ coming from each input $i \in \mathcal{I}n$ as well as the proportion of flow delivered to each output, $\sigma_{j}^{\mathrm{out}}$, $j \in \mathcal{O}ut$. For all other nodes $i \notin \mathcal{I}n$, we set $\sigma_{i}^{\mathrm{in}} = 0$. Symmetrically, for all nodes $j \notin \mathcal{O}ut$, $\sigma_{j}^{\mathrm{out}} = 0$. In general, we almost always have that input nodes are different from output nodes, $\mathcal{I}n \cap \mathcal{O}ut = \varnothing$, but this assumption is not needed in the model.
Let us further assume
\begin{equation}
\begin{dcases}
\sum_{i \in \mathcal{I}n} \sigma_{i}^{\mathrm{in}} = 1 &\text{with all } \sigma_{i}^{\mathrm{in}} \ge 0 \\
\sum_{j \in \mathcal{O}ut} \sigma_{j}^{\mathrm{out}} = 1 &\text{with all } \sigma_{j}^{\mathrm{out}} \ge 0
\end{dcases}
\label{Eq_flow_constraints01}
\end{equation}
In other words, a unit flow is considered.
If we have to transport a non-unit total flow, we simply find the solution for a unit flow and then multiply the quantities by the total flow. The $n \times 1$ column vector containing the input flow of each node is denoted by $\boldsymbol{\sigma}_{\mathrm{in}}$ whereas the vector containing the output flows is $\boldsymbol{\sigma}_{\mathrm{out}}$.
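As a toy illustration of the constraints (\ref{Eq_flow_constraints01}) and of the rescaling remark (the node choices and flow proportions below are ours, purely for illustration):

```python
# Toy example (ours): unit input/output flow vectors on an 8-node graph,
# with inputs In = {node 1, node 2} and outputs Out = {node 7, node 8}.
import numpy as np

n = 8
sigma_in = np.zeros(n)
sigma_out = np.zeros(n)
sigma_in[[0, 1]] = [0.4, 0.6]    # proportions of flow injected at the input nodes
sigma_out[[6, 7]] = [0.5, 0.5]   # proportions of flow delivered at the output nodes

# Both margins sum to one (unit flow) and are non-negative.
assert np.isclose(sigma_in.sum(), 1.0) and np.all(sigma_in >= 0)
assert np.isclose(sigma_out.sum(), 1.0) and np.all(sigma_out >= 0)

# For a non-unit total flow F, solve with unit margins and rescale afterwards.
F = 3.5
total_in = F * sigma_in
assert np.isclose(total_in.sum(), F)
```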
Moreover, it is assumed that a non-negative \textbf{local cost} $c_{ij}$ is associated with each edge $(i,j)$, reflecting the penalty of following this edge (it can be a distance, a cost, a travel time, etc.). The \textbf{cost matrix} $\mathbf{C}$ contains the individual costs $c_{ij}$ as elements. When there is no edge from node $i$ to node $j$, we consider the cost $c_{ij}$ to be infinite. The graph $G$ is also associated with an \textbf{adjacency matrix} $\mathbf{A}$ containing local affinities, or weights, $\{ a_{ij} \}$ between nodes. When there is no direct link between two nodes $i,j$, $a_{ij} = 0$. Matrices $\mathbf{C}$ and $\mathbf{A}$ are given and depend on the problem at hand. The \textbf{transition probability matrix} associated with the natural random walk on this graph $G$ is $\mathbf{P}$, with elements
\begin{equation}
[\mathbf{P}]_{ij} = p_{ij} = \dfrac{a_{ij}}{\displaystyle\sum_{j'\in \mathcal{S}ucc(i)} a_{ij'}}
= \dfrac{a_{ij}}{\displaystyle\sum_{j'=1}^{n} a_{ij'}}
\label{Eq_transition_probabilities_original_graph01}
\end{equation}
where $\mathcal{S}ucc(i)$ is the set of successor nodes of node $i$. The third equality is valid because the elements on the $i$th row of the adjacency matrix are equal to $0$ for the missing links $j \in \mathcal{E} \setminus \mathcal{S}ucc(i)$. Finally, we assume that the Markov chain associated with the random walk on $G$ is \textbf{regular}, that is, strongly connected and aperiodic\footnote{This property is needed in Appendix \ref{Subsec_appendix_A2} for computing the transition matrix of the natural random walk on the extended graph, but this assumption could probably be relaxed.}.
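As an illustration of Equation (\ref{Eq_transition_probabilities_original_graph01}), the transition matrix of the natural random walk is simply the row-normalized adjacency matrix; a minimal sketch (the example adjacency matrix is ours):

```python
# Sketch (ours): transition matrix P of the natural random walk obtained by
# row-normalizing the adjacency matrix A, p_ij = a_ij / sum_j' a_ij'.
import numpy as np

# Small directed, strongly connected example with non-negative weights a_ij.
A = np.array([[0., 1., 1., 0.],
              [0., 0., 1., 1.],
              [1., 0., 0., 1.],
              [1., 1., 0., 0.]])

P = A / A.sum(axis=1, keepdims=True)

assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability distribution
assert np.all(P[A == 0] == 0)           # no transition where there is no edge
```

Note that every row of $\mathbf{A}$ must contain at least one nonzero entry, which is guaranteed here by strong connectivity.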
\subsection{Problem statement}
\label{Subsec_problem_statement01}
The problem is then to find the best policy, which takes the form of path probabilities, for carrying the input flow from the source nodes to the target nodes, minimizing the expected cost along the paths connecting sources to targets, while
\begin{enumerate}
\item keeping a given level of exploration quantified by the Kullback-Leibler divergence between path probabilities and complete random paths provided by the natural random walk (\ref{Eq_transition_probabilities_original_graph01}), and
\item satisfying the flow constraints stating that input flows and output flows are fixed to $\sigma_{i}^{\mathrm{in}}$ for each node $i \in \mathcal{I}n$ and $\sigma_{j}^{\mathrm{out}}$ for each node $j \in \mathcal{O}ut$.
\end{enumerate}
\noindent
This problem will be solved in Section \ref{Sec_optimal_transport01}; before that, let us introduce an extended graph $G_{\mathrm{ext}}$ obtained by adding two nodes to $G$.
\definecolor{myRed}{RGB}{139,0,0}
\definecolor{myGreen}{RGB}{0,100,0}
\begin{figure}[t]
\centering
\subfigure{\begin{tikzpicture}[shorten >=1pt,auto, ultra thick,
node_style/.style={circle,ball color = white,font=\sffamily\bfseries,minimum size=0.8cm,draw=black},
node_style_in/.style={circle,ball color = green,font=\sffamily\bfseries,minimum size=0.8cm,draw=black},
node_style_out/.style={circle,ball color = red,font=\sffamily\bfseries,minimum size=0.8cm,draw=black},
scale=0.85,transform shape]
\node[node_style_in] (1) at (-2,1) {2};
\node[node_style_in] (2) at (-2,-1) {3};
\node[node_style] (3) at (0,2) {4};
\node[node_style] (4) at (0,0) {5};
\node[node_style] (5) at (0,-2) {6};
\node[node_style_out] (6) at (2,1) {7};
\node[node_style_out] (7) at (2,-1) {8};
\draw[draw=black,->,>=stealth'] (1) edge node{} (2);
\draw[draw=black,->,>=stealth'] (1) edge node{} (3);
\draw[draw=black,->,>=stealth'] (1) edge node{} (4);
\draw[draw=black,->,>=stealth'] (2) edge node{} (4);
\draw[draw=black,->,>=stealth'] (2) edge node{} (5);
\draw[draw=black,->,>=stealth'] (4) edge node{} (3);
\draw[draw=black,->,>=stealth'] (3) edge node{} (6);
\draw[draw=black,->,>=stealth'] (4) edge node{} (5);
\draw[draw=black,->,>=stealth'] (4) edge node{} (6);
\draw[draw=black,->,>=stealth'] (4) edge node{} (7);
\draw[draw=black,->,>=stealth'] (5) edge node{} (7);
\draw[draw=black,->,>=stealth'] (7) edge node{} (6);
\end{tikzpicture}}
\subfigure{\begin{tikzpicture}[shorten >=1pt,auto, ultra thick,
node_style/.style={circle,ball color = white,font=\sffamily\bfseries,minimum size=0.8cm,draw=black},
node_style_in/.style={circle,ball color = green,font=\sffamily\bfseries,minimum size=0.8cm,draw=black},
node_style_out/.style={circle,ball color = red,font=\sffamily\bfseries,minimum size=0.8cm,draw=black},
node_style_superout/.style={regular polygon,ball color = myRed,font=\sffamily\bfseries,minimum size=0.8cm,draw=black},
node_style_superin/.style={regular polygon,ball color = myGreen,font=\sffamily\bfseries,minimum size=0.8cm,draw=black},scale=0.85,transform shape]
\node[node_style_superin] (0) at (-4,0) {1};
\node[node_style_in] (1) at (-2,1) {2};
\node[node_style_in] (2) at (-2,-1) {3};
\node[node_style] (3) at (0,2) {4};
\node[node_style] (4) at (0,0) {5};
\node[node_style] (5) at (0,-2) {6};
\node[node_style_out] (6) at (2,1) {7};
\node[node_style_out] (7) at (2,-1) {8};
\node[node_style_superout] (8) at (4,0) {9};
\draw[draw=black,->,>=stealth'] (0) edge node{} (1);
\draw[draw=black,->,>=stealth'] (0) edge node{} (2);
\draw[draw=black,->,>=stealth'] (1) edge node{} (2);
\draw[draw=black,->,>=stealth'] (1) edge node{} (3);
\draw[draw=black,->,>=stealth'] (1) edge node{} (4);
\draw[draw=black,->,>=stealth'] (2) edge node{} (4);
\draw[draw=black,->,>=stealth'] (2) edge node{} (5);
\draw[draw=black,->,>=stealth'] (4) edge node{} (3);
\draw[draw=black,->,>=stealth'] (3) edge node{} (6);
\draw[draw=black,->,>=stealth'] (4) edge node{} (5);
\draw[draw=black,->,>=stealth'] (4) edge node{} (6);
\draw[draw=black,->,>=stealth'] (4) edge node{} (7);
\draw[draw=black,->,>=stealth'] (5) edge node{} (7);
\draw[draw=black,->,>=stealth'] (7) edge node{} (6);
\draw[draw=black,->,>=stealth'] (6) edge node{} (8);
\draw[draw=black,->,>=stealth'] (7) edge node{} (8);
\end{tikzpicture}}
\caption{On the left, a small directed graph $G$ with two input nodes $\mathcal{I}n = \{ 2,3 \}$ (in green) and two output nodes $\mathcal{O}ut = \{ 7,8\}$ (in red). On the right, the extended graph $G_{\mathrm{ext}} $ of this small directed graph $G$ with one source supernode 1 (in dark green) connected to all the input nodes and one target supernode $n=9$ (in dark red) connected to all the output nodes. Therefore, on this extended graph, the source supernode is node $1$ and the target supernode is node $n$ (the total number of nodes in $G_{\mathrm{ext}} $).}
\label{fig:ToyExampleGraphExtended}
\end{figure}
\subsection{Definition of an extended, single-source and target, graph}
\label{Subsec_extended_graph01}
For convenience, we now transform the original graph $G$ into a new, equivalent, single-source single-target graph $G_{\mathrm{ext}} = \{\mathcal{V}_{\mathrm{ext}},\mathcal{E}_{\mathrm{ext}}\}$ with $n$ nodes in a standard way \cite{Ahuja-1993,Gondran-1984}. Two nodes are added to the graph: one \textbf{source node} (a supernode indexed as node $1$) and one \textbf{target node} (a supernode indexed as node $n$, where $n$ is the total number of nodes in this new, extended, graph). All the other nodes in $\mathcal{V}$ (nodes $2$ to $(n-1)$) and their connections remain the same as in $G$ ($G$ is a subgraph of $G_{\mathrm{ext}}$); therefore, $\mathcal{V}_{\mathrm{ext}} = \mathcal{V} \cup \{ 1, n \}$. All nodes with label $i \in \mathcal{V}$ keep the same index numbering in $\mathcal{V}_{\mathrm{ext}}$. Moreover, the new source node (node $1$) is connected only to the input nodes, by directed links with zero cost, and the output nodes are connected only to the new target node (node $n$), also by directed links with zero cost (no penalty when following these links). In order to be equivalent to the original graph $G$, the source node generates a unit flow while the target node is made killing and absorbing (a cemetery or sink node), and thus absorbs this unit flow. This new graph will be called the \textbf{extended graph}. A toy example of this concept is presented in Figure \ref{fig:ToyExampleGraphExtended}.
\subsubsection{The adjacency matrix of the extended graph}
We might now ask ourselves which weights should be associated with the edges incident to node $1$ and node $n$. A natural requirement would be that the weights of the edges incident to node $1$ are proportional to the input flows, in such a way that the corresponding transition probabilities are exactly equal to these input flows $\boldsymbol{\sigma}_{\mathrm{in}}$, as required.
However, for the edges incident to node $n$, this is slightly more difficult. Let us denote the weights (affinities) of the edges incident to node $n$ as $\mathbf{w}$; these weights can either be provided explicitly by the user or be calculated so that the output distribution is exactly $\boldsymbol{\sigma}_{\mathrm{out}}$, as developed below.
The adjacency matrix of the extended graph, $\mathbf{A}_{\mathrm{ext}}$, can be written as
\begin{equation}
\mathbf{A}_{\mathrm{ext}} = \kbordermatrix{
& 1 & \{ 2, \dots, (n-1) \} = \mathcal{V} & n \cr
1 & 0 & \boldsymbol{\sigma}_{\mathrm{in}}^{\text{T}} & 0 \cr
\{ 2, \dots, (n-1) \} = \mathcal{V} & \mathbf{0} & \mathbf{A} & \mathbf{w} \cr
n & 0 & \phantom{{}^{\text{T}}} \mathbf{0}^{\text{T}} & 0 \cr
} \label{Eq_extendedAdjacencyMatrix01}
\end{equation}
where $\mathbf{0}$ is the column vector full of 0s. Similarly, the cost matrix becomes
\begin{equation}
\mathbf{C}_{\mathrm{ext}} = \kbordermatrix{
& 1 & \{ 2, \dots, (n-1) \} = \mathcal{V} & n \cr
1 & 0 & \phantom{{}^{\text{T}}} \mathbf{0}^{\text{T}} & 0 \cr
\{ 2, \dots, (n-1) \} = \mathcal{V} & \mathbf{0} & \mathbf{C} & \mathbf{0} \cr
n & 0 & \phantom{{}^{\text{T}}} \mathbf{0}^{\text{T}} & 0 \cr
} \label{Eq_extendedCostMatrix01}
\end{equation}
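The assembly of $\mathbf{A}_{\mathrm{ext}}$ and $\mathbf{C}_{\mathrm{ext}}$ is purely mechanical. As an illustration, the following NumPy sketch (function names and 0-based indexing are ours, not part of the paper) builds both matrices from $\mathbf{A}$, $\mathbf{C}$, $\boldsymbol{\sigma}_{\mathrm{in}}$ and user-provided weights $\mathbf{w}$, and row-normalizes $\mathbf{A}_{\mathrm{ext}}$ to obtain the natural random walk transition matrix:

```python
import numpy as np

def extend_graph(A, C, sigma_in, w):
    """Build A_ext and C_ext from the original graph (0-based indexing:
    row/column 0 is the source supernode, row/column -1 the target one).
    sigma_in and w are vectors over the original nodes, equal to zero
    outside the input/output node sets."""
    n = A.shape[0] + 2
    A_ext, C_ext = np.zeros((n, n)), np.zeros((n, n))
    A_ext[0, 1:-1] = sigma_in      # source supernode -> input nodes
    A_ext[1:-1, 1:-1] = A          # the original graph is kept as a subgraph
    A_ext[1:-1, -1] = w            # output nodes -> target supernode
    C_ext[1:-1, 1:-1] = C          # zero cost on all supernode edges
    return A_ext, C_ext

def natural_transitions(A_ext):
    """Row-normalize the adjacency matrix; absorbing rows stay zero."""
    s = A_ext.sum(axis=1)
    return A_ext / np.where(s > 0, s, 1.0)[:, None]
```

Here the weights $\mathbf{w}$ are assumed to be given by the user; the consistent choice discussed below can be substituted afterwards.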
If the weights $\mathbf{w}$ are set by the user, the transition matrix $\mathbf{P}_{\!\!\mathrm{ext}} = (p_{ij}^{\mathrm{ext}})$ of the natural random walk on this extended graph $G_{\mathrm{ext}}$ can easily be computed by Equation (\ref{Eq_transition_probabilities_original_graph01}) from $\mat{A}_{\mathrm{ext}}$ instead of $\mat{A}$. In that case, our model will compute a policy (transition probabilities followed for carrying the goods) interpolating between the optimal expected lowest-cost policy and the one closest (in terms of Kullback-Leibler divergence) to the natural random walk. But it will in general, when $\theta \to 0$, \emph{not be exactly equal} to this natural random walk transition matrix because of the flow constraints in Equation (\ref{Eq_flow_constraints01}) which are \emph{not satisfied in general} for edges entering node $n$ in the extended graph $G_{\mathrm{ext}}$.
It is, however, possible to find sets of values of the weights $\mat{w}$ such that the net flow in each of these edges is exactly equal to $\sigma_{j}^{\mathrm{out}}$ when considering a natural random walk on the extended graph \cite{Guex-2016}. Indeed, in Appendix \ref{Sec_computing_transition_matrix_consistent01}, the transition matrix on the extended graph leading to flows satisfying exactly the constraints in Equation (\ref{Eq_flow_constraints01}) for $j \in \mathcal{O}ut$ is computed in closed form by stating a simple consistency argument (inspired by \cite{Guex-2016}),
\begin{equation}
\mathbf{P}_{\!\!\mathrm{ext}} = \kbordermatrix{
& 1 & \{ 2, \dots, (n-1) \} = \mathcal{V} & n \cr
1 & 0 & \boldsymbol{\sigma}_{\mathrm{in}}^{\text{T}} & 0 \cr
\{ 2, \dots, (n-1) \} = \mathcal{V} & \mathbf{0} & (\mathbf{I} - \mathbf{Diag}(\boldsymbol{\alpha})) \mathbf{P} & \boldsymbol{\alpha} \cr
n & 0 & \phantom{{}^{\text{T}}} \mathbf{0}^{\text{T}} & 0 \cr
} \label{Eq_extendedTransitionMatrix01}
\end{equation}
where the quantity $\boldsymbol{\alpha}$ is defined in Equation (\ref{Eq_computing_alpha_matrix_form01}).
Interestingly, for nodes not belonging to $\mathcal{I}n$, these transition probabilities remain exactly the same as before, as long as the random walker is not absorbed. For the source node 1, its transition probabilities pointing to nodes in $\mathcal{I}n$ are set to $\boldsymbol{\sigma}_{\mathrm{in}}$ in order to satisfy the first constraint in Equation (\ref{Eq_flow_constraints01}).
If we do not have a good reason for choosing the weights $\mathbf{w}$, it seems reasonable to compute the consistent transition probabilities because it removes the arbitrariness associated with the choice of these weights.
\section{Optimal transport on a graph from a constrained randomized shortest paths framework}
\label{Sec_optimal_transport01}
Our formulation of the problem is based on the RSP framework defining dissimilarity measures interpolating between the shortest-path distance and the commute-time distance \cite{Kivimaki-2012,Yen-08K,Saerens-2008}. This formalism is based on full paths instead of standard ``local" flows \cite{Ahuja-1993}, and was initially inspired by a model developed in transportation science \cite{Akamatsu-1996}.
We start by providing a brief description (closely following \cite{Courtain-2020,Leleux-2021}) of the RSP formalism before defining the problem and then deriving the algorithm for solving the constraint-based multi-input multi-output transport problem on the graph $G_{\mathrm{ext}}$.
\subsection{The standard randomized shortest paths formalism}
\label{Sec_randomized_shortest_paths01}
Let us start with a short reminder about the RSP model. For the sake of simplicity, in this section, all quantities are discussed in the context of a graph $G$ with a single input node $1$ and a single target node $n$ in order to avoid more cumbersome notations.
The main idea behind the standard RSP is the following. We consider the set of all paths, or walks, $\wp \in \mathcal{P}_{1n}$ from node $1$ to absorbing node $n$ on $G$. Each path $\wp$ consists of a sequence of connected nodes starting in node $1$ and ending in $n$. Then, we assign a probability distribution $\text{P}(\cdot)$ on the set of paths $\mathcal{P}_{1n}$ by minimizing the free energy of statistical physics \cite{Jaynes-1957,Peliti-2011,Reichl-1998},
\begin{equation}
\vline\,\begin{array}{llll}
\mathop{\mathrm{minimize}}\limits_{\{ \text{P}(\wp) \}_{\wp \in \mathcal{P}_{1n}}} & \phi(\text{P}) = \displaystyle\sum_{\wp \in \mathcal{P}_{1n}} \text{P}(\wp) \tilde{c}(\wp) + T \displaystyle\sum_{\wp \in \mathcal{P}_{1n}} \text{P}(\wp) \log \left( \frac{\text{P}(\wp)}{\tilde{\pi}(\wp)} \right) \\[0.5cm]
\mathop{\mathrm{subject\,to}} & \sum_{\wp\in\mathcal{P}_{1n}}\textnormal{P}(\wp) = 1
\end{array}
\label{Eq_optimization_problem_BoP01}
\end{equation}
where $\tilde{c}(\wp) = \sum_{\tau = 1}^{\ell} c_{\wp(\tau-1) \wp(\tau)}$ is the total cumulated cost\footnote{The basic quantities that are defined on whole paths $\wp$ will be denoted with a tilde in order to distinguish them from the same local quantities defined on edges.} along path $\wp$ when visiting the sequence of nodes (path) $\wp = \left( \wp(\tau) \right)_{\tau=0}^{\ell(\wp)}$ in the sequential order $\tau = 0,1,2,\dots,\ell(\wp)$, where $\ell(\wp)$ (or simply $\ell$) is the length of path $\wp$. Here, $\wp(\tau)$ is the node appearing at position $\tau$ on path $\wp$. Furthermore, $\tilde{\pi}(\wp) = \prod_{\tau = 1}^{\ell} p_{\wp(\tau-1) \wp(\tau)}$ is the product of the natural random walk transition probabilities (see Equation (\ref{Eq_transition_probabilities_original_graph01})) along path $\wp$, called the path likelihood.
The objective function (\ref{Eq_optimization_problem_BoP01}) is a mixture of two dissimilarity terms with the temperature $T$ balancing the trade-off between them.
The first term is the expected cost for reaching the target node from the source node (favoring shorter paths). The second term corresponds to the relative entropy, or Kullback-Leibler divergence, between the path probability distribution and the path likelihood distribution (introducing randomness). When the temperature $T$ is low, shorter paths are favored while when $T$ is large, paths are chosen according to their likelihood in the random walk on the graph $G$. Note that we should normally add non-negativity constraints but this is not necessary as the resulting probabilities will automatically be non-negative.
This argument, akin to maximum entropy \cite{Jaynes-1957}, leads to a \textbf{Gibbs-Boltzmann distribution} on the set of paths (see, e.g., \cite{Francoisse-2017} for a detailed derivation),
\begin{equation}
\text{P}^{*}(\wp)
= \frac{\tilde{\pi}(\wp) \exp[-\theta \tilde{c}(\wp)]}{\displaystyle\sum_{\wp'\in\mathcal{P}_{1n}} \tilde{\pi} (\wp')\exp[-\theta \tilde{c}(\wp')]}
= \frac{\tilde{\pi}(\wp) \exp[-\theta \tilde{c}(\wp)]}{\mathcal{Z}}
\label{Eq_Boltzmann_probability_distribution01}
\end{equation}
where $\theta = 1/T$ is the inverse temperature and the denominator $\mathcal{Z} = \sum_{\wp\in\mathcal{P}_{1n}} \tilde{\pi} (\wp)\exp[-\theta \tilde{c}(\wp)]$ is the \textbf{partition function} of the system of paths.
It can be shown that this set of path probabilities (the randomized policy in terms of paths) is exactly generated by, and thus equivalent to, a Markov chain with biased transition probabilities (the randomized policy in terms of local transitions) favoring shorter paths, depending on the temperature $T$ (see Equation (\ref{Eq_biased_transition_probabilities01}) in the Appendix \ref{Appendix:RSP}, or \cite{Saerens-2008}).
Interestingly, if we replace the probability distribution $\text{P}(\cdot)$ by the optimal distribution $\text{P}^{*}(\cdot)$ provided by Equation (\ref{Eq_Boltzmann_probability_distribution01}) in the objective function (\ref{Eq_optimization_problem_BoP01}), we obtain
\begin{align}
\phi_{1n} \triangleq \phi(\text{P}^{*}) &= \displaystyle\sum_{\wp \in \mathcal{P}_{1n}} \text{P}^{*}(\wp) \tilde{c}(\wp) + T \displaystyle\sum_{\wp \in \mathcal{P}_{1n}} \text{P}^{*}(\wp) \log \left( \frac{\text{P}^{*}(\wp)}{\tilde{\pi}(\wp)} \right) \nonumber \\
&= \displaystyle\sum_{\wp \in \mathcal{P}_{1n}} \text{P}^{*}(\wp) \tilde{c}(\wp) + T \displaystyle\sum_{\wp \in \mathcal{P}_{1n}} \text{P}^{*}(\wp) \left( -\tfrac{1}{T} \tilde{c}(\wp) - \log \mathcal{Z} \right) \nonumber \\
&= -T \log \mathcal{Z}
\label{Eq_optimal_free_energy01}
\end{align}
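The closed form $\phi(\text{P}^{*}) = -T \log \mathcal{Z}$ is easy to check numerically by brute-force path enumeration on a small graph. The following sketch (the toy transition probabilities and costs are our own illustrative values) computes the Gibbs-Boltzmann distribution of Equation (\ref{Eq_Boltzmann_probability_distribution01}) and verifies the identity:

```python
import numpy as np

# Toy DAG (illustrative values): 0 -> {1, 2} -> 3, with node 3 absorbing.
P = {0: [(1, 0.5), (2, 0.5)], 1: [(3, 1.0)], 2: [(3, 1.0)], 3: []}
cost = {(0, 1): 1.0, (0, 2): 3.0, (1, 3): 1.0, (2, 3): 1.0}

def paths(node, target):
    """Enumerate all paths from node to target by depth-first search."""
    if node == target:
        yield [node]
        return
    for nxt, _ in P[node]:
        for tail in paths(nxt, target):
            yield [node] + tail

theta = 2.0
T = 1.0 / theta
likelihood, total_cost = [], []
for p in paths(0, 3):
    steps = list(zip(p, p[1:]))
    likelihood.append(np.prod([dict(P[a])[b] for a, b in steps]))  # path likelihood
    total_cost.append(sum(cost[s] for s in steps))                 # cumulated cost
likelihood, total_cost = np.array(likelihood), np.array(total_cost)

Z = np.sum(likelihood * np.exp(-theta * total_cost))   # partition function
prob = likelihood * np.exp(-theta * total_cost) / Z    # Gibbs-Boltzmann distribution
free_energy = prob @ total_cost + T * prob @ np.log(prob / likelihood)
```

At the optimum, `free_energy` coincides with `-T * np.log(Z)` up to floating-point error, and the cheaper path receives the larger probability.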
Furthermore, the Appendix \ref{Appendix:RSP} provides a brief summary of the most important quantities that can be derived from the standard RSP model.
\subsection{Statement of the problem}
\label{Subec_problem_statement01}
The objective now is to compute the randomized shortest paths solution on $G_{\mathrm{ext}}$ satisfying the source/target flow constraints\footnote{Required input flow $\sigma_{i}^{\mathrm{in}}$ is set to $0$ for nodes $i$ not in $\mathcal{I}n$ and output flow $\sigma_{j}^{\mathrm{out}}$ is set to $0$ for nodes $j$ not in $\mathcal{O}ut$.} stated in Section \ref{Sec_notation_problem_statement01}, and recalled here for convenience,
\begin{equation}
\begin{cases}
\bar{n}_{1i} = \sigma_{i}^{\mathrm{in}} &\text{for each node } i \in \mathcal{I}n \\
\bar{n}_{jn} = \sigma_{j}^{\mathrm{out}} &\text{for each node } j \in \mathcal{O}ut
\end{cases}
\label{Eq_equality_constraints01}
\end{equation}
where $\bar{n}_{ij}$ is the flow (expected number of passages) in edge $(i,j)$ (see Equation (\ref{Eq_computation_edge_flows01})).
As already mentioned in the introduction, the margin-constrained BoP problem on a graph has recently been studied in \cite{Guex-2016} and \cite{Lebichot-2018}. The former paper is based on an entropy regularization at the flow level while the latter adopts a bag-of-paths approach\footnote{Observe the difference with the standard RSP formulation of Equation (\ref{Eq_optimization_problem_BoP01}).}
\begin{equation}
\vline \begin{array}{lll@{}lll}
\underset{\{\mathrm{P}(\wp) \}_{\wp \in \mathcal{P}}} {\text{minimize}} & \phi(\mathrm{P}) = \displaystyle\sum\limits_{\wp \in \mathcal{P}} \mathrm{P}(\wp) \tilde{c}(\wp) + T \sum_{\wp \in \mathcal{P}} \mathrm{P}(\wp) \log \left( \frac{\mathrm{P}(\wp)}{\tilde{\pi}(\wp)} \right) \\
\text{subject to} & \sum_{j \in \mathcal{V}} \sum_{\wp_{ij} \in \mathcal{P}_{ij}} \mathrm{P}(\wp_{ij}) = \sigma^\mathrm{in}_i &\forall i \in \mathcal{I}n \\
& \sum_{i \in \mathcal{V}} \sum_{\wp_{ij} \in \mathcal{P}_{ij}} \mathrm{P}(\wp_{ij}) = \sigma^\mathrm{out}_j &\forall j \in \mathcal{O}ut \\
\end{array}
\label{Eq_previous_formulation01}
\end{equation}
where the set of considered paths is $\mathcal{P} = \cup_{i \in \mathcal{I}n} \cup_{j \in \mathcal{O}ut} \mathcal{P}_{ij}$.
It was shown that the solution can be obtained by iterative proportional fitting (also called matrix balancing or biproportional scaling), as for the standard, relaxed, optimal transport problem with entropy regularization (see for instance \cite{Cuturi2013,Erlander-1990,Kapur-1992,Wilson-1970}).
In the present paper, we thus adopt a different point of view (in comparison with \cite{Guex-2019}) by designing a new algorithm based on the constraints imposed on flows in the graph $G_{\mathrm{ext}}$ (see Equation (\ref{Eq_equality_constraints01})), inspired by \cite{Courtain-2020}. It is important to note that this formulation of the problem can readily integrate additional capacity constraints as well; see \cite{Courtain-2020} for details. This is what makes the present formulation interesting for practical problems: it extends the scope of the optimal transport on a graph procedure introduced in \cite{Guex-2016,Guex-2019}, which solves (\ref{Eq_previous_formulation01}), to problems with capacity constraints.
\subsection{The margin-constrained bag-of-paths algorithm}
\label{Subec_optimal_transport01}
The algorithm solving the relative entropy-regularized optimal transport on a graph problem is derived from results obtained in \cite{Courtain-2020} by exploiting its Lagrange formulation and Lagrangian duality. It provides the optimal \textbf{randomized policy} taking the form of the transition matrix of a \textbf{biased random walk} on $G_{\mathrm{ext}}$, as in the case of the standard RSP problem (see Equation (\ref{Eq_biased_transition_probabilities01})). It corresponds to a Markov chain on the extended graph biasing the random walk towards the output nodes while satisfying the input and output flow constraints of Equation (\ref{Eq_equality_constraints01}).
Note that, for convenience, most of the more technical results are reported in the Appendix \ref{Sec_derivation_of_the_algorithm01}.
\subsubsection{The Lagrange function}
\label{Subsec_Lagrange_function_edge_constraints01}
The equality constraints (\ref{Eq_equality_constraints01}) can be expressed in the following Lagrange function defined on the extended graph $G_{\mathrm{ext}}$ with $\mathcal{P}_{1n}$ being the set of all possible paths from source node $1$ to target node $n$,
\begin{align}
\mathscr{L}(\text{P},\boldsymbol{\lambda})
&= \underbracket[0.5pt][3pt]{ \displaystyle\sum_{\wp \in \mathcal{P}_{1n}} \text{P}(\wp) \tilde{c}(\wp) + T \displaystyle\sum_{\wp \in \mathcal{P}_{1n}} \text{P}(\wp) \log \left( \frac{\text{P}(\wp)}{\tilde{\pi}(\wp)} \right) }_{\text{free energy, }\phi(\text{P})}
+ \mu \bigg( \displaystyle\sum_{\wp \in \mathcal{P}_{1n}} \text{P}(\wp) - 1 \bigg) \nonumber \\
&\quad + \displaystyle\sum_{i \in \mathcal{I}n} \lambda_{i}^{\mathrm{in}} \big( \bar{n}_{1i} - \sigma_{i}^{\mathrm{in}} \big)
+ \displaystyle\sum_{j \in \mathcal{O}ut} \lambda_{j}^{\mathrm{out}} \big( \bar{n}_{jn} - \sigma_{j}^{\mathrm{out}} \big)
\label{Eq_Lagrange_edge_flow_constraints01}
\end{align}
where vector $\boldsymbol{\lambda}$ contains the Lagrange parameters $\{ \lambda_{i}^{\mathrm{in}} \}$ and $\{ \lambda_{j}^{\mathrm{out}} \}$.
As can be seen, there is a Lagrange parameter associated with each input node ($\mathcal{I}n$) and each output node ($\mathcal{O}ut$).
Note that because we are working on the extended graph, $\tilde{\pi}(\wp)$ is the product of the $p_{ij}^{\mathrm{ext}}$ (defined in Equation (\ref{Eq_extendedTransitionMatrix01})) along path $\wp$.
\subsubsection{Exploiting Lagrangian duality}
\label{Subsec_Lagrangian_duality_edge_constraints01}
Following the same reasoning as in \cite{Courtain-2020}, we will exploit the fact that, in this formulation of the problem, the Lagrange dual function and its gradient are easy to compute\footnote{This is actually a standard result related to maximum entropy problems (see for instance \cite{Jabara-2004}).}.
Moreover, because the objective function is convex and all the equality constraints are linear, there is only one global minimum and the duality gap is zero \cite{Bertsekas-1999,Culioli-2012,Griva-2008}. We therefore use a common optimization procedure, the Arrow-Hurwicz-Uzawa algorithm \cite{Arrow-1958}, which sequentially solves the primal and increases the dual (which is concave) until convergence.
In our context, this provides the two following steps, which are iterated until convergence,
\begin{equation}
\begin{cases}
\mathscr{L}(\text{P}^{*},\boldsymbol{\lambda}) = \min\limits_{\{ \text{P}(\wp) \}_{\wp \in \mathcal{P}_{1n}}} \mathscr{L}(\text{P},\boldsymbol{\lambda}) \text{\footnotesize{, subject to} } \sum_{\wp \in \mathcal{P}_{1n}} \text{P}(\wp) = 1 & \text{\footnotesize{(compute the dual function)}} \\
\mathscr{L}(\text{P}^{*},\boldsymbol{\lambda}^{*}) = \max\limits_{\boldsymbol{\lambda}} \mathscr{L}(\text{P}^{*},\boldsymbol{\lambda}) & \text{\footnotesize{(maximize the dual function)}}
\end{cases}
\label{Eq_primal_dual_lagrangian01}
\end{equation}
where we set $\mathscr{L}(\text{P},\boldsymbol{\lambda}) = \mathscr{L}(\text{P}^{*},\boldsymbol{\lambda}^{*})$ at the end of each iteration.
The dual function is computed analytically and then maximized through a block coordinate ascent in terms of the Lagrange parameters $\boldsymbol{\lambda}$.
It is shown in the Appendix \ref{Sec_derivation_of_the_algorithm01} that the dual function is
\begin{equation}
\mathscr{L}(\text{P}^{*},\boldsymbol{\lambda})
= -T \log \mathcal{Z}' - \displaystyle\sum_{i \in \mathcal{I}n} \lambda_{i}^{\mathrm{in}} \sigma_{i}^{\mathrm{in}}
- \displaystyle\sum_{j \in \mathcal{O}ut} \lambda_{j}^{\mathrm{out}} \sigma_{j}^{\mathrm{out}}
\label{Eq_dual_lagrangian01}
\end{equation}
where $\mathcal{Z}' = \sum_{\wp\in\mathcal{P}_{1n}} \tilde{\pi} (\wp)\exp[-\theta \tilde{c}'(\wp)]$ is the partition function (Equation (\ref{Eq_partition_function_definition01})) computed from the so-called \textbf{augmented costs} $c'_{ij}$ on $G_{\mathrm{ext}}$, which depend on the Lagrange multipliers,
\begin{equation}
c'_{ij} =
\begin{cases}
c_{ij}^{\mathrm{ext}} + \lambda_{j}^{\mathrm{in}} \\
c_{ij}^{\mathrm{ext}} + \lambda_{i}^{\mathrm{out}} \\
c_{ij}^{\mathrm{ext}}
\end{cases}
= \begin{cases}
\lambda_{j}^{\mathrm{in}} & \text{when } i = 1 \text{ and } j \in \mathcal{I}n \\
\lambda_{i}^{\mathrm{out}} & \text{when } i \in \mathcal{O}ut \text{ and } j = n \\
c_{ij}^{\mathrm{ext}} & \text{otherwise}
\end{cases}
\label{Eq_redefined_costs01}
\end{equation}
Moreover, as further shown in the Appendix \ref{Sec_derivation_of_the_algorithm01}, the maximization of the dual function provides the following Lagrange parameters updates at each iteration,
\begin{equation}
\begin{cases}
\lambda_{k}^{\mathrm{in}} = \tfrac{1}{\theta} \bigg( \log z'_{kn} - \displaystyle\sum_{l \in \mathcal{I}n} \sigma_{l}^{\mathrm{in}} \log z'_{ln} \bigg) \text{ for } k \in \mathcal{I}n \\
\lambda_{l}^{\mathrm{out}} = \tfrac{1}{\theta} \Bigg( \log z'_{1l} - \log \bigg( \dfrac{\sigma_{l}^{\mathrm{out}}} { p_{ln}^{\mathrm{ext}} } \bigg) - \displaystyle\sum_{k \in \mathcal{O}ut} \sigma_{k}^{\mathrm{out}} \bigg[ \log z'_{1k} - \log \bigg( \dfrac{\sigma_{k}^{\mathrm{out}}} { p_{kn}^{\mathrm{ext}} } \bigg) \bigg] \Bigg) \text{ for } l \in \mathcal{O}ut
\end{cases}
\label{Eq_lagrange_parameters_updates01}
\end{equation}
where the $z'_{kl}$ (element $k$, $l$ of the fundamental matrix) are computed thanks to Equation (\ref{Eq_fundamentalMatrix01}) in terms of the augmented costs and the natural random walk transition probabilities ($p_{kl}^{\mathrm{ext}}$) on the extended graph $G_{\mathrm{ext}}$.
\subsubsection{The resulting algorithm}
The resulting algorithm is presented in Algorithm \ref{Alg_optimal_transport01}.
The different steps of the procedure are the following:
\begin{itemize}
\item Compute the extended graph $G_{\mathrm{ext}}$ (its edge costs and transition probabilities matrices) from the original graph $G$ as described in Section \ref{Subsec_extended_graph01} and Equations (\ref{Eq_extendedAdjacencyMatrix01}), (\ref{Eq_extendedCostMatrix01}) and (\ref{Eq_extendedTransitionMatrix01}). We now work on this extended graph.
\item Initialize the Lagrange parameters to $0$.
\item Iterate the following steps until convergence, first to update the quantities associated to the source nodes, and then to update the quantities associated to the target nodes:
\begin{itemize}
\item The elements of the fundamental matrix are computed from the current augmented costs (Equation (\ref{Eq_fundamentalMatrix01})) on $G_{\mathrm{ext}}$.
\item The Lagrange parameters are updated (Equations (\ref{Eq_lagrange_parameters_updates01})).
\item The augmented costs are updated (Equation (\ref{Eq_redefined_costs01})).
\end{itemize}
\item Compute the optimal policy (transition probabilities) from the augmented costs (depending on the Lagrange parameters, see Equation (\ref{Eq_redefined_costs01})) on $G_{\mathrm{ext}}$ thanks to Equation (\ref{Eq_biased_transition_probabilities01}).
\end{itemize}
The time complexity of the algorithm is dominated by the two systems of linear equations that need to be solved at each iteration. Therefore, it is of order $2k \cdot O(n^{3})$, where $n$ is the number of nodes and $k$ is the number of required iterations. Note that Algorithm \ref{Alg_optimal_transport01} is closely related to Algorithm 2 presented in \cite{Guex-2019}, page 102. Let us now introduce a dissimilarity measure based on this optimal transport model.
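As a complement to the pseudocode of Algorithm \ref{Alg_optimal_transport01}, here is a minimal NumPy sketch of the procedure (function name, 0-based indexing, and convergence settings are ours; this is an illustrative implementation, not the authors' reference code):

```python
import numpy as np

def margin_constrained_rsp(P_ext, C_ext, sigma_in, sigma_out, theta,
                           n_iter=500, tol=1e-12):
    """Margin-constrained bag-of-paths algorithm on the extended graph
    (0-based: node 0 is the source supernode, node n-1 the absorbing
    target supernode). Returns the optimal randomized policy P*."""
    n = P_ext.shape[0]
    In, Out = np.flatnonzero(sigma_in), np.flatnonzero(sigma_out)
    C = C_ext.astype(float).copy()                   # augmented cost matrix C'
    lam_in, lam_out = np.zeros(n), np.zeros(n)
    e1, en = np.eye(n)[0], np.eye(n)[-1]
    for _ in range(n_iter):
        lam_prev = np.concatenate((lam_in, lam_out))
        # source-side update: backward variables z_{k,n}
        W = P_ext * np.exp(-theta * C)
        zb = np.linalg.solve(np.eye(n) - W, en)
        lam_in[In] = np.log(zb[In]) / theta
        lam_in[In] -= sigma_in[In] @ lam_in[In]      # normalization step
        C[0, In] = lam_in[In]                        # augmented costs on 1 -> In
        # target-side update: forward variables z_{1,k}
        W = P_ext * np.exp(-theta * C)
        zf = np.linalg.solve(np.eye(n) - W.T, e1)
        lam_out[Out] = (np.log(zf[Out])
                        - np.log(sigma_out[Out] / P_ext[Out, -1])) / theta
        lam_out[Out] -= sigma_out[Out] @ lam_out[Out]
        C[Out, -1] = lam_out[Out]                    # augmented costs on Out -> n
        if np.abs(np.concatenate((lam_in, lam_out)) - lam_prev).max() < tol:
            break
    # optimal biased policy: Diag(z_n)^{-1} W Diag(z_n), assuming zb > 0
    W = P_ext * np.exp(-theta * C)
    zb = np.linalg.solve(np.eye(n) - W, en)
    return W * zb[None, :] / zb[:, None]
```

On a small extended graph, the returned policy satisfies the margin constraints: the transition probabilities out of the source supernode equal $\boldsymbol{\sigma}_{\mathrm{in}}$ and the expected flows into the target supernode equal $\boldsymbol{\sigma}_{\mathrm{out}}$.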
\subsection{A distance measure between nodes}
\label{Section:Distances}
In this subsection, we will derive two important quantities from the margin-constrained bag-of-paths model. These quantities are the coupling matrix and the surprisal distance measure between nodes of the graph, defined from the coupling matrix. For more details concerning these two quantities, see \cite{Guex-2019}.
\begin{algorithm}[t!]
\caption[Solving the relative entropy-regularized optimal transport on a graph problem with multiple sources and targets]
{\small{Solving the relative entropy-regularized optimal transport on a graph problem with multiple sources and targets, called the margin-constrained bag-of-paths model.}}
\algsetup{indent=2em, linenodelimiter=.}
\begin{algorithmic}[1]
\small
\REQUIRE $\,$ \\
-- A weighted directed graph $G_{\mathrm{ext}}$, built from a strongly connected graph $G$ and containing $n$ nodes. Node $1$ is the source supernode and node $n$ the absorbing, target, supernode. The indegree of node 1 and the outdegree of node $n$ are both equal to $0$. \\
-- The set of input nodes $\mathcal{I}n$ (only connected to the source supernode $1$) and output nodes $\mathcal{O}ut$ (only connected to the target supernode $n$).\\
-- The $n\times n$ transition matrix $\mathbf{P}_{\!\!\mathrm{ext}}$ associated to $G_{\mathrm{ext}}$.\\
-- The $n\times n$ non-negative cost matrix $\mathbf{C}_{\mathrm{ext}}$ associated to $G_{\mathrm{ext}}$ (see Equation (\ref{Eq_extendedCostMatrix01})). These original costs are equal to zero for edges starting in node $1$ and ending in $\mathcal{I}n$ as well as edges starting in $\mathcal{O}ut$ and ending in node $n$. \\
-- The $n \times 1$ vectors of input flows, $\boldsymbol{\sigma}_{\mathrm{in}}$, and output flows, $\boldsymbol{\sigma}_{\mathrm{out}}$, both non-negative and summing to $1$.\\
-- The inverse temperature parameter $\theta$.\\
\ENSURE $\,$ \\
-- The $n \times n$ randomized policy defined by the transition matrix $\mathbf{P}^{*}$.\\
\STATE $\boldsymbol{\lambda}_{\mathrm{in}} \leftarrow \mathbf{0}$; $\boldsymbol{\lambda}_{\mathrm{out}} \leftarrow \mathbf{0}$ \COMMENT{initialize $n \times 1$ Lagrange parameter vectors} \\
\STATE $\mathbf{C}' \leftarrow \mathbf{C}_{\mathrm{ext}}$ \COMMENT{initialize the augmented costs matrix} \\
\REPEAT[main iteration loop]
\STATE $\mathbf{W} \leftarrow \mathbf{P}_{\!\!\mathrm{ext}}\circ\exp[-\theta\mathbf{C}']$ \COMMENT{compute $\mathbf{W}$ matrix (elementwise exponential and multiplication $\circ$)} \\
\STATE Solve $(\mathbf{I}-\mathbf{W}) \mathbf{z}_{n} = \mathbf{e}_{n}$ \COMMENT{backward variables (column $n$ of the fundamental matrix $\mathbf{Z}$) with elements $z_{kn}$ ($n$ is fixed)} \\
\FORALL[compute Lagrange parameters associated to source nodes]{$k \in \mathcal{I}n$}
\STATE $\lambda_{k}^{\mathrm{in}} \leftarrow \tfrac{1}{\theta} \log z_{kn}$ \COMMENT{compute Lagrange parameters} \\
\ENDFOR
\FORALL[update quantities associated to source nodes]{$k \in \mathcal{I}n$}
\STATE $\lambda_{k}^{\mathrm{in}} \leftarrow \lambda_{k}^{\mathrm{in}} - \displaystyle\sum_{k' \in \mathcal{I}n} \sigma_{k'}^{\mathrm{in}} \lambda_{k'}^{\mathrm{in}}$ \COMMENT{normalize Lagrange parameters} \\
\STATE $c'_{1k} \leftarrow \lambda_{k}^{\mathrm{in}}$ \COMMENT{update augmented costs (recall that $c_{1k}^\mathrm{ext}=0$ for all $k \in \mathcal{I}n$)} \\
\ENDFOR
\STATE $\mathbf{W} \leftarrow \mathbf{P}_{\!\!\mathrm{ext}} \circ \exp[-\theta\mathbf{C}']$ \COMMENT{update $\mathbf{W}$ matrix} \\
\STATE Solve $(\mathbf{I}-\mathbf{W}^{\text{T}}) \mathbf{z}_{1} = \mathbf{e}_{1}$ \COMMENT{forward variables (row $1$ of the fundamental matrix $\mathbf{Z}$) with elements $z_{1k}$ ($1$ is fixed)} \\
\FORALL[compute Lagrange parameters associated to target nodes]{$l \in \mathcal{O}ut$}
\STATE $\lambda_{l}^{\mathrm{out}} \leftarrow \tfrac{1}{\theta} \log z_{1l} - \tfrac{1}{\theta} \log \bigg( \dfrac{\sigma_{l}^{\mathrm{out}}} { p_{ln}^{\mathrm{ext}} } \bigg)$ \COMMENT{compute Lagrange parameters} \\
\ENDFOR
\FORALL[update quantities associated to target nodes]{$l \in \mathcal{O}ut$}
\STATE $\lambda_{l}^{\mathrm{out}} \leftarrow \lambda_{l}^{\mathrm{out}} - \displaystyle\sum_{l' \in \mathcal{O}ut} \sigma_{l'}^{\mathrm{out}} \lambda_{l'}^{\mathrm{out}}$ \COMMENT{normalize Lagrange parameters} \\
\STATE $c'_{ln} \leftarrow \lambda_{l}^{\mathrm{out}}$ \COMMENT{update augmented costs (recall that $c_{ln}^\mathrm{ext}=0$ for all $l \in \mathcal{O}ut$)} \\
\ENDFOR
\UNTIL{convergence of $\boldsymbol{\lambda}_{\mathrm{in}}$, $\boldsymbol{\lambda}_{\mathrm{out}}$}
\STATE $\mathbf{P}^{*} \leftarrow (\mathbf{Diag}(\mathbf{z}_{n}))^{-1} \mathbf{W} \, \mathbf{Diag}(\mathbf{z}_{n})$ \COMMENT{compute optimal policy}
\RETURN $\mathbf{P}^{*}$
\end{algorithmic}
\label{Alg_optimal_transport01}
\end{algorithm}
\subsubsection{The coupling matrix}
Let us first recall the definition of the \textbf{coupling matrix} \cite{villani2003topics,villani2008optimal}, which was used in \cite{Guex-2019} to define a distance measure between the nodes of the graph (see next subsection).
This coupling matrix is denoted by $\mathbf{\Gamma} = (\gamma_{ij})$ and is defined as the joint probability of starting the walk in node $i \in \mathcal{I}n$ (reaching input node $S = i$ from supernode $1$ at time step $1$) and ending the walk in node $j \in \mathcal{O}ut$ (visiting output node $T = j$ at time step $\ell(\wp) - 1$ and then immediately transiting to supernode $n$) when walking according to the optimal path probabilities defined in Equations (\ref{Eq_Boltzmann_probability_distribution01}) and (\ref{Eq_Boltzmann_probability_distribution02}) and using the augmented costs after convergence in order to satisfy the constraints,
\begin{align}
\gamma_{ij} &\triangleq \mathrm{P}^{*}(S = i, T = j)
= \mathrm{P}^{*}(\wp(1) = i, \wp(\ell - 1) = j | \wp(0) = 1, \wp(\ell) = n) \nonumber \\
&= \frac{w_{1i} \big( \sum_{\wp_{ij} \in \mathcal{P}_{ij}} \tilde{w}(\wp_{ij}) \big) w_{jn}} {\sum_{\wp'\in\mathcal{P}} \tilde{w}(\wp')}
= \frac{w_{1i} z'_{ij} w_{jn}} {\sum_{\wp'\in\mathcal{P}} \tilde{w}(\wp')} \nonumber \\
&= \frac{w_{1i} z'_{ij} w_{jn}} {\sum_{i' \in \mathcal{I}n} \sum_{j' \in \mathcal{O}ut} w_{1i'} z'_{i'j'} w_{j'n}} \quad \text{with } i \in \mathcal{I}n \text{ and } j \in \mathcal{O}ut
\label{Eq_coupling_probabilities01}
\end{align}
where $\mathcal{P}_{ij}$ is the set of paths starting in input node $i \in \mathcal{I}n$ and ending in output node $j \in \mathcal{O}ut$.
It also holds that $\sum_{i \in \mathcal{I}n} \gamma_{ij} = \sigma_{j}^{\mathrm{out}}$ and $\sum_{j \in \mathcal{O}ut} \gamma_{ij} = \sigma_{i}^{\mathrm{in}}$.
Notably, the weights $w_{ij}$ are computed from the augmented costs after convergence of Algorithm \ref{Alg_optimal_transport01}. Further note that the rows and the columns associated with the two supernodes indexed as nodes $1$ and $n$ should be removed to obtain the coupling matrix associated with the original graph.
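As a concrete illustration, Equation (\ref{Eq_coupling_probabilities01}) can be evaluated once the weights are known. The sketch below assumes toy values for the entry weights $w_{1i}$, the exit weights $w_{jn}$, and a substochastic weight matrix $\mathbf{W}$ on the original nodes, and uses the fundamental matrix $(\mathbf{I}-\mathbf{W})^{-1}$ to sum the weights of all paths; in the paper, $\mathbf{W}$ would be built from the augmented costs after convergence.

```python
import numpy as np

def coupling_from_weights(w_in, w_out, W):
    """Coupling matrix of Eq. (coupling): gamma_ij proportional to
    w_{1i} * z'_{ij} * w_{jn}, where Z' = (I - W)^{-1} sums the weights
    of all paths i -> j on the original graph. Toy inputs; in the paper,
    W is derived from the augmented costs after convergence."""
    Z = np.linalg.inv(np.eye(W.shape[0]) - W)   # fundamental matrix Z'
    G = w_in[:, None] * Z * w_out[None, :]
    return G / G.sum()                           # divide by the partition function

rng = np.random.default_rng(1)
W = rng.uniform(0.0, 0.15, size=(5, 5))  # substochastic, so (I - W) is invertible
w_in = rng.uniform(0.1, 1.0, size=5)     # toy entry weights w_{1i}
w_out = rng.uniform(0.1, 1.0, size=5)    # toy exit weights w_{jn}
gamma = coupling_from_weights(w_in, w_out, W)
```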
\subsubsection{The margin-constrained bag-of-paths surprisal distance}
\label{Subsection:cBoP}
We can now compute the \textbf{margin-constrained bag-of-paths surprisal distance} introduced in \cite{Guex-2019} (the name we adopt in this paper; the measure is itself inspired by \cite{Francoisse-2017}) as
\begin{equation}
\myDelta_{ij}^{\mathrm{sur}} =
\begin{cases}
-\tfrac{1}{2}(\mathrm{log}(\gamma_{ij})+\mathrm{log}(\gamma_{ji}))&\text{if } i \neq j,\\
\hspace{3.1mm} 0 & \text{if } i = j\\
\end{cases}
\label{Eq_surprisal}
\end{equation}
where the $\gamma_{ij}$ are the elements of the coupling matrix $\mathbf{\Gamma}$ defined in Equation (\ref{Eq_coupling_probabilities01}). This distance is a generalization of the surprisal distance \cite{Francoisse-2017,Kivimaki-2012} in which positive weights $\mathbf{v}=(v_i)$, summing to 1, can be attached to each node through $\boldsymbol{\sigma}_{\mathrm{in}}$ and $\boldsymbol{\sigma}_{\mathrm{out}}$.
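As an illustration, Equation (\ref{Eq_surprisal}) is straightforward to evaluate from any coupling matrix; the sketch below uses a toy $\mathbf{\Gamma}$, not one produced by the algorithm.

```python
import numpy as np

def surprisal_distance(gamma):
    """Margin-constrained BoP surprisal distance of Eq. (surprisal):
    -(log gamma_ij + log gamma_ji)/2 off-diagonal, 0 on the diagonal."""
    with np.errstate(divide="ignore"):
        L = -np.log(gamma)
    D = 0.5 * (L + L.T)       # symmetrization
    np.fill_diagonal(D, 0.0)  # zero distance to oneself by convention
    return D

gamma = np.array([[0.20, 0.05, 0.02],   # toy coupling matrix
                  [0.04, 0.30, 0.03],
                  [0.01, 0.05, 0.30]])
D = surprisal_distance(gamma)
```

By construction the result is symmetric with a zero diagonal; small couplings (surprising joint start/end events) yield large distances.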
Intuitively, the distance (\ref{Eq_surprisal}) quantifies the ``surprise'' generated by the event $(S = i) \land (T = j)$, that is, picking a path with input node $i \in \mathcal{I}n$ and output node $j \in \mathcal{O}ut$ from the bag of paths $\mathcal{P}_{1n}$ defined on $G_{\mathrm{ext}}$, with probability distribution (\ref{Eq_Boltzmann_probability_distribution01}) and using augmented costs in order to satisfy the constraints (\ref{Eq_equality_constraints01}).
In this work, we consider that each node acts as both input and output, with $\boldsymbol{\sigma}_{\mathrm{in}}=\boldsymbol{\sigma}_{\mathrm{out}}=\mathbf{v}$. This choice is inspired from the PageRank \cite{Brin-1998,Page-1998} and the random walk with restart \cite{Gori-2007,Tong-2007} models in which, at each time step, the random walker has a chance of leaving the current node (which is then similar to a sink node) for restarting in some nodes of the network (which are then similar to source nodes).
This choice, although somewhat counter-intuitive\footnote{Because we inject and remove the same quantity of flow in each node, this setting is only meaningful when the parameter $\theta$ is not too large. Indeed, when $\theta$ increases, the transportation is more and more optimal and the input-output flows tend to neutralize each other, resulting in a coupling matrix that converges to the identity matrix. This is, however, not the case for intermediate values of $\theta$, for which the coupling matrix is able to capture the similarity between nodes (in terms of proximity and high connectivity).}, provides quite competitive results as will be shown in this section.
Note that the authors of \cite{Guex-2019} also propose a hitting path version of this distance which is not definable in our present framework. This distance based on hitting paths will nevertheless be investigated in our experiments.
\section{Experimental comparison on semi-supervised classification tasks}
\label{Sec_Experimental_Comparison01}
In this section, the introduced method of Section \ref{Section:Distances} and its equivalent on hitting paths \cite{Guex-2019} will be compared in terms of classification accuracy on semi-supervised classification tasks with the other methods defined in the bag-of-paths framework. The goal of this experiment is to highlight the best methods within the bag-of-paths framework rather than propose an extended comparison with a large number of state-of-the-art techniques. Indeed, the methods defined in the bag-of-paths framework, like the free energy distance, have already demonstrated their competitiveness with state-of-the-art techniques in some pattern recognition tasks \cite{Francoisse-2017,Guex-2021,Sommer-2016}.
The section is organized as follows. First, the set of investigated methods is presented in Subsection \ref{SubSection:Methods}. Then, Subsection \ref{SubSection:ExpDesign} provides details on the experimental design inspired by \cite{Courtain-2020,Francoisse-2017}. Finally, Subsection \ref{SubSection:Results} presents and discusses the results of the experiments.
\subsection{Investigated methods}
\label{SubSection:Methods}
For our experimental comparisons, we have selected seven methods defined in the RSP/BoP framework, introduced in Section \ref{Sec_randomized_shortest_paths01}. Recall that the main difference between these two models is that the randomized shortest paths model is defined on the set of all (usually hitting) paths $\mathcal{P}_{st}^{\mathrm{h}}$ from a single source node $s$ to a single target node $t$, and not on the set of all paths $\mathcal{P}$ in the graph, as the bag-of-paths model is. Nevertheless, most of the methods introduced hereafter could be defined in both frameworks.
Moreover, as already stated in other terms in Subsection \ref{Subsec_problem_statement01}, the RSP framework interpolates between an optimal (exploitation) and a random (exploration) behavior through a monitoring parameter $\theta$. This framework therefore allowed us to define dissimilarity measures interpolating between the shortest path distance when $\theta$ is large (exploitation) and the commute time distance (up to a scaling factor) when $\theta \rightarrow 0^{+}$ (exploration). It therefore seemed natural to use these two limiting methods as baselines in our experiments. Finally, the seven methods retained for the experiments are:
\begin{itemize}
\item The \emph{shortest path distance} (SP) between two nodes $i$ and $j$ is defined as the cost of the lowest-cost path between these two nodes, derived from the cost matrix $\mathbf C$. This method is the most standard distance and has no hyperparameter.
\item The \emph{commute time kernel} (CT) \cite{FoussKDE-2005,Saerens04PCA} simply corresponds to the Moore-Penrose pseudoinverse of the Laplacian matrix $\mathbf{L^+}$ \cite{Fouss-2016}. This method has no hyperparameter.
\item The \emph{free energy distance} (FE) \cite{Francoisse-2017,Kivimaki-2012} is a distance built by symmetrizing the directed free energy distance presented in Equation (\ref{Eq_optimal_free_energy01}). This method has one hyperparameter $\theta$.
\item The regular \emph{surprisal distance} (Sup) \cite{Francoisse-2017} is a distance quantifying the ``surprise'' generated by the event $(S = i) \land (T = j)$ (see Subsection \ref{Subsection:cBoP}). This method has one hyperparameter $\theta$.
\item The regular \emph{randomized shortest paths dissimilarity} (RSP) \cite{Kivimaki-2012,Saerens-2008} is obtained by symmetrization of the expected cost of Equation (\ref{Eq_real_expected_cost01}). This method has one hyperparameter $\theta$.
\item The \emph{margin-constrained bag-of-paths surprisal distance} (cBoP) is the distance introduced in \cite{Guex-2019} and re-derived in this paper (Subsection \ref{Subsection:cBoP}) from another point of view. This method has two hyperparameters, $\theta$ and the non-negative weights vector $\mathbf{v}$.
\item The \emph{margin-constrained bag-of-hitting-paths surprisal distance} (cBoPH) \cite{Guex-2019} is the counterpart of the previous method in terms of hitting paths. This method has two hyperparameters, $\theta$ and the non-negative weights vector $\mathbf{v}$. We computed this quantity by following Algorithm 3, page 108, of \cite{Guex-2019}.
\end{itemize}
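To fix ideas, the two parameter-free baselines can be sketched in a few lines of numpy (toy graph with unit edge costs; a real implementation would use optimized routines such as those of SciPy):

```python
import numpy as np

def shortest_path_matrix(C):
    """All-pairs lowest path costs (Floyd-Warshall) from the cost matrix C,
    with np.inf encoding missing edges."""
    D = C.copy()
    np.fill_diagonal(D, 0.0)
    for k in range(D.shape[0]):
        D = np.minimum(D, D[:, [k]] + D[[k], :])  # relax through node k
    return D

def commute_time_kernel(A):
    """CT kernel: Moore-Penrose pseudoinverse of the Laplacian L = Diag(Ae) - A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.pinv(L)

A = np.array([[0., 1., 1., 0.],   # toy undirected adjacency matrix
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
C = np.where(A > 0, 1.0, np.inf)  # unit costs on existing edges
D_sp = shortest_path_matrix(C)
K_ct = commute_time_kernel(A)
```

On a connected graph, $\mathbf{L}^{+}$ is symmetric and doubly centered (its rows sum to zero), which is why the CT kernel can be fed directly to a kernel SVM.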
\subsection{Experimental design}
\label{SubSection:ExpDesign}
As mentioned earlier, our experimental methodology is closely related to the one used in \cite{Courtain-2020,Francoisse-2017}. This design was initially inspired by \cite{Tang-2009,Tang-2009b,Tang-2010,Zhang-2008b,Zhang-2008} and was recently used in \cite{Guex-2021}, leading to interesting results. Therefore, the following section will only summarize the procedure\footnote{The interested reader can find a more complete description in Section 7 of \cite{Francoisse-2017}.} and emphasize the main differences.
\subsubsection{Datasets}
A collection of 14 well-known network datasets, already used in previous experimental comparisons, has been selected to evaluate the performance of the different methods. The collection includes the 4 WebKB datasets \cite{Macskassy-07}, the IMDB dataset \cite{Macskassy-07}, and 9 datasets extracted from the 20 Newsgroups corpus \cite{lichman2013uci,Yen-2009}. All these datasets are described by an adjacency matrix \textbf{A} and a class label vector \textbf{y}. Note that we consider each graph as undirected and enforce this by setting $\mathbf{A} = (\mathbf{A}+\mathbf{A}^{\mathrm{T}})/2$. Furthermore, the elements of the cost matrix $\mathbf{C}$ are defined as $c_{ij} = 1/a_{ij}$, as for electrical networks \cite{Francoisse-2017}. A summary of the main characteristics of each dataset can be found in Table \ref{Table:datasets}.
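The preprocessing just described (symmetrization and cost construction) amounts to the following sketch, shown on a toy adjacency matrix:

```python
import numpy as np

def preprocess(A_raw):
    """Symmetrize the adjacency matrix, A = (A + A^T)/2, and derive the
    cost matrix c_ij = 1/a_ij, with infinite cost on missing edges."""
    A = (A_raw + A_raw.T) / 2.0
    C = np.full(A.shape, np.inf)
    C[A > 0] = 1.0 / A[A > 0]
    return A, C

A_raw = np.array([[0., 2., 0.],   # toy, possibly asymmetric, adjacency matrix
                  [1., 0., 3.],
                  [0., 3., 0.]])
A, C = preprocess(A_raw)
```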
\begin{table}[t]
\centering
\footnotesize
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Dataset Name} & \scalebox{0.8}{\raisebox{0.4ex}{\#}} \textbf{Labels} & \scalebox{0.8}{\raisebox{0.4ex}{\#}} \textbf{Nodes} & \scalebox{0.8}{\raisebox{0.4ex}{\#}} \textbf{Edges} & \textbf{Prior of the majority class} \\ \hline
webKB-cornell (DB1) & 6 & 346 & 13416 & 41.91\% \\ \hline
webKB-texas (DB2)& 6 & 334 & 16494 & 48.80\% \\ \hline
webKB-washington (DB3) & 6 & 434 & 15231 & 39.17\% \\ \hline
webKB-wisconsin (DB4)& 6 & 348 & 16625 & 44.54\% \\ \hline
imdb (DB5) & 2 & 1126 & 20282 & 50.18\% \\ \hline
news-2cl-1 (DB6) & 2 & 400 & 33854 & 50.00\% \\ \hline
news-2cl-2 (DB7) & 2 & 398 & 21480 & 50.25\% \\ \hline
news-2cl-3 (DB8) & 2 & 399 & 36527 & 50.13\% \\ \hline
news-3cl-1 (DB9)& 3 & 600 & 70591 & 33.34\% \\ \hline
news-3cl-2 (DB10)& 3 & 598 & 68201 & 33.44\% \\ \hline
news-3cl-3 (DB11)& 3 & 595 & 64169 & 33.61\% \\ \hline
news-5cl-1 (DB12)& 5 & 998 & 176962 & 20.04\%\\ \hline
news-5cl-2 (DB13) & 5 & 999 & 164452 & 20.02\% \\ \hline
news-5cl-3 (DB14) & 5 & 997 & 155618 & 20.06\% \\ \hline
\end{tabular}
\caption{Main characteristics of the datasets used in our experiments.}
\label{Table:datasets}
\end{table}
\subsubsection{Experimental methodology}
The graph-based semi-supervised classification methodology is divided into two parts. The first part consists of extracting $\{5\%,10\%,20\%\}$ of the dominant eigenvectors of a kernel matrix to use them as node features in a linear support vector machine (SVM)\footnote{We use the LIBSVM library \cite{LIBSVM} with the options '-s 0' and '-t 0'.}. These extracted features contain condensed information about the graph structure.
The second part consists of directly feeding the kernel matrix into a kernel SVM\footnote{We use the LIBSVM library \cite{LIBSVM} with the options '-s 0' and '-t 4'.}. We will refer to the first part as 5\%F, 10\%F, and 20\%F and to the second part as Ker. The main objective is to determine to what extent the different methods can deal with partial information about the graph structure, contained in only a few dimensions ($5\%$, $10\%$ and $20\%$ -- node feature extraction), as well as with the full information contained in the kernel matrix.
The kernel matrices $\mathbf{K}$ are derived from the dissimilarity matrices by using both classical multidimensional scaling (MDS) \cite{Borg-1997} and Gaussian transformation (Gauss) \cite{Scholkopf-2002}. We also considered centering the Gaussian kernels (GaussCenter)\footnote{We did not apply this transformation to the MDS kernels as they are centered by construction.} by applying the following transformation $\mathbf{K}=\mathbf{H}\mathbf{K}\mathbf{H}$ where $\mathbf{H} = \mathbf{I} - \mathbf{e} \mathbf{e}^{\mathrm{T}}/n$ is the centering matrix, $\mathbf{e}$ is a column vector full of 1's and $n$ is the number of nodes.
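These kernel transformations can be sketched as follows (the Gaussian bandwidth below is an illustrative choice, not the setting used in the experiments):

```python
import numpy as np

def center(K):
    """Double centering K <- H K H with H = I - e e^T / n."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def mds_kernel(D):
    """Classical MDS kernel from a distance matrix: K = -1/2 * H D^(2) H,
    where D^(2) holds the squared distances (centered by construction)."""
    return -0.5 * center(D**2)

def gaussian_kernel(D, sigma=1.0):
    """Gaussian transformation of a distance matrix; sigma is illustrative."""
    return np.exp(-D**2 / (2.0 * sigma**2))

D = np.array([[0., 1., 2.],     # toy distance matrix (3 points on a line)
              [1., 0., 1.],
              [2., 1., 0.]])
K_mds = mds_kernel(D)                   # MDS kernel, centered by construction
K_gc = center(gaussian_kernel(D))       # the "GaussCenter" variant
```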
For the first part of the classification method, we also try two different options for extracting the node features. The first option is to weight the dominant eigenvectors by the square root of their corresponding eigenvalues before concatenating them into the data matrix $\mathbf{X}$. The matrix $\mathbf{X}$ contains the features of the nodes on its rows and is used as the input of the SVM. This option is equivalent to a multidimensional scaling limited to a reduced number of dimensions and it is denoted as SD (spectral decomposition). The second option corresponds to directly concatenating the dominant eigenvectors into the matrix $\mathbf{X}$ and to normalize each row, in such a way that the resulting node feature vectors are of unit length. This normalization corresponds to a projection of the rows of $\mathbf{X}$ on the unit radius sphere centered at the origin that removes the effect of the size of the feature vectors (only the direction is relevant). We will refer to this second option as NSD (normalized spectral decomposition). For conciseness, we only report for each method the results of the best kernel transformation and feature extraction options according to the final Nemenyi tests \cite{Demsar-2006}. The best combination for each method is reported in Table \ref{Tab:CombiKernelFE}.
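The SD and NSD feature extraction options can be sketched as follows (toy kernel matrix; negative eigenvalues, which can occur for non-positive-semidefinite similarity matrices, are clipped to zero here):

```python
import numpy as np

def extract_features(K, m, variant="SD"):
    """Top-m spectral node features of a kernel matrix K.
    SD:  eigenvectors weighted by the square root of their (clipped)
         eigenvalues, i.e. a reduced multidimensional scaling.
    NSD: raw dominant eigenvectors with rows projected on the unit sphere."""
    lam, U = np.linalg.eigh(K)             # eigenvalues in ascending order
    idx = np.argsort(lam)[::-1][:m]        # indices of the m dominant ones
    U = U[:, idx]
    if variant == "SD":
        return U * np.sqrt(np.clip(lam[idx], 0.0, None))
    return U / np.linalg.norm(U, axis=1, keepdims=True)   # NSD

K = np.array([[2., 1., 0.],    # toy kernel matrix
              [1., 2., 1.],
              [0., 1., 2.]])
X_sd = extract_features(K, m=2, variant="SD")
X_nsd = extract_features(K, m=2, variant="NSD")
```

The rows of the returned matrix are the node feature vectors fed to the linear SVM.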
The performance of the different methods will be evaluated in terms of classification accuracy. To avoid large variance in the results, all the methods are assessed by repeating 10 runs of a standard $5 \times 5$ nested cross-validation methodology with different folds of labeled/unlabeled nodes. In each external 5-folds cross-validation, methods are trained on 1 fold containing $20\%$ of the labels, and the remaining $80\%$ of the labels are hidden. The parameters of each method are tuned on the training fold by performing an internal 5-fold cross-validation with a labeling rate of $80\%$. In each run, external and internal folds are kept identical for all methods. The final accuracy and standard deviation are obtained by averaging the 50 results of the external cross-validation folds.
Concerning the parameters, the $\theta$ of the bag-of-paths-type methods is tuned over the set of values $\{10^{-6},10^{-5},10^{-4},10^{-3},10^{-2},10^{-1},1,10\}$, and the margin parameter $c$ of the SVM is tuned over the set of values $\{10^{-2},10^{-1}, 1, 10, 100\}$. For the margin-constrained bag-of-paths methods introduced in \cite{Guex-2019} and reinterpreted in Subsection \ref{Subsection:cBoP}, we tested the three following types of positive weights $\mathbf{v}$:
\begin{itemize}
\item Uniform weights: $\mathbf{e}/n$;
\item L1-normalized degree weights: $\mathbf{d} / (\mathbf{e}^{\text{T}}\mathbf{d})$ with $\mathbf{d} = \mathbf{A} \mathbf{e}$;
\item L1-normalized inverse degree weights: $(\mathbf{e} \div \mathbf{d}) / \big( \mathbf{e}^{\text{T}} (\mathbf{e} \div \mathbf{d}) \big)$.
\end{itemize}
where $\div$ denotes elementwise division.
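These three weightings can be computed directly from the adjacency matrix, as in the following sketch:

```python
import numpy as np

def weight_vectors(A):
    """The three candidate sum-to-one node weightings v tested above."""
    n = A.shape[0]
    e = np.ones(n)
    d = A @ e                                     # degree vector d = A e
    return {
        "uniform": e / n,                         # e / n
        "degree": d / (e @ d),                    # L1-normalized degrees
        "inv_degree": (e / d) / (e @ (e / d)),    # L1-normalized inverse degrees
    }

A = np.array([[0., 1., 1.],   # toy adjacency matrix with degrees [2, 1, 1]
              [1., 0., 0.],
              [1., 0., 0.]])
v = weight_vectors(A)
```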
To avoid redundancy in the comparisons, we only present the best results over the three (positive sum-to-one) weightings for each of the two margin-constrained bag-of-paths methods according to Nemenyi tests \cite{Demsar-2006}. We observed that, for every feature extraction set, the weighting achieving the best result is the normalized degree for both the hitting and non-hitting margin-constrained bag-of-paths surprisal distances.
\begin{table}[t]
\footnotesize
\begin{center}
\scalebox{1}{
\begin{tabular}{|l|c|c|c|c|}
\hline
Methods & 5\%F & 10\%F & 20\%F & Ker \\ \hline
CT & NSD & NSD & NSD & / \\ \hline
cBoP & GaussCenterSD & GaussCenterSD & GaussCenterSD & GaussCenter \\ \hline
cBoPH & GaussCenterSD & GaussSD & GaussCenterSD & Gauss \\ \hline
FE & GaussCenterSD & GaussCenterSD & GaussSD & Gauss \\ \hline
RSP & GaussCenterSD & GaussSD & GaussCenterSD & Gauss \\ \hline
SP & GaussSD & GaussSD & GaussCenterSD & GaussCenter \\ \hline
Sup & GaussSD & GaussSD & GaussCenterSD & Gauss \\ \hline
\end{tabular}}
\caption{\footnotesize{The best combination of kernel transformation and feature extraction options for each method according to Nemenyi tests.}}
\label{Tab:CombiKernelFE}
\end{center}
\end{table}
\subsection{Results and discussion}
\label{SubSection:Results}
The classification accuracy and standard deviation averaged over the 50 runs are reported in Table \ref{table:classres} for the 14 datasets and the four extracted feature sets. The best performing method is highlighted in boldface for each dataset and each feature set. Bold values highlighted in grey indicate the best performing method overall (across all feature sets) for each dataset.
\subsubsection{Comparison of the different methods}
The classification results of the seven different methods are now compared for each feature set, and then across all feature sets.
From the raw results of Table \ref{table:classres}, it can be seen that 11 of the 14 best per-dataset results are obtained by directly feeding the kernel to the SVM. Furthermore, 9 of these 14 results are obtained by the two introduced constrained optimal transport methods (cBoP and cBoPH). Across all the feature sets, we observe that the cBoP seems to outperform all the methods on DB1 to DB4, except in 20\%F where the RSP prevails on DB1. The results on DB5 are more contrasted: the FE performs best for 10\%F and 20\%F, whereas the CT and the Sup are respectively the highlighted methods for 5\%F and Ker. For the newsgroup datasets (DB6-DB14), the best method is dataset- and feature set-dependent, except for DB8 where the SP dominates. Nevertheless, we can underline that the cBoPH appears 18 times among the 32 highlighted results of the eight other newsgroup datasets. The remaining methods appear only six times for the FE, five times for the Sup, twice for the RSP, and once for the CT. Another observation is that the CT obtains noticeably lower results with Ker compared to its performance in the other feature sets. The additional information provided by the kernel is therefore not optimally exploited by this method.
\setBoldness{0.5}%
\begin{table}[t!]
\footnotesize
\begin{center}
\scalebox{0.75}{
\begin{tabular}{lccccccc}
\hline
\textbf{Method $\rightarrow$}: & \multicolumn{1}{c}{\multirow{2}{*}{CT}} & \multicolumn{1}{c}{\multirow{2}{*}{cBoP}} & \multicolumn{1}{c}{\multirow{2}{*}{cBoPH}} & \multicolumn{1}{c}{\multirow{2}{*}{FE}} & \multicolumn{1}{c}{\multirow{2}{*}{RSP}} & \multicolumn{1}{c}{\multirow{2}{*}{SP}} & \multicolumn{1}{c}{\multirow{2}{*}{Sup}} \\
\textbf{Dataset} $\downarrow$:& \multicolumn{1}{c}{}&\multicolumn{1}{c}{} &\multicolumn{1}{c}{} &\multicolumn{1}{c}{} &\multicolumn{1}{c}{} &\multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\
\hline
&\multicolumn{6}{c}{\textbf{5\%F}} \\
\hline
webKB-cornell &54.10$\pm$3.91 & {\fbseries59.41$\pm$3.88} & 58.82$\pm$2.82 & 59.07$\pm$2.85 & 58.82$\pm$2.90 & 47.65$\pm$3.43 & 58.67$\pm$3.15 \\
webKB-texas & 67.24$\pm$3.40 & {\fbseries78.62$\pm$2.34} & 77.44$\pm$3.05 & 77.87$\pm$3.13 & 76.28$\pm$3.52 & 66.89$\pm$2.83 & 77.14$\pm$2.94 \\
webKB-washington & 65.59$\pm$2.10 &{\fbseries71.10$\pm$3.02} & 69.65$\pm$2.74 & 69.50$\pm$2.83 & 69.49$\pm$2.87 & 62.75$\pm$2.69 & 68.98$\pm$2.61 \\
webKB-wisconsin &71.88$\pm$3.53 &{\fbseries79.43$\pm$1.99} & 78.53$\pm$2.07 & 78.16$\pm$1.87 & 76.62$\pm$2.23 & 64.14$\pm$3.12 & 77.67$\pm$2.24 \\
imdb & {\fbseries78.90$\pm$1.31} &77.61$\pm$1.20& 78.29$\pm$1.74 & 78.34$\pm$1.76 & 78.09$\pm$1.45 & 77.58$\pm$1.57 & 78.28$\pm$1.42 \\
news-2cl-1 &\cellcolor[HTML]{C0C0C0}{\fbseries96.64$\pm$0.73} & 95.50$\pm$1.30 & 95.84$\pm$1.04 & 95.66$\pm$1.18 & 95.56$\pm$1.74 & 92.82$\pm$1.30 & 95.59$\pm$1.30 \\
news-2cl-2 &91.17$\pm$1.34 & 91.64$\pm$2.22 & 93.29$\pm$1.50 & 93.22$\pm$1.41 & 92.54$\pm$1.96 & 91.61$\pm$1.81 & \cellcolor[HTML]{C0C0C0}{\fbseries93.47$\pm$1.40} \\
news-2cl-3 &94.08$\pm$0.68 & 96.18$\pm$1.01 & 96.47$\pm$0.81 & 96.39$\pm$0.90 & 96.19$\pm$1.26 &{\fbseries96.59$\pm$0.86} & 96.40$\pm$1.02 \\
news-3cl-1 &90.52$\pm$1.32 & 92.66$\pm$1.28 & {\fbseries93.05$\pm$1.43}& 92.82$\pm$1.48 & 92.75$\pm$1.48 & 92.87$\pm$1.18 & 92.91$\pm$1.38 \\
news-3cl-2 &89.92$\pm$1.23 & 92.00$\pm$1.51 & {\fbseries92.50$\pm$1.38} & 92.29$\pm$1.39 & 92.07$\pm$1.43 & 89.61$\pm$1.46 & 92.37$\pm$1.66 \\
news-3cl-3 &89.63$\pm$1.21 & 92.14$\pm$1.65 & {\fbseries92.91$\pm$1.36} & 92.47$\pm$1.61 & 92.06$\pm$1.81 & 91.42$\pm$1.00 & 92.71$\pm$1.40 \\
news-5cl-1 &78.50$\pm$1.55 & 88.49$\pm$0.95 & {\fbseries88.79$\pm$1.05} & 88.49$\pm$1.17 & 88.43$\pm$1.05 & 87.93$\pm$1.09 & 88.49$\pm$1.37 \\
news-5cl-2 &76.04$\pm$1.61 & 82.76$\pm$1.42 & 83.23$\pm$1.33 & 83.33$\pm$1.40 & 82.58$\pm$1.76 & 81.80$\pm$1.24 & {\fbseries83.35$\pm$1.37} \\
news-5cl-3 &75.25$\pm$1.78 & 82.19$\pm$1.36 & 82.51$\pm$1.73 & {\fbseries82.81$\pm$1.80} & 82.31$\pm$1.53 & 77.17$\pm$1.77 & 82.49$\pm$1.62 \\
\hline
&\multicolumn{6}{c}{\textbf{10\%F}} \\
\hline
webKB-cornell & 53.51$\pm$3.28 & {\fbseries58.08$\pm$3.25} & 57.98$\pm$3.19 & 57.79$\pm$3.10 & 57.95$\pm$3.95 & 48.29$\pm$3.27 & 57.00$\pm$3.89 \\
webKB-texas & 70.26$\pm$2.71 & {\fbseries78.05$\pm$3.14} & 76.82$\pm$3.03 & 76.81$\pm$3.27 & 76.48$\pm$2.49 & 65.86$\pm$3.25 & 76.04$\pm$3.46 \\
webKB-washington & 63.53$\pm$2.56 & {\fbseries70.46$\pm$2.58} & 68.29$\pm$2.46 & 68.63$\pm$2.15 & 69.59$\pm$2.78 & 62.40$\pm$2.69 & 67.75$\pm$2.54 \\
webKB-wisconsin & 73.00$\pm$1.36 & \cellcolor[HTML]{C0C0C0}{\fbseries79.61$\pm$2.01} & 77.67$\pm$2.17 & 77.59$\pm$1.97 & 77.06$\pm$2.05 & 63.79$\pm$2.59 & 76.52$\pm$2.11 \\
imdb & 77.35$\pm$1.31 & 77.76$\pm$1.31 & 78.95$\pm$1.42 & {\fbseries79.10$\pm$1.38} & 78.83$\pm$1.36 & 77.57$\pm$1.56 & 78.63$\pm$1.55 \\
news-2cl-1 &93.30$\pm$1.64 & 95.00$\pm$1.06 & 95.08$\pm$1.69 & 95.06$\pm$1.30 &{\fbseries95.17$\pm$1.43} & 93.15$\pm$1.19 & 94.90$\pm$1.73 \\
news-2cl-2 &91.79$\pm$1.79 & 91.76$\pm$2.04 & 92.36$\pm$1.69 & 92.32$\pm$1.64 & 91.87$\pm$1.73 & 90.78$\pm$2.17 & {\fbseries92.41$\pm$1.52} \\
news-2cl-3 &93.53$\pm$1.17 & 96.22$\pm$0.83 & 96.17$\pm$1.13 & 96.29$\pm$1.02 & 96.36$\pm$1.03 & {\fbseries96.72$\pm$0.74} & 96.28$\pm$1.14 \\
news-3cl-1 &89.84$\pm$1.26 & 92.78$\pm$1.14 & 93.02$\pm$1.66 & {\fbseries93.08$\pm$1.25} & 92.63$\pm$1.53 & 92.83$\pm$1.09 & 92.69$\pm$1.46 \\
news-3cl-2 &90.28$\pm$1.56 & 92.23$\pm$1.53 & {\fbseries92.72$\pm$1.39} & 92.23$\pm$1.46 & 92.10$\pm$1.56 & 89.72$\pm$1.55 & 92.25$\pm$1.67 \\
news-3cl-3 &90.87$\pm$1.33 & 91.97$\pm$1.64 & {\fbseries92.97$\pm$1.47} & 92.23$\pm$1.48 & 92.21$\pm$1.27 & 91.72$\pm$1.12 & 92.76$\pm$1.46 \\
news-5cl-1 &83.95$\pm$1.16 & 88.56$\pm$1.03 & 88.79$\pm$1.26 & 88.71$\pm$0.96 & 88.45$\pm$1.30 & 88.01$\pm$0.81 & {\fbseries88.89$\pm$1.07} \\
news-5cl-2 &77.20$\pm$1.74 & 82.70$\pm$1.86 & {\fbseries83.56$\pm$1.52} & 83.18$\pm$2.02 & 82.42$\pm$1.96 & 81.55$\pm$1.35 & 83.36$\pm$1.88 \\
news-5cl-3 &80.50$\pm$1.82 & 82.16$\pm$1.82 & {\fbseries82.79$\pm$1.66} & 82.70$\pm$1.96 & 82.58$\pm$1.63 & 77.52$\pm$1.80 & 82.59$\pm$1.71 \\
\hline
&\multicolumn{6}{c}{\textbf{20\%F}} \\
\hline
webKB-cornell & 56.19$\pm$2.87 & 57.72$\pm$3.44 & 57.59$\pm$2.75 & 57.81$\pm$3.64 & {\fbseries58.13$\pm$3.71} & 48.46$\pm$3.62 & 56.54$\pm$3.32 \\
webKB-texas & 75.18$\pm$1.91 & {\fbseries77.66$\pm$3.48} & 75.91$\pm$3.95 & 76.72$\pm$3.60 & 76.45$\pm$3.22 & 65.06$\pm$3.32 & 76.12$\pm$3.57 \\
webKB-washington & 65.59$\pm$1.66 & {\fbseries70.41$\pm$2.31} & 68.95$\pm$2.23 & 69.36$\pm$2.04 & 69.83$\pm$2.80 & 63.12$\pm$2.31 & 67.61$\pm$2.31 \\
webKB-wisconsin & 74.48$\pm$1.71 & {\fbseries79.21$\pm$2.12} & 76.98$\pm$2.15 & 78.09$\pm$2.69 & 77.00$\pm$2.28 & 63.04$\pm$2.83 & 75.57$\pm$2.46 \\
imdb & 78.38$\pm$1.48 & 77.75$\pm$1.47 & 78.83$\pm$1.84 & {\fbseries79.06$\pm$1.51} & 78.71$\pm$1.66 & 77.43$\pm$1.69 & 78.42$\pm$1.54 \\
news-2cl-1 & 87.15$\pm$2.29 & 94.76$\pm$1.44 & {\fbseries95.16$\pm$1.37} & 95.00$\pm$1.53 & 95.00$\pm$2.07 & 93.31$\pm$1.13 & 94.95$\pm$1.67 \\
news-2cl-2 & 86.75$\pm$2.48 & 91.31$\pm$2.14 & 91.92$\pm$1.88 & {\fbseries92.01$\pm$1.41} & 91.89$\pm$1.53 & 90.94$\pm$1.81 & 91.63$\pm$2.17 \\
news-2cl-3 & 90.46$\pm$1.53 & 96.29$\pm$0.88 & 96.35$\pm$0.93 & 96.30$\pm$0.98 & 96.29$\pm$1.10 & {\fbseries96.64$\pm$1.00} & 96.38$\pm$0.97 \\
news-3cl-1 & 84.03$\pm$1.95 & 92.69$\pm$1.39 & 93.00$\pm$1.25 & {\fbseries93.01$\pm$1.40} & 92.50$\pm$1.43 & 92.82$\pm$0.98 & 92.99$\pm$1.27 \\
news-3cl-2 & 85.74$\pm$1.89 & 92.32$\pm$1.21 & {\fbseries92.99$\pm$1.22} & 92.69$\pm$1.32 & 92.69$\pm$1.08 & 89.57$\pm$1.39 & 92.81$\pm$1.41 \\
news-3cl-3 & 87.42$\pm$2.06 & 92.15$\pm$1.44 & 92.71$\pm$1.19 & {\fbseries92.80$\pm$1.10} & 92.38$\pm$0.94 & 91.82$\pm$1.14 & 92.80$\pm$1.16 \\
news-5cl-1 & 81.21$\pm$1.66 & 88.55$\pm$1.12 & {\fbseries88.91$\pm$1.07} & 88.75$\pm$0.98 & 88.53$\pm$1.15 & 88.09$\pm$0.93 & 88.84$\pm$1.12 \\
news-5cl-2 & 75.69$\pm$1.82 & 82.85$\pm$1.48 & {\fbseries83.48$\pm$1.52} & 83.25$\pm$1.45 & 82.27$\pm$1.92 & 81.69$\pm$1.46 & 83.44$\pm$1.41 \\
news-5cl-3 & 75.81$\pm$1.84 & 82.42$\pm$1.57 & 82.81$\pm$1.80 & {\fbseries82.94$\pm$1.61} & 81.98$\pm$1.92 & 77.57$\pm$1.49 & 82.86$\pm$1.80 \\
\hline
&\multicolumn{6}{c}{\textbf{Ker}} \\
\hline
webKB-cornell & 42.05$\pm$0.40 & \cellcolor[HTML]{C0C0C0}{\fbseries59.45$\pm$2.99} & 58.79$\pm$3.38 & 58.51$\pm$2.81 & 58.30$\pm$3.76 & 48.58$\pm$3.68 & 58.71$\pm$2.75 \\
webKB-texas & 50.13$\pm$2.04 & \cellcolor[HTML]{C0C0C0}{\fbseries78.87$\pm$2.81} & 76.68$\pm$3.26 & 77.11$\pm$3.17 & 76.88$\pm$2.93 & 64.50$\pm$3.57 & 76.91$\pm$3.05 \\
webKB-washington & 44.44$\pm$5.54 & \cellcolor[HTML]{C0C0C0}{\fbseries71.84$\pm$2.04} & 70.12$\pm$2.19 & 69.46$\pm$2.02 & 69.78$\pm$3.14 & 63.08$\pm$2.55 & 69.05$\pm$2.20 \\
webKB-wisconsin & 53.28$\pm$4.26 & {\fbseries78.35$\pm$1.97} & 76.42$\pm$2.42 & 77.26$\pm$2.24 & 75.69$\pm$2.32 & 63.96$\pm$2.93 & 76.25$\pm$2.32 \\
imdb & 79.15$\pm$1.08 & 77.86$\pm$1.57 & 79.24$\pm$1.48 & 78.95$\pm$1.51 & 78.99$\pm$1.21 & 77.57$\pm$1.97 & \cellcolor[HTML]{C0C0C0}{\fbseries79.58$\pm$1.52} \\
news-2cl-1 & 90.21$\pm$7.25 & 94.98$\pm$1.28 & 95.63$\pm$0.88 & 95.40$\pm$1.21 & {\fbseries95.66$\pm$1.50} & 93.41$\pm$1.08 & 95.45$\pm$0.88 \\
news-2cl-2 & 92.08$\pm$2.03 & 91.56$\pm$1.54 & {\fbseries92.59$\pm$1.60} & 92.27$\pm$1.61 & 92.04$\pm$1.60 & 91.25$\pm$1.56 & 92.32$\pm$1.97 \\
news-2cl-3 & 87.54$\pm$9.94 & 96.34$\pm$1.06 & 96.62$\pm$0.74 & 96.62$\pm$0.78 & 96.47$\pm$0.96 & \cellcolor[HTML]{C0C0C0}{\fbseries96.79$\pm$0.78} & 96.72$\pm$0.84 \\
news-3cl-1 & 63.96$\pm$15.90 & 92.85$\pm$1.25 & \cellcolor[HTML]{C0C0C0}{\fbseries93.33$\pm$1.00} & 93.24$\pm$0.92 & 92.80$\pm$1.32 & 92.84$\pm$1.00 & 93.29$\pm$1.02 \\
news-3cl-2 & 54.96$\pm$15.96 & 92.56$\pm$1.14 & 93.23$\pm$1.00 & 92.84$\pm$1.05 & 92.65$\pm$1.16 & 89.75$\pm$1.28 & \cellcolor[HTML]{C0C0C0}{\fbseries93.26$\pm$0.97} \\
news-3cl-3 & 52.11$\pm$12.71 & 92.30$\pm$1.16 & \cellcolor[HTML]{C0C0C0}{\fbseries93.26$\pm$1.02} & 92.78$\pm$1.16 & 92.42$\pm$1.27 & 91.89$\pm$0.79 & 92.98$\pm$1.21 \\
news-5cl-1 & 27.06$\pm$7.41 & 88.54$\pm$1.24 & \cellcolor[HTML]{C0C0C0}{\fbseries88.94$\pm$0.89} & 88.78$\pm$1.02 & 88.57$\pm$1.03 & 88.16$\pm$0.91 & 88.87$\pm$0.96 \\
news-5cl-2 & 30.23$\pm$8.69 & 82.73$\pm$1.47 & \cellcolor[HTML]{C0C0C0}{\fbseries83.94$\pm$1.32} & 83.26$\pm$1.20 & 82.66$\pm$1.64 & 81.66$\pm$1.38 & 83.70$\pm$1.27 \\
news-5cl-3 & 25.76$\pm$7.35 & 82.28$\pm$1.61 & \cellcolor[HTML]{C0C0C0}{\fbseries83.54$\pm$1.32} & 83.17$\pm$1.17 & 82.43$\pm$1.75 & 77.62$\pm$1.46 & 83.52$\pm$1.20 \\
\hline
\end{tabular}}
\end{center}
\caption{\footnotesize{Classification accuracy in percent $\pm$ standard deviation for the various classification methods, obtained on the different datasets. Results are reported for the four feature sets (5\%, 10\%, 20\%, and Ker). For each dataset and method, the final accuracy and standard deviation are obtained by averaging over 10 runs of a standard cross-validation procedure. Each run consists of a nested cross-validation with 5 external folds (test sets, for validation) on which the accuracy and the standard deviation of the classifier are averaged, and 5 internal folds (for parameter tuning). The best performing method is highlighted in boldface for each dataset and each feature set. Bold values highlighted in grey indicate the best performance overall for each dataset, across all feature sets.}}
\label{table:classres}
\end{table}
In order to have a more general overview of the results, a Borda ranking of the methods is performed for each feature set and reported in Table \ref{table:Borda}. The Borda ranking starts by sorting all the methods in ascending order of classification accuracy for each dataset. Then, the score of each method is computed by summing its ranks over all datasets. The best method is therefore the one with the highest Borda score, reflecting a higher overall accuracy across all the datasets.
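The Borda scoring just described can be sketched as follows (toy accuracy table; ties are broken arbitrarily here, whereas averaged ranks could also be used):

```python
import numpy as np

def borda_scores(acc):
    """Borda scores from an (n_datasets x n_methods) accuracy table:
    per dataset, methods are ranked in ascending accuracy (rank 1 = worst,
    rank n_methods = best), and ranks are summed over datasets."""
    order = np.argsort(acc, axis=1)               # ascending accuracy per row
    ranks = np.empty_like(order)
    rows = np.arange(acc.shape[0])[:, None]
    ranks[rows, order] = np.arange(1, acc.shape[1] + 1)[None, :]
    return ranks.sum(axis=0)                      # higher score = better method

acc = np.array([[0.90, 0.95, 0.85],   # toy accuracies: 2 datasets x 3 methods
                [0.70, 0.80, 0.75]])
scores = borda_scores(acc)
```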
From Table \ref{table:Borda}, we observe that the ranking of the methods does not change much across the feature sets. The top three methods are always the cBoPH, the FE, and the Sup. For 5\%F and 10\%F, the cBoPH is first, followed by the FE and then by the Sup. The cBoPH only swaps first place with the FE for 20\%F, whereas the Sup takes second place from the FE for Ker. The fourth and fifth positions of the ranking are respectively taken by the cBoP and the RSP for 5\%F, 10\%F, and Ker, although they exchange their ranks for 20\%F. The two methods at the bottom of the ranking are the SP and the CT, the CT being last except for 5\%F, where the SP is. Globally, the cBoPH is in the first position, followed in order by the FE, the Sup, the cBoP, the RSP, the SP, and the CT. Furthermore, by observing the scores, we notice that the SP and the CT obtain much worse results in comparison with the other methods.
\begin{table}[t!]
\footnotesize
\begin{center}
\scalebox{0.9}{
\begin{tabular}{l|c|c|c|c|c|c|c|c||c|c|}
\cline{2-11}
& \multicolumn{2}{c|}{\textbf{5\%}} & \multicolumn{2}{c|}{\textbf{10\%}} & \multicolumn{2}{c|}{\textbf{20\%}} & \multicolumn{2}{c||}{\textbf{Ker}} & \multicolumn{2}{c|}{\textbf{Overall}} \\ \hline
\multicolumn{1}{|l|}{\textbf{Method}} & Score & Position & Score & Position & Score & Position & Score & Position & Score & Position \\ \hline
\multicolumn{1}{|l|}{cBoPH} & 83 & 1 & 82 & 1 & 76 & 2 & 84 & 1 & 325 & 1 \\ \hline
\multicolumn{1}{|l|}{FE} & 73 & 2 & 74 & 2 & 83 & 1 & 67 & 3 & 297 & 2 \\ \hline
\multicolumn{1}{|l|}{Sup} & 69 & 3 & 63 & 3 & 66 & 3 & 78 & 2 & 276 & 3 \\ \hline
\multicolumn{1}{|l|}{cBoP} & 59 & 4 & 62 & 4 & 58 & 5 & 57 & 4 & 236 & 4 \\ \hline
\multicolumn{1}{|l|}{RSP} & 48 & 5 & 60 & 5 & 60 & 4 & 53 & 5 & 221 & 5 \\ \hline
\multicolumn{1}{|l|}{SP} & 29 & 7 & 28 & 6 & 30 & 6 & 32 & 6 & 119 & 6 \\ \hline
\multicolumn{1}{|l|}{CT} & 31 & 6 & 23 & 7 & 20 & 7 & 21 & 7 & 95 & 7 \\ \hline
\end{tabular}}
\end{center}
\caption{\footnotesize{Overall position of the different classification techniques for the four feature sets (5\%, 10\%, 20\% and kernel), and overall, according to Borda's method performed across all datasets (the higher the score, the better).}}
\label{table:Borda}
\end{table}
The next step of our analysis consists of comparing the different methods across all the 14 datasets through a Friedman test followed by a Nemenyi post-hoc test \cite{Demsar-2006}. The Friedman test is a non-parametric equivalent of the repeated-measures ANOVA. The null hypothesis (H0) of this test is that all the classifiers have the same average ranks. The $p$-values of the Friedman tests are respectively $4.5 \times 10^{-7}$ for 5\%F, $3.7 \times 10^{-8}$ for 10\%F, $7.8 \times 10^{-9}$ for 20\%F, and $7.5 \times 10^{-9}$ for Ker. All these $p$-values are lower than the threshold $\alpha$ of $0.05$, meaning that we can reject H0 and that at least one classifier is significantly different from the others. As all the Friedman tests are significant, we can perform Nemenyi tests, which determine whether or not the performance of each method differs significantly from another. The results of these tests are reported in Figures \ref{fig:Nemenyi5} to \ref{fig:NemenyiKer}. First of all, we can observe that the rankings provided by the Nemenyi tests are quite similar to those provided by the Borda ranking. The tests confirm that the cBoPH, the FE, and the Sup all provide good results, which are significantly superior to the results obtained by the SP and the CT in all feature sets. Moreover, the cBoPH also outperforms the RSP in 5\%F. As regards the cBoP, it performs significantly better than the CT in all feature sets except for 5\%F. Furthermore, we can notice that the cBoP outperforms the SP for 10\%F. The tests also show that the RSP obtains results significantly superior to those of the CT for 10\%F and 20\%F.
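For reference, the Friedman test can be reproduced with SciPy; the sketch below uses synthetic accuracies (one array per method, one entry per dataset), so the resulting $p$-value is illustrative only, not one of the values reported above.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Synthetic matched accuracies: one array per method, 14 "datasets" each.
rng = np.random.default_rng(2)
base = rng.uniform(0.7, 0.9, size=14)
m1 = base + 0.05      # consistently best method
m2 = base             # intermediate method
m3 = base - 0.05      # consistently worst method

# Friedman test on the matched samples: H0 = identical average ranks.
stat, p = friedmanchisquare(m1, m2, m3)
reject_h0 = p < 0.05  # a significant test allows post-hoc Nemenyi comparisons
```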
\begin{figure}[t!]
\centering
\subfigure[5\%F]{\includegraphics*[width=0.49\textwidth,trim= 0 15 0 0]{BestDim5.pdf}\label{fig:Nemenyi5}}
\subfigure[10\%F]{\includegraphics*[width=0.49\textwidth,trim= 0 15 0 0]{BestDim10.pdf}\label{fig:Nemenyi10}}
\subfigure[20\%F]{\includegraphics*[width=0.49\textwidth,trim= 0 15 0 0]{BestDim20.pdf}\label{fig:Nemenyi20}}
\subfigure[Ker]{\includegraphics*[width=0.49\textwidth,trim= 0 15 0 0]{BestKernel.pdf}\label{fig:NemenyiKer}}
\subfigure[Overall]{\includegraphics*[width=0.49\textwidth,trim= 0 15 0 0]{BestAllMicro.pdf}\label{fig:NemenyiOverall}}
\caption{\footnotesize{Mean ranks and 95\% Nemenyi confidence intervals for the 7 methods across the 14 datasets for feature sets 5\%F (a), 10\%F (b), 20\%F (c), Ker (d) and Overall (all feature sets) (e). Two methods are considered as significantly different if their confidence intervals do not overlap. The x-axis unit is the average rank of the methods. The higher the rank, the better the method. The best method is highlighted.}}
\label{fig:NemenyiAll}
\end{figure}
We continue our analysis by performing multiple Wilcoxon signed-ranks tests for matched data \cite{Demsar-2006} to potentially discover other significant pairwise differences between the methods. The Wilcoxon signed-ranks test is a non-parametric equivalent of the paired $t$-test. The null hypothesis (H0) of this test is that the two compared classifiers perform equally well. The results of these tests are presented in Table \ref{table:WilcoxonDim5-10} for 5\%F and 10\%F and in Table \ref{table:WilcoxonDim20-Ker} for 20\%F and Ker. All the $p$-values lower than our threshold $\alpha$ of $0.05$ are highlighted, indicating that H0 is rejected. Besides confirming the findings of the Friedman-Nemenyi tests, the Wilcoxon tests show that the SP is outperformed by all the methods except the CT in all feature sets. As regards the CT, it is likewise outperformed by all the methods except the SP for 5\%F, 10\%F, and 20\%F. Moreover, the CT obtains results significantly inferior to those of all the methods in Ker. These findings confirm that the techniques developed in the bag-of-paths framework can take advantage of both the SP and the CT to outperform them whatever the retained amount of information. The tests also highlight that the cBoPH performs significantly better than the Sup in 10\%F and 20\%F and the RSP in 10\%F and Ker. For its part, the RSP obtains results significantly inferior to those of the Sup for 5\%F and Ker, as well as to those of the FE in all feature sets except for 10\%F.
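The pairwise comparisons behind these tables reduce to paired Wilcoxon signed-rank tests; a hedged sketch with synthetic accuracies follows (both methods and their scores are made up for illustration):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Hypothetical accuracies of two classifiers on 14 datasets;
# method_a is constructed to be about 2 points better on average.
method_b = rng.uniform(0.70, 0.90, size=14)
method_a = method_b + 0.02 + rng.normal(0.0, 0.005, size=14)

stat, p = wilcoxon(method_a, method_b)  # paired, two-sided by default
if p < 0.05:
    print("reject H0: the two classifiers perform significantly differently")
```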
\begin{table}[H]
\footnotesize
\begin{center}
\scalebox{1}{
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
\textbf{Method} & CT & cBoP & cBoPH & FE & RSP & SP & Sup \\ \hline
CT & \cellcolor[HTML]{000000}{\color[HTML]{000000} } & \cellcolor[HTML]{C0C0C0}0.0012 & \cellcolor[HTML]{C0C0C0}0.0006 & \cellcolor[HTML]{C0C0C0}0.0006 & \cellcolor[HTML]{C0C0C0}0.0006 & 0.9515 & \cellcolor[HTML]{C0C0C0}0.0006 \\ \hline
cBoP & \cellcolor[HTML]{C0C0C0}0.0002 & \cellcolor[HTML]{000000}{\color[HTML]{000000} } & 0.6257 & 0.6698 & 0.4631 & \cellcolor[HTML]{C0C0C0}0.0023 & 0.8077 \\ \hline
cBoPH & \cellcolor[HTML]{C0C0C0}0.0001 & 0.5830 & \cellcolor[HTML]{000000} & 0.3910 & \cellcolor[HTML]{C0C0C0}0.0002 & \cellcolor[HTML]{C0C0C0}0.0002 & \cellcolor[HTML]{C0C0C0}0.0085 \\ \hline
FE & \cellcolor[HTML]{C0C0C0}0.0001 & 0.6257 & 0.2166 & \cellcolor[HTML]{000000} & \cellcolor[HTML]{C0C0C0}0.0001 & \cellcolor[HTML]{C0C0C0}0.0006 & 0.3258 \\ \hline
RSP & \cellcolor[HTML]{C0C0C0}0.0001 & 0.5830 & \cellcolor[HTML]{C0C0C0}0.0419 & 0.0906 & \cellcolor[HTML]{000000}{\color[HTML]{000000} } & \cellcolor[HTML]{C0C0C0}0.0006 & \cellcolor[HTML]{C0C0C0}0.0085 \\ \hline
SP & 0.6257 & \cellcolor[HTML]{C0C0C0}0.0012 & \cellcolor[HTML]{C0C0C0}0.0004 & \cellcolor[HTML]{C0C0C0}0.0004 & \cellcolor[HTML]{C0C0C0}0.0006 & \cellcolor[HTML]{000000} & \cellcolor[HTML]{C0C0C0}0.0004 \\ \hline
Sup & \cellcolor[HTML]{C0C0C0}0.0001 & 0.8077 & \cellcolor[HTML]{C0C0C0}0.0017 & 0.1353 & 0.7609 & \cellcolor[HTML]{C0C0C0}0.0006 & \cellcolor[HTML]{000000} \\ \hline
\end{tabular}}
\end{center}
\caption{\footnotesize{The $p$-values provided by pairwise Wilcoxon signed-rank tests, for 5\%F in the upper right triangle and for 10\%F in the lower left triangle.}}
\label{table:WilcoxonDim5-10}
\end{table}
\begin{table}[H]
\footnotesize
\begin{center}
\scalebox{1}{
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
\textbf{Method} & CT & cBoP & cBoPH & FE & RSP & SP & Sup \\ \hline
CT & \cellcolor[HTML]{000000}{\color[HTML]{000000} } & \cellcolor[HTML]{C0C0C0}0.0002 & \cellcolor[HTML]{C0C0C0}0.0001 & \cellcolor[HTML]{C0C0C0}0.0001 & \cellcolor[HTML]{C0C0C0}0.0001 & \cellcolor[HTML]{FFFFFF}0.5416 & \cellcolor[HTML]{C0C0C0}0.0001 \\ \hline
cBoP & \cellcolor[HTML]{C0C0C0}0.0006 & \cellcolor[HTML]{000000}{\color[HTML]{000000} } & 0.5016 & \cellcolor[HTML]{FFFFFF}0.3258 & \cellcolor[HTML]{FFFFFF}0.5830 & \cellcolor[HTML]{C0C0C0}0.0012 & \cellcolor[HTML]{FFFFFF}0.9032 \\ \hline
cBoPH & \cellcolor[HTML]{C0C0C0}0.0001 & \cellcolor[HTML]{FFFFFF}0.6257 & \cellcolor[HTML]{000000} & \cellcolor[HTML]{FFFFFF}0.3258 & \cellcolor[HTML]{FFFFFF}0.2958 & \cellcolor[HTML]{C0C0C0}0.0004 & \cellcolor[HTML]{C0C0C0}0.0494 \\ \hline
FE & \cellcolor[HTML]{C0C0C0}0.0004 & \cellcolor[HTML]{FFFFFF}0.8552 & \cellcolor[HTML]{FFFFFF}0.0906 & \cellcolor[HTML]{000000} & \cellcolor[HTML]{C0C0C0}0.0327 & \cellcolor[HTML]{C0C0C0}0.0004 & \cellcolor[HTML]{FFFFFF}0.1040 \\ \hline
RSP & \cellcolor[HTML]{C0C0C0}0.0006 & \cellcolor[HTML]{FFFFFF}0.9032 & \cellcolor[HTML]{C0C0C0}0.0009 & \cellcolor[HTML]{C0C0C0}0.0295 & \cellcolor[HTML]{000000}{\color[HTML]{000000} } & \cellcolor[HTML]{C0C0C0}0.0006 & \cellcolor[HTML]{FFFFFF}0.8552 \\ \hline
SP & \cellcolor[HTML]{C0C0C0}0.0006 & \cellcolor[HTML]{C0C0C0}0.0017 & \cellcolor[HTML]{C0C0C0}0.0002 & \cellcolor[HTML]{C0C0C0}0.0002 & \cellcolor[HTML]{C0C0C0}0.0006 & \cellcolor[HTML]{000000} & \cellcolor[HTML]{C0C0C0}0.0004 \\ \hline
Sup & \cellcolor[HTML]{C0C0C0}0.0001 & \cellcolor[HTML]{FFFFFF}0.7148 & \cellcolor[HTML]{FFFFFF}0.1726 & \cellcolor[HTML]{FFFFFF}0.2166 & \cellcolor[HTML]{C0C0C0}0.0134 & \cellcolor[HTML]{C0C0C0}0.0002 & \cellcolor[HTML]{000000} \\ \hline
\end{tabular}}
\end{center}
\caption{\footnotesize{The $p$-values provided by pairwise Wilcoxon signed-rank tests, for 20\%F in the upper right triangle and for Ker in the lower left triangle.}}
\label{table:WilcoxonDim20-Ker}
\end{table}
\noindent
Finally, for information, we also analyze the overall results by concatenating the 56 results obtained across the 14 datasets and the four feature sets for each method (last panel, Figure \ref{fig:NemenyiOverall}). Here, the assumption that the 56 datasets are independent of each other is certainly not fulfilled (they are partially overlapping), so that we cannot draw any statistical conclusion.
However, we can notice that these results confirm the findings of the Borda ranking.
In summary, the first part of the experiments showed that three methods stand out from the others: the cBoPH, the FE, and the Sup. These methods consistently outperform most of the other methods across all the feature sets on the investigated datasets. Among these three, the cBoPH sets itself apart by being the best method across three of the four feature sets and the second in the last one according to the Borda ranking (see Table \ref{table:Borda}).
\subsubsection{Comparison of the impact of the different extracted feature sets}
We now analyze the impact of the feature extraction technique (with a growing number of extracted features) on the classification results, limited for conciseness to the three methods performing best in the first part of the experiments. Nevertheless, we have performed the analysis on all the methods and have drawn similar conclusions, except for the CT. As we already pointed out, the performance of the CT is lower in Ker compared to the other feature sets, which is not the case for the other methods.
\begin{figure}[H]
\centering
\subfigure[cBoPH]{\includegraphics*[width=0.49\textwidth,trim= 0 15 0 0]{cBoPH-Degree.pdf}\label{fig:NemenyicBoPH}}
\subfigure[FE]{\includegraphics*[width=0.49\textwidth,trim= 0 15 0 0]{FE.pdf}\label{fig:NemenyiFE}}
\subfigure[Sup]{\includegraphics*[width=0.49\textwidth,trim= 0 15 0 0]{Sup.pdf}\label{fig:NemenyiSup}}
\caption{\footnotesize{Mean ranks and 95\% Nemenyi confidence intervals for the three best methods across the four feature sets (5\%F, 10\%F, 20\%F, and Ker). Two feature sets are considered as significantly different if their confidence intervals do not overlap. The x-axis unit is the average rank of the feature sets: the higher the rank, the better the results obtained on the feature set. Each time, the best feature set is highlighted.}}
\label{fig:NemenyiAllIntra}
\end{figure}
To analyze the results of the three methods across all the feature sets ($5\%$, $10\%$, $20\%$ and Ker), we follow the same procedure as before. First, we perform Friedman tests to identify whether there are differences between the feature sets for each method. As all the $p$-values of these tests are lower than our threshold $\alpha$ of $0.05$\footnote{The $p$-values of the Friedman tests are respectively $0.0080$ for the cBoPH, $0.0370$ for the FE, $0.0169$ and $0.0071$ for the Sup.}, we pursue our analysis by performing Nemenyi tests \cite{Demsar-2006} and report the results in Figures \ref{fig:NemenyicBoPH} to \ref{fig:NemenyiSup}. From these figures, we can observe that the Ker feature extraction outperforms the 10\%F and the 20\%F feature sets for the Sup and the cBoPH methods. We can also notice that the results obtained by the Ker feature set are significantly superior to those of the 10\%F feature set for the FE method.
To refine our analysis, we also perform multiple Wilcoxon signed-ranks tests for pairwise comparisons \cite{Demsar-2006}. The Wilcoxon tests show that the Ker feature set outperforms (again at the $\alpha = 0.05$ level) the 20\%F feature set for the FE method ($p$-value = 0.0494). Moreover, we can also observe that the 5\%F feature set seems to outperform the 10\%F feature set for the Sup ($p$-value = 0.0494). Nevertheless, the tests do not show any significant difference between the Ker feature set and the 5\%F feature set for the cBoPH ($p$-value = 0.3910), the FE ($p$-value = 0.8552), and the Sup ($p$-value = 0.2412).
To conclude, the second part of the experiments highlighted that the results obtained by extracting 5\% of the dominant eigenvectors are not significantly different from the results obtained by using all the information contained in the kernel, for all the methods except the CT. This finding suggests that the techniques developed in the bag-of-paths framework can perform equally well with only part of the information as with all of it, at least on the investigated datasets and following the stated methodology.
\section{Conclusion}
\label{Sec_Conclusion01}
In this work, we introduced a new algorithm that solves the relative entropy-regularized minimum expected cost flow with capacity constraint problem on a graph. This new formulation of the problem extends the previous ones \cite{Guex-2016,Guex-2019} by integrating flow capacity constraints, which frequently appear in real-world applications \cite{Ahuja-1993}. Therefore, this contribution expands the applications of the previous models to a larger range of real-world applications.
Furthermore, the first part of the experimental comparisons demonstrated that the margin-constrained bag-of-paths surprisal distance and its hitting version are competitive in comparison with other bag-of-paths methods and, consequently, with other state-of-the-art techniques \cite{Francoisse-2017,Guex-2021,Sommer-2016}.
In addition, the second part of the experiments shows that the performance of the two best bag-of-paths methods does not decrease significantly when only partial information about the graph structure is used instead of all the available information, in our semi-supervised classification tasks.
Future work will aim at developing a similar approach for a bag of hitting paths, instead of the regular paths used in the present work. Another interesting avenue would be to establish the link between the present approach and electrical current in the case of an undirected graph.
\section*{Acknowledgements}
\noindent This work was partially supported by the Immediate and the Brufence projects funded by InnovIris (Brussels region), as well as former projects funded by the Walloon region, Belgium. We thank these institutions for giving us the opportunity to conduct both fundamental and applied research.
\begin{center}
\rule{2.5in}{0.01in}
\end{center}
\section{Introduction}
Due to their high density, cores of globular clusters serve as excellent
laboratories for studying stellar interactions and the resulting changes in
stellar populations (\cite{hut92}; \cite{meyl97}). In particular, a small
fraction of clusters known as post core collapse (PCC) clusters are
characterized by extremely high central densities and tend to have color
gradients in the sense of becoming bluer inward. Though several authors have
shown that central depletion of bright red giant branch (RGB) stars is an
important contributor to the color gradient in these clusters
(cf.~\cite{piotto88}; \cite{burgbuat}; \cite{zoppr}, hereafter GWYSB), a
satisfactory explanation of the underlying physical cause of this RGB
depletion has proven elusive (\cite{djor91}). Most globular clusters are
well fit by King models (KMs; \cite{king62}) and very few of these show any
such color gradient. Even the KM clusters that are suspected to have a color
gradient (e.g., NGC~4147) are all quite centrally concentrated so that the
distinction between KM and PCC clusters is unclear in these cases
(\cite{djor92}, 1993).
M30 is a prototypical PCC cluster, with one of the best-studied color
gradients of all globular clusters (\cite{willbahc}; \cite{chunfree};
\cite{cord}; \cite{peterson86}; Piotto et~al. 1988; \cite{burgbuat}; GWYSB).
This gradient has traditionally been explained by a deficiency of red giants
and asymptotic giant branch stars near the cluster center, but Burgarella~\&
Buat (1996) found that this central deficiency does not account for the
observed color gradient, and GWYSB have independently suggested that the
central evolved star deficit only produces one third of the observed color
gradient in M30, and all evolved stellar populations put together produce
less than half of the observed gradient. A possible explanation for the rest
of the color gradient is mass segregation of main sequence stars, since stars
near the main sequence turnoff, with higher mass and bluer color, are
expected to be more centrally concentrated than the fainter, redder, and less
massive stars.
This paper reexamines the radial color profile of M30's starlight using {\it
Hubble Space Telescope\/} ({\it HST\/}) Wide Field Planetary Camera~2 (WFPC2)
images and wider field ground-based images, tests methods for redistributing
the light of bright stars, and addresses the question of whether main
sequence mass segregation produces the necessary color gradient to explain
the observations. Observations of M30's color gradient and methods for
uniformly redistributing the light of bright evolved stars are described in
\S\,2. Calculations of the effects of main sequence mass segregation are
presented in \S\,3, and \S\,4 contains a summary of the main points of the
paper.
\section{Observed Color Gradient}
\subsection{Non-Cluster ``Background'' Brightness}
In order to study the color of the cluster starlight, it is necessary to
determine and correct for the ``background'' level in the {\it HST}/WFPC2
images. The term ``background'' includes all non-cluster contributions to
the total light, including foreground zodiacal light, extragalactic background
light, and even non-astrophysical artifacts such as low level residual cosmic
rays and hot pixels. GWYSB estimated this background to be
0.094~ADU~pixel$^{-1}$ in F439W and 0.054~ADU~pixel$^{-1}$ in F555W based on
visually selected regions of the image more than $1\farcm5$ from the cluster
center and away from resolved stars. This technique could be biased in
either direction: the background would be overestimated if unresolved cluster
starlight contributes significantly to the background over the entire image,
or underestimated if atypically dark regions were selected that
systematically avoid, for example, hot pixels, residual cosmic rays, and/or
background galaxies.
This paper uses a combination of {\it HST}/WFPC2 data and short ground-based
$B$- and $V$-band CCD exposures of M30 provided by Mike Bolte (see
\cite{bolte87}; \cite{Sandquist} for details) to determine the background
brightness in the WFPC2 images, in contrast to the method used by GWYSB. The
ground-based images are used only for background estimation and not for
stellar photometry. They cover a sufficiently large field of view that
unresolved cluster light is unlikely to affect measurements of the background
in the far corners of the CCD frames. The background in these images
includes a dominant atmospheric airglow component in addition to the sources
listed above. The mean $B$- and $V$-band background brightnesses are
measured in selected regions of these images located at projected distances
of $r\sim3'\>$--$\>4\farcm7$ and $r\sim8'\>$--$\>11'$, respectively, from the
cluster center; these background estimates are then subtracted from the
images. Note, the ground-based $B$ data consist of two images, a core
pointing covering most of the WFPC2 field of view, and a southwest pointing
which overlaps the core pointing but not the WFPC2 field. Since the core
image extends only to $r\sim3'$, the southwest image is used for background
estimation. This background estimate is bootstrapped to the core image using
a difference image: core image minus registered, background-subtracted
southwest image. The difference image is nearly free of star-subtraction
residuals for $r\sim90''\>$--$\>180''$; the mean value of regions selected
from this area, avoiding obvious residual artifacts, is used as the
background value for the core image. The $V$-band background measurement is
more straightforward and is based on a single CCD image. The regions used to
measure the mean non-cluster background flux in the southwest $B$ and $V$
images are selected to be away from all resolved stars ($B<16$, $V<19$) as
these are likely to be cluster members according to the \cite{ratbah}
Galactic star count model (see discussion at the end of this section). The
$B$ and $V$ background estimates show no trend with angular separation from
the nearest bright star or from the cluster center, indicating that the
measurements are unaffected by scattered light from bright stars or
unresolved faint stars in M30.
The {\it HST}/WFPC2 PC1 and WF2--WF4 images are combined into a mosaic image
in each of the F439W and F555W bands (GWYSB). These mosaic images are
rotated, rebinned, and gaussian smoothed to match the orientation and
resolution of the corresponding ground-based images ($B$:
$0\farcs6$~pixel$^{-1}$, $\rm FWHM\sim1\farcs5$; $V$:
$0\farcs44$~pixel$^{-1}$, $\rm FWHM\sim1\farcs7$). The background-subtracted
ground-based images are then masked to preserve only the region of overlap
with the WFPC2 mosaic, taking care to mask out the edges of both images where
the smoothing of the latter is imperfect. The ground-based $V$ image covers
all of the WFPC2 F555W image, while the $B$ image covers about 93\% of the
WFPC2 F439W image, excluding only a small corner section of WF2 at
$r\gtrsim1'$ from the cluster center. The resulting {\it HST\/} images
differ from the corresponding ground-based images only in terms of a
photometric scale factor and the background flux in the WFPC2 images.
The WFPC2 background flux in each band is obtained in two different ways.
The first method involves a linear least squares fit to solve simultaneously
for the photometric scale factor and WFPC2 background flux level by doing a
pixel-to-pixel comparison of the $10^4\>$--$\>10^5$~pixels in each pair of
matched WFPC2 and ground-based images. Because of its high stellar surface
density, coupled with inaccuracies in the PSF match, the PC1 CCD is not used
in this comparison. The WFPC2 background flux is estimated to be 0.0083~ADU
in F439W and 0.092~ADU in F555W per $0\farcs0996\times0\farcs0996$ mosaic
image pixel. Unless otherwise mentioned, these least-squares-fit--based
WFPC2 background values are adopted for the rest of the paper. In the
second, more direct, method of background determination, the photometric
scale factor is estimated using aperture photometry on a selection of bright
stars which are known to be isolated on the original, high-resolution {\it
HST\/} images. After applying this multiplicative photometric correction to
the ground-based image, a difference image (WFPC2 minus ground-based) is
constructed whose mean value should be equal to the WFPC2 background flux.
In practice, point spread function (PSF) matching and star subtraction are
not perfect, so it is necessary to estimate the mean background level in
regions away from bright star residuals. A comparison of the two background
estimates and errors in the background flux determination are discussed in
\S\,2.2 and \S\,2.3.
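The structure of the first, least-squares method can be sketched as follows, with synthetic pixel values standing in for the real images (the scale factor and background used to generate the toy data are arbitrary):

```python
import numpy as np

def fit_scale_and_background(ground, wfpc2):
    """Pixel-to-pixel linear fit:  wfpc2 = scale * ground + bkg.

    `ground` is the background-subtracted, PSF-matched ground-based
    image and `wfpc2` the matched-resolution WFPC2 mosaic, both
    flattened over the overlapping pixels.  Since the ground-based
    background has already been removed, the intercept is the WFPC2
    background flux per pixel."""
    A = np.column_stack([ground, np.ones_like(ground)])
    coeffs, _, _, _ = np.linalg.lstsq(A, wfpc2, rcond=None)
    return coeffs[0], coeffs[1]

# Synthetic illustration: true scale 0.04, true background 0.09 ADU
rng = np.random.default_rng(1)
ground = rng.exponential(50.0, size=100_000)
wfpc2 = 0.04 * ground + 0.09 + rng.normal(0.0, 0.05, size=ground.size)
scale, bkg = fit_scale_and_background(ground, wfpc2)
```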
The adopted WFPC2 background levels correspond to sky brightnesses of
$\mu_{\rm sky}(B)=24.84$~mag~arcsec$^{-2}$ and $\mu_{\rm
sky}(V)=21.45$~mag~arcsec$^{-2}$, but it should be noted that these
background brightness levels may be partly instrumental in origin---the Space
Telescope Science Institute pipeline bias subtraction may have been
inaccurate seeing as the data were obtained only a few months after the
installation of the WFPC2 instrument. Spatial variations in sky brightness
due to individual Galactic field stars are expected to be unimportant. For
example, the \cite{ratbah} Galactic star count model prediction in this part
of the sky is less than 1~star~arcmin$^{-2}$ with $V<20$ at the bright end of
the stellar distribution, which corresponds to a brightness level of
$\mu(V)\sim29$~mag~arcsec$^{-2}$. Since Galactic stars tend to outnumber
distant field galaxies for $V\lesssim21$ (cf.~\cite{bgs}), the effect of
stochastic variations in the bright end of the field galaxy population is
likely to be even smaller.
\subsection{Color Profile of the Cluster Starlight}
This study adopts the eight~radial bins defined in GWYSB for the purpose of
studying M30's $B-V$ color profile. The radial bins are chosen such that
each contains approximately the same number of evolved stars with
$V\lesssim19$, a sample dominated in number by faint RGB stars. These bins
are:
(1)~$r<5\farcs00$,
(2)~$5\farcs00\leq{r}<9\farcs80$,
(3)~$9\farcs80\leq{r}<15\farcs41$,
(4)~$15\farcs41\leq{r}<23\farcs2$,
(5)~$23\farcs2\leq{r}<35\farcs8$,
(6)~$35\farcs8\leq{r}<51\arcsec$,
(7)~$51\arcsec\leq{r}<71\arcsec$, and
(8)~$71\arcsec\leq{r}<130\arcsec$.
The characteristic radius adopted for each of these bins, for the purpose of
comparison with model predictions, is the median radial distance of the stars
in that bin (Table~\ref{table2}). As described in GWYSB, the total $B$ and
$V$ flux within each radial bin is derived from direct aperture photometry on
the background-subtracted WFPC2 F439W and F555W mosaic images, respectively,
by differencing successive concentric circular apertures. The contribution
of specific stellar populations, on the other hand, is determined by summing
over the list of detected stars for which photometry has been carried out
using standard techniques (PSF fitting, aperture photometry). Only bright
RGB and horizontal branch (HB) stars are analyzed in this manner for the
purpose of uniform redistribution (\S\,2.3). Detailed image simulations
indicate that the detection rate for these bright stars is practically 100\%
(\cite{yanny}; \cite{gysb96}) so that it is not necessary to apply any
radially-dependent incompleteness correction.
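The aperture-differencing photometry can be sketched with a toy implementation; the zero points, image arrays, and bin radii below are placeholders, not the calibrated WFPC2 values:

```python
import numpy as np

def annulus_colors(img_b, img_v, center, radii, zp_b, zp_v):
    """Mean B-V color in concentric annuli, obtained by differencing
    successive circular-aperture sums.  `radii` lists the outer
    radius (pixels) of each bin; `zp_b`, `zp_v` are photometric
    zero points."""
    yy, xx = np.indices(img_b.shape)
    r = np.hypot(xx - center[0], yy - center[1])
    colors = []
    prev_b = prev_v = 0.0
    for r_out in radii:
        inside = r < r_out
        cum_b, cum_v = img_b[inside].sum(), img_v[inside].sum()
        flux_b, flux_v = cum_b - prev_b, cum_v - prev_v
        mag_b = zp_b - 2.5 * np.log10(flux_b)
        mag_v = zp_v - 2.5 * np.log10(flux_v)
        colors.append(mag_b - mag_v)
        prev_b, prev_v = cum_b, cum_v
    return np.array(colors)
```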
The mean $B-V$ color of M30 within each radial bin (annulus) is plotted in
the top panel of Figure~\ref{cgrad} (long dashed line with bold triangles).
Monte Carlo realizations, based on a synthetic $V$-band luminosity function
and $B-V$ color distribution to mimic the properties of M30's stellar
distribution, show that Poisson fluctuations in the bright RGB and HB
population result in a random error of $\delta(B-V)=0.084$~mag for each
radial bin. There is a significant radial gradient of about $+0.3$~mag from
the cluster center out to $r\sim1'$, followed by an abrupt change of about
$-0.2$~mag from $r=1'$ to $r=1\farcm5$. Specifically, a least squares fit of
a straight line to the $B-V$ vs log($r$) profile results in a best-fit slope
of $0.13\pm0.06$ using all eight radial bins, and $0.20\pm0.07$ using only
radial bins~1--7 ($r\lesssim1'$). These measurements are in keeping with
earlier observations of M30's central blueing trend (cf.~\cite{cord};
\cite{peterson86}; Piotto et~al.\ 1988; \cite{burgbuat}). Note, the revised
WFPC2 background flux estimates (\S\,2.1) result in a color profile that is
somewhat different from that published by GWYSB using the same data set,
especially for $r\gtrsim40''$. The open circles and horizontal bars show the
ground-based color measurements presented by Peterson (1986). Peterson's
measurements in concentric circular apertures are converted to colors in
annuli by differencing successive apertures; note, these measurements are
drawn from complete annuli, while the WFPC2 image geometry forces radial bins
to have incomplete azimuthal coverage beyond $r\gtrsim15''$. Nevertheless,
our {\it HST}-based and Peterson's ground-based measurements of the color
profile are in good agreement. The slight overall difference in $B-V$ color
($\approx0.05$~mag in the mean) between the two data sets could be the result
of systematic differences in photometric calibration.
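The Poisson error estimate can be mimicked with a toy Monte Carlo; the fluxes and colors below are illustrative stand-ins for the synthetic luminosity function and color distribution used in the actual calculation:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_color_error(n_bright=30.0, flux_bright=2.0, flux_faint=60.0,
                   bv_bright=1.2, bv_faint=0.7, n_trials=20_000):
    """Scatter in a radial bin's B-V color from Poisson fluctuations
    in the number of bright evolved stars.  V-band fluxes are in
    arbitrary units; the faint-star light is held fixed since its
    relative Poisson noise is much smaller."""
    n = rng.poisson(n_bright, size=n_trials)
    flux_v = flux_faint + n * flux_bright
    # Convert each component's V flux to B flux via its B-V color
    flux_b = (flux_faint * 10 ** (-0.4 * bv_faint)
              + n * flux_bright * 10 ** (-0.4 * bv_bright))
    bv = -2.5 * np.log10(flux_b / flux_v)
    return bv.std()

err = mc_color_error()  # color scatter from bright-star Poisson noise alone
```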
\begin{figure}
\plotone{Howell.fig1.eps}
\caption{{\it Top panel}:~The average $B-V$ color of
M30's starlight, integrated over radial bins (annuli), as a function of radius
using a combination of {\it HST}/WFPC2 and ground-based images (long dashed
line with filled triangles). The error bars account for Poisson variations
in the number of bright stars in each radial bin. The solid lines illustrate
the effect of $\pm1\sigma$ conservative error estimates in the background
flux level as described in \S\,2.2. Open circles with horizontal bars show
the ground-based color measurements made by Peterson (1986).~~~~
{\it Middle panel}:~Residual $B-V$ color profiles after uniform redistribution
of the light of bright red giant branch and horizontal branch stars using
three different algorithms---~~~
Method~A:~proportional to overall cluster $B+V$ light as in GWYSB (dotted
line);
Method~B:~proportional to faint RGB stars using overall bright to faint RGB
ratio (dot-dashed line);
and Method~C:~proportional to faint RGB stars using bright to faint RGB ratio
in radial bins~5--8 (short dashed line with filled circles).
Method~C is the most accurate redistribution algorithm; see \S\,2.3 for a
complete discussion of the various redistribution schemes. The long dashed
line and filled triangles represent the color profile of the cluster prior to
redistribution (same as upper panel). The original color profile is jagged,
becoming redder by about $+0.3$~mag from the center out to $r\gtrsim1'$,
while redistribution of HB and bright RGB flux results in a smooth profile
that is consistent with no residual color gradient.~~~~
{\it Bottom panel}:~The Method~C redistributed color profile (short dashed
line with bold circles, as in the middle panel), with the effect of
$\pm1\sigma$ conservative background errors on the redistributed color
profile illustrated by the solid lines.
\label{cgrad}}
\end{figure}
The effect of the errors in background flux measurements on the final color
profile is calculated in two ways:
\begin{itemize}
\item[$\bullet$]{The formal error estimate is derived from the least
squares fit to the matched WFPC2 and ground-based pixels in the WF CCDs only:
the error in the mean background flux over this area is transformed to the
error per WFPC2 mosaic pixel by multiplying by $\sqrt{N_{\rm WFPC2}(\rm
3WF)}$, where $N_{\rm WFPC2}(\rm 3WF)\approx3\times800\times800$ is the
number of mosaic pixels in the area of the fit (3WF). In the photometric
scale of the WFPC2 data set, the formal errors are 3.1~ADU in F439W and
2.0~ADU in F555W per mosaic pixel. The effect of WFPC2 background error on
each radial bin is then computed by scaling the pixel-to-pixel error by the
square root of the number of mosaic pixels in that radial bin.}
\item[$\bullet$]{The conservative error estimate is obtained by
performing an independent least squares analysis on each of the three WF
CCDs. The variation in the mean background flux amongst the three WF CCDs is
substantially larger than the formal error in the mean given by the fitting
routine. This variation is likely a result of systematic errors arising from
PSF mismatch and/or bias subtraction differences from CCD to CCD. Radial
bin~7 has an area comparable to that of a single WF CCD, so a conservative
error for bin~7 is derived from the spread of mean background values amongst
the three WF CCDs: the spread in the mean is scaled by the number of pixels
in each fit (the number of matched pixels in each WF CCD) to convert the
error in the mean to an error in the sum, and multiplied by the square root
of the ratio of the area of bin~7 to that of a WF CCD,
$\sqrt{A({\rm radial~bin~7})/A({\rm WF})}$. The background errors for the
other radial bins ($i=1,8$) are estimated in similar fashion by multiplying
by $\sqrt{A({\rm radial~bin}~i)/A({\rm WF})}$. The conservative error
estimate for radial bin~7 is (practically) free of the assumption that the
error in the background flux scales as the square root of the area since
$\sqrt{A({\rm radial~bin~7})/A({\rm WF})}=0.93$. This area ratio factor
departs most strongly from unity for the innermost bins, but then the
background flux (and the error in the background) is a negligible fraction of
the total flux in these bins.}
\end{itemize}
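The area scaling used in the conservative estimate amounts to the following arithmetic (the CCD means and pixel counts here are illustrative, not the measured values):

```python
import numpy as np

def conservative_bin_error(ccd_means, n_pix_per_ccd, n_pix_bin):
    """CCD-to-CCD spread of the mean background, converted to an
    error on the summed flux over one CCD and rescaled by
    sqrt(bin area / CCD area), as described above."""
    spread = np.std(ccd_means, ddof=1)      # spread among the CCD means
    err_sum_ccd = spread * n_pix_per_ccd    # error in the sum over one CCD
    return err_sum_ccd * np.sqrt(n_pix_bin / n_pix_per_ccd)

# e.g. radial bin 7, whose area is ~0.93 that of a WF CCD
err_bin7 = conservative_bin_error([1.0, 1.1, 0.9], 640_000, 595_000)
```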
The error in the measurement of the background level in the ground-based
images is determined from the variance amongst mean values in different blank
regions. The variance is scaled to the area of each radial bin and is added
in quadrature to the WFPC2 background error (formal and conservative) for
that bin. Note, the uncertainty in the background level of the ground-based
images is generally unimportant in relation to the uncertainty in the WFPC2
background; the ground-based background error is only 20\% of the overall
error in the case of the formal $B$ band error and even less in the other
cases. The net $B$- and $V$- band background errors are combined in order to
assess the effect on the color profile of M30's starlight; the $\pm1\sigma$
conservative error in the background is shown by the solid lines in the top
panel of Figure~\ref{cgrad}.
The formal error is about an order of magnitude smaller than the conservative
error; these estimates probably represent lower and upper bounds,
respectively, on the true error. In assuming $\sqrt{A}$ scaling of the
summed background flux, the formal error estimate ignores systematic errors
such as PSF mismatch and possible CCD-to-CCD differences in residual bias
level. The conservative error estimate, on the other hand, is derived from a
comparison of the three~WF CCDs which are known {\it a priori\/} to have
different stellar densities and hence different degrees of systematic error
caused by PSF mismatch of bright stars. The direct measurement of the
background in bright-star--free regions agrees with the background value
derived from the least squares fit to the full area of the three WF CCDs to
well within the conservative error (\S\,2.1).
\subsection{Uniform Redistribution of the Light of Evolved Stars}
M30's nucleus has been shown to be deficient in the most luminous RGB stars.
The ratio of the surface density of bright RGB stars to the cluster surface
brightness (Piotto et~al. 1988) or to the surface density of faint
RGB/subgiant stars (GWYSB) is significantly lower in the central $r<30''$
than further out in the cluster. The central four radial bins ($r<23''$)
contain only 23~bright RGB stars, less than 40\% of the expected number (60).
The expected bright RGB number is derived from the observed number in radial
bins~5--8 ($23''<r<130''$) which are unaffected by the central deficiency of
these stars, normalizing to the faint RGB population; a comparable bright RGB
fraction is estimated from Peterson's (1986) ground-based data in the
$r=1'$--$3'$ region of the cluster. Similarly, the net bright RGB flux in
the inner four radial bins is 40.9\% of the flux in radial bins~5--8. Is
the central deficiency of bright RGB stars entirely responsible for M30's
bluer-inward color gradient? The light from bright RGB stars must be
redistributed in some {\it uniform\/} way in order to determine how much of
the observed color gradient is due to this central bright RGB depletion.
In GWYSB, bright RGB flux was redistributed following the radial dependence
of the total $B+V$ flux, with the relative normalization between the
two~bands chosen such that faint RGB stars (with the same color as the
average cluster color) contribute equally in $B$ and $V$. The result is the
dotted curve in the middle panel of Figure~\ref{cgrad}; we refer to this as
Method~A. While this traditional bright RGB redistribution algorithm is
convenient to implement, it is obviously inaccurate. In the
limiting case where the population being redistributed has a negligible
contribution to the overall cluster light, Method~A produces uniform
redistribution. In general, however, the stellar population of interest
accounts for a finite fraction of the total light, so that the redistributed
color profile contains a ``ghost'' of the original color profile. This is
due to the fact that the normalizing function for redistribution---the
integrated cluster light profile---is itself influenced to some degree by the
original radial distribution of the stars being redistributed. Specifically,
redistributing the bright RGB stars in M30 by Method~A results in a larger
residual bluer-inward color gradient than the more accurate methods defined
below (Fig.~\ref{cgrad}), because bright RGB stars contribute 30\%--50\% of
the total cluster light.
A more reasonable scheme might be to redistribute the bright RGB light in
proportion to the faint RGB stars. For example, this may be achieved by
assigning one eighth of the total bright RGB flux to each radial bin
(dot-dashed curve in middle panel of Fig.~\ref{cgrad}; Method~B), since the
radial bins were defined to contain approximately equal numbers of faint RGB
stars. Note, however, that the total bright RGB flux in the WFPC2 image of
M30 is lower than it would have been in the absence of the central bright RGB
depletion. Making the assumption that the relative abundance of the bright
RGB population is `normal' beyond $r\sim25''$ in M30, the preferred method of
redistribution is to assign one fourth of the bright RGB flux summed over
radial bins~5--8 to each of the 8~radial bins, again roughly in constant
proportion to faint RGB stars (Method~C; short dashed line and solid circles
in middle and bottom panels of Fig.~\ref{cgrad}). Unlike the other
redistribution methods, Method~C actually adds in extra bright RGB flux to
compensate for the central bright RGB deficiency in the cluster. If there is
no residual color gradient, this simply has the effect of producing a
redward color offset with respect to the Method~B profile. However, if a
residual color gradient is present, Method~C produces both a shallower color
gradient and redder overall colors than Method~B because of dilution by the
extra bright RGB light. The two new bright RGB redistribution methods used
in this paper, Methods~B and C, are coupled with redistribution of the flux
of HB stars: each radial bin is assigned one eighth of the total HB flux,
thereby correcting for Poisson noise in the distribution of HB stars and
forcing the HB to faint RGB flux ratio to be constant. Table~\ref{table1}
summarizes the redistribution methods described above and lists both the
formal and conservative error estimates. The solid lines in the bottom panel
of Figure~\ref{cgrad} show the effect of $\pm1\sigma$ conservative background
errors on the Method~C residual profile.
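For concreteness, the three redistribution schemes can be sketched in code; the per-bin flux values below are toy numbers of our own choosing, not the paper's measured fluxes:

```python
# Illustrative sketch of the bright-RGB redistribution methods.
# Eight radial bins, defined to hold roughly equal numbers of faint RGB stars;
# the inner bins are depleted in bright RGB flux (toy numbers).
brgb_flux = [4.0, 3.0, 2.5, 2.0, 5.0, 5.5, 6.0, 6.5]
n_bins = len(brgb_flux)

# Method B: spread the observed total bright RGB flux uniformly over all bins.
method_b = [sum(brgb_flux) / n_bins] * n_bins

# Method C: assume bins 5-8 (indices 4-7) are unaffected by the central
# depletion, and assign one fourth of their summed flux to every bin,
# i.e. add back the flux "missing" from the depleted center.
outer_flux = sum(brgb_flux[4:])
method_c = [outer_flux / 4.0] * n_bins

print(method_b[0], method_c[0])
```

Because Method C normalizes to the undepleted outer bins, it assigns more bright RGB flux per bin than Method B whenever the center is depleted, producing the redward offset described above.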
\begin{figure}
\vskip -2.0truein
\centerline{\psfig{figure=Howell.tab1.ps}}
\end{figure}
\begin{table}\dummytable\label{table1}\end{table}
The motivation for these redistribution schemes is that mass segregation is
{\it not\/} expected to produce radial gradients in either the bright to
faint RGB ratio or the HB to faint RGB ratio, as the characteristic masses of
faint and bright RGB stars differ by only $\Delta{M}\lesssim0.03\,M_\odot$
(\cite{bergbvan}), and the typical masses of HB stars are thought to be
within $0.1\,M_\odot$ of that of faint RGB stars (\cite{ldz90}). Moreover, the
dynamical timescale for mass segregation is about a factor of~4 longer than
the lifetime in the HB evolutionary phase (\cite{djor93a}; \cite{lee90}).
Redistribution of both bright RGB and HB flux results in a residual color
profile that is consistent with no residual color gradient, and quite smooth
compared to the observed color profile (Fig.~\ref{cgrad}). The jagged nature
of the original color gradient results from relatively small numbers of
bright RGB and HB stars dominating the light at any given radius. The
smoothness of the residual, redistributed color profile indicates that the
photometry and subtraction of these bright evolved stars must be accurate.
The slight kink in radial bin~3 may be due to oversubtraction at the level of
1--2\%. A $\chi^2$ test shows that a constant color (no gradient) is an
adequate fit to the Method~C residual color profile, and no significant slope
is found.
The fraction of $B$- and $V$-band light from evolved stars, defined to be
those with $V<18.6$ (bright/faint RGB, HB, subgiants, blue stragglers), is
measured in each radial bin after uniform redistribution of bright RGB and HB
stars using Method~C. These fractions, $f_{\rm ev}(V)$ and $f_{\rm ev}(B)$,
are shown in Figure~\ref{fev} (bold and open squares) and listed in
Table~\ref{table2}. Evolved stars contribute about three quarters of the
total flux at small radii in both bands and their flux fraction falls off
with increasing radius to about~0.6 at $r\gtrsim1'$. The $f_{\rm ev}(V)$
fractions are needed for appropriate normalization of the models in \S\,3.3;
the observed $f_{\rm ev}(V)$ and $f_{\rm ev}(B)$ values are also compared to
fractions derived from the Bergbusch~\& VandenBerg (1992) theoretical
luminosity function (\S\,3.4). Although $f_{\rm ev}$ is calculated after
evolved star redistribution, these values will be referred to as ``observed''
$f_{\rm ev}$ values for the rest of this paper.
\begin{figure}
\plotfiddle{Howell.fig2.eps}{4.0truein}{-90}{64}{64}{-243}{402}
\caption{The ratio of the light of evolved stars
($V<18.6$) to the light of all stars, $f_{\rm ev}$, as a function of
radius in M30. The `observed' $f_{\rm ev}$ values in the $B$ and $V$ bands
represent measurements made after the bright RGB and HB stars have been
redistributed via Method~C (dashed line with open squares and solid line with
filled squares, respectively). Predictions based on the
Bergbusch~\& VandenBerg (1992) 10~Gyr theoretical isochrones are shown as
dashed and solid lines (without symbols) for the $B$ and $V$ bands,
respectively. The redistributed evolved star fraction in both bands
falls progressively below the theoretical prediction with increasing radius
beyond $r\sim10''$.
\label{fev}}
\end{figure}
\section{Effect of Main Sequence Mass Segregation}
\subsection{Dynamical Models}
A Fokker-Planck (FP) dynamical model (\cite{dull}) designed specifically for
the cluster M15, constrained by its measured velocity dispersion, surface
brightness profile, and millisecond pulsar acceleration, is used in this
paper. No FP model has been designed for M30, so the M15 model is adapted to
M30 by applying a small adjustment to its radial scale (B.\ W.\ Murphy 1998,
private communication). Additional procedures used to tailor the model to
M30 are described in \S\,3.3. There are important differences between the
physical parameters of M15 and M30, some of which may even be relevant to the
central depletion of bright red giants---e.g.,~M15's central velocity
dispersion and central density are about twice the corresponding M30 values
(\cite{dubath97}; \cite{prymey93}). However, the FP model is merely used to
characterize mass segregation in M30, recognizing that the model is not
expected to apply to M30 in detail. The effect of varying the nature and
degree of mass segregation is explored in \S\S\,3.3--3.4.
The FP model specifies the number of stars in each of twenty stellar mass
bins as a function of radius. The first five mass bins represent nonluminous
stellar remnants (neutron stars and white dwarfs of various masses) which are
irrelevant for the color gradient computation. The next two mass bins
correspond to HB and RGB stars, respectively, and the last thirteen mass bins
cover main sequence stars of successively lower mass in the range
0.74$\>$--$\>0.11\,M_\odot$. The model mass functions are not strictly
monotonic with stellar mass at all radii, though they display a general
increase in number of stars towards smaller stellar mass, more so at large
radii than at small radii.
As an alternative to the FP dynamical model, a multimass King model (KM) is also
considered. Although PCC clusters are thought to be in a nonequilibrium
state, multimass KMs provide an adequate description of the observed
variation in mass function slope as a function of radius
(cf.~\cite{sosinking}). \cite{bolte89} derived the mass function slope at
$r\approx100''\>$--$\>400''$ in M30 from studies of the luminosity function
of faint main sequence stars, and fit the data with a multimass KM from
\cite{pryor86} using an assumed core radius of $10''$ (see Fig.~3 of Bolte
1989). In the present study, this KM is used to predict the mass function
slope $x$ as a function of radius in the inner part of M30: $x=-2.75$ in
radial bin~1 ($r\sim2''$) and $x=+0.25$ in radial bin~8 ($r\sim80''$). Note
$x$ is defined in the usual way: $dN(M)\propto{M}^{-(1+x)}dM$.
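To illustrate what these slopes imply, the following sketch integrates $dN(M)\propto{M}^{-(1+x)}dM$ to compare the relative numbers of low-mass and near-turnoff stars for the two quoted slopes; the mass ranges used are our own illustrative choices, not values from the paper:

```python
# Sketch: effect of the mass-function slope x in dN(M) ∝ M^{-(1+x)} dM
# on the ratio of low-mass to high-mass star counts.
from math import log

def n_stars(m_lo, m_hi, x):
    """Unnormalized number of stars with mass in [m_lo, m_hi] (solar masses)."""
    a = -x  # integrating M^{-(1+x)} gives M^{-x}/(-x) = M^a / a
    if abs(a) < 1e-12:
        return log(m_hi / m_lo)
    return (m_hi**a - m_lo**a) / a

ratios = {}
for x in (-2.75, 0.25):  # KM slopes quoted for radial bins 1 and 8
    low = n_stars(0.1, 0.3, x)    # low-mass main sequence (toy range)
    high = n_stars(0.6, 0.8, x)   # near-turnoff masses (toy range)
    ratios[x] = low / high
print(ratios)
```

The strongly negative central slope gives far fewer low-mass stars per turnoff star than the positive outer slope, which is the signature of mass segregation that the color-gradient calculations below quantify.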
\subsection{Stellar Evolution Models}
Theoretical stellar isochrones provide the luminosity and color of stars as a
function of their mass. We use a Bergbusch~\& VandenBerg (1992) isochrone
with $\rm [Fe/H]=-2.03$, $\rm [O/Fe]=+0.70$, and $\rm Y=0.235$, consistent
with the estimated abundance of M30 (cf.~\cite{zinnwest}; \cite{carretta};
\cite{Sandquist}). The calculations presented in this paper are based on a
10~Gyr isochrone; the main result, however, is insensitive to choice of
isochrone age. Bergbusch~\& VandenBerg compute evolutionary tracks in
$(M_V,~B-V)$ space as well as stellar luminosity functions for a variety of
power law initial mass function slopes.
Recently, D'Antona and collaborators have calculated stellar evolutionary
tracks using updated input physics (P.~Ventura 1998, private communication).
The models are based on the assumption of gray atmospheres for stellar masses
$M>0.6\,M_\odot$ (\cite{dantona}) and use model atmospheres from \cite{hab99}
and \cite{al00} for $M<0.6\,M_\odot$ (\cite{montalban}). The \cite{castelli}
conversion from effective temperature to $B-V$ color is used in both mass
ranges. Our study uses the 10~Gyr, $\rm [Fe/H]=-2$ isochrone from D'Antona's
group; this is hereafter referred to as the ``D'Antona'' theoretical
isochrone. The lower main sequence portion of the D'Antona isochrone
($M_V>9$) is bluer by up to $\Delta(B-V)=-0.2$~mag than the Bergbusch~\&
VandenBerg (1992) isochrone. Moreover, stars of a given mass are brighter by
as much as $-0.5$~mag in $M_V$ in the former isochrone. These differences
have a negligible impact on the predicted overall $B-V$ color profile
(\S\,3.3; Table~\ref{table2}).
\subsection{Predicted Color Gradient}
The dynamical models and stellar isochrones described above are used to
compute the $B-V$ color of the cluster light as a function of radius. For
computational convenience and by analogy with the M30 data set, the dynamical
models are normalized to a fixed number of red giants in each of the 8~radial
bins. The radii in the dynamical models are compared directly to the
observed (projected) radii in M30; projection effects are ignored since M30's
radial brightness profile is relatively steep. If anything, projection tends
to dilute the model color gradient, whereas, as we show below, the color
gradient predicted by mass segregation models is already very small.
In the case of the FP dynamical model, mass bin~\#7 represents red giants and
contains stars with mass $M\sim0.8\,M_\odot$ covering a wide range of
absolute magnitudes: $M_V\approx-2.7$ (tip of RGB) to $M_V\approx+3.6$ (main
sequence turnoff). What is the appropriate stellar luminosity to attach to
this mass bin? A direct flux-weighted integration of M30's observed evolved
star luminosity function in the $V$ and $B$ bands over the range $M_V<+3.6$
yields a characteristic absolute magnitude of $(M_V)_{\rm RGB}=+1.2$ and a
color of $(B-V)_{\rm RGB}=+0.80$ for the set of red giants, subgiants and
turnoff stars represented by mass bin~\#7. The HB stars in the FP dynamical
model (mass bin~\#6) are tailored specifically to M15, and M30 has a
significantly different HB morphology and HB to RGB ratio. Each HB star is
assigned an absolute magnitude of $(M_V)_{\rm HB}=+0.45$ and a color of
$(B-V)_{\rm HB}=-0.05$, as measured for M30's short blue HB. The total
number of HB stars predicted by the FP model across all 8~radial bins is
scaled so that the overall HB/RGB $V$-band flux ratio matches that observed
in M30 (after correcting for central bright RGB depletion), while preserving
the FP model's run of HB/RGB ratio with radius (monotonic increase outward).
Main sequence stars in mass bins~\#8--\#20 are assigned $B$ and $V$
luminosities based on the Bergbusch~\& VandenBerg (1992) evolutionary tracks.
As a further adaptation of the FP model to M30, the number of evolved stars
(RGB+HB, mass bins~\#6 and \#7) is adjusted with respect to the number of
main sequence stars (sum of mass bins~\#8--\#20). The adjustment is carried
out independently in each radial bin to ensure that the fractional evolved
star $V$-band flux, $f_{\rm ev}(V)$, in the model agrees with the `observed'
value in that radial bin of M30, after uniform redistribution of RGB and HB
stars (\S\,2.3; Table~\ref{table2}). This method of normalizing to the
observed $f_{\rm ev}(V)$ values is preferred over a direct integration of the
model because M30 is observed to have a 30\% higher RGB-to-turnoff ratio than
predicted by models (\cite{Sandquist}; GWYSB). At each radius, the overall
luminosities in the $B$ and $V$ bands are obtained by integrating over stars
in all the model mass bins. This yields the predicted $B-V$ color of M30 as
a function of radius (``FP'' entry in Table~\ref{table2}).
\begin{figure}
\vskip -2.0truein
\centerline{\psfig{figure=Howell.tab2.ps}}
\end{figure}
\begin{table}\dummytable\label{table2}\end{table}
The above technique is repeated for the multimass KM, using power law stellar
mass functions whose slope $x$ increases with radius (\S\,3.1). The
calculation is performed using both sets of evolutionary tracks to attach a
$V$-band luminosity and $B-V$ color to stars of a given mass and the results
are listed in Table~\ref{table2}: ``KM--BV1'' (Bergbusch~\& VandenBerg 1992)
and ``KM--DA1'' (D'Antona). As described above, the $f_{\rm ev}(V)$
normalization constraint (ratio of evolved stars to main sequence stars) is
applied at each radius; HB stars are normalized to the RGB population
overall, and the radial dependence of the HB to RGB ratio is adopted from the
FP model.
To be consistent with the redistribution of HB stars in M30 and following the
reasoning given in \S\,2.3, calculations are also carried out in which the
ratio of model HB to RGB flux is forced to be constant with radius and equal
to the ratio measured in M30 (after bright RGB and HB redistribution). The
calculations are otherwise identical to the ``KM--BV1'' and ``KM--DA1''
calculations described above. The constant HB-to-RGB results are labeled
``KM--BV2'' and ``KM--DA2'' in Table~\ref{table2}.
The effect of relaxing the $f_{\rm ev}(V)$ normalization constraint is also
explored. This is done via direct integration of the luminosity functions
tabulated by Bergbusch~\& VandenBerg (1992) for mass function slopes in the
range $x=0$ to +2.5. The mass functions must be extrapolated to obtain the
negative slopes, $x\gtrsim-3$, appropriate for M30's central regions
(\S\,3.1). The Bergbusch~\& VandenBerg model isochrones do not include the
HB phase of stellar evolution. As in the ``KM--BV2'' calculation, HB flux is
added in constant proportion to RGB stars at all radii. The resulting $B-V$
color gradient is labeled ``KM--BV3'' in Table~\ref{table2}. Note, the
$f_{\rm ev}^{\rm pred}(V)$ values for the Bergbusch~\& VandenBerg isochrone,
with HB stars added in, tend to be higher than the observed values in M30 for
$r\gtrsim10''$ (with bright RGB and HB stars redistributed): the discrepancy
is about 10\% at $r\sim30''$ (Table~\ref{table2}; Fig.~\ref{fev}).
Comparing the results of the six calculations described above
(Table~\ref{table2}), it is clear that choice of dynamical model has little
effect on the color profile: ``FP'' and ``KM--BV1'' model colors differ by
less than $\pm0.01$~mag at all radii. The bluer lower main sequence in the
D'Antona isochrone relative to the Bergbusch~\& VandenBerg (1992) isochrone
(\S\,3.2) results in slightly bluer overall colors, though the color gradient
is nearly the same (``KM--BV1'' vs ``KM--DA1'' and ``KM--BV2'' vs
``KM--DA2''). Lower main sequence stars produce a larger fraction of the
total light at large radii than at small radii, and this results in a
slightly greater difference in color between D'Antona and Bergbusch~\&
VandenBerg calculations at large radii. Comparing the ``KM--BV1'' results to
``KM--BV2'' and ``KM--DA1'' to ``KM--DA2'' shows that the increase in HB/RGB
ratio with increasing radius in the former set of calculations is responsible
for the overall bluer colors at large radii; the constant HB/RGB ratio in the
latter set of calculations yields a marginally redder-outward gradient.
Unlike the other calculations, the ``KM--BV3'' case avoids empirical
normalization of the evolved-to-main sequence flux ratio. Thus, ``KM--BV3''
has a higher evolved star fraction than the other cases since the
Bergbusch~\& VandenBerg $f_{\rm ev}^{\rm pred}(V)$ values are generally
higher than the observed $f_{\rm ev}(V)$ values; this results in slightly
bluer overall colors due to the increased prominence of evolved stars, most
notably bright blue HB stars.
A back-of-the-envelope calculation based on the ``KM--BV3'' model illustrates
why main sequence mass segregation can make no appreciable contribution to
M30's overall color gradient. The mean cluster color of about
$\langle{B-V}\rangle_{\rm M30}=+0.7$, which also happens to be the mean color
of faint RGB stars, is a reasonable value at which to separate the main
sequence into an upper and lower main sequence (UMS and LMS, respectively).
Integrating the LMS, UMS, and evolved star portions of the isochrone shows
that the relative numbers of stars are: $N_{\rm LMS}\approx{N}_{\rm
UMS}\approx8\,N_{\rm ev}$ in radial bin~2; and $N_{\rm LMS}\approx\,4{N}_{\rm
UMS}\approx50\,N_{\rm ev}$ in radial bin~7. Evolved stars contribute about
70\% of the total light in radial bin~2, and 55\% in radial bin~7. The
typical evolved star (RGB/HB) is $\approx100\times$ more luminous in $V$ and
$\approx0.3$~mag redder in $B-V$ than typical UMS stars, and
$\approx1000\times$ more luminous and $\approx0.3$~mag bluer than LMS stars.
Thus, the color shift relative to the faint RGB due to UMS stars is the
product of their fractional flux contribution ($0.7\times0.01\times8$) and
the color difference ($-0.3$~mag): $\Delta(B-V)=-0.017$~mag in radial bin~2,
and $\Delta(B-V)=0.55\times0.01\times(50/4)\times(-0.3)=-0.021$~mag in radial
bin~7. Similar calculations for the LMS yield shifts in $\Delta(B-V)$ of
$+0.002$~mag and $+0.008$~mag, respectively. The estimated net color
gradient due to main sequence mass segregation from this admittedly
oversimplified calculation is
$\Delta(B-V)=-0.021+0.008-(-0.017+0.002)=+0.002$~mag redder outward, well
within the range of $+0.01$~mag to $-0.05$~mag yielded by more precise
calculations, and equal to the value of $+0.002$~mag from the most closely
related such calculation, ``KM--BV3'' (Table~\ref{table2}).
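The arithmetic of this estimate can be checked directly; the snippet below is simply a transcription of the numbers quoted above, nothing more:

```python
# Check of the back-of-the-envelope color shifts quoted in the text.
# Each UMS shift = (evolved-star flux fraction) x (flux of one UMS star
# relative to one evolved star) x (UMS stars per evolved star) x (color offset).

ums_bin2 = 0.7 * 0.01 * 8 * (-0.3)          # radial bin 2
ums_bin7 = 0.55 * 0.01 * (50 / 4) * (-0.3)  # radial bin 7

# Lower main sequence shifts as quoted in the text.
lms_bin2, lms_bin7 = +0.002, +0.008

# Net gradient between bins 7 and 2.
net = (ums_bin7 + lms_bin7) - (ums_bin2 + lms_bin2)
print(round(ums_bin2, 3), round(ums_bin7, 3), round(net, 3))
```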
\subsection{Fraction of Evolved Star Light}
The calculation involving direct integration of the Bergbusch~\& VandenBerg
(1992) stellar mass and luminosity functions, ``KM--BV3'', predicts a larger
fraction of evolved star light for $r\gtrsim10''$ than the $f_{\rm ev}$
values observed in M30 (\S\,2.3). Put another way, a higher fraction of main
sequence light is observed than predicted, especially as one moves away from
the cluster center. This discrepancy is similar in $B$ and $V$. To resolve
the discrepancy in the context of power law mass functions (as the multimass
KMs are parameterized above), the exponent must vary with radius from $x=-5$
in the innermost radial bin ($r\sim2''$) to $x=0$ around $r\sim20''$ to
$x=+1.5$ in the outermost radial bin ($r\gtrsim1'$). However, the stellar
mass function is not necessarily a power law and there may be alternate ways
of resolving the $f_{\rm ev}$ discrepancy. Also, the true discrepancy is
nearly 30\% greater than Figure~\ref{fev} indicates: the ``KM--BV3''
calculation does not take into account M30's 30\% higher ratio of RGB to
turnoff stars relative to model isochrones (\cite{Sandquist}; GWYSB).
As described in \S\,3.1, the mass function slopes used for the KM
calculations in this paper are derived from mass function measurements at
$r\gtrsim2'$ in M30 (Bolte 1989). This model-based inward extrapolation is
sensitive to the choice of cluster core radius. Recent high resolution
studies of the central region have shown that the effective cluster core
radius is significantly smaller than Bolte's assumed value of $r_{\rm
core}=10''$ (\cite{yanny}). This would imply a somewhat stronger radial
dependence of the mass function slope $x$ over the $r\lesssim2'$ region than
we have adopted. The KM--BV3 calculations are repeated using values of
$x=-5$ in the first radial bin to $x=0$ in the outermost radial bin, where
this range of $x$ values is obtained by a simple linear extrapolation of
Bolte's data points. As the models of Pryor et~al.\ (1986) flatten out at
$x\gtrsim-3$ at small radii, this linear extrapolation is an unrealistically
extreme case. The resulting color gradient is $\Delta(B-V)=-0.007$ mag bluer
outward, and, like the other calculations, is consistent with M30's
residual color profile. The more extreme mass segregation invoked to explain
the discrepancy between predicted and observed $f_{\rm ev}$ values ($x=-5$ to
+1.5; see previous paragraph) likewise has little net effect on the color
gradient.
The fractional degree of contamination by faint red foreground field stars
is expected to increase radially outward---e.g.,~the area of radial bin~1 is
79~arcsec$^2$ while that of radial bin~8 is 8440~arcsec$^2$, even though both
bins contain roughly the same number of cluster giants. However, the density
of M30 stars is so high in the central region of the cluster covered by the
WFPC2 image that field star contamination should be negligible even in radial
bin~8. The number of field stars predicted by the Galactic star count model
of \cite{ratbah} is too low by several orders of magnitude to have a
significant effect on M30's color profile.
\section{Conclusions}
The radial $B-V$ color profile of the post core collapse cluster M30 is
measured using {\it Hubble Space Telescope\/} Wide Field Planetary Camera~2
images, along with ground-based images whose wider field of view allows for a
reliable determination of the non-cluster background in the WFPC2 image. M30
displays a significant radial color gradient of $\Delta(B-V)\sim+0.3$~mag,
corresponding to a slope of
$\Delta(B-V)/\Delta\log(r)=0.20\pm0.07$~mag~dex$^{-1}$ from
$r=2''\>$--$\>1'$. An accurate new technique is developed for uniform
redistribution of the light of the brightest cluster stars, which compensates
for stochasticity in their spatial distribution and for the central depletion
of bright red giants. There is no significant residual color gradient after
bright star redistribution, implying that post--main-sequence stars are
entirely responsible for the central color gradient in M30. This is contrary
to the recent results of GWYSB and Burgarella \& Buat (1996), but confirms
the earlier finding of Piotto et~al.\ (1988).
The physical mechanism responsible for the central depletion of bright red
giants (and hence the color gradient) in M30 and in other post--core-collapse
clusters remains a mystery. Direct stellar collisions are too infrequent to
destroy an appreciable fraction of the giants within their short lifetimes.
The lack of a comparable central depletion amongst horizontal branch stars,
the downstream evolutionary products of bright red giants, suggests a `short
circuiting' of the bright red giant phase rather than complete destruction of
such stars; this may bear some relation to the evolution of giants in binary
systems. The reader is referred to \cite{djor91} and GWYSB and to
references therein for a detailed discussion of these issues.
This study also investigates the effect on the color profile of mass
segregation of main sequence stars in the context of cluster dynamical models
and theoretical stellar isochrones. The model calculations show a slight
bluer-outward color gradient when the HB varies as predicted by the
Fokker-Planck dynamical model [$\Delta(B-V)\sim-0.06$ from
$r=20''\>$--$\>80''$], and an even smaller redder-outward gradient if the HB
is held constant with respect to the RGB. In all cases, the color gradient
predicted by mass segregation models is consistent with the data to within
the measurement uncertainties. The predicted fraction of light from evolved
stars using the theoretical mass and luminosity functions of Bergbusch~\&
VandenBerg (1992) suggests that there is a 10\%--30\% achromatic excess of
faint star light at large radii in M30 relative to conventional mass
segregation models.
\bigskip
\bigskip
\acknowledgments
We would like to thank Brian Murphy for providing an updated electronic
version of the Fokker-Planck models described in Dull et~al. (1997), Paolo
Ventura for providing the isochrones computed by D'Antona's group, and Mike
Bolte for providing ground-based images. We are grateful to Sandy Faber and
Mike Bolte for a critical reading of the manuscript, and to the referee,
George Djorgovski, for several insightful comments. PG would like to thank
his collaborators on the M30 WFPC2 study, Zo Webster, Brian Yanny, Don
Schneider, and John Bahcall, for useful discussions about M30's color
gradient in the context of an earlier paper that served as the motivation for
this work. This project was supported in part by an undergraduate McNair
Scholarship from the University of California at Davis (AT).
\clearpage
2,869,038,157,089 | arxiv | \section{Abstract}
We address the problem of how to optimally schedule data packets over an unreliable channel in order to minimize the estimation error of a simple-to-implement remote linear estimator that uses a constant ``Kalman'' gain to track the state of a Gauss Markov process. The remote estimator receives time-stamped data packets which contain noisy observations of the process. They also contain information about the ``quality'' of the sensor\slash source, i.e., the variance of the observation noise that was used to generate the packet. In order to minimize the estimation error, the scheduler needs to use both the age and the quality of each packet while prioritizing transmissions. It is shown that a simple index rule, which calculates the \emph{value of information} (VoI) of each packet and then schedules the packet with the largest current value of VoI, is optimal. The VoI of a packet decreases with its age, and increases with the precision of the source. Thus, we conclude that, for constant filter gains, a policy which minimizes the age of information does not necessarily maximize the estimator performance.
\section{Introduction}\label{sec:intro}
There is a growing interest in providing real-time status updates in order to serve applications that depend upon the freshness of information available to them. For example, timely weather updates, stock prices information, Internet of Things devices, etc. Thus, the problem of ensuring timely status
updates in real-time applications has received much attention~\cite{kaul2012real,DBLP:conf/allerton/KadotaUSM16,DBLP:journals/ton/KadotaSUSM18}. The ``Age of Information'' (AoI) captures the freshness of the information that is available to the end application,
and recent works have designed scheduling policies\slash network controllers with the objective of minimizing the AoI in scenarios where multiple applications share a common network infrastructure. For example, more recently generated data packets are prioritized over older packets~\cite{DBLP:conf/isit/BedewySS17}, or the packet of a user that currently has a larger value of age at the destination is prioritized over a user having smaller age, etc.
However, in many applications, it is not only the freshness of a packet that matters, but also the content that it contains.
We will be interested in applications that generate an estimate of a process after receiving data packets that contain information about this process. Since the packets contain only noisy observations of the process, and not the ``true'' value of the process, it is also important that a packet which contains measurements that have a higher precision\slash low noise, must be prioritized over a packet that has the same age but has a lower precision measurement.
In summary, the quality of a packet depends not only on how ``fresh'' the packet is, i.e., its age or the time since it was generated at the source, but also upon the following two factors:
\begin{enumerate}
\item How much information it contains about the process that is being monitored,
\item How important this information is to the algorithm that is being used to update the status at the destination node.
\end{enumerate}
Regarding 1), we note that the information content in a packet is described by the joint probability distribution of its content and the true state\slash status of the process. For factor 2) above to be crucial, the algorithm deployed by the application must use the information contained in these packets efficiently.
In view of the above discussion, the following question arises naturally: How do we prioritize packets for scheduling in order to optimize the performance of such real-time systems? Should the packets be prioritized according to their age, or the noisiness of the observation contained in them, or a combination of both?
In this work, we provide a concrete answer to this question when the process of interest that is being monitored by an application is Gauss Markov, while the algorithm that is being used to generate an estimate of the process is a
simple-to-implement linear filter
with a constant gain. We show that the optimal scheduler takes the following form: it attaches an index to each packet that is present in its queue, and then schedules the packet having the largest value of the index. This index depends upon the following two factors a) the age of the packet, b) the mutual information between the packet content, and the system status, or equivalently the variance of the noise associated with the measurement.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{figures/voi_fig_1.pdf}
\caption{$M$ sources generate data packets containing information about the process $x(t)$ and send them to the network node. The packets are queued in a single queue at the network node while they wait to be transmitted to the destination. The sources differ in the precision of their measurements of the process $x(t)$.
}
\label{fig:estimator}
\end{figure}
\section{Related Works}
Applications involving real-time systems such as the smart-grid, weather monitoring systems, etc., often require that the destination node, which comprises the end decision maker, has a constant availability of fresh \emph{status updates} about the process of interest. The age-of-information, or simply age, performance metric was introduced in~\cite{kaul2012real,adelberg1995applying,Cho00synchronizinga,DBLP:conf/icde/GolabJS09}, and is an appropriate choice in order to encapsulate the notion of freshness of data. The age at a given time is equal to the time elapsed since the freshest data packet available to the destination was generated. Existing works on age derive age-optimal policies for various kinds of networks, e.g., when multiple sources share a common queue~\cite{2012ISIT-YatesKaul}, broadcast networks~\cite{DBLP:conf/allerton/KadotaUSM16,DBLP:journals/ton/KadotaSUSM18}, under battery energy constraints on the packet transmitter~\cite{DBLP:conf/isit/Yates15}, and for wireless networks~\cite{DBLP:journals/iotj/JiangKZZN19,DBLP:conf/infocom/JiangZN019}. The work~\cite{DBLP:conf/isit/SunPU17} shows that age-optimal policies minimize the estimation error when the process of interest that is being monitored by the destination is a Wiener process.
There are also other performance metrics relevant for real-time systems, which are
also concerned with the timeliness of information delivery to the destination. In applications where the end-to-end packet delays are required to be small, the notion of ``timely throughput'', i.e., the throughput of packets that reach their destination before their deadlines, is useful. The works~\cite{cdcdelay,singh2018throughput} develop decentralized policies that maximize the timely throughput. Another useful metric is that of variance\slash variations in the packet interdelivery times. Minimizing this quantity ensures that the stream of packet deliveries as viewed by the end application looks close to the periodic delivery case when it regularly gets a packet every 1\slash throughput time-slots. See~\cite{guo2018risk,guosingh,singh2015index} for more details. Service smoothness or regularity is a useful concept in applications which require that the end application should not suffer from the problem that there are long durations in which they obtain no packet at all. An important example is video streaming which suffers from outages whenever the video buffer is emptied by non-delivery of packets for a long time~\cite{singh2015optimizing}. A scheduler that maintains service smoothness avoids such service starvation in cases when multiple applications share a common network resource~\cite{singh2015maxweight,rs,atilla1,atilla2}.
One may note that the performance metrics mentioned above try to connect either the freshness or the timeliness of information updates to the performance of the monitoring algorithm that is being deployed by the destination. However, the quality
of the measurements in all data packets need not be the same, that is to say that they differ in the amount of noise in the observations contained in them. Thus, the above performance metrics are unable to appropriately prioritize packet transmissions when the packets are generated by different sources that have varying precision. In this work, we address this problem for the case when the process of interest being monitored is a Gauss Markov process~\cite{kumar}, and the destination node employs a constant-gain linear filter to monitor the process. However, in this work, we do not consider more complicated signaling mechanisms such as the information conveyed by the non-arrival of packets.
\section{Setup}
Throughout, we assume that time has been discretized into time-slots, and is numbered as $t=1,2,\ldots$. We assume that the system operates for $T$ time-slots, so that $t\in \left\{1,2,\ldots,T \right\}$. We denote the segment
$\left\{\ell,\ell+1,\ldots,m\right\}$ of $m-\ell+1$ consecutive time-slots by $[\ell,m]$.
\emph{Process and Network Description}:
We consider the system shown in Fig.~\ref{fig:estimator}. There is a single network node $\mathcal{N}$ that receives packets containing information about the process $x(t),t\in [1,T]$ from $M$ sources. The process $x(t)\in\mathbb{R}$ is a scalar Gauss Markov process, and evolves as follows
\begin{align}\label{eq:single_system}
x(t+1) = ax(t) + w(t), \quad t\in [1,T-1],
\end{align}
where $a$ describes its dynamics, while the noise $w(t)$ is a zero mean Gaussian process with variance $\sigma^2$ that is i.i.d. across time. There are $M$ sources that are equipped with sensors that take observations of the process $x(t)$. Each such source is characterized by the noisiness of its measurements, as described below.
For a packet $\psi$, we let $t_{\psi}$ be the time at which the packet was generated at its source. Thus, the observation contained in $\psi$, denoted $y_{\psi}$, is given by,
\begin{align*}
y_{\psi} = x(t_\psi) +\sigma_{\psi} w_{\psi},
\end{align*}
where $w_{\psi}$ is a zero mean, unit variance Gaussian random variable that is independent across packets, and $\sigma_{\psi}$ is the standard deviation of the observation noise of the source that generated $\psi$. Thus, the quantity $\sigma_{\psi}$ can assume one of $M$ possible values. We let $\tau_{\psi}(t)=t-t_\psi$ denote the age of packet $\psi$ at time $t$.
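To make the setup concrete, the process dynamics~\eqref{eq:single_system} and the packet observation model can be simulated as in the following sketch; the parameter values, function names, and the choice of two sources are illustrative assumptions, not part of the model.

```python
import random

# Sketch of the process x(t+1) = a x(t) + w(t) and of packet observations
# y_psi = x(t_psi) + sigma_psi * w_psi. All parameter values are illustrative.

def simulate_process(a=0.9, sigma=1.0, T=100, seed=0):
    """Return a sample path x(1), ..., x(T) of the Gauss-Markov process."""
    rng = random.Random(seed)
    x = [0.0]                                  # x(1)
    for _ in range(T - 1):
        x.append(a * x[-1] + rng.gauss(0.0, sigma))
    return x

def observe(x, t_psi, sigma_psi, rng):
    """Observation carried by a packet generated at slot t_psi (1-indexed)."""
    return x[t_psi - 1] + sigma_psi * rng.gauss(0.0, 1.0)

rng = random.Random(1)
x = simulate_process()
y_precise = observe(x, 50, 0.1, rng)   # high-precision source
y_noisy = observe(x, 50, 2.0, rng)     # low-precision source, same age
```

Both packets above are generated at the same slot, so any difference in their usefulness to the estimator comes entirely from $\sigma_{\psi}$.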
\emph{Network Node}: The $M$ sources provide their packets to a shared network node $\mathcal{N}$. $\mathcal{N}$ maintains a queue with infinite buffer size for storing the incoming data packets and is connected to the destination node $\mathcal{D}$ via an \emph{unreliable} link $\ell$. We denote by $p$ the reliability of $\ell$, i.e., the probability that an attempted packet transmission is successful. The link $\ell$ has a capacity of $1$ packet\slash time-slot, i.e., at most one packet can be transmitted on it in one time-slot. As shown in Fig.~\ref{fig:estimator}, we assume that upon a successful transmission, the receiver sends an acknowledgement (ACK) to $\mathcal{N}$, so that the network node knows whether or not the packet transmission was successful \emph{after} having attempted the transmission at $t$. If the transmission is successful, it removes the delivered packet from its queue. We assume that the packet arrivals to $\mathcal{N}$ from the $M$ sources are governed by a well-defined stochastic process. We let $Q(t)$ denote the set of packets contained in the queue at time $t$.
\emph{Filtering}: The destination node has a linear filter~\cite{kumar} that produces real-time\slash online estimates $\hat{x}(t)$ of $x(t)$. Let $\psi(t)$ denote the packet that is delivered to the destination at time $t$.
Each such delivered packet is used for updating the estimate $\hat{x}(t)$, as follows:
\begin{small}
\begin{align}\label{eq:kalman_filter}
\hat{x}(t+1) =
\begin{cases}
a \hat{x}(t) + K \left( a^{ \tau_{\psi(t)}(t) }y_{\psi(t)} - a \hat{x}(t) \right), \mbox{ if } \psi(t) \neq \left\{\emptyset \right\}\\
a \hat{x}(t) \mbox{ if } \psi(t) = \left\{\emptyset \right\},
\end{cases}
\end{align}
\end{small}
\noindent where $K$ is a constant scalar gain,
$\tau_{\psi(t)}(t)$ is the age of packet $\psi(t)$, $y_{\psi(t)}$ is the observation value contained in the packet $\psi(t)$, and we let $\left\{ \psi(t) = \left\{\emptyset \right\} \right\}$ denote that no packet was delivered to $\mathcal{D}$ at $t$.
Such an event can occur either because the queue $Q(t)$ was empty, or because the packet that was transmitted was dropped by the link $\ell$.
For simplicity, we use a constant gain $K$ for all time-slots $t$. The updates~\eqref{eq:kalman_filter} correspond to Kalman filtering with constant gain when the filter obtains delayed time-stamped packets, and it can utilize the values of delays in order to minimize the estimation error~\cite{DBLP:journals/automatica/ShiXM09}. See Theorem~3.1 and Fig.~2 of~\cite{DBLP:journals/automatica/ShiXM09} for more details.
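The update~\eqref{eq:kalman_filter} can be transcribed directly; the sketch below assumes illustrative values of $a$ and $K$, and represents a delivered packet as an (age, observation) pair.

```python
# Sketch of the constant-gain update (2): a delivered packet's delayed
# observation is propagated forward by a^age before the correction step;
# when nothing is delivered the filter just predicts.

def filter_update(x_hat, a, K, packet=None):
    """One estimator step. `packet` is (age, y), or None if no delivery."""
    if packet is None:
        return a * x_hat                        # \hat{x}(t+1) = a \hat{x}(t)
    age, y = packet
    propagated = (a ** age) * y                 # a^{tau_psi(t)} y_psi
    return a * x_hat + K * (propagated - a * x_hat)

x_hat = 0.0
x_hat = filter_update(x_hat, a=0.9, K=0.5, packet=None)      # no delivery
x_hat = filter_update(x_hat, a=0.9, K=0.5, packet=(2, 1.0))  # 2-slot-old packet
```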
Let
\begin{align}\label{eq:error_def}
e(t):= x(t)-\hat{x}(t)
\end{align}
denote the estimation error of the filter at time $t$.
We are interested in designing scheduling policies that can be implemented at the network node in order to minimize
the expected value of the cumulative squared estimation error over the time interval $[1,T]$, i.e., the quantity
\begin{align}\label{eq:error_cost}
\mathbb{E}\left(\sum_{t=1}^{T} e^2(t)\right),
\end{align}
where the expectation is taken with respect to the randomness of packet arrivals, observation and process noises $w_\psi,w(t),t\in[1,T]$, and the scheduling policy employed by $\mathcal{N}$. We denote a generic scheduling policy by $\pi$, and the packet chosen for transmission at $t$ by $u(t)$. A policy $\pi$ makes the decision $u(t)$, denoting which packet to transmit, on the basis of its past observations, i.e., $\left\{Q(s)\right\}_{s=1}^{t},\left\{u(s),x(s)\right\}_{s=1}^{t-1}$, where $Q(t)$ is, as before, the set of packets in the queue at time $t$.
At any time during the operation, the scheduler can access the age $\tau_{\psi}$, quality
$\sigma_{\psi}$, or observation $y_{\psi}$ of each packet $\psi$ present in its queue. Thus, the objective can be stated as follows,
\begin{align}\label{eq:cost}
\min_{\pi}J(\pi) : = \mathbb{E}_{\pi}\left(\sum_{t=1}^{T} e^2(t)\right).
\end{align}
\section{An Index Based Policy}
We begin by deriving some preliminary results concerning the estimation error process $e(t)$, and introduce a simple-to-implement policy that is optimal when the scheduler has to make a decision only at the last decision time $t=T-1$. We then use a backward-recursion argument to obtain the scheduling decision for all $t$.
We obtain the following expression for the evolution of the error $e(t)$ at the destination node.
Consider the linear
filter~\eqref{eq:kalman_filter} that performs updates by utilizing the observation $y_{\psi(t)}$ contained in packet $\psi(t)$ that was delivered to it at time $t$. Let $a_c := a(1-K)$ denote the closed-loop estimator
gain of the linear filter. Then, its estimation error $e(t)= x(t)-\hat{x}(t)$ evolves as follows,
\begin{align}\label{eq:error_update_lemma}
e(t+1) = a(t)~e(t) + \left(w_{s,\psi(t)}(t) +w_{p,\psi(t) }(t)\right),
\end{align}
where
\begin{align}
a(t) =
\begin{cases}
a_c \mbox{ if }~\psi(t) \neq \left\{ \emptyset \right\}\notag\\
a \mbox{ if }~\psi(t) = \left\{ \emptyset \right\}.
\end{cases}
\end{align}
Above, for a packet $\psi$, its ``sensing noise'' $w_{s,\psi}(t)$ and ``process noise'' $w_{p,\psi}(t)$ at time $t$ are defined as follows
\vspace*{-1mm}
\begin{small}
\begin{align}\label{def:noises_lemma}
w_{s,\psi}(t) :& =
\begin{cases}
a~K \left(a^{ \tau_{\psi}(t) } w_s(t-\tau_{\psi}(t)) \right) \mbox{ if }~\psi(t) \ne \left\{ \emptyset \right\} \notag\\
0 \mbox{ if }~\psi(t) = \left\{ \emptyset \right\}
\end{cases}
\\
w_{p,\psi }(t) :& =
\begin{cases}
-a~K\left(\sum_{l=1}^{\tau_{\psi}(t) } a^{l} w(t - l)\right) + w(t),\mbox{if }~\psi(t) \neq \left\{ \emptyset \right\} \notag\\
w(t)\mbox{ if }~\psi(t) = \left\{ \emptyset \right\}.
\end{cases}
\end{align}
\end{small}
We note that the sensing noise depends upon the age of the packet as well as the precision of its source, while the process noise is a function of its age only.
\emph{Scheduling Problem for $t=T-1$}: Now consider the case when $Q(T-1) \neq \left\{ \emptyset \right\}$, and the scheduler has to make a decision only during the last time-slot $t=T-1$. It follows from the update equation~\eqref{eq:error_update_lemma} that the cost incurred at time $t=T$ is given by
\begin{align}
\mathbb{E}~e^2(T) &= \left(p a^{2}_c + (1-p) a^{2}\right) e^2(T-1) \notag\\
&\quad + p~\mathbb{E} \left(w_{s,\psi(T-1)}(T-1) +w_{p,\psi(T-1) }(T-1)\right)^2 \notag\\
&\quad + (1-p)\,\sigma^2\notag\\
&= \left(p a^{2}_c + (1-p) a^{2}\right) e^2(T-1) + (1-p)\,\sigma^2\notag \\
&\quad + p \left( \mathbb{E}\left(w_{s,\psi(T-1)}(T-1) \right)^2 + \mathbb{E}\left(w_{p,\psi(T-1)}(T-1) \right)^2 \right) \notag\\
&= \left(p a^{2}_c + (1-p) a^{2}\right) e^2(T-1) + (1-p)\,\sigma^2\notag \\
&\quad + p\, a^2K^2\left( a^{ 2\tau_{\psi}(T-1)} \sigma^2_{s,\psi} + \sigma^2\, \frac{a^{2\tau_{\psi}(T-1)} -1}{a^2 -1}\right) + p\,\sigma^2,
\end{align}
where $\psi$ denotes the packet $\psi(T-1)$ that is scheduled at $t=T-1$, and the second equality follows since the sensing and process noises are independent and zero mean. Thus, it is optimal to schedule the packet $\psi$ with the least value of $\mathbb{E}\left(w_{s,\psi}(T-1) \right)^2 + \mathbb{E}\left(w_{p,\psi}(T-1) \right)^2$.
Thus, we define the following two quantities for a packet $\psi$, which correspond to the sensing and process noise variances associated with it at time $t$,
\begin{align}
W^{2}_{s,\psi}(t) : &= a^2K^2\left(a^{ 2\tau_{\psi}(t)} \sigma^2_{s,\psi} \right), \notag\\
W^{2}_{p,\psi}(t) :&= a^2K^2 \left(\sigma^2 \frac{a^{2\tau_{\psi}(t)} -1}{a^2 -1} \right)+ \sigma^2, \notag \\
W^{2}_{\psi}(t) :&= W^{2}_{s,\psi}(t) + W^{2}_{p,\psi}(t).\label{def:noises_variance_pkt}
\end{align}
For an empty packet, i.e., $\psi = \left\{\emptyset \right\}$, we let
\begin{align}
W^{2}_{s,\psi}(t) : &= 0, \notag\\
W^{2}_{p,\psi }(t) : &= \sigma^2,\label{def:noises_variance_pkt_1}
\end{align}
for all $t \in [1,T]$.
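The index~\eqref{def:noises_variance_pkt} can be transcribed directly as in the following sketch; the parameter values below are illustrative. Note how a fresh packet from a precise source obtains a smaller index than a stale packet from a noisy source.

```python
# Sketch of the per-packet index W^2_psi(t) = W^2_{s,psi}(t) + W^2_{p,psi}(t),
# as a function of the packet age tau and its source's sensing noise variance.

def packet_index(a, K, tau, sigma_s_sq, sigma_sq):
    """Combined sensing + process noise variance of a packet of age tau."""
    geo = (a ** (2 * tau) - 1.0) / (a ** 2 - 1.0)     # (a^{2 tau}-1)/(a^2-1) >= 0
    w_s_sq = a ** 2 * K ** 2 * (a ** (2 * tau)) * sigma_s_sq
    w_p_sq = a ** 2 * K ** 2 * sigma_sq * geo + sigma_sq
    return w_s_sq + w_p_sq

i_fresh_precise = packet_index(a=0.9, K=0.5, tau=1, sigma_s_sq=0.01, sigma_sq=1.0)
i_stale_noisy = packet_index(a=0.9, K=0.5, tau=10, sigma_s_sq=4.0, sigma_sq=1.0)
# i_fresh_precise < i_stale_noisy: the fresher, more precise packet is preferred
```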
It then follows that the policy which chooses the packet with the least value of the combined noise variance $W^{2}_{s,\psi}(T)+W^{2}_{p,\psi}(T)$ is optimal. This motivates the following \emph{Index Policy}.
\begin{definition}(Index Policy)\label{def:index}
The Index Policy assigns an index of $W^2_{\psi}(t)$ to each packet $\psi$ in $Q(t)$, and transmits the packet with the least index. Thus, the scheduling decision $u(t)$ taken at time $t$ is
\begin{align}\label{eq:greedy_scheduler}
u(t) \in \arg\min \left\{ W^2_{\psi} (t+1)\big | \psi \in Q(t) \right\}.
\end{align}
\end{definition}
\begin{lemma}\label{lemma:index_opt}
The Index Policy of Definition~\ref{def:index} is optimal at time $t=T-1$.
\end{lemma}
Next, we derive some properties of the indices $W^2_{\psi}(t)$ which will allow for efficient implementation of the Index Policy.
\begin{definition}
Henceforth we let $Q(t)$ denote the \emph{ordered} set of packets in the queue $Q(t)$, with the ordering between two packets $\psi_1,\psi_2 \in Q(t)$ given as
\begin{align*}
\psi_1 \leq_{Q(t)} \psi_2 \iff W^{2}_{\psi_1}(t)\leq W^{2}_{\psi_2}(t).
\end{align*}
For two packets $\psi_a,\psi_b$ we let $\Delta(\psi_a,\psi_b,t)$ denote the difference in the values of indices of the two packets, i.e.,
\begin{align}\label{eq:delta_def}
\Delta(\psi_a,\psi_b,t) := W^2_{\psi_a}(t) - W^2_{\psi_b}(t).
\end{align}
\end{definition}
\begin{lemma}\label{lemma:index_order}
Consider two packets $\psi_1,\psi_2 \in Q(t_1)\cap Q(t_2)$, i.e., they are present with the scheduler in its queue at times $t_1$ as well as at time $t_2$. We then have that
\begin{align*}
\psi_1 \leq_{Q(t_1) } \psi_2 \iff \psi_1 \leq_{Q(t_2) } \psi_2.
\end{align*}
Moreover,
\begin{align}
\Delta(\psi_1,\psi_2,t) = a^{2t} C(\psi_1,\psi_2),
\end{align}
where $C(\psi_1,\psi_2)$ depends upon the packet generation times, and the sensor precisions, of the packets $\psi_1,\psi_2$.
\end{lemma}
\begin{proof}
To show that the ordering of packets does not change with time, it suffices to prove that the following holds for any two times $t_1,t_2$,
\begin{align*}
W^{2}_{\psi_1}(t_1)\leq W^{2}_{\psi_2}(t_1) \iff W^{2}_{\psi_1}(t_2)\leq W^{2}_{\psi_2}(t_2).
\end{align*}
The index of a packet at time $t$ can be derived as a function of its generation time $t_{\psi}$ as follows,
\begin{align*}
W^{2}_{\psi}(t) &= a^2K^2 \left(a^{ 2( t- t_{\psi} ) } \sigma^2_{s,\psi}+ \sigma^2 \frac{a^{2 (t - t_{\psi})} -1}{a^2 -1}\right) + \sigma^2\\
&= a^2K^2\left( a^{ 2( t- t_{\psi} ) } \left(\sigma^2_{s,\psi}+ \sigma^2 \frac{1}{a^2 -1} \right) - \sigma^2 \frac{1}{a^2 -1} \right) \\
&+ \sigma^2.
\end{align*}
Thus, in order to compare the indices of $\psi_1,\psi_2$ at time $t$, we need to solve the following inequality
\begin{align}
&a^{ 2( t- t_{\psi_1} ) } \left(\sigma^2_{s,\psi_1}+ \sigma^2 \frac{1}{a^2 -1} \right) \notag \\
& \leq a^{ 2( t- t_{\psi_2} ) } \left(\sigma^2_{s,\psi_2}+ \sigma^2 \frac{1}{a^2 -1} \right),\notag\\
\equiv & a^{ 2(- t_{\psi_1} ) } \left(\sigma^2_{s,\psi_1}+ \sigma^2 \frac{1}{a^2 -1} \right) \notag \\
& \leq a^{ 2(- t_{\psi_2} ) } \left(\sigma^2_{s,\psi_2}+ \sigma^2 \frac{1}{a^2 -1} \right).\label{ineq:1}
\end{align}
Since the terms in the above inequality do not depend upon the time $t$, it follows that either $\psi_1 \geq_{Q(t)} \psi_2$ for all $t$, or vice versa. This proves the first claim. The second claim follows from the inequality~\eqref{ineq:1} after letting $C(\psi_1,\psi_2)$ be equal to the difference between the terms on the l.h.s.\ and the r.h.s.
\end{proof}
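The time-invariance of the packet ordering asserted by Lemma~\ref{lemma:index_order} can also be checked numerically; the sketch below compares an old precise packet against a fresher noisy one over a range of times (the parameter values are illustrative).

```python
# Numeric check of the lemma: the ordering between two packets' indices
# W^2_psi(t) does not change as t grows.

def index_at(a, K, t, t_gen, sigma_s_sq, sigma_sq):
    tau = t - t_gen
    geo = (a ** (2 * tau) - 1.0) / (a ** 2 - 1.0)
    return a ** 2 * K ** 2 * ((a ** (2 * tau)) * sigma_s_sq + sigma_sq * geo) + sigma_sq

a, K, s2 = 0.9, 0.5, 1.0
pkt1 = dict(t_gen=1, sigma_s_sq=0.01)   # older but precise
pkt2 = dict(t_gen=5, sigma_s_sq=4.0)    # fresher but noisy
orders = [index_at(a, K, t, sigma_sq=s2, **pkt1) < index_at(a, K, t, sigma_sq=s2, **pkt2)
          for t in range(6, 40)]
assert all(orders) or not any(orders)   # the relative order never flips
```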
It follows from Lemma~\ref{lemma:index_order} that the Index Policy can be implemented as follows. The network node maintains a single ordered queue $Q(t)$ in which it stores packets from \emph{all} the sources, i.e., the queue is \emph{shared} amongst all the sources. Upon receiving a new packet, it calculates its index, and then enqueues it in a location so that the resulting queue is ordered. Such an operation requires $O(\log|Q(t)|)$ computation if the queue is implemented using a binary search tree. Moreover, it does not require the scheduler to compute the indices of all the packets in $Q(t)$.
Moreover, it follows from Lemma~\ref{lemma:index_order} that since the order of packets does not change with $t$, the Index Policy can base its decisions on the values $W^{2}_{\psi}(t)$ instead of $W^{2}_{\psi}(t+1)$. Hence, the Index Policy is equivalently given by
\begin{align}\label{eq:greedy_scheduler_1}
u(t) \in \arg\min \left\{ W^2_{\psi} (t)\big | \psi \in Q(t) \right\}.
\end{align}
In later discussions, we will switch between the two definitions depending on which is more convenient.
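Lemma~\ref{lemma:index_order} suggests the following implementation sketch of the shared ordered queue: each packet is keyed once, at arrival, by the time-invariant quantity $a^{-2t_\psi}\left(\sigma^2_{s,\psi}+\sigma^2/(a^2-1)\right)$ appearing in the proof above, and kept sorted; the head of the queue is then always the packet that the Index Policy would transmit. The class below is an illustrative sketch (Python's `bisect` finds the position in $O(\log|Q(t)|)$ comparisons; a balanced search tree would make the insertion itself logarithmic as well).

```python
import bisect

# Sketch of the shared ordered queue; the values of a, sigma_sq and the
# example packets are illustrative.

class IndexQueue:
    def __init__(self, a, sigma_sq):
        self.a, self.sigma_sq = a, sigma_sq
        self._keys, self._pkts = [], []

    def _key(self, t_gen, sigma_s_sq):
        # time-invariant surrogate for W^2_psi(t); preserves the ordering
        return self.a ** (-2 * t_gen) * (sigma_s_sq + self.sigma_sq / (self.a ** 2 - 1))

    def push(self, t_gen, sigma_s_sq, y):
        k = self._key(t_gen, sigma_s_sq)
        i = bisect.bisect(self._keys, k)       # binary search for the slot
        self._keys.insert(i, k)
        self._pkts.insert(i, (t_gen, sigma_s_sq, y))

    def pop_best(self):
        """Packet with the least index W^2_psi(t)."""
        self._keys.pop(0)
        return self._pkts.pop(0)

q = IndexQueue(a=0.9, sigma_sq=1.0)
q.push(t_gen=1, sigma_s_sq=0.01, y=0.3)    # old but precise
q.push(t_gen=5, sigma_s_sq=4.0, y=0.7)     # fresh but noisy
best = q.pop_best()                        # here the old precise packet wins
```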
\section{Optimality of Index Policy}
The Index Policy of the previous section is very simple to implement, and is optimal when the scheduler has to make a decision only for the last time-slot $t=T-1$. Surprisingly, as will be shown in this section, the policy continues to be optimal when implemented over the entire interval $[1,T]$.
In order to keep the problem tractable, we make the following assumption about the parameter $a$ of the process $x(t)$, and the estimator gain $K$ of the filter~\eqref{eq:kalman_filter}.
\begin{assumption}\label{assum:1}
The quantities $a,a_c,K$ satisfy
\begin{align*}
|a|,|a_c| < 1,
\end{align*}
i.e., the process that is being monitored, as well as the filter, are stable. Additionally, we require the
estimator gain $K$ to be bounded as follows,
\begin{align*}
K\le \frac{1-a^2}{a^2}.
\end{align*}
\end{assumption}
Denote the index policy by $\pi_{idx}$. Let $\tilde{\pi}$ be a policy that uses a different decision rule for time $t=1$, and then employs $\pi_{idx}$ for the remaining time-slots $t\in \left[2,T-1\right]$. Our approach to proving the optimality of $\pi_{idx}$ will be to show that the cost incurred by $\tilde{\pi}$ is more than that of $\pi_{idx}$.
Let $\psi_1$ denote the packet with the least index in $Q(1)$, and let $\psi_2$ be a packet satisfying $\psi_2 >_{Q(1)} \psi_1$. At time $t=1$, $\pi_{idx}$ serves $\psi_1$ while $\tilde{\pi}$ serves $\psi_2$. With probability $1-p$, the transmission at time $t=1$ is unsuccessful, so that the queues at time $t=2$ are the same under both policies. Moreover, since $\pi_{idx}$ and $\tilde{\pi}$ agree on decisions at times $t\in [2,T-1]$, this implies that
with probability $1-p$ their cumulative costs in the interval $[1,T]$ are equal. Hence, without loss of generality, we only consider the case when the first packet transmission is successful.
Define the following stopping times:
\begin{align}\label{eq:stop}
T_{1} :&= \min\left\{ t: \psi(t;\pi_{idx}) = \psi_2 \right\},\notag\\
T_2 :&= \min\left\{ t: \psi(t;\tilde{\pi}) = \psi_1 \right\},
\end{align}
where the notation $\psi(\cdot;\pi )$ explicitly shows the dependence of the scheduling decisions on the policy $\pi$. Thus, $T_1$ is the time when $\pi_{idx}$ delivers $\psi_2$ to the destination.
The following result is easily derived, and is proven in the Appendix.
\begin{lemma}\label{lemma:compare_times}
Consider the stopping times $T_1,T_2$ as defined in~\eqref{eq:stop}. We have
\begin{align*}
T_2 \leq T_1.
\end{align*}
\end{lemma}
Now let $\pi_{idx} \left[T_2, T_{1} -1 \right] $ denote the \emph{ordered} set of packets that were delivered under $\pi_{idx}$ in the time interval $\left[T_2, T_{1} -1 \right]$. Similarly, consider the set $\tilde{\pi}\left[T_{2} +1 , T_{1} \right] $. We now show that these sets are equal.
\begin{lemma}\label{lemma:ordered_sets_eq}
\begin{align*}
\pi_{idx} \left[T_2, T_{1} -1 \right] = \tilde{\pi}\left[T_{2} +1 , T_{1} \right].
\end{align*}
\end{lemma}
\begin{proof}
Since the two policies are the same for times $t>1$, and since both of them have the same set of packets at each time, except possibly packet $\psi_1$ or $\psi_2$, we have
\begin{align}\label{seq:2}
\pi_{idx} \left[2, T_{2} -1 \right] = \tilde{\pi}\left[2 , T_{2} -1 \right].
\end{align}
It then follows from~\eqref{seq:2} that
\begin{align}
\pi_{idx} \left[1, T_{1} \right] &= \psi_1~ \pi_{idx} \left[2, T_{2} -1 \right]~\pi_{idx} \left[T_{2},T_{1} -1 \right]~\psi_2\label{seq:1}\\
\tilde{\pi} \left[1, T_{1} \right] &= \psi_2~ \pi_{idx} \left[2, T_{2} -1 \right]~\psi_1 ~ \tilde{\pi} \left[T_{2}+1,T_{1} \right]. \label{seq:3}
\end{align}
Since the sets $ \pi_{idx} \left[1, T_{1} \right], \tilde{\pi}\left[1 , T_{1} \right]$ consist of the same elements, it follows from~\eqref{seq:2} that the sets $\pi_{idx} \left[T_{2},T_{1} -1 \right]$ and $\tilde{\pi} \left[T_{2}+1,T_{1} \right]$ consist of the same elements. Moreover, since under either of the two policies, the packets in both these sets are served in the same order, this shows that
\begin{align}\label{seq:5}
\pi_{idx} \left[T_{2},T_{1} -1 \right] =\tilde{\pi} \left[T_{2}+1,T_{1} \right].
\end{align}
\end{proof}
Throughout, we let
\begin{align}
\alpha_{s,t}:= \prod_{m=s}^{t} a^2(m).
\end{align}
\begin{theorem}\label{th:index_opt}
Consider the problem of scheduling data packets in order to minimize the estimation error $\mathbb{E}\left( \sum_{t=1}^T e^2(t)\right)$. Let $\tilde{\pi}$ be a policy that differs from $\pi_{idx}$ on scheduling decision for time $t=1$, and then employs $\pi_{idx}$ for the remaining time-slots $t\in \left[2,T-1\right]$. Then we have that
\begin{align*}
\mathbb{E}_{\tilde{\pi}}\left( \sum_{t=1}^T e^2(t)\right) \geq \mathbb{E}_{\pi_{idx}}\left( \sum_{t=1}^T e^2(t)\right).
\end{align*}
\end{theorem}
\begin{proof}
Consider a policy $\tilde{\pi}_{idx}$ that follows $\pi_{idx}$ except that it serves $\psi_2$ at times when $\tilde{\pi}$ serves $\psi_1$. We will show that $\tilde{\pi}_{idx}$ attains a lower cost than $\tilde{\pi}$. The \emph{ordered} sets of packets served by them until the time $T_1$ are given by,
\begin{align}
\pi_{idx} \left[1, T_{1} \right] &= \psi_1~ \pi_{idx} \left[2, T_{2} -1 \right]~\pi_{idx} \left[T_{2},T_{1} -1 \right]~\psi_2\label{seq:idx}\\
\tilde{\pi}_{idx} \left[1, T_{1} \right] &= \psi_1~ \pi_{idx} \left[2, T_{2} -1 \right]~\psi_2~\pi_{idx} \left[T_{2}+1,T_{1} -1 \right].\label{seq:tilde_idx}\\
\tilde{\pi} \left[1, T_{1} \right] &= \psi_2~ \pi_{idx} \left[2, T_{2} -1 \right]~\psi_1 ~ \pi_{idx} \left[T_{2}+1,T_{1} -1 \right].
\end{align}
We note that since all the policies are constructed on the same probability space, and since they are non-idling, the $a(t)$ are the same under all policies. The performance difference between $\tilde{\pi}_{idx},\tilde{\pi}$ is given by
\begin{small}
\begin{align}
&\left(\sum_{t=1}^{T} e^2(t) \right)_{\tilde{\pi}_{idx}}- \left(\sum_{t=1}^{T} e^2(t) \right)_{\tilde{\pi}} \notag \\
& = \Delta(\psi_1,\psi_2,1) \sum_{t\in [1,T_2 -1]}\alpha_{1,t} + \big(\Delta(\psi_2,\psi_1,T_2) \notag \\
& + \Delta(\psi_1,\psi_2,1)\alpha_{1,T_2} \big) \sum_{t\geq T_2}\alpha_{T_2,t}\notag\\
&= \Delta(\psi_1,\psi_2,1) \sum_{t\in [1,T_2 -1]}\alpha_{1,t} + \big( (a^2)^{T_2}\Delta(\psi_2,\psi_1,1) \notag \\
& + \Delta(\psi_1,\psi_2,1)\alpha_{1,T_2} \big) \sum_{t\geq T_2}\alpha_{T_2,t}\notag\\
&= \Delta(\psi_1,\psi_2,1) \left( \sum_{t\in [1,T_2 -1]}\alpha_{1,t} - \left( (a^2)^{T_2} - \alpha_{1,T_2} \right) \sum_{t\geq T_2}\alpha_{T_2,t} \right)\notag\\
&\leq 0,\label{ineq:pol_1}
\end{align}
\end{small}
\noindent where, in the second equality we used Lemma~\ref{lemma:index_order} in order to deduce $\Delta(\psi_2,\psi_1,T_2) = (a^2)^{T_2}\Delta(\psi_2,\psi_1,1)$. To show the last inequality, we prove in Lemma~\ref{lemma:negative_delta} that the term within the braces is positive, and also note that $\Delta(\psi_1,\psi_2,1)<0$.
We will now show that $\pi_{idx}$ has a lower cost than $\tilde{\pi}_{idx}$. Towards this end, construct a class of feasible policies $\tilde{\pi}_{idx,k},k\geq 0$. Denote $\tilde{\pi}_{idx}$ by $\tilde{\pi}_{idx,0}$. $\tilde{\pi}_{idx,k}$ is obtained from $\tilde{\pi}_{idx,k-1}$ as follows. $\tilde{\pi}_{idx,k}$ does not serve $\psi_2$ until $\tilde{\pi}_{idx,k-1}$ has delivered $\psi_2$. However, in case $\pi_{idx}$ attempts $\psi_2$, then $\tilde{\pi}_{idx,k}$ also attempts it.
Using the same techniques that were used for proving the inequality~\eqref{ineq:pol_1}, it can be shown that the costs satisfy
\begin{align*}
\left(\sum_{t=1}^{T} e^2(t) \right)_{\tilde{\pi}_{idx,k}}\leq \left(\sum_{t=1}^{T} e^2(t) \right)_{\tilde{\pi}_{idx,k-1} }, k\in \mathbb{Z}^{+}.
\end{align*}
Chaining these inequalities over $k$, and recalling from~\eqref{ineq:pol_1} that $\tilde{\pi}_{idx}$, equivalently $\tilde{\pi}_{idx,0}$, has a lower cost than $\tilde{\pi}$, we obtain
\begin{align*}
\left(\sum_{t=1}^{T} e^2(t) \right)_{\tilde{\pi}_{idx,k} } \leq \left(\sum_{t=1}^{T} e^2(t) \right)_{\tilde{\pi}_{idx} } ,~k\in \mathbb{Z}^{+}.
\end{align*}
Now, observe that the cost incurred by $\pi_{idx}$ is equal to that incurred by $\tilde{\pi}_{idx,k}$ with a probability equal to
\begin{align*}
\mathbb{P}\left( |\pi_{idx} \left[T_{2}+1,T_{1} -1 \right]|=k | \mathcal{F}_{T_2} \right).
\end{align*}
Since the costs of $\tilde{\pi}_{idx,k}$ are lower than that of $\tilde{\pi}$, we then have that
\begin{align*}
\left(\sum_{t=1}^{T} e^2(t) \right)_{\pi_{idx} } \leq \left(\sum_{t=1}^{T} e^2(t) \right)_{\tilde{\pi}}.
\end{align*}
This completes the proof.
\end{proof}
Consequently, we are now in a position to state the optimality of the Index Policy.
\begin{corollary}
The Index Policy that serves according to
\begin{align}
u(t) \in \arg\min \left\{ W^2_{\psi}(t) \big | \psi \in Q(t) \right\},
\end{align}
minimizes the estimation error $\mathbb{E}\left( \sum_{t=1}^T e^2(t)\right)$ under Assumption~\ref{assum:1}. The optimal cost is a function of the starting state $e(1),Q(1)$, the time-duration that the system operates, and the channel reliability. Henceforth we denote this optimal cost by $J^{\star}(e(1);p;T)$.
\end{corollary}
\begin{proof}
Since the problem~\eqref{eq:cost} is a Markov decision process (MDP)~\cite{puterman}, we can obtain an optimal policy by solving the corresponding dynamic programming recursions. Assume that the Index Policy is optimal when the scheduler has to make decisions for times $t\in [s,T]$, or equivalently that the DP backward induction until time $t=s$ yields $\pi_{idx}$. It then follows from Theorem~\ref{th:index_opt} that the Index Policy continues to be optimal at time $t=s-1$, i.e., the DP recursion for time $t=s-1$ also yields $\pi_{idx}$. Since it was shown in Lemma~\ref{lemma:index_opt} that the Index Policy is optimal at time $t=T-1$, we conclude by backward induction that it is also optimal for all times $t\in [1,T]$.
\end{proof}
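As a sanity check (separate from the proof), the sketch below simulates the single-process system and compares the Index Policy against a precision-blind FIFO rule under identical randomness; all parameter values, and the choice of two sources, are illustrative assumptions.

```python
import random

# Monte-Carlo comparison: Index Policy vs. FIFO (oldest-first) scheduling.

def run(policy, T=200, a=0.9, K=0.5, p=0.8, sigma=1.0, seed=0):
    """Cumulative squared estimation error over T slots."""
    rng = random.Random(seed)
    source_stds = [0.1, 3.0]                 # sensing noise std-devs of 2 sources
    x, x_hat, cost, queue = 0.0, 0.0, 0.0, []
    for t in range(T):
        for s in source_stds:                # one packet per source per slot
            queue.append((t, s, x + s * rng.gauss(0.0, 1.0)))
        if policy == "index":                # least W^2_psi (time-invariant key)
            key = lambda q: a ** (-2 * q[0]) * (q[1] ** 2 + sigma ** 2 / (a ** 2 - 1))
        else:                                # "fifo": oldest packet first
            key = lambda q: q[0]
        pick = min(queue, key=key)
        if rng.random() < p:                 # unreliable link with reliability p
            queue.remove(pick)
            t_gen, s, y = pick
            x_hat = a * x_hat + K * (a ** (t - t_gen) * y - a * x_hat)
        else:
            x_hat = a * x_hat
        x = a * x + rng.gauss(0.0, sigma)
        cost += (x - x_hat) ** 2
    return cost

# same seed => both policies see identical noise and channel realizations
cost_index, cost_fifo = run("index"), run("fifo")
```

In runs of this kind the Index Policy's cumulative error falls well below FIFO's, since FIFO keeps serving increasingly stale packets whose propagated observations carry almost no information about the current state.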
\section{Multiple Processes Sharing a Common Network For Communicating Update Packets}\label{sec:multiple_users}
In the setup considered so far, there is only a single process of interest. However, modern cyber-physical systems (CPS) such as the smart-grid, vehicular networks, etc., comprise multiple complex processes that need to be monitored in real-time by their respective destination nodes.
Thus, we now consider the case when multiple processes share the same network in order to transport their packets containing updates to their respective destination nodes.
As shown in Fig.~\ref{fig:decomposition_1} there are $N$ processes denoted $x_i(t),i\in [1,N]$. The evolution of the $i$-th process $x_i(t)$ is described as
\begin{align}\label{eq:n_dyn}
x_i(t+1) = a_i x_i(t) + w_i(t), t\in [1,T-1],
\end{align}
where the noise process $w_i(t)$ is i.i.d. across time for each of the $N$ processes, and has a Gaussian distribution with zero mean and variance $\sigma^2_{i,p}$. The $w_{i}(t)$ are also independent across different processes.
Each process is monitored by multiple sources. For sake of simplicity, we assume that there are $M$ sources for each of the $N$ processes, though the results can easily be generalized to the case when the numbers of sources differ across processes. The sources share a single network node $\mathcal{N}$ that transmits their packets to their respective destination nodes $\mathcal{D}_{i},i\in [1,N]$ via an unreliable link that has a reliability $p$. $\mathcal{N}$ maintains $N$ separate queues $Q_i(t),i\in [1,N]$, and stores packets from the sources of process $i$ in $Q_i(t)$. The destination node $\mathcal{D}_i$ maintains a Kalman-like linear filter for estimating $x_i(t)$, and its estimate is denoted $\hat{x}_i(t)$, which is updated as in~\eqref{eq:kalman_filter} with $\hat{x}(t)$ replaced by $\hat{x}_i(t)$.
The network node $\mathcal{N}$ makes scheduling decisions $u(t),t\in [1,T-1]$ regarding which packet from the $N$ queues, i.e., $\cup_{i=1}^{N} Q_i(t)$, should be scheduled for transmission. It can utilize its past observations, i.e., $\left\{Q(s)\right\}_{s=1}^{t},\left\{u(s),x_{\ell}(s)\right\}_{s=1}^{t-1}$, where $x_{\ell}(s)$ is the state of the channel which connects $\mathcal{N}$ to the destination node of the queue which was served at time $s$. More concretely, it can utilize a) the age values and the sensor precision information of all the packets it has received so far b) the channel state value for the channels it used for past transmissions.
The objective to be minimized is the cumulative quadratic estimation error of $N$ users during the time period $[1,T]$. Thus,
the \emph{Multi Process Scheduling Problem} is:
\begin{align}\label{eq:problem_n}
&\min_{\pi}J(\pi) :=\mathbb{E}_{\pi}\sum_{i=1}^{N}\left(\sum_{t=1}^{T} e^2_{i}(t)\right),
\end{align}
where $e_i(t)$ is the estimation error for $i$-th process.
It can be shown, using standard results on controlled Markov processes~\cite{kumar,puterman}, that it suffices to consider only Markovian policies, i.e., those policies for which $u(t)$ is a function of $\left\{Q_i(t)\right\}_{i=1}^{N}$. However, to derive the optimal policy, we need to obtain the value function associated with the stochastic control problem~\eqref{eq:problem_n} by solving the Dynamic Programming recursions~\cite{kumar,puterman}. For the problem~\eqref{eq:problem_n} the system state at time $t$ is described by $\left\{Q_i(t)\right\}_{i=1}^{N}$, i.e., the age and sensor precision values for all the packets present in the $N$ queues. The state-space thus grows exponentially with $N$. Since the computational complexity of Dynamic Programming recursions is proportional to the size of state-space, it is impractical to solve for the optimal policy. Thus, we now restrict the optimization problem~\eqref{eq:problem_n} to a particular class of policies, and this allows us to obtain a tractable solution.
Let the scheduling decision at time $t$ be $u(t)=\left(u_1(t),u_2(t)\right)$, where $u_1(t)\in [1,N]$ decides which of the $N$ queues is served, while $u_2(t)$ denotes the packet from $Q_{u_1(t)}(t)$ that is scheduled for transmission. We will restrict ourselves to the class of policies under which the $u_1(t)$'s are i.i.d. across time and do not depend upon system state $\left\{Q_i(t) \right\}_{i\in[1,N]}$. Such a policy is parametrized by a probability vector $\vec{p}:=\left\{ p_i \right\}_{i\in [1,N]}$ and $u_1(t)=i$ w.p. $p_i$ at each time. Thus, $\vec{p}\in \Delta(N)$, where $\Delta(N)$ is the $N$-dimensional simplex. We denote the class of such policies by $\Pi_{i.i.d.}$, i.e.,
\begin{align*}
\Pi_{i.i.d.} := \big\{ \pi: & \mathbb{P}\left(u_1(t) =i | \left\{Q_i(s)\right\}_{i=1}^{N}, s\in [1,t] \right) = p_i, \\
&\forall t, i\in [1,N] \big\}.
\end{align*}
Fix a $\vec{p}\in\Delta(N)$ and let $u_1(t)$ be chosen according to $\vec{p}$. We will now derive a scheduling algorithm that prioritizes packets on the basis of their ages and sensor qualities, thereby making the decisions $u_2(t)$, i.e., we will solve
\begin{align}
&\min_{\left\{u_2(t)\right\}_{t\in [1,T]}}J :=\mathbb{E}\sum_{i=1}^{N}\left(\sum_{t=1}^{T} e^2_{i}(t)\right)\label{eq:problem_n_iid}\\
&\mbox{ s.t. } u_1(t) \sim \vec{p}~\mbox{ i.i.d. } \label{eq:problem_n_iid_const}
\end{align}
Now, since the decisions $u_1(t)$ are chosen i.i.d. according to $\vec{p}$, in order to analyze the combined system comprising the $N$ processes, we can equivalently assume that we are dealing with $N$ systems, where each system is composed of a \emph{single process} that is being estimated remotely. At the beginning of time-slot $t$, the queue for process $i$ obtains access to the channel for transmitting a packet w.p. $p_i$. The packet transmission, if any, is successful w.p. $p$. Thus, the scheduler for the $i$-th process has to make scheduling decisions regarding the packets of \emph{only} process $i$, and can base them on the knowledge of only $Q_i(t)$ rather than $\left\{Q_i(t)\right\}_{i\in [1,N] }$.
This decomposition property is shown in Fig.~\ref{fig:decomposition_2}. Next, we make this decomposition property concrete.
\begin{figure*}[!h]
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{figures/decomposition_1.pdf}
\caption{The original problem requires making decisions for $N$ processes, and suffers from the curse of dimensionality since the computational complexity of obtaining optimal policy grows exponentially with $N$.}
\label{fig:decomposition_1}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{figures/decomposition_2.pdf}
\caption{The problem of Fig.~\ref{fig:decomposition_1} decomposes into $N$ single-process problems, so that its complexity grows linearly with $N$.}
\label{fig:decomposition_2}
\end{subfigure}
\end{figure*}
Consider the problem~\eqref{eq:problem_n_iid}-\eqref{eq:problem_n_iid_const} of optimally scheduling data packets for $N$ processes while restricting to the class $\Pi_{i.i.d.}$. We will obtain a solution to it by solving $N$ \emph{single-process problems}. We define the single-process problems below.
\begin{definition}(Single Process Scheduling Problem)
The optimal scheduling problem for the $i$-th process is described as follows. There is a single Gauss-Markov process $x_i(t)$ that evolves as~\eqref{eq:n_dyn}. $M$ sensors record observations of the process $x_i(t)$, encode them into data packets and send them to the network node $\mathcal{N}_i$. $\mathcal{N}_i$ is connected to the destination node $\mathcal{D}_i$ via a channel of reliability $p_i$. The node $\mathcal{N}_i$ gets the opportunity to transmit if $u_1(t)=i$, where the process $u_1(t)\in [1,N]$ is i.i.d. and distributed according to $\vec{p}$.
The scheduler at $\mathcal{N}_i$ is denoted $\pi_i$, and has to make scheduling decisions $u_2(t)$ on the basis of $Q_i(t),u_1(t)$ in order to minimize the estimation error
\begin{align*}
\mathbb{E}_{\pi_i} \left(\sum_{t=1}^{T} e^2_i(t) \right).
\end{align*}
The Index Policy is optimal for the above problem, and attains a cost equal to $J^{\star}_i(e_i(0);T)$. We denote the index policy applied to solve the above problem for process $i$ by $\pi^{(i)}_{idx}$.
\end{definition}
Next, we obtain a scheduler for the original problem~\eqref{eq:problem_n_iid}-\eqref{eq:problem_n_iid_const} by combining solutions of the \emph{single-process problems}. We denote by $\otimes_{i=1}^{N} \pi^{(i)}_{idx}$ the scheduler which implements the policy $\pi^{(u_1(t))}_{idx}$ at time $t$.
\begin{theorem}\label{lemma:separable_opt_pol}
Consider the problem~\eqref{eq:problem_n_iid}-\eqref{eq:problem_n_iid_const} of scheduling data packets for $N$ processes, where the channel access decisions are made in an i.i.d. manner. The policy $\otimes_{i=1}^{N} \pi^{(i)}_{idx}$ is optimal, and its estimation error cost is equal to $\sum_{i=1}^{N} J^{\star}_i(e_i(0);pp_i;T)$.
\end{theorem}
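To make the combined scheduler $\otimes_{i=1}^{N} \pi^{(i)}_{idx}$ concrete, here is a minimal Python sketch of one time slot: draw $u_1(t)$ i.i.d. from $\vec{p}$, then serve the least-index packet of the selected queue. The packet representation and the index function are illustrative placeholders; the paper's actual index (a function of age and sensor precision) is defined earlier and is not reproduced here.

```python
import random

def combined_scheduler(queues, p_vec, index_fn, rng):
    """One slot of the combined policy: u1(t) ~ p_vec i.i.d., then the
    per-process index policy serves the least-index packet of that queue.

    queues   : list of N lists; each packet is a dict with 'age', 'precision'
    p_vec    : probability vector over the N queues
    index_fn : per-packet index (placeholder for the paper's index)
    """
    # u1(t): which queue gets channel access this slot
    i = rng.choices(range(len(queues)), weights=p_vec, k=1)[0]
    if not queues[i]:
        return i, None  # selected queue is empty; the slot goes unused
    # u2(t): least-index-first rule within the selected queue
    j = min(range(len(queues[i])), key=lambda k: index_fn(queues[i][k]))
    return i, queues[i].pop(j)
```

Since only the selected queue is examined, the per-slot complexity is linear in that queue's length, in line with the decomposition of Fig.~\ref{fig:decomposition_2}.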
We can further decrease the cost~\eqref{eq:problem_n} by optimizing over the $u_1(t)$ process, i.e., the choice of the vector $\vec{p}$. One possibility is to use an ``online learning'' algorithm such as Simultaneous Perturbation Stochastic Approximation (SPSA)~\cite{spall1992multivariate}.
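A minimal SPSA sketch for tuning $\vec{p}$ is given below, assuming access to a (possibly noisy) cost oracle, e.g., the empirically observed estimation error of a simulation run. The gain sequences, the simplex projection, and the cost oracle interface are assumptions of this sketch rather than part of the paper.

```python
import random

def project_to_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = sorted(v, reverse=True)
    cum, theta = 0.0, 0.0
    for i, ui in enumerate(u, 1):
        cum += ui
        t = (1.0 - cum) / i
        if ui + t > 0:
            theta = t  # keep the offset for the largest valid prefix
    return [max(x + theta, 0.0) for x in v]

def spsa_tune(cost, p, iters=200, a0=0.1, c0=0.05, seed=0):
    """Tune the channel-access vector p by SPSA: two cost evaluations at
    simultaneously perturbed points give a gradient estimate, followed by
    a projection back onto the simplex."""
    rng = random.Random(seed)
    for k in range(1, iters + 1):
        ak, ck = a0 / k**0.602, c0 / k**0.101   # standard SPSA gain decay
        delta = [rng.choice((-1.0, 1.0)) for _ in p]
        plus = project_to_simplex([pi + ck * d for pi, d in zip(p, delta)])
        minus = project_to_simplex([pi - ck * d for pi, d in zip(p, delta)])
        g = (cost(plus) - cost(minus)) / (2.0 * ck)
        p = project_to_simplex([pi - ak * g / d for pi, d in zip(p, delta)])
    return p
```

Each iteration needs only two cost evaluations regardless of $N$, which is the main appeal of SPSA in this setting.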
\section{Conclusion}\label{sec:conclusion}
We considered the problem of optimally scheduling packets in order to minimize the estimation error of a remote estimator that relies upon data packets that it receives from multiple sources.
We have shown that the optimal policy prioritizes packets on the basis of their \emph{Value of Information}. The Value of Information contained in a packet depends not only upon its age, i.e., how stale its information content is, but also on the sensor precision, i.e., the quality of the information. Such a policy is easily implementable.
Further investigations include important extensions of the results that we obtained in this work. These include joint optimization of the filtering and the scheduling processes, decentralized scheduling over multihop networks, vector-valued processes, bounding the sub-optimality arising from using a constant gain, etc.
\section{Appendix}
\subsection{Proofs of Lemmas}
\begin{proof}[Proof of Lemma~\ref{lemma:index_opt}]
Since,
\begin{align*}
a^{ \tau_{\psi}(s) }y_{\psi} &= a^{ \tau_{\psi}(s) } x(s - \tau_{\psi}(s) ) + a^{ \tau_{\psi}(s) } w_s(s-\tau_{\psi}(s)),\\
\mbox{ and } ~~x(s) &= a^{ \tau_{\psi}(s) } x(s - \tau_{\psi}(s) ) + \sum_{l=1}^{\tau_{\psi}(s)} a^{l} w(s - l),
\end{align*}
we have
\begin{align} \label{eq:weighted_measure}
a^{ \tau_{\psi}(s) }y_{\psi} = x(s)- \sum_{l=1}^{\tau_{\psi}(s) } a^{l} w(s - l) + a^{ \tau_{\psi}(s) } w_s(s-\tau_{\psi}(s)).
\end{align}
Recall that the filter update equation is given by
\begin{align}
\hat{x}(t+1) &= a \hat{x}(t) + K \left( a^{ \tau_{\psi(t)}(t+1) }y_{\psi(t)} - a \hat{x}(t) \right)\notag\\
&= a \hat{x}(t) + a K \left( a^{ \tau_{\psi(t)}(t) }y_{\psi(t)} - \hat{x}(t) \right).\label{eq:kalman_filter_repeat}
\end{align}
Substituting~\eqref{eq:weighted_measure} into~\eqref{eq:kalman_filter_repeat} with $s=t$, and $\psi = \psi(t)$ we obtain
\begin{small}
\begin{align*}
\hat{x}(t+1) = a \hat{x}(t) + a K\left( x(t) - \hat{x}(t)\right) + w_{s,\psi(t)}(t) + w_{p,\psi(t)}(t) ,
\end{align*}
\end{small}
\noindent where $w_{s,\psi(t)}(t) + w_{p,\psi(t)}(t)$ are the sensing and process noise as defined earlier. Noting that $x(t+1)=a x(t)+w(t)$, the update for the error $e(t)$ is obtained by subtracting $\hat{x}(t+1)$ from $x(t+1)$, and is equal to
\begin{align}\label{eq:error_update}
e(t+1) &= a (1-K)e(t) - \left(w_{s,\psi(t)}(t) + w_{p,\psi(t)}(t)\right) + w(t).
\end{align}
Now define
\begin{align}\label{def:noises_variance}
W^{2}_{s,\psi}(t) : &= a^{ 2\tau_{\psi}(t) } \sigma^2_{s,\psi} \notag\\
W^{2}_{p,\psi }(t) : &= \sigma^2 \frac{a^{2\tau_{\psi}(t)} -1}{a^2 -1} + (1+ K)^2 \sigma^2.
\end{align}
\end{proof}
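As a numerical sanity check of the error recursion~\eqref{eq:error_update}: in the simplified setting where a packet is applied in every slot and all noise terms are lumped into a single zero-mean Gaussian of standard deviation $w$, the error is an AR(1) process with stationary variance $w^2\slash(1-a^2(1-K)^2)$, which a short simulation confirms. The lumped-noise assumption is a simplification made for this sketch only and does not reproduce the paper's exact noise model.

```python
import random

def simulate_error(a, K, w_std, T=200000, seed=1):
    """Simulate e(t+1) = a(1-K) e(t) + n(t), n(t) ~ N(0, w_std^2):
    the error recursion when a measurement is applied in every slot and
    the noise terms are lumped into a single Gaussian term.
    Returns the empirical mean of e(t)^2 over the run."""
    rng = random.Random(seed)
    ac = a * (1.0 - K)
    e, acc = 0.0, 0.0
    for _ in range(T):
        e = ac * e + rng.gauss(0.0, w_std)
        acc += e * e
    return acc / T

# illustrative parameters
a, K, w_std = 0.9, 0.5, 1.0
theory = w_std**2 / (1.0 - (a * (1.0 - K))**2)  # stationary AR(1) variance
empirical = simulate_error(a, K, w_std)
```

This also illustrates why the contraction factor $a(1-K)$, rather than $a$ alone, governs the error whenever packets are being applied.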
\begin{proof}[Proof of Lemma~\ref{lemma:compare_times}]
We note that the index of $\psi_1$ was smaller than that of $\psi_2$ at time $t=1$. Since it was shown in Lemma~\ref{lemma:index_order} that the order of the indices of packets does not change with time, this implies that the index of $\psi_2$ under $\pi_{idx}$ is always greater than the index of $\psi_1$ under $\tilde{\pi}$. Since both $\tilde{\pi}$ and $\pi_{idx}$ implement the least-index-first rule, and since the sets of packets received by the two policies are the same, it follows that $\psi_1$ will be served under $\tilde{\pi}$ earlier than the time at which $\psi_2$ will be served under $\pi_{idx}$. This completes the proof.
\end{proof}
\subsection{Some Useful Results}
\begin{lemma}\label{lemma:negative_delta}
Let the system dynamics $a$ and the estimator gain $K$ satisfy Assumption~\ref{assum:1}, and let $\alpha_{s,t}:= \prod_{m=s}^{t} a^2(m)$, where $a(m)=a(1-K)$ if a packet is applied at time $m$ and $a(m)=a$ otherwise.
Let $\mathcal{T}$ be a stopping time that satisfies $\mathcal{T}>1$. We then have that
\begin{align*}
\left( \sum_{t\in [1,\mathcal{T}]}\alpha_{1,t} - \left( (a^2)^{\mathcal{T}} - \alpha_{1,\mathcal{T}} \right) \sum_{t > \mathcal{T}}\alpha_{\mathcal{T},t} \right) >0.
\end{align*}
\end{lemma}
\begin{proof}
If the time $\mathcal{T}>1$, we have
\begin{align*}
& \sum_{t\in [1,\mathcal{T}]}\alpha_{1,t} - \left( (a^2)^{\mathcal{T}} - \alpha_{1,\mathcal{T}} \right) \sum_{t > \mathcal{T}}\alpha_{\mathcal{T},t} \\
&= \sum_{t\in [1,\mathcal{T}]}\alpha_{1,t} + \alpha_{1,\mathcal{T}} \sum_{t > \mathcal{T}}\alpha_{\mathcal{T},t} - \left( (a^2)^{\mathcal{T}} \right) \sum_{t > \mathcal{T}}\alpha_{\mathcal{T},t} \\
&\geq \sum_{t\in [1,\mathcal{T}]} (a^2_c)^{t} + (a^2_c)^{\mathcal{T}} \sum_{t > \mathcal{T}}\alpha_{\mathcal{T},t} - \left( (a^2)^{\mathcal{T}} \right) \sum_{t > \mathcal{T}}\alpha_{\mathcal{T},t} \\
&\geq \left(\sum_{t\in [1,\mathcal{T}]} a^{2t}_c \right) - \frac{ (a^2)^{\mathcal{T}} - (a^2_c)^{\mathcal{T}} }{1-a^2} \\
&\geq 1 + a^2_c - \frac{ (a^2)^{\mathcal{T}} - (a^2_c)^{\mathcal{T}} }{1-a^2} \\
&\geq 1 + a^2_c - \frac{ (a^2)^{\mathcal{T}} }{1-a^2} \\
&\geq 1 + a^2_c - \frac{ a^2 }{1-a^2},
\end{align*}
where the inequalities follow from the fact that $0<a_c^2<a^2<1$. Hence, it suffices to show that $1 + a^2_c - a^2 \slash (1-a^2)\ge 0$ under Assumption~\ref{assum:1}. Setting $a_c = a(1-K)$, this condition simplifies to
\begin{align*}
1 \ge \frac{a^2 - a^2(1-K)^2 -a^4(1-K)^2 }{1- a^2}.
\end{align*}
A sufficient condition to ensure the above inequality, is to satisfy
\begin{align*}
1 \ge \frac{a^2 - a^2(1-K)^2 }{1- a^2},
\end{align*}
or equivalently,
\begin{align*}
1 \ge \frac{a^2 K(2-K) }{1- a^2}, \quad \mbox{i.e., } K(2-K) \le (1-a^2)\slash a^2.
\end{align*}
Since $1-K>0$, the condition is satisfied if $K<\frac{1-a^2}{a^2}$.
\end{proof}
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
Motivated by applications in crowdsourcing and device-to-device (D2D) communication, we consider an online bipartite matching problem over a bipartite graph $G(L\cup R,E)$, where the right vertex set $R$ is known ahead of time, while left vertices of $L$ arrive sequentially in a random order. Upon the arrival of a vertex $\ell\in L$, its cost $c_{\ell}$ and the utilities of its edges incident to $R$ are revealed, and the problem is to decide, immediately and irrevocably, which vertex of $R$ to match with $\ell$, if at all. If vertex $\ell$ is matched, a payment $p_{\ell}$ is made to vertex $\ell$ that has to be at least as much as its reported cost $c_{\ell}$. A total budget constraint of $B$ is assumed for payments to be made to the matched left vertices. We assume that left vertices are strategic players, which could potentially manipulate the reporting of their true cost, and hence seek a truthful algorithm, i.e., such that no incoming vertex has an incentive to misreport its cost.
The considered problem is a {\it truthful} generalization of the online knapsack problem, where each item has a value and a weight, and the problem is to accept each item instantaneously and irrevocably, so as to maximize the sum of the values of the accepted items subject to a sum-weight constraint. If all left vertices in our problem are truthful, then keeping $p_{\ell} = c_{\ell}$ satisfies the truthfulness constraint, and the considered problem specializes to the knapsack problem with knapsack size of $B$. The best known algorithm for the online knapsack problem is a $10e$-competitive algorithm \cite{babaioff2007knapsack}.
The considered problem is a special case of a reverse auction \cite{myerson1981optimal}, where users submit bids for accomplishing a set of tasks and if selected, expect a payment at least as much as their reported bids. The generality is in not enforcing the one-to-one matching constraint, i.e., one user can do more than one task, or any task can be assigned to more than one user.
Budget feasible mechanisms have been introduced in \cite{Singer10} for reverse auctions (set $R$ is made of a single vertex). For the offline problem, with a matching constraint similar to this paper, a $3$-approximate algorithm has been derived in \cite{goel2013matching} that is one-sided truthful. When the goal is to maximize the number of assigned tasks, \cite{Singer13} provides a $320$-competitive truthful algorithm assuming the secretary-input model.
Some important applications of the considered problem are in crowdsourcing and device-to-device (D2D) communication, that is expected to be part of modern wireless communication standards. For crowdsourcing applications, a platform advertises a set of tasks it wants to accomplish and multiple users successively bid for completing those tasks and expect some payment towards that end. The job of the platform is to select the set of users that maximize its utility under a total budget constraint. If the platform is not careful in selecting its payment strategy, bidders have an incentive to misreport their costs \cite{yang2012crowdsourcing, subramanian2015online}, and therefore we seek a truthful algorithm for matching users to tasks and decide on corresponding payments.
From a D2D communication perspective, consider a single base station (BS) and a set of users $U$ that are connected to that BS. Let $R\subseteq U$ be the set of active users, while the remaining $L= U\backslash R$ are inactive but can potentially help users of set $R$ to relay their communication to/from the BS, as envisaged in future networks. For any inactive user $\ell \in L$, the set of active users it can help is $R(\ell) \subseteq R$ and at any time it can only help any one user from $R(\ell)$. Since relaying requires $\ell$ to spend some of its resources, e.g., battery, it is natural to assume that $\ell$ expects some payment for its help, and submits a corresponding bid at the time of advertising its inactive state and ability to help. The job of the BS is to allocate at most one helper (matching) from set $L$ for each user in set $R$, and decide the corresponding payment that is at least as much as the bid of that user in $L$.
Clearly, there is incentive for users in set $L$ to misreport their bids in order to extract more payments and this motivates the need to seek truthful matching algorithm.
The main contribution of this paper is a $24\beta$-competitive randomized matching algorithm that is truthful and satisfies the payment budget constraint, where $\beta$ is the ratio of the largest to the smallest utility of any edge. To keep the problem non-degenerate, similar to other prior related works on online algorithms \cite{babaioff2007knapsack, KorulaPal}, we consider a secretarial input model, where the order of arrival of left vertices is uniformly random, but their utilities and bids can be arbitrary or even adversarial. Under this model, we modify the offline algorithm \cite{goel2013matching} and then use the sample and price algorithm \cite{babaioff2007knapsack} to make the algorithm online, where the novelty is in terms of defining the price for each right vertex and the corresponding payment for any left vertex that is matched to the right vertex. To contrast with the online knapsack problem, which we noted is a special case where truthfulness is guaranteed, the price of truthfulness is $24\beta/10e $ in terms of competitiveness.
\section{Online Truthful Budgeted \\ Matching}
Let $G = (L \cup R, E)$ be a bipartite graph with left vertices $L$, and right vertices $R$, and edge set $E$.
The utility or weight of each edge $e$ is $u(e)$. For a set of edges $E'$, its utility is $u(E') = \sum_{e\in E'} u(e)$.
Each left vertex $\ell$ has an associated cost or bid $c_{\ell}$, that does not depend on the right vertex $r$. We denote $c(e)=c_{\ell}$ as the cost/bid of edge $e=(\ell,r)$.
If a left vertex $\ell$ is matched to a right vertex $r$, then a minimum payment of $c_{\ell}$ has to be made to user $\ell$, with an overall budget constraint of $B$. Let $u_{max} = \max_e u(e)$ and $u_{min} = \min_e u(e)$. We assume that $\frac{u_{max}}{u_{min}} \le \beta$. Moreover, we also assume the typical large market assumption \cite{goel2013matching}, i.e., $\frac{u_{max}}{u^*}$ is small, where $u^*$ is the optimal sum-utility of the matching under the budget constraint. Thus, no single user can influence the outcome significantly.
\begin{rem}As shown in \cite{yang2012crowdsourcing}, if bids of left vertices are used as payments, there is incentive for left vertices to misreport their bids, and consequently the mechanism is not truthful or incentive compatible. Thus, the payment strategy is non-trivial.
\end{rem}
In this work, we consider the online problem, where the set $R$ of right vertices is known ahead of time, while the left vertices of set $L$ arrive sequentially in time and reveal their edge set (and utilities) incident on the right vertices together with their bid.
On each left vertex arrival, it has to be matched irrevocably to any one of the right vertices that are unmatched at that time, if at all. If a left vertex is matched, then the payment to be made to it is also decided at the time of its arrival and cannot be changed later.
To keep the problem non-degenerate in terms of competitive ratio, we assume that the order of arrival of left vertices is uniformly random, that is each permutation of left vertices is equally likely. As a result, the objective is to find a truthful algorithm with constant expected competitive ratio under the payment budget constraint of $B$. The weights (utilities), however, are allowed to be selected by an adversary.
Before considering the online scenario, we first discuss the offline (all left vertices and their edges are available non-causally) case, and define the optimal fractional solution (matching under budget constraint) to be $\mathsf{OPT}(B)$.
Note that for defining $\mathsf{OPT}$, we are using raw bids as payments and truthfulness is not required.
We note an important property for $\mathsf{OPT}(B)$ whose proof is immediate.
\begin{lemma}\label{lem:optscaling} For $\alpha \le 1$,
$u(\mathsf{OPT}(B)) \le \frac{1}{\alpha}u(\mathsf{OPT}(\alpha B))$.
\end{lemma}
For the offline scenario, we propose a \textsc{Threshold} algorithm that is a modified version of the \textsc{UniformMechanism} algorithm of \cite{goel2013matching}, where the \textsc{Greedy} subroutine is the usual greedy matching algorithm for a bipartite graph.
We define for each edge $e = (\ell, r)$ a buck per bang $b(e) = \frac{c(e)}{u(e)}$ that represents the cost per unit utility. For any $\gamma$, let $G(\gamma)$ be the graph obtained by removing all edges $e \in E(G)$ with buck per bang $b(e) > \gamma$.
\begin{algorithm}
\caption{\textsc{Threshold}}\label{alg:unimech}
\begin{algorithmic}[1]
\State {\bf Input:} Graph $G$, Budget $B$, $m=|E(G)|$
\State {\bf Output:} Matching $\mathsf{M}$, Threshold $\gamma_B$
\State$\mathcal{A}(G) = \{\gamma : \sum_{e\in \mathsf{M}}\gamma u(e)\leq B,\; \mathsf{M}=\mbox{\textsc{Greedy}}(G(\gamma))\}$
\State $\gamma_B=\max\{\gamma: \gamma \in \mathcal{A}(G) \}$
\State Accept all users in $\mathsf{M} = \mbox{\textsc{Greedy}}(G(\gamma_B))$
\end{algorithmic}
\end{algorithm}
The main idea behind the \textsc{Threshold} algorithm is to find the largest threshold $\gamma_B$ (subject to the budget constraint) such that all edges whose buck per bang exceeds this threshold are not considered for matching, while maintaining a (greedy) matching with large enough sum-utility.
\begin{lemma}\label{lem:umguarantee}Let $\mathsf{M}$ be the matching output by the \textsc{Threshold} algorithm with input graph $G$ under budget constraint $B$. Then $u(\mathsf{M}) \ge \frac{u(\mathsf{OPT}(B))}{3}$.
\end{lemma}
\begin{proof}
For convenience we suppress the dependence on $B$ whenever it is not essential. Decompose the optimal fractional matching solution $\mathsf{OPT} = \mathsf{OPT}^{+} \cup \mathsf{OPT}^{-}$, where $\mathsf{OPT}^{+}$ contains edges of $\mathsf{OPT}$ that have $b(e) > \gamma_B$, and $\mathsf{OPT}^{-}$ contains edges of $\mathsf{OPT}$ that have $b(e) \le \gamma_B$. Similarly, let $\mathsf{OPT}(\gamma_B)$ be the optimal fractional matching on the subgraph $G(\gamma_B)\subseteq G$, where $\gamma_B$ is the output threshold from the \textsc{Threshold} algorithm with graph $G$. By definition of optimal matching, $u(\mathsf{OPT}^{-}) \le u(\mathsf{OPT}(\gamma_B))$. Moreover, for $\mathsf{M}$, the output matching from the \textsc{Threshold} algorithm with graph $G$, we have
$u(\mathsf{M}) \ge \frac{u(\mathsf{OPT}(\gamma_B))}{2}$, since $\mathsf{M}$ is a greedy matching on $G(\gamma)$ (subgraph with all edges having $b(e) \le \gamma_B$). Therefore,
$u(\mathsf{M}) \ge \frac{u(\mathsf{OPT}^{-})}{2}$.
All edges $e = (\ell ,r) \in\mathsf{OPT}^{+}$, have $b(e) = \frac{c_{\ell}}{u(e)} > \gamma_B$. Thus,
$u(\mathsf{OPT}^{+}) = \sum_{e =(\ell, r) \in \mathsf{OPT}^{+}} x(\ell) u(e) < \frac{\sum_{e = (\ell ,r) \in \mathsf{OPT}^{+}}x(\ell) c_{\ell}}{\gamma_B}$, where $x(\ell)$ are fractional weights in the optimal solution.
Moreover, the total budget constraint of $B$ ($\sum_{e = (\ell ,r) \in \mathsf{OPT}} x(\ell)c_{\ell} \le B$) implies that $u(\mathsf{OPT}^{+}) < \frac{B}{\gamma_B}$. Assuming that the budget constraint is tight with the \textsc{Threshold} algorithm ($\sum_{e\in \mathsf{M}}\gamma_B u(e)= B$), $u(\mathsf{M}) = \frac{B}{\gamma_B}$. Therefore,
$u(\mathsf{OPT}^{+}) < u(\mathsf{M})$. Combining this with $u(\mathsf{M}) \ge \frac{u(\mathsf{OPT}^{-})}{2}$, we have $u(\mathsf{OPT}) \le 3 u(\mathsf{M})$ as required.
If the budget constraint is not tight with the \textsc{Threshold} algorithm, then under our assumption that the maximum utility of any edge is $u_{max}$, and by the definition of \textsc{Threshold} algorithm that finds the largest feasible $\gamma$, the leftover budget $B-\sum_{e\in \mathsf{M}}\gamma_B u(e)$ is no more than $\gamma_B u_{max}$, and similar argument gives us that $u(\mathsf{OPT}) \le (3+o(1)) u(\mathsf{M})$.
\end{proof}
We next state a critical lemma for analyzing the proposed online version of \textsc{Threshold}, $\mathsf{ON}$.
\begin{lemma}\label{lem:monotonegreedymatching}
Let $G = (L\cup R, E)$ and $F\subseteq G$, such that $F = (L\backslash L'\cup R, E')$, and the edge set $E'$ is such that all edges incident on left vertices in set $L'$ are removed simultaneously, while all edges incident on $L\backslash L'$ are retained as it is. Then $$u(\textsc{Greedy}(G)) \ge u(\textsc{Greedy}(F)).$$ Moreover $$u(\textsc{Greedy}(G(\gamma_1))) \ge u(\textsc{Greedy}(G(\gamma_2)))$$ for $\gamma_1 \ge \gamma_2$, and $u(\textsc{Greedy}(G(\gamma))) \ge u(\textsc{Greedy}(F(\gamma)))$.
\end{lemma}
\begin{proof} For an arbitrary subgraph $F \subseteq G$, $u(\text{\textsc{Greedy}}(G))$ may or may not be larger than $u(\text{\textsc{Greedy}}(F))$. However, when a left vertex is removed (by deleting all edges incident to it), the proof of the first claim follows a standard argument, by showing that the weight of the edge incident on any right vertex in $\text{\textsc{Greedy}}(G)$ is at least as much as in $\text{\textsc{Greedy}}(F)$. The detailed proof is omitted for lack of space.
For the second and third claim, note that an edge $e$ incident on left vertex $\ell$ is removed in $G(\gamma)$ compared to $G$, if
$b(e) > \gamma$ or equivalently if $u(e) < \frac{c_{\ell}}{\gamma}$.
Recall that the cost of any edge only depends on the index of its left vertex. Hence, if edge $e =(\ell, r)$ is removed from $G$ to obtain $G(\gamma)$,
then all the edges $e'$ incident on $\ell$ with utility $u(e') < u(e) $ are also removed. So essentially, edges are removed monotonically from $G$ to produce $G(\gamma)$. So the proofs for the second and third claim follow similarly to the first.
\end{proof}
The importance of Lemma \ref{lem:monotonegreedymatching} is in showing that \textsc{Threshold} is solvable in polynomial time and that the threshold $\gamma_B$ is monotone. We establish these two facts in the next two lemmas.
\begin{lemma}\label{lem:polytimecomplexity}
\textsc{Threshold} is solvable in polynomial time.
\end{lemma}
Algorithm \textsc{Threshold} involves finding a maximum in Step 4. We will show that one can use bisection to solve this maximization. We note that if $u(\textsc{Greedy}(G(\gamma)))$ were not monotone in $\gamma$, finding this maximum would be non-trivial.
\begin{proof} From the definition of Algorithm \textsc{Threshold}, it is clear that if $\gamma \in \mathcal{A}(G)$, then $\gamma_B \ge \gamma$. Hence the key step is to show that if $\gamma \notin \mathcal{A}(G)$, then $\gamma_B < \gamma$, which follows from the second claim of Lemma \ref{lem:monotonegreedymatching}, that $u(\textsc{Greedy}(G(\gamma_1))) \ge u(\textsc{Greedy}(G(\gamma_2)))$ for $\gamma_1 \ge \gamma_2$. Therefore, if $\gamma \notin \mathcal{A}(G)$, then $\gamma' \notin \mathcal{A}(G)$ for any $\gamma' > \gamma$. Hence we can use bisection to find the maximum.
\end{proof}
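A Python sketch of \textsc{Threshold} via this bisection argument is given below. Edges are represented as illustrative `(left, right, utility, cost)` tuples; the monotonicity of the feasibility check in $\gamma$ is what makes the bisection valid.

```python
def greedy_matching(edges):
    """Greedy matching: scan edges in decreasing utility, keep an edge if
    both of its endpoints are still unmatched."""
    used_l, used_r, m = set(), set(), []
    for (l, r, u, c) in sorted(edges, key=lambda e: -e[2]):
        if l not in used_l and r not in used_r:
            used_l.add(l); used_r.add(r); m.append((l, r, u, c))
    return m

def threshold(edges, B, tol=1e-9):
    """Bisection for the largest gamma with gamma * u(Greedy(G(gamma))) <= B;
    feasibility is monotone in gamma, so once it fails it fails forever."""
    def feasible(g):
        # keep edges with b(e) = c/u <= gamma, i.e., c <= g * u
        pruned = [e for e in edges if e[3] <= g * e[2]]
        return g * sum(e[2] for e in greedy_matching(pruned)) <= B
    # upper end of the bracket is guaranteed infeasible for nonempty graphs
    lo = 0.0
    hi = max(c / u for (_, _, u, c) in edges) + B / min(u for (_, _, u, _) in edges)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    pruned = [e for e in edges if e[3] <= lo * e[2]]
    return greedy_matching(pruned), lo
```

This is a sketch, not the paper's implementation: it returns the greedy matching on $G(\gamma_B)$ together with the threshold, and each feasibility test costs one greedy matching computation.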
The main and critical difference between the \textsc{Threshold} and \textsc{UniformMechanism} \cite{goel2013matching} algorithm is the maximization step that ensures the following monotonicity property on $\gamma_B$ (Lemma \ref{lem:monotoneGamma}) that allows us to make the algorithm {\it online} using the \textsc{SampleandPrice} algorithm \cite{KorulaPal}.
\begin{lemma}\label{lem:monotoneGamma}
Let $G = (L\cup R, E)$ and $F = (L\backslash L'\cup R, E')$, where the edge set $E'$ is such that all edges incident on left vertices in set $L'$ are removed simultaneously, while all edges incident on $L\backslash L'$ are retained as it is. Then
$\gamma_B(F) \ge \gamma_B(G)$.
\end{lemma}
\begin{proof}
From Lemma \ref{lem:monotonegreedymatching}, \begin{equation}\label{eq:FgGg}
u(\mathsf{M}(F(\gamma))) \le u(\text{\textsc{Greedy}}(G(\gamma))).
\end{equation}
Let the threshold and the matching obtained by running \textsc{Threshold} on $G$ with budget $B$ be $\gamma_B(G) = \gamma$, and $\mathsf{M}(G)$, respectively, where $\gamma \le \frac{B}{u(\text{\textsc{Greedy}}(G(\gamma)))}$. Now we consider $F(\gamma)$ as the input graph to the \textsc{Threshold} with same budget constraint $B$. Since $\gamma \le \frac{B}{u(\text{\textsc{Greedy}}(G(\gamma)))}$, from \eqref{eq:FgGg}, clearly,
$\gamma \le \frac{B}{u(\mathsf{M}(F(\gamma)))}$, and $\sum_{e\in \mathsf{M}(F(\gamma))}\gamma u(e)\leq B$. Therefore, $\gamma \in \mathcal{A}(F)$, which by definition of $\gamma_B(F)$ implies $\gamma_B(F) \ge \gamma$.
\end{proof}
We now describe our online algorithm $\mathsf{ON}$, that produces the matching $\mathsf{M}_{\mathsf{ON}}$ and associated payments for left vertices that are part of matching $\mathsf{M}_{\mathsf{ON}}$.
\begin{algorithm}
\caption{$\mathsf{ON}$ Algorithm}\label{alg:msandp}
\begin{algorithmic}[1]
\State {\bf Input:} $L$ set of left vertices/users that arrive sequentially, $R$ set of right vertices, Budget $B' = \frac{B}{\beta}$,
\State $L_{1/2}$ = first half of left vertices $L$
\State Run \textsc{\textsc{Threshold}} on $G_{1/2}= (L_{1/2} \cup R, E_{1/2})$ with budget $B'$ to obtain $\gamma_{1/2}\triangleq\gamma_{B'}(G_{1/2})$ and matching $\mathsf{M}_{1/2}$
\For{each right vertex $r\in R$}
\State Set value $v(r):=u(e)$ for $e = (\ell, r) \in \mathsf{M}_{1/2}$
\EndFor
\State \%Decision Phase
\State$\mathsf{M}_{\mathsf{ON}} =\emptyset$
\For{every new left vertex $\ell \in L \backslash L_{1/2}$},
\State \%Pruning:
\State Delete all edges $e = (\ell, r), r\in R$ s.t. $b(e)>\gamma_{1/2}$
\State Let $e^\star = \arg \max_{e=(\ell, r), r \in R, u(e) > v(r)} u(e)$ be the largest weight (utility) edge after pruning with weight larger than the value of the corresponding right vertex.
\State Let $e^\star$ be incident on right vertex $r^\star$ \If{$\mathsf{M}_{\mathsf{ON}} \cup \{e^{\star}\}$ is a matching }
\State $\mathsf{M}_{\mathsf{ON}} = \mathsf{M}_{\mathsf{ON}} \cup \{e^\star\}$
\State Pay $p_{\ell} = \beta \gamma_{1/2} v(r^{\star})$ to user $\ell$
\Else
\State Let $\ell$ be permanently unmatched
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
The idea behind $\mathsf{ON}$ is as follows:
\begin{itemize}
\item Do not match any of the first half of left vertices (called the observation phase), and only use them to run the offline \textsc{Threshold} algorithm and find the threshold $\gamma_{1/2}$ and the matching $\mathsf{M}_{1/2}$ with budget $B' = \frac{B}{\beta}$. Recall that $\beta > 1$, hence we are finding a conservative estimate for $\gamma_{1/2}$.
\item For any right vertex $r \in \mathsf{M}_{1/2}$, set its value $v(r)$ to be the weight (utility) of the matched edge in $\mathsf{M}_{1/2}$. We are assuming that $|L_{1/2}|$ is large enough compared to the number of right vertices and all right vertices are matched by \textsc{Threshold} in offline phase using the first half of the left vertices, i.e., $v(r) > 0, \ \forall \ r\in R$.
\item In the decision phase, starting with the arrival of the $(|L|/2+1)$-th left vertex, delete all edges that have buck per bang $b(e)$ larger than $\gamma_{1/2}$. Among the surviving edges, match the left vertex to the right vertex with the largest weight (utility) that is higher than the value of the right vertex found from $\mathsf{M}_{1/2}$, if any.
\item For each matched left vertex $\ell$, pay $p = \beta \gamma_{1/2} v(r)$, where $r$ is the right vertex to which the accepted edge from $\ell$ is matched.
\end{itemize}
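The decision phase of $\mathsf{ON}$ can be sketched in Python as follows, assuming the observation phase has already produced $\gamma_{1/2}$ and the values $v(r)$; the input format is illustrative.

```python
def online_decision_phase(arrivals, gamma_half, value, beta):
    """Decision phase of ON: prune edges with b(e) > gamma_half, greedily
    match each arriving left vertex to the free right vertex of largest
    utility exceeding that vertex's value v(r), and pay beta*gamma_half*v(r).

    arrivals : list of (cost, {right_vertex: utility}) in arrival order
    value    : dict right_vertex -> v(r) from the observation phase
    """
    matched_r, matching, payments = set(), [], {}
    for idx, (cost, edges) in enumerate(arrivals):
        best = None
        for r, u in edges.items():
            if r in matched_r:
                continue                 # matching constraint: r already taken
            if cost > gamma_half * u:
                continue                 # pruned: b(e) = cost/u > gamma_half
            if u > value[r] and (best is None or u > best[1]):
                best = (r, u)
        if best is not None:
            r, u = best
            matched_r.add(r)
            matching.append((idx, r, u))
            payments[idx] = beta * gamma_half * value[r]
    return matching, payments
```

Note that whenever a left vertex is matched, its cost satisfies $c_{\ell} \le \gamma_{1/2}\, u(e) \le \gamma_{1/2}\, \beta\, v(r)$ (using $u(e)\le u_{max}\le \beta u_{min} \le \beta v(r)$), so the payment always covers the reported cost.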
We now compute the expected utilities of matchings $\mathsf{M}_{1/2}$ and $\mathsf{M}_{\mathsf{ON}}$, where the expectation is over the uniformly random left vertex arrival sequences.
\begin{lemma}\label{lem:offguarantee} ${\mathbb{E}}\{u(\mathsf{M}_{1/2})\} \ge u(\mathsf{OPT}(\frac{B}{\beta}))/12$.
\end{lemma}
\begin{proof} Let $B' = \frac{B}{\beta}$. Let $G = (L \cup R, E)$ be the full graph, while $G_{1/2} = (L_{1/2}(\sigma) \cup R, E_{1/2}(\sigma))$, be the graph consisting of only the first half of left vertices that depends on arrival sequence $\sigma$.
Since $G_{1/2}\subseteq G$, from Lemma \ref{lem:monotoneGamma}, we have $\gamma_{B'}(G_{1/2}) \ge \gamma_{B'}(G)$.
Thus, all the edges of $G(\gamma_{B'}(G))$ that are incident on left vertices $L_{1/2}(\sigma)$ are also present in the pruned graph $G_{1/2}(\gamma_{B'}(G_{1/2}))$. Let the greedy matching over the `bigger' graph $G(\gamma_{B'}(G))$ be $\mathsf{M}(G)$. Let the subset of edges of $\mathsf{M}(G)$ that are incident on left vertices belonging to $L_{1/2}(\sigma)$ be $\mathsf{M}(G)_{\text{fh}}$.
Let the optimal fractional matching on $G_{1/2}(\gamma_{B'}(G_{1/2}))$ be $\mathsf{OPT}(B')_{1/2}$.
By definition, we have $$u(\mathsf{OPT}(B')_{1/2}) \ge u(\mathsf{M}(G)_{\text{fh}}).$$ Since we are considering the uniformly random arrival model for left vertices, i.e., $L_{1/2}$ is obtained by sampling each left vertex of $L$ with probability $\frac{1}{2}$, we have ${\mathbb{E}}\left\{u(\mathsf{M}(G)_{\text{fh}})\right\} \ge \frac{u(\mathsf{M}(G))}{2}$ and hence
\begin{equation}\label{eq:leftcont}
{\mathbb{E}}\left\{u(\mathsf{OPT}(B')_{1/2})\right\} \ge \frac{u(\mathsf{M}(G))}{2}.
\end{equation}
Moreover, since \textsc{Threshold} computes a greedy matching over $G_{1/2}(\gamma_{B'}(G_{1/2}))$, we have $u(\mathsf{M}_{1/2}) \ge \frac{u(\mathsf{OPT}(B')_{1/2})}{2}$ for any realization $\sigma$. From Lemma \ref{lem:umguarantee}, we already know that
$u(\mathsf{M}(G)) \ge \frac{u(\mathsf{OPT}(B'))}{3}$. Hence from \eqref{eq:leftcont}, we have ${\mathbb{E}}\{u(\mathsf{M}_{1/2})\} \ge u(\mathsf{OPT}(B'))/12$.
\end{proof}
\begin{lemma}\label{lem:onguarantee} ${\mathbb{E}}\{u(\mathsf{M}_{\mathsf{ON}})\} \ge {\mathbb{E}}\{u(\mathsf{M}_{1/2})\}/2$.
\end{lemma}
\begin{proof} Consider the graph $G(\gamma_{B'}(G))$, where $\gamma_{B'}$ is the output threshold of the \textsc{Threshold} algorithm with budget $B'$ and graph $G_{1/2}$. Then the setting of the value of each right vertex, and the greedy selection of edges with weights larger than the values of the right vertices in the decision phase of $\mathsf{ON}$, is identical to running the \textsc{SampleAndPrice} algorithm (Algorithm~\ref{alg:sampleandprice}) on graph $G(\gamma_{B'}(G))$ with $p=\frac{|L_{1/2}|}{|L|} = \frac{1}{2}$. Hence it follows from Lemma 2.5 of \cite{KorulaPal} that ${\mathbb{E}}\{u(\mathsf{M}_{\mathsf{ON}})\} \ge {\mathbb{E}}\{u(\mathsf{M}_{1/2})\}/2$.
\end{proof}
\begin{algorithm}
\caption{$\mathsf{SAMPLEANDPRICE}$ Algorithm}\label{alg:sampleandprice}
\begin{algorithmic}[1]
\State {\bf Input:} $G=(L\cup R,E)$ and $p \in [0,1]$
\State $k \leftarrow \mathrm{Binomial}(|L|, p)$
\State Let $L'$ be the first $k$ vertices of $L$
\State $M_1 \leftarrow$ \textsc{Greedy}($G'$), with $G'=(L'\cup R, E')$
\For{each $r \in R$}
\State Set $v(r) = u(e)$ of the edge $e$ incident to $r$ in $M_1$
\EndFor
\State $M \leftarrow \emptyset $
\For{each subsequent $\ell \in L \backslash L'$}
\State Let $e = (\ell, r)$ be the highest-weight edge with $u(e) \ge v(r)$
\If{$M\cup \{e\}$ is a matching}
\State $M = M\cup \{e\}$
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
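For concreteness, Algorithm~\ref{alg:sampleandprice} can be rendered as runnable Python. This is a sketch under our own assumptions (edges given as a weight dictionary, \textsc{Greedy} realized as scanning edges in decreasing weight, an injectable random source), not an implementation from \cite{KorulaPal}.

```python
# Sketch of SAMPLEANDPRICE: observe a Binomial(|L|, p) prefix of left
# vertices, run Greedy on it to set right-vertex values, then match the
# remaining left vertices greedily against those values.

def greedy_matching(edges):
    """Scan edges in decreasing weight, keeping any edge that still fits."""
    used_l, used_r, m = set(), set(), {}
    for (l, r), w in sorted(edges.items(), key=lambda kv: -kv[1]):
        if l not in used_l and r not in used_r:
            used_l.add(l); used_r.add(r); m[(l, r)] = w
    return m

def sample_and_price(left_order, edges, p, rng):
    k = sum(rng.random() < p for _ in left_order)      # k ~ Binomial(|L|, p)
    observed = set(left_order[:k])                     # first k arrivals
    m1 = greedy_matching({e: w for e, w in edges.items() if e[0] in observed})
    value = {r: w for (_, r), w in m1.items()}         # v(r) from M_1
    m, taken = {}, set()
    for l in left_order[k:]:
        cand = [(w, r) for (l2, r), w in edges.items()
                if l2 == l and r not in taken and w >= value.get(r, 0.0)]
        if cand:
            w, r = max(cand)                           # highest-weight edge
            m[(l, r)] = w; taken.add(r)
    return m
```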
The following theorem is the main result of the paper.
\begin{theorem}
Algorithm $\mathsf{ON}$ is $24\beta$-competitive, satisfies the budget constraint and is truthful.\end{theorem}
\begin{proof} The $24\beta$-competitiveness of $\mathsf{ON}$ follows from combining Lemmas \ref{lem:optscaling}, \ref{lem:offguarantee} and \ref{lem:onguarantee}. Budget feasibility and truthfulness are shown in Lemmas \ref{lem:budfeas} and \ref{lem:ic}, respectively.
\end{proof}
\begin{lemma}\label{lem:budfeas}
Algorithm $\mathsf{ON}$ satisfies the payment budget constraint, and payment $p_{\ell} \ge c_{\ell}$ for any selected left vertex $\ell$.
\end{lemma}
\begin{proof} Let $\gamma_{1/2} = \gamma_{B'}(G_{1/2})$ for simplicity. For a selected left vertex $\ell$ with selected edge $e = (\ell, r)$, its buck per bang satisfies $\frac{c_{\ell}}{u(e)}\le \gamma_{1/2}$ and $u(e) > v(r)$. From the definition of $\beta$, $\beta v(r) \ge u(e)$. Thus, $p_{\ell} = \beta \gamma_{1/2} v(r) \ge \gamma_{1/2} u(e) \ge c_{\ell}$.
From the definition of algorithm \textsc{Threshold}, we know that the output threshold $\gamma_{1/2} \in \mathcal{A}(G_{1/2})$ for the offline phase (the first half of left vertices) of $\mathsf{ON}$, and hence
\begin{equation}\label{eq:umbf}
\gamma_{1/2} \sum_{e \in \mathsf{M}_{1/2}}u(e) \le B'.
\end{equation}
The value of any right vertex $r$ matched in $\mathsf{M}_{1/2}$ is $v(r) = u(e)$, where $e = (\ell, r) \in \mathsf{M}_{1/2}$. Therefore, from \eqref{eq:umbf}, we have that
\begin{equation}\label{eq:sumvalue}
\sum_{r: e=(\ell, r) \in \mathsf{M}_{1/2}} v(r) \le \frac{B'}{\gamma_{1/2}}.
\end{equation}
Clearly, in the decision phase of $\mathsf{ON}$, at most one left vertex is selected for each right vertex, and the payment made is $p_{\ell} = \beta \gamma_{1/2} v(r)$ if $\ell$ is matched, and $p_{\ell} = 0$ otherwise. Thus, the total payment made is
$$\sum_{\ell: e=(\ell, r) \in \mathsf{M}_{\mathsf{ON}}} p_{\ell} \le \sum_{r: e=(\ell, r) \in \mathsf{M}_{1/2}}\beta \gamma_{1/2} v(r).$$
Thus, from \eqref{eq:sumvalue}, we get that
$\sum_{\ell: e=(\ell, r) \in \mathsf{M}_{\mathsf{ON}}} p_{\ell} \le \beta \gamma_{1/2}\cdot\frac{B'}{\gamma_{1/2}} = \beta B' = B$.
\vspace{-0.1in}
\end{proof}
Next, we show the most important property of $\mathsf{ON}$: its truthfulness. Towards that end, we use Myerson's theorem \cite{myerson1981optimal}.
\begin{theorem}\cite{myerson1981optimal}\label{Myerson_Theorem}
A reverse auction is truthful if and only if:
\begin{itemize}
\item The selection rule is monotone. If a user $\ell$ wins the auction by bidding $c_{\ell}$, it would also win the auction by bidding an amount $c_{\ell}'$, where $c_{\ell}' < c_{\ell}$.
\item Each winner is paid a critical amount. If a winning user submits a bid greater than this critical value, it will not get selected.
\end{itemize}
\end{theorem}
\begin{lemma}\label{lem:ic}
$\mathsf{ON}$ is a truthful online algorithm.
\end{lemma}
\begin{proof} As stated before, the considered problem is a special case of a reverse auction. Thus to ensure that $\mathsf{ON}$ is truthful, we show that both the conditions of Theorem \ref{Myerson_Theorem} are satisfied. The monotone condition is easy to check, since in the decision phase, if any left vertex reduces its bid, then clearly its buck per bang $b(e)$ decreases, and hence it is still accepted if it was accepted before.
The second condition, that the payment is critical, is also satisfied, as shown next. Note that the payment made by $\mathsf{ON}$ to a selected left vertex $\ell$ is $p_{\ell} = \beta \gamma_{B'}(G_{1/2}) v(r)$ for $e =(\ell,r) \in \mathsf{M}_{\mathsf{ON}}$, where the right vertex $r$ is such that the utility $u(e)$, $e=(\ell, r)$, is the largest among the unmatched right vertices that have an edge to $\ell$ at the time of its arrival, and $u(e) > v(r)$.
Recall that
$\frac{u_{\max}}{u_{\min}} \le \beta$, hence $\frac{u(e)}{v(r)} \le \beta$, where $e =(\ell,r)$.
Now, suppose the bid $c_{\ell}$ of left vertex $\ell$ exceeds $p_{\ell}=\beta \gamma_{B'}(G_{1/2}) v(r)$. Then, since $u(e) \le \beta v(r)$, the buck per bang of left vertex $\ell$ satisfies $c_{\ell} /u(e) > \gamma_{B'}(G_{1/2})$. Moreover, since $u(e) \ge u(e')$ for all edges $e'$ from $\ell$ to right vertices unmatched at the arrival of $\ell$, we have that $c_{\ell}/u(e') > \gamma_{B'}(G_{1/2})$.
Thus, all edges out of left vertex $\ell$ incident on currently unmatched right vertices are removed in the pruning stage of the decision phase, and hence vertex $\ell$ cannot be selected.
\end{proof}
\bibliographystyle{unsrt}
|
2,869,038,157,091 | arxiv | \section{Introduction}
The data-driven Neural Machine Translation (NMT), which follows the end-to-end framework, has shown its superiority on high-resource languages in recent years ~\citep{sutskever2014sequence, bahdanau2014neural, wu2016google,vaswani2017attention}. Because NMT systems depend heavily on extensive high-quality parallel data, which can only be acquired for a few language pairs, low-resource and zero-shot NMT remain challenging ~\citep{koehn2017six}. Existing approaches for zero-shot NMT include multilingual NMT ~\citep{firat2016zero,ha2016toward,johnson2017google}, interactive multimodal frameworks ~\citep{kiros2014unifying,nakayama2017zero}, pivot-based NMT ~\citep{wu2007pivot,cheng2016neural,leng2019unsupervised} and teacher-student architectures ~\citep{chen2017teacher}.
We focus on the multilingual NMT system, which is simple and effective. Recent multilingual NMT with a simple approach named target-forcing, which is trained on a mixture of several parallel corpora and adds a token to the start of the source sentence to determine the target language, is effective for low-resource languages ~\citep{johnson2017google}. In particular, a multilingual NMT system built this way can perform zero-shot translation through the shared model. In order to improve the performance on zero-shot language pairs, \citet{lakew2017improving} proposed the self-learning algorithm, which iteratively generates synthetic parallel data by translating existing target data through the multilingual NMT. The whole process is a self-learning cycle of train-infer-train.
However, there still exist some problems with the proposed methods for zero-shot translation. We find that the multilingual NMT system does not reliably translate the source language into the required target language when performing zero-shot translation. In addition, the self-learning algorithm, which uses beam search to choose the highest-probability sentence when generating synthetic parallel data, reduces the diversity of the generated source sentences. Especially in the last few rounds of the iterative process, the model's performance on zero-shot translation improves only slightly and even declines. We speculate that this is because the synthetic parallel data generated by beam search is almost the same in the last few iterations, which amplifies the effect of harmful noise.
In this paper, we improve the multilingual NMT and the self-learning algorithm to address these two problems. We first extend Google's multilingual NMT system and add a token to the start of the target language to indicate that the target language is the required one. This ensures that the source language can be translated into the required target language. The multilingual NMT system, called tagged-multilingual NMT, is then trained on the available mixed parallel data until convergence. Next, we improve the self-learning algorithm by replacing beam search with random sampling ~\citep{pillemer1988prevalence} to generate synthetic parallel data for zero-shot translation, which not only increases the diversity of the source side of the synthetic parallel data but also increases the fluency of the target language generated by the decoder. In order to verify the effectiveness of the improved method, we evaluate it on a multilingual-NMT scenario including Italian-English and Romanian-English parallel corpora, with Italian-Romanian and Romanian-Italian as the zero-shot translations. For the tagged-multilingual NMT, experimental results show that adding a target tag not only makes the model accurately translate the source language into the required target language, but also improves the performance of the multilingual NMT system, especially on zero-shot translations. The improved self-learning method effectively improves the performance of zero-shot translation and even exceeds the single NMT model with 20K parallel data in Romanian-Italian translation.
In summary, our contribution are as follows:
\begin{itemize}
\item We add a token to the start of the target language to ensure that the tagged-multilingual NMT accurately generates the required target language. It significantly improves the performance of zero-shot translation and is simultaneously helpful for low-resource multilingual NMT.
\item We improve the self-learning method by replacing beam search with random sampling to increase the diversity of the generated synthetic parallel data, which makes the generated data more relevant to real language situations.
\item Experimental results on the multilingual translation shared task published in the 2017 International Workshop on Spoken Language Translation (IWSLT) show the superiority of our tagged-multilingual NMT model and the improved self-learning method over the previous methods.
\end{itemize}
The remainder of the article is organized as follows. Section 2 summarizes the related work and highlights the differences of our tagged-multilingual NMT model and the improved self-learning algorithm from previous studies. Section 3 briefly describes the NMT model used for the multilingual NMT. Section 4 gives details of our proposed tagged-multilingual NMT model and the improved self-learning algorithm. Section 5 introduces our data sets, experiment settings and baselines. Section 6 reports the experimental results on the IWSLT multilingual translation tasks. Finally, we conclude in Section 7 with future work.
\section{Related work}
In this section, we first introduce the origin and development of multilingual NMT. Existing methods for zero-shot translation, which extend multilingual NMT or are based on other architectures, are presented in the second part.
\subsection{Multilingual NMT}
Inspired by sequence-to-sequence NMT, \citet{dong2015multi} proposed a one-to-many multi-task NMT to achieve higher translation quality, with a shared source language and different target languages. The shared source language makes full use of the source corpora for better representation through a shared encoder. The different target languages use separate decoders and attention mechanisms, which can learn latent similar semantic and structural information across different languages. In a related work, \citet{luong2015multi} used separate encoder and decoder networks for modeling language pairs in a many-to-many setting. Aiming at reducing ambiguity at translation time, \citet{zoph2016multi} proposed a multi-source NMT with multiple encoders and one attention mechanism, which obtains excellent performance through a novel combination method encoding the common semantics of multiple source languages and multi-source attention. Following the work of Dong and Luong et al., \citet{firat2016multi} proposed a multi-way multilingual NMT, i.e., a many-to-many translation system. It shares the attention mechanism and uses different encoders and decoders across different language pairs. Experimental results show the effectiveness of the shared attention mechanism for low-resource languages.
However, due to the high complexity of the previously mentioned methods, Johnson et al. and Ha et al. attempted to build a multilingual NMT without modifying the network architecture. \citet{ha2016toward} applied a language-specific coding to words of both source and target languages for better representation. In practice, language-specific coding for words and sub-words significantly increases the length of sentences, which causes trouble for sentence representation and the attention mechanism. In order to translate into the specific target language, they use target forcing, adding a token to the start and the end of the source sentence. Even more concisely, \citet{johnson2017google} just add an artificial token to the start of the source sentence to indicate the required target language.
\subsection{Zero-shot translation}
Researchers have done fantastic work on zero-shot NMT. An intuitive way is to select a medium as a pivot. \citet{cheng2016neural} proposed a pivot-based method, which uses a third language as the pivot, for zero-resource NMT. It translates the source language into a pivot language, which is then translated into the target language. Similarly, \citet{nakayama2017zero} showed that multimodal information is also effective as a pivot for zero-resource NMT. However, the pivot method suffers from expensive computational resources and error propagation ~\citep{zhu2013improving}. Based on these problems, \citet{chen2017teacher} proposed a teacher-student architecture for zero-resource NMT by assuming that parallel sentences have close probabilities of generating a sentence in a third language. In ~\citep{chen2018zero}, Chen et al. proposed a multimodal framework to make full use of monolingual multimodal content to achieve direct modeling of zero-resource source-to-target NMT, which includes two agents, a captioner and a translator. The captioner, a CNN-RNN architecture, translates an image into a source sentence. The translator, which is the training target, translates the source sentence into the target sentence.
Another benefit of multilingual NMT is the possibility of zero-shot translation. By extending the approach in ~\citep{firat2016multi}, \citet{firat2016zero} extends the one-to-one pivot-based strategy for zero-shot NMT, where the second stage is replaced by the many-to-one strategy. Moreover, the attention mechanism of the many-to-one strategy is fine-tuned on the generated pseudo-parallel corpus. Following Google's multilingual NMT, \citet{lakew2017improving} proposed the self-learning algorithm for zero-shot NMT, which is a process of constant iteration through a train-infer-train mechanism to generate synthetic parallel data for zero-shot translation. In this way, the quality of the generated parallel data is significantly improved.
Despite the success of the proposed methods for zero-shot translation, we extend the method proposed by Lakew et al. and find inadequacies in both the multilingual NMT and the self-learning algorithm. Therefore, we improve the multilingual NMT by adding a target token to the start of the target language to indicate the required target language, and improve the self-learning method via sampling to increase the diversity of the generated parallel data.
\section{Neural machine translation}
Without loss of generality, the multilingual NMT and the self-learning method can be applied to any NMT model. We use the transformer model proposed by \citet{vaswani2017attention} for the multilingual NMT, which is by far the most effective end-to-end NMT model. Therefore, in this section, we briefly introduce the overall architecture of the model.
The encoder, which encodes the source sentence $\chi = \left ( \chi _{1},\chi _{2},\cdots ,\chi _{n} \right )$ into a series of context vectors $C = \left ( h_{1},h_{2},\cdots ,h_{n} \right )$, consists of six identical layers. Each layer includes two sub-layers. The first sub-layer is a multi-head attention mechanism, which learns the association between words in a sentence by representing each word as a weighted sum of all words in the sentence. The attention mechanism is computed as follows:
\begin{equation}
Attention\left ( Q,K,V \right )= softmax\left ( \frac{QK^{T}}{\sqrt{d_{k}}} \right )V
\end{equation}
where $Q$, $K$, $V$ are the query, key and value matrices. $QK^{T}$ is used to compute the weights between words. These weights are multiplied by the corresponding word embeddings and then summed to obtain a new representation of the query word in $Q$. Finally, the encoder obtains the context vector $C$ that better represents the source sentence. Compared with RNNs and LSTMs, the self-attention mechanism can better learn long-distance dependencies. The second sub-layer is a fully-connected feed-forward network. Because the multi-head attention mechanism maps the sentence to different semantic spaces, the six-layer encoder learns the deep semantic relationships of the sentence.
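Equation (1) can be reproduced in a few lines of NumPy; the toy below is only meant to make the row-wise softmax and the weighted sum explicit, and is unrelated to the actual transformer implementation used in the experiments.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # word-to-word scores
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values
```

With identical keys the weights become uniform, so each output row is simply the mean of the value vectors.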
Similarly, the decoder also consists of six identical layers. The difference is that each layer includes a masked multi-head attention mechanism, an attention mechanism and a feed-forward network. The attention mechanism represents the target words by a weighted sum of the context vectors, which works like word alignment to match the source words with the target words. In the inference process, the target word is generated by maximizing
\begin{equation}
p(y_{t}|y_{<t},C)=softmax(f(y_{<t},C))
\end{equation}
where $y_{<t}$ are the generated words and $f$ represents a series of function operations inside the decoder. In particular, residual connections are applied between the sub-layers to avoid vanishing gradients in both the encoder and the decoder. At the end of each sub-layer, layer normalization is applied to speed up training.
\section{Tagged-multilingual NMT and the improved self-learning algorithm}
In this section, we describe the tagged-multilingual NMT and the improved self-learning algorithm for zero-shot translation.
We extend Google's multilingual NMT and add a token to the start of the target language to indicate the required language. For example, the tagged Romanian-English parallel data are as follows:
\begin{flushleft}
Source~(Romanian):\\
$<2en>$ Am zburat cu Air Force Two timp de opt ani.\\
Target~(English):\\
$<en>$ I flew on Air Force Two for eight years.
\end{flushleft}
We add tokens to the source sentences and the corresponding target sentences, as in the example, for all parallel data. After this, we train a multilingual NMT model, called tagged-multilingual NMT, as shown in Figure 1. The decoder starts with a start tag and then generates the required target tag with the help of the attention mechanism when decoding, which ensures that the subsequent words are in the correct target language; this is especially effective for zero-shot translation.
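The tagging scheme itself is a one-line string transformation per side. The sketch below (function names are ours) illustrates the convention used in the example above: a target-forcing token $<2xx>$ on the source side and, in our variant, a matching tag $<xx>$ on the target side.

```python
# Illustrative tagging of a parallel corpus for the tagged-multilingual NMT.

def tag_pair(src, tgt, tgt_lang):
    """Prepend <2xx> to the source sentence and <xx> to the target."""
    return "<2{}> {}".format(tgt_lang, src), "<{}> {}".format(tgt_lang, tgt)

def tag_corpus(pairs):
    """pairs: iterable of (source, target, target language code)."""
    return [tag_pair(s, t, lang) for s, t, lang in pairs]
```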
\begin{figure}[htb]
\centering
\includegraphics[width=8.8cm,height=8cm]{./Figure1.jpg}
\caption{The tagged-multilingual NMT model and the improved self-learning algorithm.}
\label{fig1}
\end{figure}
Furthermore, considering that the self-learning method with beam search, which chooses the sentence with the highest probability, decreases the diversity of the generated synthetic parallel data and amplifies the negative effect of noise over continuous iterations, we improve the self-learning method by replacing beam search with random sampling. The improved self-learning method is shown in Algorithm 1.
\begin{algorithm}[ht]
\caption{Zero-shot translation L1$\leftrightarrow$L2 }
\label{algorithm1}
\begin{algorithmic}[1]
\Require
mixed parallel data D, source language l1, target language l2
\State Tagged-Multilingual NMT $\leftarrow$ Train ($\theta$, D)
\State Monolingual L1 $\leftarrow$ Extract from (D, l1)
\State Monolingual L2 $\leftarrow$ Extract from (D, l2)
\For{i=1,N}
\State L1* L2* $\leftarrow$ use the Tagged-Multilingual NMT to translate L1, L2 via sampling to generate the source language for zero-shot translation
\State New mixed parallel data D*=(l1+L1*+L2*, l2+L1+L2)
\State Update Tagged-Multilingual NMT $\leftarrow$ train($\theta_{1}$, D*) for 3 epochs
\EndFor
\Ensure
Return updated Tagged-Multilingual NMT
\end{algorithmic}
\end{algorithm}
The improved self-learning method can be divided into three parts. In the first step, we train a tagged-multilingual NMT on the mixed parallel data (line 1). Next, the synthetic parallel data for zero-shot translation is generated by translating the target language into the source language through the tagged-multilingual NMT model (line 5), which uses sampling instead of beam search to increase the diversity of the synthetic data. The sampling during decoding is calculated as follows:
\begin{equation}
\widetilde{y}_{t}=Sample(f(y_{t}|y_{<t},C))
\end{equation}
The decoder starts with the start tag and then generates the next word based on the probability distribution over words. By using random sampling, some low-probability words are generated while still producing a fluent sentence, which better fits the distribution of real data. Finally, we add the synthetic parallel data to the mixed data to update the tagged-multilingual NMT, improving its performance round after round. The iteration is performed a total of five times, where in lines 6-7 the tagged-multilingual NMT is trained on the new mixed parallel data for 3 epochs each time.
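The difference between Eq. (3) and beam search can be seen in a small sketch: sampling draws the next token from the full softmax distribution, while a beam-search-style decoder keeps only the highest-probability continuation. The functions below are an illustration with made-up logits, not the actual decoder used in the experiments.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - np.max(logits))   # numerically stable softmax
    return z / z.sum()

def sample_next_token(logits, rng):
    """Eq. (3): draw the next token from p(y_t | y_<t, C)."""
    p = softmax(np.asarray(logits, dtype=float))
    return int(rng.choice(len(p), p=p))   # random sample: preserves diversity

def greedy_next_token(logits):
    """Beam-search-like choice: always the single most probable token."""
    return int(np.argmax(logits))
```

Over repeated draws, sampling visits several tokens where greedy decoding always returns the same one, which is exactly the diversity effect exploited by the improved self-learning algorithm.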
\section{Experiment}
In this section, we first introduce the data sets and hyper-parameters for the tagged-multilingual NMT. Next, we briefly introduce the baselines that have been mentioned in the related-work section.
\subsection{Data set and preprocessing}
We consider the scenario in which there are Romanian~(Ro) $\leftrightarrow$ English~(En) and Italian~(It) $\leftrightarrow$ English~(En) parallel data for the multilingual NMT, assuming that Italian~(It) $\leftrightarrow$ Romanian~(Ro) are zero-shot translations. The details of the data sets are shown in Table 1. All the parallel data are from the 2017 IWSLT multilingual TED talks machine translation task ~\citep{cettolo2012wit3}. The dev set and test sets are from the IWSLT 2010 and 2017 evaluation campaigns for development and evaluation of the models. In particular, we combine the dev sets of the existing parallel data into a mixed dev set for the multilingual NMT.
\begin{table}[htp]
\centering
\scriptsize
\caption{All data sets for the tagged-multilingual NMT.}
\begin{tabular}{lcccc}
\hline
\vspace{1mm}\\[-3mm]
Language pair & Train & Dev 2010 & Test 2010 & Test 2017 \\
\vspace{1mm}\\[-3mm]
\hline
\vspace{1mm}\\[-3mm]
It-En & 231619 & 929 & 1566 & 1147\\
Ro-En & 220538 & 914 & 1678 & 1129\\
It-Ro & 217551 & 914 & 1643 & 1127\\
\hline
\end{tabular}
\label{table1}
\end{table}
In the data preprocessing step, we first use Moses's word segmentation tool to tokenize the parallel data\footnote{http://www.statmt.org/moses/?n=Moses.Baseline}. Then we segment words into sub-words via Byte Pair Encoding ~\citep{sennrich2016neural} to effectively decrease the number of out-of-vocabulary (OOV) words.
\subsection{Experiment settings}
All the experiments are based on the transformer model ~\citep{vaswani2017attention}, implemented with the MXNet-based (version 1.4.1) NMT framework Sockeye (version 1.18.99) ~\citep{hieber2017sockeye}. We did a lot of work to find suitable hyper-parameters for the low-resource languages and the tagged-multilingual NMT model, as shown in Table 2 and Table 3. From Table 2, we find that setting the BPE merge number to 8000 and dropout to 0.3 gives the best result for It-En translation. However, from Table 3, we can see that embedding dropout is effective for the tagged-multilingual NMT. In particular, it works best on the validation set when the embedding dropout is 0.3, and the result is close to that obtained with an embedding dropout of 0.2. Unfortunately, compared with an embedding dropout of 0.2, it increases the training time by 30$\%$. So we finally choose an embedding dropout of 0.2 and a BPE merge number of 12000 for the tagged-multilingual NMT.
\begin{table*}[htp]
\centering
\scriptsize
\caption{The hyper-parameters for It-En translation.}
\begin{tabular}{cccccccccccc}
\hline
\vspace{1mm}\\[-3.5mm]
\multicolumn{12}{c}{It-En}\\
\vspace{1mm}\\[-3.5mm]
\hline
\vspace{1mm}\\[-3mm]
BPE merge number & 36000 & 15000 & 10000 & 9000 & 8000 & 8000 & 8000 & 8000 & 7000 & 6000 & 4000\\
Drop out & 0.3 & 0.3 & 0.3 & 0.3 & 0.3 & 0.5 & 0.4 & 0.2 & 0.3 & 0.3 & 0.3\\
Dev set & 25.3 & 27.2 & 28.31 & 27.96 & 28.59 & 26.15 & 28.21 & 27.12 & 28.07 & 28.36 & 27.63\\
\hline
\end{tabular}
\label{table2}
\end{table*}
\begin{table*}[t]
\centering
\scriptsize
\caption{The experimental results of the bilingual NMT, the multilingual NMT, the tagged-multilingual NMT and their adjusted model on test 2010 and test 2017.}
\begin{tabular}{cccccccc}
\hline
Direction&\tabincell{c}{Bilingual\\Baseline1} & \tabincell{c}{Multilingual\\Baseline2} & Tagged-multilingual & \tabincell{c}{Adjusted\\ tagged-multilingual} & \tabincell{c}{Adjusted\\ bilingual} & \tabincell{c}{Adjusted\\ multilingual} & \tabincell{c}{Improved\\self-learning\\ algorithm}\\
\hline
\vspace{1mm}\\[-3mm]
\multicolumn{8}{c}{Test2010}\\
\vspace{1mm}\\[-3.5mm]
\hline
\vspace{1mm}\\[-3mm]
It-En& 29.23& 29.71& 30.16(+0.45)& 30.86(+1.15)& 30.51& 30.75 & -\\
En-It& 26.34& 26.23& 26.41(+0.18)& 27.23(+1.00)& 27.00& 26.63 & -\\
Ro-En& 31.53& 31.57& 31.74(+0.17)& 32.94(+1.37)& 32.31& 32.85& -\\
En-Ro& 23.40& 24.03& 24.51(+0.48)& 25.1(+1.07)& 24.01& 24.79&-\\
Ro-It& 19.27& 6.82& \textbf{10.66(+3.84)}& \textbf{16.23(+9.41)}& 19.69& 13.71 &19.86\\
It-Ro& 17.60& 6.09& \textbf{8.78(+2.69)}& \textbf{15.17(+9.08)}& 18.17& 14.35& 17.96\\
\hline
\vspace{1mm}\\[-3mm]
\multicolumn{8}{c}{Test2017}\\
\vspace{1mm}\\[-3.5mm]
\hline
\vspace{1mm}\\[-3mm]
It-En& 32.20& 32.34& 32.55(+0.21)& 33.31(+0.99)& 33.17& 33.69&-\\
En-It& 28.84& 29.19& 29.16(-0.03)& 30.11(+0.92)& 29.25& 30.03&-\\
Ro-En& 26.11& 27.18& 27.35(+0.17)& 28.15(+0.97)& 27.43& 27.96&-\\
En-Ro& 20.61& 20.88& 21.51(+0.69)& 21.71(+0.89)& 20.14& 21.03&-\\
Ro-It& 18.77& 6.82& \textbf{9.57(+2.75)}& \textbf{14.67(+7.85)}& 19.51& 12.49& 19.34\\
It-Ro& 17.41& 5.65& \textbf{7.65(+2)}& \textbf{13.64(+7.99)}& 17.59& 12.25&16.96\\
\hline
\end{tabular}
\label{table4}
\end{table*}
\begin{table}[htp]
\centering
\scriptsize
\caption{The hyper-parameters the tagged-multilingual NMT.}
\begin{tabular}{ccccc}
\hline
\vspace{1mm}\\[-3mm]
\multicolumn{5}{c}{Tagged-multilingual NMT}\\
\vspace{1mm}\\[-3mm]
\hline
\vspace{1mm}\\[-3mm]
BPE merge number & 8000 & 12000 & 12000 & 12000\\
Embedding dropout & 0 & 0 & 0.2 & 0.3\\
Dev set & 19.56 & 20.02 & 20.15 & 20.23\\
\hline
\end{tabular}
\label{table3}
\end{table}
The other hyper-parameters of the transformer are as follows. Considering the high data sparsity of low-resource languages and to prevent over-fitting ~\citep{srivastava2014dropout}, we set the label smoothing ~\citep{szegedy2016rethinking} to 0.1 and set a dropout of 0.3 for the multi-head attention, the feed-forward network and the preprocessing block, following Sennrich et al.'s work on low-resource NMT ~\citep{sennrich2019revisiting}. In addition, we use Adam ~\citep{kingma2014adam} as the optimizer and set the initial learning rate to 0.0003. At the beginning of training, the warmup-training-steps parameter is set to 16000 to warm up the learning rate, which prevents model oscillation caused by random initialization of the parameters; the batch size is 4096. During training, we use early stopping as the stopping condition: if the model's performance on the dev set does not improve for 10 consecutive checkpoints, we consider the model optimal. For decoding, a beam size of 10 is applied for the NMT model. Finally, the BLEU score, a proven evaluation metric, is used to verify the effectiveness of the model ~\citep{papineni2002bleu}.
\subsubsection{Baselines}
We refer to our model as the tagged-multilingual NMT and compare it against the following baselines.
\begin{itemize}
\item Baseline 1 is a bilingual NMT based on transformer.
\item Baseline 2 is Google's multilingual NMT with the self-learning algorithm for zero-shot translation.
\end{itemize}
The multilingual NMT uses a token at the start of the source language to indicate the required target language. Furthermore, the self-learning algorithm generates synthetic parallel data for zero-shot translation by translating the target language.
\section{Results and analysis}
In this section, we conduct experiments to evaluate the effectiveness of the tagged-multilingual NMT and the improved self-learning method on the IWSLT multilingual translation task. We describe the experimental results and analyze them.
\begin{figure*}[t]
\centering
\begin{minipage}{6.5cm}
\includegraphics[width=6.5cm]{./Figure2.jpg}
\end{minipage}
\begin{minipage}{6.5cm}
\includegraphics[width=6.5cm]{./Figure3.jpg}
\end{minipage}
\caption{The improved self-learning algorithm for It-Ro zero-shot translation on Test2010 and Test2017.}
\label{figure2}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{minipage}{6.5cm}
\includegraphics[width=6.5cm]{./Figure4.jpg}
\end{minipage}
\begin{minipage}{6.5cm}
\includegraphics[width=6.5cm]{./Figure5.jpg}
\end{minipage}
\caption{The improved self-learning algorithm for Ro-It zero-shot translation on Test2010 and Test2017}
\label{figure3}
\end{figure*}
\subsection{Tagged-multilingual NMT}
We train the tagged-multilingual NMT on the mixed parallel data until the performance on the dev set no longer improves for ten consecutive checkpoints. The experimental results of the adjusted tagged-multilingual NMT, the tagged-multilingual NMT, the adjusted multilingual NMT, the multilingual NMT, the adjusted bilingual NMT and the bilingual NMT on test 2010 and test 2017 are shown in Table 4.
From Table 4, we can see that the tagged-multilingual NMT achieves the same or better results than the multilingual model. Most notably, it improves Ro-It and It-Ro zero-shot translation by \textbf{3.84} and \textbf{2.69} BLEU respectively; after tuning the hyper-parameters, the improvements grow to \textbf{9.41} and \textbf{9.08} BLEU. The results on Test2017 follow the same pattern, with gains of \textbf{2.75} and \textbf{2} BLEU for Ro-It and It-Ro zero-shot translation, and of \textbf{7.85} and \textbf{7.99} BLEU for the adjusted tagged-multilingual NMT. We also find that, for directions with parallel data such as It-En, the multilingual model translates the source language into the required target language. Unfortunately, it works poorly for zero-shot translation, often translating the source into a target language for which parallel data with the source does exist. The reason is that the attention mechanism of the multilingual NMT, which works like word alignment, learns the correspondences between source and target languages from the corresponding parallel data. More importantly, the source token, which can be seen as a word of the source language, does not have a corresponding target word, so the target words and the source token receive low weights in the attention mechanism. Therefore, for lack of parallel data, the attention mechanism works poorly in zero-shot translation. The tagged-multilingual model instead adds a corresponding target token to the target language, which matches the source token and associates the start tag with the required target language. During training, the self-attention mechanism can learn the relationship between the target token and the target language.
The attention mechanism linking source and target languages learns the alignment between the source token and the target token. When decoding, the decoder therefore starts with the start tag and then generates the target token of the required target language with the help of the attention mechanism. Next, the start tag and the generated target token are encoded by the self-attention mechanism, and the required target words are then generated by the classifier with the help of the attention mechanism. Because the target words are generated by maximizing the conditional probability $p(y \mid \text{start tag}, \text{target token}, \text{context vector})$, which conditions on the previous words and the context vectors, the target token can accurately specify the required target language at the very beginning of decoding. Experimental results show that adding a target token effectively solves the problem of specifying the target language in zero-shot translation.
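The two tagging schemes can be sketched as follows; the surface form of the token (here a `<2ro>`-style marker) is our own choice for illustration and is not fixed by the paper:

```python
def tag_baseline(src_sent, tgt_lang):
    # Google-style multilingual NMT: a token on the source side only
    # tells the model which target language is required.
    return f"<2{tgt_lang}> {src_sent}", None

def tag_tagged_multilingual(src_sent, tgt_sent, tgt_lang):
    # Tagged-multilingual NMT: the same token is also prepended to the
    # target side, so attention can learn the source-token/target-token
    # alignment and decoding can commit to the language early.
    return f"<2{tgt_lang}> {src_sent}", f"<2{tgt_lang}> {tgt_sent}"
```

In the tagged variant the decoder's first real prediction after the start tag is the target token itself, which is what pins down the output language before any content words are emitted.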
\subsection{Improved self-learning algorithm}
In order to further improve zero-shot translation, we improve the self-learning method by replacing beam search with random sampling in the multilingual NMT. We test beam search, sampling, and a combination of the two, with a beam size of 10 and a sample size of 5. All results on Test2010 and Test2017 for It-Ro zero-shot translation are shown in Figure 2; the corresponding results for Ro-It are shown in Figure 3, and the detailed experimental data is given in the appendix. From these figures we can see that the performance of the self-learning algorithm with beam search increases slowly on all test sets and even declines from the third round onward. We try to remedy this with random sampling and with the combination of the two methods. The results in Tables 5 and 6 in the appendix show that both help, and that sampling is more effective. Most notably, the adjusted self-learning algorithm improves It-Ro zero-shot translation by \textbf{1.31} and \textbf{0.66} BLEU on Test2010 and Test2017, and Ro-It by \textbf{1.21} and \textbf{0.87} BLEU. Moreover, compared with the adjusted single It-Ro and Ro-It models trained on 20K parallel sentences (Table 4), the improved self-learning algorithm brings zero-shot translation very close to the bilingual NMT model and even exceeds it on Test2010 Ro-It translation. We contend that beam search always chooses the highest-probability sentences, so the same sentences are generated repeatedly during the iterations; this destroys the diversity of the generated sentences and amplifies the impact of the same noise.
By contrast, the synthetic parallel data generated by random sampling contain sentences in which low-probability words are selected with their corresponding probabilities, which increases the diversity of the source language and covers the true data distribution more faithfully.
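The difference between beam-search-style selection and random sampling is easy to see on a toy next-token distribution (the numbers below are invented for illustration):

```python
import numpy as np

def pick_greedy(probs):
    # Beam-search-like behaviour in miniature: always the mode.
    return int(np.argmax(probs))

def pick_sample(probs, rng):
    # Random sampling: low-probability tokens are chosen with their
    # corresponding probabilities, which diversifies the output.
    return int(rng.choice(len(probs), p=probs))

probs = np.array([0.5, 0.3, 0.2])      # toy next-token distribution
rng = np.random.default_rng(0)
greedy_picks = {pick_greedy(probs) for _ in range(50)}
sampled_picks = {pick_sample(probs, rng) for _ in range(50)}
```

Repeated over many pseudo-translations, greedy selection collapses to a single output while sampling covers the distribution, mirroring why sampled synthetic data stays diverse across self-learning rounds.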
\section{Conclusion and future work}
This paper has presented the tagged-multilingual NMT and an improved self-learning algorithm. Unlike the multilingual NMT with the original self-learning algorithm, our tagged-multilingual NMT model can correctly translate the source language into the required target language by learning the relationship between the target language and the target token, which is especially important for zero-shot translation. The improved self-learning algorithm generates synthetic parallel data closer to the true data distribution through random sampling, which increases the diversity of the source side of the synthetic parallel data. Experimental results on the IWSLT multilingual NMT tasks and on It-Ro bidirectional zero-shot translation show that the tagged-multilingual NMT and the improved self-learning algorithm achieve significant improvements over a variety of baselines.
In future work, we will explore better approaches to improve the self-learning mechanism so that it generates higher-quality synthetic parallel data.
\section{Introduction.}
Intuitively, when the closed curve defining a Wilson loop operator is
uniformly scaled we expect a qualitative change to occur: for small loops
the parallel transport matrix round the loop is close to unity while for large
loops this matrix should be as far from unity as possible, as a result of
confinement. This is true in 2, 3 and 4 Euclidean dimensions in pure YM, with
gauge group $SU(N)$.
As the scale of the loop is varied, the operator goes from being sensitive to
short distance physics to being sensitive to long distance physics. Somewhere
on the way it undergoes a crossover. In a previous
paper~\cite{ourjhep} we put forward the hypothesis
that as $N$ increases the crossover narrows and becomes a
phase transition at infinite $N$, in the sense usually applied to individual
large matrices. The eigenvalue distribution for small Wilson loops is centered
around +1, and has a gap around -1. The gap is eliminated for large loops and
the eigenvalue distribution
covers the entire circle, becoming uniform for asymptotically large
loops. Confinement means that the uniform
limit is approached with a correction that goes to zero
exponentially in the square of the scale factor.
The hypothesis is more than just asserting a transition in the sense that the
eigenvalue density has a point of non-analytic dependence on the scale parameter.
The hypothesis also states that this phenomenon happens in 2, 3 and 4 Euclidean
dimensions and that in all these dimensions the transitions are in the same
universality class. For large $N$, close to the critical scale,
all the complicated dependence on loop shape comes in only through a finite
number of parameters, which enter as coefficients of sub-leading terms
in $N$ of the form $N^{-\nu}$, with $\nu$ being
universal exponents; further corrections in $\frac{1}{N}$ are less significant.
The main exponent is related to the average eigenvalue spacing at -1 of the
Wilson matrix close to criticality. The average
spacing is then in between ${\cal O}(1)$, for a gap,
and ${\cal O} ( N^{-1} )$, for nonzero eigenvalue density.
The purpose of this paper is to test our hypothesis in continuum YM
in 3 Euclidean dimensions by numerical Monte Carlo simulation on the lattice.
Critical behavior induced by taking an extensive parameter to infinity is often
tested by numerically confirming the presumed universal approach to
the respective thermodynamic limit. In the case of ordinary second order phase
transitions one may seek to identify a finite-size scaling function.
Something similar needs to be done in the case of large $N$ transitions.
The hypothesis we need to test says that the complicated 3 and 4 dimensional
cases have the same universal behavior as the exactly solvable 2 dimensional
case. In two dimensions we know much about the approach to infinite $N$,
where the transition has
been established long ago. We refer to this transition as
the DO transition, after Durhuus and Olesen who discovered it~\cite{duol}.
Since we are using lattice methods
to learn about continuum YM theory, we need to take the zero lattice spacing and
infinite volume limits. We work under the assumption that these limits interact
simply with the large $N$ limit. This is a standard assumption, and our results
are consistent with it.
We first derive the universal behavior of a specific observable related to the
Wilson matrix in two dimensions, where we work directly at infinite volume and
in the continuum.
We then take
finite $N$ ``data'' arrived at by employing exact analytical formulas
and numerically check if this data
exhibits the universal asymptotic behavior in the crossover.
With these tools
in hand we proceed to a numerical project in
three dimensions, where in addition we need to handle
statistical errors coming from the stochastic approximations used for the path
integrals and systematic errors having to do with not working directly in the
continuum and not at infinite volume. We exploit the relative
ease to do simulations in three dimensions to make the statistical
errors much smaller than absolutely necessary. Also, working at large $N$
reduces finite volume effects, leaving the approach to continuum as the main new
ingredient we need to get under control.
\section{Two dimensions: basics.}
The Wilson loop matrix in YM on the infinite plane is given by
the product of many unitary matrices close to unity. Using methods first
introduced by Migdal~\cite{migdal}, the matrix associated with a
curve that does not intersect itself is seen to be given by a product of a large
number of independently and identically distributed (i.i.d.) unitary matrices. These
unitary matrices are distributed in a small width around the unit matrix
and the probability distribution of the Wilson loop matrix depends on a
single parameter made out of the number of matrices and the width of their
distribution. This parameter is in one to one correspondence to the
area enclosed by the loop in units of the gauge coupling constant.
The multiplicative matrix model has been introduced by Janik and
Wieczorek~\cite{janik} who employed a solution method
similar to that of Gopakumar and Gross~\cite{gogr};
we shall refer to it as the JW model.
Its precise definition is as follows: Let
$U_i$ with $i=1,..,n$ be i.i.d. $N\times N$ unitary random matrices.
$U_j=e^{i\epsilon H_j}$, where the hermitian matrix $H_j$ is either
unconstrained or traceless
and distributed with a normalized probability density given by:
\begin{equation}
P(U_j) = {\cal N} e^{-\frac{N}{2} {\rm Tr} H_j^2}
\end{equation}
The parameter $\epsilon$ obeys $0<\epsilon \ll 1$ and the integer $n$ is large,
so that the product $\epsilon^2 n$ is finite. We shall take the limit
$n\to\infty$, $\epsilon\to 0$ with $t=n\epsilon^2$ kept fixed. $t$ is
related to the unit-less area mentioned above. The relation will be
made precise later on.
The Wilson loop matrix is given by:
\begin{equation}
W=\prod_{i=1}^n U_i
\end{equation}
It turns out that the simplest gauge invariant observable made out
of $W$ which exhibits universal approach to critical behavior is the
average characteristic polynomial of $W$,
$\langle\det(z-W)\rangle$.
The average characteristic polynomial is in one to one correspondence with
the set of traces of $W$ in all totally antisymmetric representations of
$SU(N)$ or $U(N)$. Nontrivial representations with zero $N$-ality do not enter.
We shall derive integral and
polynomial expressions for
\begin{equation}
Q_N(z,t)=\lim_{n\to\infty,\epsilon\to 0}
\left< \det(z-W)\right>|_{t=\epsilon^2n\ \ {\rm fixed}}
\end{equation}
that are valid for all $N$, separately for $SU(N)$ and for $U(N)$.
These results are used to find the $N\to\infty$ limit, find a critical loop
size in that limit, and then zoom into the vicinity of this infinite $N$
critical point. This vicinity is described by a ``double scaling limit'' of
the average characteristic polynomial. The double scaling limit turns
out to be identical for $SU(N)$ and for $U(N)$.
\subsection{The average characteristic polynomial of the Wilson
matrix.}
We will derive the integral relation
\begin{equation}
Q_N(z,t)
=\cases{
\sqrt{\frac{N\tau}{2\pi}}
\int_{-\infty}^\infty d\nu
e^{-\frac{N}{2}\tau\nu^2}
\left[z-e^{-\tau\nu
-\frac{\tau}{2}
}
\right]^N & for $SU(N)$ \cr
\sqrt{\frac{Nt}{2\pi}}
\int_{-\infty}^\infty d\nu
e^{-\frac{N}{2}t\nu^2}
\left[z-e^{-t\nu
-\frac{\tau}{2}
}
\right]^N & for $U(N)$ \cr}\label{inteqn}
\end{equation}
where $\tau=t\left(1+\frac{1}{N}\right)$.
Given this relation, we can perform a binomial expansion
and then compute the integral to obtain the polynomial
relation
\begin{equation}
Q_N(z,t)
=\cases{
\sum_{k=0}^N
\pmatrix{N\cr k\cr} z^{N-k} (-1)^k e^{-\frac{\tau k(N-k)}{2N}} &
for $SU(N)$ \cr
\sum_{k=0}^N
\pmatrix{N\cr k\cr} z^{N-k} (-1)^k e^{-\frac{t k(N+1-k)}{2N}} &
for $U(N)$ \cr
}\label{poleqn}
\end{equation}
Before we proceed to give the details of the derivation
of (\ref{inteqn}), we make some observations with regard
to the polynomial expressions for $Q_N(z,t)$.
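The two representations can also be cross-checked numerically. The following sketch (our own check; the choices $N=6$, $t=2$ and the values of $z$ are arbitrary) evaluates the $SU(N)$ integral in (\ref{inteqn}) by Gauss--Hermite quadrature and compares it with the polynomial form in (\ref{poleqn}):

```python
import numpy as np
from math import comb

def Q_poly_su(z, t, N):
    # Polynomial form of the averaged characteristic polynomial, SU(N).
    tau = t * (1.0 + 1.0 / N)
    return sum(comb(N, k) * z**(N - k) * (-1)**k
               * np.exp(-tau * k * (N - k) / (2.0 * N)) for k in range(N + 1))

def Q_int_su(z, t, N, nodes=80):
    # Gauss-Hermite quadrature of the integral representation:
    # sqrt(N tau / 2 pi) Int dnu e^{-N tau nu^2/2} (z - e^{-tau nu - tau/2})^N,
    # after the substitution nu = x * sqrt(2/(N tau)).
    tau = t * (1.0 + 1.0 / N)
    x, w = np.polynomial.hermite.hermgauss(nodes)
    nu = x * np.sqrt(2.0 / (N * tau))
    vals = (z - np.exp(-tau * nu - tau / 2.0))**N
    return float(np.dot(w, vals) / np.sqrt(np.pi))
```

Since the binomial expansion of the integrand involves only exponentials $e^{-k\tau\nu}$, the quadrature converges very rapidly and the two forms agree to machine precision.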
\subsubsection{Heat-kernel measure for $W$ in the $SU(N)$ case.}
The definition of the $SU(N)$ random matrix ensemble
produces an evolution in ``time'' of the probability distribution of the
product matrix over the manifold of $SU(N)$. Invariance properties
and locality imply that, up to some rescaling of the variable $t$ to
a variable $\tau$, the
probability distribution of the product matrix,
$W$, will be given by the
heat-kernel for $SU(N)$:
\begin{equation}
P(W,\tau ) dW = \sum_R d_R \chi_R (W) e^{-\tau C_2 (R)} dW
\end{equation}
Here, $dW$ is the Haar measure on $SU(N)$, $R$ labels the irreducible
representations of $SU(N)$, $C_2(R)$ is the second order Casimir
in the representation $R$ and $\chi_R (W)$ is the character
of the representation $R$ evaluated on the matrix $W$, with the
convention that $\chi_R ({\mathbf 1})=d_R$ with $d_R$ being the
dimension of $R$. The normalization convention for the Casimir
operator is related to the scale freedom in $t$.
The normalization of the Haar
measure $dW$ is such that the characters $\chi_R (W)$ are orthonormal with
respect to $dW$. Finally, the probability distribution is properly
normalized such that $\int P(W,\tau)dW =1$.
Let us now focus on the k-fold antisymmetric representations,
$k=1,..,N$ and label them by $k$. $C_2 (k) = A_N \frac {k(N-k)}{2N}$
and $d_k={N\choose k}$. We absorb $A_N$ in the definition of $\tau$.
If the eigenvalues of $W$ are
$e^{i\theta_1},e^{i\theta_2},e^{i\theta_3},....,e^{i\theta_N}$,
and we define the moments, $M_k(t)$, by
\begin{equation}
M_k(t) =
\langle
\sum_{1\le j_1 < j_2 < j_3....< j_k\le
N}e^{i(\theta_{j_1}+\theta_{j_2}+...+\theta_{j_k})}\rangle,
\label{moments}
\end{equation}
it follows that
\begin{equation}
M_k(t)=\langle \chi_k (W) \rangle =
d_k e^{-\tau C_2(k)} = {N\choose k} e^{-\frac{\tau k(N-k)}{2N}}
\label{cheqn}
\end{equation}
Next we note that
\begin{equation}
Q_N(z,t) = \langle \prod_{j=1}^N (z-e^{i\theta_j})\rangle
= \sum_{k=0}^N z^{N-k}
(-1)^k M_k(t) \label{momeqn}
\end{equation}
and we are consistent with the SU(N) case in (\ref{poleqn})
if we use (\ref{cheqn}) above.
This consistency with a heat-kernel probability distribution
for $W$ provides a check of the derivation of
(\ref{inteqn}) in the $SU(N)$
case.
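An independent check (ours, not part of the original derivation; the parameters $N=3$, $n=100$, $t=1$ and the sample size are arbitrary) is to simulate the multiplicative matrix model directly and compare the measured $\langle {\rm Tr}\, W\rangle/N$ with the heat-kernel prediction $M_1(t)/N=e^{-\tau(N-1)/2N}$:

```python
import numpy as np

def su_factor(N, eps, rng):
    # One factor U_j = exp(i eps H_j): H_j traceless hermitian, drawn
    # with density proportional to exp(-(N/2) Tr H^2).
    X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (X + X.conj().T) / (2.0 * np.sqrt(N))    # <|H_ij|^2> = 1/N
    H -= (np.trace(H).real / N) * np.eye(N)      # project out the trace: SU(N)
    lam, V = np.linalg.eigh(H)
    return (V * np.exp(1j * eps * lam)) @ V.conj().T

def mc_m1_over_N(N=3, n=100, t=1.0, samples=400, seed=1):
    # Monte Carlo estimate of <Tr W>/N for W = U_1 ... U_n, t = n eps^2.
    eps = np.sqrt(t / n)
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(samples):
        W = np.eye(N, dtype=complex)
        for _ in range(n):
            W = W @ su_factor(N, eps, rng)
        vals.append(np.trace(W).real / N)
    return float(np.mean(vals))
```

With $t=n\epsilon^2$ fixed, the estimate approaches the heat-kernel value $e^{-t(N^2-1)/2N^2}$ up to ${\cal O}(\epsilon^2)$ discretization bias and Monte Carlo noise.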
\subsubsection{$Q_N(z,t)$ does not self-average at finite
$N$: $U(N)$ case.}
The moments $M_k$ defined in (\ref{moments}) are given by
\begin{equation}
M_k(t)=\cases { e^{-\frac{\tau k(N-k)}{2N} } & for $SU(N)$ \cr
e^{-\frac{t k(N+1-k)}{2N} } & for $U(N)$ \cr}
\label{mkeqn}
\end{equation}
as seen by matching (\ref{momeqn}) with (\ref{poleqn}).
We note that $M_k(t)=M_{N-k}(t)$ only for $SU(N)$
since $W$ and $W^\dagger$ are equally probable and $\det W=1$.
There is insufficient information contained in the moments $M_k$ to
determine the joint probability distribution of the $\theta_i$, or
even the average resolvent, $\langle
\sum_{j=1}^N\frac{1}{z-e^{i\theta_j}}\rangle$.
Nevertheless,
$Q_N(z,t)$ is a polynomial in $z$ and its zeros are determined
by the coefficients $M_k$.
Obviously, for any fixed $W$, the $e^{i\theta_j}$ are
the zeros of $\det(z-W)$. Therefore, we expect the zeros of
$Q_N(z,t)$ to represent in some manner the statistical
properties of the $e^{i\theta_j}$. For any finite $N$ there is no way
to obtain the exact marginal distribution of even just a single
eigenvalue of $W$ (``one point function'') from the average
characteristic polynomial. However, in the large $N$ limit, this
often becomes possible.
It is obvious that in our case, at finite $N$,
\begin{equation}
\log \langle \det(z-W)\rangle \ne \langle\log \det(z-W)\rangle
\end{equation}
for $U(N)$.
This is seen already by comparing the $z^0$ term on the two sides.
That $\langle \det W\rangle =M_N(t)=e^{-\frac{t}{2}}$
follows from (\ref{momeqn}) and (\ref{mkeqn}).
It is easy to understand
the above result.
The probability of any one of
the hermitian matrices $H_j$ factorizes into a factor depending only
on the traceless part of $H_j$ ($SU(N)$ part)
and another depending just on ${\rm Tr} H_j$ ($U(1)$ part):
${\rm Tr} H^2 = {\rm Tr} (H-\frac{1}{N}{\rm Tr} H)^2 + \frac{1}{N} ({\rm Tr} H)^2$.
$\det W$ only depends on the $U(1)$ part and since this is the
commuting part, we get
\begin{equation}
\langle \det(W)\rangle=e^{-\frac{n\epsilon^2}{2}}=e^{-\frac{t}{2}}
\end{equation}
On the other hand, $\langle \log \det(W)\rangle =0$.
Obviously, this example is specific to $U(N)$ and would not hold in
the $SU(N)$ case. The observation is nevertheless useful as it
provides an easy check of our derivation to follow.
This might be viewed as a $\frac{1}{N}$
effect, since one would expect $\langle\det (W)\rangle\sim e^{-N(\cdots)}$ at large
$N$. This is consistent with the difference between $U(N)$ and $SU(N)$
being of lower order in $\frac{1}{N}$. However, since we are going to
look at a more subtle large $N$ limit, where we amplify a critical
regime introducing extra dependences on $N$ into some of the
parameters $z$ and $t$ (a ``double scaling'' limit) we need to be
careful about the distinction between $U(N)$ and $SU(N)$. Eventually
we shall see that the difference between $U(N)$ and $SU(N)$ indeed
does not matter as the exponents $\nu$ will be smaller than one. Thus,
the universal corrections to the singular behavior at the critical
point are larger than the $\frac{1}{N}$ correction differentiating
$U(N)$ from $SU(N)$.
\subsubsection{Zeros of $Q_N(z,t)$ and the
Lee-Yang theorem~\cite{leeyang}.}
We have commented already that the information about the true
distribution of eigenvalues of the stochastic Wilson matrix is
represented by the average characteristic polynomial only in a
statistical sense, in that it would reproduce the moments contained in
the coefficients of the characteristic polynomial, but not necessarily
other spectral properties. We show now that the roots of the average
characteristic polynomial in the case of $SU(N)$ are on the unit
circle, similarly to the roots of every instance of the random Wilson
matrix. This goes a long way toward justifying that the spectrum of
the average characteristic polynomial itself can be seen as an
approximation of the average spectrum of the Wilson loop matrix.
The polynomial expression for the $SU(N)$ in (\ref{poleqn})
is
\begin{equation}
Q_N(z,t) = (-1)^N e^{-\frac{N\tau}{8}} (-z)^{\frac{N}{2}}
\sum_{k=0}^N {N\choose k} (-z)^{\frac{N}{2}-k} e^{\frac{\tau}{2N}
(k-\frac{N}{2})^2}
\label{q-nn}
\end{equation}
Introduce now $N$ Ising spins,
$\sigma_i=\pm\frac{1}{2}, ~~i=1,...N$ and the magnetization
$M(\sigma)=\sum_{i=1}^N \sigma_i$. Then,
\begin{equation}
M(\sigma)=\frac{N}{2}-k
\end{equation}
where $k$ is the number of spins equal to $-\frac{1}{2}$ and
varies between $0$ and $N$.
Taking into account the number of configurations with $k$ spins
equal to $-\frac{1}{2}$ we get:
\begin{equation}
Q_N(z,t) = (-1)^N e^{-\frac{N\tau}{8}} (-z)^{\frac{N}{2}}
\sum_{\sigma_1,\sigma_2,...\sigma_N=\pm\frac{1}{2}} (-z)^{M(\sigma)}
e^{\frac{\tau}{2N} M^2(\sigma)}
\end{equation}
The self interaction terms from the
magnetization squared can be extracted as a further prefactor. What
remains is the partition function of an Ising model on an $N$ vertex
graph where every vertex is connected to every other vertex.
\begin{equation}
Z_N(z,t)=
Q_N(z,t)(-1)^N e^{\frac{(N-1)\tau}{8}} (-z)^{-\frac{N}{2}}=
\sum_{\sigma_1,\sigma_2,...\sigma_N =\pm\frac{1}{2}}
e^{\ln(-z)\sum_i \sigma_i}
e^{\frac{\tau}{N} \sum_{i > j} \sigma_i \sigma_j}
\end{equation}
The interaction is ferromagnetic for positive $\tau$ and there is a
complex external magnetic field $\log(-z)$. The conditions of the
Lee-Yang theorem~\cite{leeyang} are therefore fulfilled and all roots
of this partition function (and hence of the polynomial $Q_N(z,t)$)
lie on
the unit circle.
This is a result about the finite $N$ average characteristic
polynomial, which holds for all $N$ in the $SU(N)$ case, but, as
expected and explained earlier, cannot and does not hold in the $U(N)$
case, where the circle on which the eigenvalues lie shrinks
exponentially with $t$.
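Both statements are easy to confirm numerically (our check; the choices $N=8$, $t=2$ are arbitrary). Building the coefficients of $Q_N(z,t)$ from (\ref{poleqn}), the $SU(N)$ roots have unit modulus, while the $U(N)$ roots lie on a circle of radius $e^{-t/2N}$: the change of variable $z = e^{-t/2N} y$ turns the $U(N)$ polynomial into $e^{-t/2}$ times an $SU(N)$-type polynomial in $y$ (with $t$ in place of $\tau$), whose roots are again on the unit circle.

```python
import numpy as np
from math import comb

def q_coeffs(N, t, group):
    # Coefficients of Q_N(z,t), in descending powers of z, from the
    # polynomial representation for SU(N) / U(N).
    tau = t * (1.0 + 1.0 / N)
    if group == "su":
        return [comb(N, k) * (-1)**k * np.exp(-tau * k * (N - k) / (2.0 * N))
                for k in range(N + 1)]
    return [comb(N, k) * (-1)**k * np.exp(-t * k * (N + 1 - k) / (2.0 * N))
            for k in range(N + 1)]

N, t = 8, 2.0
su_roots = np.roots(q_coeffs(N, t, "su"))
un_roots = np.roots(q_coeffs(N, t, "un"))
```

The root moduli reproduce the Lee--Yang statement for $SU(N)$ and the exponentially shrinking circle for $U(N)$.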
\subsubsection {Derivation of the integral representation for
$Q_N(z,t)$.}
We proceed to derive (\ref{inteqn}) through a series
of statements.
We will need the following external field
integrals over $H$ as part of our derivation. For $U(N)$ we have
\begin{equation}
\left\langle e^{i\epsilon {\rm Tr}(HX)}\right\rangle =
e^{-\frac{\epsilon^2}{2N} {\rm Tr}(X^2)}
\label{unext}
\end{equation}
and, for $SU(N)$ we have
\begin{equation}
\left\langle e^{i\epsilon {\rm Tr}(HX)}\right\rangle =
e^{-\frac{\epsilon^2}{2N} {\rm Tr}(X^2)
+\frac{\epsilon^2}{2N^2} ({\rm Tr}(X))^2}
\label{sunext}
\end{equation}
The $SU(N)$ formula gives
$\left\langle e^{i\epsilon {\rm Tr}(HX)}\right\rangle=1$
for $X$ proportional to the unit matrix, as expected,
since ${\rm Tr}(HX)\propto {\rm Tr}(H)=0$.
An essential tool in our derivation is a path integral
representation of the characteristic polynomial
which is set up in the following statement.
\noindent {\bf Statement} I:
\begin{equation}
\det (z-W) =
\int \prod_{i=1}^n [ d\psi_id\bar\psi_i]
e^{\sum_{i=1}^n \left[ w\bar\psi_i \psi_i -\bar\psi_i U_i \psi_{i+1}\right]}
\end{equation}
where $\bar\psi_i$, $\psi_i$ are Grassmann variables,
$z=w^n$, $W=\prod_{i=1}^n U_i$ and $\psi_{n+1}=\psi_1$.
As $n\to\infty$, $w\to 1$ while the complex
variable $z=w^n$ is held fixed.
\noindent {\bf Proof:}
This statement reflects the obvious gauge invariance of the Grassmann
system, in addition to a $Z(n)$ invariance under $w\to w e^{\frac{2\pi
i}{n}}$. The proof is by recursion. One step in the recursion process
is
\begin{equation}
\int d\psi_j d\bar\psi_j e^{w\bar\psi_j\psi_j -\bar\psi_jU_j\psi_{j+1}
-\bar\psi_n R_j \psi_j}
= w^N e^{-\bar\psi_n R_{j+1} \psi_{j+1}}
\end{equation}
with
\begin{equation}
R_{j+1} = \frac{1}{w} R_jU_j.
\end{equation}
Noting that $R_1=U_n$, we can repeat the single step above
to integrate out all Grassmann
variables except $\bar\psi_n$ and $\psi_n$ and obtain
the desired result of Statement I:
\begin{eqnarray}
\int \prod_{i=1}^n [ d\psi_id\bar\psi_i]
e^{\sum_{i=1}^n \left[ w\bar\psi_i \psi_i -\bar\psi_i U_i \psi_{i+1}\right]}
&=& w^{N(n-1)} \int d\psi_n d\bar\psi_n e^{w\bar\psi_n\psi_n
- \bar\psi_n R_n \psi_n}\cr
&=&w^{N(n-1)} \det (w - R_n)\cr
&=& \det (w^n - w^{n-1}R_n)\cr
&=& \det (z - W)
\end{eqnarray}
The derivation of (\ref{inteqn}) proceeds by first performing the average over $U_j$,
followed by the integration over the Grassmann variables.
It will be useful to have an additional identity
in the form of another Grassmann integral
as stated below.
\noindent {\bf Statement} II:
For $k>1$,
\begin{eqnarray}
e^{-\bar\psi Y^k \chi}
&=&\int \prod_{l=1}^{k-1} d\bar\eta^k_l d\eta^k_l
e^{-\sum_{l=1}^{k-1} \bar\eta^k_l \eta^k_l -\bar\psi Y \eta^k_1
+\sum_{l=1}^{k-2} \bar\eta^k_l Y \eta^k_{l+1} +\bar\eta^k_{k-1} Y\chi}\cr
&\equiv& \left < e^{-\bar\psi Y \eta^k_1
+\sum_{l=1}^{k-2} \bar\eta^k_l Y \eta^k_{l+1} +\bar\eta^k_{k-1} Y\chi}
\right>_\eta
\end{eqnarray}
\noindent {\bf Proof:} For each $k>1$, the above statement involves $(k-1)$ pairs of
Grassmann variables, denoted by $\bar\eta_l^k$ and $\eta_l^k$ with
$l=1,\cdots, (k-1)$. The proof of this statement also works
by recursion. One step in the recursion process is
\begin{equation}
\int d\bar\eta_l^k d\eta_l^k
e^{-\bar\eta_l^k\eta_l^k -\bar\psi Y^l \eta_l^k
+\bar\eta_l^k Y \eta_{l+1}^k}
= e^{-\bar\psi Y^{l+1} \eta_{l+1}^k}
\end{equation}
Identifying $\eta_k^k=\chi$, we perform the above recursion
$(k-1)$ times, starting from $l=1$ to $l=(k-1)$ to obtain
the result of Statement II.
We can use the result of the Statement II to perform the
integral over $U$. We focus on one such integral in
the following statement.
\noindent {\bf Statement} III:
\begin{equation}
\left < e^{-\bar\psi U \chi} \right> =
e^{-\bar\psi\chi}
\left <
\cases{
e^{-\frac{\epsilon^2}{2N}{\rm Tr}X^2+
\frac{\epsilon^2}{2N^2}({\rm Tr} X)^2} & for $SU(N)$ \cr
e^{-\frac{\epsilon^2}{2N}{\rm Tr}X^2} & for $U(N)$ \cr}
\right >_\eta
\end{equation}
where
the matrix $X$ is
\begin{equation}
X_{ij} =
\chi_i \bar\psi_j
+\sum_{k=2}^{\infty} \frac{1}{(k!)^{1/k}}(\eta_1^k)_i\bar\psi_j
-\sum_{k=3}^\infty\sum_{l=1}^{k-2} \frac{1}{(k!)^{1/k}}
(\eta_{l+1}^k)_i(\bar\eta_l^k)_j
-\sum_{k=2}^\infty \frac{1}{(k!)^{1/k}}\chi_i(\bar\eta_{k-1}^k)_j
\end{equation}
\noindent {\bf Proof:}
\begin{eqnarray}
\left < e^{-\bar\psi U \chi} \right> &=&
\left < e^{-\bar\psi e^{i\epsilon H}\chi} \right >
= \left <
\prod_{k=0}^{\infty}
e^{-\bar\psi \left[\frac{i\epsilon H}{(k!)^{1/k}}\right]^k \chi}
\right > \cr
&=&
e^{-\bar\psi\chi}
\int \prod_{k=2}^\infty \prod_{l=1}^{k-1} d\bar\eta^k_l d\eta^k_l
e^{-\sum_{k=2}^\infty \sum_{l=0}^{k-1} \bar\eta_l^k \eta_l^k}
\left< e^{-i\epsilon{\rm Tr}HX}\right>\cr
&=&
e^{-\bar\psi\chi}
\left <
\cases{
e^{-\frac{\epsilon^2}{2N}{\rm Tr}X^2+
\frac{\epsilon^2}{2N^2}({\rm Tr} X)^2} & for $SU(N)$ \cr
e^{-\frac{\epsilon^2}{2N}{\rm Tr}X^2} & for $U(N)$ \cr}
\right >_\eta \label{stat3pr}
\end{eqnarray}
We have used Statement II to obtain the third equality in
(\ref{stat3pr}) and we have used (\ref{unext}) and
(\ref{sunext}) to obtain the
fourth equality in (\ref{stat3pr}).
We can now perform the integrals over the full set of
$\eta$ and $\bar\eta$ variables to get the result of
the following statement.
\noindent {\bf Statement} IV
\begin{equation}
\left < e^{-\bar\psi U \chi} \right> =
\sqrt{\frac{N}{2\pi}} \int_{-\infty}^\infty
d\lambda e^{-\frac{N}{2}\lambda^2}
e^{-\left[1-\lambda\epsilon\sqrt{1+\frac{u}{N}}
- \frac{1}{2}\epsilon^2\left(1-\frac{u}{N^2}\right)\right]
\bar\psi\chi}
\end{equation}
with $u=0$ for $U(N)$ and $u=1$ for $SU(N)$.
\noindent {\bf Proof:}
In the limit of $n\to\infty$ and $\epsilon\to 0$, we can write
\begin{equation}
\left <
e^{-\frac{\epsilon^2}{2N}{\rm Tr}X^2+
\frac{\epsilon^2}{2N^2}({\rm Tr} X)^2}\right>_\eta
=
e^{-\frac{\epsilon^2}{2N}\left<{\rm Tr}X^2\right>_\eta+
\frac{\epsilon^2}{2N^2}\left< ({\rm Tr} X)^2\right>_\eta}
\label{dissun}
\end{equation}
for $SU(N)$
and
\begin{equation}
\left <
e^{-\frac{\epsilon^2}{2N}{\rm Tr}X^2}\right>_\eta
=
e^{-\frac{\epsilon^2}{2N}\left<{\rm Tr}X^2\right>_\eta}
\label{disun}
\end{equation}
for $U(N)$.
The connected correlators ignored in the exponents above
result in terms of the form
\begin{equation}
F(\zeta) = \epsilon^2f_1(\epsilon)\zeta+\epsilon \sum_{k=2}^\infty
f_k(\epsilon)\zeta^k
\label{extraf}
\end{equation}
where $\zeta=\epsilon\bar\psi\chi$ and $f_k(\epsilon)$,
$k=1,\cdots\infty$ have a power series expansion in $\epsilon$
with only non-negative powers.
That the terms can only depend on the fermion bilinear $\bar\psi\chi$
is evident from symmetry arguments.
Since $X$ appears with one power of $\epsilon$
on the left hand side of (\ref{dissun}) and (\ref{disun}), we can
associate a $\sqrt{\epsilon}$ with each fermion. Every term
in the connected correlator that contributes
should have at least one term
of the form $\bar\eta_l^k\eta_l^k$ that got integrated.
This gives at least one
extra power of $\epsilon$. If the connected correlator
has to result in $\zeta$, the term should have at least
two terms of the form $\bar\eta_l^k\eta_l^k$ since
the relevant terms comes from $\left({\rm Tr} X^2\right)^2$,
$\left({\rm Tr} X\right)^4$, ${\rm Tr} X^2\left({\rm Tr} X\right)^2$
or higher powers of $X$.
The extra powers of $\epsilon$ in (\ref{extraf})
result in the vanishing of
this term in the $\epsilon\to 0$ and $n\to\infty$
limit.
Even though $X$ has an infinite number of terms, there are
only two terms in $\left < {\rm Tr} X^2\right >_\eta$
and two terms in $\left < ({\rm Tr} X)^2\right >_\eta$:
\begin{eqnarray}
\left < {\rm Tr} X^2\right >_\eta &=& -(\bar\psi\chi)^2 -N \bar\psi\chi \cr
\left < ({\rm Tr} X)^2\right >_\eta &=& (\bar\psi\chi)^2 - \bar\psi\chi
\label{trx2}
\end{eqnarray}
Inserting (\ref{trx2}) into (\ref{dissun}) and
(\ref{disun}) and the result into Statement III
gives us
\begin{equation}
\left < e^{-\bar\psi U \chi} \right>
=
e^{-\bar\psi\chi}
e^{\frac{\epsilon^2}{2N}\left(1+\frac{u}{N}\right)
(\bar\psi\chi)^2+\frac{\epsilon^2}{2}\left(1-\frac{u}{N^2}\right)
\bar\psi\chi};
\label{oe2}
\end{equation}
with $u=0$ for $U(N)$ and $u=1$ for $SU(N)$.
Since,
\begin{equation}
\sqrt{\frac{N}{2\pi}}
\int_{-\infty}^\infty d\lambda e^{-\frac{N}{2}\lambda^2
+\lambda\epsilon\sqrt{1+\frac{u}{N}}\bar\psi\chi}
= e^{\frac{\epsilon^2}{2N}\left(1+\frac{u}{N}\right)(\bar\psi\chi)^2}
\end{equation}
the result in (\ref{oe2}) reduces to Statement IV.
Had we kept the term
$F(\zeta)$ of (\ref{extraf}), we
could have changed the factor $e^{-{\frac{N}{2}\lambda^2}}$ in the integrand
over the auxiliary field $\lambda$ to
$e^{-{\frac{N}{2}\lambda^2}}(1+P(\epsilon,\lambda))$ so as to
reproduce those terms.
Alternatively, one
can introduce an auxiliary field capturing the entire
$F(\zeta)$
dependence using an inverse Laplace transform and interpreting it
perturbatively (that is not worrying about the convergence of the
$\lambda$ integration, as anything beyond quadratic order is assumed
to get expanded and truncated according to the power of
$\epsilon$). In either case one gets extra terms that will vanish in
the correlated, large $n$ -- small $\epsilon$, limit.
Now, we can use Statement IV to perform each $U_i$ integral appearing
in the expression for $\det(z-W)$ in Statement I resulting in
the following statement.
\noindent {\bf Statement} V
\begin{eqnarray}
\left< \det(z-W)\right> =&&\left(\frac{N}{2\pi}\right)^\frac{n}{2}
\int \prod_{i=1}^n d\lambda_i e^{-\frac{N}{2}\sum_{i=1}^n
\lambda_i^2} \cr
&&\int \prod_{i=1}^n [ d\psi_id\bar\psi_i]
e^{\sum_{i=1}^n \left[ w\bar\psi_i \psi_i -
\left[1-\lambda_i\epsilon\sqrt{1+\frac{u}{N}}
-\frac{\epsilon^2}{2}\left(1-\frac{u}{N^2}\right)\right]\bar\psi_i\psi_{i+1}\right]}
\end{eqnarray}
We can now perform the Grassmann integrals exactly following
the proof of Statement I for $1\times 1$ matrices. The result
is stated below.
\noindent {\bf Statement} VI
\begin{eqnarray}
\left< \det(z-W)\right> &=&\left(\frac{N}{2\pi}\right)^\frac{n}{2}
\int \prod_{i=1}^n d\lambda_i e^{-\frac{N}{2}\sum_{i=1}^n
\lambda_i^2} \cr
&&\left[z-\prod_{i=1}^n \left[1-\lambda_i\epsilon\sqrt{1+\frac{u}{N}}
-\frac{\epsilon^2}{2}\left(1-\frac{u}{N^2}\right)\right]
\right]^N
\end{eqnarray}
We are now set to prove the main result, namely, (\ref{inteqn}).
We start by exponentiating the term inside the product
of Statement VI.
One then needs to
take into account a term of order $\epsilon^2\lambda_i^2$ which
makes a finite contribution. This term is inserted into the
exponentiated form in such a manner that the agreement between the
exponentiated expression and the original one holds also at order $\epsilon^2$.
\begin{eqnarray}
\left< \det(z-W)\right> &=&\left(\frac{N}{2\pi}\right)^\frac{n}{2}
\int \prod_{i=1}^n d\lambda_i e^{-\frac{N}{2}\sum_{i=1}^n
\lambda_i^2} \cr
&&\left[z-e^{-\epsilon\sqrt{1+\frac{u}{N}}\sum_i\lambda_i
-\frac{n\epsilon^2}{2}\left(1-\frac{u}{N^2}\right)
-\frac{\epsilon^2}{2}\left(1+\frac{u}{N}\right)\sum_i \lambda_i^2
}
\right]^N
\end{eqnarray}
Let $\Lambda=(\lambda_1,\lambda_2,\cdots,\lambda_n)$ and
let $r_1,r_2,\cdots,r_n$ be an orthonormal basis with
$r_1=\frac{1}{\sqrt{n}}(1,\cdots,1)$.
Finally, let $r_i\cdot\Lambda=\xi_i$. Then, using $t=n\epsilon^2$,
\begin{equation}
\left< \det(z-W)\right> =\left(\frac{N}{2\pi}\right)^\frac{n}{2}
\int \prod_{i=1}^n d\xi_i e^{-\frac{N}{2}\sum_{i=1}^n
\xi_i^2}
\left[z-e^{-\sqrt{t}\sqrt{1+\frac{u}{N}}\xi_1
-\frac{t}{2}\left(1-\frac{u}{N^2}\right)
-\frac{\epsilon^2}{2}\left(1+\frac{u}{N}\right)\sum_i \xi_i^2
}
\right]^N
\end{equation}
Now, let $\xi_1=\sqrt{t}\mu$ and $\xi_k=\sqrt{n}\mu_k$ for $k=2,\cdots,n$.
Then, again using $t=n\epsilon^2$,
\begin{eqnarray}
\left< \det(z-W)\right> =&&\left(\frac{N}{2\pi}\right)^\frac{n}{2}
\sqrt{t} n^{\frac{n-1}{2}}\int d\mu
\int \prod_{i=2}^n d\mu_i e^{-\frac{N}{2}t\mu^2-\frac{N}{2}n\sum_{i=2}^n
\mu_i^2} \cr
&& \left[z-e^{-t\sqrt{1+\frac{u}{N}}\mu
-\frac{t}{2}\left(1-\frac{u}{N^2}\right)
-\frac{t\epsilon^2}{2}\left(1+\frac{u}{N}\right)\mu^2
-\frac{t}{2}\left(1+\frac{u}{N}\right)\sum_{i=2}^n \mu_i^2
}
\right]^N
\end{eqnarray}
Next, we go to polar coordinates in $\mu_i$, $i=2,\cdots,n$, and
set $r^2=\sum_{i=2}^n \mu_i^2$. Then we get,
\begin{eqnarray}
\left< \det(z-W)\right> =&&\left(\frac{N}{2\pi}\right)^\frac{n}{2}
\sqrt{t} n^{\frac{n-1}{2}} \frac{2\pi^{\frac{n-1}{2}}}
{\Gamma\left(\frac{n-1}{2}\right)}
\int_{-\infty}^\infty d\mu
\int_0^\infty dr r^{n-2}
e^{-\frac{N}{2}t\mu^2-\frac{N}{2}n r^2} \cr
&& \left[z-e^{-t\sqrt{1+\frac{u}{N}}\mu
-\frac{t}{2}\left(1-\frac{u}{N^2}\right)
-\frac{t\epsilon^2}{2}\left(1+\frac{u}{N}\right)\mu^2
-\frac{t}{2}\left(1+\frac{u}{N}\right)r^2
}
\right]^N
\end{eqnarray}
For large $n$, we can perform a saddle point calculation of
the $r$ integral.
To leading order in $n$ the saddle point is at
$r_c=\sqrt{\frac{1}{N}}$ and we get
\begin{eqnarray}
\left< \det(z-W)\right> =&&\left(\frac{N}{2\pi}\right)^\frac{n}{2}
\sqrt{t} n^{\frac{n-1}{2}} \frac{2\pi^{\frac{n-1}{2}}}
{\Gamma\left(\frac{n-1}{2}\right)}
N^{-\frac{n}{2}}e^{-\frac{n}{2}}\sqrt{\frac{\pi}{nN}}N
\int_{-\infty}^\infty d\mu
e^{-\frac{N}{2}t\mu^2} \cr
&& \left[z-e^{-t\sqrt{1+\frac{u}{N}}\mu
-\frac{t}{2}\left(1+\frac{1}{N}\right)
-\frac{t\epsilon^2}{2}\left(1+\frac{u}{N}\right)\mu^2
}
\right]^N
\end{eqnarray}
Now we take the limit, $n\to \infty$, $\epsilon\to 0$ for a fixed $t$
and we get
\begin{equation}
\left< \det(z-W(t))\right> =\sqrt{\frac{Nt}{2\pi}}
\int_{-\infty}^\infty d\mu
e^{-\frac{N}{2}t\mu^2}
\left[z-e^{-t\sqrt{1+\frac{u}{N}}\mu
-\frac{t}{2}\left(1+\frac{1}{N}\right)
}
\right]^N
\end{equation}
Finally, we define
$\tau=t\left(1+\frac{1}{N}\right)$ and
$\nu=\frac{\mu}{\sqrt{1+\frac{u}{N}}}$.
Then the above equation reduces to
(\ref{inteqn}).
We note that $U(N)$ and $SU(N)$ become
indistinguishable in the large $N$ double scaling limit we shall later
employ. We will restrict ourselves to the $SU(N)$ case at finite $N$.
Since we are interested in the large $N$ limit, we will not
distinguish between $\tau$ and $t$ and we will set
\begin{equation}
Q_N(z,t)
=
\sqrt{\frac{Nt}{2\pi}}
\int_{-\infty}^\infty d\nu
e^{-\frac{N}{2}t\nu^2}
\left[z-e^{-t\left(\nu+\frac{1}{2}\right)}\right]^N
\label{qnzt}
\end{equation}
for all the discussion to follow.
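For later numerical checks it is useful to note that the $\nu$ integral in
(\ref{qnzt}) can be done in closed form term by term: expanding
$\left[z-e^{-t\left(\nu+\frac{1}{2}\right)}\right]^N$ binomially and performing each Gaussian
integral gives
$Q_N(z,t)=\sum_{k=0}^{N}\binom{N}{k}(-1)^k z^{N-k}\,e^{\frac{tk^2}{2N}-\frac{tk}{2}}$.
This rewriting is our own; a minimal Python sketch (numpy/scipy assumed)
comparing the sum to direct quadrature of (\ref{qnzt}):

```python
import numpy as np
from math import comb
from scipy.integrate import quad

def q_sum(z, t, N):
    # binomial expansion of [z - e^{-t(nu+1/2)}]^N; each Gaussian
    # nu integral contributes exp(t k^2/(2N)) exactly
    return sum(comb(N, k) * (-1)**k * z**(N - k)
               * np.exp(t * k * k / (2.0 * N) - t * k / 2.0)
               for k in range(N + 1))

def q_quad(z, t, N):
    # direct quadrature of the integral representation; the Gaussian
    # factor makes the truncation to [-10, 10] harmless here
    f = lambda nu: np.exp(-0.5 * N * t * nu**2) * (z - np.exp(-t * (nu + 0.5)))**N
    return np.sqrt(N * t / (2.0 * np.pi)) * quad(f, -10, 10)[0]

print(q_sum(1.7, 2.0, 8), q_quad(1.7, 2.0, 8))
```

The two evaluations agree to quadrature accuracy.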
If we need to compare to continuum two dimensional
YM, we should keep in mind that, on the basis of a comparison to a
heat-kernel formula for the average characteristic polynomial,
it is $\tau$ that is related directly to the inverse 't Hooft
coupling, not $t$. In other words, there is a factor of $1+\frac{1}{N}$
in the relationship between the parameter $t$ of the JW model and
the inverse 't Hooft coupling in the standard notation for the
dimensionless area in two dimensional $SU(N)$ YM theory.
\subsubsection{The average characteristic
polynomial for negative areas.}
The average characteristic polynomial depends on $t$ in an analytic
manner. In particular, it is interesting to consider the case $t\le
0$. The conditions for applying the Lee-Yang theorem no longer hold,
as the interaction has become anti-ferromagnetic. Explicit examples
show that all roots of the average characteristic polynomial are real
and positive. On the other hand, it remains true for any $t$ that if
$z$ is a root so are $\frac{1}{z}, z^*, \frac{1}{z^*}$. These
symmetries are consistent with restricting all roots to the unit
circle or to the positive or negative portions of the real axis. Thus,
the symmetries alone do not tell much.
The case of $t\le 0$ corresponds to imaginary $\epsilon$, since
$t=n\epsilon^2$. Imaginary $\epsilon$
corresponds to a complex Wilson matrix
obtained by multiplying i.i.d. hermitian matrices
close to identity. One would expect this Wilson matrix to have a
spectrum covering a region of the complex plane in the stochastic
sense. In this case we see that the roots of the average
characteristic polynomial carry little information about the spectral
properties of the Wilson matrix. This makes it clear why we carried
out various checks to convince ourselves that the average
characteristic polynomial was a useful observable for $t\ge 0$,
when the matrices that get multiplied are unitary.
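These checks are easy to reproduce numerically: writing $Q_N$ as an explicit
polynomial in $z$ (our own binomial rewriting of (\ref{qnzt})), one can
inspect where the roots lie in the two regimes. A sketch:

```python
import numpy as np
from math import comb

def q_coeffs(t, N):
    # coefficient of z^{N-k}: C(N,k) (-1)^k exp(t k^2/(2N) - t k/2),
    # listed from highest degree down, as np.roots expects
    return [comb(N, k) * (-1)**k * np.exp(t * k * k / (2.0 * N) - t * k / 2.0)
            for k in range(N + 1)]

N = 6
roots_pos = np.roots(q_coeffs(2.0, N))    # t > 0: unitary multiplicands
roots_neg = np.roots(q_coeffs(-2.0, N))   # t < 0: "anti-ferromagnetic" regime
print(np.abs(roots_pos))                  # all moduli equal to one
print(roots_neg)                          # all roots real, off the unit circle
```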
\subsection{The large $N$ phase transition in $Q_N(z,t)$.}
We end up concluding that the average characteristic polynomial,
$Q_N(z,t)$, at infinite $N$ should reproduce the DO phase
transition. In other words, one can replace the average of a logarithm
by the logarithm of the average; this is somewhat analogous to a
self-averaging result proved by Berezin in 1972~\cite{berezin}.
We check that the DO transition is captured by the average
characteristic polynomial by comparing our result to that
of~\cite{janik}, who showed that the multiplicative matrix model has the
DO phase transition using different methods, not involving the average
characteristic polynomial, but rather accessing the resolvent
$\lim_{N\to\infty}\frac{1}{N} \langle tr
\frac{1}{z-W}\rangle$ directly.
At infinite $N$, there is no distinction between $SU(N)$ and $U(N)$.
We take the large $N$ limit by finding the saddle point in $\nu$ that
controls the integral; at the saddle point, $\nu=\lambda(t,z)$.
\begin{equation}
\frac{1}{N} \log Q_N(z,t)
=-\frac{1}{2N}\log\left[1+ t\left(\lambda^2+\lambda\right)\right]
+\log \left ( z-
e^{-t (\lambda+\frac{1}{2})}\right ) -\frac{t}{2}\lambda^2
\end{equation}
Here, $\lambda$ solves:
\begin{equation}
\lambda=\lambda(t,z)=\frac{1}{ze^{t(\lambda+\frac{1}{2})}-1}
\end{equation}
To get the resolvent of $W$ we take a derivative with respect to $z$.
Only the explicit $z$ dependence on the right hand side matters, since
the expression is stationary with respect to variation in $\lambda$.
We need to interchange the matrix averaging and the logarithm (a procedure
we now have reason to believe will be valid in the limit of infinite $N$)
at fixed $t$ and $z$. The interchange can be viewed as a
version of large $N$ factorization, but now extended to a quantity
that has an exponential dependence on $N$. This ``self-averaging''
property may also hold in the double scaling limit we shall introduce
later, because violations of factorization would typically be (in
view of the new type of observable) of order $\frac{1}{N}$
while the double scaling limit will be seen to
add some dependencies in the couplings which are of slightly lower
order coming in via factors of $\frac{1}{N^\nu}$ with
$\nu=1/2,3/4$.
The expression for the resolvent in the
large $N$ limit is:
\begin{equation}
G=\frac{1}{N} \langle {\rm Tr} \frac{1}{z-W}\rangle =
\frac{1}{z-e^{-t\left(\lambda+\frac{1}{2}\right)}}\end{equation}
JW define a function $f(t,z)$ by:
\begin{equation}
f(t,z)=zG(z,t)-1
\end{equation}
and it is easy to see that $f$ and $\lambda$ are the same.
The equation for $\lambda$ can be rewritten as:
\begin{equation}
z\lambda=(1+\lambda) e^{-t\left(\lambda+\frac{1}{2}\right)}
\end{equation}
leading to equation (17) in~\cite{janik}. This allowed us to bypass
the $S$-transform trick of~\cite{voicu} employed in~\cite{janik}. We
needed to bypass it because we require the universal smoothed-out
behavior at asymptotically large, but not infinite, $N$, and the
$S$-transform procedure has no known extension away from the infinite
$N$ limit.
We conclude that the average characteristic polynomial has a critical
point at infinite $N$ at $t=4$, which is the location of the DO phase
transition. The transition is reflected by the behavior around $z=-1$,
which is where the gap in the eigenvalue spectrum first opens.
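This can be verified directly on the rewritten saddle point equation.
Writing $g(\lambda)=z\lambda-(1+\lambda)e^{-t\left(\lambda+\frac{1}{2}\right)}$,
the point $\lambda=-\frac{1}{2}$ solves $g=0$ at $z=-1$ for every $t$, and this
root degenerates ($g'=0$ as well) precisely at $t=4$. A short numerical sketch
(the function names are ours):

```python
import numpy as np

def g(lam, z, t):
    # saddle equation in the form z*lam - (1+lam)*exp(-t*(lam+1/2)) = 0
    return z * lam - (1 + lam) * np.exp(-t * (lam + 0.5))

def gprime(lam, z, t):
    # derivative of g with respect to lam
    return z - (1 - t * (1 + lam)) * np.exp(-t * (lam + 0.5))

# lam = -1/2 solves g = 0 at z = -1 for any t; g' vanishes only at t = 4
for t in (3.0, 4.0, 5.0):
    print(t, g(-0.5, -1.0, t), gprime(-0.5, -1.0, t))
```

At $z=-1$ one finds $g'(-\frac{1}{2})=\frac{t}{2}-2$, which changes sign at
the critical area.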
\section{The double scaling limit.}
We wish to zoom into the region close to $z=-1$ when $t$ is close to its
critical value of $4$. Our previous discussion has led us to conclude that a
good quantity to look at is the derivative of the logarithm of the average
characteristic polynomial with respect to $z$ at $z=-1$. It simplifies matters
to focus on the real $z$ axis.
\subsection{ General structure: dimensions 2, 3, 4.}
We set $z=e^y$ and define a function $F(y)$ from
$Q_N(z,t)$ that is explicitly even in $y$:
\begin{equation}
F(y)=e^{-\frac{Ny}{2}} \left (-1\right )^N Q_N( -e^y , t)
=\langle \det\left
(e^{\frac{y}{2}}+e^{-\frac{y}{2}}W \right )\rangle
\label{fofy}
\end{equation}
We have suppressed the dependence on $t$ in the function $F(y)$.
We now introduce some new variables and notations:
\begin{equation}
\Upsilon=\tanh\frac{y}{2},~~~A=\frac{1-W}{1+W} =-iM
\end{equation}
$A$ is anti-hermitian and $M$ is hermitian. If $e^{i\theta}$ is an
eigenvalue of $W$, $-i\tan\frac{\theta}{2}$ is the corresponding
eigenvalue of $A$ and $\tan\frac{\theta}{2}$ of $M$. $A,M$ become
singular when the gap in the eigenvalue spectrum of $W$
closes. The inverse transformation to $W$ is:
\begin{equation}
W=\frac{1-A}{1+A}=\frac{1+iM}{1-iM},~~~~1+W=\frac{2}{1+A}=\frac{2}{1-iM}
\end{equation}
The density of eigenvalues of $W$ is denoted by $\rho_N(\theta)$,
normalized by:
\begin{equation}
\int_{-\pi}^\pi \rho_N(\theta ) \frac{d\theta}{2\pi}=N
\end{equation}
$F(y)$ can be evaluated by Monte Carlo simulations in dimensions
higher than 2.
\begin{eqnarray}
&F(y)=\left (2\cosh\frac{y}{2}\right )^N \langle \det\left (\frac
{1}{1+A}\right ) \det(1+\tanh\frac{y}{2}\;\; A) \rangle=\cr & \left
( \frac{4}{1-\Upsilon^2}\right )^{\frac{N}{2}} \langle \det\left
(\frac{1}{1+A}\right ) e^{-\sum_{n=1}^\infty (-1)^n \frac{\Upsilon^n}{n} {\rm Tr}
A^n } \rangle\end{eqnarray}
This equation is still exact. For each $A$, $\det(1-A)=\det(1+A)$ on
account of the $SU(N)$ condition $\det W=1$. Since, in addition,
the probability for an $A$ equals
that for a $-A$, $F(y)$ (which also depends on the loop and on the gauge
coupling) is even in $y$ (and, evidently then, in $\Upsilon$). This can be made
explicit:\begin{equation}
F(y)= \left
( \frac{4}{1-\Upsilon^2}\right )^{\frac{N}{2}} \langle \det\left
(\frac{1}{1+A}\right ) e^{-\sum_{k=1}^\infty \frac{\Upsilon^{2k}}{2k}
{\rm Tr} A^{2k} } \; \cosh\left ( \sum_{k=1}^\infty
\frac{\Upsilon^{2k-1}}{2k-1} {\rm Tr} A^{2k-1} \right ) \rangle
\end{equation}
From the above equation one can derive exact expressions for the
coefficients $F_k$ in $F(y)=\sum_{k=0}^\infty F_{k} \Upsilon^{2k}$.
If the joint
distribution of all the eigenvalues of $A$ were known one could
replace the averaging brackets on the right hand side by an integral
over all eigenvalues weighted by that distribution. If we apply large
$N$ factorization, the right hand side simplifies considerably, and
one is able to write it just in terms of the single eigenvalue distribution
of $A$. From previous discussions we feel it is fine to assume that
large $N$ factorization holds in this case.
If we apply large $N$ factorization, and use $\langle{\rm Tr} A^{2k+1}\rangle=0$ for
non-negative integer $k$, we obtain:
\begin{equation}
F_{\rm factorized} (\Upsilon) = \left ( \frac{4}{1-\Upsilon^2}\right
)^{\frac{N}{2}}
\frac{1}{\sqrt{\langle \det(1-A^2)\rangle}}\exp\left (
-\sum_{k=1}^\infty \frac{\Upsilon^{2k}}{2k}\langle {\rm Tr} A^{2k} \rangle
\right )
\end{equation}
In terms of $M$, we have:
\begin{equation}
F_{\rm factorized} (\Upsilon) = \left ( \frac{4}{1-\Upsilon^2}\right
)^{\frac{N}{2}}
\frac{1}{\sqrt{\langle \det(1+M^2)\rangle}}\exp\left (
\sum_{k=1}^\infty (-1)^{k-1} \frac{\Upsilon^{2k}}{2k}\langle {\rm Tr} M^{2k} \rangle
\right )
\end{equation}
Let the eigenvalues of $M$ be denoted by $\lambda$. The
eigenvalue density in $\theta$, $\rho_N (2\arctan \lambda )$, which we now denote by
an abuse of notation as $\rho_N (\lambda )$, is normalized by:
\begin{equation}
\frac{1}{\pi}\int \frac{d\lambda}{1+\lambda^2} \rho_N(\lambda) = N
\end{equation}
$\rho_N(\lambda )$ is an even function: $\rho_N(\lambda )=\rho_N(-\lambda )$.
When $\theta$ is close to $\pm\pi$, $\lambda$ goes to $\pm \infty$.
The critical regime around $\theta\approx \pm\pi$ we are interested in
has been mapped to $\lambda\to\pm\infty$. The eigenvalue spacing in
$\theta$ goes as the spacing in $\frac{1}{\lambda}$ in the large
$|\lambda |$ regime.
Let us assume a very large, but finite $N$. If $W$ is gap-less at
infinite $N$ at $-1$, $\rho_N(\lambda) \sim cN,~c>0$ as
$\lambda\to\pm\infty$. If $W$ has a gap in the infinite $N$ limit,
$\rho_N(\lambda) \sim e^{-c^\prime N}, ~c^\prime >0$ as
$\lambda\to\pm\infty$. At the critical point when the gap just closes
at $\pm\pi$, $\rho_N(\lambda)\sim c^{\prime\prime} N
|\lambda|^{-\frac{1}{3}}$~\cite{janik}, as $\lambda\to\pm\infty$.
If we now take the infinite $N$ limit, $\frac{1}{N}
\rho_N$ converges point-wise to a function $\rho_\infty$ that has
compact support if there is a gap, infinite support with regular
behavior at infinity if there is no gap and infinite support with a
singular behavior at infinity if we are exactly at criticality.
One can then re-express the logarithmic derivative of the factorized $F$
\begin{equation}
\frac{1}{N} \frac{\partial}{\partial \Upsilon} \log F_{\rm factorized}
(\Upsilon)=\frac{\Upsilon}{1-\Upsilon^2}+\frac{\Upsilon}{N}\langle
{\rm tr}
\frac{M^2}{1+\Upsilon^2 M^2}\rangle
=\frac{\Upsilon}{N(1-\Upsilon^2)}
\langle{\rm tr} \frac{1+M^2}{1+\Upsilon^2 M^2}\rangle
\end{equation}
in terms of $\rho_\infty(\lambda)$
as
\begin{equation}
\lim_{N\to\infty}
\frac{1}{N} \frac{\partial}{\partial \Upsilon} \log F_{\rm factorized}
(\Upsilon)
=\frac{1}{1-\Upsilon^2}\frac{\Upsilon}{|\Upsilon|}
\frac{1}{\pi}
\int_{-\infty}^\infty d\lambda \frac{\rho_\infty
\left(\frac{\lambda}{|\Upsilon|}\right)}{1+\lambda^2}
\label{ffact}
\end{equation}
\subsubsection{Heuristic picture of the large $N$
phase transition.}\label{heur}
The determinant $\det(z-W)$ can be
thought of as the exponent of a sum of $N$ logarithms, one term for each
eigenvalue. It is then the exponent of the two dimensional electrostatic
potential created by $N$ charges located at the zeros of the characteristic
polynomial. These zeros are on the unit circle and we can look at the potential
in the vicinity of the point $-1$ on this circle. There are two extreme cases:
all charges are located at $+1$ or, the total charge is uniformly distributed on
the circle.
For the extreme case where all charges are located at $+1$,
$\rho_\infty(\lambda) = \pi \delta(\lambda)$. Inserting this
into (\ref{ffact}) results in
\begin{equation}
\frac{1}{N} \frac{\partial}{\partial \Upsilon} \log F_{\rm factorized}
(\Upsilon)=\frac{\Upsilon}{1-\Upsilon^2}
\end{equation}
For the other extreme case
of a uniform distribution of charges on the unit circle,
$\rho_\infty (\lambda )= 1$. Inserting this
into (\ref{ffact}) results in
\begin{equation}
\frac{1}{N} \frac{\partial}{\partial \Upsilon} \log F_{\rm factorized}
(\Upsilon)=\frac{\epsilon(\Upsilon)}{1-\Upsilon^2}
\end{equation}
Recalling that
$\frac{\partial}{\partial\Upsilon}=\frac{2}
{1-\Upsilon^2}\frac{\partial}
{\partial y}$
we conclude that
\begin{equation}
\frac{1}{N} \frac{\partial}{\partial y} \log F
(y)=\cases{
\frac{1}{2}\tanh\frac{y}{2} & for all charges at $+1$\cr
\frac{1}{2}\epsilon(y) & for uniform distribution of charges on the
unit circle \cr}
\end{equation}
For a charge distribution that is critical,
$\frac{1}{N} \frac{\partial}{\partial y} \log F
(y)$ goes as $y^{\frac{1}{3}}$ as $y$ goes to zero.
If we now
rescale the $y$ variable by $N^{\frac{3}{4}}$, defining $y
=\frac{\xi}{N^{\frac{3}{4}}}$,
$\frac{1}{N} \frac{\partial}{\partial y} \log F
(y)$ becomes
of order $N^{-\frac{1}{4}}$ for fixed $\xi$.
The double scaling limit will smooth out the non-analyticity at $y=0$ which
we exhibited explicitly above for the case of
a uniform distribution.
At infinite $N$, there will always be a non-analyticity at $y=0$ if
the eigenvalue distribution has no gap, whether the
distribution is uniform or not. The jump is proportional
to the density of eigenvalues of $W$ at $z=-1$, $\rho(\pi)$.
On the other hand, when there is a gap, the behavior
at $y=0$ is smooth.
Up to a few non-universal parameters, the double-scaling limit
captures the universal content of the non-analyticity.
Neither of the two extreme limits that we have seen above,
namely, the delta function and the uniform distribution, needs to be
attainable in a particular model in order for the transition
represented by the non-analyticity to take place and be universally
described by the scaling limit.
To match the scaling limit to the data of a particular
physical realization, some parameters will need to be fit.
For a large physical loop, one expects an almost uniform distribution.
Suppose now that we have a distribution that is almost uniform, with a
small deviation from uniformity proportional to $\cos\theta$. In
terms of $\lambda$,
\begin{equation}
\rho_\infty(\lambda)=1+\delta \frac{1-\lambda^2}{1+\lambda^2}
\end{equation}
Inserting this
into (\ref{ffact}) results in
\begin{equation}
\frac{1}{N} \frac{\partial}{\partial \Upsilon} \log F_{\rm factorized}
(\Upsilon)=\frac{\epsilon(\Upsilon)}{1-\Upsilon^2}\left[ 1 + \delta
\frac{|\Upsilon|-1}{|\Upsilon|+1}\right]
\end{equation}
In terms of the variable $y$, the result is
\begin{equation}
\frac{1}{N} \frac{\partial}{\partial y} \log F(y)=
\frac{\epsilon(y)}{2}\left [ 1 - \delta e^{-|y|}\right]
\end{equation}
For a large loop, when the deviation of the eigenvalue distribution
from uniformity is small and determined by the string tension times
the area $t$, we have $\delta\propto e^{-\sigma t}$. Positive and
negative $y$ values are related by a Z(2) symmetry. The result is odd
in $y$ and undergoes a discontinuous change as $y$ goes through 0.
Taking a first derivative with respect to $y$ of the above equation,
we see that the area law term dominates for $y\ne 0$.
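The intermediate step, namely inserting the almost-uniform density into the
$\lambda$ integral of (\ref{ffact}), involves a small partial-fraction
computation; it can be confirmed numerically as follows (a sketch, with our
own function names):

```python
import numpy as np
from scipy.integrate import quad

def lhs(ups, delta):
    # (1/pi) * integral of rho(lambda/|ups|)/(1+lambda^2),
    # with rho(x) = 1 + delta*(1-x^2)/(1+x^2) as in the text
    rho = lambda x: 1 + delta * (1 - x**2) / (1 + x**2)
    val, _ = quad(lambda lam: rho(lam / abs(ups)) / (1 + lam**2),
                  -np.inf, np.inf)
    return val / np.pi

def rhs(ups, delta):
    # closed form obtained by partial fractions
    return 1 + delta * (abs(ups) - 1) / (abs(ups) + 1)

print(lhs(0.3, 0.2), rhs(0.3, 0.2))
```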
\subsection{ Structure in two dimensions.}
Inserting (\ref{qnzt}) into the definition of $F(y)$ in
(\ref{fofy}) yields
\begin{equation}
F(y)=e^{-\frac{Ny}{2}} \left (-1\right )^N Q_N( -e^y
, t)=2^N e^{-\frac{Nt}{8}}\sqrt{\frac{Nt}{2\pi}} Z_N (y,t),
\end{equation}
where,
\begin{equation}
Z_N(y,t) =\int dx \;\; e^{N[\log(\cosh \frac{y+tx}{2}) -\frac{1}{2} tx^2 ]}
\label{znyt}
\end{equation}
We now extract from $Z_N$ the same factor we had extracted from $F$:
\begin{equation}
Z_N(y,t) =\left(\cosh\frac{y}{2}\right )^N \int dx \;\;
e^{ N\left [\log \left (\frac{\cosh \frac{y+tx}{2}}{\cosh\frac{y}{2}}\right )
-\frac{1}{2}tx^2\right ] }
\end{equation}
Expanding the hyperbolic cosine of the sum in the exponent, we get:
\begin{equation}
Z_N(y,t) =\left(\cosh\frac{y}{2}\right )^N \int dx \;\;e^{ N\left
[\log \left ( 1+\tanh\frac{y}{2} \tanh\frac{tx}{2}\right )
-\frac{2}{t} \left ( \frac{tx}{2} \right )^2
-\frac{1}{2}\log\left ( 1-\tanh^2 \frac{tx}{2}\right ) \right ] }
\end{equation}
We change variables of integration from $x$ to
$v=\tanh\frac{xt}{2}$. The inverse transformation is
$\frac{xt}{2}=-\frac{1}{2}\log\frac{1-v}{1+v}=\sum_{k=0}^\infty
\frac{v^{2k+1}}{2k+1}$.
\begin{equation}
Z_N(y,t)=\left(\cosh\frac{y}{2}\right )^N \frac{2}{t} \int_{-1}^1
\frac{dv}{1-v^2}\;e^{ N\left
[\log \left ( 1+\tanh\frac{y}{2} v \right ) -\frac{1}{2t} \left (
\log\frac{1-v}{1+v} \right )^2 -\frac{1}{2}\log\left ( 1-v^2 \right )
\right ] }\end{equation}
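The change of variables can be checked by comparing the $x$ form (\ref{znyt})
and the $v$ form of $Z_N$ by direct quadrature. In the sketch below the
finite integration cutoffs are our own choice; the truncated tails are
exponentially small at these parameters:

```python
import numpy as np
from scipy.integrate import quad

N, t, y = 10, 3.0, 0.4
ups = np.tanh(y / 2)

def Z_x():
    # Z_N in the original x variable
    f = lambda x: np.exp(N * (np.log(np.cosh((y + t * x) / 2)) - 0.5 * t * x**2))
    return quad(f, -20, 20)[0]

def Z_v():
    # Z_N after the substitution v = tanh(t x / 2)
    g = lambda v: np.exp(N * (np.log(1 + ups * v)
                              - (np.log((1 - v) / (1 + v)))**2 / (2 * t)
                              - 0.5 * np.log(1 - v**2))) / (1 - v**2)
    return np.cosh(y / 2)**N * (2 / t) * quad(g, -0.999, 0.999)[0]

print(Z_x(), Z_v())
```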
From now on, the integration over $v$ will be implicitly understood to run from
$-1$ to $+1$. We also introduce the parameter $\Upsilon=\tanh\frac{y}{2}$
with the understanding that $\Upsilon$ is real and that $|\Upsilon|\le 1$. Expanding the exponent in $v$ we have
\begin{eqnarray}
Z_N(y,t) &=& \left ( \frac{1}{1-\Upsilon^2}\right
)^{\frac{N}{2}}\frac{2}{t} \int \frac{dv}{1-v^2}\times\cr
&&e^{ N\left [ \sum_{n=1}^\infty (-1)^{n-1}\frac{1}{n}\Upsilon^n v^n
-\frac{2}{t}v^2\left (\sum_{k=0}^\infty \frac{1}{2k+1} v^{2k}
\right)^2 +\frac{v^2}{2}\left (\sum_{k=0}^\infty
\frac{1}{k+1}v^{2k}\right ) \right ] }\cr
&=&\left ( \frac{1}{1-\Upsilon^2}\right
)^{\frac{N}{2}}\frac{2}{t} \int \frac{dv}{1-v^2} \times\cr
&&e^{ N\left [ \left ( (\Upsilon v) -\frac{1}{2}(\Upsilon v)^2
+\frac{1}{3}(\Upsilon v)^3+\dots \right ) +\left (
(\frac{1}{2}-\frac{2}{t} ) v^2 +\frac{1}{4} v^4
-\frac{4}{3}\frac{v^4}{t}\dots \right) \right ] }
\label{serieseq}
\end{eqnarray}
The critical point is at $t=4$, where the coefficient of the term
$Nv^2$ vanishes (the term of order $v^2$ that has a coefficient of
order 1 does not matter, as we are interested in the large $N$
critical point). The double scaling limit is defined so that the
highest power of $v$ (without a factor of $\Upsilon$) is 4. This means
that the integration variable $v$ will be conveniently redefined as
\begin{equation}
v=\left (\frac{12}{N}\right )^{\frac{1}{4}} u
\end{equation}
To keep a $\Upsilon$ dependence we need to rescale $\Upsilon$ so that the
variable $\xi$ below is kept fixed as $N\to\infty$.
\begin{equation}
\Upsilon=\frac{\xi}{12^{\frac{1}{4}}\; N^{\frac{3}{4}}}
\end{equation}
To keep the $v^2$ dependence we need to keep $t$ close to $4$, writing
\begin{equation}
\frac{4}{t}=1+\frac{\alpha}{\sqrt{3N}}
\end{equation}
We end up with:
\begin{equation}
\lim_{N\rightarrow\infty}
\left(\frac{4N}{3}\right)^{\frac{1}{4}}Z_N(y,t) =
\int_{-\infty}^{\infty} du e^{-u^4-\alpha u^2+\xi u }
\equiv \zeta(\xi,\alpha)
\end{equation}
The above equation explicitly shows that
keeping $\xi$ and $\alpha$ fixed,
while taking $N$ to infinity, will make the function
$\left(\frac{4N}{3}\right)^{\frac{1}{4}}
Z_N(y,t)$ converge point-wise to the $\xi$- and $\alpha$-dependent limit $\zeta(\xi,\alpha)$.
Looking at equation~(\ref{serieseq}), we see that corrections will go as
powers of $\frac{1}{\sqrt{N}}$.
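The convergence can be illustrated by evaluating both sides numerically at
moderate $N$; the residual discrepancy of a few percent is consistent with
the $\frac{1}{\sqrt{N}}$ corrections (the tolerance in this sketch is
illustrative, not a derived bound):

```python
import numpy as np
from scipy.integrate import quad

def zeta(xi, alpha):
    # the double scaling limit function
    return quad(lambda u: np.exp(-u**4 - alpha * u**2 + xi * u),
                -np.inf, np.inf)[0]

def scaled_ZN(xi, alpha, N):
    # (4N/3)^{1/4} Z_N(y,t) with y and t set by the scaling variables
    y = (4.0 / (3.0 * N**3))**0.25 * xi
    t = 4.0 / (1.0 + alpha / np.sqrt(3.0 * N))
    f = lambda x: np.exp(N * (np.log(np.cosh((y + t * x) / 2.0))
                              - 0.5 * t * x**2))
    return (4.0 * N / 3.0)**0.25 * quad(f, -2, 2)[0]

print(zeta(1.0, 0.5), scaled_ZN(1.0, 0.5, 1600))
```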
A plot of the logarithmic derivative of $\zeta$ with respect to
$\xi$ in Figure~\ref{smoothfig}
shows that the double scaling limit
provides a smoothed version of the non-analyticity
discussed in section~\ref{heur}.
\section{Formulation of the large $N$ universality hypothesis in dimensions
2,3,4. }
We now abstract from the two dimensional case a hypothesis expected to hold
also for Euclidean $SU(N)$ gauge theory in dimensions 3 and 4.
We first formulate the statement in the continuum, ignoring renormalization,
and next provide a precise formulation using lattice gauge theory as a
constructive definition of continuum YM.
\subsection{Continuum formulation -- ignoring renormalization.}
Suppose we have a Wilson loop associated with a curve ${\cal C}$, $W({\cal C})$.
Suppose the loop ${\cal C}$, is parametrically described by a closed,
non-self-intersecting curve $x_\mu (s), s \in [0,1]$. This description is
redundant under re-parameterizations of the curve.
Consider this curve together with an infinite family of scaled versions of
it: ${\cal C}(m)$, described parametrically by $x_\mu(s,m)=\frac{1}{m} x_\mu
(s)$, with $m > 0$. If we collect all these families we obtain the space of all
loops. We wish to think about a single loop ${\cal C}(m)$
as being labeled by its shape ${\cal C}(*)$,
which is the label
of its scaled family and is described by dimensionless
parameters, and a particular scale $m$ which identifies it
uniquely within the family and is of dimension mass.
We now pick a loop shape and look at the family of operators $W(m,{\cal C}(*))=
W({\cal C}(m))$. We are interested in the behavior of $W(m,{\cal C}(*))$
as we vary $m$, keeping ${\cal C}(*)$ fixed. More specifically, we are looking
at \begin{equation}
O_N (y,m,{\cal C}(*))=\langle \det (e^{\frac{y}{2}}+e^{-\frac{y}{2}}
W(m,{\cal C}(*)))\rangle
\end{equation}
with particular interest focused on the region where $y$ is close to $0$.
The first part of the hypothesis is that the definition makes sense, meaning
that $O_N(y,m,{\cal C}(*))$ is well defined, and that indeed there exists some
scale $m_c$ of the basic loop shape ${\cal C}(*)$
at which the Wilson matrix
undergoes the DO large $N$ phase transition. The part of the hypothesis that
has to do with large $N$ universality says that there exists a (non-universal)
normalization
${\cal N}(N,m,{\cal C}(*))$, dependent on $N$, $m$ and the loop shape,
and
finite dimensionless parameters $a_1({\cal C}(*)),a_2({\cal C}(*))$ such that
\begin{equation}
\lim_{N\rightarrow\infty} {\cal N}(N,m,{\cal C}(*))
O_N\left(y=
\left(\frac{4}{3N^3}\right)^{\frac{1}{4}}\frac{\xi}{a_1({\cal C}(*))},
m=m_c\left [1+\frac{\alpha}{\sqrt{3N}a_2({\cal C}(*))}\right ]\right) =
\zeta(\xi,\alpha)\end{equation}
\subsubsection{Two dimensions: no renormalization needed.}
In two dimensions, for a non-self-intersecting loop, the dependence on ${\cal
C}$ comes only through its total enclosed
area; there is no dependence on ${\cal C}(*)$, the loop
shape, but only on $m$, its scale, which can be defined as the square root of
the inverse of the area. Two dimensional YM has a dimensional coupling which
does not renormalize and simply keeps track of dimensions. The issue of
renormalization does not arise at all.
We may as well regard
the area as dimensionless and set the coupling constant to unity. The
dimensionless positive parameter $t$ of the random matrix model corresponds to
this dimensionless area. It is convenient to change notation, from $t$ to $b$,
\begin{equation}
b=\frac{4}{t}
\end{equation}
and view $b$ as $m^2$ in our discussion above.
$m_c=1$, since $t_c=4$. When $m$ increases the loop shrinks.
We can now summarize our previous findings in two dimensions as follows:
Consider
\begin{equation}
{\tilde O}_N(y,b)=
\left(\frac{N}{12}\right)^{\frac{1}{4}} \sqrt{\frac{2\pi}{Nb}}
\frac{e^{\frac{N}{2b}}}{2^N} \left < \det \left( e^{\frac{y}{2}}
+e^{-\frac{y}{2}} \prod_{i=1}^N U_i \right) \right>
\label{modeleqn}
\end{equation}
${\tilde O}$ is proportional to $O$ but the normalization is
$b$ dependent. The large $N$ universal content is independent of the
prefactor, so long as the normalization is smooth in $b$ at the point
$b=b_c$; therefore the difference between ${\tilde O}$ and $O$ is immaterial. Using (\ref{znyt}), we can see that
\begin{equation}
{\tilde O}_N(y,b)=\left(\frac{N}{12}\right)^{\frac{1}{4}}
\int d\rho e^{N\left[\ln\cosh\rho-\frac{b}{8}
(2\rho-y)^2\right]}.
\label{mock2d}
\end{equation}
Defining
$\xi$ and $\alpha$ by
\begin{equation}
y=\left(\frac{4}{3N^3}\right)^{\frac{1}{4}}\xi;\ \ \
b=1+\frac{1}{\sqrt{3N}}\alpha
\label{yarho}
\end{equation}
and expanding in $\frac{1}{\sqrt{N}}$, we obtain:
\begin{equation}
\lim_{N\rightarrow\infty} {\tilde O}_N(y,b)=
\zeta(\xi,\alpha)=\int du e^{-u^4 -\alpha u^2 + \xi u}
\label{airy}
\end{equation}
This is how the universality hypothesis is realized in two dimensions, by
construction.
\subsection{Lattice formulation -- completely defined.}
Several of the choices we shall make are not conceptually essential, but
they help streamline the discussion.
\subsubsection{Shape and scale of curves on the lattice.}
We start by replacing space-time by a hypercubic lattice in $d$ dimensions.
This hypercubic lattice will be viewed as dimensionless, a collection of
vertices, or sites, labeled by $x_\mu \in Z, \mu=1,..,d$, and the
shortest arcs, or links, connecting them. One adds an orientation to the
links: this means that a link parallel to the $\mu$-axis
$\mu=1,...,d$ can be traversed
in the direction of its orientation $(+\mu)$, or in the opposite sense $(-\mu)$.
This setup is used to define approximations
to curve shapes. The curve shape is replaced by a contiguous sequence of
links, where the angles between any two links have to be a multiple of
ninety degrees. Symbolically, the curve is represented by an ordered
sequence, $(\mu_1,\mu_2,...,\mu_L)$ where $\mu_i=\pm 1,\pm 2,\ldots,\pm d$.
The curve is closed when $\sum_{i=1}^L \left(\delta_{\nu,\mu_i}-\delta_{-\nu,\mu_i}\right) =0$ for
$\nu=1,2,..,d$. When the curve is closed there is a redundancy under cyclic
shifts of the sequence.
The curve is non-self-intersecting if every site is visited no more than once.
The total number of links, $L$, determines how good the approximation
is. In the continuum limit one needs to take $L$ to infinity.
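The bookkeeping above (closure as vanishing net displacement, and
non-self-intersection as never revisiting a site before the final return)
can be sketched as follows; the encoding of links as signed integers follows
the text, while the helper names are ours:

```python
import numpy as np

def is_closed(curve, d):
    # net displacement in every direction must vanish
    disp = np.zeros(d, dtype=int)
    for mu in curve:
        disp[abs(mu) - 1] += np.sign(mu)
    return not disp.any()

def is_non_self_intersecting(curve, d):
    # walk the curve; every visited site must be new,
    # except the final return to the starting point
    x = np.zeros(d, dtype=int)
    visited = {tuple(x)}
    for i, mu in enumerate(curve):
        x[abs(mu) - 1] += np.sign(mu)
        site = tuple(x)
        if i < len(curve) - 1 and site in visited:
            return False
        visited.add(site)
    return True

plaquette = [1, 2, -1, -2]   # the elementary square in d = 2
print(is_closed(plaquette, 2), is_non_self_intersecting(plaquette, 2))
```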
A scale parameter is attached to the curve shape by the ``dynamics''.
To each link we attach an $SU(N)$ unitary matrix $U$. There is a joint
probability distribution for all link matrices $U$, which is parameterized
by a positive parameter that we again call $b$. The mass scale $m$ is determined
by $b$ and the relationship is monotonic: $m(b)\to 0$ as $b\to\infty$.
The continuum limit is obtained
by taking $L\to\infty$, $b\to \infty$, in such a way that $m(b) L=l$ stays
finite. One can then arbitrarily introduce a unit of length to give $l$
engineering dimensions.
For a fixed continuum curve, its shape ${\cal C}(*)$ is obtained from the
lattice sequence in the limit when $L\to\infty$. Simultaneously with that limit
one needs to take $b\to\infty$, while the product $m(b) L=l$ stays finite.
$l$ determines the scale of the curve ${\cal C}$, and plays the role of
the parameter $\frac{1}{m}$ in the continuum discussion. One way to investigate
what happens as the continuum scale $m$ goes through its critical value
for a given curve shape, is to vary $b$ at a fixed lattice curve with a fixed
$L$. The universality hypothesis makes a prediction about this behavior; this
prediction is approximate in that the parameters $a_1, a_2$ are $L$ dependent
but becomes accurate as $L\to\infty$. The order of the
limits $L\to\infty$ and $N\to\infty$ is assumed to not matter, although there
are some limitations on the ranges.
\subsubsection{Regularization of perimeter and corner divergences.}
To make the prediction of universality quantitative we need to ensure that the
lattice version of $O_N(y,b)$ is well defined and has a finite continuum limit. We need to eliminate
corner and perimeter divergences.
They are eliminated by replacing the link
matrices $U$ in the standard definition of $W$
by smeared versions, denoted by $U^{(n)}$, where
$n$ is an integer.
We employ APE smearing \cite{ape10}, defined
recursively from $n=0$, where the smeared matrix is equal to
$U_\mu(x)$. Let $\Sigma_{U^{(n)}_\mu (x;f)}$ denote the ``staple''
associated with the link $U^{(n)}_\mu(x;f)$ in terms of the entire
set of $U^{(n)}_\nu(y;f)$ matrices. One step in the recursion
takes one from a set $U^{(n)}_\mu (x;f)$ to a set $U^{(n+1)}_\mu
(x;f)$:
\begin{eqnarray}
X^{(n+1)}_\mu (x;f)= (1-|f|) U^{(n)}_\mu (x;f)+\frac{f}{2(d-1)}
\Sigma_{U^{(n)}_\mu (x;f)}\nonumber\\
Y^{(n+1)}_\mu (x; f
)=X^{(n+1)}_\mu (x;f) \frac{1}{\sqrt{[X^{(n+1)}_\mu (x;f)]^\dagger
X^{(n+1)}_\mu (x;f)}}\nonumber\\
U^{(n+1)}_\mu(x;f)=\frac{Y^{(n+1)}_\mu (x; f)}
{\det^{\frac{1}{N}}\left[Y^{(n+1)}_\mu (x; f)\right]}
\end{eqnarray}
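The projection step in this recursion is easy to sketch; the following Python fragment (illustrative only, with hypothetical names `project_unitary` and `ape_step`; the staple sum is assumed to be supplied by the surrounding lattice code) implements one APE step for a single link:

```python
import numpy as np

def project_unitary(X):
    """Project X onto SU(N): Y = X (X^dagger X)^{-1/2}, then divide out
    the N-th root of det(Y) so the result has unit determinant."""
    # inverse square root of the Hermitian matrix X^dagger X
    w, V = np.linalg.eigh(X.conj().T @ X)
    inv_sqrt = V @ np.diag(w**-0.5) @ V.conj().T
    Y = X @ inv_sqrt
    N = X.shape[0]
    return Y / np.linalg.det(Y)**(1.0 / N)

def ape_step(U, staple, f, d=3):
    """One APE smearing step for a single link, given its staple sum."""
    X = (1 - abs(f)) * U + f / (2 * (d - 1)) * staple
    return project_unitary(X)
```

The eigendecomposition route to $(X^\dagger X)^{-1/2}$ is one of several equivalent choices; any numerically stable inverse square root would do.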
In the simulation, one never encounters a situation where the
unitary projection in the above equations stalls because $X^{(n)}$
is singular. In other words, smearing is well defined with
probability one.
$U^{(n)}_\mu (x;f)$ transforms under gauge transformations the
same way as $U_\mu (x)$ does. For definiteness we restrict our
subsequent discussion to rectangular loops of sides $L_1$ and $L_2$
which fit into a two dimensional plane in the $d$ dimensional
Euclidean space time.
Our smeared Wilson loop operators,
$\hat W [L_1,L_2 ; f ; n]$ are defined as ordered products around
the $L_1 \times L_2$ rectangle restricted to a plane. $L_\alpha$
are integers and give the size of the loop in units of the lattice
spacing. When the traversed link starts at site
$x=(x_1,x_2,...x_d)$ $ x_\mu\in Z$ and connects to the
neighboring site in the positive direction $\mu$, $x+\mu$, the
link matrix is $U^{(n)}(x;f)$, while when this oriented link is
traversed in the opposite direction, the link matrix is
$U^{\dagger (n)}(x;f)$. $\hat W$ depends on the place the loop was
opened, but its eigenvalues do not. The set of eigenvalues is
gauge invariant under the fundamental gauge transformation
operating on $U_\mu (x)$.
We adjust the parameter
dependence in $\hat W$ such that the $N=\infty$ transition points
which are seen to occur on the lattice, survive in
the continuum limit in which the lattice coupling $b$ is taken to
infinity together with $L_{1,2}$ in such a way that the physical
lengths $l_\alpha = L_\alpha m(b)$ are kept fixed. (
$l_1/l_2=L_1/L_2$ is independent of $b$, and represents the loop
shape; our previously defined scale $l$ is $l=2m(b)(L_1+L_2)$).
We set the number of smearing steps $n$ to be proportional to the
perimeter squared (we restricted the loop sizes to even
$L_1+L_2$), $n=\frac{(L_1 + L_2 )^2}{4}$, because in physical terms
the product $f n$ is a length squared. This is
because smearing is a random walk that
fattens the loop and the thickness grows as the square root of the
number of smearing steps. Our choice for $n$ makes $f$ a
dimensionless parameter in the physical sense; on the lattice $f$
is actually bounded to an interval of order one.
The effect of smearing is easy to understand in perturbation
theory where one supposes that each individual step in the
smearing iteration can be linearized. Writing $U^{(n)}_\mu
(x;f)=\exp(iA^{(n)}_\mu (x;f))$, and expanding in $A_\mu$ one
finds~\cite{pertsmear11}, in lattice Fourier space:
\begin{equation}
A^{(n+1)}_\mu (q;f)= \sum_\nu h_{\mu\nu} (q) A^{(n)}_\nu (q;f)
\end{equation}
with
\begin{equation}
h_{\mu\nu} (q)= f(q)(\delta_{\mu\nu} - \frac {{\tilde q}_\mu
{\tilde q}_\nu}{\tilde q^2})+\frac {{\tilde q}_\mu {\tilde
q}_\nu}{\tilde q^2} \end{equation} where $\tilde q_\mu = 2\sin
(\frac{q_\mu}{2})$ and
\begin{equation}
f(q)=1-\frac{f}{2(d-1)}\tilde q^2
\end{equation}
The iteration is solved by
replacing $f(q)$ by $f^n (q)$, where, for small enough $f$,
\begin{equation}
f^n(q) \sim e^{-\frac{f n}{2(d-1)} \tilde q^2}
\end{equation}
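As a quick numerical illustration of this approximation (a sketch only; the lattice momentum is chosen arbitrarily), one can compare the exact kernel with its exponential form:

```python
import numpy as np

d, f, n = 3, 0.03, 25                   # n = (L1+L2)^2/4 for a 5x5 loop
q = np.array([0.4, 0.2, 0.1])           # an arbitrary lattice momentum
qt2 = np.sum((2 * np.sin(q / 2))**2)    # \tilde q^2
exact = (1 - f * qt2 / (2 * (d - 1)))**n
approx = np.exp(-f * n * qt2 / (2 * (d - 1)))
print(exact, approx)                    # agree to a few parts in 10^5
```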
Much larger loops should not be smeared with an $f n$ factor that
keeps on growing as the perimeter squared; rather, for a square
loop of side $L$, for example, the following choice would be
appropriate:
\begin{equation}
f=\frac{\tilde f}{1+M^2 L^2}
\end{equation}
Here, $M$ is in lattice units, $M=\Gamma m(b)$. $\Gamma$ is a hadronic
scale chosen so that at the large $N$ transition, $\Gamma l$ is
less than $0.01$, say.
The new parameter $\tilde f$ should be considered as fixed once and for all; its
exact value is unimportant so long as it is reasonable. However, if that
value is changed by some modest amount the critical loop size will change too.
This critical loop size is non-universal; only the fact that such a critical
value exists within some reasonable hadronic range is universal.
Smearing provides a means to regularize the basic observable and allows us to
proceed finally to the lattice formulation of the large $N$ universality
hypothesis. For simplicity, we formulate it only for square Wilson loops,
denoting by $W$ the operator constructed from smeared link variables.
\subsubsection{Universality hypothesis for square lattice Wilson loops.}
We assume to be given a table (the data)
with numerical values for the expectation value of
\begin{equation}
O_N(y,b)=
\left< \det (e^{\frac{y}{2}}+e^{-\frac{y}{2}}W)\right>
\end{equation}
for an $L\times L$ Wilson loop at an inverse
't Hooft rescaled gauge coupling $b$. The hypothesis says that $O_N(y,b)$ will
exhibit critical behavior at $b=b_c(L)$ and $y=0$
as $N\rightarrow\infty$. There, it will obey
large $N$ universality, which means that there exists a ${\cal N}(b,N)$,
smooth in $b$ at $b=b_c$, such that:
\begin{equation}
\lim_{N\rightarrow\infty} {\cal N}(b,N)
O_N\left(y=
\left(\frac{4}{3N^3}\right)^{\frac{1}{4}}\frac{\xi}{a_1(L)},
b=b_c(L)\left [ 1 +\frac{\alpha}{\sqrt{3N}a_2(L)}\right ] \right) =
\zeta(\xi,\alpha)\end{equation}
${\cal N}(b,N)$ is a normalization factor
similar to the one in (\ref{modeleqn}).
\subsubsection{Large $N$ universality holds already before the continuum limit.}
Even at finite $L \ge L_0$, where $L_0$ is some finite number, there
will be a large $N$ phase transition in loops. Our hypothesis includes
the belief that this transition will be in the DO universality class
even before the continuum limit is taken. Thus, for simple enough
loops it always makes sense to define $b_c(L), a_1(L), a_2(L)$.
If all three parameters approach the continuum limit in the
standard manner, then
large $N$ universality is a property of the continuum
limit.
This is somewhat similar to spontaneous chiral symmetry breaking on the
lattice. Using the overlap action for example~\cite{ovlap}, we can
define a pion decay constant at finite lattice spacing, by
relating the pion mass to small bare quark masses
using standard chiral symmetry considerations. That all
this survives the continuum limit amounts simply to checking that
physical quantities have smooth limits,
approached in standard ways. The
key is to have a good lattice definition that preserves the essential
ingredient of the phenomenon. When there is no reason for them to, the
continuum limit and other critical behaviors do not interfere with each
other. However, if the lattice regularization is faulty, for example
ignoring perimeter effects in the case of Wilson loops, or choosing a
Wilson type of fermionic action in the chiral case, one will have
interference with the continuum limit. This is not to say that these
problems cannot be overcome -- only the analysis becomes more murky and
delicate.
\subsubsection{How to test for universal large $N$ behavior numerically?}
We test the large $N$ universality hypothesis as follows. We first obtain
estimates for $b_c(L)$, $a_1(L)$ and $a_2(L)$, denoted by $b_c(L,N)$,
$a_1(L,N)$ and $a_2(L,N)$, from data at various values of $N$,
assuming $N$ is already large enough to use the asymptotic formulas. Then:
\begin{itemize}
\item Show that all three $N$-dependent quantities have a well
defined limit as $N\to\infty$, which is approached in a way consistent with
large $N$ universality.
\item Show that $b_c(L)$, $a_1(L)$,
$a_2(L)$ have finite limits as $L\to\infty$ (which are correlated with
taking $b\to\infty$ keeping the physical loop size fixed). Moreover,
these limits should be approached in the manner expected
of normal physical field theoretical observables (that is the sub-leading
corrections can be organized by dimensional analysis restricted by symmetry
considerations).
\end{itemize}
\subsubsection{The estimates $b_c(L,N)$ for $b_c(L)$.}\label{bcest}
$O_N(y,b)$ is an even
function of $y$ because $W\in SU(N)$. It is obvious from (\ref{airy})
that $\zeta(\xi,\alpha)$ is an even function of $\xi$.
Let
\begin{equation}
O_N(y,b)= C_0(b,N) + C_1(b,N) y^2 + C_2(b,N) y^4 + \cdots
\label{taylor}
\end{equation}
be the Taylor series of $O_N(y,b)$.
At some fixed value of $L$ we consider the following quantity,
derived from the average characteristic polynomial of the regularized Wilson
loop:
\begin{equation}
\Omega(b,N) = \frac{ C_0(b,N) C_2(b,N)}{C_1^2(b,N)}.
\end{equation}
$\Omega(b,N)$ is essentially a ``Binder cumulant''~\cite{binder}.
The normalization ${\cal N}(b,N)$ and any rescaling of $y$ drop
out from $\Omega (b,N)$. Therefore, if $N$ is large enough,
and if we set $b=b_c(L,N=\infty)$ we should get a value close to the number
$\Omega (b_c,\infty)$. We define an approximation to $b_c(L,N=\infty)$,
$b_c(L,N)$, by the equation:
\begin{equation}
\Omega(b_c(L,N),N) =
\frac{\Gamma(\frac{5}{4}) \Gamma(\frac{1}{4})}{6 \Gamma^2(\frac{3}{4})} =
\frac{\Gamma^4(\frac{1}{4})}{48\pi^2}= 0.364739936
\label{method1}
\end{equation}
Viewing the $u$ integrand in
(\ref{airy}) as performing an average over $u$ dependent observables,
we would write $C_k=\frac{\langle u^{2k}\rangle}{(2k)!}$. For $|\alpha|\gg 1$
we can assume that $u$ is
approximately distributed as a Gaussian. For $\alpha >0$ the mean is zero,
$\langle u\rangle =0$, and using $\frac{\langle u^4 \rangle}{{\langle u^2\rangle}^2}=3$ gives
$\Omega =\frac{1}{2}$. If $\alpha <0$
there are two nonzero saddles of the same absolute magnitude
$\langle u\rangle\ne 0$; these saddles dominate over fluctuations, giving
$\Omega =\frac{1}{6}$. The full function $\Omega(\alpha)$ is shown
in Figure~\ref{omegas}.
At $\alpha=0$ $\Omega$ comes out pretty close to the arithmetic
average of the two asymptotic values: $\frac{1}{2}(\frac{1}{2}+\frac{1}{6})$.
The exact number is given in (\ref{method1}); it was obtained from (\ref{airy})
using
\begin{equation}
\int_{-\infty}^\infty du u^{2k} e^{-u^4} =\frac{1}{2}\Gamma
\left ( \frac{2k+1}{4} \right )
\end{equation}
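The universal value quoted in (\ref{method1}) can be reproduced directly from these moments; a minimal Python check:

```python
from math import gamma, pi

# C_k = <u^{2k}>/(2k)!, with moments of the weight exp(-u^4) at alpha = 0
M = [0.5 * gamma((2 * k + 1) / 4) for k in range(3)]
C0, C1, C2 = M[0], M[1] / 2, M[2] / 24
omega_c = C0 * C2 / C1**2
assert abs(omega_c - gamma(0.25)**4 / (48 * pi**2)) < 1e-12
print(omega_c)   # 0.3647399...
```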
Expanding ${\tilde O}(y,b)$ in (\ref{mock2d}) to order $y^4$ leads to an
explicit expression for $\Omega(b,N)$.
Figure \ref{omegan} shows $\Omega(b,N)$ for different
values of $N$.
In two dimensions the exact $\Omega (b,N)$ connects
monotonically the two extremes, $1/6$ and $1/2$ as $b$ varies
from far below $b_c$ to far above $b_c$.
If one takes $N\to\infty$, there is a discontinuous jump at $b=b_c$,
between the above two asymptotic values. The double scaling
limit of $\Omega(b,N)$, which produced $\Omega(\alpha)$, smoothed
out the jump but maintained the asymptotic behavior of the exact
expression at finite $N$.
From the data we get estimates of $C_i(b,N)$, $i=0,1,2$,
from which we extract $\Omega(b,N)$. We then use (\ref{method1})
to obtain an estimate of $b_c(L,N)$.
$b_c(L,N)$ has been constructed from the free energy of the combined
system of gauge fields and fermions used to represent
the characteristic polynomial. Therefore, ordinary $N$-power
counting rules should apply, and we expect $b_c(L,N)$ to approach
$b_c(L,\infty)\equiv b_c(L)$ as a series in $\frac{1}{N}$.
\subsubsection {The estimates $a_2(L,N)$ for $a_2(L)$.}\label{a2est}
The parameter $a_2(L,N)$ is
obtained by first setting
\begin{equation}
b=b_c(L,N)\left[ 1 + \frac{\alpha}{\sqrt{3N}a_2(L,N)}\right],
\label{bscale}
\end{equation}
where $b_c(L,N)$ has been defined above. Next we write
the derivative of $\Omega$ with
respect to $\alpha$ and set the result equal to the corresponding
universal number in the large $N$ limit.
\begin{equation}
\left.\frac{d\Omega(b,N)}{d\alpha} \right |_{\alpha=0} =
\left.\frac{1}{a_2(L,N)\sqrt{3N}} \frac{d\Omega}{db}\right |_{b=b_c(L,N)}
= \frac{\Gamma^2(\frac{1}{4})}{6\sqrt{2}\pi}
\left( \frac{\Gamma^4(\frac{1}{4})}{16\pi^2} -1\right)
=0.0464609668
\label{a2ex}
\end{equation}
$\frac{d\Omega}{db}$ would be close to maximal at $b=b_c$; hence
$\frac{d\Omega}{db}$ varies relatively little as $b$ stays close to $b_c$. Since
$b_c$ is not known to infinite accuracy, the reduced sensitivity to the exact
value of $b_c$ is an advantage which motivates this choice for defining
$a_2(L,N)$.
Unlike $b_c(L,N)$, the definition of $a_2(L,N)$ involves
going into the large $N$ critical regime around $b_c(L,\infty)$ and
non-standard powers of $N$ come in.
Taking this into account, we expect $a_2(L,N)$ to approach
$a_2(L,\infty)\equiv a_2(L)$ as a power series in $\frac{1}{\sqrt{N}}$.
\subsubsection {The estimates $a_1(L,N)$ for $a_1(L)$.}\label{a1est}
We substitute
\begin{equation}
y=
\left(\frac{4}{3N^3}\right)^{\frac{1}{4}}\frac{\xi}{a_1(L,N)}
\end{equation}
in (\ref{taylor}). We then form a ratio whose value at infinite $N$ is again
a universal number we can easily compute.
\begin{equation}
\sqrt{\frac{4}{3N^3}} \frac{1}{a_1^2(L,N)}
\frac{C_1(b_c(L,N),N)}{C_0(b_c(L,N),N)}
= \frac{\pi}{\sqrt{2}\Gamma^2(\frac{1}{4})}=0.16899456
\label{a1ex}
\end{equation}
This relation defines $a_1(L,N)$.
Similarly to $a_2(L,N)$, the definition of $a_1(L,N)$ involves
going into the large $N$ critical regime around $b_c(L,\infty)$.
Consequently, we expect $a_1(L,N)$ to also approach
$a_1(L,\infty)\equiv a_1(L)$ as a power series in $\frac{1}{\sqrt{N}}$.
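As with (\ref{method1}), the universal constants in (\ref{a2ex}) and (\ref{a1ex}) are simple combinations of gamma functions, and can be checked in a few lines:

```python
from math import gamma, pi, sqrt

g = gamma(0.25)
# right hand sides of (a2ex) and (a1ex) respectively
a2_const = g**2 / (6 * sqrt(2) * pi) * (g**4 / (16 * pi**2) - 1)
a1_const = pi / (sqrt(2) * g**2)
print(a2_const, a1_const)   # 0.0464609668  0.16899456
```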
\subsection {Example of a universality test on synthetic two dimensional data.}
In two dimensions we work already in the limit $L=\infty$. Our main objective is
to check what kind of finite $N$ data could be used to produce the known
infinite $N$ values of $a_1,a_2,b_c$, using the above procedures (with $L$
eliminated) to define $a_1(N), a_2(N), b_c(N)$. The extrapolation to infinite
$N$ is done using a series in $\frac{1}{N}$ for $b_c(N)$ and a series in
$\frac{1}{\sqrt{N}}$ for $a_1(N), a_2(N)$ as explained above.
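These extrapolations amount to ordinary linear least-squares fits in an inverse power of $N$; a minimal sketch (the helper `extrapolate` is a hypothetical name, demonstrated here on data generated from a known series) is:

```python
import numpy as np

def extrapolate(Ns, vals, power, nterms=3):
    """Fit vals(N) to sum_{k=0}^{nterms-1} c_k N^{-power*k} and return
    the N -> infinity value c_0.  Use power = 1 for b_c(N) and
    power = 0.5 for a_1(N), a_2(N)."""
    A = np.array([[N**(-power * k) for k in range(nterms)] for N in Ns],
                 dtype=float)
    coef, *_ = np.linalg.lstsq(A, np.asarray(vals, dtype=float), rcond=None)
    return coef[0]

Ns = [17, 23, 29, 37, 41, 47]
vals = [0.8 + 0.3 / N - 0.5 / N**2 for N in Ns]   # synthetic 1/N series
print(extrapolate(Ns, vals, power=1.0))           # recovers 0.8
```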
The values of $N$ in Figure~\ref{omegan}
were chosen to match the ones employed
in the three dimensional simulation.
$\Omega(b,N)$, as a continuous function of $b$ for a fixed $N$,
defines via (\ref{method1}) the number $b_c(N)$.
With $b_c(N)$ so determined, we use (\ref{a2ex})
to determine $a_2(N)$ from $\frac{d\Omega(b,N)}{db}\left. \right |_{b=b_c(N)}$.
Further, we use (\ref{a1ex}) to find the value of $a_1(N)$ from
$\frac{C_1(b_c(N),N)}{C_0(b_c(N),N)}$. Figures \ref{bcn},\ref{a2n},\ref{a1n}
show what can be done with ``perfect'' data
for $N=17,23,29,37,41,47$.
The $N\to\infty$ estimate of $b_c$ is the most accurate followed
by the estimates of $a_1$ and $a_2$.
This is typical in that we expect (and need) an accurate estimate of the
critical point, while the estimates of the amplitudes come at lower accuracy.
While the synthetic data was produced only at values of $N$ that are practical
also in three dimensions, it
has three features that are not shared with
lattice data obtained by Monte Carlo simulations:
\begin{enumerate}
\item There are no statistical errors.
\item We know $\Omega(b,N)$
and $\frac{C_1(b_c(N),N)}{C_0(b_c(N),N)}$
as continuous functions of $b$. The numerical
simulation will be performed only on a discrete set of
$b$ values that brackets $b_c(N)$
and one will need to interpolate.
\item We know $\frac{d\Omega(b,N)}{db}$ exactly.
A direct numerical estimate of this derivative would involve linear
combinations of connected correlations of $C_i(b,N)$ with
the plaquette operator. This has large statistical
errors and is expected to be too expensive
to compute accurately. Therefore, we shall not have a direct
numerical estimate of the derivative and will extract it
from the interpolation of $\Omega(b,N)$ we have already used when
determining $b_c (N)$.
\end{enumerate}
The synthetic data is used to
indicate to us what ranges of $N$ are needed
to reliably extrapolate the three parameters to their
$N\to\infty$ limits.
The conclusion is that it is possible to carry out quite accurate
estimates of
$b_c$ and reasonably accurate estimates of
$a_2$ and $a_1$ in that $N\to\infty$ limit
from data obtained
at values of $N$ which are within the range of Monte Carlo simulations
in dimensions higher than two. However, the differences we have listed above
are sources of extra systematic and stochastic errors that we shall need to
control.
\subsection{Volume dependence and large $N$ reduction.}
In a precise sense, YM theory in 3 or 4 dimensions on a finite torus
becomes independent of torus size at infinite $N$ if
the torus is larger than some critical size~\cite{largenred}.
The size has to be large enough
for the system to be in the so-called 0c phase at infinite $N$.
In 0c, traces of Wilson loops in representations of finite dimension are equal
to their infinite volume values up to corrections of order $\frac{1}{N^2}$.
0c is characterized by all Polyakov loops having uniform eigenvalue
distributions.
Using the fermionic representation of the average characteristic polynomial
we expect that
\begin{equation}
\frac{1}{N}\log(\langle\det(z-W)\rangle)\end{equation}
will also be independent of the volume at leading order at large $N$, with
corrections going as $\frac{1}{N}$. Looking at the powers of $N$ that enter into
the function $\zeta(\xi,\alpha)$ we conclude that it also should be independent
of the volume. However, one expects the sub-leading
corrections in $\frac{1}{N}$,
which are volume dependent (and non-universal even at
infinite volume) to be relatively larger than for traces of Wilson loops.
The reason to expect slower convergence to the infinite $N$ limit is
that $\zeta(\xi,\alpha)$ describes a large $N$ critical regime, where taking
enough derivatives with respect to some parameter would produce quantities that
diverge in the ordinary (without double scaling) large $N$ limit. Obviously,
nothing is supposed to diverge at finite $N$, so sub-leading corrections must be
large. These sub-leading corrections will be even more significant at
smaller
volumes.
Thus, although large $N$ reduction can be exploited, one needs to carry out an
explicit check to determine how much contamination of the final estimates has
been caused by using relatively small volumes.
\section{Three dimensions.}
We use an ordinary single plaquette Wilson action defined on a
hypercubic lattice.
Our simulation method
employs a combination of heat-bath and over-relaxation updates and
``thermal equilibrium'' is achieved in reasonable lengths of computer time.
We keep our statistical errors small relative to systematic ones.
Throughout, we use $b$ for the lattice gauge coupling which is the
inverse bare 't Hooft coupling. It is related to the conventional
lattice coupling $\beta$ by
\begin{equation}
b=\frac{\beta}{2N^2}=\frac{1}{g_{\rm YM}^2N}\end{equation}
and $b$ already has the right power of $N$
extracted.
It is useful to consider the tadpole improved coupling,
$b_I=be(b)$ where $e(b)$ is the average value of the
trace of plaquette operator. To facilitate a translation
from $b$ to $b_I$, we have plotted $e(b)$
in Figure \ref{plaq} for the range
of $b$ used in this paper.
We started the project by carrying out preliminary simulations, intended to
identify a convenient value for the
parameter $f$. We established, in a way
similar to
our earlier work in four dimensions~\cite{ourjhep},
that as $L_1=L_2=L, f, n$ are varied, at specific
lattice couplings, the spectrum of $\hat W [L_1=L_2=L; f ; n] $
opens a gap for very small and/or very heavily smeared loops. This gap
closes for very large and/or very lightly smeared loops.
We worked\footnote{We would like to thank Alejandro de la Puente
for some preliminary work in this direction.} at $N=37$
on a $8^3$ lattice. Keeping $b$ fixed, we varied $f$ and obtained
an estimate of the gap using the technique described in~\cite{ourjhep}.
This was done for several Wilson loops of size $L^2$, $L$
ranging from $L=2$ to $L=10$. In this manner, we obtained
estimates for $f_c(L,b)$, the critical value of $f$ at which the
gap opens around eigenvalue $-1$
when the smearing of $L\times L$ Wilson loops
is steadily increased at fixed $b$. The function $f_c(L,b)$ has a continuum
limit, obtained when $L$ and $b$ go to infinity in the usual correlated way.
This was tested employing six different values of
coupling, $b=0.85,0.9,1.0,1.1,1.2,1.3$; we made sure that all these
couplings are in the $0$c phase~\cite{threed} for our $8^3$ lattice.
We found that all the values $f_c(L,b)$ fall on a common curve
when plotted as a function of $L/b_I (b)$, as shown in Figure \ref{fcrit}.
Based on this work, we chose to carry out the more detailed
analysis of the large $N$ critical region, which is the main topic
of this paper, at $f=0.03$. Other values of $f$, between $0.02$ and $0.04$,
might have served as well, although many numbers, including $b_c$ and $a_2$,
would have changed by modest amounts. Much higher values of
$f$ are counter indicated at this stage of our research
because we want to avoid finite volume effects and
therefore wish to keep $L$ below $8$. A lattice of size $8^3$ affords
reasonably speedy simulation, even at $N=47$, but the cost
quickly escalates when the lattice size is increased. A more detailed
discussion of finite size effects will be presented below.
\subsection{Details of the numerical analysis.}
Our simulations are carried out at prime values of $N$ to ensure that
the phase 0c does not decay into phases related to proper
subgroups of $Z(N)$. This is a precaution; it is possible that one
could also work with non-prime values of $N$.
We employed six different values of $N$,
namely $N=17,23,29,37,41,47$. There are three more primes in this
range, but they are so close to other primes, that we did not expect the
extra information to be worth the effort.
In order to check for volume dependence
we obtained data for $2^2$ Wilson loops on $3^3$, $4^3$ and $6^3$ lattices
and for the $3^2$ loop on $4^3$ and $8^3$ lattices. For our main study of the
double scaling limit we produced data for loops of larger sizes, $4^2$,
$5^2$ and $6^2$, all on a single lattice size, $8^3$. For each square
loop $L^2$, and for each value of $N$ we carried out a series of simulations in
a range of $b$ separated by small steps $\Delta b$.
Table~\ref{tab1}, which provides the intermediate numerical output used in
the study of the double scaling limit, also lists all values of $L$ and $N$
along with the lattice volume $V$.
After equilibration, for which we typically allowed several thousand
lattice passes, the different steps were separated by 1000
passes. We measured the autocorrelation of our observable and found that
this separation exceeded it by a margin large enough for the samples
to be treated as statistically independent. For each entry in Table~\ref{tab1} we did
somewhere between $31$ and $48$ separate simulations on parallel nodes
in one of our PC clusters. Measurements of a single Wilson loop
were averaged over the whole lattice and over all orientations.
Statistical errors obtained from the measurements on
several configurations were always estimated
by jackknife with single elimination.
In each run we collected data for 30 values of $y$ around zero, at equally
spaced points, where the range was determined to be fixed in terms of the
corresponding rescaled $\xi$ variable, assuming $a_1=1$ at all $N$, $L$ and $b$:
$0 \le \xi \le 3$. There is no need to collect data also at negative values of
$\xi$, since the symmetry under a sign flip of $y$ is exact.
In order to perform a cross-check of our procedure described in
(\ref{bcest}), (\ref{a2est}) and (\ref{a1est}),
the first type of data we collected is for the observable $O(y,b,L)$:
\begin{equation}
O(y,b,L)=\langle \det\left ( e^{\frac{y}{2}}+ e^{-\frac{y}{2}} W(b,L)\right )\rangle
\end{equation}
More specifically, we collected data for its logarithmic derivative with
respect to $y$ directly; this means that at fixed $y,b,L$
for each gauge configuration and for each loop one keeps two numbers,
the determinant and its derivative with respect to $y$.
These numbers are summed over all translations of the loop and
stored for subsequent gauge averaging when the analysis is done.
For a fixed $N$ and $L$, the data makes up a two dimensional rectangular grid
in the $\xi,\alpha$ plane.
We used a nonlinear fitting
routine to find a best match of the logarithmic derivative with respect to $y$
of $O$ to the logarithmic derivative with respect to $\xi$ of
the double scaling function $\zeta(\xi,\alpha)$. This produces three parameters
$b_c(L,N),a_1(L,N),a_2(L,N)$ which can be extrapolated later on, first in $N$,
and subsequently in $L$. The fitting routine we used was based on the
Levenberg-Marquardt method and the implementation in~\cite{numrec}.
The logarithmic derivative with respect to $\xi$ of
the double scaling function $\zeta(\xi,\alpha)$ was calculated to high
accuracy using Gaussian quadrature over several intervals.
In addition to producing estimates to the parameters as mentioned, this showed
us that indeed one approaches the double scaling limit. We first tested the
nonlinear fitting method on synthetic data in two dimensions as reported
in~\cite{pospar}. This will not be reviewed here again.
As a method of estimating
parameters, the simultaneous nonlinear fit has the drawback that all parameters
now have corrections of the order $\frac{1}{\sqrt{N}}$. As we have seen, for
$a_1(L,N),a_2(L,N)$ this is unavoidable, but for $b_c(L,N)$ we can do better.
The nonlinear simultaneous fit mixes the corrections up and therefore is not
the best way to prepare the ground for the large $N$ extrapolation.
The second type of data we collected is used for determining the parameters from
the behavior around $y=0$ that are expressed by the three coefficients
$C_i(b,N,L)$.
We first obtain an estimate for $\Omega(b,N,L)$.
Figure \ref{omega47} shows a sample plot of $\Omega(b,N,L)$ as a function
of $b$ for $N=47$ and $L=3$ on a $4^3$ lattice.
The behavior is similar to the two dimensional case.
The top and bottom horizontal lines are the limits at
weak and strong coupling, $1/2,1/6$ respectively. The middle line
is the expected value at critical coupling in the $N\to\infty$
limit as given by the right hand side of (\ref{method1}).
We focus on a region of $\Omega(b,N,L)$ that is bounded by the two
horizontal lines that are on either side of the middle line in
Figure \ref{omega47}. We view $b$ as a function of
$z=\Omega(b,N,L)-0.364739936$ in this region and
use a linear three-parameter fit to a degree 2 polynomial:
\begin{equation}
b=b_c(L,N) + \frac{1}{\frac{d\Omega}{db}|_{b=b_c(L,N)}} z
+\beta z^2
\end{equation}
This gives us our determination for $b_c(L,N)$.
With the help of this same polynomial we then extract $a_2(L,N)$, using
(\ref{a2ex}).
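For concreteness, this two-step extraction can be sketched as follows (an illustrative Python fragment with the hypothetical name `fit_bc_a2`, using the universal constants from (\ref{method1}) and (\ref{a2ex}) as written):

```python
import numpy as np

OMEGA_C = 0.364739936          # universal value of Omega at criticality
SLOPE_C = 0.0464609668         # universal value of dOmega/dalpha at 0

def fit_bc_a2(b_vals, omega_vals, N):
    """Fit b as a degree-2 polynomial in z = Omega - OMEGA_C; the constant
    term is b_c(L,N), the linear coefficient is 1/(dOmega/db at b_c),
    and a_2(L,N) then follows from the universal slope."""
    z = np.asarray(omega_vals, dtype=float) - OMEGA_C
    c2, c1, c0 = np.polyfit(z, np.asarray(b_vals, dtype=float), 2)
    bc = c0
    dOmega_db = 1.0 / c1
    a2 = dOmega_db / (np.sqrt(3 * N) * SLOPE_C)
    return bc, a2

# synthetic check: data generated from b = 1 + 0.5 z + 0.1 z^2
z = np.linspace(-0.05, 0.05, 11)
bc, a2 = fit_bc_a2(1.0 + 0.5 * z + 0.1 * z**2, OMEGA_C + z, N=47)
print(bc)   # recovers 1.0
```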
Next, we analyze the numbers for the ratio $\frac{C_1(b,N,L)}{C_0(b,N,L)}$ as
follows: We take $\frac{C_1(b,N,L)}{C_0(b,N,L)}$ as a
function of $z$ in the same region we used above
and again carry out a linear three-parameter fit to a degree 2 polynomial.
$\frac{C_1(b_c(L,N),N,L)}{C_0(b_c(L,N),N,L)}$ is set as the leading coefficient
in this fit. Finally, $a_1(L,N)$ is extracted using equation (\ref{a1ex}).
\subsection{Extrapolation to infinite $N$.}
We take the $6^2$ loop on an $8^3$ lattice as a sample case and plot the
results from the linear fit method described in (\ref{bcest}),
(\ref{a2est}) and (\ref{a1est}). The solid circles in figures
\ref{bc6},\ref{a26} and \ref{a16} show the results for $b_c(L,N)$,
$a_2(L,N)$ and $a_1(L,N)$ respectively. The extrapolation to infinite
$N$ was done using a three term series. One cannot use smaller $N$
values when doing this, and larger $N$ values are too expensive in
computer time to produce, the simulation time growing as $N^3$. Some
systematic errors are induced by this extrapolation; one can get a
feel for them by using more, or fewer, powers of $N$ in the series.
The open circles in figures \ref{bc6},\ref{a26} and \ref{a16}
show the performance of the fit to the finite $N$ numbers
and their infinite $N$ extrapolated values. The $N=\infty$ estimate
differs from the data at the largest $N$ by $6\%$, $39\%$ and $10\%$
for $b_c$, $a_2$ and $a_1$ respectively. This amount of extrapolation
is roughly the same as that we had in the analysis of the synthetic
two dimensional data except for $a_2$, where it is around $20\%$ for
the synthetic data. All in all, although the extrapolations are quite
substantial, they are in line with expectations, and the two
dimensional study provides some confidence in the validity of the
infinite $N$ numbers we obtained in three dimensions. The sample case
we show is typical of other loops we have analyzed. Always, the
determination of $b_c(L,\infty)$ is the most reliable. Next in terms
of reliability is the determination of $a_1(L,\infty)$. The
determination of $a_2(L,\infty)$ is the least reliable, perhaps
because of an amplification of small errors in the determination of
$b_c(L,\infty)$.
As a cross-check of the determination of the infinite $N$ numbers we
analyze the $6^2$ loops also using the nonlinear simultaneous three parameter
fit we described earlier. We compare our three target numbers, $b_c(L,\infty)$,
$a_1(L,\infty)$ and $a_2 (L,\infty)$, obtained
in the nonlinear simultaneous fit to those obtained in the
linear method based on $\Omega$.
It is only the infinite $N$ values that have to agree within errors, since
finite $N$ effects will differ in the two methods. We do not make a great
effort to estimate the errors in the nonlinear fit, as it is used
only for a general consistency test. We observe a dependence on the
ranges we use which produces systematic errors that are larger
than the statistical ones. It is this dependence on ranges we eliminate in
large measure (not completely though, as we need a range of $b$-values
for interpolation purposes, as explained) in the linear fitting method, based on
$\Omega$. But, one may worry that focusing on too narrow a range in
$b$ can do more harm than good. This is the intuitive reason for our
carrying out this consistency test. It goes beyond the usual feeling that
nonlinear multi-parameter fits are less reliable than sequential linear fits.
The results from the nonlinear simultaneous fit are shown
by solid triangles in Figures
\ref{bc6},\ref{a26} and \ref{a16}. The open triangles show
the performance of the fit versus $N$ and also show the
extrapolated values at infinite $N$.
As expected, they do not agree at finite $N$,
but there is reasonable agreement on the extrapolated values at $N=\infty$.
This assures us that focusing on the
region at $y=0$ in the linear method based on $\Omega$
does not present any dangers to reliability.
The method used for fitting the parameters, as we have explained before, deals
with one parameter at a time and indirectly
uses linear fits. This is our main method and it gave three numbers
for each $V,L,N$, which are summarized in Table~\ref{tab1}.
\subsection{Finite volume effects.}
As explained, one needs to test for contamination from finite volume effects,
even though there is large $N$ reduction promising an eventual lack of
sensitivity to finite volume effects. It is not that the true infinite $N$
values are suspected of being dependent on torus size: The point is that the
estimates we get for the infinite $N$ values are obtained by extrapolation from
a set of finite $N$ values. Each one of these finite $N$ numbers does have a
torus size dependence. Fitting to this finite set of values at finite $N$'s will
produce best fit parameters that will depend on torus size too. The parameter
giving the coefficient of $\frac{1}{N^0}$ will have some torus size dependence
too, which would get weaker as more data at higher values of $N$ is made
available. The coefficients of sub-leading terms of the form $\frac{1}{N^k}, k>0$
will have a dependence on torus size that is not supposed to disappear when data
at higher values of $N$ is made available. In any finite set of data, the
coefficient of $\frac{1}{N^0}$ will compensate, by some torus size dependence,
for the absence of data at even higher values of $N$.
We have tested for contamination from finite volume effects
in two cases: We compare in Figures \ref{bcv},\ref{a2v}
and \ref{a1v} the results obtained using
a $4^3$ and an $8^3$ volume for a $3^2$ loop
and $3^3$, $4^3$ and $6^3$ volumes for a $2^2$ loop.
We see that only smaller values of $N$ are affected.
Also,
the effect is stronger on the $3^2$ loop, which makes sense
since a $3^2$ loop on a $4^3$ volume
ought to be more contaminated than a $2^2$ loop on a $3^3$ volume.
However, the main conclusion is positive: our infinite $N$ values
are safe at our level of accuracy from finite volume contamination.
\subsection{Extrapolation to infinite $L$ -- continuum extrapolation.}
The transition is a continuum feature; therefore, all the values of $L$
represent the same critical loop of a physical size $l_c$. In three dimensions
$b$ has dimensions of length, therefore $\frac{L}{b_c(L)}$ needs to approach
a finite limit. This limit is to be approached with corrections dictated by
renormalization theory. Although the action generates only dimension two
corrections, the observable has also dimension one corrections, and therefore all
our fits are to three terms in a series in $\frac{1}{L}$ for $\frac{b_c}{L}$,
$a_1$ and $a_2$.
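In other words (the coefficients $d_{1,2}$ are our notation), the three-term fit for the critical coupling takes the form

```latex
\begin{equation*}
\frac{b_c(L,\infty)}{L}\;=\;\frac{b_c}{l_c}\;+\;\frac{d_1}{L}\;+\;\frac{d_2}{L^2},
\end{equation*}
```

with analogous three-term fits for $a_1$ and $a_2$; the $\frac{1}{L}$ term reflects the dimension one corrections entering through the observable and the $\frac{1}{L^2}$ term the dimension two corrections generated by the action.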
Higher $L$ values run the danger of finite
volume contamination and therefore we avoided producing them. There also is a
cost factor involved since
the higher $L$ is the larger the lattice $b$ is and
consequently the larger the lattice volume has to be
in order to stay in the right phase, 0c. As usual, computer time will eventually
grow linearly with $L^3$.
One way to check for consistency is to redo the fits by replacing the values
$b_c(L)$ for each $L$ by their mean field, or tadpole, improved values:
\begin{equation}
b^I (b) = b e(b)
\end{equation}
where $e(b)$ is the average of $\frac{{\rm Tr} (U_p)}{N}$ for
unsmeared parallel transporters round plaquettes $p$. It is known that
a large fraction of $\frac{1}{b^2}$ corrections get absorbed when using
$b^I$ instead of $b$ as an extrapolation parameter to continuum. If our
continuum extrapolation method is to be more than
merely asserted, it should give almost the same numbers in
the continuum limit when using either extrapolation method. However,
when using mean field improvement we should see
less fractional variability as function of $L$. These features were indeed
observed; thus our continuum extrapolation has passed a self-consistency check,
which admittedly is somewhat heuristic.
Figures \ref{bca},\ref{a2a} and \ref{a1a} show how the continuum limit
is approached. The $2\times 2$ loop is probably on too coarse a
lattice and we perform fits which include and fits which exclude this
loop. Figure \ref{bca} shows the results for $\frac{b_c(L,\infty)}{L}$
and $\frac{{b_c}_I(L,\infty)}{L}$. The extrapolated values for both
data sets are consistent with each other, but there is a significant
difference in the extrapolated value with and without the $2\times 2$
loop. The $\chi^2$ value indicates that probably the lattice spacing
for the $2^2$ loops becomes too large.
We therefore quote $0.113(6)$ as the continuum value for $b_c/l_c$
at $f=0.03$. The extrapolated result for $a_2$ has large errors as
expected and all we can quote is $5.8(2.3)$. The result for $a_1$ is
consistent with $a_1$ being unity.
\section{Summary and Discussion.}
We hypothesized that the strong to weak coupling phase transition
in large $N$ QCD is in the same universality class in two, three
and four dimensions.
Our primary finding is a picture that is consistent with our hypothesis
in three dimensional Euclidean YM. Moreover, it seems that the parameter
$a_1$ is consistent with the value $1$, indicating that indeed the
phase transition is as simple as it only could be.
One should keep in mind that numerical tests are never foolproof.
Even if it turns out that some details do not work as the hypothesis
we formulated predicts, there could be weaker forms of the
hypothesis that do hold. We believe that it is better to have some clear
hypothesis one is testing, than just trying to accumulate a large body
of numerical information and look for systematics later. We hope that
our hypothesis, or a competing one, will be independently
checked in perhaps other ways.
\section{Plans for the future.}
The first problem for the future is to extend this
work to four dimensions. If everything works as in three dimensions
one can proceed to address the question of shape dependence.
It would be useful to derive double scaling limits for observables other
than
the characteristic polynomial and use those for carrying out numerical tests.
In particular, the double scaling limit of the distribution of extremal
eigenvalues would hold the promise of providing easier and more
stringent numerical checks.
A related question has to do with matching the universal data in the transition
region to the perturbative side. For this, smearing and the precise
form of the observable become important practical details.
Intuitively, the Grassmann/fermion representation for the
characteristic polynomial provides a framework that
has better potential to be amenable to
standard renormalized perturbation theory than whatever
framework would be able to handle extremal eigenvalues.
The reason is that the Grassmann/fermion
representation provides local expressions in spacetime, but it is hard to
imagine a space-time local approach for handling a double scaling limit
for the distribution of extremal eigenvalues.
Nevertheless, exact results about extremal eigenvalues would be useful;
however, one may need to match (this time within the matrix model)
the parametric behavior of extremal eigenvalues
(assuming it can be obtained) to the
description of the average characteristic polynomial
we have been working with in this paper.
The concept of an extremal eigenvalue is less natural for a unitary matrix
than it is for a hermitian one. In the unitary case, one
would define ``extremal'' as the eigenvalue on the unit circle that is closest
to $-1$, the distance being measured by the shortest arc connecting the
eigenvalue to $-1$ round the unit circle. Perhaps a more natural alternative is to
consider the probability distribution of the largest gap, where the
``gaps'' are measured by arcs connecting two consecutive eigenvalues round the
circle.
\acknowledgments
R. N. acknowledges partial support by the NSF under grant number
PHY-055375. H. N. acknowledges partial support by the DOE under grant
number DE-FG02-01ER41165 at Rutgers University and a Humboldt
Foundation prize. The stay associated with the prize, at Humboldt
University Berlin, was very pleasant and H. N. is grateful to Ulli
Wolff and his computational physics group for their hospitality during
two long-term stays in Berlin. Comments made by J. Feinberg, during
an extended visit at Rutgers, are also gratefully acknowledged.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Legendrian and transverse knots}
A \emph{Legendrian knot} $L$ in $\R^3$ (or in $S^3=\R^3\cup\{\infty\}$), endowed with the standard contact form $dz-ydx$ is an oriented knot along which the form $dz-ydx$ vanishes identically.
Legendrian knots are determined by their front projection to the $xz$-plane; the projections are smooth in all but finitely many cusp points, have no vertical tangents ($y=\frac{dz}{dx}$) and at each crossing the strand with smaller slope is in the front. Note that, in order to have the standard orientation on $\R^3$, the $y$ axis points into the page. By changing the parts with vertical tangents to cusps and adding zig-zags, a generic smooth projection of a knot can be arranged to be of the above type. Thus any knot can be placed in Legendrian position. But this can be done in many different ways up to Legendrian isotopy. For example by adding extra zig-zags in the front projection we obtain a different Legendrian representative. This method is called \emph{stabilization}. If we add a down cusp then it is called \emph{positive stabilization} and it is called \emph{negative stabilization} otherwise. (Here and throughout the paper we use the conventions of Etnyre \cite{E}.) There are two classical invariants for Legendrian knots, described in the following. By pushing off the knot in the $\frac{\partial}{\partial z}$ direction we obtain the \emph{Thurston-Bennequin framing} of the Legendrian knot. Comparing this with the Seifert framing we get a number which is called the \emph{Thurston-Bennequin number} $\textrm{tb}(L)$. The \emph{rotation number} $\textrm{r}(L)$ is the winding number of $TL$ with respect to a trivialization of the contact planes along $L$, that extends to a Seifert surface.
A \emph{transverse knot} in $S^3$ with the standard contact structure is a knot along which the contact form $dz-ydx$ never vanishes.
Any transverse knot is naturally endowed with an orientation, the one along which the contact form is positive.
Again, every knot can be placed in transverse position, for example by translating its Legendrian realization in the $\pm\frac{\partial}{\partial y}$ direction. The resulting transverse knot is called the \emph{transverse push off} of the Legendrian knot. A push off is called positive if the orientation of the knot agrees with the natural orientation of the transverse knot and called negative otherwise.
Note that the negative push off is a transverse knot with reverse orientation. A Legendrian knot is called the \emph{Legendrian approximation} of its positive push off.
Two transverse knots are transverse isotopic if and only if their Legendrian approximations have common negative stabilizations.
By pushing off the transverse knot $T$ in a direction of a vector field in the contact planes we get a
framing. Comparing this to the Seifert framing we get the \emph{self-linking number} $\textrm{sl}(T)$. The self-linking number can be deduced from the classical invariants of a Legendrian approximation of the knot: $\textrm{sl}(L_{\pm})=\textrm{tb}(L)\mp\textrm{r}(L)$.
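For later use it is worth recording the standard effect of stabilization on the classical invariants, $\textrm{tb}(L_{\pm})=\textrm{tb}(L)-1$ and $\textrm{r}(L_{\pm})=\textrm{r}(L)\pm 1$; combined with the formula above, this shows that negative stabilization does not change the self-linking number of the positive push off:

```latex
\begin{equation*}
\textrm{sl}\big((L_-)_+\big)\;=\;\textrm{tb}(L_-)-\textrm{r}(L_-)
\;=\;\big(\textrm{tb}(L)-1\big)-\big(\textrm{r}(L)-1\big)
\;=\;\textrm{tb}(L)-\textrm{r}(L)\;=\;\textrm{sl}(L_+),
\end{equation*}
```

consistent with the fact, quoted above, that Legendrian approximations of the same transverse knot have common negative stabilizations.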
A knot is called \emph{Legendrian simple} (or \emph{transversely simple}) if any two Legendrian (transverse) realizations of it with equal Thurston-Bennequin and rotation (self-linking) number(s) are isotopic through Legendrian (transverse) knots.
As it is explained in \cite{EH2}, there is a well defined notion of the connected sum of two Legendrian or transverse knots in $S^3$, which comes from connected summing the two $S^3$'s the knots are sitting in. The above process has a good description in the front projection of the knots, as can be seen in Figure \ref{fig:legendreonnsum}.
\begin{figure}
\centering
\includegraphics[scale=0.4]{connsummleg}
\caption{Connected sum of two Legendrian knots}
\label{fig:legendreonnsum}
\end{figure}
\subsection{Knot Floer homology with multiple basepoints}
Here we outline the basic definitions of knot Floer homologies with multiple basepoints, originally defined by Ozsv\'ath and Szab\'o \cite{AAA} and independently by Rasmussen \cite{R}. For a knot $K$ in an oriented, closed three-manifold $Y$ there is a self-indexing Morse function with $k$ index zero and $k$ index three critical points such that $K$
is made out of $2k$ flow lines connecting all the index zero and index three critical points. Such a Morse function gives rise to a Heegaard diagram $(\Sigma,\alphas,\betas,\ws,\zs)$ for $(Y,K)$ in the following way. Let $\Sigma=f^{-1}(\frac{3}{2})$ be a genus $g$ surface. The $\alpha$-curves $\alphas=\{\alpha_i\}_{i=1}^{g+k-1}$ are defined to be the circles of $\Sigma$ whose points flow down to the index one critical points. Similarly $\betas=\{\beta_i\}_{i=1}^{g+k-1}$ are the curves with points flowing up to the index two critical points. Finally let $\ws=\{w_i\}_{i=1}^{k}$ be the positive and $\zs=\{z_i\}_{i=1}^{k}$ the negative intersection points of $K$ with $\Sigma$.
Consider the module ${CF}^-(\Sigma, \alphas,\betas,\ws)$ over the polynomial algebra $\mathbb{Z}_2[U_1,\dots,U_k]$ freely generated by the intersection points of the totally real submanifolds $\Ta=\alpha_1\times\cdots\times\alpha_{g+k-1}$ and $\Tb=\beta_1\times\cdots\times\beta_{g+k-1}$ of $\mathrm{Sym}^{g+k-1}(\Sigma)$. This module is endowed with the differential:
\[
\partial^-\mathbf x=
\sum_{\mathbf{y}\in \Ta\cap\Tb}
\sum_{\begin{subarray}{l}
\phi \in\pi_2(\mathbf x,\mathbf{y}) \\
\mu(\phi)=1
\end{subarray}}
\left|\frac{\mathcal{M}(\phi)}{\R}\right|U_1^{n_{w_1}(\phi)}\cdots U_k^{n_{w_k}(\phi)}\mathbf{y}
\]
where, as usual, $\pi_2(\mathbf x,\mathbf y)$ denotes the space of homotopy classes of Whitney disks connecting $\mathbf x$ to $\mathbf{y}$, $\mathcal{M}(\phi)$ denotes the moduli space of pseudo-holomorphic representatives of $\phi$, the Maslov index
$\mu(\phi)$ denotes its formal dimension, and $n_p(\phi)=\#\{\phi^{-1}(p\times\mathrm{Sym}^{g+k-2}(\Sigma))\}$ is the local multiplicity of $\phi$ at $p$. Let
\begin{equation}\label{eqn:fact}
\left(\widehat{CF}(\Sigma,\alphas,\betas,\ws),\widehat{\partial}\right)=\left(\frac{{CF}^-(\Sigma,\alphas,\betas,\ws)}{(U_1=0)},\left[\partial^-\right]\right).
\end{equation}
The chain-homotopy type of the above complexes are invariants of $Y$ in the following sense:
\begin{theorem}\cite{AAA} Let $Y$ be an oriented closed three manifold. Consider the Heegaard diagrams $(\Sigma_1,\alphas_1,\betas_1,\ws_1)$ and $(\Sigma_2,\alphas_2,\betas_2,\ws_2)$ for $Y$ with $\abs{\ws_1}=k_1$ and $\abs{\ws_2}=k_2$. Assuming $k_1\geq k_2$ both complexes ${CF}^-(\Sigma_1,\alphas_1,\betas_1,\ws_1)$ and ${CF}^-(\Sigma_2,\alphas_2,\betas_2,\ws_2)$ can be thought of as chain complexes over $\mathbb{Z}_2[U_1,\dots,U_{k_1}]$ where $U_{k_2},\dots,U_{k_1}$ act identically on the latter complex. With this setup the two chain complexes are chain-homotopy equivalent.
Similar statement holds for the $\widehat{CF}$-theory, moreover the chain homotopy equivalences form a commutative diagram with the factorization map of (\ref{eqn:fact}). \qed
\end{theorem}
Hereafter we assume that our underlying three-manifold is the three-sphere. Note that in this case the homology of ${CF}^-(\Sigma,\alphas,\betas,\ws)$ is ${HF}^-(S^3)=\mathbb{Z}_2[U]$.
The relative Maslov-grading of two intersection points $\mathbf x,\mathbf{y}\in \Ta\cap\Tb$ is defined by $M(\mathbf x)-M(\mathbf{y})=\mu(\phi)-2\sum n_{w_i}(\phi)$, where $\phi\in\pi_2(\mathbf x,\mathbf{y})$ is any homotopy class from $\mathbf x$ to $\mathbf{y}$. We extend this relative grading to the whole module by $M(U_1^{a_1}\cdots U_k^{a_k}\mathbf x)=M(\mathbf x)-2(a_1+\cdots+a_k)$. This grading can be lifted to an absolute grading in $S^3$ by fixing the grading of the generator of ${HF}^-(S^3)=\mathbb{Z}_2[U]$ at $0$.
Note that so far we made no reference to the basepoints $\zs$. The relative Alexander grading is defined by $A(\mathbf x)-A(\mathbf{y})=\sum n_{z_i}(\phi)-\sum n_{w_i}(\phi)$, where again $\phi$ can be chosen to be any homotopy class in $\pi_2(\mathbf x,\mathbf{y})$. This relative grading can be uniquely lifted to an absolute Alexander grading to satisfy $\sum T^{A(\mathbf x)}=\Delta_K(T)(1-T)^{n-1} \quad (\textrm{mod } 2)$, where $\Delta_K(T)$ is the symmetrized Alexander polynomial. We can extend the Alexander grading to the module by $A(U_1^{a_1}\cdots U_k^{a_k}\mathbf x)=A(\mathbf x)-(a_1+\cdots+a_k)$. As the local multiplicities of pseudo-holomorphic discs are non-negative, we obtain filtered chain complexes ${CFK}^-(\Sigma,\alphas,\betas,\ws,\zs)$ and $\widehat{CFK}(\Sigma,\alphas,\betas,\ws,\zs)$, which are invariants of the knot:
\begin{theorem}\cite{AAA} Let $K$ be an oriented knot in the 3-sphere. Consider the Heegaard diagrams $(\Sigma_1,\alphas_1,\betas_1,\ws_1,\zs_1)$ and $(\Sigma_2,\alphas_2,\betas_2,\ws_2,\zs_2)$ for $(S^3,K)$ with $\abs{\ws_1}=\abs{\zs_1}=k_1$ and $\abs{\ws_2}=\abs{\zs_2}=k_2$. Assuming $k_1\geq k_2$ both complexes ${CFK}^-(\Sigma_1,\alphas_1,\betas_1,\ws_1,\zs_1)$ and ${CFK}^-(\Sigma_2,\alphas_2,\betas_2,\ws_2,\zs_2)$ can be thought of as filtered chain complexes over $\mathbb{Z}_2[U_1,\dots,U_{k_1}]$ where $U_{k_2},\dots,U_{k_1}$ act identically on the latter complex. With this setup the two filtered chain complexes are filtered chain-homotopy equivalent.
Similar statement holds for the $\widehat{CFK}$-theory, moreover the chain homotopy equivalences form a commutative diagram with the factorization map. \qed
\end{theorem}
As it is easier to work with, we usually consider the homology of the associated graded object, denoted by $\mathrm{HFK}^-$. That is, the homology of the complex $\left({CFK}^-(\Sigma,\alphas,\betas,\ws,\zs),\partial_0^-\right)$, where
\[
\partial^-_0\mathbf x=
\sum_{\mathbf{y}\in \Ta\cap\Tb}
\sum_{\begin{subarray}{l}
\phi \in\pi_2(\mathbf x,\mathbf{y}) \\
n_{z_1}(\phi)+\cdots+n_{z_k}(\phi)=0\\
\mu(\phi)=1
\end{subarray}}
\left|\frac{\mathcal{M}(\phi)}{\R}\right|U_1^{n_{w_1}(\phi)}\cdots U_k^{n_{w_k}(\phi)}\mathbf{y}.
\]
\begin{theorem}\cite{AAA} Let $K$ be an oriented knot in the 3-sphere $S^3$. Consider the Heegaard diagrams $(\Sigma_1,\alphas_1,\betas_1,\ws_1,\zs_1)$ and $(\Sigma_2,\alphas_2,\betas_2,\ws_2,\zs_2)$ for $(S^3,K)$. Then the $U_i$'s act identically on the homologies thus we can think of $\mathrm{HFK}^-(\Sigma_1,\alphas_1,\betas_1,\ws_1,\zs_1)$ and $\mathrm{HFK}^-(\Sigma_2,\alphas_2,\betas_2,\ws_2,\zs_2)$ as modules over $\mathbb{Z}_2[U]$. And in this sense they are isomorphic.
Similar statement holds for the ${\widehat{\mathrm{HFK}}}$-theory, moreover the isomorphisms form a commutative diagram with the factorization map. \qed
\end{theorem}
Knot Floer homology satisfies a K\"unneth principle for connected sums:
\begin{theorem}\cite{AAA}\label{thm:connsum} Let $K_1$ and $K_2$ be oriented knots in $S^3$ described by the Heegaard diagrams $(\Sigma_1,\alphas_1,\betas_1,\ws_1,\zs_1)$ and $(\Sigma_2,\alphas_2,\betas_2,\ws_2,\zs_2)$. Let $w_1^1\in \ws_1$ and $z_1^2\in \zs_2$. Then
\begin{enumerate}
\item[(1)] $\left(\Sigma_1\#\Sigma_2,\alphas_1\cup\alphas_2,\betas_1\cup\betas_2,(\ws_1-w_1^1)\cup\ws_2,\zs_1\cup(\zs_2-z_1^2)\right)$ is a Heegaard diagram for $K_1\# K_2$. Here the connected sum $\Sigma_1\#\Sigma_2$ is taken in the regions containing $w_1^1\in\Sigma_1$ and $z_1^2\in \Sigma_2$;
\end{enumerate}
\noindent Let $\abs{\ws_1}=\abs{\zs_1}=k_1$ and $\abs{\ws_2}=\abs{\zs_2}=k_2$. Both complexes ${CFK}^-(\Sigma_1,\alphas_1,\betas_1,\ws_1,\zs_1)$ and ${CFK}^-(\Sigma_2,\alphas_2,\betas_2,\ws_2,\zs_2)$ are $\mathbb{Z}_2[U_1,\dots,U_{k_1},V_1,\dots,V_{k_2}]$-modules with $U_1,\dots,U_{k_1}$ acting trivially on the latter and $V_1,\dots,V_{k_2}$ acting trivially on the former. Then
\begin{enumerate}
\item[(2)] ${CFK}^-\left(\Sigma_1,\alphas_1,\betas_1,\ws_1,\zs_1\right)\otimes_{U_1=V_1}{CFK}^-\left(\Sigma_2,\alphas_2,\betas_2,\ws_2,\zs_2\right)$ is filtered chain homotopy equivalent to ${CFK}^-\left(\Sigma_1\#\Sigma_2,\alphas_1\cup\alphas_2,\betas_1\cup\betas_2,(\ws_1-w_1^1)\cup\ws_2,\zs_1\cup(\zs_2-z_1^2)\right)$;
\item[(3)] $\mathrm{HFK}^-(S^3,K_1\# K_2)$ is isomorphic to $\mathrm{HFK}^-(S^3,K_1)\otimes\mathrm{HFK}^-(S^3,K_2)$ and this isomorphism can be given by $\mathbf x_1 \otimes \mathbf x_2\mapsto( \mathbf x_1,\mathbf x_2) $ on the generators.
\end{enumerate}
Here, all tensor products are taken over $\mathbb{Z}_2[U_1,\dots,U_{k_1},V_1,\dots,V_{k_2}]$.
Similar statement holds for the $\widehat{CFK}$-theory, moreover the chain homotopy equivalences form a commutative diagram with the factorization map. \qed
\end{theorem}
\subsection{Grid diagrams}
As it was observed \cite{MOSzT, MOS} knot Floer homology admits a completely combinatorial description via grid diagrams.
A \emph{grid diagram} $G$ is a $k$ by $k$ square grid placed on the plane with some of its cells decorated with an $X$ or an $O$ and containing exactly one $X$ and one $O$ in each of its rows and columns. Such a diagram naturally defines an oriented link projection by connecting the $O$'s to the $X$'s in each row and the $X$'s to the $O$'s in the columns and letting the vertical line overpass at the intersection points. For simplicity we will assume that the corresponding link is a knot $K$. There are certain moves of the grid diagram that do not change the (topological) knot type \cite{OSzT}. These are
\emph{cyclic permutation} of the rows or columns, \emph{commutation} of two consecutive rows (columns) such that the $X$ and the $O$ from one row (column) does not separate the $X$ and the $O$ from the other row (column) and \emph{(de)stabilization} which is described in the following. A square in the grid containing an $X$ ($O$) can be subdivided into four squares by introducing a new vertical and a new horizontal line dividing the row and the column that contains that square. By replacing the $X$ ($O$) by one $O$ ($X$) and two $X$'s ($O$'s) in the diagonal of the new four squares and placing the two $O$'s ($X$'s) in the subdivided row and column appropriately we get a new grid diagram which is called the stabilization of the original one. The inverse of stabilization is destabilization. There are eight types of (de)stabilization: $O\!:\!SW$, $O\!:\!SE$, $O\!:\!NW$, $O\!:\!NE$, $X\!:\!SW$, $X\!:\!SE$, $X\!:\!NW$ and $X\!:\!NE$, where the first coordinate indicates which symbol we started with and the second shows the placement of the unique new symbol. A stabilization of type $X\!:\!NW$ is depicted on Figure \ref{fig:stab}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{stab}
\caption{Stabilization of type $X\!:\!NW$}
\label{fig:stab}
\end{figure}
Placing the grid on a torus by identifying the opposite edges of the square grid we obtain a Heegaard diagram with multiple basepoints for $(S^3,K)$. Here the $\ws$'s correspond to the $O$'s and the $\zs$'s to the $X$'s. As each region of this Heegaard diagram is a square, it is ``nice'' in the sense defined in \cite{SW}. Thus boundary maps can be given by rectangles. This observation led to a completely combinatorial description of knot Floer homology \cite{MOSzT,MOS} in the three-sphere. This can be set up without referring to the original holomorphic theory \cite{MOS}.
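As a side remark, the combinatorial structure just described is simple enough to encode directly. The following sketch (our conventions, not taken from \cite{MOSzT,MOS}) records a $k$ by $k$ grid by two permutations, checks the defining conditions, and lists the corner coordinates that will later single out the generator $\mathbf{x}_+$; the labelling of the grid lines is a hypothetical choice made here for illustration.

```python
# A grid diagram is encoded by two permutations X, O of 0..k-1:
# X[i] (resp. O[i]) is the column of the X (resp. O) in row i.
# "One X and one O per row" is built into the encoding; "one per
# column" means X and O are permutations; X and O must never share a cell.

def is_grid_diagram(X, O):
    """Check that X and O encode a valid k-by-k grid diagram."""
    k = len(X)
    if len(O) != k:
        return False
    if sorted(X) != list(range(k)) or sorted(O) != list(range(k)):
        return False  # some column has zero or two symbols
    return all(X[i] != O[i] for i in range(k))  # no shared cell

def x_plus_corners(X):
    """Upper right corners of the X-cells, as (row line, column line)
    pairs taken mod k, since opposite grid edges are identified on the torus."""
    k = len(X)
    return [((i + 1) % k, (X[i] + 1) % k) for i in range(k)]
```

For instance, `X = [1, 2, 0]`, `O = [0, 1, 2]` passes the check, while placing an `X` and an `O` in the same cell fails it.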
\subsection{Legendrian and transverse invariants on grid diagrams}\label{subsec:leginv}
A grid diagram $G$ does not only describe a knot projection but also a front projection of a Legendrian realization of its mirror $m(K)$, as follows. Rotate the grid diagram by $45^{\circ}\!$ clockwise, reverse the over- and under crossings and turn the corners into cusps or smooth them as appropriate. Legendrian Reidemeister moves correspond to certain grid moves, giving the following result:
\begin{proposition}\cite{OSzT}\label{prop:leggrid} Two grid diagrams represent the same Legendrian knot if and only if they can be connected by a sequence of cyclic permutation, commutation, and (de)stabilization of types $X\!:\!NW$, $X\!:\!SE$, $O\!:\!NW$ and $O\!:\!SE$. \qed
\end{proposition}
Moreover, stabilization of type $X\!:\!NE$ or $O\!:\!SW$ of the grid diagram corresponds to negative stabilization of the knot, thus
\begin{proposition}\cite{OSzT}
Two grid diagrams represent the same transverse knot if and only if they can be connected by a sequence of cyclic permutation, commutation, and (de)stabilization of types $X\!:\!NW$, $X\!:\!SE$, $X\!:\!NE$, $O\!:\!NW$, $O\!:\!SE$ and $O\!:\!SW$. \qed
\end{proposition}
Consider a grid diagram $G$ for a Legendrian knot $L$ of knot type $K$, and
pick the upper right corner of every cell containing an $X$. This gives a generator of ${CFK}^-(S^3,m(K))$ denoted by $\mathbf{x}_+(G)$. Here $m(K)$ denotes the mirror of $K$. Since there is no positive rectangle starting at $\mathbf{x}_+(G)$, it is
a cycle defining an element $\lambda_+(G)$ in $\mathrm{HFK}^-(S^3,m(K))$. Similarly one can define $\mathbf{x}_-(G)$ and $\lambda_-(G)$ by taking the lower left corners of the cells containing $X$'s. These elements are proved to be independent of the grid moves that preserve the Legendrian knot type, giving an invariant of the Legendrian knot $L$:
\begin{theorem}\cite{OSzT} Consider two grid diagrams $G_1$ and $G_2$ defining the same oriented Legendrian knot. Then there is a quasi-isomorphism of the graded chain complexes ${CFK}^-$ taking $\mathbf{x}_+(G_1)$ to $\mathbf{x}_+(G_2)$ and $\mathbf{x}_-(G_1)$ to $\mathbf{x}_-(G_2)$. \qed
\end{theorem}
One can understand their transformation under positive and negative stabilization:
\begin{theorem}\cite{OSzT} Let $L$ be an oriented Legendrian knot, and denote by $L_+$ its positive and by $L_-$ its negative stabilization. Then
\begin{enumerate}
\item There is a quasi-isomorphism of the corresponding graded complexes taking $\lambda_+(L)$ to $\lambda_+(L_+)$ and $U\lambda_-(L)$ to $\lambda_-(L_+)$;
\item There is a quasi-isomorphism of the corresponding graded complexes taking $U\lambda_+(L)$ to $\lambda_+(L_-)$ and $\lambda_-(L)$ to $\lambda_-(L_-)$.
\end{enumerate} \qed
\end{theorem}
We know \cite{E} that the Legendrian knots with transversely isotopic positive push offs are exactly the ones obtained from $L$ by a sequence of negative (de)stabilizations.
So there is a well defined invariant for transverse knots: if $L$ is a Legendrian approximation of $T$ then define $\theta(T)=\lambda_+(L)$.
\begin{theorem}\cite{OSzT}
For any two grid diagrams $G_1$ and $G_2$ of Legendrian approximations of the transverse knot $T$ there is a quasi-isomorphism of the corresponding graded chain complexes taking $\theta(G_1)$ to $\theta(G_2)$. \qed
\end{theorem}
\section{Proof of Theorem \ref{thm:connectedsum}}\label{sec:proofmain}
The Legendrian invariant has two appearances, depending on which version of Floer homology we consider it in.
The one introduced in subsection \ref{subsec:leginv} is in the combinatorial Heegaard Floer homology. Once the grid is placed on the torus we get a Heegaard diagram, and thus there is a natural identification of the combinatorial Heegaard Floer complex with the holomorphic Heegaard Floer complex, and under this identification the previously defined invariant has a counterpart in the original, holomorphic Heegaard Floer homology. We will use the same notation for both. We introduce yet another invariant for Legendrian knots:
\subsection{Legendrian invariant on sphere Heegaard diagrams}
A $k$ by $k$ grid diagram $G$ of a Legendrian knot $L$ of topological type $K$ can also be placed on the two sphere in the following way.
Let $S^2=\{(x,y,z)\in\R^3:\; \vert(x,y,z)\vert=1 \}$ and define the circles $\widetilde{\alphas}=\{\widetilde{\alpha}_i\}_{i=1}^{k-1}$ as the intersection of $S^2$ with the planes $A_i=\{(x,y,z)\in\R^3:\; z=\frac{i}{k}-\frac{1}{2}\}$ ($i=1,\dots,k-1$), similarly $\widetilde{\betas}=\{\widetilde{\beta}_i\}_{i=1}^{k-1}$ as the intersection of $S^2$ with the planes $B_i=\{(x,y,z)\in\R^3:\; x=\frac{i}{k}-\frac{1}{2}
\}$ ($i=1,\dots,k-1$). Call $F=\{(x,y,z)\in\R^3:\; \vert(x,y,z)\vert=1, y\ge 0 \}$ the front hemisphere, and $R=\{(x,y,z)\in\R^3:\; \vert(x,y,z)\vert=1, y\le 0 \}$ the rear hemisphere. Then there is a grid on both the front and on the rear hemisphere. We place the $X$'s and the $O$'s on the front hemisphere in the way they were placed on the original grid $G$.
After identifying the $O$'s with $\widetilde{\ws}=\{\widetilde{w}_i\}_{i=1}^{k}$ and the $X$'s with $\widetilde{\zs}=\{\widetilde{z}_i\}_{i=1}^{k}$ this defines a Heegaard diagram with multiple basepoints $(S^2,\widetilde{\alphas},\widetilde{\betas},\widetilde{\ws},\widetilde{\zs})$ for $(S^3,K)$.
A spherical grid diagram for the trefoil knot can be seen on Figure \ref{fig:trefoil}.
\begin{figure}
\centering
$\begin{array}{c@{\hspace{2cm}}c}
\includegraphics[scale=0.2]{trefoilfront}
&
\includegraphics[scale=0.2]{trefoilrear}\\[0.2cm]
\mbox{front hemisphere} & \mbox{rear hemisphere}
\end{array}$
\caption{Spherical grid diagram for the trefoil knot}
\label{fig:trefoil}
\end{figure}
Let $L$ be a Legendrian knot in $S^3$. To define the ``spherical'' Legendrian invariant $\lambda^S_+ (L)$ we will use grid diagrams that have an $X$ in their upper right corner. This can always be arranged by cyclic permutation, but in the following we will need a slightly stronger property:
\begin{lemma}\label{lem:ox} For any Legendrian knot there exists a grid diagram representing it which contains an $X$ in its upper right corner and an $O$ in its lower left corner.
\end{lemma}
\begin{proof}
Consider any grid diagram describing the Legendrian knot $L$.
As it is illustrated on Figure \ref{fig:ox} we can obtain a suitable diagram as follows.
First do a stabilization of type $X\!:\!NE$, and then do a stabilization of type $O\!:\!NE$ on the newly obtained $O$. Lastly, by cyclic permutation we can place the lower $X$ introduced in the first stabilization in the upper right corner of the diagram. Notice that the $O$ on the upper right of this $X$ will then automatically be placed in the lower left corner. According to Proposition \ref{prop:leggrid} the Legendrian type of the knot is fixed under these moves. Thus the statement follows.
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{ox}
\caption{Grid moves}
\label{fig:ox}
\end{figure}
\end{proof}
Suppose that $G$ is a grid diagram having an $X$ in its upper right corner. Form a spherical grid diagram as above;
then $\mathbf{x}_+^S(L)$ is the generator of ${CFK}^-(S^2,\widetilde{\alphas},\widetilde{\betas},\widetilde{\ws},\widetilde{\zs})$ consisting of those intersection points on the front hemisphere that occupy the upper right corner of each region marked with an $X$. Note that the region of the $X$ in the upper right corner of the grid has no such corner. On Figure \ref{fig:trefoil} the element $\mathbf{x}_+^S$ is indicated for the trefoil knot. Similarly to the toroidal case:
\begin{lemma}\label{lem:xps}The element $\mathbf{x}_+^S(L)$ is a cycle in $(S^2,\widetilde{\alphas},\widetilde{\betas},\widetilde{\ws},\widetilde{\zs})$.
\end{lemma}
\begin{proof}
We will show that for any $\mathbf y$ there is no positive disc $\psi \in \pi_2(\mathbf{x}_+^S,\mathbf y)$ with $\mu(\psi)=1$. As the diagram $(S^2,\widetilde{\alphas},\widetilde{\betas},\widetilde{\ws},\widetilde{\zs})$ is ``nice'' in the sense of \cite{SW}, the elements $\mathbf{x}_+^S$ and $\mathbf y$ differ in exactly two coordinates and $\mathcal{D}(\psi)$ is either a rectangle or the union of two bigons. In any case $\mathcal{D}(\psi)$ contains an $X$, which means it is not counted in the boundary map.
\end{proof}
The homology class of $\mathbf{x}_+^S$, denoted by $\lambda^S_+(G)$, turns out to be an invariant of $L$ (i.e.\ it is independent of the choice of the grid diagram, and the way it is placed on the sphere). This can be proved directly through grid moves but instead we show:
\begin{theorem}\label{thm:sphericalinv} Consider a grid diagram for the Legendrian knot $L$ in $S^3$ having an $X$ in its upper right corner. Then there is a filtered quasi-isomorphism $\psi: {CFK}^-(T^2,\alphas,\betas,\ws,\zs) \to {CFK}^-(S^2,\widetilde{\alphas},\widetilde{\betas},\widetilde{\ws},\widetilde{\zs})$ of the corresponding toroidal and spherical Heegaard diagrams which maps $\mathbf{x}_+(L)$ to $\mathbf{x}_+^S(L)$.
\end{theorem}
In the proof we will need the notion of Heegaard triples, which we will briefly describe for completeness. Consider a pointed Heegaard triple $(\Sigma,\alphas,\betas,\gammas,\zs)$; then the diagrams $(\Sigma,\alphas,\betas,\zs)$, $(\Sigma,\betas,\gammas,\zs)$ and $(\Sigma,\alphas,\gammas,\zs)$ define the
three-manifolds $Y_{\alpha\beta}$, $Y_{\beta\gamma}$ and $Y_{\alpha\gamma}$, respectively. There is a map from ${CF}^-(\Sigma,\alphas,\betas,\zs)\otimes{CF}^-(\Sigma,\betas,\gammas,\zs)$ to ${CF}^-(\Sigma,\alphas,\gammas,\zs)$ given on
a generator $\mathbf x\otimes\mathbf y$ by
\[
\sum_{\mathbf{v}\in \Ta\cap\Tg}
\sum_{\begin{subarray}{l}
u\in\pi_2(\mathbf x,\mathbf y,\mathbf{v}) \\
n_\zs(u)=0\\
\mu(u)=0
\end{subarray}}
\abs{\mathcal{M}(u)}\mathbf{v}
\]
where $\pi_2(\mathbf x,\mathbf y,\mathbf{v})$ is the set of homotopy classes of Whitney triangles connecting $\mathbf x$ and $\mathbf y$ to $\mathbf{v}$; that is, maps from a triangle to $\mathrm{Sym}^{g+k-1}(\Sigma)$ sending the edges of the triangle to $\Ta$, $\Tb$ and $\Tg$. This gives a well-defined map on the homologies ${HF}^-$. Also, the same definition gives a map on the filtered chain complexes ${CFK}^-$. When $\gammas$ can be obtained from $\betas$ by Heegaard moves, the manifold $Y_{\beta\gamma}$ is $\#^{g}S^1\times S^2$ and ${HF}^-(\#^{g}S^1\times S^2)$ is a free $\mathbb{Z}_2[U]$-module generated by $2^g$ elements. Denote its top generator by $\Theta^-_{\beta\gamma}$. The map ${CFK}^-(Y_{\alpha\beta})\to {CFK}^-(Y_{\alpha\gamma})$ sending $\mathbf x$ to the image of $\mathbf x\otimes\Theta^-_{\beta\gamma}$ defines a quasi-isomorphism of the chain complexes.
\begin{proof}[Proof of theorem \ref{thm:sphericalinv}]
{From} a toroidal grid diagram one can obtain a spherical one by first sliding every $\beta$-curve over $\beta_1$ and every $\alpha$-curve
over $\alpha_1$, and then destabilizing the diagram at $\alpha_1$ and $\beta_1$. Thus we will construct the quasi-isomorphism as the composition $\psi=\psi_{\textrm{destab}}\circ\psi_\alpha\circ\psi_\beta$, where
\[
\psi_{\beta}=
\sum_{\mathbf{y}\in \Ta\cap\Tbp}
\sum_{\begin{subarray}{l}
u\in\pi_2(\mathbf{x}_+(L),\Theta^-,\mathbf{y}) \\
n_\zs(u)=0\\
\mu(u)=0
\end{subarray}}
\abs{\mathcal{M}(u)}\mathbf{y}\]
with $\Theta^-\in\Tb\cap\Tbp$ being the top generator of ${HF}^-(T^2,\betas,\betasp,\zs)={HF}^-(S^1\times S^2)$, and $\psi_\alpha$ is defined similarly.
Note that in the case of the slidings there are also ``closest point'' maps, denoted by $^{\prime}$ for the sliding of the $\beta$-curves and by $^{\prime\prime}$ for the sliding of the $\alpha$-curves. We claim:
\begin{lemma}\label{lem:beta} $\psi_{\beta}(\mathbf{x}_+)=\mathbf{x}_+^{\prime}$.
\end{lemma}
\begin{lemma}\label{lem:alpha} $\psi_{\alpha}(\mathbf{x}_+^{\prime})=(\mathbf{x}_+^{\prime})^{\prime\prime}$.
\end{lemma}
Here we just include the proof of Lemma \ref{lem:beta}; Lemma \ref{lem:alpha} follows similarly.
\begin{proof}[Proof of Lemma \ref{lem:beta}]
Figure \ref{fig:slidebeta} shows a weakly admissible diagram for the slides of the $\beta$-curves.
\begin{figure}
\centering
\includegraphics[scale=0.4]{slidebetatriangle}
\caption{Handleslides}
\label{fig:slidebeta}
\end{figure}
\begin{claim}
The Heegaard triple $(T^2,\alphas,\betas,\betasp,\zs)$ of Figure \ref{fig:slidebeta} is weakly admissible.
\end{claim}
\begin{proof}
Let $\mathcal{P}_{\beta_i \beta^{\prime}_i\beta_1}$ ($i>1$) denote the domain bounded by $\beta_i$, $\beta^{\prime}_i$ and $\beta_1$ and containing no basepoint, similarly $\mathcal{P}_{\beta_1\beta^{\prime}_1}$ denotes the domain bounded by $\beta_1$ and $\beta^{\prime}_1$ and containing no basepoint.
These domains form a basis for the periodic domains of $(T^2,\betas,\betasp,\zs)$, and as all of them have both positive and negative coefficients, we see that $(T^2,\betas,\betasp,\zs)$ is weakly admissible.
Consider a triply periodic domain $\mathcal{P}$. If there is no $\alpha$-curve in its boundary, then it is a periodic domain of $(T^2,\betas,\betasp,\zs)$, and by the previous observation we are done. So $\mathcal{P}$ must contain an $\alpha$-curve in its boundary. To ensure it does not contain an $X$, there must be some vertical curve, either from $\beta$ or $\beta^{\prime}$, in the boundary. At the intersection point of the horizontal and vertical lines the domain must change sign.
\end{proof}
The grey area in Figure \ref{fig:slidebeta} indicates a domain of a canonical triangle $u_0$ in $\pi_2(\mathbf{x}_+(L),\Theta^-,$ $\mathbf{x}_+^{\prime}(L))$; by the
Riemann mapping theorem there is exactly one map with that domain. We claim that this is the only map that is encountered in $\psi_{\beta}$.
For this let $u\in \pi_2(\mathbf{x}_+(L),\Theta^-,\mathbf{y})$ be a holomorphic triangle with $\mu(u)=0$ and $n_{\zs}(u)=0$.
\begin{claim} There exists a periodic domain $\mathcal{P}_{\beta\beta^{\prime}}$ of $(S^2,\betas,\betasp,\zs)$ such that $\partial(\D(u)-\D(u_0)-\mathcal{P}_{\beta\beta^{\prime}})\vert_{\betas}=\emptyset$. Thus $(\D(u)-\D(u_0)-\mathcal{P}_{\beta\beta^{\prime}})\vert_{\betas}$ is a domain in $(T^2,\alphas,\betasp,\zs)$, representing an element $v$ in $\pi_2(\mathbf{x}_+^{\prime},\mathbf{y})$ with Maslov index $\mu(v)=\mu(u)-\mu(u_0)-\mu(\mathcal{P}_{\beta\beta^{\prime}})=0$.
\end{claim}
\begin{proof}
As $n_{\zs}(u)=0$ and
$\mathbf{x}_+^S(L)$ is in the upper right corner of the $X$'s, the domain of any triangle must contain $\mathcal{D}(u_0)$. Consequently
$\partial \mathcal{D}(u)\vert_{\beta_i}$ is an arc containing the small part $\overline{\mathcal{D}(u_0)}\cap \beta_i$ and some copies of the whole $\beta_i$. By subtracting $\mathcal{D}(u_0)$ and sufficiently many copies of the periodic domains $\mathcal{P}_{\beta_i \beta^{\prime}_i\beta_1}$
we obtain a domain with no boundary component on $\beta_i$. Doing the same process for every $i>1$ and then by subtracting some $\mathcal{P}_{\beta_1\beta_1^{\prime}}$ we can eliminate every $\beta_i$ from the boundary.
\end{proof}
\begin{claim} There is no positive disc in $\pi_2(\mathbf{x}_+^{\prime},\mathbf{y})$.
\end{claim}
\begin{proof} This follows similarly to Lemma \ref{lem:xps}.
\end{proof}
\begin{claim}
None of the regions of $(T^2,\alphas,\betasp,\zs)$ can be covered completely with the periodic domains of $(S^2,\betas,\betasp,\zs)$ and $\D(u_0)$.
\end{claim}
\begin{proof} The periodic domains are the linear combinations of $\{\mathcal{P}_{\beta_i \beta^{\prime}_i\beta_1}\}_{i=2}^{k}\cup\{\mathcal{P}_{\beta_1\beta^{\prime}_1}\}$, and those cannot cover the regions of $(T^2,\alphas,\betasp,\zs)$.
\end{proof}
Putting these together we have that $\D(u)-\D(u_0)-\mathcal{P}_{\beta\beta^{\prime}}$ has a negative coefficient, which gives a negative coefficient in $\D(u)$ too, contradicting the fact that $u$ is holomorphic. This proves Lemma \ref{lem:beta}.
\end{proof}
Note that by assuming that there is an $X$ in the upper right corner of the grid diagram we ensured that the intersection point $\mathbf{x}_+$ contains $\alpha_1\cap\beta_1$, and that point remained there. Thus by destabilizing at $\alpha_1$ and $\beta_1$ we obtain Theorem \ref{thm:sphericalinv}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:connectedsum}]
Consider two Legendrian knots $L_1$ and $L_2$ of topological type $K_1$ and $K_2$.
Note that once we obtain the result for $\lambda^S_+$, we are done. Indeed, passing from the toroidal diagram to the spherical one, the invariants $\lambda_+(L_1)$ and $\lambda_+(L_2)$ are mapped to $\lambda^S_+(L_1)$ and $\lambda^S_+(L_2)$, respectively. Knowing that $\lambda^S_+(L_1)\otimes\lambda^S_+(L_2)$ is mapped to $\lambda^S_+(L_1\#L_2)$ and passing back to the toroidal diagram, there is an isomorphism that maps this to $\lambda_+(L_1\#L_2)$. So the composition of the above maps proves Theorem \ref{thm:connectedsum}.
Consider the grid diagrams $G_1$ and $G_2$ corresponding to $L_1$ and $L_2$ satisfying the conditions of Lemma \ref{lem:ox}. These define the spherical grid diagrams
$(S^2,\alphas_1,\betas_1,\ws_1,\zs_1)$ and $(S^2,\alphas_2,\betas_2,\ws_2,\zs_2)$. Let $z\in\zs_1$, $w\in\ws_2$ be the basepoints corresponding to the $X$ in the upper right corner of the first diagram and the $O$ in the lower left corner of the second diagram. Form the connected sum of
$(S^2,\alphas_1,\betas_1,\ws_1,\zs_1)$ and $(S^2,\alphas_2,\betas_2,\ws_2,\zs_2)$ at the regions containing $z$ and $w$ to obtain a Heegaard diagram with multiple basepoints $(S^2,\alphas_1\cup\alphas_2,\betas_1\cup\betas_2,\ws_1\cup(\ws_2-\{w\}),(\zs_1-\{z\})\cup\zs_2)$ of $(S^3,L_1\#L_2)$.
By Theorem \ref{thm:connsum} the map
\begin{eqnarray*}
\psi_{\textrm{connsum}}:&&\mathrm{HFK}^-(S^2,\alphas_1,\betas_1,\ws_1,\zs_1) \otimes \mathrm{HFK}^-(S^2,\alphas_2,\betas_2,\ws_2,\zs_2) \to \\&&\mathrm{HFK}^-(S^2,\alphas_1\cup\alphas_2,\betas_1\cup\betas_2,\ws_1\cup(\ws_2-\{w\}),(\zs_1-\{z\})\cup\zs_2)
\end{eqnarray*}
defined on the generators as $\mathbf{x}_1\otimes\mathbf{x}_2\mapsto (\mathbf{x}_1,\mathbf{x}_2)$ is an isomorphism. Thus the image of $\lambda^S_+(L_1)\otimes\lambda^S_+(L_2)$ is
$(\lambda^S_+(L_1),\lambda^S_+(L_2))$.
\begin{figure}
\centering
$\begin{array}{c@{\hspace{2cm}}c}
\includegraphics[scale=0.2]{connectedsum}
&
\includegraphics[scale=0.2]{connectedsumrear}\\[0.2cm]
\mbox{front hemisphere} & \mbox{rear hemisphere}
\end{array}$
\caption{Connected Sum}
\label{fig:connectedsum}
\end{figure}
Figure \ref{fig:connectedsum} shows the resulting Heegaard diagram for the connected sum of a trefoil and a figure-eight knot.
From this diagram of the connected sum one can easily obtain a spherical grid diagram by isotoping every curve in $\alphas_1$ over the curves in $\betas_2$ and every curve in $\alphas_2$ over the curves in $\betas_1$, as shown on Figure \ref{fig:isotopy}. Indeed, the resulting diagram is a grid obtained by patching $G_1$ and $G_2$ together at the upper right $X$ of $G_1$ and the lower left $O$ of $G_2$ and deleting the $X$ and $O$ in question. Now by connecting the $X$ in the lower row of $G_2$ to the $O$ in the upper row of $G_1$ and proceeding similarly in the columns, we get that the grid corresponds to the front projection of $L_1\#L_2$. Again, a quasi-isomorphism $\psi_{\textrm{isot}}$ is given with the help of holomorphic triangles.
A similar argument as in the proof of Lemma \ref{lem:beta} shows that
under the isomorphism induced by $\psi_{\textrm{isot}}$ on the homologies the element $(\lambda^S_+(L_1),\lambda^S_+(L_2))$ is mapped to $\lambda^S_+(L_1\#L_2)$.
\begin{figure}
\centering
$\begin{array}{c@{\hspace{2cm}}c}
\includegraphics[scale=0.2]{isotopy}
&
\includegraphics[scale=0.2]{isotopyrear}\\[0.2cm]
\mbox{front hemisphere} & \mbox{rear hemisphere}
\end{array}$
\caption{Isotoping to obtain a grid diagram}
\label{fig:isotopy}
\end{figure}
\end{proof}
\section{Proof of Theorem \ref{thm:nontransversesimple}}\label{sec:prooftransversenonsimple}
One way of distinguishing transverse knots in a given knot type is to prove that their $\widehat{\theta}$-invariants are different. This however cannot be done straightforwardly as the vector space ${\widehat{\mathrm{HFK}}}$ does not canonically correspond to a knot.
So in order to prove that two elements are different, we have to show that there is no isomorphism of ${\widehat{\mathrm{HFK}}}$ carrying one to the other. More explicitly, it is enough to see that there is no such isomorphism induced by a sequence of Heegaard moves. For instance, if we show that one element is $0$ while the other is not, we can be certain that they are different. This is used in the proof of Theorem \ref{thm:nontransversesimple}.
\begin{proof}[Proof of Theorem \ref{thm:nontransversesimple}]
Ng, Ozsv\'ath and Thurston \cite{NgOT} showed that the knot type $10_{132}$ contains transversely non isotopic representatives $L_1$ and $L_2$ with equal self-linking number.
They proved that $\widehat{\theta}(L_1)$ is zero in ${\widehat{\mathrm{HFK}}}(S^3,m(10_{132}))$ while $\widehat{\theta}(L_2)$ is not.
In the following we will prove that the knot types $\#^n 10_{132}$ are transversely non simple. By the uniqueness of the prime decomposition of knots \cite{Ad} these are indeed different knot types. Thus this list provides infinitely many examples of transversely non simple knots.
The two transversely non isotopic representatives are $\#^nL_2$ and $L_1\# (\#^{n-1}L_2)$. For the self-linking numbers we have $\textrm{sl}(\#^nL_2)=n\,\textrm{sl}(L_2)+(n-1)=\textrm{sl}(L_1)+(n-1)\,\textrm{sl}(L_2)+(n-1)=\textrm{sl}(L_1\# (\#^{n-1}L_2))$.
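Explicitly, since the self-linking number satisfies $\textrm{sl}(K_1\#K_2)=\textrm{sl}(K_1)+\textrm{sl}(K_2)+1$ under connected sum, and $\textrm{sl}(L_1)=\textrm{sl}(L_2)$ by assumption, the computation can be spelled out as
\begin{align*}
\textrm{sl}(\#^nL_2) &= n\,\textrm{sl}(L_2)+(n-1) \\
 &= \textrm{sl}(L_2)+(n-1)\,\textrm{sl}(L_2)+(n-1) \\
 &= \textrm{sl}(L_1)+(n-1)\,\textrm{sl}(L_2)+(n-1) \\
 &= \textrm{sl}(L_1\#(\#^{n-1}L_2)).
\end{align*}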
Then we use Corollary \ref{cor:connectedsumhat} to distinguish the transverse isotopy types of $\#^nL_2$ and $L_1\# (\#^{n-1}L_2)$. There is an isomorphism from ${\widehat{\mathrm{HFK}}}(S^3,m(10_{132}))\otimes{\widehat{\mathrm{HFK}}}(S^3,\#^{n-1}m(10_{132}))$ to ${\widehat{\mathrm{HFK}}}(S^3,\#^nm(10_{132}))$ mapping $\widehat{\theta}(L_1)\otimes \widehat{\theta}(\#^{n-1}L_2)=0$ to $\widehat{\theta}(L_1\#(\#^{n-1}L_2))$, thus the latter is zero. Similarly there is an isomorphism mapping $\widehat{\theta}(L_2)\otimes\widehat{\theta}(\#^{n-1}L_2)\neq 0$ to $\widehat{\theta}(L_2\#(\#^{n-1}L_2))$, thus by induction on $n$ it does not vanish.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
\emph{Transactional memory}~\cite{herlihy+93} (TM) is a concurrent programming abstraction that promises scalable performance without programmer pain.
The programmer gathers instructions into \emph{transactions}, and the system guarantees that each appears to be performed entirely and instantaneously, or not at all.
To achieve this, a typical TM system tracks each transaction's memory accesses, and if it detects a conflict (i.e., another thread concurrently accessing the same location, at least one access being a write), resolves it by aborting the transaction and rolling back its changes.
\subsection{Motivating Example: Lock Elision in ARMv8}
\label{sec:intro:hlebug}
One important use-case of TM is \emph{lock elision}~\cite{rajwar+01, dice+09}, in which the lock/unlock methods of a mutex are skipped and the critical region (CR) is instead executed speculatively inside a transaction.
If two CRs do not conflict, this method allows them to be executed simultaneously, rather than serially.
If a conflict is detected, the transaction is rolled back and the system resorts to acquiring the mutex as usual.
Lock elision may not apply to all CRs, so an implementation must ensure mutual exclusion between transactional and non-transactional CRs.
This is typically done by starting each transactional CR with a read of the lock variable (and self-aborting if it is taken)~\cite[\S16.2.1]{intel17}.
If the mutex is subsequently acquired by a non-transactional CR then the TM system will detect a conflict on the lock variable and abort the transactional CR.
Thus, reasoning about lock elision requires a concurrency model that accounts for both modes, transactional and non-transactional.
In particular, systems with memory models weaker than \emph{sequential consistency} (SC)~\cite{lamport79} must ensure that the non-transactional lock/unlock methods synchronise sufficiently with transactions to provide mutual exclusion.
In their seminal paper introducing lock elision, \citeauthor{rajwar+01} argued that ``correctness is guaranteed without any dependence on memory ordering''~\cite[\S9]{rajwar+01}.
In fact, by drawing on a decade of weak memory formalisations~\cite{alglave+14, flur+16, pulte+17} and by extending state-of-the-art tools~\cite{wickerson+17, alglave+11a, lustig+17}, we show it is straightforward to contradict this claim \emph{automatically}.
\begin{BoxedExample}[Lock elision is unsound under ARMv8]
\label{ex:hle}
\renewcommand\tabcolsep{0.4mm}
\renewcommand\arraystretch{0.9}
\newcommand\xrightbrace[2][1]{%
\def\mylineheight{0.35}%
\raisebox{2.1mm}{%
\smash{%
\begin{tikzpicture}[baseline=(top)]
\coordinate (top) at (0,#1*\mylineheight-0.06);
\coordinate (bottom) at (0,0);
\draw[colorcomment, pen colour={colorcomment}, decoration={calligraphic brace,amplitude=2.1pt}, decorate, line width=1pt]
(top) to node[auto, inner sep=0] {~~\begin{tabular}{l}#2\end{tabular}} (bottom);
\end{tikzpicture}}}}
\newcommand\lc[1]{\textcolor{colorlock}{#1}}
Consider the program below, in which two threads use CRs to update a shared location $x$.
\vspace*{1.5mm}
\begin{center}
\small
\begin{tabular}{@{}ll@{\hspace{1mm}}||@{\hspace{1mm}}ll@{}}
\hline
\multicolumn{4}{c}{Initially: $"[X0]"=x=0$} \\
\hline
\lc{"lock()"} & & \lc{"lock()"} & \\
"LDR W5,[X0]" & \xrightbrace[3]{$x \leftarrow x+2$}
& "MOV W7,\#1" &
\xrightbrace[2]{$x \leftarrow 1$} \\
"ADD W5,W5,\#2" & & "STR W7,[X0]" & \\
"STR W5,[X0]" & & \lc{"unlock()"} & \\
\lc{"unlock()"} & \\
\hline
\multicolumn{4}{c}{Test: $x=2$} \\
\hline
\end{tabular}
\end{center}
It must not terminate with $x=2$, for this would violate mutual exclusion.
Now, let us instantiate the lock/unlock calls with two possible implementations of those methods.
\begin{center}
\vspace*{-1.5mm}
\small
\begin{tabular}{@{}lll@{\hspace{1mm}}||@{\hspace{1mm}}lll@{}}
\hline
\multicolumn{6}{c}{Initially: $"[X0]"=x=0$, $"[X1]"=m=0$} \\
\hline
\blacknum[2]{1}& \lc{"Loop:"} &
\xrightbrace[6]{atomically \\ update $m$ \\ from 0 \\ to 1} &
\blacknum[7]{3}& \lc{"TXBEGIN"} &
\xrightbrace{begin txn} \\
& \lc{"LDAXR W2,[X1]"}& &
& \lc{"LDR W6,[X1]"} &
\xrightbrace[4]{load $m$ \\ and abort \\ if non-\\ zero} \\
& \lc{"CBNZ W2,Loop"} & &
& \lc{"CBZ W6,L1"} & \\
\blacknum[2]{4}& \lc{"MOV W3,\#1"} & &
& \lc{"TXABORT"} & \\
& \lc{"STXR W4,W3,[X1]"} & &
& \lc{"L1:"} & \\
& \lc{"CBNZ W4,Loop"} & &
& "MOV W7,\#1" &
\xrightbrace[2]{$x \leftarrow 1$} \\
\blacknum{2} & "LDR W5,[X0]" &
\xrightbrace[3]{$x \leftarrow x+2$} &
& "STR W7,[X0]" & \\
\blacknum[2]{5}& "ADD W5,W5,\#2" & &
&\lc{"TXEND"} &
\xrightbrace{end txn} \\
& "STR W5,[X0]" & &
& & \\
&\lc{"STLR WZR,[X1]"} &
\xrightbrace{$m \leftarrow 0$} &
& \\
\hline
\multicolumn{6}{c}{Test: $x=2$} \\
\hline
\end{tabular}
\end{center}
\vspace*{1.5mm}
The left thread executes its CR non-transactionally, using the recommended ARMv8 spinlock~\cite[K9.3]{arm17}, while the right thread uses lock elision (with unofficial but representative TM instructions).
This program \emph{can} terminate with $x=2$, thus witnessing the unsoundness of lock elision, as follows:
{
\renewcommand\labelenumi{\blacknum{\theenumi}}
\begin{enumerate}[leftmargin=*]
\item The left thread reads the lock variable $m$ as $0$ (free).
"LDAXR" indicates an \emph{acquire} load, which means that the read cannot be reordered with any later event in program-order.
\item The left thread reads $x$ as $0$.
This load can execute speculatively because ARMv8 does not require that the earlier store-exclusive ("STXR") completes first~\cite{pulte+17}.
\item The right thread starts a transaction, sees the lock is still free, updates $x$ to $1$, and commits its transaction.
\item The left thread updates $m$ to $1$ (taken).
This is a store-exclusive ("STXR")~\cite{jensen+87}, so it only succeeds if $m$ has not been updated since the last load-exclusive ("LDAXR").
It does succeed, because the right thread only \emph{reads} $m$.
\item Finally, the left thread updates $x$ to $2$, and $m$ to $0$. "STLR" is a \emph{release} store, which means that the update to $m$ cannot be reordered with any earlier event in program-order.
\end{enumerate}
}
\end{BoxedExample
The crux of our counterexample is that a (non-transaction\-al) CR can start executing after the lock has been observed to be free, but before it has actually been taken.
Importantly, this relaxation is safe if all CRs are mutex-protected (i.e., the spinlock \emph{in isolation} is correct), since every lock acquisition involves writing to the lock variable and at most one store-exclusive can succeed.
Rather, the counterexample only arises when this relaxation is \emph{combined} with any reasonable TM extension to ARMv8.
This includes a proposed extension currently being considered within ARM Research.
Furthermore, there appears to be no easy fix.
Re-implemen\-ting the spinlock by appending a "DMB" fence to the "lock()" implementation would prevent the problematic reordering, but would also inhibit compatibility with code that uses the ARM-recommended spinlock, and may decrease performance when lock elision is not in use.
Otherwise, if software portability is essential, transactional CRs could be made to \emph{write} to the lock variable (rather than just read it), but this would induce serialisation, and thus nullify the potential speedup from lock elision.
\subsection{Our Work}
In this paper, we use formalisation to tame the interaction between TM and weak memory.
Specifically, we propose axiomatic models for how transactions behave in x86~\cite{intel17}, Power~\cite{power30}, ARMv8~\cite{arm17}, and C++~\cite{c++tm15}.
As well as the lock elision issue already explained, our formalisations revealed:
\begin{itemize}
\item an ambiguity in the specification of Power TM (\S\ref{sec:x86_power:adding_txns}),
\item a bug in a register-transfer level (RTL) prototype implementation of ARMv8 TM (\S\ref{sec:arm:testing}),
\item a simplification to the C++ TM proposal (\S\ref{sec:cpp:adding_transactions}), and
\item that coalescing transactions is unsound in Power (\S\ref{sec:metatheory:monotonicity}).
\end{itemize}
Although TM is conceptually simple, it is notoriously challenging to implement correctly, as exemplified by Intel repeatedly having to disable TM in its processors due to bugs~\cite{hachman14, skl105}, IBM describing adding TM to Power as ``arguably the single-most invasive change ever made to IBM's RISC architecture''~\cite{adir+14}, and the C++ TM Study Group listing ``conflict with the C++ memory model and atomics'' as one of their hardest challenges~\cite{wong14}.
To cope with the combined complexity of transactions and weak memory that exist in real systems, we build on several recent advances in automated tooling to help develop and validate our models.
In the x86 and Power cases, we use the SAT-based \Memalloy{} tool \cite{wickerson+17}, extended with an exhaustive enumeration mode \`a la \citet{lustig+17}, to automatically synthesise exactly the `minimally forbidden' tests (up to a bounded size) that distinguish our TM models from their respective non-TM baselines.
We then use the \Litmus{} tool~\cite{alglave+11a} to check that these tests are never observed on existing hardware (i.e., that our models are sound).
We also generate a set of `maximally allowed' tests, which we use to assess the completeness of our models (i.e., how many of the behaviours our models allow are empirically observable).
Moreover, we investigate several properties of our models.
For instance, C++ offers `atomic' transactions and `relaxed' transactions; we prove that atomic transactions are strongly isolated, and that race-free programs with no non-SC atomics and no relaxed transactions enjoy `transactional SC'.
Other properties of our models we verify up to a bound using \Memalloy{}; these are that introducing, enlarging, or coalescing transactions introduces no new behaviours, and that C++ transactions compile soundly to x86, Power, and ARMv8.
Finally, we show how \Memalloy{} can be used to check a library
implementation against its specification by encoding it as a program
transformation.
We apply our technique to check that x86 and Power lock elision libraries correctly implement mutual exclusion -- but that this is not so, as we have seen, in ARMv8.
\paragraph{Summary} Our contributions are as follows:
\begin{itemize}
\item a fully-automated toolflow for generating tests from an axiomatic memory model and using them to validate the model's soundness, its completeness, and its metatheoretical properties (\S\ref{sec:methodology});
\item formalisations of TM in the SC (\S\ref{sec:transactions}), x86 (\S\ref{sec:x86_power}), Power (\S\ref{sec:x86_power}), ARMv8 (\S\ref{sec:arm}), and C++ (\S\ref{sec:cpp}) memory models;
\item proofs that the transactional C++ memory model guarantees strong isolation for atomic transactions, and transactional SC for race-free programs with no non-SC atomics or non-atomic transactions (\S\ref{sec:cpp});
\item the automatic, bounded verification of transactional monotonicity and compilation from C++ transactions to hardware (\S\ref{sec:metatheory}); and
\item a technique for validating lock elision against hardware TM models, which is shown to be effective through the discovery of the serious flaw of Example~\ref{ex:hle} (\S\ref{sec:metatheory}).
\end{itemize}
\paragraph{Companion Material} We provide all the models we propose (in the \cat{} format~\cite{alglave+14}), the automatically-generated litmus tests used to validate our models, litmus tests corresponding to all the executions discussed in our paper, and Isabelle proofs of all statements marked with the \isabelleqed{} symbol.
\section{Background: Axiomatic Memory Models}
\label{sec:memory_models}
\newcommand\semi{\mathbin{\hspace{-0.2ex};\hspace{-0.2ex}}}
\newcommand\eqdef{=}
\newcommand\id{\mathit{id}}
\newcommand\domain{\mathsf{domain}}
\newcommand\range{\mathsf{range}}
\newcommand\imm{\mathsf{imm}}
\renewcommand\min{\mathsf{min}}
\newcommand\acyclic{\mathbf{acyclic}}
\newcommand\irreflexive{\mathbf{irreflexive}}
\newcommand\isempty{\mathbf{empty}}
\newcommand\EXT[1]{#1{_{\mathrm{e}}}}
\newcommand\INT[1]{#1{_{\mathrm{i}}}}
\newcommand\LOC[1]{#1{_{\mathrm{loc}}}}
\newcommand\DLOC[1]{#1{_{\neq\mathrm{loc}}}}
\newcommand\Exec{\mathbb{X}}
\newcommand\loc{\mathit{loc}}
\newcommand\sloc{\mathit{sloc}}
\newcommand\rfe{\EXT{\rf\hspace*{-1.4pt}}}
\newcommand\coe{\EXT{\co}}
\newcommand\fre{\EXT{\fr\hspace*{-0.5pt}}}
\newcommand\rfi{\INT{\rf\hspace*{-1.4pt}}}
\newcommand\coi{\INT{\co\hspace{0.5pt}}}
\newcommand\fri{\INT{\fr}}
\newcommand\com{\mathit{com}}
\newcommand\come{\EXT{\com}}
\newcommand\comi{\INT{\com}}
\newcommand\hb{\mathit{hb}}
Here we give the necessary background on the formal framework we use for reasoning about programs, which is standard across several recent works ~\cite{alglave+14, wickerson+17, lustig+17}.
A \emph{memory model} defines how threads interact with shared memory.
An \emph{axiomatic} memory model consists of constraints (i.e., axioms) on \emph{candidate executions}.
An execution is a graph representing a runtime behaviour, whose structure is defined below.
The candidate executions of a program are obtained by assuming a non-deterministic memory system: each load can observe a store from anywhere in the program.
After filtering away the candidates that fail the constraints, we are left with the \emph{consistent} executions; i.e., those that are allowed in the presence of the actual memory system.
\subsection{Executions}
\label{sec:memory_models:executions}
Let $\Exec$ be the set of all executions.
Each execution is a graph whose vertices, $E$, represent runtime memory-related events and whose labelled edges represent various relations between them.
The events are partitioned into $R$, $W$, and $F$, the sets of read, write, and fence events.\footnote{
We encode fences as \emph{events} (rather than edges) because this simplifies execution minimisation (\S\ref{sec:methodology:empirical}).
We then derive architecture-specific fence relations that connect events separated by fence events, which we use in our models and execution graphs.}
Events in an execution are connected by the following relations:
\begin{itemize}
\item $\po$, program order (a.k.a.~sequenced-before);
\item $\addr$/$\ctrl$/$\data$, an address/control/data dependency;
\item $\rmw$, to indicate read-modify-write operations;
\item $\sloc$, between events that access the same location;
\item $\rf$, the `reads-from' relation; and
\item $\co$, the `coherence' order in which writes hit memory.
\end{itemize}
We restrict our attention to executions that are \emph{well-formed} as follows:
$\po$ forms, for each thread, a strict total order over that thread's events;
$\addr$, $\ctrl$, and $\data$ are within $\po$ and always originate at a read;
$\rmw$ links the read of an RMW operation to its corresponding write;
$\rf$ connects writes to reads accessing the same location, with no read having more than one incoming $\rf$ edge; and
$\co$ connects writes to the same location and forms, for each location, a strict total order over the writes to that location.
\paragraph{Notation}
Given a relation $r$, $r^{-1}$ is its inverse, $r^?$ is its reflexive closure, $r^+$ is its transitive closure, and $r^*$ is its reflexive transitive closure.
We use $\neg$ for the complement of a set or relation, implicitly with respect to the set of all events or event pairs in the execution.
We write `${;}$' for relational composition: $r_1\semi r_2 = \{(x,z)\mid \exists y\ldotp (x,y)\in r_1 \wedge (y,z)\in r_2\}$.
We write $[-]$ to lift a set to a relation: $[s] = \{(x,x)\mid x\in s\}$.
To restrict a relation $r$ to being inter-thread or intra-thread, we use $\EXT{r} = r\setminus(\po \cup \po^{-1})^*$ or $\INT{r} = r\cap (\po \cup \po^{-1})^*$, respectively.
Similarly, $\LOC{r} = r\cap\sloc$.
\paragraph{Derived Relations}
The \emph{from-read} ($\fr$) relation relates each read event to all the write events on the same location that are $\co$-later than the write the read observed~\cite{lustig+17}. The $\com$ relation captures three ways events can `communicate' with each other.
\begin{eqnarray*}
\fr &=& ([R]\semi\sloc\semi[W]) \setminus (\rf^{-1}\semi (\co^{-1})^*) \\
\com &=& \rf \cup \co \cup \fr
\end{eqnarray*}
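For concreteness, these derived relations can be computed mechanically for a finite execution. The following Python sketch is illustrative only (it is not part of \Memalloy{} or any other cited tool); it encodes events as strings and relations as sets of event pairs:

```python
def inverse(r):
    return {(y, x) for (x, y) in r}

def compose(r1, r2):
    # relational composition r1 ; r2
    return {(x, z) for (x, y) in r1 for (y2, z) in r2 if y == y2}

def refl_trans_closure(r, events):
    # r^* : reflexive transitive closure over the given event set
    closure = {(e, e) for e in events} | set(r)
    while True:
        bigger = closure | compose(closure, closure)
        if bigger == closure:
            return closure
        closure = bigger

def derived_relations(reads, writes, sloc, rf, co, events):
    # fr = ([R] ; sloc ; [W]) \ (rf^-1 ; (co^-1)^*)
    r_sloc_w = {(r, w) for (r, w) in sloc if r in reads and w in writes}
    fr = r_sloc_w - compose(inverse(rf),
                            refl_trans_closure(inverse(co), events))
    com = rf | co | fr  # com = rf U co U fr
    return fr, com
```

For the execution of Fig.~\ref{fig:sample} with the read instead observing the first write, the sketch yields the single from-read edge to the $\co$-later write, as the definition prescribes.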
\begin{figure}
\centering
\begin{tikzpicture}[inner sep=1pt,baseline=4mm]
\node (a1) at (0,1) {$\evtlbl{$a$}\evW{}{x}{1}$};
\node (a2) at (0,0) {$\evtlbl{$b$}\evR{}{x}{3}$};
\node (b1) at (1.8,1) {$\evtlbl{$c$}\evW{}{x}{1}$};
\draw[edgeco] (a1) to [auto] node {$\co$} (b1);
\draw[edgepo] (a1) to [auto] node {$\po$} (a2);
\draw[edgerf] (b1) to [auto] node {$\rf$} (a2);
\end{tikzpicture}
\hspace*{2mm}
\renewcommand\arraystretch{0.9}
\begin{tabular}{@{~}r@{~}l||r@{~}l@{~}}
\hline
\multicolumn{4}{c}{Initially: $"[X0]"=x=0$} \\
\hline
"$a$:" & "[X0] $\leftarrow$ 1" & "$c$:" & "[X0] $\leftarrow$ 2" \\
"$b$:" & "r0 $\leftarrow$ [X0]" & \\
\hline
\multicolumn{4}{c}{Test: $"r0"=2 \wedge x=2$} \\
\hline
\end{tabular}
\caption{An execution and its litmus test}
\label{fig:sample}
\end{figure}
\paragraph{Visualising Executions}
We represent executions using diagrams like the one in Fig.~\ref{fig:sample} (left).
Here, the $\po$-edges separate the execution's events into two threads, each drawn in one column.
Each event is labelled with the sets it belongs to, such as $R$ and $W$.
We use location names such as $x$ to identify the $\sloc$-classes.
\subsection{From Executions to Litmus Tests}
\label{sec:memory_models:litmus}
In order to test whether an execution of interest is observable in practice, it is necessary to convert it into a \emph{litmus test} (i.e., a program with a postcondition)~\cite{collier92}.
This litmus test is constructed so that the postcondition only passes when the particular execution of interest has been taken~\cite{alglave+10, wickerson+17}.
As an example, the execution on the left of Fig.~\ref{fig:sample} corresponds to the pseudocode litmus test on the right.
Read events become loads, writes become stores, and the $\po$-edges induce the order of instructions and their partitioning into threads.
To ensure that the litmus test passes only when the intended $\rf$-edges are present, we arrange that each store writes a unique non-zero value, and then check that each local register holds the value written by the store it was intended to observe -- this corresponds to the $"r0" = 2$ in the postcondition.
To ensure that the intended $\co$-edges are present, we check the final value of each memory location -- this corresponds to the $x=2$ in the postcondition.\footnote{
When there are more than two writes to a location, extra constraints on executions are needed to fix all the $\co$-edges~\cite{wickerson+17}.}
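The recipe just described can be sketched in a few lines of Python. This is our simplified illustration, not the actual test-generation code of \Litmus{} or \Memalloy{}; it assumes each store has already been assigned a unique non-zero value and that each read $r$ lands in a register named \texttt{r\_}$r$:

```python
def postcondition(writes, rf, co_max):
    """writes: event -> (location, unique non-zero value) for each store
       rf:     set of (write, read) edges
       co_max: location -> its co-maximal (final) write event"""
    clauses = []
    for w, r in sorted(rf):          # each register must hold the value of
        loc, val = writes[w]         # the store it was intended to observe
        clauses.append(f"r_{r} = {val}")
    for loc, w in sorted(co_max.items()):
        _, val = writes[w]           # final memory contents pin down co
        clauses.append(f"{loc} = {val}")
    return " /\\ ".join(clauses)
```

Applied to the execution of Fig.~\ref{fig:sample}, where $b$ observes $c$ and $c$ is $\co$-maximal, this reproduces the postcondition $"r0"=2 \wedge x=2$ shown there (up to register naming).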
\section{Axiomatising Transactions}
\label{sec:transactions}
\newcommand\stxn{\mathit{stxn}}
\newcommand\stronglift{\mathsf{stronglift}}
\newcommand\weaklift{\mathsf{weaklift}}
\newcommand\Order{\ax{Order}}
\newcommand\TxnOrder{\ax{TxnOrder}}
\newcommand\axlabel[1]{\textsc{(#1)}}
\newcommand\axiom[2]{\multicolumn{3}{@{}P}{\hspace*{1.2mm}#2\hfill\axlabel{#1}\hspace*{0.7mm}}}
\newcommand\where{\quad\text{where}~}
\newcommand\header[1]{\multicolumn{3}{@{}>{\raggedright}p{\linewidth}}{#1}}
\newlength{\myframesep}
\setlength{\myframesep}{3pt}
\newlength{\axwidth}
\setlength{\axwidth}{\linewidth}
\addtolength{\axwidth}{-1.3mm}
\addtolength{\axwidth}{-2\myframesep}
\newcommand\newaxiom[2]{\multicolumn{3}{@{}P}{\dashboxed[\axwidth]{}#2 \hfill \axlabel{#1}\hspace*{0.7mm}}}
\newenvironment{axiomatisationWithoutBox}{%
\renewcommand\arraystretch{1.1}%
\begin{tabular*}{\linewidth}{@{}R@{~~}C@{~~}L@{}}
}{
\end{tabular*}
}
\newenvironment{axiomatisation}{%
\renewcommand\FrameSep{\myframesep}%
\begin{framed}%
\begin{axiomatisationWithoutBox}%
}{
\end{axiomatisationWithoutBox}%
\end{framed}%
}
Transactional memory (TM) can be provided either at the architecture level (x86, Power, ARMv8) or in software (C++).
Since we are concerned only with the \emph{specification} of TM, and not its implementation, we can formalise both forms within a unified framework.
In this section, we describe how program executions can be extended to express transactions (\S\ref{sec:transactions:executions}) and how we can derive litmus tests to test for these executions (\S\ref{sec:transactions:litmus}).
We then propose axioms for capturing the \emph{isolation} of transactions (\S\ref{sec:transactions:isolation}), and for strengthening the SC memory model to obtain \emph{transactional} SC (\S\ref{sec:transactions:tsc}).
\subsection{Transactional Executions}
\label{sec:transactions:executions}
To enable transactions in an axiomatic memory modelling framework, we extend executions with an $\stxn$ relation that relates events in the same successful (i.e., committed) transaction.
For an execution to be well-formed, $\stxn$ must be a partial equivalence relation (i.e., symmetric and transitive), and each $\stxn$-class must coincide with a contiguous subset of $\po$.
When generating the candidate executions of a program with transactions, each transaction is assumed to succeed or fail non-deterministically.
That is, each either gives rise to a $\stxn$-class of events, or vanishes as a no-op.
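These well-formedness conditions can be checked mechanically. The following Python sketch (ours, not part of any published tooling) models relations as sets of ordered pairs and assumes $\po$ is given in its transitive form; all names are our own:

```python
def wf_stxn(stxn, po, events):
    """Check that stxn is a partial equivalence relation whose
    classes are contiguous in po."""
    # symmetric
    symmetric = all((b, a) in stxn for (a, b) in stxn)
    # transitive
    transitive = all(
        (a, d) in stxn
        for (a, b) in stxn for (c, d) in stxn if b == c
    )
    # contiguity: any event po-between two members of a class
    # must itself be in that class
    contiguous = all(
        (a, e) in stxn
        for (a, b) in stxn if (a, b) in po
        for e in events if (a, e) in po and (e, b) in po
    )
    return symmetric and transitive and contiguous
```

For instance, a transaction `{a, b}` on a thread `a; b` is well-formed, whereas an `stxn`-class `{a, d}` on a thread `a; b; d` is rejected because it skips the intermediate event `b`.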
\begin{figure}
\centering
\begin{tikzpicture}[inner sep=1pt]
\node (a1) at (0,1) {$\evtlbl{$a$}\evW{}{x}{1}$};
\node (a2) at (0,0) {$\evtlbl{$b$}\evR{}{x}{3}$};
\node (b1) at (1.5,1) {$\evtlbl{$c$}\evW{}{x}{1}$};
\draw[edgeco] (a1) to [auto,pos=0.6] node {$\vphantom{p}\co$} (b1);
\draw[edgepo] (a1) to [auto] node {$\po$} (a2);
\draw[edgerf] (b1) to [auto] node {$\rf$} (a2);
\node[stxn, fit=(a1)(a2)] {};
\end{tikzpicture}
\hfill
\renewcommand\arraystretch{0.9}
\renewcommand\tabcolsep{1mm}
\begin{tabular}{r@{~}l||r@{~}l}
\hline
\multicolumn{4}{c}{Initially: $"[X0]"=x=0$, $"[X1]"=\mathit{ok}=1$} \\
\hline
& "txbegin L$_{\rm fail}$" & $c$":" & "[X0] $\leftarrow$ 2" \\
$a$":" & "[X0] $\leftarrow$ 1" & \\
$b$":" & "r0 $\leftarrow$ [X0]" & \\
& "txend" & \\
& "goto L$_{\rm succ}$" & \\
"L$_{\rm fail}$:" & "[X1] $\leftarrow$ 0" & \\
"L$_{\rm succ}$:" & & \\
\hline
\multicolumn{4}{c}{Test: $\mathit{ok}=1 \wedge "r0"=2 \wedge x=2$} \\
\hline
\end{tabular}
\caption{A transactional execution and its litmus test }
\label{fig:sample_txn}
\end{figure}
Diagrammatically, we represent $\stxn$ using \smash{%
\begin{tikzpicture}[baseline=(a.base)]
\node[inner sep=0](a){boxes.};
\node[stxn, fit=(a)] {};
\end{tikzpicture}}
For instance, events $a$ and $b$ in Fig.~\ref{fig:sample_txn} form a successful transaction.
\begin{Remark}
To study the behaviour of unsuccessful transactions in more detail, one could add an explicit representation of them in executions, perhaps using
\smash{%
\begin{tikzpicture}[baseline=(a.base)]
\node[inner sep=0](a){dashed boxes.};
\node[ftxn, fit=(a)] {};
\end{tikzpicture}}
However, the behaviour of unsuccessful transactions is tricky to ascertain on hardware because of the rollback mechanism.
Moreover, it is unclear how they should interact with $\co$, since $\co$ is the order in which writes hit the memory, which writes in unsuccessful transactions never do.
\end{Remark}
\subsection{From Transactional Executions to Litmus Tests}
\label{sec:transactions:litmus}
A transactional execution can be converted into a litmus test by extending the construction of \S\ref{sec:memory_models:litmus}.
As an example, the execution on the left of Fig.~\ref{fig:sample_txn} corresponds to the litmus test on the right.
The instructions in the transaction simply need to be enclosed between instructions that begin and end a transaction.
We write these as "txbegin" and "txend" here; our tooling specialises these for each target architecture.
The postcondition checks that the transaction succeeded using the `$\mathit{ok}$' location, which is zeroed in the transaction's fail-handler, "L$_{\rm fail}$", the label of which is provided with the "txbegin" instruction.
\subsection{Weak and Strong Isolation}
\label{sec:transactions:isolation}
We now explain how the \emph{isolation} property of transactions can be captured as a property of an execution graph.
A TM system provides \emph{weak} isolation if transactions are isolated from other transactions; that is, their intermediate state cannot affect or be affected by other transactions~\cite{blundell+06, harris+10}.
It provides \emph{strong} isolation if transactions are also isolated from non-transactional code.
\begin{figure}
\begin{subfigure}[b]{0.24\linewidth}
\begin{tikzpicture}[inner sep=1pt, baseline=0.5cm]
\node (a1) at (0,0.7) {$\evR{}{x}{0}$};
\node (a2) at (0,0) {$\evR{}{x}{1}$};
\node (b1) at (1.3,0.35) {$\evW{}{x}{1}$};
\draw[edgefr] (a1) to [auto] node {$\fr$} (b1);
\draw[edgerf] (b1) to [auto] node {$\rf$} (a2);
\draw[edgepo] (a1) to [auto] node (po) {$~\po$} (a2);
\node[stxn, fit=(a1)(a2)(po)] {};
\end{tikzpicture}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\begin{tikzpicture}[inner sep=1pt, baseline=0.5cm]
\node (a1) at (0,0.7) {$\evR{}{x}{0}$};
\node (a2) at (0,0) {$\evW{}{x}{2}$};
\node (b1) at (1.3,0.35) {$\evW{}{x}{1}$};
\draw[edgefr] (a1) to [auto] node {$\fr$} (b1);
\draw[edgeco] (b1) to [auto] node {$\co$} (a2);
\draw[edgepo] (a1) to [auto] node (po) {$~\po$} (a2);
\node[stxn, fit=(a1)(a2)(po)] {};
\end{tikzpicture}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\begin{tikzpicture}[inner sep=1pt, baseline=0.5cm]
\node (a1) at (0,0.7) {$\evW{}{x}{1}$};
\node (a2) at (0,0) {$\evR{}{x}{2}$};
\node (b1) at (1.3,0.35) {$\evW{}{x}{2}$};
\draw[edgeco] (a1) to [auto] node {$\co$} (b1);
\draw[edgerf] (b1) to [auto] node {$\rf$} (a2);
\draw[edgepo] (a1) to [auto] node (po) {$~\po$} (a2);
\node[stxn, fit=(a1)(a2)(po)] {};
\end{tikzpicture}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\begin{tikzpicture}[inner sep=1pt, baseline=0.5cm]
\node (a1) at (0,0.7) {$\evW{}{x}{1}$};
\node (a2) at (0,0) {$\evW{}{x}{2}$};
\node (b1) at (1.3,0.35) {$\evR{}{x}{1}$};
\draw[edgerf] (a1) to [auto] node {$\rf$} (b1);
\draw[edgefr] (b1) to [auto] node {$\fr$} (a2);
\draw[edgeco, shift right=0.5mm] (a1) to [auto,swap] node (co) {$\co~$} (a2);
\draw[edgepo, shift left=0.5mm] (a1) to [auto] node (po) {$~\po$} (a2);
\node[stxn, fit=(a1)(a2)(po)(co)] {};
\end{tikzpicture}
\caption{}
\end{subfigure}
\caption{Four SC executions that are allowed by weak isolation but forbidden by strong isolation}
\label{fig:weak_strong_isolation}
\end{figure}
\newcommand\AtomicRMW{\ax{RMWIsol}}
The four 3-event SC executions in Fig.~\ref{fig:weak_strong_isolation} illustrate the difference between strong and weak isolation.
In each, the interfering event would need to be within a transaction to be forbidden by weak isolation; strong isolation does not make this distinction.
Executions {\bf (a)} and {\bf (d)} correspond to what \citeauthor{blundell+06} call \emph{non-interference} and \emph{containment}, respectively, and {\bf (b)} is similar to the standard axiom for RMW isolation (cf. \AtomicRMW{} in Fig.~\ref{fig:axioms_x86}).
\newcommand\WeakIsol{\ax{WeakIsol}}
\newcommand\StrongIsol{\ax{StrongIsol}}
Failures of isolation can be characterised as communication cycles between transactions.
To define these cycles, the following constructions are useful:
\begin{eqnarray*}
\weaklift(r,t) &=& t\semi (r\setminus t)\semi t \\
\stronglift(r,t) &=& t^?\semi (r\setminus t)\semi t^?.
\end{eqnarray*}
If $r$ relates events $e_1$ and $e_2$ in different transactions, then $\weaklift(r,\stxn)$ relates all the events in $e_1$'s transaction to all those in $e_2$'s transaction.
The $\stronglift$ version also includes edges where the source and/or the target event are not in a transaction.
Weak and strong isolation can then be axiomatised by treating all the events in a transaction as a single event whenever the transaction communicates with another transaction (\WeakIsol) or any other event (\StrongIsol).
\begin{align}
\tag{\WeakIsol}
\acyclic(\weaklift(\com,\stxn)) \\
\tag{\StrongIsol}
\acyclic(\stronglift(\com,\stxn))
\end{align}
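The lifting operators and isolation axioms translate directly into executable form. The following Python sketch (ours) represents relations as sets of ordered pairs and applies both axioms to execution (a) of Fig.~\ref{fig:weak_strong_isolation}:

```python
def compose(r, s):
    """Relational composition r ; s."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def acyclic(r):
    # compute the transitive closure, then check irreflexivity
    tc = set(r)
    while True:
        new = tc | compose(tc, tc)
        if new == tc:
            break
        tc = new
    return all(a != b for (a, b) in tc)

def weaklift(r, t):
    return compose(compose(t, r - t), t)          # t ; (r \ t) ; t

def stronglift(r, t, events):
    t_opt = t | {(e, e) for e in events}          # t^? = reflexive closure
    return compose(compose(t_opt, r - t), t_opt)  # t^? ; (r \ t) ; t^?

# Execution (a): Rx0 (a) and Rx1 (b) in a transaction; Wx1 (c) outside
events = {'a', 'b', 'c'}
stxn = {('a','a'), ('a','b'), ('b','a'), ('b','b')}
com = {('a','c'), ('c','b')}                      # fr(a,c) and rf(c,b)

assert acyclic(weaklift(com, stxn))               # allowed by WeakIsol
assert not acyclic(stronglift(com, stxn, events)) # forbidden by StrongIsol
```

Since the interfering write `c` is non-transactional, the weak lift is empty, while the strong lift contains the cycle between `c` and the transaction.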
\subsection{Transactional Sequential Consistency}
\label{sec:transactions:tsc}
\begin{figure}
\centering
\begin{axiomatisation}
\axiom{\Order}{\acyclic(\hb) \where \hb = \po\cup\com}
\\
\newaxiom{\TxnOrder}{\acyclic(\stronglift(\hb,\stxn))} \\
\end{axiomatisation}
\caption{SC axioms~\cite{shasha+88}, with TSC extensions \dashboxed{\text{highlighted}}}
\label{fig:axioms_tsc}
\end{figure}
Although isolation is a critical property for transactions, it only provides a lower bound on the guarantees that real architectures provide.
Meanwhile, an upper bound on the guarantees provided by a reasonable TM implementation is \emph{transactional sequential consistency} (TSC)~\cite{dalessandro+09}.
The models we propose in \S\ref{sec:x86_power}--\ref{sec:cpp} all lie between these bounds.
TSC is a strengthening of the SC memory model in which consecutive events in a transaction must appear consecutively in the overall execution order.
Where SC can be characterised axiomatically (Fig.~\ref{fig:axioms_tsc}) by forbidding cycles in program order and communication (\Order{})~\cite{shasha+88}, we can obtain TSC by additionally forbidding such cycles between transactions and non-transactional events (\TxnOrder{}).
Note that \TxnOrder{} subsumes the \StrongIsol{} axiom.
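As an executable illustration (ours, with the same set-of-pairs encoding as before), the full TSC consistency predicate can be run on execution (a) of Fig.~\ref{fig:weak_strong_isolation}, which is SC-consistent but TSC-inconsistent:

```python
def compose(r, s):
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def acyclic(r):
    tc = set(r)
    while True:
        new = tc | compose(tc, tc)
        if new == tc:
            break
        tc = new
    return all(a != b for (a, b) in tc)

def stronglift(r, t, events):
    t_opt = t | {(e, e) for e in events}
    return compose(compose(t_opt, r - t), t_opt)

def tsc_consistent(po, com, stxn, events):
    hb = po | com
    return (acyclic(hb)                                  # (Order)
            and acyclic(stronglift(hb, stxn, events)))   # (TxnOrder)

po = {('a', 'b')}
com = {('a', 'c'), ('c', 'b')}
stxn = {('a','a'), ('a','b'), ('b','a'), ('b','b')}
events = {'a', 'b', 'c'}

assert acyclic(po | com)                       # SC allows it ...
assert not tsc_consistent(po, com, stxn, events)  # ... but TSC does not
```

Note that the intra-transactional $\po$ edge is subtracted by the lift, so only the communication with the external write contributes to the \TxnOrder{} cycle.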
\section{Methodology}
\label{sec:methodology}
\newcommand\conformance{\mathsf{min\mhyphen{}inconsistent}}
\newcommand\maxconsistent{\mathsf{max\mhyphen{}consistent}}
\newcommand\consistent{\mathsf{consistent}}
\newcommand\mfence{\mathit{mfence}}
\newcommand\lwsync{\mathit{lwsync}}
\newcommand\sync{\mathit{sync}}
\newcommand\REL{\mathit{Rel}}
\newcommand\SC{\mathit{SC}}
We identify three components of a memory modelling methodology: (1) developing and refining axioms, (2) synthesising and running conformance tests, and (3) checking metatheoretical properties.
In this section, we explain our approach to each of these components, and in particular, how we have extended the \Memalloy{} tool~\cite{wickerson+17} to support each task.
\paragraph{Background on \Memalloy{}}
The original \Memalloy{} tool, built on top of Alloy~\cite{jackson12a}, was developed for comparing memory models.
It takes two models (say, $M$ and $N$), and searches for a single execution that distinguishes them (i.e., is inconsistent under $M$ but consistent under $N$).
Additionally, if \Memalloy{} is supplied with a translation on executions (e.g., representing a compiler mapping or a compiler optimisation), then it searches for a witness that the translation is unsound.
This translation is defined by a relation, typically named $\pi$, from `source' events to `target' events.
\subsection{Developing and Refining Axioms}
For each model, we make a first attempt at a set of axioms using information obtained from specifications, programming manuals, research papers, and discussions with designers.
Then, for each proposed change to the model, we use \Memalloy{} to generate tests that become disallowed or allowed as a result.
We decide whether to accept the change based on discussing these tests with designers, and running them on existing hardware (where available) using the \Litmus{} tool~\cite{alglave+11a}.
In order to extend \Memalloy{} to support the development of transactional memory models in this way, we augmented the form of executions as described in \S\ref{sec:transactions:executions}, and modified the litmus test generator as described in \S\ref{sec:transactions:litmus}.
\subsection{Synthesising and Running Conformance Tests}
\label{sec:methodology:empirical}
To build confidence in a model, we compare the behaviours it admits against those allowed by the architecture or language being modelled.
It is vital that no behaviour allowed by the architecture/language is forbidden by the model, so we exhaustively generate all litmus tests (up to a bounded size) that our model forbids, and confirm using \Litmus{} that none can be observed on existing hardware.
To achieve this, we extended \Memalloy{} with a mode for exhaustively generating conformance tests for a given model $M$.
The key to exhaustive generation is a suitable notion of \emph{minimality}, without which we would obtain an infeasibly large number of tests.
We closely follow \citet{lustig+17}, and define execution minimality with respect to the following partial order between executions. Let $X\sqsubset Y$ hold when execution $X$ can be obtained from execution $Y$ by:
\begin{enumerate}[label=(\roman*)]
\item removing an event (plus any incident edges),
\item removing a dependency edge ($\addr$, $\ctrl$, $\data$, $\rmw$), or
\item downgrading an event (e.g. reducing an acquire-read to a plain read in ARMv8).
\end{enumerate}
We then calculate the set $\conformance(M) \eqdef \{X \in \Exec \mid X \notin \consistent(M) \wedge \forall X'\sqsubset X\ldotp X'\in\consistent(M)\}$.
Extending \Memalloy{} to support the synthesis of transactional conformance tests requires minimality to take transactions into account.
To do this, we arrange that $X \sqsubset Y$ also holds when $X$ can be obtained from $Y$ by:
\begin{enumerate}[label=(\roman*), start=5]
\item making the first or last event in a transaction non-trans\-actional (i.e. removing all of its incident $\stxn$ edges).
\end{enumerate}
(We avoid the `middle' of a transaction so as not to create non-contiguous transactions and hence ill-formed executions.)
\begin{Remark}
While this is a slightly coarse notion of minimality -- a more refined version would also allow a large transaction to be chopped into two smaller ones -- it is cheap to implement in the constraint solver as it only requires quantification over a single event.
As a result, \Memalloy{} may generate some executions that appear non-minimal, but as we show in~\S\ref{sec:x86_power:testing}, this does not impede our ability to generate and run large batches of conformance tests.
\end{Remark}
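The shape of the $\conformance$ computation can be sketched abstractly. In the following toy Python sketch (ours), executions are abstracted as frozensets of `features', and a single $\sqsubset$-step drops one feature, standing in for removing an event, removing a dependency edge, downgrading an event, or untransactionalising a boundary event; the consistency predicate is a placeholder:

```python
from itertools import combinations

def shrink(X):
    """All executions one ⊏-step below X."""
    return [X - {f} for f in X]

def min_inconsistent(executions, consistent):
    return [X for X in executions
            if not consistent(X)
            and all(consistent(Y) for Y in shrink(X))]

features = {'ev1', 'ev2', 'dep'}
universe = [frozenset(c) for n in range(len(features) + 1)
            for c in combinations(features, n)]
consistent = lambda X: len(X) < 2   # toy stand-in for consistent(M)

mins = min_inconsistent(universe, consistent)
# exactly the three two-feature executions are minimally inconsistent
assert len(mins) == 3 and all(len(X) == 2 for X in mins)
```

In \Memalloy{} itself this search is expressed as a constraint-solving problem rather than enumerated explicitly, but the fixed-point structure is the same.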
\paragraph{Generating Allowed Tests}
Having generated the minimal\-ly-forbidden tests, the question naturally arises of whether we can generate the \emph{maximally-allowed} tests too.
Where the minimally-forbidden tests include just enough fences/de\-pen\-den\-cies/transactions to be forbidden (and failing to observe these tests empirically suggests that the model is not too strong), the maximally-allowed tests include just \emph{not} enough (and observing them suggests that the model is not too weak).
We found the maximally-allowed tests valuable for communicating with engineers about the detailed relaxations permitted by our models.
However, in our experiments, allowed tests are less conclusive than forbidden ones, because where the observation of a forbidden test implies that the model is unsound, the non-observation of an allowed test may be caused by not performing enough runs, or by the machine under test being implemented conservatively.
Moreover, the notion of execution maximality is not as natural as minimality.
For instance, an inconsistent execution is only considered minimally-inconsistent if \emph{removing} any event makes it \emph{consistent}, yet it is not sensible to deem a consistent execution maximally-consistent only when \emph{adding} any event makes it \emph{inconsistent} -- such a condition is almost impossible to satisfy.
Even with event addition/removal set aside, maximal-consistency tends to require executions to be littered with redundant fences and dependencies.
Therefore, we approximate the maximally-consistent executions as those obtained via a single $\sqsubset$-step from a minimally-inconsistent execution. That is, we let $\maxconsistent(M) \eqdef \{X \in \Exec \mid \exists Y\in\conformance(M)\ldotp X \sqsubset Y\}$.
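Continuing the toy setup from the previous subsection (again a sketch of ours, not the tooling itself), the approximation can be computed as the one-step shrinks of the minimally-inconsistent executions:

```python
from itertools import combinations

def shrink(X):
    return [X - {f} for f in X]

features = {'ev1', 'ev2', 'dep'}
universe = [frozenset(c) for n in range(len(features) + 1)
            for c in combinations(features, n)]
consistent = lambda X: len(X) < 2   # toy stand-in for consistent(M)

min_inconsistent = [X for X in universe
                    if not consistent(X)
                    and all(consistent(Y) for Y in shrink(X))]

# max-consistent(M) = executions one ⊏-step below a min-inconsistent one
max_consistent = {Y for X in min_inconsistent for Y in shrink(X)}
assert all(consistent(Y) for Y in max_consistent)
```

By the definition of minimality, every execution produced this way is guaranteed to be consistent, so no extra consistency check is needed in principle.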
\subsection{Checking Metatheoretical Properties}
\label{sec:methodology:libabs}
As explained at the start of this section, \Memalloy{} is able to validate transformations between two memory models, providing they can be encoded as a $\pi$-relation between executions.
In \S\ref{sec:metatheory}, we exploit this ability to check several TM-related transformations and compiler mappings.
In fact, \Memalloy{} can also be used to check libraries under weak memory.
Prior work has (manually) verified that stack, queue, and barrier libraries implement their specifications under weak memory models~\cite{batty+13, sorensen+16a}; here we show how checking these types of properties can be automated up to a bounded number of library and client events.
We see this as a straightforward first-step towards a general verification effort.
The idea, which we apply to a lock elision library in \S\ref{sec:metatheory:hle}, is to treat the replacement of the library's specification with its implementation as a program transformation.
To do this, we first extend executions with events that represent method calls.
Second, we extend execution well-formedness so that illegal call sequences (such as popping from an empty stack) are rejected.
Third, we strengthen the memory model's consistency predicate with axioms capturing the library's obligations (such as pops never returning data from later pushes).
Finally, we constrain $\pi$ so that it maps each method call to an event sequence representing the implementation of that method.
\newcommand\Locked{\mathit{L}}
\newcommand\ppo{\mathit{ppo}}
\newcommand\implied{\mathit{implied}}
\newcommand\tfence{\mathit{tfence}}
\newcommand\Coherence{\ax{Coherence}}
\newcommand\isync{\mathit{isync}}
\newcommand\fence{\mathit{fence}}
\newcommand\cfence{\mathit{efence}}
\newcommand\propI{\mathit{prop_1}}
\newcommand\propII{\mathit{prop_2}}
\newcommand\tpropI{\mathit{tprop_1}}
\newcommand\tpropII{\mathit{tprop_2}}
\newcommand\prop{\mathit{prop}}
\newcommand\ihb{\mathit{ihb}}
\newcommand\thb{\mathit{thb}}
\newcommand\XW{\mathit{XW}}
\newcommand\Propagation{\ax{Propagation}}
\newcommand\Observation{\ax{Observation}}
\newcommand\ExclWrites{\ax{ExclWrites}}
\newcommand\TxnCancelsRMW{\ax{TxnCancelsRMW}}
\section{Transactions in x86 and Power}
\label{sec:x86_power}
Over the next three sections, we show how our methodology can be applied to four different targets.
We begin with x86 and Power, which have both supported TM since 2013~\cite{intel12, cain+13}.
Intel's Transactional Synchronisation Extensions (TSX) provide "XBEGIN", "XEND", and "XABORT" instructions for starting, committing, and aborting transactions, while Power provides "tbegin", "tend", and "tabort".
\subsection{Background: the x86 and Power Memory Models}
Both the x86 memory model~\cite{owens+09} and the Power memory model~\cite{alglave+14, sarkar+11, sarkar+12} allow certain instructions to execute out of program order, with x86 allowing stores to be reordered with later loads and Power allowing many more relaxations.
Both architectures provide fences ("MFENCE" in x86, and "lwsync", "sync", and "isync" in Power) to allow these relaxations to be controlled.
The x86 architecture provides atomic RMWs via "LOCK"-prefixed instructions, while Power implements RMWs using \emph{exclusive} instructions like those seen in Example~\ref{ex:hle}.
Moreover, x86 is \emph{multicopy-atomic}~\cite{collier92}, which means that writes are propagated to all other threads simultaneously.
Power does not have this property, so its memory model includes explicit axioms to describe how writes propagate.
More formally, we extend executions with relations that connect events in program order that are separated by a fence event of a given type.
For x86, we add an $\mfence$ relation, and for Power, we add $\isync$, $\lwsync$, and $\sync$.
\begin{figure}
\centering
\begin{axiomatisation}
\axiom{\Coherence}{\acyclic(\LOC{\po} \cup \com)}
\\
\axiom{\AtomicRMW}{\isempty(\rmw \cap (\fre \semi \coe))} \\
\axiom{\Order}{\acyclic(\hb)}
\\
\where \ppo &=& \stack{((W \times W) \cup (R\times W) \cup (R\times R)) \cap \po}
\\
\dashboxed[63mm]{}\tfence &=& \po\cap((\neg\stxn\semi\stxn) \cup (\stxn\semi\neg\stxn))
\\
\Locked &=& \domain(\rmw) \cup \range(\rmw) \\
\implied &=& \stack{[\Locked]\semi\po \cup \po\semi[\Locked] \dashboxed{{}\cup \tfence}}
\\
\hb &=& \mfence \cup \ppo \cup \implied \cup \rfe \cup \fr \cup \co
\\
\newaxiom{\StrongIsol}{\acyclic(\stronglift(\com,\stxn))} \\
\newaxiom{\TxnOrder}{\acyclic(\stronglift(\hb,\stxn))} \\
\end{axiomatisation}
\caption{x86 consistency axioms~\cite{alglave+14}, with our
TM additions \dashboxed{\text{highlighted}}}
\label{fig:axioms_x86}
\end{figure}
An x86 execution is consistent if it satisfies all of the axioms in Fig.~\ref{fig:axioms_x86} (ignoring the highlighted regions for now).
The \Coherence{} axiom forbids cycles in communication edges and program order among events on the same location; this guarantees that programs using only a single location have SC semantics.
Happens-before ($\hb$) imposes the event-ordering constraints upon which all threads must agree, and \Order{} ensures that $\hb^*$ is a partial order.
The constraints on $\hb$ arise from: fences placed by the programmer ($\mfence$), fences created implicitly by "LOCK"'d operations ($\implied$), the preserved fragment of the program order ($\ppo$), inter-thread observations ($\rfe$) and communication edges ($\fr$ and $\co$).
\begin{figure}
\centering
\begin{axiomatisation}
\axiom{\Coherence}{\acyclic(\LOC{\po} \cup \com)}
\\
\axiom{\AtomicRMW}{\isempty(\rmw \cap (\fre\semi \coe))}
\\
\axiom{\Order}{\acyclic(\hb)}
\\
\where \ppo &=& \modelcomment{preserved program order, elided}
\\
\dashboxed[63mm]{}\tfence &=& \po\cap((\neg\stxn\semi\stxn) \cup (\stxn\semi\neg\stxn))
\\
\fence &=& \sync \dashboxed{{}\cup \tfence} \cup (\lwsync \setminus (W\times R))
\\
\ihb &=& \ppo \cup \fence
\\
\dashboxed[65mm]{}\thb &=& (\rfe \!\cup ((\fre \!\cup\! \coe)^*\!\semi\ihb))^*\!\semi(\fre \!\cup\! \coe)^*\!\semi\rfe^?
\\
\hb &=& (\rfe^?\semi\ihb\semi \rfe^?) \dashboxed{{}\cup \weaklift(\thb,\stxn)}
\\
\axiom{\Propagation}{\acyclic(\co \cup \prop)}
\\
\where \cfence &=& \rfe^?\semi\fence\semi \rfe^?
\\
\propI &=& [W]\semi \cfence\semi\hb^*\semi[W]
\\
\propII &=& \come^*\!\semi \cfence^*\!\semi\hb^*\!\semi (\sync \dashboxed{{}\cup\tfence}) \semi \hb^*
\\
\dashboxed[32mm]{}\tpropI &=& \rfe\semi\stxn\semi[W]
\\
\dashboxed[25mm]{}\tpropII &=& \stxn\semi\rfe
\\
\prop &=& \propI \cup \propII \dashboxed{{}\cup \tpropI \cup \tpropII}
\\
\axiom{\Observation}{\irreflexive(\fre\semi \prop \semi \hb^*)}
\\
\newaxiom{\StrongIsol}{\acyclic(\stronglift(\com,\stxn))} \\
\newaxiom{\TxnOrder}{\acyclic(\stronglift(\hb,\stxn))} \\
\newaxiom{\TxnCancelsRMW}{\isempty(\rmw \cap \tfence^*)} \\
\end{axiomatisation}
\caption{Power consistency axioms~\cite{alglave+14}, with our
TM additions \dashboxed{\text{highlighted}}, and some details elided for brevity.}
\label{fig:axioms_power}
\end{figure}
A Power execution is consistent if it satisfies all the axioms in Fig.~\ref{fig:axioms_power} (again, ignoring the highlights).
The first axiom not already seen is \Order{}, which ensures that happens-before ($\hb$) is acyclic.
In contrast to x86, the happens-before relation in Power is formed from inter-thread observations ($\rfe$), the preserved fragment of the program order ($\ppo$), and fences ($\fence$).
We elide the definition of $\ppo$ as it is complex and unchanged by our TM additions.
The $\prop$ relation governs how fences restrict ``the order in which writes propagate''~\cite{alglave+14}, and the \Propagation{} axiom ensures that this relation does not contradict the coherence order.
\Observation{} governs which writes a read can observe: if $e_1$ propagates before $e_2$, then any read that happens after $e_2$ is prohibited from observing a write that precedes $e_1$ in coherence order.
\subsection{Adding Transactions}
\label{sec:x86_power:adding_txns}
To extend the x86 and Power memory models to support TM, we make the following amendments, each highlighted in Figs.~\ref{fig:axioms_x86} and~\ref{fig:axioms_power}.
\paragraph{Strong Isolation (x86 and Power)}
The Power manual says that transactions ``appear atomic with respect to both transactional and non-transactional accesses performed by other threads''~\cite[\S5.1]{power30}, and the TSX manual defines conflicts not just between transactions, but between a transaction and ``another logical processor'' (which is not required to be executing a transaction)~\cite[\S16.2]{intel17}.
We interpret these statements to mean that x86 and Power transactions provide \emph{strong} isolation, so we add our \StrongIsol{} axiom from \S\ref{sec:transactions:isolation}.
\paragraph{Implicit Transaction Fences (x86 and Power)}
In both x86 and Power, fences are created at the boundaries of successful transactions. In x86, ``a successfully committed [transaction] has the same ordering semantics as a "LOCK" prefixed instruction''~\cite[\S16.3.6]{intel17}, and in Power, ``[a] "tbegin" instruction that begins a successful transaction creates a [cumulative] memory barrier'', as does ``a "tend" instruction that ends a successful transaction''~\cite[\S1.8]{power30}.
Hence, we define $\tfence$ as the program-order edges that enter ($\neg\stxn\semi\stxn$) or exit ($\stxn\semi\neg\stxn$) a successful transaction, and add $\tfence$ alongside the existing fence relations ($\mfence$ and $\sync$).
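The $\tfence$ construction can be computed directly from $\po$ and $\stxn$. The Python sketch below (ours, with relations as sets of pairs and $\po$ assumed transitive) checks it on a single thread `w0; [a; b]; w1` whose middle two events form a transaction:

```python
def compose(r, s):
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def tfence(po, stxn, events):
    not_stxn = {(a, b) for a in events for b in events
                if (a, b) not in stxn}
    enter = compose(not_stxn, stxn)   # ¬stxn ; stxn
    exit_ = compose(stxn, not_stxn)   # stxn ; ¬stxn
    return po & (enter | exit_)

events = {'w0', 'a', 'b', 'w1'}
po = {('w0','a'), ('w0','b'), ('w0','w1'),
      ('a','b'), ('a','w1'), ('b','w1')}
stxn = {('a','a'), ('a','b'), ('b','a'), ('b','b')}

assert tfence(po, stxn, events) == \
    {('w0','a'), ('w0','b'), ('a','w1'), ('b','w1')}
```

Note that the intra-transactional edge `(a, b)` and the edge `(w0, w1)` that skips the transaction entirely are excluded, exactly as the definition intends.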
\paragraph{Transaction Atomicity (x86 and Power)}
We extend the prohibition on $\hb$ cycles among events to include cycles among transactions (\TxnOrder).
This essentially treats all the transaction's events as one indivisible event, and is justified by the atomicity guarantee given to transactions, which in x86 is ``that all memory operations [\ldots] appear to have occurred instantaneously when viewed from other logical processors''~\cite[\S16.2]{intel17}, and in Power is that each successful transaction ``appears to execute as an atomic unit as viewed by other processors and mechanisms''~\cite[\S1.8]{power30}.
\paragraph{Barriers within Transactions (Power only)}
Each transaction contains an ``integrated memory barrier'', which ensures that writes observed by a successful transaction are propagated to other threads before writes performed by the transaction itself~\cite[\S1.8]{power30}.
This behaviour is epitomised by the \ltest{WRC}-style execution below~\cite[Fig.~6]{cain+13},
\begin{equation}
\label{eq:cain_wrc}
\begin{tikzpicture}[inner sep=1pt, baseline=0.35cm]
\node (a1) at (0,0.7) {\evtlbl{$a$}$\evW{}{x}{1}$};
\node (b1) at (2,0.7) {\evtlbl{$b$}$\evR{}{x}{1}$};
\node (b2) at (2,0) {\evtlbl{$c$}$\evW{}{\smash{y}}{1}$};
\node (c1) at (4,0.7) {\evtlbl{$d$}$\evR{}{\smash{y}}{1}$};
\node (c2) at (4,0) {\evtlbl{$e$}$\evR{}{x}{0}$};
\draw[edgefr, overlay] (c2) to [auto, bend left=28, pos=0.8] node {$\fr$} (a1);
\draw[edgerf] (a1) to [auto,swap] node {$\rf$} (b1);
\draw[edgerf] (b2) to [auto,swap] node {$\rf$} (c1);
\draw[edgepo] (b1) to [auto] node {$~\po$} (b2);
\draw[edgepo] (c1) to [auto] node {$~\ppo$} (c2);
\node[stxn, fit=(b1)(b2)] {};
\end{tikzpicture}
\end{equation}
which must be ruled out because the transaction's write ($c$) has propagated to the third thread before a write ($a$) that the transaction observed.
We capture this constraint by extending the $\prop$ relation so that it connects any write observed by a transaction to any write within that transaction ($\tpropI$).
In execution~\eqref{eq:cain_wrc}, this puts a $\prop$ edge from $a$ to $c$.
The execution is thus forbidden by the existing \Observation{} axiom.
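This reasoning can be checked mechanically. The Python sketch below (ours) builds $\tpropI$ for execution~\eqref{eq:cain_wrc} and confirms the \Observation{} violation; since the full Power $\hb$ is elided above, we use only the fragment $\rfe^?\semi\ppo\semi\rfe^?$, which suffices for this execution:

```python
def compose(r, s):
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def star(r, events):
    """Reflexive-transitive closure of r over the given events."""
    tc = set(r)
    while True:
        new = tc | compose(tc, tc)
        if new == tc:
            break
        tc = new
    return tc | {(e, e) for e in events}

events = {'a', 'b', 'c', 'd', 'e'}
writes = {'a', 'c'}
rfe = {('a','b'), ('c','d')}
fre = {('e','a')}
ppo = {('d','e')}
stxn = {('b','b'), ('b','c'), ('c','b'), ('c','c')}

# tprop1 = rfe ; stxn ; [W] -- the prop edge from a to c
id_w = {(w, w) for w in writes}
tprop1 = compose(compose(rfe, stxn), id_w)
assert tprop1 == {('a', 'c')}

# hb fragment rfe^? ; ppo ; rfe^?, which includes (c, e)
rfe_opt = rfe | {(e, e) for e in events}
hb = compose(compose(rfe_opt, ppo), rfe_opt)

# Observation demands irreflexive(fre ; prop ; hb*) -- violated here
obs = compose(compose(fre, tprop1), star(hb, events))
assert any(x == y for (x, y) in obs)
```

The violating cycle is exactly the one described in the text: $e \xrightarrow{\fre} a \xrightarrow{\prop} c \xrightarrow{\hb^*} e$.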
\begin{Remark}\label{remark:rwc}
The following executions are similar to \eqref{eq:cain_wrc}, and like \eqref{eq:cain_wrc}, they could not be observed empirically.
However, the Power manual is ambiguous about whether they should be forbidden.
\begin{equation*}
\begin{tikzpicture}[inner sep=1pt, baseline=0.5cm]
\node (a1) at (0,0.7) {$\evW{}{x}{1}$};
\node (b1) at (1.4,0.7) {$\evR{}{x}{1}$};
\node (b2) at (1.4,0) {$\evR{}{\smash{y}}{0}$};
\node (c1) at (2.8,0.7) {$\evW{}{\smash{y}}{1}$};
\node (c2) at (2.8,0) {$\evR{}{x}{0}$};
\draw[edgefr, overlay] (c2) to [auto, bend left=38, pos=0.8] node {$\fr$} (a1);
\draw[edgerf] (a1) to [auto,swap] node {$\rf$} (b1);
\draw[edgefr] (b2) to [auto,swap] node {$\fr$} (c1);
\draw[edgepo] (b1) to [auto] node (po) {$~\po$} (b2);
\draw[edgepo] (c1) to [auto] node {$~\sync$} (c2);
\node[stxn, fit=(b1)(b2)(po)] {};
\end{tikzpicture}
\hspace*{5mm}
\begin{tikzpicture}[inner sep=1pt, baseline=0.5cm]
\node (a1) at (0,0.7) {$\evW{}{x}{2}$};
\node (b1) at (1.4,0.7) {$\evR{}{x}{2}$};
\node (b2) at (1.4,0) {$\evR{}{\smash{y}}{0}$};
\node (c1) at (2.8,0.7) {$\evW{}{\smash{y}}{1}$};
\node (c2) at (2.8,0) {$\evW{}{x}{1}$};
\draw[edgeco, overlay] (c2) to [auto, bend left=38, pos=0.8] node {$\co$} (a1);
\draw[edgerf] (a1) to [auto,swap] node {$\rf$} (b1);
\draw[edgefr] (b2) to [auto,swap] node {$\fr$} (c1);
\draw[edgepo] (b1) to [auto] node (po) {$~\po$} (b2);
\draw[edgepo] (c1) to [auto] node {$~\sync$} (c2);
\node[stxn, fit=(b1)(b2)(po)] {};
\end{tikzpicture}
\end{equation*}
In particular, because the transactions are read-only, we cannot appeal to the integrated memory barrier.
We have reported this ambiguity to IBM architects, and while we await a clarified specification, our model errs on the side of caution by permitting these executions.
\end{Remark}
\paragraph{Propagation of Transactional Writes (Power only)}
Although Power is not multicopy-atomic in general, \emph{transactional} writes are multicopy-atomic; that is, the architecture will ``propagate the transactional stores fully before committing the transaction''~\cite[\S4.2]{cain+13}.
This behaviour is epitomised by another \ltest{WRC}-style execution, in which the middle thread sees the transactional write to $x$ before the right thread does.
\begin{equation}
\label{eq:mca_wrc}
\begin{tikzpicture}[inner sep=1pt, baseline=0.35cm]
\node (a1) at (0,0.7) {\evtlbl{$a$}$\evW{}{x}{1}$};
\node (b1) at (2,0.7) {\evtlbl{$b$}$\evR{}{x}{1}$};
\node (b2) at (2,0) {\evtlbl{$c$}$\evW{}{\smash{y}}{1}$};
\node (c1) at (4,0.7) {\evtlbl{$d$}$\evR{}{\smash{y}}{1}$};
\node (c2) at (4,0) {\evtlbl{$e$}$\evR{}{x}{0}$};
\draw[edgefr,overlay] (c2) to [auto, bend left=25, pos=0.8] node {$\fr$} (a1);
\draw[edgerf] (a1) to [auto,swap] node {$\rf$} (b1);
\draw[edgerf] (b2) to [auto,swap] node {$\rf$} (c1);
\draw[edgepo] (b1) to [auto] node {$~\ppo$} (b2);
\draw[edgepo] (c1) to [auto] node {$~\ppo$} (c2);
\node[stxn, fit=(a1)] {};
\end{tikzpicture}
\end{equation}
To rule out such executions, it suffices to extend the $\prop$ relation with reads-from edges that exit a transaction ($\tpropII$), and then to invoke \Observation{} again.
\paragraph{Read-modify-writes (Power only)}
In Power, when a store-exclusive is separated from its corresponding load-exclusive by ``a state change from Transactional to Non-transactional or Non-transactional to Transactional'', the RMW operation will always fail~\cite[\S1.8]{power30}.
Therefore, the \TxnCancelsRMW{} axiom ensures that no consistent execution has an $\rmw$ edge crossing a transaction boundary.
\paragraph{Transaction Ordering (Power only)}
The Power manual states that ``successful transactions are serialised in some order'', and that it is impossible for contradictions to this order to be observed~\cite[p.~824]{power30}.
We capture this constraint by extending the $\hb$ relation with a new $\thb$ relation between transactions, which constrains the order in which transactions can be serialised.
By including it in $\hb$ and requiring $\thb$ to be a partial order, we guarantee the existence of a suitable transaction serialisation order, without having to construct this order explicitly.
The definition of the $\thb$ relation is a little convoluted, but the intuition is quite straightforward: it contains all non-empty chains of intra-thread happens-before edges ($\ihb$) and inter-thread communication edges ($\come$), except those that contain an $\fre$ or $\coe$ edge followed by an $\rfe$ edge that does not terminate the chain.
The rationale for excluding $\fre\semi\rfe$ and $\coe\semi\rfe$ chains is that these do not provide ordering in a non-multicopy-atomic architecture.
That is, from
\begin{center}
\begin{tikzpicture}[inner sep=1pt]
\node (a) at (0,0) {\vphantom{A}$a$};
\node (b) at (1,0) {\vphantom{A}$b$};
\node (c) at (2,0) {\vphantom{A}$c$};
\draw[edgefr] (a) to [auto] node {$\fre$} (b);
\draw[edgerf] (b) to [auto] node {$\rfe$} (c);
\end{tikzpicture}
\qquad or \qquad
\begin{tikzpicture}[inner sep=1pt]
\node (a) at (0,0) {\vphantom{A}$a$};
\node (b) at (1,0) {\vphantom{A}$b$};
\node (c) at (2,0) {\vphantom{A}$c$};
\draw[edgeco] (a) to [auto] node {$\coe$} (b);
\draw[edgerf] (b) to [auto] node {$\rfe$} (c);
\end{tikzpicture}
\end{center}
we cannot deduce that $a$ happens before $c$, because this behaviour can also be attributed to the write $b$ being propagated to $c$'s thread before $a$'s thread.
\citeauthor{cain+13} epitomise the transaction-ordering constraint using the \ltest{IRIW}-style execution reproduced below~\cite[Fig.~5]{cain+13}.
\begin{equation}
\label{eq:cain_iriw}
\begin{tikzpicture}[inner sep=1pt, baseline=0.35cm]
\node (a1) at (0,0.7) {\evtlbl{$a$}$\evW{}{x}{1}$};
\node (b1) at (1.8,0.7) {\evtlbl{$b$}$\evR{}{x}{1}$};
\node (b2) at (1.8,0) {\evtlbl{$c$}$\evR{}{\smash{y}}{0}$};
\node (c1) at (3.6,0.7) {\evtlbl{$d$}$\evR{}{\smash{y}}{1}$};
\node (c2) at (3.6,0) {\evtlbl{$e$}$\evR{}{x}{0}$};
\node (d1) at (5.4,0.7) {\evtlbl{$f$}$\evW{}{\smash{y}}{1}$};
\draw[edgefr, overlay] (c2) to [auto, bend left=30, pos=0.8] node {$\fr$} (a1);
\draw[edgefr, overlay] (b2) to [auto,swap, bend right=30, pos=0.8] node {$\fr$} (d1);
\draw[edgerf] (a1) to [auto,swap] node {$\rf$} (b1);
\draw[edgerf] (d1) to [auto] node {$\rf$} (c1);
\draw[edgepo] (c1) to [auto] node {$~\ppo$} (c2);
\draw[edgepo] (b1) to [auto,swap] node {$\ppo~$} (b2);
\node[stxn, fit=(a1)] {};
\node[stxn, fit=(d1)] {};
\end{tikzpicture}
\end{equation}
The execution must be disallowed because different threads observe incompatible transaction orders: the second thread observes $a$ before $f$, but the third observes $f$ before $a$.
Our model disallows this execution on the basis of a $\thb$ cycle between the two transactions.
We must be careful not to overgeneralise here, because a behaviour similar to \eqref{eq:cain_iriw} but with only \emph{one} write transactional was observed during our empirical testing, and is duly allowed by our model.
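The cycle that condemns execution \eqref{eq:cain_iriw} can be exhibited mechanically. The following Python sketch is our own encoding of that execution's edges (event names as in the figure); it conservatively approximates the lifted happens-before by chasing chains of $\ppo$ and communication edges between the two single-event transactions, which suffices here because neither chain contains an $\fre$ or $\coe$ edge followed by an $\rfe$ edge.

```python
# Sketch: the IRIW-style execution has a cycle in the order in which
# the two transactions {a} and {f} must be serialised.

def closure(r):
    """Transitive closure of a relation given as a set of pairs."""
    r = set(r)
    while True:
        extra = {(x, w) for (x, y) in r for (z, w) in r if y == z} - r
        if not extra:
            return r
        r |= extra

rf  = {("a", "b"), ("f", "d")}   # rf edges from the figure
fr  = {("e", "a"), ("c", "f")}   # fr edges from the figure
ppo = {("b", "c"), ("d", "e")}   # preserved program order

hb = closure(rf | fr | ppo)

# One chain orders transaction {a} before {f}, the other orders
# {f} before {a}; lifting to transactions therefore yields a cycle,
# and the execution is forbidden.
assert ("a", "f") in hb and ("f", "a") in hb
```

This is an approximation for illustration only; the full $\thb$ definition additionally excludes the non-ordering $\fre\semi\rfe$ and $\coe\semi\rfe$ chains discussed above.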
\subsection{Empirical Testing}
\label{sec:x86_power:testing}
\begin{table}
\caption{Testing our transactional x86 and Power models}
\small
\renewcommand\tabcolsep{3.9pt}
\begin{tabular}{@{}lrrrrrrrr@{}}
\toprule
\textbf{Arch.} & \textbf{$\left|E\right|$} & \raisebox{-1.45ex}{\smash{\begin{tabular}{r@{}}\bf Synthesis \\ \bf time (s) \end{tabular}}} & \multicolumn{3}{c}{\textbf{Forbid}} & \multicolumn{3}{c}{\textbf{Allow}} \\[-4pt]
\cmidrule(l){4-6}\cmidrule(l){7-9}
& & & \phantom{00}T & \phantom{00}S & \phantom{0}$\neg$S & \phantom{00}T & \phantom{00}S & \phantom{0}$\neg$S \\
\midrule
x86 & 2 & 4 & 0 & 0 & 0 & 2 & 2 & 0\\
& 3 & 22 & 4 & 0 & 4 & 24 & 23 & 1\\
& 4 & 87 & 22 & 0 & 22 & 99 & 99 & 0\\
& 5 & 260 & 42 & 0 & 42 & 249 & 244 & 5\\
& 6 & 4402 & 133 & 0 & 133 & 895 & 832 & 63\\
& 7 & $>$7200 & 307 & 0 & 307 & 2457 & 1901 & 556\\
\midrule
\multicolumn{3}{r}{\textbf{Total (x86):}} & 508 & 0 & 508 & 3726 & 3101 & 625\\
\midrule
Power & 2 & 13 & 2 & 0 & 2 & 7 & 7 & 0\\
& 3 & 58 & 9 & 0 & 9 & 44 & 44 & 0\\
& 4 & 318 & 60 & 0 & 60 & 184 & 175 & 9\\
& 5 & 9552 & 353 & 0 & 353 & 1517 & 1330 & 187\\
& 6 & $>$7200 & 922 & 0 & 922 & 5043 & 4407 & 636\\
\midrule
\multicolumn{3}{r}{\textbf{Total (Power):}} & 1346 & 0 & 1346 & 6795 & 5963 & 832\\
\bottomrule
\end{tabular}
\label{table:x86_power}
\end{table}
Table~\ref{table:x86_power} gives the results obtained using our testing strategy from~\S\ref{sec:methodology:empirical}.
We use \Memalloy{} to synthesise litmus tests that are forbidden by our transactional models but allowed under the non-transactional baselines (the {\bf Forbid} set), up to a bounded number of events ($\left|E\right|$).
We then derive the {\bf Allow} sets by relaxing each test.
We report synthesis times on a 4-core Haswell i7-4771 machine with 32GB RAM, using a timeout of 2 hours.
For both sets we give the number of tests (T) found; this count is complete if synthesis finished within the timeout, and should be treated as a lower bound (non-exhaustive) otherwise.
We say a test is seen (S) if it is observed on any implementation, and not seen ($\neg$S) otherwise.
Each x86 test is run 1M times on four TSX implementations:
a Haswell (i7-4771),
a Broadwell-Mobile (i7-5650U),
a Skylake (i7-6700),
and a Kabylake (i7-7600U).
Each Power test is run 10M times on an 80-core POWER8 (TN71-BP012) machine.
When testing this machine, we use \Litmus{}'s \emph{affinity} parameter~\cite{alglave+11a}, which places threads incrementally across the logical processors to encourage \ltest{IRIW}-style behaviours.
We were able to generate the complete set of x86 {\bf Forbid} executions that have up to 6 events, and the complete set of Power {\bf Forbid} executions up to 5 events.
These bounds are less limiting than they may appear: since our events represent only memory accesses and fences (not, for instance, starting or committing transactions), many interesting behaviours can be captured with relatively few events.
For instance, these bounds are large enough to include all the executions discussed in this section.
Of our 508 x86 {\bf Forbid} tests, 29\% had one transaction, 44\% had two, and 27\% had three,
and of the 1346 Power {\bf Forbid} tests, 29\% had one transaction, 54\% had two, and 17\% had three.
No {\bf Forbid} test was empirically observable on either architecture, which gives us confidence that our models are not too strong.
Of the x86 {\bf Allow} tests, 83\% could be observed on at least one implementation, as could 88\% of the Power {\bf Allow} tests; this provides some evidence that our models are not excessively weak.
Many of the unobserved Power {\bf Allow} tests are based on the load-buffering (\ltest{LB}) shape, which has never actually been observed on a Power machine, even without transactions~\cite{alglave+14d}.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
height=30mm,
width=80mm,
ymin=-15,
ymax=100,
xmin=-1,
xmax=34,
extra x ticks = {34},
every tick/.style={line width=0.4pt},
xlabel={Time (hours)},
ylabel={\rotatebox{-90}{\begin{tabular}{@{}l@{}}Tests \\ found \\ (\%)\end{tabular}}},
axis line style={opacity=0},
tick pos=left,
tick align=outside,
clip=false,
]
\addplot[
mark=no,
draw=black,
]
table[x expr={\thisrow{secs}/3600}, y expr={\thisrow{solns}/313*100}] {
secs solns
0 0
60 0
60 0
120 0
120 27
240 27
240 74
480 74
480 159
960 159
960 212
1920 212
1920 272
3840 272
3840 295
7680 295
7680 307
15360 307
15360 310
30720 310
30720 313
122909 313
};
\draw ({rel axis cs:0,0} -|{axis cs:0,0})
-- ({rel axis cs:0,0} -|{axis cs:34,0});
\draw ({rel axis cs:0,0} |-{axis cs:0,0})
-- ({rel axis cs:0,0} |-{axis cs:0,100});
\end{axis}
\end{tikzpicture}
\caption{The distribution of synthesis times for the 7-event x86 {\bf Forbid} tests}
\label{fig:exec_histogram}
\end{figure}
Increasing the timeout to 48 hours is sufficient to generate the complete set of x86 {\bf Forbid} executions for 7 events.
It takes 34 hours for \Memalloy{} to find all 313 tests.
Figure~\ref{fig:exec_histogram} shows how the percentage of executions found is affected by various caps on the synthesis time.
We observe that many tests are found quickly: 98\% of the tests are found within 2 hours (i.e., 6\% of the total synthesis time), and all of the tests are found within 9 hours (the remaining synthesis time is used to confirm that there are no further tests).
During the development process, we exploited this observation to obtain preliminary test results more rapidly.
\section{Transactions in ARMv8}
\label{sec:arm}
\newcommand\dmbld{\mathit{dmbld}}
\newcommand\dmbst{\mathit{dmbst}}
\newcommand\dmb{\mathit{dmb}}
\newcommand\isb{\mathit{isb}}
\newcommand\ob{\mathit{ob}}
\newcommand\aob{\mathit{aob}}
\newcommand\bob{\mathit{bob}}
\newcommand\dob{\mathit{dob}}
\newcommand\notxn{\mathit{notxn}}
\newcommand\poextend{\mathit{poextend}}
The ARMv8 memory model sits roughly between x86 and Power.
Like x86, it is multicopy-atomic~\cite{pulte+17}, but like Power, it permits several relaxations to the program order.
Unwanted relaxations can be inhibited either using barriers ("DMB", "DMB\,LD", "DMB\,ST", "ISB") or using \emph{release}/\emph{acquire} instructions ("LDAR", "STLR") that act like one-way fences.
Formally, ARMv8 executions are obtained by adding six extra fields: $\Acq$ and $\Rel$, which are the sets of acquire and release events, and $\dmb$/$\dmbld$/$\dmbst$/$\isb$, which relate events in program order that are separated by barriers.
\begin{figure}
\centering
\begin{axiomatisation}
\axiom{\Coherence}{\acyclic (\LOC{\po} \cup \com)}
\\
\axiom{\Order}{\acyclic(\ob)}
\\
\where \dob &=& \modelcomment{order imposed by dependencies, elided}
\\
\aob &=& \modelcomment{order imposed by atomic RMWs, elided}
\\
\bob &=& \modelcomment{order imposed by barriers, elided}
\\
\dashboxed[63mm]{}\tfence &=& \po\cap((\neg\stxn\semi\stxn)\cup(\stxn\semi\neg\stxn))
\\
\ob &=& \come \cup \dob \cup \aob \cup \bob \dashboxed{{}\cup \tfence}
\\
\axiom{\AtomicRMW}{\isempty(\rmw \cap (\fre\semi \coe))} \\
\newaxiom{\StrongIsol}{\acyclic(\stronglift(\com,\stxn))} \\
\newaxiom{\TxnOrder}{\acyclic(\stronglift(\ob,\stxn))} \\
\newaxiom{\TxnCancelsRMW}{\isempty(\rmw \cap \tfence^*)} \\
\end{axiomatisation}
\caption{ARMv8 consistency axioms~\cite{arm17,deacon17}, with our TM additions \dashboxed{\text{highlighted}}, and some details elided for brevity.}
\label{fig:axioms_arm}
\end{figure}
An ARMv8 execution is consistent if it satisfies all of the axioms in Fig.~\ref{fig:axioms_arm} (ignoring the highlighted regions).
We have seen the \Coherence{} and \AtomicRMW{} axioms already.
The ordered-before relation ($\ob$) plays the same role as happens-before in x86: it imposes the event-ordering constraints upon which all threads must agree, and must be free from cycles (\Order).
These constraints arise from communication ($\come$), dependencies ($\dob$), atomic RMWs ($\aob$), and barriers ($\bob$).
\subsection{Adding Transactions}
The ARMv8 architecture does not support TM, so the extensions proposed below (highlighted in Fig.~\ref{fig:axioms_arm}) are unofficial.
Nonetheless, the extensions we give are based upon a proposal currently being considered within ARM Research and upon extensive conversations with ARM architects.
\begin{itemize}
\item \StrongIsol{} is a natural choice for hardware TM.
\item As in x86 and Power, we place implicit fences ($\tfence$) at the boundaries of successful transactions.
\item We bring the \TxnOrder{} axiom from x86 and Power to forbid $\ob$-cycles among transactions.
\item Like Power, ARMv8 has exclusive instructions, so it inherits the \TxnCancelsRMW{} axiom to ensure the failure of RMWs that straddle a transaction boundary.
\end{itemize}
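The $\tfence$ construction can be computed directly from its relational definition, $\po\cap((\neg\stxn\semi\stxn)\cup(\stxn\semi\neg\stxn))$. The following Python sketch is our own illustrative encoding on a three-event thread (event names and the complement taken with respect to the full event universe are assumptions of the sketch):

```python
# Sketch of tfence: po edges that cross a transaction boundary,
# computed from po & ((~stxn ; stxn) | (stxn ; ~stxn)).

def compose(r, s):
    """Relational composition r ; s."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

events = {"a", "b", "c"}                          # a,b in one transaction; c outside
stxn   = {(x, y) for x in "ab" for y in "ab"}     # same-transaction equivalence
po     = {("a", "b"), ("a", "c"), ("b", "c")}     # program order

univ     = {(x, y) for x in events for y in events}
not_stxn = univ - stxn                            # complement of stxn

tfence = po & (compose(not_stxn, stxn) | compose(stxn, not_stxn))

# Only the po edges leaving the transaction act as implicit fences;
# the edge inside the transaction does not.
assert tfence == {("a", "c"), ("b", "c")}
```

On this toy execution, \TxnCancelsRMW{} would then amount to checking that no $\rmw$ edge lies in (the closure of) the computed $\tfence$.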
\subsection{Empirical Testing}
\label{sec:arm:testing}
ARM hardware does not support TM so we cannot test our model as we did for x86 and Power.
However, we generated the {\bf Forbid} and {\bf Allow} suites anyway, and gave them to ARM architects.
They were able to use these to reveal a bug (specifically, a violation of the \TxnOrder{} axiom) in a register-transfer level (RTL) prototype implementation.
\newcommand\satxn{\mathit{stxn}_{\rm at}}
\newcommand\HbCom{\ax{HbCom}}
\newcommand\NoThinAir{\ax{NoThinAir}}
\newcommand\SeqCst{\ax{SeqCst}}
\newcommand\NoRace{\ax{NoRace}}
\section{Transactions in C++}
\label{sec:cpp}
We now turn our attention from hardware to software.
TM is supported in C++ via an ISO technical specification that has been under development by the C++ TM Study Group since 2012~\cite{c++tm15, shpeisman+09}.
In this section, we formalise how the proposed TM extensions interact with the existing C++ memory model, and detail a possible simplification to the specification.
C++ TM offers two main types of transactions: \emph{relaxed transactions} (written \texttt{synchronized\{...\}}) can contain arbitrary code, but only promise weak isolation, while \emph{atomic transactions} (written \texttt{atomic\{...\}}) promise strong isolation but cannot contain certain operations, such as atomic operations~\cite[\S8.4.4]{c++tm15}.
Some atomic transactions can be aborted by the programmer, but we do not support these in this paper.
\subsection{Background: the C++ Memory Model}
\label{sec:cpp:background}
\newcommand\sw{\mathit{sw}}
\newcommand\cnf{\mathit{cnf}}
\newcommand\psc{\mathit{psc}}
\newcommand\ecom{\mathit{ecom}}
\newcommand\ACQ{\mathit{Acq}}
\newcommand\Ato{\mathit{Ato}}
\newcommand\tsw{\mathit{tsw}}
Our presentation of the baseline C++ memory model follows \citet{lahav+17}.
We choose to build on their formalisation because it incorporates a fix that allows correct compilation to Power -- without this, we could not check the compilation of C++ transactions to Power transactions (\S\ref{sec:metatheory:compilation}).
C++ executions identify four additional subsets of events: $\Ato$ contains the events from atomic operations, while $\ACQ$, $\REL$, and $\SC$ contain events from atomic operations that use the acquire, release, and SC consistency modes~\cite[\S29.3]{c++11}.
\begin{figure}
\centering
\begin{axiomatisation}
\axiom{\HbCom}{\irreflexive(\hb \semi \com^*)} \\
\where \sw &=& \modelcomment{synchronises-with, elided} \\
\dashboxed[32mm]{}\ecom &=& \com \cup (\co\semi\rf) \\
\dashboxed[38mm]{}\tsw &=& \weaklift(\ecom,\stxn)\\
\hb &=& (\sw \cup \dashboxed{\tsw \cup {}} \po)^+ \\
\axiom{\AtomicRMW}{\isempty(\rmw \cap (\fre\semi \coe))} \\
\axiom{\NoThinAir}{\acyclic(\po \cup \rf)} \\
\axiom{\SeqCst}{\acyclic(\psc)} \\
\where \psc &=& \modelcomment{constraints on SC events, elided} \\
\end{axiomatisation}
\vspace*{-3mm}
\begin{axiomatisation}
\axiom{\NoRace}{\isempty(\cnf \setminus \Ato^2 \setminus (\hb \cup \hb^{-1}))} \\
\where \cnf &=& \stack{((W\!\times\!W) \cup (R\!\times\!W) \cup
(W\!\times\!R)) \cap \sloc \setminus \id} \\
\end{axiomatisation}
\caption{C++ consistency and race-freedom
axioms~\cite{lahav+17}, with our TM additions
\dashboxed{\text{highlighted}}, and some details elided.}
\label{fig:axioms_cpp}
\end{figure}
Unlike the architecture-level memory models, the C++ memory model defines \emph{two} predicates on executions (Fig.~\ref{fig:axioms_cpp}).
The first characterises the \emph{consistent} candidate executions.
If any consistent execution violates a second \emph{race-freedom} predicate, then the program is completely undefined.
Otherwise, the allowed executions are the consistent executions.
A C++ execution is consistent if it satisfies all of the consistency axioms given at the top of Fig.~\ref{fig:axioms_cpp} (ignoring highlighted regions for now).
The first, \HbCom{}, governs the happens-before relation, which is constructed from the program order and the \emph{synchronises-with} relation ($\sw$).
Roughly speaking, an $\sw$ edge is induced when an acquire read observes a release write; but it also handles fences and the `release sequence'~\cite{lahav+17, batty+16}.
The \AtomicRMW{} axiom is the standard condition ensuring that no write intervenes between the read and the write of an RMW operation.
The \NoThinAir{} axiom is \citeauthor{lahav+17}'s solution to C++'s `thin air' problem~\cite{batty+15}.
Finally, \SeqCst{} forbids certain cycles among $\SC$ events; we omit its definition as it does not interact with our TM extensions.
A consistent C++ execution is race-free if it satisfies the \NoRace{} axiom at the bottom of Fig.~\ref{fig:axioms_cpp}, which states that whenever two conflicting ($\cnf$) events in different threads are not both atomic, they must be ordered by happens-before.
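The \NoRace{} check is straightforward to state operationally. The following Python sketch is our own illustrative encoding (event names are assumptions; the $\cnf$ relation is taken as given, and we elide the same-location restriction by working with a single location):

```python
# Sketch of NoRace: two conflicting events that are not both atomic
# must be ordered by happens-before, in one direction or the other.

def race_free(cnf, atomic, hb):
    inv_hb = {(y, x) for (x, y) in hb}
    races = {(x, y) for (x, y) in cnf
             if not (x in atomic and y in atomic)
             and (x, y) not in hb and (x, y) not in inv_hb}
    return not races

# Two plain (non-atomic) writes to the same location with no
# happens-before between them: a data race.
cnf = {("w1", "w2"), ("w2", "w1")}
assert not race_free(cnf, atomic=set(), hb=set())

# The same conflict ordered by happens-before is race-free.
assert race_free(cnf, atomic=set(), hb={("w1", "w2")})
```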
\subsection{Adding Transactions}
\label{sec:cpp:adding_transactions}
The specification for C++ TM makes two amendments to the C++ memory model: one for data races, and one for transactional synchronisation.
\paragraph{Transactions and Data Races}
The definition of a race is unchanged in the presence of TM. In particular, the program
\begin{center}
\begin{tabular}{l||l}
\texttt{atomic\{ x=1; \}} & \texttt{atomic\_store(\&x,2);}
\end{tabular}
\end{center}
is racy -- which is perhaps contrary to the intuition that an atomic transaction with a single non-atomic store should be interchangeable with a non-transactional atomic store.
\begin{Remark}
\label{rem:cpp_abort}
The specification also clarifies that although events in an unsuccessful transaction are unobservable, they can still participate in races.
This implies that the program
\begin{center}
\begin{tabular}{l||l}
\texttt{atomic\{ x=1; abort(); \}} & \texttt{atomic\_store(\&x,2);}
\end{tabular}
\end{center}
must be considered racy.
In our formalisation, transactions either succeed (giving rise to an $\stxn$-class) or fail, giving rise to no events (cf.~\S\ref{sec:transactions:executions}).
This treatment correctly handles races involving unsuccessful transactions, because the race will be detected in the case where the transaction succeeds, but it cannot handle transactions that \emph{never} succeed, such as the one above.
Therefore, we leave the handling of "abort()" for future work.
\end{Remark}
\paragraph{Transactional Synchronisation}
The second amendment by the C++ TM extension defines when two transactions synchronise~\cite[\S1.10]{c++tm15}.
An execution is deemed consistent only if there is a total order on transactions such that:
\begin{enumerate}
\item this order does not contradict happens-before, and
\item if transaction $T_1$ is ordered before conflicting transaction $T_2$, then the end of $T_1$ synchronises with the start of $T_2$.
\end{enumerate}
\newcommand\txnord{\mathit{to}}
We could incorporate these requirements into the formal model by extending executions with a transaction-ordering relation, $\txnord$, that serialises all the $\stxn$-classes in an order that does not contradict happens-before (point 1), and updating the synchronises-with relation to include events in conflicting transactions that are ordered by $\txnord$ (point 2).
However, this formulation is unsatisfying.
It is awkward that $\txnord$ is used to \emph{define} happens-before but is also forbidden to \emph{contradict} happens-before.
Moreover, having the consistency predicate involve quantification over all possible transaction serialisations makes simulation expensive~\cite{batty+16}.
Fortunately, we can formulate the C++ TM memory model without relying on a total order over transactions.
The idea is that if two transactions are in conflict, then their order can be deduced from the existing $\rf$, $\co$, and $\fr$ edges, and if they are not, then there is no need to order them.
In more detail, and with reference to the highlighted parts of Fig.~\ref{fig:axioms_cpp}: observe that whenever two events in an execution conflict ($\cnf$), they must be connected one way or the other by `extended communication' ($\ecom$), which is the communication relation extended with $\co\semi\rf$ chains.
That is, $\cnf = \ecom\cup\ecom^{-1}$~[\isabelleqed].
We then say that transactions synchronise with ($\tsw$) each other in $\ecom$ order,
and we extend happens-before to include $\tsw$.
By simply extending the definition of $\hb$ like this, we avoid the need for the $\txnord$ relation altogether, and we avoid adding any axioms to the consistency predicate.
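The identity $\cnf = \ecom\cup\ecom^{-1}$ can be checked concretely. The following Python sketch is our own illustrative encoding: a four-event, single-location execution (two writes and two reads of $x$; event names are assumptions) on which plain $\com$ misses a conflict that the $\co\semi\rf$ extension recovers.

```python
# Sketch: conflicting events are always related by extended
# communication, i.e. cnf = ecom | ecom^-1.

def compose(r, s):
    """Relational composition r ; s."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

co = {("w1", "w2")}                            # w1, w2 both write x
rf = {("w1", "r1"), ("w2", "r2")}              # r1 reads w1; r2 reads w2
fr = compose({(y, x) for (x, y) in rf}, co)    # fr = rf^-1 ; co

com  = co | rf | fr
ecom = com | compose(co, rf)                   # extend with co;rf chains

writes = {"w1", "w2"}
events = {"w1", "w2", "r1", "r2"}
cnf = {(x, y) for x in events for y in events
       if x != y and (x in writes or y in writes)}

# Plain com misses the (w1, r2) conflict; ecom recovers it ...
assert ("w1", "r2") not in com and ("w1", "r2") in ecom
# ... and every conflict is covered by ecom in one direction or the other.
assert cnf == ecom | {(y, x) for (x, y) in ecom}
```

Defining $\tsw$ as the weak lift of $\ecom$ then orders conflicting transactions without any auxiliary total order.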
To make our proposal concrete, we provide
\ifdefined\EXTENDED
in \S\ref{sec:cpp_amendment}
\else
in our companion material
\fi
some text that the specification could incorporate (currently under review by the C++ TM Study Group).
\paragraph{Strong Isolation for Atomic Transactions}
The semantics described thus far provides the desired weak-isolating behaviour for \emph{relaxed} transactions; that is, the \WeakIsol{} axiom follows from the other C++ consistency axioms~[\isabelleqed].
However, \emph{atomic} transactions must be strongly isolated.
In fact, atomic transactions enjoy strong isolation simply by being forbidden to contain atomic operations.
The idea is that for a non-transactional event to observe or affect a transaction's intermediate state, it must conflict with an event in that transaction.
If this event cannot be atomic, there must be a race.
Thus, for race-free programs, atomic transactions are guaranteed to be strongly isolated.
To formalise this property, we extend C++ executions with an $\satxn$ relation that identifies a subset of transactions as atomic. It satisfies $\satxn\subseteq\stxn$ and $(\satxn\semi\stxn)\subseteq\satxn$. We then prove the following theorem.
\begin{theorem}[Strong isolation for atomic transactions]
If \ax{NoRace} holds, and atomic transactions contain no atomic operations (i.e., $\domain(\satxn)\cap \Ato = \emptyset$), then \[\acyclic(\stronglift(\com,\satxn)).\]
\end{theorem}
\begin{proof}[Proof sketch]
\renewcommand\qed{\hfill\isabelleqed}
A cycle in $\stronglift(\com,\satxn)$ is either a $\com$-cycle or an $r$-cycle, where $r = \satxn\semi (\com \setminus \satxn)^+ \semi \satxn$.
From \ax{NoRace}, we have $\com \setminus \Ato^2 \subseteq \hb$. Using this and the expansion $\com^+ = \ecom \cup (\fr\semi\rf)$ we can obtain $r \subseteq \hb$.
To finish the proof, note that execution well-formedness forbids $\com$-cycles, and that $r$-cycles are forbidden too because they are also $\hb$-cycles, which violate \HbCom{}.
\end{proof}
\paragraph{A Transactional SC-DRF Guarantee}
\label{sec:cpp:tdrf}
A central property of the C++ memory model is its SC-DRF guarantee~\cite{adve+90, boehm+08}: all race-free C++ programs that avoid non-SC atomic operations enjoy SC semantics.
This guarantee can be lifted to a transactional setting~\cite{dalessandro+09, shpeisman+09}: all race-free C++ programs that avoid relaxed transactions and non-SC atomic operations enjoy TSC semantics (cf. \S\ref{sec:transactions:tsc}).
This is formalised in the following theorem, which we prove
\ifdefined\EXTENDED
in \S\ref{sec:tdrf_proof}.
\else
in our companion material.
\fi
\newcommand\tdrfstatement{
If a C++-consistent execution has
\begin{itemize}
\item no relaxed transactions (i.e. $\stxn = \satxn$),
\item no non-SC atomics (i.e. $\Ato = \SC$), and
\item no data races (i.e. \NoRace{} holds),
\end{itemize}
then it is consistent under TSC.
}
\begin{theorem}[Transactional SC-DRF guarantee]
\label{thm:tdrf}
\tdrfstatement
\end{theorem}
\newcommand\scs{\mathit{scr}}
\newcommand\scst{\mathit{scr}^{\rm t}}
\begin{table}
\caption{Summary of our metatheoretical results. Timings are for a machine with four 16-core Opteron processors and 128GB RAM, using the Plingeling solver~\cite{biere10}. A \greencross{} means the property holds up to the given number of events, a \redtick{} means a counterexample was found, and \timeout{} indicates a timeout.}
\label{tab:metatheory_results}
\centering
\renewcommand\tabcolsep{0.9mm}
\begin{tabular}{lll@{\hspace*{-4mm}}rrc}
\toprule
\bf Property & \bf \S & \bf Target & \bf Events & \bf ~~Time & \bf C'ex? \\
\midrule
Monotonicity & \ref{sec:metatheory:monotonicity}
& x86 & 6 & 20m
& \greencross \\
&& Power & 2 & $<$1s & \redtick \\
&& ARMv8 & 2 & $<$1s & \redtick \\
&& C++ & 6 & 91h
& \greencross \\
Compilation & \ref{sec:metatheory:compilation}
& C++/x86 & 6 & 14h
& \greencross \\
&& C++/Power & 6 & 16h
& \greencross \\
&& C++/ARMv8 & 6 & 20h
& \greencross \\
Lock elision & \ref{sec:metatheory:hle}
& x86 & 8 & $>$48h & \timeout \\
&& Power & 9 & $>$48h & \timeout \\
&& ARMv8 & 7 & 63s & \redtick \\
&& ARMv8 (fixed) & 8 & $>$48h & \timeout \\
\bottomrule
\end{tabular}
\end{table}
\section{Metatheory}
\label{sec:metatheory}
We now study several metatheoretical properties of our proposed models.
For instance, one straightforward but important property, which follows immediately from the model definitions, is that our TM models give the same semantics to transaction-free programs as the original models~[\isabelleqed].
In this section, we use \Memalloy{} to check some more interesting properties of our models, as summarised in Tab.~\ref{tab:metatheory_results}.
\subsection{Monotonicity}
\label{sec:metatheory:monotonicity}
\newcommand\piedge{\begin{tikzpicture}[baseline=0]
\draw[edgepi] (0,0.09) to (0.4,0.09);
\end{tikzpicture}}
We check that adding $\stxn$-edges can never make an inconsistent execution consistent. This implies that all of the following program transformations are sound: introducing a transaction (e.g., \begin{tikzpicture}[baseline=(a1.base)]
\node[inner sep=0] (a1) at (0,0) {$\bullet$};
\end{tikzpicture}\,\piedge\,\begin{tikzpicture}[baseline=(a2.base)]
\node(a2) at (0.9,0) {$\bullet$};
\node[stxn, fit=(a2), inner sep=0] {};
\end{tikzpicture}), enlarging a transaction (e.g., \begin{tikzpicture}[baseline=(a1.base)]
\node(a1) at (0,0) {$\bullet$};
\node[inner sep=0](b1) at (0.35,0) {$\bullet$};
\node[stxn, fit=(a1), inner sep=0] {};
\end{tikzpicture}\,\piedge\,\begin{tikzpicture}[baseline=(a2.base)]
\node(a2) at (1.2,0) {$\bullet$};
\node(b2) at (1.55,0) {$\bullet$};
\node[stxn, fit=(a2)(b2), inner sep=0] {};
\end{tikzpicture}), and coalescing two consecutive transactions (e.g., \begin{tikzpicture}[baseline=(a1.base)]
\node(a1) at (0,0) {$\bullet$};
\node(b1) at (0.5,0) {$\bullet$};
\node[stxn, fit=(a1), inner sep=0] {};
\node[stxn, fit=(b1), inner sep=0] {};
\end{tikzpicture}\,\piedge\,\begin{tikzpicture}[baseline=(a2.base)]
\node(a2) at (1.35,0) {$\bullet$};
\node(b2) at (1.7,0) {$\bullet$};
\node[stxn, fit=(a2)(b2), inner sep=0] {};
\end{tikzpicture}).
\Memalloy{} confirmed that the transactional x86 and C++ models enjoy this monotonicity property for all executions with up to 6 events.
For Power and ARMv8, it found the following counterexample:
\begin{center}
\begin{tikzpicture}[inner sep=1pt, yscale=-1]
\def0{0}
\def0{0}
\def0.9{0.8}
\def1.5{1.8}
\node (a1) at (0,0+0*0.9) {$\evR{}{"x"}{0}$};
\node (a2) at (0,0+1*0.9) {$\evW{}{"x"}{1}$};
\node[stxn, fit=(a1)] {};
\node[stxn, fit=(a2)] {};
\def5{3}
\def1{0}
\def0.9{0.8}
\node (b1) at (5,1+0*0.9) {$\evR{}{"x"}{0}$};
\node (b2) at (5,1+1*0.9) {$\evW{}{"x"}{1}$};
\node[stxn, fit=(b1)(b2)] {};
\draw[edgepi] (a1) to (b1);
\draw[edgepi] (a2) to (b2);
\draw[edgepo] (a1) to [auto] node {$\rmw$} (a2);
\draw[edgepo] (b1) to [auto] node {$\rmw$} (b2);
\end{tikzpicture}
\end{center}
The left execution is inconsistent in both models because of the \TxnCancelsRMW{} axiom: a store-exclusive separated from its corresponding load-exclusive by a transaction boundary always fails.
The right execution, however, is consistent.
This counterexample implies that techniques that involve transaction coalescing~\cite{chung+08, stipic+13} must be applied with care in the presence of RMWs.
\subsection{Mapping C++ Transactions to Hardware}
\label{sec:metatheory:compilation}
We check that it is sound to compile C++ transactions to x86, Power, and ARMv8 transactions.
A realistic compiler would be more complex -- perhaps including fallback options for when hardware transactions fail -- but our direct mapping is nonetheless instructive for comparing the guarantees given to transactions in software and in hardware.
Specifically, we use \Memalloy{} to search for a pair of executions, $X$ and $Y$, such that $X$ is an inconsistent C++ execution, $Y$ is a consistent x86/Power/ARMv8 execution, and $X$ is related to $Y$ via the relevant compilation mapping, encoded in the relation $\pi$.
Such a pair would demonstrate that the compilation mapping is invalid.
\citet{wickerson+17} have encoded non-transactional compilation mappings; we only need to extend them to handle transactions, which we do by requiring $\pi$ to preserve all $\stxn$-edges:
\begin{eqnarray*}
\stxn_Y &=& \pi^{-1}\semi \stxn_X \semi \pi.
\end{eqnarray*}
\Memalloy{} confirmed that compilation to x86, Power, and ARMv8 is sound for all C++ executions with up to 6 events.
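As a small illustration of the $\stxn_Y = \pi^{-1}\semi \stxn_X \semi \pi$ constraint, the following Python sketch (our own encoding, with hypothetical event names, taking $\pi$ as a source-to-target map) shows a software transaction on two C++ events being forced onto a single hardware transaction on their images:

```python
# Sketch: the compilation mapping pi must preserve stxn edges,
# i.e. stxn_Y = pi^-1 ; stxn_X ; pi.

def compose(r, s):
    """Relational composition r ; s."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

pi     = {("c1", "h1"), ("c2", "h2")}    # source event -> target event
pi_inv = {(y, x) for (x, y) in pi}
stxn_X = {(x, y) for x in ("c1", "c2") for y in ("c1", "c2")}

stxn_Y = compose(compose(pi_inv, stxn_X), pi)

# The two target events must form one hardware transaction.
assert stxn_Y == {(x, y) for x in ("h1", "h2") for y in ("h1", "h2")}
```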
\subsection{Checking Lock Elision}
\label{sec:metatheory:hle}
We now check the soundness of lock elision in x86, Power, and ARMv8 using the technique proposed in \S\ref{sec:methodology:libabs}.
First, we extend executions with four new event types:
\begin{itemize}
\item $\evL$, the "lock()" calls that will be implemented by acquiring the lock in the ordinary fashion,
\item $\evU$, the corresponding "unlock()" calls,
\item $\evLt$, the "lock()" calls that will be transactionalised, and
\item $\evUt$, the corresponding "unlock()" calls.
\end{itemize}
When generating candidate executions from programs, we assume that each "lock()"/"unlock()" pair gives rise to an $\evL$-$\evU$ pair or an $\evLt$-$\evUt$ pair.
(Distinguishing these two modes at the execution level eases the definition of the mapping relation.)
We obtain from these lock/unlock events a derived $\scs$ relation, an equivalence relation that relates exactly the events belonging to the same CR.
Similarly, $\scst$ is a subrelation of $\scs$ that comprises just those CRs that are to be transactionalised.
Second, we extend execution well-formedness so that every $\evL$ event must be followed by a $\evU$ event without an intervening $\evLt$ or $\evUt$, and so on.
Third, the consistency predicates from Figs.~\ref{fig:axioms_x86}, \ref{fig:axioms_power}, and~\ref{fig:axioms_arm} are extended with the following axiom that forces the serialisability of CRs.
\begin{align}
\tag{\ax{CROrder}}
\acyclic(\weaklift(\po\cup\com, \scs))
\end{align}
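The intuition behind \ax{CROrder} can be sketched as follows. In this illustrative Python encoding (our own; the lifting is simplified to relate distinct CRs whenever any of their events are related, and the two-CR execution is an assumption), mutually observing critical regions admit no serialisation order:

```python
# Sketch of CROrder: lifting po|com edges to whole critical regions
# must not create a cycle, i.e. CRs must be serialisable.

def lifted(r, cr_of):
    """Edges of r lifted to (distinct) critical regions."""
    return {(cr_of[x], cr_of[y]) for (x, y) in r
            if x in cr_of and y in cr_of and cr_of[x] != cr_of[y]}

def acyclic(r):
    r = set(r)
    while True:
        extra = {(x, z) for (x, y) in r for (y2, z) in r if y == y2} - r
        if not extra:
            break
        r |= extra
    return all(x != y for (x, y) in r)

cr_of = {"a": "CR1", "b": "CR1", "c": "CR2", "d": "CR2"}
com   = {("b", "c"),   # a communication edge from CR1 into CR2 ...
         ("d", "a")}   # ... and one from CR2 back into CR1

# Each CR observes part of the other, so no serialisation order
# exists, and the lifted relation is cyclic.
assert not acyclic(lifted(com, cr_of))
```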
\begin{table}
\caption{Key constraints on $\pi$ for defining lock elision}
\label{tab:hle_mapping}
\centering
\renewcommand{\tabcolsep}{0.3mm}
\newcommand\xstart{6mm}
\newcommand\xend{5mm}
\newcommand\pstart{6mm}
\newcommand\pend{14mm}
\newcommand\astart{6mm}
\newcommand\aend{11mm}
\begin{tabular*}{\linewidth}{ccccc}
\toprule
\multirow{2}{*}{
\begin{tabular}[b]{@{}c@{}}\textbf{Source}\\\textbf{event}, $e$\end{tabular}}
& \multicolumn{4}{c}{\textbf{Target event(s)}, $\pi(e)$} \\
\cmidrule(l){2-5}
& x86 & Power & ARMv8 & ARMv8 (fixed)\\
\midrule
\arrayrulecolor{black!50}
$\evL$ &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, baseline=(b.base), trim left=-\xstart, trim right=\xend]
\node (c) at (0,-0.6) {$\evR{}{}{}$};
\node (a) at (0,0) {$\evR{}{}{}$};
\node (b) at (0,0.7) {$\evW{}{}{}$};
\draw[edgepo] (c) to[auto] node{$\ctrl$} (a);
\draw[edgepo] (a) to[auto] node{$\rmw$} (b);
\end{tikzpicture} &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, baseline=(b.base), trim left=-\pstart, trim right=\pend]
\node (a) at (0,0) {$\evR{}{}{}$};
\node (b) at (0,0.7) {$\evW{}{}{}$};
\node (c) at (0,1.4) {"isync"};
\draw[edgepo] (a) to[auto] node{$\rmw,\ctrl$} (b);
\draw[edgepo] (b) to[auto] node{$\ctrl$} (c);
\end{tikzpicture} &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, baseline=(b.base), trim left=-\astart, trim right=\aend]
\node (a) at (0,0) {\vphantom{$W$}\smash{$\evR{\Acq}{}{}$}};
\node (b) at (0,0.7) {$\evW{}{}{}$};
\draw[edgepo] (a) to[auto] node{$\rmw,\ctrl$} (b);
\end{tikzpicture} &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, baseline=(b.base), trim left=-\astart, trim right=\aend]
\node (a) at (0,0) {\vphantom{$W$}\smash{$\evR{\Acq}{}{}$}};
\node (b) at (0,0.7) {$\evW{}{}{}$};
\node (c) at (0,1.4) {"dmb"};
\draw[edgepo] (a) to[auto] node{$\rmw,\ctrl$} (b);
\draw[edgepo] (b) to[auto] node{$\po$} (c);
\end{tikzpicture}
\\ \midrule
$\evU$ &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, trim left=-\xstart, trim right=\xend]
\node (a) at (0,0) {$\evW{}{}{}$};
\end{tikzpicture} &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, baseline=(b.base), trim left=-\pstart, trim right=\pend]
\node (a) at (0,0) {"sync"};
\node (b) at (0,0.7) {$\evW{}{}{}$};
\draw[edgepo] (a) to[auto] node{$\po$} (b);
\end{tikzpicture} &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, trim left=-\astart, trim right=\aend]
\node (a) at (0,0) {\vphantom{$W$}\smash{$\evW{\Rel}{}{}$}};
\end{tikzpicture} &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, trim left=-\astart, trim right=\aend]
\node (a) at (0,0) {\vphantom{$W$}\smash{$\evW{\Rel}{}{}$}};
\end{tikzpicture}
\\ \midrule
$\evLt$ &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, baseline=(b.base), trim left=-\xstart, trim right=\xend]
\node (b) at (0,0) {$\evR{}{}{}$};
\end{tikzpicture} &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, baseline=(b.base), trim left=-\pstart, trim right=\pend]
\node (b) at (0,0) {$\evR{}{}{}$};
\end{tikzpicture} &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, baseline=(b.base), trim left=-\astart, trim right=\aend]
\node (b) at (0,0) {$\evR{}{}{}$};
\end{tikzpicture} &
\begin{tikzpicture}[inner sep=1pt,yscale=-1, baseline=(b.base), trim left=-\astart, trim right=\aend]
\node (b) at (0,0) {$\evR{}{}{}$};
\end{tikzpicture}
\\ \midrule
$\evUt$ & - & - & - & - \\
\arrayrulecolor{black}
\bottomrule
\end{tabular*}
\begin{axiomatisationWithoutBox}
\header{Moreover:} \\
\axiom{LockVar}{\sloc_Y ~~=~~ I^2 \cup ((\neg I)^2 \cap (\pi^{-1}\semi \sloc_X\semi\pi))} \\
\where I = \pi(\evL\cup\evU\cup\evLt\cup\evUt)
\\
\axiom{TxnIntro}{\scst\setminus (\neg\evUt)^2 ~~=~~ \pi\semi\stxn_Y\semi\pi^{-1}}
\\
\axiom{TxnReadsLockFree}{\isempty([\evL]\semi \pi\semi\rf\semi\pi^{-1}\semi[\evLt])}
\\
\bottomrule
\end{axiomatisationWithoutBox}
\end{table}
Finally, we define a mapping $\pi$ from the events of an `abstract' execution $X$ to those of a `concrete' execution $Y$, that captures the implementation of lock elision.
Table~\ref{tab:hle_mapping} sketches the main constraints we impose on $\pi$ so that it captures lock elision for x86, Power, and ARMv8.
It preserves all the execution structure except for lock/unlock events.
The \ax{LockVar} constraint imposes that all the reads/writes in $Y$ that are introduced by the mapping (call these $I$) access the same location (i.e., the lock variable) and that this location is not accessed elsewhere in $Y$.
The \ax{TxnIntro} constraint imposes that events in the same transactionalised CR in $X$ become events in the same transaction in $Y$.
The $\evL$ and $\evU$ events are mapped to sequences of events that represent the recommended spinlock implementation for each architecture.
Each $\evL$ event maps to a successful RMW on the lock variable, which in ARMv8 is an acquire-RMW~\cite[\S K.9.3.1]{arm17}, in Power is followed by a control dependency\footnote{In Power, $\ctrl$ edges can begin at a store-exclusive~\cite{sarkar+12}.} and an "isync"~\cite[\S B.2.1.1]{power30}, and in x86 is preceded by an additional read (the `test-and-test-and-set' idiom)~\cite[\S8.10.6.1]{intel17}.
Each $\evU$ event maps to a write on the lock variable, which in ARMv8 is a release-write~\cite[\S K.9.3.2]{arm17}, and in Power is preceded by a "sync"~\cite[\S B.2.2.1]{power30}.
Each $\evLt$ event maps to a read of the lock variable.
This read does not observe a write from an $\evL$ event (\ax{TxnReadsLockFree}), to ensure that it sees the lock as free.
Finally, $\evUt$ events vanish (because we do not have explicit events for beginning/ending transactions).
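As a sanity check on the relational constraints, \ax{LockVar} can be prototyped directly, outside of Alloy. The following Python sketch (a toy illustration of ours, not the \Memalloy{} encoding; the event names and the four-event execution are invented) builds the right-hand side of \ax{LockVar} for a hand-written mapping $\pi$ and checks that the lock variable's accesses form their own location class:

```python
from itertools import product

def compose(r, s):
    """Relational composition r ; s."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def inverse(r):
    return {(b, a) for (a, b) in r}

def square(s):
    """S^2: the full relation on a set of events."""
    return set(product(s, s))

# Abstract execution X: a critical section  L; R x; W x; U
lock_events_X = {"L", "U"}             # the lock/unlock events of X
sloc_X = square({"Rx", "Wx"})          # Rx and Wx access the same location

# Concrete execution Y: L maps to an RMW (read mR, write mW) on the lock
# variable, U maps to a write mU on it; Rx and Wx are preserved.
events_Y = {"mR", "mW", "Rx'", "Wx'", "mU"}
pi = {("L", "mR"), ("L", "mW"), ("Rx", "Rx'"), ("Wx", "Wx'"), ("U", "mU")}

# I = pi(L u U u Lt u Ut): the concrete events introduced by the mapping
I = {c for (a, c) in pi if a in lock_events_X}
not_I = events_Y - I

# LockVar: sloc_Y = I^2  u  ((not I)^2  n  (pi^-1 ; sloc_X ; pi))
sloc_Y = square(I) | (square(not_I)
                      & compose(compose(inverse(pi), sloc_X), pi))

# The lock accesses {mR, mW, mU} form one location class, and the
# preserved events inherit their same-location relation from X.
assert sloc_Y == square({"mR", "mW", "mU"}) | square({"Rx'", "Wx'"})
```

In \Memalloy{} these constraints are solved for rather than checked; the sketch only illustrates how the relational algebra composes.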
\begin{figure}
\centering
\vspace*{2mm}
\begin{tikzpicture}[inner sep=1pt, yscale=-1]
\def0{0}
\def0{0}
\def0.9{0.8}
\def1.5{1.8}
\node[colorlock] (a1) at (0,0+1*0.9) {$\evL$};
\node (a2) at (0,0+2*0.9) {$\evR{}{"x"}{0}$};
\node (a3) at (0,0+3*0.9) {$\evW{}{"x"}{2}$};
\node[colorlock] (a4) at (0,0+4*0.9) {$\evU$};
\node[colorlock] (b1) at (0+1.5,0+1*0.9) {$\evLt$};
\node (b2) at (0+1.5,0+2*0.9) {$\evW{}{"x"}{1}$};
\node[colorlock] (b3) at (0+1.5,0+3*0.9) {$\evUt$};
\def5{5}
\def1{1}
\def0.9{0.8}
\node[colorlock] (c1) at (5,1+0*0.9) {$\evR{\smash{\Acq}}{"m"}{0}$};
\node[colorlock] (c2) at (5,1+1*0.9) {$\evW{}{"m"}{1}$};
\node (c3) at (5,1+2*0.9) {$\evR{}{"x"}{0}$};
\node (c4) at (5,1+3*0.9) {$\evW{}{"x"}{2}$};
\node[colorlock] (c5) at (5,1+4*0.9) {$\evW{\Rel}{"m"}{0}$};
\node[colorlock] (d1) at (5+1.5,1+1*0.9) {$\evR{}{"m"}{0}$};
\node (d2) at (5+1.5,1+2*0.9) {$\evW{}{"x"}{1}$};
\node[stxn, fit=(d1)(d2)] {};
\draw[edgepi,overlay] (a1) to[bend right=15] (c1);
\draw[edgepi] (a1) to (c2);
\draw[edgepi] (a2) to (c3);
\draw[edgepi] (a3) to (c4);
\draw[edgepi] (a4) to (c5);
\draw[edgepi] (b1) to (d1);
\draw[edgepi] (b2) to (d2);
\draw[edgefr] (a2) to [auto] node {$\fr$} (b2);
\draw[edgeco] (b2) to [auto, bend right=20] node {$\co$} (a3);
\draw[edgepo] (a1) to [auto] node {$\po$} (a2);
\draw[edgepo] (a2) to [auto] node {$\po,\data$} (a3);
\draw[edgepo] (a3) to [auto] node {$\po$} (a4);
\draw[edgepo] (b1) to [auto] node {$\po$} (b2);
\draw[edgepo] (b2) to [auto] node {$\po$} (b3);
\draw[edgepo] (c1) to [pos=0.4, auto] node {$\po, \rmw, \ctrl$} (c2);
\draw[edgepo] (c2) to [auto] node {$\po$} (c3);
\draw[edgepo] (c3) to [auto] node {$\po,\data$} (c4);
\draw[edgepo] (c4) to [auto] node {$\po$} (c5);
\draw[edgepo] (d1) to [auto] node {$\po$} (d2);
\draw[edgefr] (c3) to [auto] node {$\fr$} (d2);
\draw[edgefr] (d1) to [auto] node {$\fr$} (c2);
\draw[edgeco] (d2) to [auto, bend right=20] node {$\co$} (c4);
\draw[edgeco, overlay] ([xshift=-4mm]c2.south) to [auto, swap, bend left] node {$\co$} ([xshift=-4mm]c5.north);
\draw[edgefr] (c1) to [auto, swap, bend left] node {$\fr$} (c2);
\end{tikzpicture}
\caption{A pair of executions that demonstrates lock elision being unsound in ARMv8}
\label{fig:hle_bug}
\end{figure}
Figure~\ref{fig:hle_bug} shows a pair of ARMv8 executions, $X$ (left) and $Y$ (right), and a $\pi$ relation (dotted arrows), that satisfy all of the constraints above.
From this example, which was automatically generated using \Memalloy{} in 63 seconds, we manually constructed the pair of litmus tests shown in Example~\ref{ex:hle}.
It thus demonstrates that lock elision is unsound in ARMv8.
This example is actually one of several found by \Memalloy{}; we provide another example
\ifdefined\EXTENDED
in \S\ref{sec:second_lock_elision_example}.
\else
in our companion material.
\fi
We also used \Memalloy{} to check lock elision in x86 and Power, and again in ARMv8 after applying the fix proposed in \S\ref{sec:intro:hlebug} (appending a "DMB" to the "lock()" implementation).
Given that each architecture implements $\evL$ events with a different number of primitive events (Tab.~\ref{tab:hle_mapping}), we ensured that the event count was large enough in each case to allow examples similar to Fig.~\ref{fig:hle_bug} to be found.
We were unable to find bugs in any of these cases, but \Memalloy{} timed out before it could verify their absence.
As such, we cannot claim lock elision in x86 and Power to be \emph{verified}, but the timeout provides a high degree of confidence that these designs are bug-free up to the given bounds because, as Tab.~\ref{tab:metatheory_results} shows, when counterexamples exist they tend to be found quickly.
\section{Related Work}
\label{sec:related}
In concurrent but independent work, \citet{dongol+18} have also proposed adding TM to the x86, Power, and ARMv8 memory models.
Like us, \citeauthor{dongol+18} build their axioms by lifting relations from events to transactions.
However, their models are significantly weaker than ours, because they capture only the \emph{atomicity} of transactions, not the \emph{ordering} of transactions.
Because of this, their Power model is too weak to validate the natural compiler mapping from C++.
This is demonstrated by the following execution, which is forbidden by C++ (owing to an $\hb$ cycle), but allowed by their Power model (though not actually observable on hardware).
\begin{center}
\begin{tikzpicture}[inner sep=1pt, yscale=-1]
\def0{0}
\def0{0}
\def0.9{0.9}
\def1.5{1.5}
\node (b1) at (0+1.5, 0+1*0.9)
{$\evW{}{"x"}{1}$};
\node (c1) at (0+1.5, 0+2*0.9)
{$\evW{}{"y"}{1}$};
\node (d1) at (0+2*1.5, 0+1*0.9)
{$\evR{}{"y"}{1}$};
\node (e1) at (0+2*1.5, 0+2*0.9)
{$\evR{}{"x"}{0}$};
\draw[edgepo] (b1) to [auto,swap] node (po) {$\po$} (c1);
\node[stxn, fit=(b1)(c1)(po)] {};
\node[stxn, fit=(d1)] {};
\node[stxn, fit=(e1)] {};
\draw[edgerf] (c1) to [auto,swap,pos=0.3] node {$\rf$} (d1);
\draw[edgepo] (d1) to [auto] node {$\po$} (e1);
\draw[edgefr] (e1) to [auto,swap,pos=0.7] node {$\fr$} (b1);
\end{tikzpicture}
\end{center}
Moreover, unlike our work, \citeauthor{dongol+18}'s models have not been empirically validated, nor have earlier models that combine TM and weak memory~\cite{maessen+07, dalessandro+10}.
Nonetheless, our models being stronger than \citeauthor{dongol+18}'s implies that our endeavours are complementary: our experiments validate their models, and their proofs carry over to our models.
\citet{cerone+15} have studied the weak consistency guarantees provided by transactions in database systems.
A key difference is that for \citeauthor{cerone+15}, weak behaviours are attributed to weakly consistent transactions, but in our work, weak behaviours are attributed to weakly consistent non-transactional events surrounding strongly consistent transactions.
Nonetheless, similar axiomatisations can be used in both settings, and similar weak behaviours can manifest.
Our models follow the axiomatic style, but there also exist operational memory models for x86~\cite{owens+09}, Power~\cite{sarkar+11}, ARMv8~\cite{pulte+17}, and C++~\cite{nienhuis+16}. It would be interesting to consider how these could be extended to handle TM.
Other architectures and languages that could be targeted by our methodology include RISC-V, which plans to incorporate TM in the future~\cite{riscv17}, and Java.
Indeed, \citet{grossman+06} and \citet{shpeisman+07} identify several tricky corner cases that arise when attempting to extend Java's weak memory model to handle transactions, and our methodology can be seen as a way to automate the generation of these.
Regarding the analysis of programs that \emph{provide} TM, an automatic tool for testing (software) TM implementations above a weak memory model has been developed by \citet{manovit+06}.
Like us, they use automatically-generated litmus tests to probe the implementations, but where our test suites are constructed to be exhaustive and to contain only `interesting' tests, their tests are randomly generated.
Regarding the analysis of programs that \emph{use} TM, we note that the formulation of the C++ memory model by \citet{lahav+17} leads to an efficient model checker for multithreaded C++~\cite{kokologiannakis+18}.
Since our C++ TM model builds on \citeauthor{lahav+17}'s model, it may be possible to get a model checker for C++ TM similarly.
Regarding tooling for axiomatic memory models in general: our methodology builds on tools due to \citet{wickerson+17} and \citet{lustig+17}, both of which build on Alloy~\cite{jackson12a}.
Related tools include \Diy{}~\cite{alglave+10}, which generates litmus tests by enumerating relaxations of SC. Compared to \Diy{}, \Memalloy{} is more easily extensible with constructs such as transactions, and only generates the tests needed to validate a model.
\MemSynth{}~\cite{bornholt+17} can synthesise memory models from a corpus of litmus tests and their expected outcomes, though it does not currently handle software models or control dependencies.
\section{Conclusion}
We have extended axiomatic memory models for x86, Power, ARMv8, and C++ to support transactions.
Using our extensions to \Memalloy{}, we synthesised meaningful sets of litmus tests that precisely capture the subtle interactions between weak memory and transactions.
These tests allowed us to validate our new models by running them on available hardware, discussing them with architects, and checking them against technical manuals.
We also used \Memalloy{} to check several metatheoretical properties of our models, including the validity of program transformations and compiler mappings, and the correctness -- or lack thereof -- of lock elision.
\specialcomment{acknowledgements}{%
\begingroup
\section*{Acknowledgements}
\phantomsection\addcontentsline{toc}{section}{Acknowledgements}
}{%
\endgroup
}
\begin{acknowledgements}
We are grateful to Stephan Diestelhorst, Matt Horsnell, and Grigorios Magklis for extensive discussions of TM and the ARM architecture,
to Nizamudheen Ahmed and Vishwanath HV for RTL testing, and
to Peter Sewell for letting us access his Power machine.
We thank the following people for their insightful comments on various drafts of this work:
Mark Batty,
Andrea Cerone,
George Constantinides,
Stephen Dolan,
Alastair Donaldson,
Brijesh Dongol,
Hugues Evrard,
Shaked Flur,
Graham Hazel,
Radha Jagadeesan,
Jan Ko\'nczak,
Dominic Mulligan,
Christopher Pulte,
Alastair Reid,
James Riely,
the anonymous reviewers,
and our shepherd, Julian Dolby.
This work was supported by
an \grantsponsor{john-icrf}{Imperial College Research Fellowship}{http://www.imperial.ac.uk/research-fellowships/} and
the \grantsponsor{epsrc}{EPSRC}{https://www.epsrc.ac.uk/} (\grantnum{epsrc}{EP/K034448/1}).
\end{acknowledgements}
\balance
\section{Introduction}
Markov chains play a central role in modern data science algorithms \cite{ching2013markov}, e.g., data assimilation and forecasting, uncertainty quantification, machine learning, and stochastic optimization. One important scenario is when the mixing properties of Markov chains are used to sample statistical quantities of interest with respect to the equilibrium density $\pi$. However, simulating a Markov chain for this purpose can be a computationally demanding task, due to the dimensionality and the mixing time. The seminal works of Aharonov et al.\ and Szegedy \cite{aharonov2001quantum,szegedy2004quantum} put forward a paradigm to simulate Markov chains using quantum walks.
A particular emphasis in this line of works \cite{wocjan2008speedup,richter2007almost,magniez2007search,dunjko2015quantum, wocjan2021szegedy} has been placed on the speedup in terms of the spectral gap $\delta$, from the classical $\frac{1}{\delta}$ complexity to $\frac{1}{\sqrt{\delta}}$, without the unfavorable dependence on $\pi_*=\min_{x}\pi(x)$.
An important approach for achieving this quadratic speedup is to construct a sequence of Markov chains, $P_0, P_1, \cdots, P_r$, for which the stationary distributions of successive Markov chains are close \cite{aharonov2003adiabatic,wocjan2008speedup,somma2008quantum}. Assuming that the initial chain $P_0$ can be easily mixed, the overall complexity is $\mathcal{O}\left(\frac{r}{\sqrt{\delta}}\right)$, times the complexity of implementing each quantum walk.
This approach has been combined with the Chebyshev cooling schedule, together with quantum mean estimation, to compute expected values and partition functions \cite{montanaro2015quantum}, which yields the sampling complexity $\mathcal{O}\left(\frac{r}{\epsilon\;\sqrt{\delta}}\right)$.
One important example of slowly varying Markov chains is the Gibbs distribution parameterized by the inverse temperature, which is the primary mechanism behind simulated annealing algorithms. Although the quadratic speedup has been envisioned to be a generic property, explicit construction of the sequence of slowly varying Markov chains with uniformly bounded spectral gaps is still an open issue.
This paper presents an alternative approach to construct multiple Markov chains. We consider the discretization of a Markov chain with continuous (infinite-dimensional) state space, which stems from a general deterministic or stochastic dynamical system. The discretization, using the Ulam-Galerkin projection \cite{ulam1960collection}, lumps states into finitely many bins with bin size $\mathpzc{h}$. The novel aspect is that we can construct multiple Markov chains $P_\mathpzc{h}$ by varying $\mathpzc{h}$. When $\mathpzc{h}$ is large, e.g., $\mathpzc{h}=\mathpzc{h}_\Max$, the state space of the Markov chain is small, so $P_\mathpzc{h}$ can be mixed quickly using either a classical or a quantum algorithm. On the other hand, when $\mathpzc{h}$ is small, e.g., $\mathpzc{h}=\mathpzc{h}_\Min$, the continuous Markov chain is well approximated by $P_\mathpzc{h}$. To some extent, this removes the assumption in the framework of multiple Markov chains \cite{wocjan2008speedup} on the preparability of the initial chain. We show that by varying $\mathpzc{h}$, e.g., from $2\mathpzc{h}$ to $\mathpzc{h}$ at each stage, the density functions of the Markov chains at two successive levels have significant overlap, thus enabling a smooth transition.
Our main finding (Theorem 1) is that simulating the sequence of such multiple Markov chains $\{P_\mathpzc{h}\}_{\mathpzc{h}_\Max \geq \mathpzc{h} \geq \mathpzc{h}_\Min}$ has a cost that is comparable to simulating the Markov chain $P_{\mathpzc{h}_\Min}$, as if it had a warm start.
Although our approach is constructed from a finite-dimensional approximation of a Markov chain with infinite state space, the same methodology can be applied directly to certain finite Markov chains, especially those that have been treated by multigrid methods \cite{de2010smoothed}.
\medskip
\emph{Problem Setup. ---} We consider a Markov chain $\{X_n\}_{n\geq 0}$ with state space $S=\mathbb{R}^d$ and the (right) transition density $K(x,y)$, with $K(x,dy)$ indicating the probability that, given $x$, the Markov chain moves to state $y$ at the next step, which can be described in terms of the conditional expectation of any statistical quantity $f(\cdot)$, i.e.,
$\mathbb{E}[f(X_{n+1})|X_n]=\int K(X_n,y) f(y) dy.$ An alternative description of the Markov chain is the Chapman-Kolmogorov (CK) equation for the change of the probability density function (PDF) from step $n$ to $n+1,$
\begin{equation} \label{eq: chkol}
p_{n+1}(y) = \int_{\mathbb{R}^d} p_{n}(x) K(x, y) dx,
\end{equation}
which is convenient in the study of mixing properties. The relation in Eq. \eqref{eq: chkol} is often written in a matrix/vector multiplication form $p_{n+1}^T= p_n^T K$. In particular, a stationary density $p(x)$ is the left eigenvector associated with the eigenvalue $1$: $p^T= p^T K$.
\medskip
\noindent\emph{Problem: } Given a precision $\epsilon,$ find a finite-dimensional approximation $p_\mathpzc{h}(x)$ of the stationary PDF $p$ with $\mathpzc{h}$ indicating a numerical parameter, such that, $\norm{p - p_\mathpzc{h}}_1 < \epsilon. $
\medskip
\emph{Ulam-Galerkin projection. --- } Many existing quantum algorithms work with a finite-dimensional form of the CK equation \eqref{eq: chkol}. To approximate a Markov chain with continuous state space and implement the algorithms on quantum computers, we have to quantize the problem. This is done in two steps: First we consider a large domain $\mathpzc{D}$, where the probability of reaching states outside $\mathpzc{D}$ is negligible. This is a reasonable assumption if we consider the states close to equilibrium: since the PDF integrates to 1, the probability in the far field is negligible. For simplicity of the presentation, we choose $\mathpzc{D}= [-1,1]^{\otimes d}$. In the second step, we introduce a partition of $\mathpzc{D}$ with uniform spacing $\mathpzc{h}$ ($\mathpzc{h}\coloneqq \frac{1}{N}$ for some $N \in \mathbb{N}$),
\begin{equation}
\mathpzc{D} = \bigcup_{\bm j\in N(\mathpzc{h}) } \mathpzc{D}_{\bm j }(\mathpzc{h}), \qquad N(\mathpzc{h}):= \Big\{\bm j: -N\leq j_k < N, \ \forall k\in [d]\Big\},
\end{equation}
where the subdomain is given by,
\begin{equation}
\mathpzc{D}_{\bm j}(\mathpzc{h})= \bigotimes_{k=1}^d \big[j_k \mathpzc{h}, (j_k+1) \mathpzc{h}\big).
\end{equation}
With this partition, the continuous states have been grouped into small non-intersecting bins with bin size $\mathpzc{h}^d$, amounting to a finite state space,
\begin{equation}\label{Sh}
S_\mathpzc{h} \coloneqq \{ \mathpzc{D}_{\bm j }(\mathpzc{h})| \bm j\in N(\mathpzc{h}) \}, \quad \abs{S_\mathpzc{h}}= \left(\frac{2}{\mathpzc{h}}\right)^d.
\end{equation}
For brevity, we simply refer to one such state in $S_\mathpzc{h}$ by $\bm j.$
Associated with the partitions of the domain is a discretization of a PDF. This can be done by integrating the PDF $p(x)$ over each bin:
\begin{equation}\label{lump}
\pi_\mathpzc{h}(\bm j)= \int_{\mathpzc{D}_{\bm j }(\mathpzc{h})} p(x) dx.
\end{equation}
Meanwhile, the discrete probability can be mapped back to a continuous one by piecewise constant interpolation,
\begin{equation}\label{interp}
p_\mathpzc{h}(x) = \sum_{\bm j} \pi_\mathpzc{h} ({\bm j}) \chi_{\bm j}(x).
\end{equation}
Here $\chi_{\bm j}(x)$ is the indicator function for the domain $\mathpzc{D}_{\bm j }(\mathpzc{h})$: $\chi_{\bm j}(x)=\mathpzc{h}^{-d},$ if $x\in \mathpzc{D}_{\bm j }(\mathpzc{h})$, and zero otherwise. We denote this class of functions by $\Delta_\mathpzc{h}.$
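As a concrete illustration, the lumping \eqref{lump} and the interpolation \eqref{interp} can be sketched in a few lines for $d=1$. The density $p(x)=(1+\cos\pi x)/2$, its Lipschitz constant, and the quadrature resolution below are illustrative choices of ours, not taken from the text:

```python
import numpy as np

# An illustrative density on D = [-1, 1): p(x) = (1 + cos(pi x))/2, which
# integrates to 1 and has Lipschitz constant Lambda = pi/2 (our choice).
p = lambda x: 0.5 * (1.0 + np.cos(np.pi * x))
Lam = np.pi / 2

h = 1.0 / 8                    # bin size; N = 1/h, so 2N = 16 bins on [-1, 1)
nq = 2000                      # midpoint-rule quadrature points per bin
dx = h / nq
xq = -1.0 + dx * (np.arange(round(2.0 / dx)) + 0.5)   # fine-grid midpoints
bin_of = ((xq + 1.0) // h).astype(int)                # bin index of each point

# Eq. (lump): pi_h(j) = integral of p over bin j
pi_h = np.bincount(bin_of, weights=p(xq) * dx)

# Eq. (interp): p_h(x) = sum_j pi_h(j) chi_j(x), with chi_j = 1/h on bin j
p_h = pi_h[bin_of] / h
l1_err = np.sum(np.abs(p(xq) - p_h) * dx)
assert l1_err <= Lam * h       # consistent with the O(h) estimate
```

On this example the observed $L_1$ error also respects the bound of Lemma~\ref{lmm: errbd}, with room to spare.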
\emph{Approximation properties. --- } Clearly the function $p_\mathpzc{h}$ defined in \eqref{interp} is also a PDF. In addition, \eqref{interp} indicates a one-to-one correspondence between the vector $\bm \pi_\mathpzc{h}\in \mathbb{R}^{\abs{S_\mathpzc{h}}}$ and a piecewise constant function $p_\mathpzc{h}$ in $\Delta_\mathpzc{h}.$ We will write the relation in \eqref{lump} as $\bm \pi_\mathpzc{h}= \mathcal{A}^\mathpzc{h} p(x)$ with $\mathcal{A}^\mathpzc{h}$ representing an averaging operator, and \eqref{interp} as an interpolation $p_\mathpzc{h}= \mathcal{I}_\mathpzc{h} \bm \pi_\mathpzc{h}$.
The $L_1$ norm of $p_\mathpzc{h}$ also coincides with the vector $1$-norm of $\pi_\mathpzc{h},$ so we simply
denote this norm by $\norm{\cdot}_1.$ We first provide an error bound for an approximation of $p(x)$ using piecewise constant functions \cite{devore1993constructive}, i.e., those in $\Delta_\mathpzc{h}$.
\begin{lemma}\label{lmm: errbd}
Assume that $p(x)$ is Lipschitz continuous with Lipschitz constant $\Lambda.$
\[\abs{ p(x) - p(y) } \leq \Lambda \norm{x-y}_{\infty}.\]
When $p(x)$ is approximated by \eqref{interp} with coefficients from \eqref{lump},
the following estimate holds,
\begin{equation}
\norm{ p(x) - p_\mathpzc{h}(x)}_1 \leq \Lambda \mathpzc{h}, \quad \norm{ p(x) - p_\mathpzc{h}(x)}_\infty \leq \Lambda \mathpzc{h}.
\end{equation}
\end{lemma}
By applying the same procedure to the transition kernel $K(x,y)$, we obtain a matrix approximation,
\begin{equation}\label{Ph}
P_\mathpzc{h}(\bm i, \bm j)= \frac{1}{\mathpzc{h}^d}\int_{ \mathpzc{D}_{\bm i}(\mathpzc{h})} \int_{ \mathpzc{D}_{\bm j}(\mathpzc{h})} K(x,y) dy dx.
\end{equation}
One immediate observation is that $P_\mathpzc{h}$ is a finite stochastic matrix, implying that there exists an invariant density $\bm \pi_\mathpzc{h}$ (resp. $p_\mathpzc{h}$). Namely, all entries of $P_\mathpzc{h}$ are non-negative and the row sum is one. The stochastic matrix $ P_\mathpzc{h}(\bm i, \bm j)$ has a one-to-one correspondence with the continuous transition kernel,
\begin{equation}
K_\mathpzc{h}(x,y) = \sum_{\bm i, \bm j} P_\mathpzc{h}(\bm i, \bm j) \chi_{\bm i}(x)\chi_{\bm j}(y).
\end{equation}
This reduction procedure using piecewise constant interpolation is known as the Ulam-Galerkin projection \cite{ulam1960collection}. Properties of the resulting Markov chains with finite state spaces were part of Ulam's conjectures, some of which have been addressed for several cases \cite{li1976finite,murray1997discrete,ding1996finite,froyland1998approximating}. Of particular importance to the current context are those results that assert that the density function
$p_\mathpzc{h}$ of $K_\mathpzc{h}$ converges to the density $p$ of $K$ with rate $\mathpzc{h} \log \frac{1}{\mathpzc{h}}. $
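To make the Ulam-Galerkin matrix \eqref{Ph} concrete, the sketch below assembles it for a one-dimensional Gaussian step kernel and checks stochasticity and the existence of an invariant density. The kernel, its width $\sigma$, and the midpoint-rule quadrature (with rows renormalized to be exactly stochastic) are illustrative assumptions on our part:

```python
import numpy as np

# Illustrative kernel (our choice): a Gaussian step of width sigma whose
# target is re-normalized so that K(x, .) is a density on D = [-1, 1].
sigma = 0.4

def ulam_matrix(h):
    """Midpoint-rule approximation of the Ulam matrix; rows renormalized
    so that the result is exactly stochastic."""
    c = np.arange(-1.0 + h / 2.0, 1.0, h)              # bin centers
    G = np.exp(-(c[None, :] - c[:, None]) ** 2 / (2 * sigma ** 2))
    return G / G.sum(axis=1, keepdims=True)

def stationary(P, iters=10_000):
    """Left fixed point pi = pi P, by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

h = 1.0 / 16
P_h = ulam_matrix(h)
pi_h = stationary(P_h)
assert np.allclose(P_h.sum(axis=1), 1.0)     # stochastic: row sums are one
assert np.allclose(pi_h @ P_h, pi_h)         # invariant density of P_h
```

Power iteration stands in here for whatever mixing procedure, classical or quantum, is ultimately used.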
Quantum algorithms will work with the discrete probability $\{\pi_\mathpzc{h} ({\bm j}) \}$. The standard approach is to encode the PDF coherently into a quantum state,
\begin{equation}\label{quan-st}
\ket{\pi_\mathpzc{h}} = \sum_{\bm j} \sqrt{\pi_\mathpzc{h} ({\bm j})} \ket{\bm j},
\end{equation}
where the index $\bm j$ is mapped to a computational basis $\ket{\bm j}.$
Meanwhile, the transition matrix can be encoded in a unitary operator, either via a Hamiltonian operator \cite{aharonov2003adiabatic,childs2009universal} or quantum walks built from reflections \cite{szegedy2004quantum,wocjan2008speedup}.
By varying the resolution parameter $\mathpzc{h}$, we obtain a sequence of Markov chains,
\[\{ P_\mathpzc{h}: \mathpzc{h}_\text{max} \geq \mathpzc{h} \geq \mathpzc{h}_\text{min} \}. \]
The simplest progression is done by reducing $\mathpzc{h}$ by a factor of 2 at each stage. Namely, $\mathpzc{h}_\text{min} = 2^{-r} \mathpzc{h}_\text{max}$. In light of the convergence rate of the Ulam-Galerkin projection, we have
$ r = \CO{\log \frac{1}{\epsilon}},$
in order for the discretization error to be within $\epsilon.$
\emph{The multilevel approach. --- }The ability to vary $\mathpzc{h}$, thus introducing a slow transition from a Markov chain with small state space to one with a larger state space, suggests a new approach to build a sequence of Markov chains. Specifically, one can start with the equilibrium PDF $\pi_{2\mathpzc{h}}$ computed from the Markov chain $P_{2\mathpzc{h}}$, followed by an interpolation to prepare the next Markov chain $P_\mathpzc{h}$. The overall procedure is outlined, starting with $ \mathpzc{h}=\mathpzc{h}_\text{max} $, as follows
{\small
\begin{equation}\label{diag}
\ket{\pi_\mathpzc{h}} \overset{Interp.}{\xrightarrow{\hspace*{1cm}}} \mathcal{I}_{\mathpzc{h}}^{\frac{\mathpzc{h}}{2}} \ket{\pi_{{\mathpzc{h}}}} \overset{Q.~ Walks}{\xrightarrow{\hspace*{1cm}}} \ket{\pi_{\frac{\mathpzc{h}}{2}}} \overset{Interp.}{\xrightarrow{\hspace*{1cm}}} \mathcal{I}_{\frac{\mathpzc{h}}{2}}^{\frac{\mathpzc{h}}{4}} \ket{\pi_{\frac{\mathpzc{h}}{2}}} \cdots \overset{Interp.}{\xrightarrow{\hspace*{1cm}}}
\mathcal{I}_{2\mathpzc{h}_\text{min}}^{\mathpzc{h}_\text{min}} \ket{\pi_{2\mathpzc{h}_\text{min}}} \overset{Q.~ Walks}{\xrightarrow{\hspace*{1cm}}} \ket{\pi_{\mathpzc{h}_\text{min}} }.
\end{equation}
}
It is worthwhile to point out that our approach departs from the prior works \cite{wocjan2008speedup} in that we work with a sequence of Markov chains with varying state spaces. When $\mathpzc{h}$ is large, the dimension of the state space is much smaller, which can be treated much more efficiently, thus relaxing the preparability assumption on the initial chains. In addition, although the lumped Markov chain $P_\mathpzc{h}$ bears a resemblance to the Markov chain on a lattice considered in \cite{richter2007almost}, $P_\mathpzc{h}$ in the current case is not necessarily symmetric and the equilibrium PDF may not be uniform.
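The schedule in \eqref{diag} is also easy to emulate classically. In the sketch below, the Gaussian step kernel, the level schedule, and the stopping tolerance are all illustrative choices of ours, with power iteration standing in for the quantum walk stages:

```python
import numpy as np

sigma = 0.4   # width of an illustrative Gaussian step kernel (our choice)

def ulam(m):
    """Midpoint-rule Ulam-Galerkin matrix with m bins on [-1, 1]."""
    c = np.arange(-1.0 + 1.0 / m, 1.0, 2.0 / m)
    G = np.exp(-(c[None, :] - c[:, None]) ** 2 / (2 * sigma ** 2))
    return G / G.sum(axis=1, keepdims=True)

def mix(pi, P, tol=1e-10):
    """Iterate pi <- pi P to stationarity; return (pi, number of steps)."""
    steps = 0
    while np.linalg.norm(pi @ P - pi, 1) > tol:
        pi, steps = pi @ P, steps + 1
    return pi, steps

levels = [8, 16, 32, 64]            # from h_max = 2/8 down to h_min = 2/64
pi = np.full(levels[0], 1.0 / levels[0])
total = 0
for k, m in enumerate(levels):
    pi, steps = mix(pi, ulam(m))    # "quantum walk" stage, emulated classically
    total += steps
    if k + 1 < len(levels):
        pi = np.repeat(pi / 2, 2)   # I_{2h}^{h}: split each bin's probability

cold, cold_steps = mix(np.full(64, 1.0 / 64), ulam(64))
print(f"multilevel steps: {total}, cold start at the finest level: {cold_steps}")
```

Each interpolation hands the next level a warm start, which is precisely the effect the quantum version exploits.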
In order to carry out this program, we define operators that map density functions at successive levels. Specifically, following \eqref{lump}, we let $\mathcal{A}_\mathpzc{h}^{2\mathpzc{h}}$ be the ``averaging operator'' that reduces PDFs on level $\mathpzc{h}$ to PDFs on level $2\mathpzc{h}$. Similarly, we define the ``interpolation operator'' $\mathcal{I}_{2\mathpzc{h}}^{\mathpzc{h}}$, acting as $\mathcal{I}_{2\mathpzc{h}}^{\mathpzc{h}} p_{2\mathpzc{h}}$, in analogy with \eqref{interp}.
\begin{figure}[hpt]
\centering
\includegraphics[scale=0.23]{px.jpg}
\caption{An illustration of the approximate density $p_\mathpzc{h}$ and $p_{2\mathpzc{h}}$ for a PDF $p(x).$}
\label{fig:my_label}
\end{figure}
Figure \ref{fig:my_label} provides an illustration of the PDFs at the levels $\mathpzc{h}$ and ${2\mathpzc{h}}$. The following properties can be directly established.
\begin{lemma}
$\mathcal{A}_\mathpzc{h}^{2\mathpzc{h}}$ is the operator adjoint of $\mathcal{I}_{2\mathpzc{h}}^\mathpzc{h}$ and is proportional to the left inverse of $\mathcal{I}_{2\mathpzc{h}}^\mathpzc{h}$: For any $\bm{v}_\mathpzc{h}$ (in $ \mathbb{R}^{\abs{S_\mathpzc{h}}}$)
and $\bm v_{2\mathpzc{h}}$ (in $ \mathbb{R}^{\abs{S_{2\mathpzc{h}}}}$), the following identities hold
\begin{equation}\label{aiia}
\big(\bm v_\mathpzc{h}, \mathcal{I}_{2\mathpzc{h}}^{\mathpzc{h}} \bm v_{2\mathpzc{h}}\big)= \big(\mathcal{A}_{\mathpzc{h}}^{2\mathpzc{h}} \bm v_\mathpzc{h}, \bm v_{2\mathpzc{h}}\big), \quad
\mathcal{A}_{\mathpzc{h}}^{2\mathpzc{h}} \mathcal{I}_{2\mathpzc{h}}^{\mathpzc{h}} \bm v_{2\mathpzc{h}}= 2^d \bm v_{2\mathpzc{h}},
\end{equation}
where the parenthesis indicate standard inner products. Furthermore, the stochastic matrices \eqref{Ph} at consecutive levels are related as follows
\begin{equation}\label{2h2h}
{P}_{2\mathpzc{h}}=\mathcal{A}_{\mathpzc{h}}^{2\mathpzc{h}} P_{\mathpzc{h}} \mathcal{I}_{2\mathpzc{h}}^\mathpzc{h}.
\end{equation}
\end{lemma}
The last equation \eqref{2h2h} provides an important vehicle to move across Markov chains at different levels.
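For $d=1$ these operators and the identities \eqref{aiia} are easy to realize as matrices. The sketch below uses the function-value convention in which $\mathcal{I}$ duplicates a coarse value onto its two children and $\mathcal{A}=\mathcal{I}^T$ sums them; with this convention, relating the (midpoint-rule) Ulam matrices at the two levels carries a factor $2^{-d}$. The kernel and the sizes are illustrative choices of ours:

```python
import numpy as np

n = 16                                      # coarse bins; the fine level has 2n
# Interpolation I duplicates each coarse value onto its two children;
# the averaging operator A = I^T sums the two children.
I = np.kron(np.eye(n), np.ones((2, 1)))     # shape (2n, n)
A = I.T

# Eq. (aiia): adjointness, and A I = 2^d Id (here d = 1)
rng = np.random.default_rng(0)
v_h, v_2h = rng.random(2 * n), rng.random(n)
assert np.isclose(v_h @ (I @ v_2h), (A @ v_h) @ v_2h)
assert np.allclose(A @ I, 2 * np.eye(n))

# Midpoint-rule Ulam matrices for an illustrative Gaussian step kernel
sigma = 0.4
def ulam(m):
    c = np.arange(-1.0 + 1.0 / m, 1.0, 2.0 / m)
    G = np.exp(-(c[None, :] - c[:, None]) ** 2 / (2 * sigma ** 2))
    return G / G.sum(axis=1, keepdims=True)

P_h, P_2h = ulam(2 * n), ulam(n)
Q = 0.5 * A @ P_h @ I          # lifted coarse matrix: 2^{-d} A P_h I for d = 1
assert np.allclose(Q.sum(axis=1), 1.0)      # again a stochastic matrix
assert np.abs(Q - P_2h).max() < 0.05        # matches the direct coarse matrix
```

The residual difference between the lifted and the directly assembled coarse matrix is pure quadrature error and shrinks as $\mathpzc{h}$ decreases.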
\bigskip
Next, we state the assumptions under which the multilevel procedure has provable performance.
\noindent{\emph{Main Assumptions: Each finite Markov chain $P_\mathpzc{h}$ is reversible with stationary PDF, given by $\bm \pi_\mathpzc{h}$ (or $p_\mathpzc{h}$ according to \eqref{interp}). The spectral gaps $\delta_\mathpzc{h}$ of the Markov chains are uniformly bounded, in the sense that
there exists a constant $\gamma$, such that $\frac{\delta_\mathpzc{h}}{\delta_{2\mathpzc{h}}}\leq \gamma.$
Further, each $p_\mathpzc{h}(x)$ in the range of $K_\mathpzc{h}$ has the following bounded variation property at the coarse level $2\mathpzc{h}$,
\begin{equation}
\abs{p_\mathpzc{h}(x) - p_\mathpzc{h} (x')} \leq \Lambda \mathpzc{h}, \quad \forall x, x' \in \mathpzc{D}_{\bm j}(2\mathpzc{h}).
\end{equation}
}}
The uniqueness of the stationary density $p_\mathpzc{h}$ has been proved in \cite{de2010smoothed} in the context of multigrid methods. The spectral gaps have a direct influence on the convergence rate of the Markov chain. For the Ulam-Galerkin projection, the uniform bound for the spectral gaps have been verified for certain stochastic dynamics, e.g., \cite[Cor. 3.5.3]{murray1997discrete}.
We first show that the uniformity parameter $\gamma$ for the spectral gaps is $\mathcal{O}(1)$, i.e., $\delta_{2\mathpzc{h}} \approx \delta_\mathpzc{h}$. To this end, we use the notion of the coefficient of ergodicity $\tau$ due to Seneta \cite{seneta1993sensitivity} (which is related to the spectral gap by $\delta= 1 - \tau$),
\begin{equation}\label{eq: tau}
\tau_\mathpzc{h} = \max_{ (\bm \pi_\mathpzc{h}, \mathbf{1}_\mathpzc{h})=0 } \frac{ \norm{P_\mathpzc{h} \bm \pi_\mathpzc{h}}_1 }{\norm{\bm \pi_\mathpzc{h}}_1}.
\end{equation}
Here $\mathbf{1}_\mathpzc{h}$ is the vector with values being all ones.
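For a concrete finite chain, $\tau$ can be evaluated via Dobrushin's formula for the $\ell_1$-induced coefficient of ergodicity, $\tau = \frac{1}{2}\max_{i,k}\sum_j \abs{P(i,j)-P(k,j)}$, a standard identity; the $3$-state chain below is an invented example:

```python
import numpy as np

def tau(P):
    """Dobrushin coefficient: tau = 1/2 max_{i,k} sum_j |P[i,j] - P[k,j]|."""
    d = np.abs(P[:, None, :] - P[None, :, :]).sum(axis=2)
    return 0.5 * d.max()

# An invented 3-state stochastic matrix with tau < 1 (geometric mixing)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
t = tau(P)
assert 0.0 <= t < 1.0

# One step of P contracts the l1 distance between any two distributions
p0, q0 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
assert np.abs(p0 @ P - q0 @ P).sum() <= t * np.abs(p0 - q0).sum() + 1e-12
```

The contraction check in the last two lines is exactly the role $\tau$ plays in mixing-time bounds.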
\begin{lemma}
Under the assumption above on the bounded variation, one has $\abs{\tau_\mathpzc{h} -\tau_{2\mathpzc{h}}} \leq \Lambda \mathpzc{h}$.
\end{lemma}
Let $\bm \pi_{2\mathpzc{h}}$ be the vector with $\norm{\bm \pi_{2\mathpzc{h}}}_1=1 $ that achieves the maximum for the stochastic matrix $P_{2\mathpzc{h}}$, i.e.,
$\tau_{2\mathpzc{h}}= \norm{P_{2\mathpzc{h}}\bm \pi_{2\mathpzc{h}} }_1.$
Then, using the optimality condition \eqref{eq: tau}, we have
\begin{align*}
\tau_\mathpzc{h} \ge & \norm{P_\mathpzc{h} \mathcal{I}_{2\mathpzc{h}}^{\mathpzc{h}} \bm \pi_{2\mathpzc{h}}}_1
= \CO{\mathpzc{h}} + \norm{\mathcal{I}_{2\mathpzc{h}}^{\mathpzc{h}} \mathcal{A}_{\mathpzc{h}}^{2\mathpzc{h}} P_\mathpzc{h} \mathcal{I}_{2\mathpzc{h}}^{\mathpzc{h}} \bm \pi_{2\mathpzc{h}}}_1\\
= & \CO{\mathpzc{h}} +\norm{\mathcal{I}_{2\mathpzc{h}}^{\mathpzc{h}} P_{2\mathpzc{h}} \bm \pi_{2\mathpzc{h}}}_1
= \CO{\mathpzc{h}} + \tau_{2\mathpzc{h}} \Longrightarrow \abs{\tau_\mathpzc{h} -\tau_{2\mathpzc{h}}} \leq \Lambda \mathpzc{h}.
\end{align*}
In the second step, we used Lemma \ref{lmm: IA} below, and the big-$\mathcal{O}$ constant includes $\Lambda.$ The third step is based on \eqref{2h2h} and the observation that $\mathcal{I}_{2\mathpzc{h}}^{\mathpzc{h}}$ preserves the $L_1$ norm. As a result, $\delta_{2\mathpzc{h}}=1-\tau_{2\mathpzc{h}} \geq 1 - \tau_{\mathpzc{h}} + \CO{\mathpzc{h}}=\delta_{\mathpzc{h}}+ \CO{\mathpzc{h}}.$ This shows that in general the spectral gap does not change significantly with $\mathpzc{h}$.
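The stability of the gap under refinement can also be observed numerically. In the sketch below, the Gaussian step kernel is an illustrative choice of ours, and the gap is read off from the eigenvalues of the midpoint-rule Ulam matrix:

```python
import numpy as np

sigma = 0.4   # width of an illustrative Gaussian step kernel (our choice)

def ulam(m):
    """Midpoint-rule Ulam matrix with m bins on [-1, 1]."""
    c = np.arange(-1.0 + 1.0 / m, 1.0, 2.0 / m)
    G = np.exp(-(c[None, :] - c[:, None]) ** 2 / (2 * sigma ** 2))
    return G / G.sum(axis=1, keepdims=True)

def spectral_gap(P):
    """delta = 1 - |lambda_2|: the gap below the Perron eigenvalue 1."""
    lam = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    assert np.isclose(lam[0], 1.0)        # stochastic: top eigenvalue is 1
    return 1.0 - lam[1]

# Gaps at successive levels h, h/2, h/4, ... stay within O(h) of each other
gaps = {m: spectral_gap(ulam(m)) for m in (8, 16, 32, 64)}
print(gaps)
```

For this smooth kernel the gaps at successive resolutions agree closely, consistent with the estimate above.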
\emph{The PDF Overlaps. --- }
In our multilevel approach, the stationary density $p_{2\mathpzc{h}}$ will be interpolated to approximate the stationary density $p_{\mathpzc{h}}$, i.e., $p_{\mathpzc{h}} \approx \mathcal{I}_{2\mathpzc{h}}^\mathpzc{h} p_{2\mathpzc{h}}$. We now show that these PDFs have significant overlap.
\begin{lemma}\label{lmm: IA}
Under the assumption on bounded variation, for each $p_\mathpzc{h} \in \Delta_\mathpzc{h}$, the corresponding discrete density $\bm \pi_\mathpzc{h}$ satisfies the bound,
\begin{equation}\label{iah}
\norm{\mathcal{I}_{2\mathpzc{h}}^{\mathpzc{h}} \mathcal{A}_{\mathpzc{h}}^{2\mathpzc{h}} \bm \pi_{\mathpzc{h}} - \bm \pi_\mathpzc{h}}_1 \leq \Lambda \mathpzc{h}.
\end{equation}
Let $\bm \pi_\mathpzc{h}$ and $\bm \pi_{2\mathpzc{h}}$ be respectively the stationary density of $P_\mathpzc{h}$ and $P_{2\mathpzc{h}}$. Then,
\begin{equation}
\norm{\bm \pi_\mathpzc{h} - \mathcal{I}_{2\mathpzc{h}}^\mathpzc{h} \bm \pi_{2\mathpzc{h}} }_1 \leq C \frac{\mathpzc{h}}{\delta_\mathpzc{h}}.
\end{equation}
Finally, the last inequality implies that the corresponding quantum states \eqref{quan-st} have an overlap with the following lower bound,
\begin{equation}
\braket{\pi_\mathpzc{h}}{\mathcal{I}_{2\mathpzc{h}}^\mathpzc{h} \pi_{2\mathpzc{h}}} =
1- q_\mathpzc{h}, \quad \textrm{with}\; q_\mathpzc{h} \leq C\frac{\mathpzc{h}}{\delta_\mathpzc{h}}.
\end{equation}
\end{lemma}
\medskip
Now we turn to the implementation of the multi-level algorithm outlined in {\bf Diagram} \ref{diag}. For instance, for $d=1$, $\ket{\pi_{2\mathpzc{h}}}$ has components with labels between $-1/\mathpzc{h}$ and $1/\mathpzc{h}$, which requires one fewer qubit to store than $\ket{\pi_{\mathpzc{h}}}$. The interpolation operator $\mathcal{I}_{2\mathpzc{h}}^\mathpzc{h}$ can be defined as,
\begin{equation}
\braket{x}{\pi_\mathpzc{h}} =
\left\{
\begin{aligned}
&\frac{1}{\sqrt{2}} \braket{x}{\pi_{2\mathpzc{h}}}, \quad &\text{if} \; -\frac{1}{\mathpzc{h}} < x< \frac{1}{\mathpzc{h}}, \qquad \\
&\frac{1}{\sqrt{2}}\braket{x \mp \frac{1}{\mathpzc{h}}}{\pi_{2\mathpzc{h}}}, \quad & \text{if} \; \frac{1}{\mathpzc{h}} \leq \pm x < \frac{2}{\mathpzc{h}}.
\end{aligned}\right.
\end{equation}
The extension to high dimensions is straightforward.
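Concretely, the interpolation tiles two $1/\sqrt{2}$-scaled copies of the coarse amplitudes onto the finer label set, which is why it preserves the norm of the quantum state. A minimal pure-Python sketch (the amplitude values below are made up for illustration):

```python
import math

# Toy coarse-grid amplitudes <x|pi_{2h}>, normalized as a quantum state.
coarse = [0.1, 0.3, 0.5, 0.6, 0.5, 0.3, 0.1]
norm = math.sqrt(sum(a * a for a in coarse))
coarse = [a / norm for a in coarse]

def interpolate(v):
    """Fine-grid state |pi_h> built from |pi_{2h}>: the central labels
    -1/h < x < 1/h carry one 1/sqrt(2)-scaled copy of the coarse amplitudes,
    and the outer labels 1/h <= |x| < 2/h carry the shifted copy, as in
    the piecewise definition above."""
    s = 1.0 / math.sqrt(2.0)
    scaled = [s * a for a in v]
    return scaled + scaled  # two tiled copies on twice as many labels

fine = interpolate(coarse)
```

Each copy carries half of the squared norm, so the interpolated vector is again a unit state and can serve directly as the initial state for the finer walk.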
To assess the complexity of implementing each Markov chain $P_\mathpzc{h}$, we consider the quantum walk approach in \cite[Theorem 2, $r=1$]{wocjan2008speedup}:
\begin{lemma}
For the Markov chain $P_\mathpzc{h}$, assume that the initial density $\ket{\pi^0_\mathpzc{h}}$ has an overlap of at least $1-q$ with the stationary density $\ket{\pi_\mathpzc{h}}$, i.e., $\braket{\pi^0}{\pi}^2\geq 1-q$, and that the Markov chain has spectral gap $\delta_\mathpzc{h}$. Then after $n$ steps of the quantum walk $W(P_\mathpzc{h})$, the algorithm produces an approximate density $\ket{\pi^n_\mathpzc{h}}$ with error within $\epsilon$,
provided that,
\[
n= \frac{\log \frac{1}{\epsilon}}{\sqrt{\delta} \log \frac{1}{q} } \log \left(\frac{\log \frac{1}{\epsilon}}{\sqrt{\delta} \log \frac{1}{q} }\right).
\]
\end{lemma}
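To illustrate the scaling in the lemma (with placeholder values of $\delta$, $q$ and $\epsilon$ — none of these come from the paper), one can evaluate $n = T\log T$ with $T = \log(1/\epsilon)/(\sqrt{\delta}\,\log(1/q))$ and see that a better-prepared initial state needs far fewer walk steps:

```python
import math

def mixing_steps(delta, q, eps):
    """Quantum-walk steps from the lemma: n = T * log(T),
    with T = log(1/eps) / (sqrt(delta) * log(1/q))."""
    T = math.log(1.0 / eps) / (math.sqrt(delta) * math.log(1.0 / q))
    return T * math.log(T)

# A well-prepared initial state (small q) needs far fewer steps.
n_good = mixing_steps(delta=1e-4, q=0.05, eps=1e-6)
n_poor = mixing_steps(delta=1e-4, q=0.50, eps=1e-6)
```

This is exactly where the multi-level construction pays off: each interpolated state enters the next walk with a small overlap defect $q_\mathpzc{h}$, keeping $\log(1/q_\mathpzc{h})$ large.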
Meanwhile, we notice that our algorithm involves Markov chains with varying state spaces.
Therefore, the dependence of the complexity on the state space dimension should be taken into account. Such dependence has been quantified by Chiang et al.\ \cite{chiang2009efficient}.
\begin{lemma}\cite[Theorem 1]{chiang2009efficient}\label{lmm:chi}
Suppose that the transition matrix on a state space with dimension $2^m$ is $s$-sparse, and that the transition matrix can be accessed with $t$ correct bits: $t\geq \log \frac{1}{\epsilon} + \log s$. There is a quantum algorithm that simulates each step of the Markov chain with precision $\epsilon$ and with complexity that scales linearly with $m$ and $d$, but logarithmically with $\frac{1}{\epsilon}.$
\end{lemma}
\emph{Overall Computational Complexity. ---}
Recall that in order to prepare an initial state of $P_\mathpzc{h}$ with state space $S_\mathpzc{h}$, we run the Markov chain $P_{2\mathpzc{h}}$, which according to \eqref{Sh}, has a state space with much smaller dimension $\abs{S_{2\mathpzc{h}}} =2^{-d}\abs{S_\mathpzc{h}}$. Thus, based on the above complexity estimate, running $P_{2\mathpzc{h}}$ requires far fewer resources. The same pattern holds for $P_{4\mathpzc{h}}, P_{8\mathpzc{h}}, \cdots, P_{\mathpzc{h}_\Max}.$ We can quantify the overall complexity as follows,
\begin{theorem}
Under the previous assumptions, the multi-level approach (in {\bf Diagram} \ref{diag}) can be implemented via quantum walks with complexity that is equivalent to
\begin{equation}
\mathcal{O} \left( \frac{d\gamma }{d-1}\, \frac{1}{\sqrt{\delta_{\mathpzc{h}_\Min}}} \right),
\end{equation}
steps of the quantum walk $W\big(P_{\mathpzc{h}_{\Min}}\big)$, excluding logarithmic factors.
\end{theorem}
Keeping only the dominant terms, the per-level complexity from Lemma \ref{lmm:chi} is given by,
\begin{equation}
C_\mathpzc{h}= \frac{m_\mathpzc{h} s_\mathpzc{h}}{ \sqrt{\delta_\mathpzc{h} } \log \frac{1}{q_\mathpzc{h}} }.
\end{equation}
Since $s_\mathpzc{h} \geq s_{2\mathpzc{h}}$ and $m_\mathpzc{h}= \log \left(\frac{2}{\mathpzc{h}}\right)^d = \mathcal{O}\left(d \log \frac{1}{\mathpzc{h}}\right), $ we obtain,
\begin{equation}
\frac{ C_{2\mathpzc{h}}} {C_\mathpzc{h}} = \mathcal{O}\left(\frac{\log \frac{1}{\mathpzc{h}} }{\log \frac{1}{q_\mathpzc{h}} } \frac{1}{d} \sqrt{\frac{\delta_\mathpzc{h}}{\delta_{2\mathpzc{h}}}}\right) =\mathcal{O}\left( \frac{1}{d} \sqrt{\frac{\delta_\mathpzc{h}}{\delta_{2\mathpzc{h}}}} \right)=\mathcal{O}\left( \frac{\sqrt{\gamma}}{d}\right).
\end{equation}
Here we have used the bound in Lemma \ref{lmm: IA} and the assumption on the spectral gap.
Therefore, the total cost is given by,
\begin{equation}
C_\text{total}= C_{\mathpzc{h}_{\Min}} \big(1 + d^{-1} + d^{-2} + \cdots + d^{-L} \big) \leq \frac{d\sqrt{\gamma} }{d-1} C_{\mathpzc{h}_{\Min}}.
\end{equation}
The remarkable observation is that for high dimensional problems, in using low-resolution Markov chains to prepare the Markov chain $P_{\mathpzc{h}_{\Min}},$ the complexity associated with implementing the low-resolution Markov chains is almost negligible.
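The geometric decay of the per-level costs can be checked numerically; $c_{\min}$, $d$, $\gamma$ and the number of levels below are placeholder values, while the per-level cost ratio $\sqrt{\gamma}/d$ is the one derived above:

```python
import math

def multilevel_cost(c_min, d, gamma, levels):
    """Total cost of the hierarchy when each coarser chain costs a factor
    sqrt(gamma)/d of the next finer one; level 0 is the finest chain."""
    ratio = math.sqrt(gamma) / d
    return c_min * sum(ratio ** k for k in range(levels + 1))

total = multilevel_cost(c_min=1.0, d=3, gamma=2.0, levels=10)
bound = 3 * math.sqrt(2.0) / (3 - 1)  # d*sqrt(gamma)/(d-1), with c_min = 1
```

With these placeholder numbers the whole hierarchy costs about $1.9\,c_{\min}$, under the bound $\approx 2.1\,c_{\min}$; for large $d$ the coarse levels are essentially free.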
\emph{Summary. } We presented a multi-level approach to mix a Markov chain with quantum speedup. The strategy is to create a transition to the Markov chain from low-resolution, coarse-grained Markov chains. This effectively introduces $r$ Markov chains that can be easily initialized. This fits the general framework of using
a slowly-varying sequence of Markov chains \cite{aharonov2003adiabatic,wocjan2008speedup}. But
we show, by leveraging the multi-level properties, that the overall complexity, excluding logarithmic factors, is independent of the chain length. The problem has been placed in the context of approximating a Markov chain with continuous state space. But the techniques are applicable to general Markov chains that can be reduced to a Markov chain with a very small state space through a multi-level procedure. One important class of examples is those arising from multigrid methods \cite{de2010smoothed}, including large-scale graphs. The main ingredient needed is the interpolation property of the eigenvectors of $P_\mathpzc{h}$. Overall, this approach adds another piece to the entire puzzle associated with the generic quadratic speedup of quantum algorithms for Markov chains.
\noindent\emph{ Acknowledgement.} The author's research is supported by the National Science Foundation Grants DMS-2111221 and a seed grant from the Institute of Computational and Data Science (ICDS) at Penn State. The author would also like to thank Dr. Patrick Rall for fruitful discussions on quantum walks.
\bibliographystyle{plain}
\section{Introduction}
\IEEEPARstart{M}{ore} than a decade has passed since the initial publication from HP Labs \cite{Strukov} that linked Resistive Random Access Memory (RRAM) characteristics with Chua's definition of memristors \cite{Chua}. The solid-state implementations of memristive technologies led to a sudden increase in interest in the area of reconfigurable electronics. Publications covering this new area of interest made their appearance initially describing the resistive switching phenomenon \cite{SCHROEDER}, \cite{SAWA}. Over the following years multiple aspects of these devices were thoroughly examined, such as the impact of device area on performance \cite{Govoreanu} and the impact of using different metal oxide dielectrics \cite{Wei}, \cite{Biju} as layers where resistive switching takes place. Currently prevalent theories for the mechanism underpinning resistive switching include oxygen vacancy movement \cite{Yang}, \cite{Lai}, which leads to valence changes in the active layer material \cite{Valov}, \cite{Pan}, \cite{Waser}. To date, the majority of research carried out has centered on applications of resistive switching systems under static conditions, i.e., in circuits with direct current (DC) stimuli.
\begin{figure}[h]
\centering
\includegraphics[width = 9cm, height = 10cm]{Final submission pics/figure1.pdf}
\caption{a) Metal oxide memristor structure. The equivalent electrical model of the device is depicted on the left side. Physical parameters which affect the equivalent model are depicted on the right side: namely, $W$ for the electrode width and $d$ for the active core thickness. The active core can be any oxide material, or layers of materials, which produce a memristive device.
b) Initial impedance Bode plot of devices, c) Initial phase Bode plot of tested devices}
\label{fig: 1}
\end{figure}
Circuits operating in alternating current (AC) mode could also benefit from using resistive switching devices, but up to this point publications concerning the frequency dependencies of metal oxide memristors were mainly centered around transient analysis of devices \cite{Mazady}, measurements at singular frequencies \cite{Yan} or ways to characterize the conduction mechanisms of devices \cite{Lee}. At the same time, however, several publications emerged on simulating AC circuits with memristive devices \cite{Wang} \cite{Rajagopal}, indicating that there is a need for characterization of metal-oxide memristor devices in the frequency spectrum. Whilst the aforementioned publications focused on the frequency response of the proposed circuits overall, they did not account for the frequency response of the individual memristor cells used, often emulating these as distinct static resistors with uniform response across the employed frequency range. This important oversight can be ascribed partly to the lack of existing memristor models accounting for small AC signals superimposed on DC stimuli, and partly to the researchers' focus: in the interest of showing how such tunable resistive components can tweak the response of an AC circuit, only distinct static resistive loads were accounted for.
In this work, we utilise standard electron device testing methodologies and adapt them accordingly for use in memristors. We then employ this approach for studying the frequency response of metal oxide memristors, which are found to show a predominantly resistive behavior up to a certain frequency, though they are not equally stable across the entire frequency spectrum investigated herein. We further show that such devices are much more than static tunable resistors and need to be represented via an RC empirical model, as depicted in Figure 1(a). Variations of this electrical model have been hinted at in the literature for memristors with different characteristics \cite{Lee}, \cite{Park}, \cite{DASH}, but, to the best of our knowledge, a full RC-level description with regard to the devices' switching remained outstanding until this work. We further present an analysis of the devices' behavior in a range of frequencies and assess the electrical reconfigurability of the devices over the same frequency range. Finally, we expand the remit of this study by investigating a variety of memristive devices comprising different electrode areas, dielectric thicknesses and core materials, and the results are discussed.
\section{Experimental Methodology}
\subsection{Device Prototyping}
All devices were fabricated on 6-inch silicon wafers, on top of which 200 nm of silicon oxide (SiO$_2$) was thermally grown. All top and bottom electrodes consist of Platinum (12 nm), deposited by electron beam evaporation. To ensure good bottom electrode adhesion, a Titanium film (5 nm) was used prior to the deposition of Platinum. Devices with three different types of resistive switching layers were fabricated (TiO$_x$ , TiO$_x$/Al$_2$O$_3$ , SnO$_x$), all deposited by magnetron sputtering. The deposition power for TiO$_x$, Al$_2$O$_3$ and SnO$_x$ was respectively 2 kW, 100 W and 70 W. Gas flow rates inside the chamber were 8 sccm for O$_2$ and 35 sccm for Ar gas for the TiO$_x$ and TiO$_x$/Al$_2$O$_3$ dielectrics, whilst the ratio used for SnO$_x$ was 10 sccm Ar and 20 sccm O$_2$. Patterning of all layers was carried out by negative tone photolithography. After each deposition a lift-off process followed, with N-Methyl-2-pyrrolidone (NMP), and a gentle surface clean with O$_2$ plasma, by Reactive Ion Etching techniques.
\subsection{Electrical characterization}
Electrical characterization of devices in DC was carried out with the in-house testing system ArC One\textsuperscript{TM} \cite{Arc}. Initially, devices were electroformed using the corresponding built-in module of this system, whereby continuous pulses of 100 μs width and an amplitude progressively ranging from 3 to 7 V were applied to the device until a partial dielectric breakdown was achieved. After electroforming, I-V curves of the devices were taken to ensure uniform working conditions. Subsequently a brief retention test was done to evaluate the short-term stability of devices, in order to avoid any negative impact from volatile behaviour, during the measurement window. Throughout this work, the read pulse amplitude was set at 0.5 V for all measurements.
After completing the standard DC characterization process, the device behavior under AC conditions was probed by using a Keithley 4200A-SCS. To probe the frequency characteristics of these devices, an initial sweep would take place to ascertain the starting condition and directly connect it to the resistive state given by the ArC system. Afterwards, a switching event would be induced by applying a voltage stress to the device, by means of a voltage sweep, and the device would be submitted to a new frequency sweep. This process is repeated for each device by applying voltage sweeps ranging from 1.5 to 3 V in magnitude, with alternating polarity. Through this process, the device is switched amongst multiple states and a new frequency scan is taken at each state. All frequency sweeps were carried out by using a 0.5 V DC bias along with a 100 mV superimposed AC stimulus. This study was carried out for frequencies ranging between $10^3$ and $10^7$ Hz.
\begin{figure}[b]
\centering
\includegraphics[width = 9cm, height = 7cm]{Final submission pics/figure2.pdf}
\caption{Device electrically programmed in different resistive states, as shown in legend. Black hexagons depict the experimental values received from testing, while the coloured lines are the simulated results for the depicted Resistances.}
\label{fig 2}
\end{figure}
\section{Electrically programmable behavior}
There have been a few reports in the literature \cite{Lee} \cite{Park} \cite{DASH} where memristors have been described via an RC circuit. In Figures 1(b,c) the initial state of a device is given, with a behavior akin to that of a low-pass filter. This equates to the devices showing a stable impedance with respect to frequency until a sharp drop occurs, defining a cut-off point. All devices tested exhibited this type of behavior; thus, this type of frequency response could be extended to other devices fabricated as Metal-Insulator-Metal (MIM) capacitors and then electroformed.
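This RC picture is easy to make quantitative: for a resistance $R$ in parallel with a capacitance $C$, $|Z(f)| = R/\sqrt{1+(2\pi f R C)^2}$, a first-order low-pass with cut-off $f_c = 1/(2\pi R C)$. A short sketch with hypothetical component values (not fitted to the measured devices):

```python
import math

def z_mag(f, R, C):
    """|Z| of a parallel RC: flat plateau at R, then -20 dB/decade roll-off."""
    return R / math.sqrt(1.0 + (2.0 * math.pi * f * R * C) ** 2)

def cutoff(R, C):
    """-3 dB cut-off frequency of the parallel RC."""
    return 1.0 / (2.0 * math.pi * R * C)

C_mim = 50e-12  # hypothetical 50 pF MIM capacitance
states = (10e3, 100e3, 1e6)  # three programmed resistive states, in ohms
fcs = [cutoff(R, C_mim) for R in states]
```

Programming the device into a lower resistive state raises $f_c$ while lowering the plateau $|Z| \approx R$, which is the family of curves seen in Figure 2.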
\begin{figure*}[h]
\centering
\includegraphics[width = 16cm, height = 14cm]{Final submission pics/figure3.pdf}
\caption{Tuning of programability window by changing area of device. a) Variation of electrode area in TiO$_x$/Al$_2$O$_3$ devices, b) Variation of electrode area in TiO$_x$ monolayer devices, c) Variation of electrode area in SnO$_x$ devices, d) Simulation of model depicted in Fig.1(a) with varying Capacitive values}
\label{fig 3}
\end{figure*}
During electroforming the partial breakdown of the dielectric can lead to distinct resistive levels, depending on the forming parameters \cite{Michalas}. This process is partially controlled and guided by conservative voltage pulsing. Throughout this process, it was found that the cut-off frequency was largely dependent on the registered resistive state of the device. Due to the nature of electroforming, damage to the dielectric is inevitable and small differences may arise between the capacitive values of different devices. Analysis of the data extracted from measurements of the devices' capacitive and resistive parameters showed a rather small device-to-device variability in capacitive values, demonstrating an overall good uniformity. On the other hand, resistive variability across all measured devices is found to be higher and thus constitutes the main driving force behind changes observed in cut-off frequency. It is important to note that line resistance was found to have a minimal impact on the devices' switching behavior.
By adhering to the resistive switching protocol discussed in section II.B it was possible to measure the device response in different resistive states. The incremental and symmetrical increase of switching voltages also enabled us to factor in the multi-bit functionality of some types of devices \cite{Stathopoulos}. The devices tested herein switch in a non-volatile manner, as confirmed via a short retention test performed with the ArC One system. Figure 2 depicts the impedance frequency response of a device for distinct memory-resistive states. The majority of devices tested herein were switchable when stimulated with potentials outside the zone of -1.5 to 1.5 V.
The cut-off frequency, identified in Figure 2, follows the slope with each subsequent switching event. This suggests that the dielectric undergoes a slight modification, which is not, however, sufficient for creating an observable change in capacitance. On the other hand, the resistive switching results in altering the cut-off, as expected when changing the resistive value of an RC circuit. This is depicted in the simulated results overlaid as lines on top of the switching points in Figure 2, whereby plotting the impedance response of this circuit and changing the material resistance, using the value received from measuring the device, leads to a change in cut-off frequency. Having established that metal oxide resistive switching devices possess an intrinsic low-pass filter response, it is now also clear that they can be programmed to have a variable cut-off frequency by changing the resistance of the device; thus single devices act as tunable low-pass filters.
Whilst the dynamic range of switching for any device is defined by its OFF/ON ratio, this ratio can be tuned by appropriately selecting the physical dimensions of devices (W and d) as well as the active core material. The DC characteristics of memristive devices do not seem to be heavily impacted by small changes in device size and thickness. Nonetheless, altering these parameters can have a substantial impact on the cut-off programmability window across the frequency axis. This enables optimising the development of variable low-pass filters for a range of frequencies, and broadening the field of possible applications where such a component can be used for trimming. To test the validity of this assumption and to partially control capacitance in devices, we have expanded the scope of this study to include changes in such key parameters. Section IV centers around testing of fabricated devices, categorized by parameter changed, namely, electrode area, dielectric material and dielectric thickness.
These resistive switching devices are initially fabricated as MIM capacitors and they obtain their memristive behaviour through electroforming; the characteristics defining their correspondent capacitance are thus still relevant post-electroforming. This assumption stems from the underpinning switching mechanism of such devices. A change in valence brought about by oxygen vacancy movement should not short or otherwise alter the capacitive characteristics of the device. The only change the devices sustain is the initial electroforming, which, although known to be destructive, does not result in a complete breakdown of the active layer, as shown in Figure 1(b,c). To check the validity of this assumption, key parameters were changed, such as, active layer (TiO$_x$ , TiO$_x$/Al$_2$O$_3$ and SnO$_x$), electrode area (a range from 10$\times$10 μm$^2$ up to 60$\times$60 μm$^2$) and dielectric thickness (15 nm and 25 nm).
\section{Impact of device characteristics}
\subsection{Variation of device area}
Devices were fabricated with an assortment of different electrode areas, from 10$\times$10 μm$^2$ to 60$\times$60 μm$^2$, effectively defining distinct effective capacitance values per device. Initial tests showed that devices with distinct capacitance values exhibit similar initial resistive states and electric programmability characteristics, as mentioned in section III. By increasing or decreasing capacitance the entire spectrum is translated parallel to the frequency axis. This is evident in Figure 3(a), where the cumulative initial resistances and cut-off frequencies for each electrode area subset of the TiO$_x$/Al$_2$O$_3$ devices form the slope whose range ultimately represents the programmability window of said devices. It has already been established that the variable cut-off frequency of each device must fall on this line. This behavior is consistent amongst devices of different size, and we further show that by scaling the area it is also possible to move this slope. This is also in excellent agreement with our simulated results, Figure 3(d), where the same equivalent model is employed, whilst only amending the corresponding capacitance value dictated by the material of choice.
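Treating the cell as a parallel-plate capacitor, $C = \varepsilon_0 \varepsilon_r A / d$, makes the parallel translation of the slope explicit: at fixed $R$, $f_c \propto 1/A$. A rough sketch of this scaling — the relative permittivity below is an assumed value, not one extracted from the devices:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 40.0       # assumed relative permittivity of the oxide stack
D = 25e-9          # 25 nm dielectric thickness

def capacitance(side_um):
    """Parallel-plate estimate for a square electrode of side `side_um` (um)."""
    area = (side_um * 1e-6) ** 2
    return EPS0 * EPS_R * area / D

def cutoff(R, C):
    return 1.0 / (2.0 * math.pi * R * C)

R = 100e3  # one fixed resistive state, ohms
fcs = {s: cutoff(R, capacitance(s)) for s in (10, 20, 40, 60)}
```

Doubling the electrode side quarters the cut-off frequency, i.e., the whole programmability window translates parallel to the frequency axis, consistent with Figure 3.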
In the case of bilayer prototype memristors, the resistive state of each device under test was not found to impact their cut-off slope. On the other hand, single-layer TiO$_x$ devices exhibit this behavior only when formed inside a specific resistance window, as shown in Figure 3(b). When formed in lower resistive states, their slope becomes steeper. This behavior is further analyzed in section IV.B. Lastly, all devices with a SnO$_x$ switching layer appear to act in a comparable fashion, yet these tend to form only in low resistive levels, as captured in Figure 3(c). Thus these samples do not exhibit the same wide programmability window seen in other devices. Nevertheless, they exhibit a steeper slope, just as TiO$_x$ devices do when formed in lower resistive states.
One notable observation that came as a result of changing the device area is that by electroforming, the devices do not lose their capacitive characteristics. Thus forming and subsequent switching in low resistive states does not lead to shorting the capacitor.
\begin{figure}[t]
\centering
\includegraphics[width = 9cm, height = 7cm]{Final submission pics/figure4.pdf}
\caption{TiO$_x$ devices formed in low resistance skew the slope downwards indicating a possible dielectric change which could explain this change of behaviour}
\label{fig 4}
\end{figure}
\subsection{Variation of dielectric}
Through fabricating devices with different dielectric layers the intent was, firstly, to confirm that the general frequency response of such devices does not change and, secondly, to observe what overall effect, if any, a change in dielectric constant has on the devices.
The switching behavior for devices with a single TiO$_x$ active core followed the characteristics observed in TiO$_x$/Al$_2$O$_3$ devices for initial resistances of over approximately 20 kΩ. Nonetheless, when such memristors were formed around the 20 kΩ level, differences in behavior were observed. In this case, the cut-off slope becomes steeper, thus indicating that a significant change in the dielectric of the devices has transpired. Figure 4 illustrates the cut-off frequencies plotted against the initial resistance of TiO$_x$ devices. Here, we have also added devices that formed at lower resistances for comparison. When comparing Figure 4 to Figure 3(b) it is evident that the slope becomes steeper for lower resistances. This could be attributed to the potential appearance of a second, steeper cut-off or a change in the dielectric. Similar observations for this specific device morphology were made by Michalas et al.\ \cite{Michalas}, indicating that TiO$_x$ devices formed in specific resistive ranges exhibit a distinct behavior from other devices formed in higher resistance regimes. It is possible that irreversible alteration to the dielectric could lead to this behavior, and this could mean a morphological change in the dielectric itself. For all other cases, where the dielectric does not change in a measurable and irreversible way, the behaviour is consistent.
Another observation is the deviation from a linear trend. Devices with a TiO$_x$/Al$_2$O$_3$ core stack show a stricter adherence to a linear fit. TiO$_x$ devices, on the other hand, show a bigger margin of deviation from linearity. This could be attributed to the forming process once again, as these deviations correspond to small departures from the expected capacitance for a given device area. It is possible that electroforming through the added Al$_2$O$_3$ layer leads to a more streamlined and predictable device behaviour, with gentler changes to the dielectric, in comparison with devices comprising a single-layer core. This is also supported by findings in earlier publications \cite{Stathopoulos2} \cite{Jeon}.
\begin{figure}
\centering
\includegraphics[width = 9cm, height = 7cm]{Final submission pics/figure5.pdf}
\caption{Devices with SnO$_x$ dielectric form in lower resistive ranges and exhibit repeatable switching. High resistive values lead to instability with applied frequency, which is only exacerbated in higher resistances. Legend shows the maximum voltage reached in each switching sweep.}
\label{fig 5}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width = 9cm, height = 7cm]{Final submission pics/figure6.pdf}
\caption{Cut-off slopes of devices with different dielectrics. All devices had a 20$\times$20 μm$^2$ electrode area. The results depicted in cyan are from devices with TiO$_x$ dielectric but a smaller dielectric thickness of 15 nm}
\label{fig 6}
\end{figure}
SnO$_x$ devices exhibit an overall steeper cut-off slope. Otherwise, their switching behaviour remains comparable to the other devices presented herein, meaning that semiconducting oxides as an active layer do not have an impact on it. One apparent change is that their impedance appears to be unstable in high resistive states, as observed in Figure 5; a characteristic also observed when running a retention test on devices at relatively high resistive states.
Stability in the low resistive state follows the behaviour of TiO$_x$/Al$_2$O$_3$ devices. The studied behaviour of devices with regard to the stimulus frequency could potentially help when trying to link different conduction mechanisms with devices that have different characteristics, as already shown for TiO$_x$ devices. Along this line, Figure 6 displays a compilation of the cut-off slopes for devices with 20$\times$20 μm$^2$ area and different dielectrics.
\begin{figure}[b]
\centering
\includegraphics[width = 9cm, height = 7cm]{Final submission pics/figure7.pdf}
\caption{A change in dielectric thickness results in a higher capacitance, thus altering the programmability window of the device and offering another way to tailor device behavior to the needs of prospective applications}
\label{fig 7}
\end{figure}
\subsection{Variation by changing dielectric thickness}
To investigate the effect of thickness in device behaviour, memristive cells with thinner dielectrics were also fabricated and tested. As expected, this increases the overall effective capacitance of the cells. Devices with 15 nm of TiO$_x$ dielectric were fabricated and their behaviour and cut-off was compared to the original TiO$_x$ devices, which had a 25 nm thick dielectric. Oxygen content and thickness of the dielectric are considered crucial parameters, which have an active role in device behavior \cite{Regoutz}. Generally, changes in these two parameters affect the ability to electroform the devices consistently. Nevertheless, after being electroformed, their frequency characteristics are consistent with what has already been discussed in previous sections. In Figure 7, a comparison between devices with 25 nm thick dielectric and devices with 15 nm thick dielectric is given. As expected in a parallel plate capacitor, a decrease in dielectric thickness leads to an increase in capacitance, thus making the cut-off slope move towards lower frequencies, in line with the simulated results in Figure 3(d). Thus, by changing dielectric thickness, the expected characteristics of devices can be fine tuned.
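In the same parallel-plate picture, $C \propto 1/d$, so thinning the dielectric from 25 nm to 15 nm raises $C$ by the thickness ratio $25/15 \approx 1.67$ and lowers $f_c$ by the same factor at fixed $R$ and area; the cut-off value used below is hypothetical:

```python
# Parallel-plate scaling only: f_c = 1/(2*pi*R*C) with C proportional to 1/d,
# hence f_c scales linearly with the dielectric thickness d at fixed R and area.
d_thick, d_thin = 25e-9, 15e-9
cap_ratio = d_thick / d_thin            # C grows by ~1.67x for the thinner film
fc_thick = 1.0e6                        # hypothetical 1 MHz cut-off at 25 nm
fc_thin = fc_thick * d_thin / d_thick   # moves down to 0.6 MHz at 15 nm
```

This is the direction of the shift seen in Figure 7: the thinner-dielectric devices sit lower on the frequency axis.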
\section{Conclusion}
In this paper, we presented results from the behavior of Metal Oxide memristive devices in alternating current conditions, for a range of frequencies from $10^3$ up to $10^7$ Hz. The electrical programmability of resistive switching carries over into the frequency range tested here with the devices presenting an RC low-pass filter-like behavior, with the cut-off frequency of the device being tuned by its resistive state.
This study expanded to account for most common device physical parameters, such as, area, dielectric material and dielectric thickness; a large range of samples were fabricated and characterized. It was found that changing parameters affecting the device's capacitance, such as device area and dielectric thickness, corresponds to a modulation of the frequency response via transferring the entire impedance plot parallel to the frequency axis. This was consistent with an RC empirical model description, as shown through the excellent agreement between simulated and measured data for different capacitance values. This lends credence to the initial hypothesis that these parameters could be used to control the window of programmability by disproportionately altering capacitance, while leaving resistance of devices largely untouched.
A change in dielectric leads to a distinct capacitance value while also changing the slope of the impedance plot; thus the choice of dielectric is a key parameter in setting the frequency response of tunable cells, offering opportunities to tailor this for specific applications. There are indications that other underpinning parameters which impact the DC characteristics of the devices, such as the conduction mechanism, may have an effect in AC conditions, as for example shown for HRS states in SnO$_x$ devices and for TiO$_x$ devices formed in a lower resistive range. Device area and dielectric thickness can each be used as a secondary parameter for tailoring the frequency response of a device in accordance with the application's needs. While area can be changed with minimal impact on device behaviour, the same cannot be said for dielectric thickness, where a change might make the electroforming process harder; thus the preferred parameter to change is electrode area.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\printbibliography
\end{document}
\section{Introduction}
The existence of a K\"ahler-Einstein metric on a compact complex manifold $M$ has been known
since 1970's in the case when $c_1(M) < 0$ by Aubin \cite{aubin76} and Yau \cite{yau78} where the K\"ahler class is the canonical class $K_M$, and in the case when $c_1(M) = 0$ by Yau \cite{yau78} where the K\"ahler class is arbitrary positive $(1,1)$-class.
In the remaining case when $c_1(M) > 0$, i.e. in the case when $M$ is a Fano manifold,
the existence of a K\"ahler-Einstein metric is characterized by a condition called the K-stability by
the recent works of
Chen-Donaldson-Sun \cite{CDS3} and Tian \cite{Tian12}.
The K-stability is a condition in geometric invariant theory where the GIT weight, called
the Donaldson-Futaki invariant \cite{donaldson02}, is defined extending an obstruction, now called
the classical Futaki invariant, obtained in \cite{futaki83.1}, \cite{futaki83.2}. The latter is defined for smooth compact
K\"ahler manifolds and is an obstruction to admit a
constant scalar curvature K\"ahler metrics (cscK metrics for short). Note that for a K\"ahler form
in the anti-canonical class on a Fano manifold, being a cscK metric is equivalent to being a
K\"ahler-Einstein metric. On the other hand,
the Donaldson-Futaki invariant is defined for possibly singular
central fibers of $\mathbf C^\ast$-equivariant degenerations, called the test configurations,
and a polarized K\"ahler manifold $(M,L)$ is said to be K-stable if the Donaldson-Futaki
invariant of the central fiber is non-negative for any test configuration, with equality holding exactly when
the test configuration is a product.
Note that for the product configurations the Donaldson-Futaki invariant coincides with the classical
Futaki invariant.
The Fano case is the core of the conjecture known
as the Yau-Tian-Donaldson conjecture stating that a polarized K\"ahler manifold $(M,L)$ should admit
a cscK metric with its K\"ahler form in $c_1(L)$ if and only if
$(M,L)$ is K-stable. In the K\"ahler-Einstein problem for the Fano case we take $L = K_M^{-1}$.
The Yau-Tian-Donaldson conjecture for the cscK problem with general polarizations remains unsolved.
There are many variants of the Yau-Tian-Donaldson conjecture. For example, K-stability
characterizations for K\"ahler-Ricci solitons and Sasaki-Einstein metrics have been obtained
respectively in \cite{DatarSzeke16} and \cite{CollinsSzeke15}.
It is usually difficult to check whether a manifold is K-stable since there are infinitely
many test configurations. However, in the cases with large symmetry groups checking K-stability
can be easier, see \cite{IltenSuss17}, \cite{Delcroix17}, \cite{Delcroix2016}. For alternate proofs for
the Yau-Tian-Donaldson conjecture for
the Fano case, other important contributions, recent further developments and applications, the reader is referred to the two survey papers
of Donaldson \cite{Donaldson2017a}, \cite{Donaldson2017b}.
The present survey paper focuses on extensions of the classical Futaki invariants for
K\"ahler-Ricci solitons, Sasaki-Einstein metrics and Einstein-Maxwell K\"ahler metrics.
Existence problems for these three types of metrics have a common feature that they
depend on the choice of a holomorphic
Killing vector field, and accordingly their obstructions have a parameter space consisting
of holomorphic Killing vector fields in an appropriate Lie algebra.
The Ricci solitons are self-similar solutions of the Ricci flow and an important object in the study
of singularity formations of the Ricci flow. On a compact K\"ahler manifold,
a K\"ahler-Ricci soliton is a K\"ahler metric $g$ satisfying
\begin{equation}\label{KRsoliton}
\mathrm{Ric}_g = g + L_{\mathrm{grad}f} g
\end{equation}
which is equivalent to
$$ \rho_g = \omega_g + i\partial{\overline \partial} f$$
where $f$ is a Hamiltonian function for a holomorphic Killing vector field $X$, i.e. $X = J\mathrm{grad}f$, and $\rho_g$ and $\omega_g$ are respectively
the Ricci form and the K\"ahler form of $g$. Since $\rho_g/2\pi$ represents the first
Chern class, if a K\"ahler-Ricci soliton exists, the compact manifold $M$ is necessarily a Fano
manifold. Note also that a Killing vector field on a compact K\"ahler manifold is necessarily holomorphic.
Given a Killing vector field $X$ we consider the toral group $T$ obtained by taking the closure of the flow generated by $X$,
and ask
if there is a $T$-invariant K\"ahler-Ricci soliton $g$ satisfying (\ref{KRsoliton}) with
$X = J\mathrm{grad}f$. This problem is reduced to solving a Monge-Amp\`ere type equation.
However, Tian and Zhu \cite{TZ02} showed that there is an obstruction $\mathrm{Fut}_X$ to solving (\ref{KRsoliton}).
Thus if one chooses an $X$ with non-vanishing $\mathrm{Fut}_X$ then one can never get a solution to the Monge-Amp\`ere equation. Tian and Zhu \cite{TZ02} showed that there is a twisted volume
functional $\mathrm{Vol}$ on the space of $X$ such that the derivative at $X$ of $\mathrm{Vol}$ is equal to $\mathrm{Fut}_X$:
\begin{equation}\label{derivative1}
d\mathrm{Vol}_X = \mathrm{Fut}_X.
\end{equation}
They further showed that the volume functional is proper and convex on the space of $X$.
Since holomorphic Killing vector fields on a compact K\"ahler manifold constitute a finite
dimensional vector space, the volume functional has a unique minimum on the space of $X$. This gives the right choice to solve the equation (\ref{KRsoliton}).
Sasaki-Einstein metrics caught considerable attention in mathematical physics through its
role in the AdS/CFT correspondence, and the volume minimization
is the key to find Sasaki-Einstein metrics. The Sasakian structure on an odd dimensional
manifold $S$ is by definition a Riemannian structure on $S$ such that its Riemannian cone
$C(S)$ has a K\"ahler structure. Fixing a complex structure on the cone $C(S)$, the deformation
of the Sasakian structure on $S$ is given by the deformation of the cone structure on $C(S)$,
namely the deformation of the radial function $r$. The Reeb vector field is then given by
$Jr\partial/\partial r$. To each Reeb vector field one can assign a Sasakian structure on $S$.
Thus one can define the volume functional $\mathrm{Vol}$ on the space of Reeb vector fields.
The volume depends only on the Reeb vector field and is independent of the choice of the Sasakian structure with the given Reeb vector field. This fact is similar to the fact in K\"ahler geometry
that the volume depends only on the K\"ahler class and is independent of the choice of the
K\"ahler form in the given K\"ahler class.
The space of Reeb vector fields is the interior of
the dual cone to the moment map image of the K\"ahler cone $C(S)$, and
the volume functional $\mathrm{Vol}$ is a homogeneous function on this space.
Thus we may consider a slice which gives a
bounded domain sitting inside the dual cone.
On the other hand the Sasaki-Einstein condition is equivalent to the
K\"ahler cone $C(S)$ being Ricci-flat,
and is also equivalent to the local transverse geometry of the Reeb flow being K\"ahler-Einstein
with positive scalar curvature.
One can then associate to each Reeb vector field $\xi$ an obstruction
$\mathrm{Fut}_\xi$ similarly to the Fano K\"ahler-Einstein problem \cite{FOW}, \cite{BGS}.
Martelli-Sparks-Yau \cite{MSY2}
show for transversely Fano Sasakian manifolds
\begin{equation}\label{derivative2}
d\mathrm{Vol}_\xi = \mathrm{Fut}_\xi.
\end{equation}
In the case when $S$ is toric Sasakian, meaning when the cone $C(S)$ is toric K\"ahler, Martelli-Sparks-Yau further show that $\mathrm{Vol}$ is a proper convex function on the slice
in the dual cone consisting of the Reeb vector fields
for which the volume functional $\mathrm{Vol}$ is defined.
Thus there is a unique minimum $\xi$, and it is shown in \cite{FOW} that, for any transversely Fano toric Sasakian manifold, there is a Sasaki-Einstein metric
whose Reeb vector field is the unique minimum $\xi$.
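As the simplest illustration of this toric volume minimization (the standard example from \cite{MSY2}), take the flat cone $C(S) = {\mathbf C}^n \setminus \{0\}$ over $S = S^{2n-1}$: for the Reeb vector field with weights $b = (b_1,\dots,b_n)$ the volume functional is proportional to $1/(b_1\cdots b_n)$, and on the slice $b_1+\dots+b_n = n$ (the transversely Fano normalization) the unique minimum is the round Reeb vector field $b=(1,\dots,1)$, recovering the round Sasaki-Einstein structure. The following Python sketch (the numerical routine and tolerances are our own choices, not taken from the text) verifies this numerically for $n=3$.

```python
# Volume minimization on the Sasaki cone of the flat cone over S^(2n-1),
# C(S) = C^n \ {0}: for the Reeb vector field with weights b = (b_1,...,b_n)
# the volume functional is proportional to 1/(b_1 ... b_n) (Martelli-Sparks-
# Yau); on the slice b_1 + ... + b_n = n the unique minimum should be the
# round Reeb vector field b = (1,...,1).
import numpy as np
from scipy.optimize import minimize

n = 3  # complex dimension of the cone; S = S^5

def vol(b_free):
    # parametrize the slice by (b_1, ..., b_{n-1}); b_n = n - sum(b_free)
    b = np.append(b_free, n - np.sum(b_free))
    if np.any(b <= 0):
        return np.inf  # outside the Sasaki cone
    return 1.0 / np.prod(b)

res = minimize(vol, x0=np.array([0.4, 1.9]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 2000})
b_min = np.append(res.x, n - np.sum(res.x))
```

By the arithmetic-geometric mean inequality the minimum value of $1/(b_1b_2b_3)$ on the slice is $1$, attained only at $(1,1,1)$, which is what the minimizer reproduces.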
Conformally K\"ahler Einstein-Maxwell metrics are relatively newer subject. The Einstein-Maxwell equation has been studied in general relativity in real dimension 4. In \cite{L1}, LeBrun
showed that, on a compact K\"ahler surface $(M,g)$, if there is a positive smooth function $f$ with
$J\mathrm{grad} f$ being a Killing vector field such that the Hermitian metric $\Tilde g = f^{-2}g$
has constant scalar curvature then $\Tilde g$ corresponds to a solution of the Einstein-Maxwell
equation. Thus, fixing a holomorphic Killing
vector field $K$ and a K\"ahler class $\Omega$, to find a K\"ahler form $\omega_g \in \Omega$
such that $\Tilde g = f_K^{-2} g$ has constant scalar curvature is a problem in K\"ahler geometry, where $f_K$ is the Hamiltonian
function of $K$ with respect to $\omega_g$. In fact, if $K = 0$ then the problem is exactly the same
as the Yau-Tian-Donaldson conjecture.
Apostolov and Maschler \cite{AM} further set the problem into the Donaldson-Fujiki picture,
and formulated an extension $\mathrm{Fut}_K$ of the classical Futaki invariant parametrized by $K$.
In \cite{AM}, such $\Tilde g$ is called a conformally K\"ahler, Einstein-Maxwell metric. But we
consider the problem of finding $(g,f_K)$ with $\omega_g$ in a fixed K\"ahler class, and therefore
it is more convenient to call such $g$ a (conformally) Einstein-Maxwell K\"ahler metric, or
preferably to omit the word ``conformally''.
We then showed in \cite{FO17} that the derivative at $K$ of
a suitably defined volume functional $\mathrm{Vol}$ on the space of $K$ satisfies
\begin{equation}\label{derivative3}
d\mathrm{Vol}_K = \mathrm{Fut}_K.
\end{equation}
However the volume functional is neither convex nor proper in general, and can have several
critical points.
In all these three cases, the critical points correspond to the cases when the classical Futaki
invariant vanishes. However, this vanishing may not be enough to obtain a solution; K-stability is
the next issue.
In sections 2, 3 and 4 we give more details on
K\"ahler-Ricci solitons, Einstein-Maxwell K\"ahler metrics and Sasaki-Einstein metrics respectively.
\vspace{0.3cm}
\noindent
Acknowledgment.\
The first author would like to thank Yau Mathematical Sciences Center
at Tsinghua University for its hospitality
where this survey paper was completed.
\section{K\"ahler-Ricci solitons}
In this section, we see how
the holomorphic vector field
with respect to which a K\"ahler-Ricci soliton may exist
is determined through the idea of volume minimization \cite{TZ02}.
Let $M$ be an $m$-dimensional Fano manifold.
A K\"ahler metric $g$ on $M$ with the K\"ahler form $\omega_g\in
2\pi c_1(M)$ is called a {\it K\"ahler-Ricci soliton}
if there exists a holomorphic vector field $X$ on $M$ such that
\begin{equation}\label{eq:2.1}
\rho_g-\omega_g=L_X\omega_g
\end{equation}
holds, where $\rho_g$ denotes the Ricci form of $g$ and $L_X$ is the Lie
derivative along $X$.
In particular, if $X=0$, $g$ is a K\"ahler-Einstein metric.
Since $\rho_g$ and $\omega_g$ represent $2\pi c_1(M)$,
there exists a real-valued smooth function $h_g$ such that
\begin{equation}\label{eq:2.2}
\rho_g-\omega_g=i\partial \overline{\partial} h_g.
\end{equation}
On the other hand, for any holomorphic vector field $X$, the $(0,1)$-form
$\iota_X\omega_g$ is $\overline{\partial}$-closed. Therefore, by the Hodge theorem,
there exists a unique complex-valued smooth function $\theta_X(g)$
such that
\begin{equation}\label{eq:2.3}
\iota_X\omega_g=i\overline{\partial} \theta_X(g),\ \ \
\int_Me^{\theta_X(g)}\omega_g^m=\int_M\omega_g^m.
\end{equation}
Hence we have
\begin{equation}\label{eq:2.4}
L_X\omega_g=i\partial \overline{\partial} \theta_X(g).
\end{equation}
By \eqref{eq:2.1}, \eqref{eq:2.2} and \eqref{eq:2.4},
a K\"ahler metric $g$ is a K\"ahler-Ricci soliton with respect to a
holomorphic vector field $X$ if and only if
$h_g-\theta_X(g)$ is constant.
It is difficult to determine $h_g-\theta_X(g)$ explicitly.
However, Tian and Zhu \cite{TZ02} proved that
the integral of $v(h_g-\theta_X(g))e^{\theta_X(g)}$ is independent of the
choice of $g$, where $v$ is a holomorphic vector field,
and it defines a holomorphic invariant.
\begin{thm}[\cite{TZ02}]\label{TZ-inv}
Let $\mathfrak h(M)$ be the Lie algebra which consists of all holomorphic vector fields on
$M$. For a K\"ahler form $\omega_g\in 2\pi c_1(M)$ and $X\in \mathfrak h(M)$,
we define a linear function $\mathrm{Fut}_X$ on $\mathfrak h(M)$ as
\begin{equation}\label{eq:2.5}
\mathrm{Fut}_X(v)=\int_M v(h_g-\theta_X(g))e^{\theta_X(g)}\omega_g^m,\ \ v\in \mathfrak h(M).
\end{equation}
Then $\mathrm{Fut}_X$ is independent of the choice of $\omega_g\in 2\pi c_1(M)$.
If $M$ admits a K\"ahler-Ricci soliton with respect to $X\in \mathfrak h(M)$,
then $\mathrm{Fut}_X$ vanishes identically on $\mathfrak h(M)$.
\end{thm}
Note here that when $X=0$, this holomorphic invariant coincides with
the Futaki invariant, which is an obstruction to the existence of K\"ahler-Einstein
metrics in $c_1(M)$ \cite{futaki83.1}.
We next see that
the invariant $\mathrm{Fut}_X$ can be
obtained as the first variation of some function on $\mathfrak h(M)$ \cite{TZ02}.
Such characterization of the holomorphic invariant plays a key role
in \S $3$ and \S $4$.
Let $X\in \mathfrak h(M)$. We renormalize the function $\theta_X(g)$ defined by
\eqref{eq:2.3} to $\tilde{\theta}_X(g)$ by adding a constant such that
\begin{equation}\label{eq:2.6}
\int_M\tilde{\theta}_X(g)e^{h_g}\omega_g^m=0.
\end{equation}
\begin{prop}[\cite{TZ02}]\label{1stvar}
Let a function $f$ on $\mathfrak h(M)$ be given by
\begin{equation}\label{eq:2.7}
f(Z)=\int_M e^{\tilde{\theta}_Z(g)}\omega_g^m.
\end{equation}
Then
$f(Z)$ is independent of the choice of K\"ahler metrics with the K\"ahler class
$2\pi c_1(M)$.
Moreover the differential of $f$ at $X$ in the direction of $v\in \mathfrak h(M)$
is a constant multiple of $\mathrm{Fut}_X(v)$.
\end{prop}
By this proposition, if there exists a K\"ahler-Ricci soliton
with respect to a holomorphic vector field $X$,
it is a critical point of $f$.
Let $\mathrm{Aut}^0(M)$ be the identity component of the
holomorphic automorphism group of $M$ and $K$ a maximal compact subgroup.
Then the Chevalley decomposition allows us to write
$\mathrm{Aut}^0(M)$ as a semi-direct product
\begin{equation}\label{eq:2.8}
\mathrm{Aut}^0(M)=\mathrm{Aut}_r(M)\ltimes R_u,
\end{equation}
where $\mathrm{Aut}_r(M)$ is a reductive algebraic subgroup of
$\mathrm{Aut}^0(M)$ which is the complexification of $K$, and
$R_u$ is the unipotent radical of $\mathrm{Aut}^0(M)$.
Let $\mathfrak h_r(M)$ and $\mathfrak h_u(M)$ be the Lie algebras of
$\mathrm{Aut}_r(M)$ and $R_u$ respectively. From the decomposition
\eqref{eq:2.8}, we obtain
\begin{equation}\label{eq:2.9}
\mathfrak h(M)=\mathfrak h_r(M)+\mathfrak h_u(M).
\end{equation}
\begin{prop}[\cite{TZ02}]\label{conv-proper}
Let $\mathop{\mathrm{Vol}}\nolimits$ be the restriction of $f$ to $\mathfrak h_r(M)$.
Then $\mathop{\mathrm{Vol}}\nolimits$ is a convex, proper real-valued function.
Hence there exists a unique minimum point $X_0\in \mathfrak h_r(M)$ of
$\mathop{\mathrm{Vol}}\nolimits$.
\end{prop}
By Proposition \ref{1stvar}, $\mathrm{Fut}_{X_0}$ vanishes identically on
$\mathfrak h_r(M)$. This minimum $X_0$ is the right choice to solve
the K\"ahler-Ricci soliton equation.
Note here that, combining Proposition \ref{conv-proper} with
the result of Saito \cite{Saito}, one sees that
$\mathrm{Fut}_{X_0}$ vanishes identically on $\mathfrak h(M)$.
For a toric Fano manifold,
we can calculate $X_0$ as follows
\cite{Wang-Zhu}.
Let $M$ be an $m$-dimensional toric Fano manifold with the K\"ahler class
$c_1(M)$ and $\Delta_M\subset {\mathbf R}^m$ the
corresponding moment polytope.
It is well-known that $\Delta_M$ is an
$m$-dimensional reflexive Delzant polytope.
Let $T$ be the maximal torus of $\mathrm{Aut}(M)$ and $\mathfrak h_0(M)$
its Lie algebra.
$T$ is isomorphic to the $m$-dimensional algebraic torus $({\mathbf C}^\times)^m$
and $\mathfrak h_0(M)$ is the maximal Abelian Lie subalgebra of $\mathfrak h(M)$.
If we take the affine logarithm coordinates
$(w_1,\dots,w_m)=(x_1+i\theta_1,\dots,x_m+i\theta_m)$
on $T\cong {\mathbf R}^m\times (S^1)^m$,
$\mathfrak h_0(M)$ is spanned by the basis
$\{\frac{\partial}{\partial w_1},\dots, \frac{\partial}{\partial w_m}\}$.
Since $X_0\in \mathfrak h_0(M)$, $X_0$ can be expressed in the form
\begin{equation}\label{eq:2.10}
X_0=\sum _{i=1}^m c_i\frac{\partial}{\partial w_i}.
\end{equation}
\begin{prop}[\cite{Wang-Zhu}]\label{toric-HVF}
The constants $c_1,\dots,c_m$ in \eqref{eq:2.10} are given by the
following conditions:
\begin{equation}\label{eq:2.11}
\int_{\Delta_M}
y_i\exp \left\{\sum_{l=1}^m c_ly_l\right\}\,dy=0,\ \ i=1,\dots,m.
\end{equation}
\end{prop}
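For example, for the one point blow up of ${\mathbf C} {\mathbf P}^2$ the moment polytope (in the standard reflexive normalization, which we assume here) is the quadrilateral with vertices $(-1,0)$, $(0,-1)$, $(2,-1)$, $(-1,2)$. By the symmetry $y_1\leftrightarrow y_2$ one may take $c_1=c_2=c$ in \eqref{eq:2.10}, so \eqref{eq:2.11} reduces to a single equation in $c$. The following Python sketch (our own numerics; the sign of $c$ depends on the chosen conventions) solves it by bracketed root-finding.

```python
# Solve the Wang-Zhu condition (2.11) for the one point blow up of CP^2.
# Its moment polytope (reflexive normalization) is the quadrilateral with
# vertices (-1,0), (0,-1), (2,-1), (-1,2); the symmetry y1 <-> y2 reduces
# (2.11) to one equation for c with X_0 = c (d/dw1 + d/dw2).
import numpy as np
from scipy.integrate import dblquad
from scipy.optimize import brentq

def g(c):
    # integral of y1 * exp(c*(y1+y2)) over the polytope
    # {y1 >= -1, y2 >= -1, -1 <= y1+y2 <= 1}
    val, _ = dblquad(lambda y2, y1: y1 * np.exp(c * (y1 + y2)),
                     -1.0, 2.0,
                     lambda y1: max(-1.0, -1.0 - y1),
                     lambda y1: 1.0 - y1)
    return val

# g is increasing (it is a directional derivative of the convex function
# (2.7) along the diagonal) and g(0) = 1/3 > 0, so the root is negative.
c_star = brentq(g, -3.0, 0.0, xtol=1e-10)
```

The root $c_\ast$ is the coefficient of the soliton vector field $X_0$ in \eqref{eq:2.10}; that it is non-zero reflects the non-vanishing of the classical Futaki invariant of this manifold.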
\section{Einstein-Maxwell K\"ahler geometry.}
In this section, we first introduce the notion
of conformally K\"ahler, Einstein-Maxwell (cKEM for short)
metrics defined by Apostolov-Maschler \cite{AM}
and give non K\"ahler examples
of cKEM metrics in any dimension.
We then define an obstruction to the existence of cKEM metrics
called cKEM-Futaki invariant and consider it from the view point
of volume minimization.
At the end of this section we give some results of computations
on toric surfaces.
Let $(M,J)$ be a compact K\"ahler manifold. We call a Hermitian metric
$\Tilde{g}$ on $(M,J)$
a {\it conformally K\"ahler, Einstein-Maxwell metric}
if it satisfies the following three conditions:
(a) There exists a positive smooth function $f$ on $M$ such that
$g=f^2\Tilde{g}$ is K\"ahler.
(b) The Hamiltonian vector field $K=J\mathrm{grad}_gf$
is Killing for both $g$ and $\Tilde{g}$.
(c) The scalar curvature $s_{\Tilde{g}}$ of $\Tilde{g}$ is constant.
As we mentioned in the Introduction,
we call the K\"ahler metric $g$ in (a)
an {\it Einstein-Maxwell K\"ahler metric}.
By the definition above, cscK metrics are cKEM metrics.
However we consider them as trivial cKEM metrics.
The notion of cKEM metrics was introduced by
Apostolov-Maschler in \cite{AM}
as a generalization of strongly Hermitian solutions
of the Einstein-Maxwell equation.
We review some results by LeBrun on
strongly Hermitian solutions, see \cite{L1}, \cite{L2}.
Let $M$ be a compact manifold.
A pair $(g,F)$ of a Riemannian metric $g$ and a real $2$-form $F$ is called
a solution of the
{\it Einstein-Maxwell equation} if it satisfies
$$
dF=0,\ d*_g F=0,\ [\mathrm{Ric}_g+F\circ F]_0=0,
$$
where $(F\circ F)_{jk} =
F_j\,^\ell F_{\ell k}$ and $[\ ]_0$ denotes the trace free part.
This equation is the Euler-Lagrange equation of the following
functional which is studied in general relativity:
$$
(g,F)\mapsto \int_M \left(s_g+\lvert F\rvert^2_g \right) dv_g.
$$
LeBrun investigated the Einstein-Maxwell equation
in detail when $M$ is a complex surface;
in particular, he introduced the notion of strongly Hermitian solutions:
Let $(g,F)$ be a solution of Einstein-Maxwell equation on a
complex surface $(M,J)$.
It is called a
{\it strongly Hermitian solution} if
it satisfies
$$
\mathrm{Ric}_g(J\cdot,J\cdot)=\mathrm{Ric}_g(\cdot, \cdot),\ \
F(J\cdot,J\cdot)=F(\cdot,\cdot).
$$
LeBrun \cite{L1} pointed out that
the metric component of a strongly Hermitian solution is a
cKEM metric.
Conversely, he also showed that for a cKEM metric $\tilde{g}$, one obtains
a strongly Hermitian solution
$$
(\tilde{g},\omega_g+\frac12 f^{-2}[\rho_{\tilde{g}}]_0).
$$
We next give some examples
of cKEM metrics other than cscK metrics.
Typical known examples are conformally K\"ahler,
Einstein metrics by Page \cite{Page78} on the one point
blow up of ${\mathbf C} {\mathbf P}^2$, by Chen-LeBrun-Weber
\cite{ChenLeBrunWeber} on the two point blow up
of ${\mathbf C} {\mathbf P}^2$, by Apostolov-Calderbank-Gauduchon \cite{ACG15}
on $4$-orbifolds and by B\'erard-Bergery \cite{BB82}
on ${\mathbf C} {\mathbf P}^1$-bundle over Fano K\"ahler-Einstein
manifolds.
Non-Einstein cKEM examples are constructed by LeBrun
\cite{L1}, \cite{L2} showing that there are ambitoric examples
on ${\mathbf C} {\mathbf P}^1\times {\mathbf C} {\mathbf P}^1$ and the one point
blow up of ${\mathbf C} {\mathbf P}^2$, and by Koca-T\o nnesen-Friedman \cite{KT}
on ruled surfaces of higher genus. The authors extended
LeBrun's construction on ${\mathbf C} {\mathbf P}^1\times {\mathbf C} {\mathbf P}^1$
to ${\mathbf C} {\mathbf P}^1\times M$ where $M$ is a compact
cscK manifold of arbitrary dimensions as follows \cite{FO17}.
Let $g_1$ be an $S^1$-invariant metric on ${\mathbf C} {\mathbf P}^1$ with
$\mathop{\mathrm{Vol}}\nolimits({\mathbf C} {\mathbf P}^1,g_1)=2\pi$ and
$g_2$ a K\"ahler metric with $s_{g_2}=c$ on an $(m-1)$-dimensional compact complex manifold $M$.
The $S^1$-invariant metric $g_1$
can be written in the action-angle coordinates $(t,\theta)\in
(a,a+1)\times (0,2\pi]$ as
$$g_1=\frac{dt^2}{\Psi(t)}+\Psi(t)d\theta^2$$
for some smooth function $\Psi(t)$ which satisfies
the following boundary condition:
\begin{equation}\label{bc}
\Psi(a)=\Psi(a+1)=0,\ \ \Psi'(a)=-\Psi'(a+1)=2, \Psi>0\ \text{on }(a,a+1).
\end{equation}
Then we see that the constant scalar curvature equation
$s_{\tilde{g}}=d$
for
the metric $\tilde{g}=(g_1+g_2)/t^2$ on ${\mathbf C} {\mathbf P}^1\times M$
reduces to the following ODE:
\begin{equation}\label{eq:3.2}
t^2\Psi''-2(2m-1)t\Psi'+2m(2m-1)\Psi=ct^2-d.
\end{equation}
\begin{thm}[\cite{FO17}]
Let $c>8m-8$. Then there exist $a>0$ and $d>0$ such that
there exists a unique solution $\Psi$ of the ODE \eqref{eq:3.2}
which satisfies the condition \eqref{bc}.
As a result,
for any K\"ahler metric $g_2$ with $s_{g_2}=c$ on an $(m-1)$-dimensional compact complex manifold $M$,
$$
\tilde{g}=\frac{1}{t^2}\left(\frac{dt^2}{\Psi(t)}+\Psi(t)d\theta^2+g_2\right)
$$
is an $S^1$-invariant cKEM metric on ${\mathbf C} {\mathbf P}^1\times M$.
\end{thm}
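The equation \eqref{eq:3.2} is an Euler equation, so its general solution can be written down explicitly: the indicial roots are $2m-1$ and $2m$, and for $m\ge 2$ a particular solution is $\frac{c\,t^2}{2(2m-3)(m-1)}-\frac{d}{2m(2m-1)}$. The following sympy sketch (our own consistency check, not part of the existence proof in \cite{FO17}) verifies that this general solution satisfies \eqref{eq:3.2} identically in $m$.

```python
# Consistency check: the Euler equation (3.2),
#   t^2 Psi'' - 2(2m-1) t Psi' + 2m(2m-1) Psi = c t^2 - d,
# has homogeneous solutions t^(2m-1), t^(2m) and, for m >= 2, the
# particular solution c t^2 / (2(2m-3)(m-1)) - d / (2m(2m-1)).
import sympy as sp

t, m, c, d, C1, C2 = sp.symbols("t m c d C1 C2", positive=True)

Psi = (C1 * t**(2*m - 1) + C2 * t**(2*m)
       + c * t**2 / (2 * (2*m - 3) * (m - 1))
       - d / (2 * m * (2*m - 1)))

# residual of (3.2); it should simplify to zero identically
residual = sp.simplify(
    t**2 * sp.diff(Psi, t, 2) - 2*(2*m - 1)*t*sp.diff(Psi, t)
    + 2*m*(2*m - 1)*Psi - (c*t**2 - d))
```

The boundary condition \eqref{bc} then becomes a system of four equations for $(C_1,C_2,a,d)$, whose solvability for $c>8m-8$ is the content of the theorem.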
We next consider the existence problem of cKEM metrics.
Let $(M,J)$ be a compact complex manifold of $\dim_{\mathbf C} M=m$.
We fix a compact subgroup $G\subset \mathrm{Aut}(M,J)$,
a K\"ahler class $\Omega$, $K\in \mathfrak g$ and a
sufficiently large $a\in {\mathbf R}$.
Denote by $\mathcal K^G_\Omega$ the space of $G$-invariant K\"ahler
metrics $g$ with $\omega_g\in \Omega$. For $g\in \mathcal K^G_\Omega$,
there exists a unique function $f_{K,a,g}\in C^\infty(M)$ satisfying the following two
conditions:
\begin{equation}\label{eq:3.3}
\iota_K\omega_g=-df_{K,a,g},\ \int_M f_{K,a,g}\frac{\omega_g^m}{m!}=a.
\end{equation}
Note here that, for fixed $(K,a)$, $\min\{f_{K,a,g}(x)\,|\,x\in M \}$ is independent of
$g\in \mathcal K^G_\Omega$, see \cite{AM}.
So if we choose $a$ sufficiently large, $f_{K,a,g}$ is positive for any
$g\in \mathcal K^G_\Omega$.
Then we can ask the following existence problem:
does there exist a K\"ahler metric $g$ in $\mathcal K^G_\Omega$
such that $\tilde{g}_{K,a}=f^{-2}_{K,a,g}g$ is a cKEM metric?
When $K=0$, this is just the existence problem of
cscK metrics in $\mathcal K^G_\Omega$.
As a generalization of the Futaki invariant
\cite{futaki83.1}, \cite{futaki83.2},
Apostolov-Maschler \cite{AM} defined
the following integral invariant for non-zero $K$.
\begin{thm}[\cite{AM}]\label{cKEM-Futaki}
The linear function
\begin{equation}\label{eq:3.4}
\mathrm{Fut}^G_{\Omega,K,a}:\mathfrak g\to {\mathbf R},\ \
\mathrm{Fut}^G_{\Omega,K,a}(H):=
\int_M
\dfrac{s_{\tilde{g}_{K,a}}-c_{\Omega,K,a}}{f_{K,a,g}^{2m+1}}
f_{H,b,g}
\dfrac{\omega_g^m}{m!},
\end{equation}
is independent of the choice of K\"ahler metric $g\in \mathcal K^G_\Omega$
and $b\in {\mathbf R}$. Here
\begin{equation}\label{eq:3.5}
c_{\Omega,K,a}:=
\dfrac{\displaystyle{\int_M}s_{\tilde{g}_{K,a}}f_{K,a,g}^{-2m-1}\dfrac{\omega_g^m}{m!}}
{\displaystyle{\int_M}f_{K,a,g}^{-2m-1}\dfrac{\omega_g^m}{m!}}
\end{equation}
is a constant which is independent of the choice of $g\in \mathcal K^G_\Omega$.
If there exists a K\"ahler metric $g\in \mathcal K^G_\Omega$
such that $\tilde{g}_{K,a}$ is a cKEM metric, then
$\mathrm{Fut}^G_{\Omega,K,a}$ vanishes identically.
\end{thm}
We call this linear function $\mathrm{Fut}^G_{\Omega,K,a}$ the {\it cKEM-Futaki invariant}
for $(K,a)$.
We notice here that the cKEM-Futaki invariant is parametrized by the pair
$(K,a)$. This situation bears resemblance to the holomorphic invariant
\eqref{eq:2.5} which is an obstruction to the existence of K\"ahler-Ricci solitons.
In fact, we now see that the cKEM-Futaki invariant can be
characterized as the first variation of the volume function.
To that end, we recall that constant scalar curvature Riemannian metrics
can be characterized as follows.
Let $M$ be a compact manifold with $n=\dim M\ge 3$ and
$Riem(M)$ the set of all Riemannian metrics on $M$.
The scalar curvature $s_{g_0}$ of a Riemannian metric $g_0\in Riem(M)$
is constant if and only if $g_0$ is a critical point of the following
normalized Einstein-Hilbert functional on the conformal class of $g_0$:
\begin{equation}\label{eq:3.6}
EH(g):=\dfrac{\displaystyle{\int_M s_gdv_g}}{\displaystyle{\left(
\mathop{\mathrm{Vol}}\nolimits(M,g)\right)^{\frac{n-2}{n}}}}
\end{equation}
In our case, this functional gives the ``integral'' of the cKEM-Futaki invariant!
\begin{prop}[\cite{AM}]\label{inv-of-EH}
For a fixed $(K,a)$,
$EH(\tilde{g}_{K,a})$ is independent of the choice of $g\in \mathcal K^G_\Omega$.
\end{prop}
As a consequence, if there exists $g\in \mathcal K^G_\Omega$ such that
$\tilde{g}_{K,a}$ is a cKEM metric, then the pair $(K,a)$
is a critical point of the function
\begin{equation}\label{eq:3.7}
(K,a)\mapsto EH(K,a):=EH(\tilde{g}_{K,a}).
\end{equation}
The set of pairs
$$\mathcal P^G_\Omega
:=\{
(K,a)\in \mathfrak g\times {\mathbf R}\,|\,
f_{K,a,g}>0,\ g\in \mathcal K^G_\Omega
\}
$$
is a cone in the finite dimensional real vector space $\mathfrak g\times {\mathbf R}$.
Since the normalized Einstein-Hilbert functional is scale invariant,
the function $EH$ on $\mathcal P^G_\Omega$
reduces to the function on the quotient space $\mathcal P^G_\Omega/{\mathbf R}_+$.
If we choose representatives normalized as follows,
$EH$ can be represented as a power of the volume function.
We define a constant
$d_{\Omega,K,a}$ by
\begin{equation}\label{eq:3.8}
d_{\Omega,K,a}:=\dfrac{\displaystyle{\int_M s_{\tilde{g}_{K,a}}dv_{\tilde{g}_{K,a}}}}
{\mathop{\mathrm{Vol}}\nolimits(M,\tilde{g}_{K,a})}
=
\dfrac{\displaystyle{\int_M}s_{\tilde{g}_{K,a}}f_{K,a,g}^{-2m}\dfrac{\omega_g^m}{m!}}
{\displaystyle{\int_M}f_{K,a,g}^{-2m}\dfrac{\omega_g^m}{m!}}.
\end{equation}
By the argument in \cite{AM},
$d_{\Omega,K,a}$ is independent of the choice of $g\in \mathcal K^G_\Omega.$
Note here that, for general $(K,a)\in \mathcal P^G_\Omega$,
$c_{\Omega,K,a}\not=d_{\Omega,K,a}$. However if there exists a cKEM
metric $\tilde{g}_{K,a}$ then $c_{\Omega,K,a}=d_{\Omega,K,a}.$
Hence $c_{\Omega,K,a}-d_{\Omega,K,a}$ gives an obstruction to
the existence of cKEM metric $\tilde{g}_{K,a}$.
If we set
$$
\tilde{\mathcal P}^G_\Omega(\gamma):=
\{(K,a)\in \mathcal P^G_\Omega\,|\,
d_{\Omega,K,a}=\gamma\}
$$
for a constant $\gamma$, then
\begin{equation}\label{eq:3.9}
EH(K,a)=\gamma \mathop{\mathrm{Vol}}\nolimits (K,a)^{\frac{1}{m}}:=\gamma \mathop{\mathrm{Vol}}\nolimits (\tilde{g}_{K,a})^{\frac{1}{m}}
\end{equation}
on
$\tilde{\mathcal P}^G_\Omega(\gamma)$.
By the first variation formula of the normalized Einstein-Hilbert functional
(cf. \cite{B}), we have
\begin{equation}\label{eq:3.10}
\frac{d}{dt}_{|t=0}EH(K+tH,a)=
\dfrac{2-2m}{\mathop{\mathrm{Vol}}\nolimits(K,a)^{\frac{m-1}{m}}}\int_M\left(\frac{s_{\tilde{g}_{K,a}}-d_{\Omega,K,a}}
{f_{K,a,g}
^{2m+1}}\right)f_{H,0,g}\frac{\omega_g^m}{m!}
\end{equation}
and
\begin{equation}\label{eq:3.11}
\frac{d}{dt}_{|t=0}EH(K,a+tb)=\dfrac{2-2m}{\mathop{\mathrm{Vol}}\nolimits(K,a)^{\frac{m-1}{m}}}
(c_{\Omega,K,a}-d_{\Omega,K,a})\int_M
\frac{1}{f_{K,a,g}^{2m+1}}\frac{\omega_g^m}{m!}.
\end{equation}
Therefore cKEM metrics have the following volume minimizing property.
\begin{thm}[\cite{FO17}]\label{cKEM-volmin}
Suppose that there exists a
K\"ahler metric $g\in \mathcal K^G_\Omega$ such that
$\tilde{g}_{K,a}$ is a cKEM metric for
$(K,a)\in \tilde{\mathcal {P}}^G_\Omega(\gamma)$.
Then $(K,a)$ is a critical point of the volume function
$\mathop{\mathrm{Vol}}\nolimits:\tilde{\mathcal P}^G_\Omega(\gamma)\to {\mathbf R}$.
Further, $(K,a)$ is a critical point of $\mathop{\mathrm{Vol}}\nolimits$ if and only if
$\mathrm{Fut}^G_{\Omega,K,a}\equiv 0$.
\end{thm}
For example,
let $(M,J,g)$ be an $m$-dimensional compact toric K\"ahler manifold.
We denote by $\Delta\subset {\mathbf R}^m$ the moment polytope.
Then we see that
\begin{equation}\label{eq:3.12}
EH(K,a)=
\frac{4\pi}{(m!)^{\frac{1}{m}}}
\frac
{\displaystyle{\int_{\partial \Delta}\frac{1}{f_{K,a}^{2m-2}}d\sigma}}
{\displaystyle{\left(\int_\Delta \frac{1}{f_{K,a}^{2m}}d\mu \right)^{\frac{m-1}{m}}}}
\end{equation}
for
\begin{equation}\label{eq:3.13}
(K,a)\in \mathcal P^{T^m}_\Delta
\simeq
\{
f_{K,a}(\mu):=\sum_{i=1}^m K_i\mu_i+a\,|\,
f_{K,a}>0\text{ on }\Delta
\}
\end{equation}
(cf. \cite{AM} or \cite{FO17}).
Therefore, when $m=2$, we want to know
the critical points of
\begin{equation}\label{eq:3.14}
EH(a,b,c)^2=
8\pi^2
\frac
{\displaystyle{\left(\int_{\partial \Delta}\frac{1}{(a\mu_1+b\mu_2+c)^{2}}d\sigma\right)^2}}
{\displaystyle{\int_\Delta \frac{1}{(a\mu_1+b\mu_2+c)^{4}}d\mu }}
\end{equation}
where $a\mu_1+b\mu_2+c$ is positive on $\Delta$.
For ${\mathbf C} {\mathbf P}^2,{\mathbf C} {\mathbf P}^1\times {\mathbf C} {\mathbf P}^1$
and the one point blow up of ${\mathbf C} {\mathbf P}^2$,
we summarize the results of our computations.
\vspace{4mm}
$\bullet\ M={\mathbf C} {\mathbf P}^2:$
In this case, up to scale and translations, $\Delta$ is the convex hull of the three points
$(0,0),(1,0)$ and $(0,1)$. The only critical point of the function $EH$ on
$\mathcal P^{T^2}_\Delta/{\mathbf R}_+$
is $[(0,0,1)]$.
\vspace{3mm}
$\bullet\ M={\mathbf C} {\mathbf P}^1\times {\mathbf C} {\mathbf P}^1:$
Let $\Delta_p$ be the convex hull of $(0,0),(p,0),(p,1)$ and $(0,1)$, where
$p\ge 1$.
When $1\le p\le 2$, $EH$ has the unique critical point
$[(0,0,1)]$.
On the other hand, when $p>2$, there exist three critical points
$$
[(0,0,1)],\ \left[\left(
\pm 1,0,\frac12\left(
\frac{p^{\frac32}}{\sqrt{p-2}}\mp p
\right)
\right)\right].
$$
We emphasize that this result shows that
the volume function is not convex unlike the case
of K\"ahler-Ricci solitons and of Sasaki-Einstein metrics, see \S $2$ and \S $4$.
\vspace{3mm}
$\bullet\ M=$ one point blow up of ${\mathbf C} {\mathbf P}^2:$
Let $\Delta _p$ be the convex hull of $(0,0),(p,0),(p,1-p)$ and
$(0,1)$, where $0<p<1$.
For $0<p<1$,
\begin{equation}\label{eq:3.15}
\left[
\left(
1,0,\frac{p(1-\sqrt{1-p})}{2\sqrt{1-p}+p-2}
\right)
\right]
\end{equation}
is a critical point of $EH$.
When $\frac89 <p<1$ there are the following two more critical points
\begin{equation}\label{eq:3.16}
\left[
\left(
-1,0,
\frac{p(3p\pm \sqrt{9p^2-8p})}{2(p\pm \sqrt{9p^2-8p})}
\right)
\right].
\end{equation}
Let $0<\alpha <\beta<1$ be the real roots of
$$
F(p):=p^4-4p^3+16p^2-16p+4=0.
$$
When $0<p<\alpha$, there are the following two critical points
\begin{equation}\label{eq:3.17}
\left[
\left(
p^2-4p+2\pm \sqrt{F(p)},\pm 2\sqrt{F(p)},
p^2+2p-2\mp \sqrt{F(p)}
\right)
\right].
\end{equation}
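The first family \eqref{eq:3.15} can also be checked numerically from \eqref{eq:3.14}. Since $EH^2$ only involves $f^{-2}$ and $f^{-4}$, we may replace the representative $(1,0,c)$, whose last entry is negative, by $(-1,0,q)$ with $q=-c$, so that $f=q-\mu_1>0$ on $\Delta_p$. With $b=0$ all integrals reduce to one-variable integrals; on the slanted edge $\mu_1+\mu_2=1$ the measure $d\sigma$ is the lattice measure, which projects to $d\mu_1$. The following Python sketch (our own numerics, for $p=1/2$; the root bracket is an ad hoc choice) locates the critical value of $q$ and compares it with \eqref{eq:3.15}.

```python
# Numerical check of the critical point (3.15) on the one point blow up of
# CP^2 for p = 1/2, using the representative (-1, 0, q) with q = -c and
# f = q - mu1 > 0 on Delta_p.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

p = 0.5

def logEH2(q):
    f = lambda x: q - x
    # boundary: the edges mu2 = 0 and mu1 + mu2 = 1 both project to
    # mu1 in [0, p]; the edge mu1 = 0 has length 1, the edge mu1 = p
    # has lattice length 1 - p
    bd = (2.0 * quad(lambda x: f(x) ** -2, 0.0, p)[0]
          + f(0.0) ** -2 + (1.0 - p) * f(p) ** -2)
    # interior: the width of Delta_p over mu1 = x is 1 - x
    I = quad(lambda x: (1.0 - x) * f(x) ** -4, 0.0, p)[0]
    return 2.0 * np.log(bd) - np.log(I)  # log EH^2 up to an additive constant

def D(q, h=1e-4):
    # d/dq of log EH^2; its zeros are the critical points along (-1, 0, q)
    return (logEH2(q + h) - logEH2(q - h)) / (2.0 * h)

# predicted critical value from (3.15), with q = -c
q_pred = p * (1.0 - np.sqrt(1.0 - p)) / (2.0 - p - 2.0 * np.sqrt(1.0 - p))
q_star = brentq(D, 1.2, 3.0, xtol=1e-9)
```

For $p=1/2$ the predicted value is $q=1+1/\sqrt 2\approx 1.7071$, and the numerically located critical point agrees with it.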
An extension of Lichnerowicz-Matsushima theorem asserting the reductiveness of the
automorphism group on a cKEM manifold is obtained in \cite{FO_reductive17} and \cite{Lahdili17}.
\section{Sasakian Geometry.}
A Sasakian structure is often referred to as an odd dimensional analogue of the K\"ahler structure.
It roughly consists of a contact structure, a Riemannian structure compatible with the contact structure
and an almost complex structure on the
contact bundle.
There are many equivalent definitions, but the following one is the simplest and most rigorous.
From the Riemannian point of view, a Sasakian manifold
is a Riemannian manifold $(S, g)$
whose cone manifold
$(C(S), \overline g)$ with $C(S) \cong S\times {\mathbf R}^+$ and $\overline g = dr^2 + r^2g$
is K\"ahler, where $r$ is the standard coordinate on ${\mathbf R}^+$.
In this paper we always assume $S$ is closed and connected.
From the definition, $S$ is odd-dimensional and we put $\dim S = 2m + 1$, and thus
$\dim_{\mathbf C} {C(S)} = m+1$. $S$ is identified with the submanifold $\{r=1\} \subset C(S)$.
The K\"ahler form on $C(S)$ is given by $i\partial{\overline \partial} r^2$. From this we see that,
fixing the holomorphic structure on $C(S)$,
the Sasakian structure is determined by the radial function $r$ since the Riemannian structure
is induced from the K\"ahler structure of $C(S)$.
We consider the deformations of the Sasakian structure on $S$
fixing the complex structure $J$ on $C(S)$.
On the other hand, $S$ also inherits a contact structure
with the contact form
$$\eta = (i({\overline \partial} - \partial) \log r)|_{r=1}.$$
It is well known \cite{BGbook} that the Sasakian structure is determined
by the transverse K\"ahler structure of the flow generated by the Reeb vector field $\xi$ of $\eta$.
The Reeb vector field $\xi$ is obtained by restricting the vector field $\tilde \xi := J(r\frac{\partial}{\partial r})$ on $C(S)$ to
$S = \{r=1\} \subset C(S)$.
This is a standard fact known as the ``K\"ahler sandwich'': The Sasakian structure is
equivalently given by the K\"ahler strcuture on the cone or given by the transverse
K\"ahler structure on the local orbit spaces of the Reeb flow, see \cite{BGbook} for the
detail.
From this we see that the Sasakian structure can be deformed
by the deformation of the choice of Reeb vector field in the Lie algebra $\mathrm{Lie}(T_\xi)$ of
the torus $T_\xi$ obtained by taking the closure of the flow generated by $\xi$ since the
deformed Reeb flow still has transverse K\"ahler structure.
Then by choosing a rational point in $\mathrm{Lie}(T_\xi)$ we obtain a Reeb vector field generating
an $S^1$-action on an ample line bundle over an orbifold. Thus the underlying space of $C(S)$ is an affine
algebraic variety $\mathcal A$ with only one singular point at the apex.
Let $G$ be the group of biholomorphisms of $\mathcal A = C(S)$ preserving the cone structure, that is,
$\mathrm{Lie}(G)$ consists of the real parts of holomorphic vector fields on $\mathcal A$ commuting with $r\frac{\partial}{\partial r}$.
Let $T$ be the maximal torus of $G$ containing $T_\xi$. Note here that it is a standard fact that $r\frac{\partial}{\partial r}$
preserves $J$ and that $\tilde \xi - iJ\tilde \xi$ is a holomorphic vector field. The deformation space of $T$-invariant Sasakian structures containing the Sasakian structure of $S$,
or equivalently $T$-invariant K\"ahler cone structures on $\mathcal A$, is given by
the space $\mathcal R$ of $T$-invariant smooth positive functions $r : \mathcal A \to
\mathbf R$ such that $i\partial{\overline \partial} r^2$ is a positive $(1,1)$-form:
$$ \mathcal R := \{ r : \mathcal A \to \mathbf R\ |\ T\text{-invariant},\ i\partial{\overline \partial} r^2 > 0\}.$$
Since the Reeb vector field $\tilde \xi = Jr\frac{\partial}{\partial r}$ is the real part of a holomorphic Killing vector field and $T$ is the maximal torus in $G$,
$Jr\frac{\partial}{\partial r}$ is in $\mathrm{Lie}(T)$ for any $r \in \mathcal R$.
The set of all Reeb vector fields corresponding to $r \in \mathcal R$ is the dual cone of the cone obtained as the moment map image of
$C(S)$, and is called the {\it Sasaki cone}.
We define the volume functional $\mathrm{Vol} : \mathcal R \to \mathbf R$ by
\begin{equation}\label{vol1}
\mathrm{Vol}(r) = \mathrm{vol}(S_r)
\end{equation}
where $\mathrm{vol}(S_r)$ denotes the volume of the Sasakian manifold $S_r = \{r = 1\}$ for $r \in \mathcal R$.
Let $\{r(t)\}_{-\epsilon < t < \epsilon}$ be a one-parameter family in $\mathcal R$ with $r(0) = r$, and put
$Y := \frac{d}{dt}\vert_{t=0} \tilde \xi(t)$ where $\tilde \xi(t) = Jr(t)\frac{\partial}{\partial r(t)}$.
Then the first variation of $\mathrm{Vol}(r)$ is given by
\begin{equation}\label{vol2}
\frac{d \mathrm{Vol}(r(t)) }{dt}\vert_{t=0} = -4(m+1)\int_{S_r} \eta(Y) dvol_r
\end{equation}
where $dvol_r$ is the volume element of $S_r$, see \cite{FOW}, Proposition 8.3, or \cite{MSY2}, Appendix C1.
The second variation is given by
\begin{equation}\label{vol3}
\frac{d}{dt}\vert_{t=0}\left( -4(m+1)\int_{S_{r(t)}} \eta(X) dvol_{r(t)}\right) = 4(m+1)(2m+4)\int_{S_r} \eta(X)\eta(Y) dvol_r,
\end{equation}
see \cite{FOW}, Proposition 8.4, or \cite{MSY2}, Appendix C2.
The second variation formula shows that the volume functional is convex.
A Sasakian manifold $S$ is called a Sasaki-Einstein manifold if it is an Einstein manifold as a Riemannian manifold. This occurs exactly when
$C(S)$ is a Ricci-flat K\"ahler cone (i.e. Calabi-Yau cone). From the view point of the K\"ahler sandwich, this occurs exactly
when the transverse K\"ahler structure of the Reeb flow is K\"ahler-Einstein with positive scalar curvature.
A typical example is the $(2m+1)$-dimensional standard sphere which is Sasaki-Einstein. In this case, the cone is $\mathbf C^{m+1}$
which is Ricci-flat K\"ahler, the Reeb flow is the standard $S^1$-action, and the orbit space is the complex projective space which
is a K\"ahler-Einstein manifold of positive scalar curvature.
When the Reeb flow generates an $S^1$-action the quotient space is a Fano orbifold. For general Sasakian structures the complex geometry of the local orbit spaces is described as ``basic'' geometry. For example, we have the basic $\partial$-operator $\partial_B$,
the basic
${\overline \partial}$-operator ${\overline \partial}_B$, the basic
Dolbeault cohomology $H^\ast_{{\overline \partial}_B}$, the basic K\"ahler metric $g_B$, the basic K\"ahler form $\omega_B$, the basic Ricci form $\rho_B$, the basic first Chern class $c_1^B$, etc.
With these notations, the Sasaki-Einstein equation becomes
$$ \rho_B = (2m+2) \omega_B.$$
Thus a necessary
condition for the existence of a Sasaki-Einstein metric is that the basic first Chern class is represented by a positive multiple
of the basic K\"ahler class:
$$ 2\pi c_1^B = (2m+2) [\omega_B]$$
in $H^2_{{\overline \partial}_B}$.
This last condition is equivalent to the topological condition that $c_1(D) = 0$ and that $c_1^B > 0$
where $D$ denotes the contact structure determined by the contact form $\eta$, see \cite{FOW}, Proposition 4.3. We say in this paper
that $S$ is transversely Fano if $c_1(D) = 0$ and $c_1^B > 0$.
Let $\xi$ be the Reeb vector field on a Sasakian manifold $S$. A smooth function $f$ on $S$
is said to be basic if $\xi f = 0$. A basic function is obtained locally by pulling back a smooth function on the local orbit space of the Reeb
flow. A holomorphic vector field $Y$ in $\mathrm{Lie}(G)$ descends to a complex vector field on $S$ and also to a complex vector field
on each local orbit space of the Reeb flow, both of which we also denote by the same
letter $Y$. Then $Y$ is written on the local orbit space of the Reeb flow, which is K\"ahler, as
\begin{equation}\label{grad} Y = \mathrm{grad}_{g_B}^\prime u = g_B^{i{\overline j}}\frac{\partial u}{\partial \overline{z^j}}\frac{\partial}{\partial z^i} \end{equation}
where $z^1, \cdots, z^m$ are local holomorphic coordinates and $g_B$ is the transverse K\"ahler metric
on the local orbit space of the Reeb flow. There is a real valued basic function $F_B$ such that
\begin{equation}\label{Ricci}
\rho_B - (2m+2)\omega_B = i\partial_B{\overline \partial}_B F_B.
\end{equation}
Just as in the case of Fano manifolds (c.f. \cite{futaki88}, Theorem 2.4.3), there is an isomorphism between $\mathrm{Lie}(G)$ and
the space $\Lambda_{2m+2}$ of eigenfunctions $u$ of the elliptic operator $\Delta^F_B $ defined by
\begin{equation}\label{eigen}
\Delta^F_B u := \Delta_B u - \nabla^i u \nabla_iF_B
\end{equation}
where $\Delta_B = {\overline \partial}_B^\ast {\overline \partial}_B$ is the transverse ${\overline \partial}_B$-Laplacian and $\nabla$ denotes the Levi-Civita connection of the transverse K\"ahler structure, see \cite{FOW}, Theorem 5.1.
Noting that $\eta(Y)$ in (\ref{vol2}) is basic, if $\eta(Y) = u$ in $\Lambda_{2m+2}$, then
the right hand side of (\ref{vol2}) is equal to
\begin{eqnarray}
-2 \int_S (2m+2)u\ dvol &=& -2 \int_S (\Delta_B u - \nabla^i u \nabla_iF_B)\ dvol\\
&=& 2\int_S (\mathrm{grad}_{g_B}^\prime u)F_B\ dvol,
\end{eqnarray}
where the second equality uses $\int_S \Delta_B u\ dvol = 0$.
The right hand side is equal to $\mathrm{Fut}_\xi$ where $\xi$ is the Reeb vector field which is determined by the Sasakian structure
of $S$. This proves the volume minimization formula (\ref{derivative2}).
A Sasakian manifold
$(S, g)$ is said to be toric if the K\"ahler cone manifold $C(S)$ is toric, namely
$\dim_{\mathbf C} G = m+1$. When $S$ is toric and transversely Fano, Martelli-Sparks-Yau \cite{MSY2}
showed that the volume functional is proper on the space $\Sigma$ of Reeb vector fields
of charge $n$, which is a slice in the Sasaki cone, i.e. the dual cone
of the moment map image of $C(S)$. Since the volume functional is convex by (\ref{vol3}), there is a unique
minimum on $\Sigma$ at which $\mathrm{Fut}_\xi$ vanishes. In \cite{FOW} it is shown that for this minimum
$\xi$ there is a Sasaki-Einstein metric.
The uniqueness assertion is also shown in \cite{CFO}. To sum up, the following holds.
\begin{thm}[\cite{FOW}, \cite{CFO}]\label{Main1} Let $(S, g)$ be a compact toric Sasakian manifold with $c_1^B > 0$
and $c_1(D) = 0$. Then there exists a Sasaki-Einstein metric. Further,
the identity component of the automorphism
group for the transverse holomorphic structure acts transitively
on the space of all Sasaki-Einstein metrics.
\end{thm}
In K\"ahler geometry, the Yau-Tian-Donaldson conjecture relates the existence problem of constant scalar curvature
K\"ahler (cscK for short) metrics to K-stablity. Simlilarly in Sasakian geometry, the existence problem of constant scalar curvature
Sasaki (cscS for short) metrics is related to K-stablity, see \cite{CollinsSzeke12}, \cite{CollinsSzeke15}, \cite{TiplervanCoev15}, \cite{BoyervanCoev2016} for example.
The cscS metrics are critical points of the Einstein-Hilbert functional $H : \mathcal R \to \mathbf R$ defined by
\begin{equation}\label{EH}
H(r) = \frac{\mathrm{TS}(r)^{m+1}}{\mathrm{Vol}(r)^m}
\end{equation}
where $\mathrm{TS}(r)$ denotes the total scalar curvature of $S_r$.
In the transversely Fano case, $\mathrm{TS}(r) = \mathrm{Vol}(r)$ and the Einstein-Hilbert functional
coincides with the volume functional.
For general Sasakian manifolds, i.e. for Sasakian manifolds which are not necessarily transversely Fano,
it is known that the convexity fails for the Einstein-Hilbert functional, and there can be several critical
points, see Legendre \cite{Legendre11_2}, and also \cite{BHLT17}.
This fact has a resemblance in the study of Einstein-Maxwell K\"ahler metrics, as can be seen in
the ambitoric examples by LeBrun \cite{L2} on the one-point-blow-up of $\mathbf C\mathbf P^2$.
But it is shown by Boyer-Huang-Legendre \cite{BHL17} that all of the volume functional, the total scalar
curvature and the Einstein-Hilbert functional are proper in that they tend to $+\infty$ as the Reeb vector
field tends to the boundary of the Sasaki cone. This was shown by using the Duistermaat-Heckman formula.
The idea of volume minimization for Sasaki-Einstein metrics has been extended and applied to
algebraic geometry. Odaka \cite{Odaka15} considered generalizations of the normalized volume
functional
and Donaldson-Futaki invariant obtained as the derivative of the volume functional.
Odaka observed the decrease of the Donaldson-Futaki invariant along the minimal model
program using the concavity of the volume functional.
Li \cite{LiChi15_1}, \cite{LiChi15_2} considered normalized volume functional on the space of valuations
on Fano manifolds
and characterized K-semistability in terms of volume minimization. Note that when
a Sasakian manifold is the circle bundle of an ample line bundle $L$ over $M$,
then the Reeb vector field defines a valuation of the ring $\oplus_{k=0}^\infty H^0(M,L^k)$.
In view of this, it is natural to define the volume functional for valuations. The normalization corresponds
to the restriction of the Reeb vector fields to the ones with charge $n$.
On the other hand the Gromov-Hausdorff limit of a sequence of K\"ahler-Einstein manifolds is
homeomorphic to a normal algebraic variety and admits a K\"ahler-Einstein metric in the sense
of pluripotential theory \cite{DonaldsonSun14}. The tangent cone at a singular point admits a Ricci-flat cone structure,
and thus it is a cone over a Sasaki-Einstein manifold on the regular set. Li-Xu \cite{LiXu17}
applied volume minimization to show the algebraic nature of those tangent cones, answering
a question of Donaldson-Sun \cite{DonaldsonSun15}.
See also \cite{LiLiu16}, \cite{LiXu16}.
\section{Introduction}
Ultrafast laser-driven sources based on high-order harmonic generation (HHG) produce light pulses over a broad photon energy range from vacuum ultraviolet to X-rays. Favorable properties of this radiation, namely, femto- to attosecond pulse duration, regular wavefront, high temporal and spatial coherence as well as natural synchronization with the driving laser make HHG sources extremely powerful and versatile tools for applications in physics, chemistry, biology and materials science. Combined research efforts at novel ultrafast light sources based on laser-driven HHG as well as accelerator-driven free electron lasers (FELs) have advanced the field of high intensity laser-matter interactions, atomic, molecular and optical (AMO) sciences and investigations of ultrafast phenomena at short wavelengths \cite{ueda2019,young2018,hochlaf2017,mudrich2014}.
Novel applications of HHG sources in the femtosecond time domain include measurements of electronic structure and dynamics in molecules \cite{nishitani2017,rouzee2014,reid2012,svoboda2019}, mapping the pathways of photochemical reactions \cite{attar2017,smith2018,conta2018,chang2020,warne2020}, coherent diffractive imaging (CDI) both on fixed samples \cite{rothhardt2018,helk2019} and on substrate-free isolated nanoscale samples \cite{rupp2017}, and studies of the interaction of intense extreme ultraviolet (XUV) light with complex systems \cite{murphy2008,bunermann2012,schutte2014,schutte2016}. Time-compensating monochromators allow for the selection of individual harmonics while almost preserving the femtosecond pulse duration \cite{frassetto2011}. This development provides a unique tool for experiments with high temporal and spectral resolution \cite{reid2012,conta2018,warne2020,wernet2011}. A technical challenge for many applications remains in increasing the pulse energy of individual harmonics. Recent progress in the development of driving lasers based on optical parametric chirped pulse amplification (OPCPA) technology enables the up-scaling of the output power of HHG using a loose focusing geometry \cite{hergott2002,takahashi2002,hong2014,heyl2016}.
In addition to HHG sources, which enable novel investigations, advanced sample delivery systems open new possibilities in AMO sciences and CDI. Molecular beams are of interest for chemical physics \cite{smith2018} or for measurements of photoelectron angular distributions in the molecular frame \cite{reid2012}, while cluster/nanodroplet beams are attractive targets for electron spectroscopy \cite{mudrich2014,toennies2004} or for studies of nanoscale matter in extreme conditions \cite{fennel2010,krainov2002,tisch2003}. Liquid targets such as ultrathin liquid sheet jets have enabled investigations of ultrafast radiolysis of water \cite{loh2020} and, due to their high sample velocities, can be used for high-repetition-rate experiments \cite{wiedorn2018}. Moreover, aerosol injectors can deliver a wide range of nanoscale targets ranging from simple sucrose nano-balls \cite{ho2020,rath2014} to viruses and other biological targets for state-of-the-art CDI experiments \cite{hantke2014,seibert2011,schot2015,sobolev2020}.
ELI Beamlines is a part of the European ELI (Extreme Light Infrastructure) project, developing cutting-edge laser systems for user applications. The L1 Allegra laser system is an in-house development program to deliver 100~mJ laser pulses of $<15$~fs pulse duration at $\sim$830~nm central wavelength and 1~kHz repetition rate \cite{batysta2016}. This laser system can be used to drive a high intensity XUV beamline based on HHG in gas targets \cite{hort2019,nejdl2021}. Here we present the technical design and the current status of the MAC end-station: a \textbf{M}ultipurpose station for \textbf{A}MO sciences and \textbf{C}DI for user experiments with intense HHG pulses.
We provide an overview of the sample delivery instruments and detection systems, together with results from commissioning experiments. Upcoming upgrades are also discussed.
\section{End-station overview}
\subsection{General layout of the beamline}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=15cm]{fig1_E1beamline4.pdf}
\caption{\label{fig_beamline} Layout of the HHG beamline and the MAC end-station. The main laser propagates in a vacuum distribution system from the bottom left. 10\% of the beam is split off and directed out of the vacuum system. The remaining 90\% drives the high harmonic source. The two beams are recombined in the MAC end-station. Dimensions in mm are indicated. Distance between HHG chambers VC1 and VC2 can be modified to accommodate for different focal lengths of the HHG focusing mirror.}
\end{center}
\end{figure*}
The MAC end-station is located at the HHG beamline \cite{hort2019,nejdl2021} of the ELI Beamlines facility (Fig.~\ref{fig_beamline}). The HHG beamline is primarily designed to be driven by the ELI Beamlines L1 laser Allegra \cite{batysta2016}. The Allegra laser system is based on a broadband OPCPA pumped by picosecond Yb:YAG thin-disk lasers. After amplification, the broadband pulses are compressed to $< 15$~fs using a chirped mirror compressor. The system operates at 1~kHz repetition rate with a central wavelength of around 830~nm and an expected energy per pulse of up to 100~mJ. The carrier-envelope phase is not stabilized. Additionally, two commercial support lasers (Legend Elite Duo: $<35$~fs, 12~mJ, 1~kHz, and Hidra-100: $<40$~fs, 100~mJ, 10~Hz, both from Coherent) can be used to drive the beamline. Note that Legend and L1 Allegra have different spectra, resulting in different photon energies of harmonics generated by these lasers (Fig.~\ref{fig_spectra}).
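The spectral offset between the two drivers can be checked with a short sketch. The relation $E_q = q\,hc/\lambda$ below uses only the central wavelengths quoted above; the exact harmonic energies shift slightly with the generation conditions (e.g. ionization-induced blueshift):

```python
H_EV_NM = 1239.842  # hc in convenient units: photon energy (eV) x wavelength (nm)

def harmonic_energy_ev(drive_wavelength_nm: float, order: int) -> float:
    """Photon energy of the q-th harmonic of a driver with the given central wavelength."""
    return order * H_EV_NM / drive_wavelength_nm

# Legend (792 nm) vs L1 Allegra (830 nm): the same harmonic order gives
# a different photon energy, as visible in the measured spectra
for q in (15, 21):
    print(q,
          round(harmonic_energy_ev(792.0, q), 1),
          round(harmonic_energy_ev(830.0, q), 1))
```

For the Legend driver this gives $\approx 23.5$~eV for the 15th and $\approx 32.9$~eV for the 21st harmonic, consistent with the photon energies quoted later in the text.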
Before entering the HHG beamline, 10\% of the main laser beam is transmitted through a beamsplitter and directed out of the vacuum beam transport system. This beam serves as an auxiliary beam for pump-probe experiments at the MAC end-station (Section~\ref{sec_aux}). 90\% of the main laser beam is reflected from the beamsplitter and directed to the HHG beamline \cite{hort2019}. In this beamline, the driving laser beam is loosely focused to a gas cell to generate high harmonics in the $5 - 120$~nm wavelength range (photon energies $10 - 250$~eV). Depending on experimental needs, the photon yield in different photon energy ranges can be adjusted by changing the generating gas and f-number of the laser. After generation the XUV beam propagates through the beamline, where the driving NIR beam is rejected by three grazing incidence mirrors (with anti-reflective coating for NIR and high reflectivity for XUV) preserving the original beam axis. A thin metallic filter (typically aluminum, zirconium or indium) can be used to block the residual NIR beam and transmit a specific part of the high harmonic spectrum. Transmission of the beam-rejection system and filters has to be taken into account when designing experiments in a specific photon energy range.
\begin{figure}
\begin{center}
{\includegraphics[width=10cm]{fig2_spectra3.pdf}}
\caption{\label{fig_spectra} (a) Measured spectra of Legend and L1 Allegra lasers. Spectrum of Hidra (not shown) is very similar to the spectrum of Legend. (b) Measured spectra of high-harmonics generated in Kr with Legend (792~nm, 12~mJ, 35~fs, blue line) and with L1 (830~nm, 18.5~mJ, 15~fs, red line). Harmonic orders are indicated. An aluminum filter was used to reject the NIR beam.}
\end{center}
\end{figure}
For experiments at the MAC end-station that require a monochromatized beam, a time-preserving grating monochromator is implemented \cite{frassetto2011,hort2019,frassetto2017,poletto2018}. The monochromator consists of two toroidal mirrors and a grating in an off-plane diffraction geometry. With this geometry the wavefront tilt is less pronounced than in a classical diffraction mount \cite{frassetto2017,poletto2018}, see Table~\ref{tab_gratings}. Slots for four gratings are mounted on a motorized stage in the monochromator vacuum chamber allowing a quick exchange of a grating. Altogether, six gratings with parameters listed in Table~\ref{tab_gratings} and a flat golden mirror (if a broadband beam is needed) are available. A slit at the monochromator output selects the specific part of the spectrum to be used in the experiment. One of five slits (with widths of 50, 80, 100, 200 and 400~$\mu$m) can be selected. If the monochromator is not needed its components can be moved out of the XUV beampath.
\begin{table}
\caption{\label{tab_gratings}Gratings available for the monochromator. Energy resolution $\Delta E$ and pulse temporal front tilt (half-width) $\Delta\tau$ after diffraction are calculated for a beam with full divergence of 0.5~mrad at a photon energy in the center of the optimal range for each grating. A slit width of 100~$\mu$m was considered for the $\Delta E$ calculation \cite{frassetto2011}.}
\begin{center}
\begin{tabular}{c|cccc}
\hline\noalign{\smallskip}
Grating&Lines/mm&\begin{tabular}{c} Optimal spectral \\ range (eV)\\\end{tabular}&$\Delta E$~(eV)&$\Delta\tau$~(fs)\\
\noalign{\smallskip}\hline\noalign{\smallskip}
1&86&$10-28$&$0.14$&$22$\\
2&150&$13-28$&$0.10$&$35$\\
3&158&$25-54$&$0.34$&$20$\\
4&300&$22-40$&$0.11$&$48$\\
5&600&$51-98$&$0.31$&$40$\\
6&985&$86-120$&$0.36$&$47$\\
\noalign{\smallskip}\hline
\end{tabular}
\end{center}
\end{table}
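The tabulated half-width tilts $\Delta\tau$ can be related to the grating illumination through the textbook pulse-front relation $\Delta\tau_{\rm half} \approx N m \lambda/(2c)$, with $N$ the number of illuminated grooves and $m$ the diffraction order. The inversion below is our own rough sketch (first diffraction order assumed), not part of the beamline design calculations:

```python
# Invert delta_tau_half ~ N * m * lambda / (2c) to estimate how many grooves
# must be illuminated to produce the tabulated half-width pulse-front tilts.
C_NM_PER_FS = 299.792458  # speed of light in nm/fs

def illuminated_grooves(delta_tau_fs: float, wavelength_nm: float, order: int = 1) -> float:
    return 2.0 * C_NM_PER_FS * delta_tau_fs / (order * wavelength_nm)

# Grating 2: 150 lines/mm, optimal range centered near 20.5 eV (~60.5 nm), tilt 35 fs
n = illuminated_grooves(35.0, 60.5)
print(round(n))            # ~350 illuminated grooves
print(round(n / 150, 1))   # i.e. ~2.3 mm of illuminated grating length at 150 lines/mm
```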
The default polarization of the XUV beam in the MAC end-station is vertical and monochromator transmission is optimized for this polarization. Nevertheless, it is possible to rotate the polarization of the driving laser field using a thin half-waveplate or a periscope and thus rotate the polarization of the XUV beam. In the future, circular and elliptical polarization of the XUV beam will also be available, employing a two-color driving scheme for the HHG \cite{kfir2015}.
A major planned upgrade of the HHG beamline is to provide two independent synchronized XUV beams for pump-probe experiments.
\subsection{MAC vacuum chamber}
The design of the MAC vacuum chamber is based on the design of the CAMP end-station at the FLASH FEL in Hamburg \cite{struder2010,erk2018} and the LAMP instrument at the Linac Coherent Light Source FEL \cite{osipov2018}. This provides mutual compatibility of the chambers, e.g., instruments used at CAMP can easily be integrated into the MAC end-station. The main body of the MAC chamber is a DN400CF cylinder made of stainless steel 1.4429~ESU (316LN~ESR) to ensure low magnetic permeability. The interaction region is in the center of the first part of the chamber, as indicated in Figs.~\ref{fig_focusing}(a) and \ref{fig_pump_beam}(a). There are four DN250CF ports around the interaction region and a number of other ports on the chamber. For installation of optical components inside the MAC chamber, three optical breadboards are available: one at the top and two at the bottom part of the DN400CF cylinder. The flexible, multi-purpose configuration of the MAC station allows for the integration of users' instruments on request.
The MAC chamber is located in an ISO-7 class cleanroom (experimental hall E1) with an ambient temperature of $20^{\circ}{\rm C} - 22^{\circ}{\rm C}$, a temperature stability of $\pm 0.5^{\circ}$C over a 24 hour period and a humidity of $50\% \pm 5\%$. The MAC chamber is pumped by a turbomolecular pump with a pumping speed of 2100~l\,s$^{-1}$, reaching a vacuum level of 10$^{-8}$~mbar. Additional turbomolecular pumps or cold traps can be added if required.
\subsection{XUV beam focusing at MAC end-station}
The XUV beam can be focused into the MAC end-station in two main ways. In the first focusing geometry, the output slit of the monochromator is imaged with an ellipsoidal mirror to the MAC chamber with a 1:5 imaging ratio (Fig.~\ref{fig_focusing}(a)). The focusing mirror, located in a separate chamber in front of the MAC chamber, is a gold-coated ellipsoidal mirror with an entrance arm of 2500~mm, an exit arm of 500~mm and a grazing angle of incidence of 5~degrees. This geometry has been implemented and the measured focal spot of the 21st harmonic (photon energy 33~eV) is shown in Fig.~\ref{fig_focusing}(b). In this measurement, a grating with 150~lines/mm was used in the first diffraction order and the slit size was 200~$\mu$m. The vertical full width at half maximum (FWHM) of the measured focal spot is 40~$\mu$m in accordance with the 1:5 imaging ratio. The FWHM of the focal spot in the horizontal direction ($\sim$60~$\mu$m) corresponds to the width of the harmonic beam, which is not affected by the slit.
To achieve tight focusing, a second geometry employing an off-axis parabola (OAP) can be implemented. The OAP has a focal length of 268.94~mm and an angle between the incoming and the focused beam of $\sim 21.5^{\circ}$. In this configuration, the expected focal spot size in the MAC end-station is below $3\;\mu$m (FWHM). With the corresponding estimated intensity on the target of above $10^{13}$~W\,cm$^{-2}$ we expect to meet the experimental requirements for single-shot single-harmonic CDI and for studies of non-linear effects in the XUV regime \cite{nayak2018,senfftleben_2020}.
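The quoted intensity can be reproduced with an order-of-magnitude estimate. The pulse energy and XUV pulse duration used below are illustrative assumptions of this sketch, not measured beamline parameters:

```python
import math

def peak_intensity_w_cm2(energy_j: float, fwhm_s: float, spot_fwhm_cm: float) -> float:
    """Peak intensity of a pulse Gaussian in space and time.
    Spatial integral of the profile: pi * d^2 / (4 ln 2);
    temporal integral: tau * sqrt(pi / (4 ln 2))."""
    spatial = math.pi * spot_fwhm_cm**2 / (4.0 * math.log(2))
    temporal = fwhm_s * math.sqrt(math.pi / (4.0 * math.log(2)))
    return energy_j / (spatial * temporal)

# Illustrative assumptions: 100 nJ in a single harmonic, 20 fs XUV pulse, 3 um focal spot
I = peak_intensity_w_cm2(100e-9, 20e-15, 3e-4)
print(f"{I:.1e} W/cm^2")  # a few 1e13 W/cm^2, above the 1e13 W/cm^2 target
```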
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{fig3_MACfocusing3.pdf}
\caption{\label{fig_focusing} HHG beam focusing to the MAC end-station. (a) An ellipsoidal mirror images the output slit of the monochromator to the interaction region in the MAC chamber. (b) Measured focal spot of the 21st harmonic focused with the ellipsoidal mirror. The focal spot was measured with an in-vacuum CCD camera (PI-MTE) with a pixel size of 13.5~$\mu$m.}
\end{center}
\end{figure}
\subsection{Auxiliary beam for MAC end-station}
\label{sec_aux}
For pump-probe experiments, an auxiliary beam, synchronized with the XUV beam, is provided at the MAC end-station. The auxiliary beam is split from the main laser beam before the HHG beamline (Fig.~\ref{fig_beamline}) and propagates in air to the MAC end-station. The auxiliary beam passes through a delay line with a total travel of 1~m and a bi-directional repeatability of $<0.5~\mu$m (Newport, M-IMS1000LM-S), providing a total delay of up to 6~ns with a delay step of $<3$~fs. After the delay line, the auxiliary beam is directed to the MAC chamber and focused on the interaction point. Different focusing geometries (collinear, non-collinear, using a lens or an OAP) are possible (examples are shown in Fig.~\ref{fig_pump_beam}(a) and \ref{fig_pump_probe}(a)).
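The quoted delay range and step follow from the stage parameters if one assumes a double-pass (retro-reflector) delay-line geometry; that geometry is our assumption, made explicit in the sketch below:

```python
C_M_PER_S = 299_792_458.0  # speed of light

def double_pass_delay_s(stage_travel_m: float) -> float:
    # A retro-reflector on the stage doubles the optical path change
    # (the double-pass geometry is assumed here, not stated in the text)
    return 2.0 * stage_travel_m / C_M_PER_S

print(double_pass_delay_s(1.0) * 1e9)       # ~6.7 ns for the full 1 m of travel
print(double_pass_delay_s(0.45e-6) * 1e15)  # ~3 fs per 0.45 um of stage motion
```

The full travel corresponds to $\approx 6.7$~ns of delay, consistent in magnitude with the quoted 6~ns, and a sub-0.5~$\mu$m step to the quoted $<3$~fs delay step.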
The spatial and temporal overlap of the driving and the auxiliary beam is found by propagating only the driving NIR beam on the same path as the XUV beam. The auxiliary beam and the driving beam are imaged on a camera in the interaction region to ensure their spatial overlap. For a rough temporal overlap (with an uncertainty of a few picoseconds) a fast photodiode is used. Second harmonic (SH) generation in a non-linear crystal (BBO) is used to find the zero delay between the two beams with $<3$~fs uncertainty (Fig.~\ref{fig_pump_beam}(b-e)). The maximum of the SH cross-correlation signal (Fig.~\ref{fig_pump_beam}(e)) determines the zero delay between the two pulses.
The FWHM of the SH cross-correlation signal is around 135~fs, which is much larger than the 35~fs initial pulse duration. This is because the auxiliary beam undergoes a large amount of positive dispersion on the way to the MAC chamber. An implementation of chirped mirrors is planned to post-compress the auxiliary beam to a pulse duration of $< 35$~fs.
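Assuming Gaussian pulse shapes, the 135~fs cross-correlation can be deconvolved, and the group delay dispersion (GDD) needed to stretch the auxiliary pulse can be estimated. Both the Gaussian shapes and the 35~fs driving pulse duration in the interaction region are assumptions of this sketch:

```python
import math

def deconvolved_fwhm(cc_fwhm: float, known_fwhm: float) -> float:
    """Gaussian deconvolution: cc^2 = tau1^2 + tau2^2."""
    return math.sqrt(cc_fwhm**2 - known_fwhm**2)

def gdd_for_broadening(tau_in_fs: float, tau_out_fs: float) -> float:
    """GDD (fs^2) that stretches a transform-limited Gaussian from tau_in to tau_out:
    tau_out = tau_in * sqrt(1 + (4 ln2 * GDD / tau_in^2)^2)."""
    ratio = tau_out_fs / tau_in_fs
    return math.sqrt(ratio**2 - 1.0) * tau_in_fs**2 / (4.0 * math.log(2))

aux = deconvolved_fwhm(135.0, 35.0)  # ~130 fs stretched auxiliary pulse
print(round(aux), round(gdd_for_broadening(35.0, aux)))  # ~130 fs, ~1.6e3 fs^2
```

Under these assumptions the air path and windows would need to contribute on the order of $1.6\times 10^3$~fs$^2$ of GDD, which sets the scale for the planned chirped-mirror compensation.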
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm]{fig4_aux_beam.pdf}
\caption{\label{fig_pump_beam} (a) Model of the XUV (blue) and auxiliary (red) beams focused collinearly in the MAC end-station. (b) Schematic of the experimental setup for non-collinear second-harmonic (SH) cross-correlation. Spatial profiles of the driving and auxiliary beam: (c) not overlapped, (d) overlapped in space and time in the interaction region. Interference fringes are visible. Camera pixel size 4.4~$\mu$m. (e) SH cross-correlation trace of the auxiliary and driving beam.}
\end{center}
\end{figure}
Upgrades of the auxiliary beam are planned to cover a wide spectral range. A high-energy optical parametric amplifier (OPA, Light Conversion, HE-TOPAS + ViS/UV extension) will be installed to produce radiation in the wavelength range of $240-2600$~nm. Additionally, THz radiation can be generated on the auxiliary beampath. Generation of single-cycle broadband $\sim 1$~THz pulses will be realized in the tilted wave-front geometry in a LiNbO$_3$ crystal \cite{yeh2007,hebling2008}. To generate a narrow-band high-intensity tunable THz field, a setup based on difference frequency generation of two pulses in a DAST crystal \cite{liu2017} is planned.
\subsection{Pump-probe setup for XUV and auxiliary beam}
Many experiments at the MAC end-station require synchronized XUV and auxiliary beams. A pump-probe setup for XUV and NIR pulses is shown in Fig.~\ref{fig_pump_probe}(a). To ensure a collinear focusing geometry, the XUV beam is focused by the ellipsoidal mirror and passes through a flat mirror with a hole. The auxiliary beam comes from the top. It is focused with a lens and then recombined with the XUV beam using the flat mirror with the hole. A YAG:Ce fluorescence screen, mounted on a motorized stage, can be placed into the interaction region to image the XUV focal spot. The interaction plane is imaged with an infinity-corrected microscope objective (magnifications of 2$\times$ and 5$\times$ are available) on a camera (Manta G-125B, pixel size 3.75~$\mu$m) placed outside the MAC chamber (Fig.~\ref{fig_pump_probe}(b,c)). Spatial overlap is ensured by centering the two focal spots on the same position on the camera. The mirror with the hole is mounted on a motorized mount allowing fine-tuning of the spatial overlap of the beams. Besides the YAG:Ce screen, a calibrated XUV photodiode can be moved into the XUV beam to determine the photon flux in the interaction region.
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm]{fig5_pump_probe2.pdf}
\caption{\label{fig_pump_probe} (a) Photograph of the setup to determine spatial and temporal overlap of the XUV and auxiliary beams. The beams are focused collinearly into the interaction region. Ions are accelerated by the electrical field of 100~V\,mm$^{-1}$ between two accelerator (accel.) plates and detected with a microchannel plate detector (not shown). (b) 5$\times$ magnified image of the focal spot of the 15th harmonic on a YAG:Ce screen. (c) 5$\times$ magnified image of the auxiliary beam focal spot at the same position as the XUV beam focus. Camera pixel size 3.75~$\mu$m. (d) Transient He$^+$ yield measured as a function of the delay between XUV and NIR pulses (15th harmonic, photon energy 23.5~eV, 10$^7$~photons/shot, NIR peak intensity $10^{13}$~W\,cm$^{-2}$) with time-delay steps of 10~fs. Each data point is an average over 3000 single shots. The red solid line is a Gaussian fit.}
\end{center}
\end{figure}
The temporal overlap of the XUV and auxiliary beams is found by detecting ions created by the ionization of an atomic gas in combined XUV and NIR fields. A metal capillary with an inner diameter of 0.88~mm, placed a few millimeters from the interaction region, is used to deliver a gas target. Ions created in the interaction are detected by an ion time-of-flight spectrometer (section~\ref{sec_tof}). Alternatively, velocity map imaging optics (section~\ref{sec_vmi}) can be used. Fig.~\ref{fig_pump_probe}(d) shows the transient He$^+$ yield measured as a function of delay between the 15th harmonic of the Legend laser (photon energy 23.5~eV, 10$^7$~photons/shot) and an auxiliary beam with a peak intensity of $10^{13}$~W\,cm$^{-2}$. There is no He$^+$ signal in the presence of only one beam, because the 15th harmonic photon energy of 23.5~eV is below the ionization energy of He (24.6~eV) and the NIR beam intensity is not sufficient to tunnel-ionize He. He$^+$ ions are created by non-sequential two-color ionization \cite{bottcher2007} in combined harmonic and NIR pulses. The FWHM of the cross-correlation trace is 130~fs.
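The FWHM of a delay scan such as the one described above can be extracted without external fitting libraries by locating the half-maximum crossings. The sketch below runs on synthetic data standing in for the measured He$^+$ yield (130~fs Gaussian, 10~fs steps, mild noise):

```python
import math
import random

def fwhm_from_scan(delays_fs, signal):
    """Estimate FWHM by locating the two half-maximum crossings with linear interpolation."""
    peak = max(signal)
    half = peak / 2.0
    above = [i for i, s in enumerate(signal) if s >= half]
    lo, hi = above[0], above[-1]

    def cross(i, j):
        # delay at which the linearly interpolated signal equals half-maximum
        return delays_fs[i] + (half - signal[i]) * (delays_fs[j] - delays_fs[i]) / (signal[j] - signal[i])

    left = cross(lo - 1, lo) if lo > 0 else delays_fs[0]
    right = cross(hi + 1, hi) if hi < len(signal) - 1 else delays_fs[-1]
    return right - left

# Synthetic stand-in for the He+ delay scan
random.seed(0)
sigma = 130.0 / (2.0 * math.sqrt(2.0 * math.log(2)))  # Gaussian sigma for 130 fs FWHM
delays = list(range(-300, 301, 10))
yield_counts = [math.exp(-0.5 * (t / sigma) ** 2) + random.gauss(0, 0.01) for t in delays]
print(round(fwhm_from_scan(delays, yield_counts)))  # close to 130 fs
```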
\section{Sample delivery systems}
The MAC end-station is primarily designed for experiments on low-density targets, such as atoms, molecules, clusters or aerosols. In this section, sample delivery systems at the MAC end-station are reviewed. Besides low-density targets, it is also possible to install solid samples into the MAC chamber. Piezo-driven manipulators with linear (xyz) degrees of freedom as well as sample rotation are available for fine alignment of fixed targets.
\label{sec_samples}
\subsection{Molecular and cluster beams based on pulsed valves}
\label{sec_mol_beams}
Beams of cold molecules or clusters are of interest for chemical physics, fundamental investigations in AMO sciences \cite{reid2012} or for spectroscopic studies of pure or doped clusters and nanodroplets \cite{mudrich2014,toennies2004}. These beams are produced by gas expansion into a vacuum. Typically a pulsed valve is used for gas expansion into a vacuum in order to achieve high flux in the droplet beam \cite{mudrich2014} while limiting the requirements on the pumping speed.
Two instruments based on pulsed valves are available at the MAC end-station: a molecular beam source (Fig.~\ref{fig_cluster_source}(a)) and a cluster source (Fig.~\ref{fig_cluster_source}(b)).
In the molecular beam setup (Fig.~\ref{fig_cluster_source}(a)) the gas expands through a valve (Amsterdam Piezo Valve, model ACPV2-100) equipped with a conical 100~$\mu$m diameter nozzle with an opening angle of 40$^{\circ}$. The valve operates at room temperature with backing pressures of up to 30~bar and a repetition rate of up to 5~kHz. It produces gas pulses with a duration of 20~$\mu\mathrm{s} - 1$~ms (or continuous) and with $>10^{16}$ particles per pulse. The pulsed valve is mounted on a stage with three degrees of freedom (xyz), while a skimmer (Beam Dynamics, available diameters between $0.2-3$~mm) is placed on a fixed mount. The distance between the skimmer and the interaction region is 195~mm and the nozzle-skimmer distance can be varied from 10~mm to 130~mm. The vacuum chamber for the molecular beam source is pumped by a 2100~l\,s$^{-1}$ turbomolecular pump. Additional pumping can be added if required.
The second instrument using a pulsed valve is dedicated to the production of clusters or helium nanodroplets (Fig.~\ref{fig_cluster_source}(b)). The gas expands through a cryo-cooled Even-Lavie valve \cite{even2015} that can operate at a maximum repetition rate of 500~Hz, with a typical opening time of $15-30~\mu$s, a backing pressure up to 100~bar, and operating temperatures down to 4~K. Low temperatures are achieved using a two-stage helium cryo-cooler (Sumitomo, RDK-408D2). The nozzle is trumpet shaped with a $70 - 80\;\mu$m hole and a 40$^{\circ}$ cone angle (Even-Lavie $\#$ 2-70-T-Ry Sapphire (Ruby)). After the gas expands, it propagates through two skimmers (Beam Dynamics) to ensure efficient differential pumping of the vacuum chambers. Both skimmers are mounted on 2-axes motorized stages to allow their alignment under vacuum. The distances between valve, skimmers and the interaction region are indicated in Fig.~\ref{fig_cluster_source}(b). The Even-Lavie valve is mounted on an xyz manipulator, thus the distance between the valve and the first skimmer can be adjusted in the range of $\sim110-210$~mm. The whole setup is separated from the main MAC chamber by a gate valve with a window. The cluster source setup is pumped by two turbomolecular pumps with pumping speeds of 2200 and 1600~l\,s$^{-1}$, respectively.
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{fig6_molecular_cluster_source.pdf}
\caption{\label{fig_cluster_source} (a) Model of the molecular beam setup. Gas is expanded to vacuum through a pulsed valve. A skimmer selects the central part of the molecular beam. (b) Model of the cluster source based on a cryo-cooled Even-Lavie valve. Dimensions in mm are indicated. TMP -- turbomolecular pump.}
\end{center}
\end{figure}
An upgrade of the cluster apparatus will introduce the capability to dope clusters with atoms or molecules. For this purpose a doping stage can be added to the second vacuum chamber (between the two skimmers). A leak valve is installed in this chamber to provide a source of atoms that can be picked up by the helium droplets. Additionally, an oven will be used to evaporate atoms or molecules of different species that can attach to helium droplets. Altogether, the cluster and the molecular beam sources provide important targets for investigations in AMO sciences.
\subsection{Microfluidic liquid sheets and jets}
Free-flowing liquid jets, both cylindrical \cite{deponte2008,nelson2016} or sheet jets \cite{koralek2018}, are available for use at the MAC end-station.
Ultrathin cylindrical liquid jets can be produced by a gas dynamic virtual nozzle (GDVN) \cite{deponte2008,muhlig2019}. A GDVN system consists of two concentric capillaries: an inner capillary for the sample liquid and an outer capillary for a carrier gas (typically helium). The coaxial gas dynamic forces compress the liquid jet, leading to a jet with a diameter much smaller than the solid nozzle diameter. GDVNs are generally manufactured either from glass capillaries \cite{deponte2008} or by 3-dimensional printing with sub-micrometer resolution \cite{nelson2016}. The GDVNs available at ELI Beamlines have been manually fabricated at Uppsala University. They typically operate with flow rates in the range of $0.5-5\;\mu$l\,min$^{-1}$ and carrier gas pressures around $15-30$~bar, producing liquid jets with $0.3-1\;\mu$m diameter.
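As a rough illustration of the operating regime, the mean jet speed implied by volume conservation, $v = Q/(\pi r^2)$, can be estimated from the quoted flow rates and jet diameters. The values below are illustrative mid-range picks from the ranges above, not measured speeds:

```python
import math

def jet_velocity_m_per_s(flow_ul_per_min, jet_diameter_um):
    """Mean jet speed from volume conservation: v = Q / (pi * r^2)."""
    q = flow_ul_per_min * 1e-9 / 60.0    # volumetric flow in m^3/s
    r = 0.5 * jet_diameter_um * 1e-6     # jet radius in m
    return q / (math.pi * r * r)

# Illustrative mid-range values: 1 ul/min through a 0.5 um diameter jet
v = jet_velocity_m_per_s(1.0, 0.5)  # on the order of tens of m/s
```

This simple continuity estimate neglects evaporation and velocity profiles across the jet.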
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{fig7_sheets2.pdf}
\caption{\label{fig_sheets} (a) Vacuum-compatible microfluidic sheet jet assembly during operation. (b) Flat liquid sheet jet produced from a nozzle made of two 0.4~mm-thick borosilicate glass wafers etched together. The width of the water sheet at its widest part is around 200~$\mu$m.}
\end{center}
\end{figure}
Ultrathin sheet jets at ELI Beamlines have been developed as part of a collaboration with LCLS SLAC \cite{koralek2018}. The liquid sheet jets are generated using a microfluidic glass chip (Fig.~\ref{fig_sheets}) that contains three microfluidic channels: two side channels for gas and a central channel for liquid. The liquid is compressed by two colliding gas jets to form a thin sheet. Using gas dynamic forces instead of colliding liquid jets \cite{ekimova2015,george2019} leads to a lower flow rate and reduced sheet thickness. Under typical operating conditions (liquid flow rate $0.1-0.5$~ml\,min$^{-1}$ and a gas flow rate of $50-300$~ml\,min$^{-1}$), the thickness of the liquid sheet can range from $> 1~\mu$m down to $\sim 20$~nm. The flat sheet length is typically on the order of 100~$\mu$m with a width between 10 and 100~$\mu$m.
Although the microfluidic chip has been primarily designed for ultrathin liquid sheet jet generation, it has two alternative modes of operation. When the liquid is running through the central channel and side channels are not used, a cylindrical jet with diameter of around $20-30\;\mu$m is formed. Alternatively, when liquid is injected through the two side channels and the central channel is unused, a colliding sheet jet with a thickness in the range of $3-10\;\mu$m is produced \cite{schulz2018}.
The liquid sheet jet system at ELI Beamlines has been commissioned in air. However, based on experiments performed elsewhere, we expect the liquid sheet jet to operate stably at a vacuum level of 10$^{-3}$~mbar \cite{koralek2018}. The vacuum level can be further improved by employing a differentially-pumped shroud around the liquid jet source or using a cold trap.
\subsection{Aerosol nanoparticle injectors}
\label{sec_injector}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig8_injector4.pdf}
\caption{\label{fig_injector} (a) Schematic diagram of the aerosol injector comprising an aerosolization unit (gas dynamic virtual nozzle (GDVN) or an electrospray (ES) system), a differentially pumped skimmer chamber (or two skimmer chambers for ES) and an aerodynamic lens stack for focusing particles. (b) Dusting test with CsI nanoparticles at ELI Beamlines. Top left -- microscope image of the dusting spot on a glass covered with grease.}
\end{center}
\end{figure}
The aerosol nanoparticle injector, the so-called “Uppsala injector”, was originally developed at Uppsala University to deliver a focused beam of substrate-free nanoparticles, biomolecules, viruses or cells to vacuum for CDI with FELs \cite{ho2020,rath2014,hantke2014,seibert2011,schot2015,sobolev2020,andreasson2014,hantke2018,bielecki2019}. Two types of sample aerosolization for the injection can be used: a GDVN \cite{deponte2008,hantke2018} or an electrospray (ES) unit \cite{bielecki2019,chen1995,ganan-calvo2009}. Both are available at ELI Beamlines.
Using the injector with a GDVN, an aerosol is created from the liquid sample as the jet from the GDVN breaks up into droplets. The aerosol propagates through a skimmer chamber for excess gas removal (Fig.~\ref{fig_injector}). After that it enters an aerodynamic lens stack (ALS) \cite{wang2005,wang2006,liu2007}, consisting of a set of apertures that guide and focus the nanoparticle beam into the MAC chamber. Water from the sample solution is evaporated on the way to the vacuum chamber and dry container-free nanoparticles are delivered for the interaction with the laser. Nanoparticles with sizes in the range of $\sim70-2000$~nm can be injected by the injector with the
GDVN and the focused particle beam waist can have a diameter down to $\sim10~\mu$m \cite{hantke2018}. The volume density of the injected nanoparticles is typically of the order of $10^8$~cm$^{-3}$.
The injection at ELI Beamlines has been verified by a dusting test of the injected nanoparticles (Fig.~\ref{fig_injector}(b)). A 0.1\% water solution of CsI salt was injected onto a microscope glass covered with vacuum grease, located 25~mm from the ALS tip. After the injection, a white dusting spot on the glass was visible by eye, confirming injection of CsI to the chamber (Fig.~\ref{fig_injector}(b)). For a more accurate determination of particle sizes and velocities a Rayleigh scattering setup is under development \cite{hantke2018}.
The second aerosolization system available for the MAC end-station is an electrospray unit \cite{chen1995,ganan-calvo2009}. For ES aerosolization, the sample is prepared in a conducting solvent (typically ammonium acetate) and a voltage of about $2 - 3$~kV is applied to it. The sample liquid flows through a capillary placed close to a grounded plate and the liquid jet from the capillary is squeezed by electrostatic forces into a Taylor cone (Fig.~\ref{fig_electrospray}(a)). The tip of the Taylor cone breaks into charged droplets, which are neutralized by an X-ray source, and then enter an injector setup consisting of two skimmer chambers and an ALS, similar to that of the injector with the GDVN (Fig.~\ref{fig_injector}(a)). The injector with ES unit is suitable for injection of (bio)particles with diameters down to $\sim$10~nm \cite{bielecki2019}.
The operation of the ES unit was commissioned with a differential mobility analyzer (DMA, TSI Incorporated). For the DMA measurement, the output of the ES unit is connected with a conductive resin tube to the DMA instrument, where nanoparticles drift in an applied electric field and their diameters are determined from their electrical mobility. Results of the size measurement of sucrose nanoparticles (1\% sucrose concentration in 25~mM ammonium acetate) are shown in Fig.~\ref{fig_electrospray}(b). The mean diameter was found to be 22.5~nm, which is within the expected size range for this concentration.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig9_electrospray.pdf}
\caption{\label{fig_electrospray} Electrospray injection at ELI Beamlines. (a) Taylor cone created by electrospraying a 10\% sucrose solution in ammonium acetate. (b) Histogram of sucrose particle sizes measured with the differential mobility analyzer. Mean diameter is 22.5~nm.}
\end{center}
\end{figure}
The injector, either with GDVN or ES, can be installed on the top DN250CF port of the MAC chamber on an xyz manipulator. When the injector is in operation, the vacuum level in MAC chamber rises typically to $10^{-6}-10^{-5}$~mbar, still allowing ion or electron spectroscopy \cite{klimesova2019} (section~\ref{sec_tof}, Fig.~\ref{fig_itof}(b)).
\section{Charged particle spectrometers and photon detectors}
\label{sec_detectors}
\subsection{Electron and ion time-of-flight spectrometer}
\label{sec_tof}
Electrons or ions created during interactions in the MAC end-station can be detected by a linear electron or ion time-of-flight (ToF) spectrometer. Kinetic energies of electrons and the mass-to-charge ratio of ions are determined from their flight times. To increase the acceptance solid angle of electrons, electrostatic lenses can be used. The detector is a 40~mm diameter microchannel plate (MCP) coupled to an anode. Data from the anode can be acquired at 1~kHz repetition rate on a single-shot basis using a digitizer (SP Devices, 10~GSa/s, 14~bit resolution).
For detection of ions, a set of plates with an applied voltage for ion acceleration, mounted on a piezo-driven xyz stage, can be used (see inset in Fig.~\ref{fig_itof}(b) as an example). Measured ion ToF traces from a xenon gas (chamber backfilled at $7.4\times10^{-7}$~mbar) irradiated with an intense 800~nm laser beam (pulse duration 120~fs, peak intensity $2\times10^{15}$~W\,cm$^{-2}$) are displayed in Fig.~\ref{fig_itof}(a). Xe ions with charges up to 5+ can be observed, which is expected for the laser intensity used. Five Xe isotopes with mass numbers $A = 129$, 131, 132, 134 and 136, respectively, can be well resolved in all charge states, confirming sufficient mass resolution of the ion ToF spectrometer.
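The mass-to-charge assignment above follows the standard linear-ToF relation $t = t_0 + k\sqrt{m/q}$, where $t_0$ and $k$ absorb the trigger offset and the spectrometer geometry. A minimal two-point calibration might look as follows; the constants used here are synthetic placeholders, not the actual calibration of this spectrometer:

```python
import math

def tof_calibration(t1, mq1, t2, mq2):
    """Two-point calibration of t = t0 + k*sqrt(m/q) from two known peaks.
    mq1, mq2 are mass-to-charge ratios (e.g. in u/e), t1, t2 flight times."""
    k = (t2 - t1) / (math.sqrt(mq2) - math.sqrt(mq1))
    t0 = t1 - k * math.sqrt(mq1)
    return t0, k

def mass_from_time(t, t0, k):
    """Invert the calibrated ToF relation to get m/q for a measured peak."""
    return ((t - t0) / k) ** 2

# Illustrative: synthetic spectrometer with t0 = 0.1 us, k = 0.5 us*sqrt(e/u)
t0_true, k_true = 0.1, 0.5
t_129 = t0_true + k_true * math.sqrt(129.0)   # Xe-129(+) peak
t_136 = t0_true + k_true * math.sqrt(136.0)   # Xe-136(+) peak
t0, k = tof_calibration(t_129, 129.0, t_136, 136.0)
mq = mass_from_time(t0_true + k_true * math.sqrt(132.0), t0, k)  # -> ~132
```

With such a calibration, the relative time splitting between neighboring isotopes is roughly $\Delta t/t \approx \Delta m/(2m)$, i.e. a few tenths of a percent for Xe, which sets the timing resolution needed to resolve them.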
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm]{fig10_itof.pdf}
\caption{\label{fig_itof} (a) Measured ion time-of-flight trace from xenon gas (chamber backfilled at $7.4\times10^{-7}$~mbar) irradiated by an intense laser beam (800~nm, 120~fs, $2\times10^{15}$~W\,cm$^{-2}$). Five Xe isotopes are resolved. (b) Measured single-shot ion time-of-flight traces from injected CsCl nanoparticles (diameter $\sim$100~nm) irradiated by an intense laser beam (800~nm, 120~fs, $2\times10^{15}$~W\,cm$^{-2}$). Traces are presented with a vertical offset for clarity. Top trace: background and carrier gas. Bottom trace: CsCl nanoparticle, (ph. stands for photon peak). Inset: 3D model of electrostatic plates for ion acceleration.}
\end{center}
\end{figure}
The ion ToF spectrometer can be used together with the nanoparticle injector (section~\ref{sec_injector}).
In the first commissioning experiments, the gas background from the nanoparticle injector has been thoroughly characterized by the determination of energies of ions created in the interaction with a strong NIR laser field. It has been shown that a plasma channel can be created in the interaction volume and the measured ion energies can be used to estimate the gas density \cite{klimesova2019}.
Using this setup, we have also performed ion spectroscopy on injected CsCl nanoparticles (diameter $\sim$100~nm) irradiated by a strong laser beam (wavelength 800~nm, pulse duration 120~fs, peak intensity $2\times10^{15}$~W\,cm$^{-2}$), as shown in Fig.~\ref{fig_itof}(b). Ions were detected for single laser shots. When a nanoparticle is hit, Cl$^+$ and Cs$^+$ ions from the nanoparticle and a photon peak appear in the ion ToF spectrum. Cl$^+$ and Cs$^+$ peaks are rather broad because they contain ions with different kinetic energies, coming from the explosion of the laser-irradiated nanoparticle.
This measurement demonstrates the possibility to perform ion ToF spectroscopy on different types of nanoparticles injected into the vacuum chamber using the aerosol injector.
\subsection{Velocity map imaging spectrometer}
\label{sec_vmi}
In a velocity map imaging (VMI) spectrometer the velocity vector of charged particles (ions or electrons) is mapped on the position on the detector. The VMI spectrometer at the MAC end-station is based on the design of Eppink and Parker \cite{eppink1997} and has been manufactured by Photek (Velocitas). It consists of a repeller, extractor and a ground electrode, followed by a field-free region and a position sensitive detector (Fig.~\ref{fig_vmi}(a)). For sharp velocity imaging, the voltage on the extractor is set to around 0.7-times the repeller voltage. The detector is a 75~mm diameter MCP with a P43 phosphor screen and a camera (IDS, UI-3060CP-M-GL, 166 frames per second). The MCP can be gated at a repetition rate of up to 1~kHz with a width of the temporal gate ranging from 9~ns (FWHM) up to a few $\mu$s.
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{fig11_vmi5.pdf}
\caption{\label{fig_vmi} (a) Schematics of the velocity map imaging (VMI) electrostatic lens showing simulated trajectories of 20~eV electrons. Voltages on the electrodes in the simulation: $V_R=-1.8$~kV, $V_E=-1.26$~kV. The VMI lens is attached to the position sensitive detector (Photek/Velocitas), shown in the bottom panel. (b,d) Raw VMI images of electrons created by (b) single photon ionization of argon by high harmonic beam (photon energies $23-29$~eV) and (d) above-threshold ionization of xenon by the auxiliary NIR beam (800~nm, 130~fs, $2\times10^{14}$~W\,cm$^{-2}$). (c,e) Slices through 3D photoelectron momenta distributions reconstructed from (b) and (d), respectively, by the polar onion peeling algorithm \cite{roberts2009}. The laser polarization direction is indicated by arrows.}
\end{center}
\end{figure}
The VMI spectrometer at the MAC end-station has been tested both with the high harmonic beam and the auxiliary NIR beam (Fig.~\ref{fig_vmi}(b-e)). In the first case, an argon atomic beam (section~\ref{sec_mol_beams}) was ionized by the HHG beam with photon energies in the range of $23-29$~eV (15th--19th harmonic). In the second case, the MAC chamber was backfilled with xenon that was ionized by an auxiliary NIR beam (800~nm, 130~fs, $2\times10^{14}$~W\,cm$^{-2}$).
The raw photoelectron images on the camera (Fig.~\ref{fig_vmi}(b,d)) were reconstructed by the polar onion peeling algorithm \cite{roberts2009} (Fig.~\ref{fig_vmi}(c,e)). In the single-photon ionization of argon (Fig.~\ref{fig_vmi}(b,c)), three rings, corresponding to electrons from Ar $3p$ orbital released by three distinct harmonics (15th, 17th and 19th), are well resolved in the reconstructed image. The noise in the center of the reconstructed image comes from the chamber background and from numerical errors accumulated at small radii \cite{roberts2009}. In the above-threshold ionization of xenon (Fig.~\ref{fig_vmi}(d,e)), rings spaced by a photon energy of 1.55~eV are visible. Moreover, a pattern arising from interference of different quantum trajectories \cite{korneev2012} is observed. These results confirm the good performance of the VMI spectrometer.
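In a VMI the ring radius scales as $R \propto \sqrt{E}$, so evenly spaced ATI rings (1.55~eV apart) provide a convenient one-parameter energy calibration $E = cR^2$. A minimal least-squares sketch with synthetic ring radii follows; the calibration constant and the first-ring energy are illustrative values, not the instrument's actual calibration:

```python
import numpy as np

def vmi_calibration_from_ati(radii_px, first_ring_energy_eV, photon_eV=1.55):
    """Fit E = c * R^2 to ATI rings whose energies are
    E_n = E_first + n * photon energy (n = 0, 1, 2, ...).
    One-parameter linear least squares: c = sum(E*R^2) / sum(R^4)."""
    radii = np.asarray(radii_px, dtype=float)
    energies = first_ring_energy_eV + photon_eV * np.arange(len(radii))
    r2 = radii ** 2
    return float(np.dot(energies, r2) / np.dot(r2, r2))

# Illustrative: synthetic rings with c = 1e-3 eV/px^2, first ring at 0.8 eV
c_true = 1e-3
ring_energies = 0.8 + 1.55 * np.arange(4)
ring_radii = np.sqrt(ring_energies / c_true)
c_fit = vmi_calibration_from_ati(ring_radii, 0.8)  # recovers ~1e-3
```

In practice the first-ring energy itself would come from the known ionization potential and ponderomotive shift, and the fit would use ring radii extracted from the reconstructed image.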
\subsection{Magnetic bottle electron spectrometer}
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{fig12_MBES.pdf}
\caption{\label{fig_mbes} Model of the magnetic bottle electron spectrometer (MBES) installed on the top port of the MAC end-station. A frame around the MBES and a frame connecting the MBES to the wall are also shown.}
\end{center}
\end{figure}
A magnetic bottle electron spectrometer (MBES) is under development for the MAC end-station. The design of the MBES (Fig.~\ref{fig_mbes}) is similar to that described by Eland \textit{et al.} \cite{eland2003} with an about 2~m long time-of-flight tube. The MBES will provide a very high collection and detection efficiency of essentially all electrons emitted into a solid angle of 4$\pi$ in the interaction region over a large range of electron energies \cite{roos2016}. With the implementation of a funnel type MCP detector (Hamamatsu, model F9892-31) with an open area ratio of >90\%, the electron detection efficiency has the potential to reach 90\% for electrons with energies ranging from near-zero to about 200~eV. The MBES should provide an energy resolution of about $\Delta E = E/50$. It will be possible to mount the MBES in either vertical or horizontal orientation onto the MAC chamber to accommodate different configurations.
It will also be possible to operate the MBES as an electron-ion coincidence spectrometer when combined with a Wiley-McLaren type mass spectrometer \cite{eland2006}. The expected electron energy resolution in this configuration is around $\Delta E = E/20$ \cite{roos2018}. The ion collection efficiency may reach 50\% \cite{roos2018}, giving an overall collection and detection efficiency of about 45\%. Thus, the MBES will provide an efficient detection tool for experiments that benefit from high collection efficiency, very good spectral resolution and simultaneous detection of ions and electrons.
\subsection{Photon imaging detectors}
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm]{fig13_CDImodel3.pdf}
\caption{\label{fig_cdi} (a) Setup for coherent diffractive imaging of solid samples in the back-focusing geometry. The sample surface is observed by the in-line microscope. (b) Raw diffraction image measured from a solid sample resembling the ELI logo irradiated by the 21st harmonic selected by the monochromator. (c) Image reconstructed from the diffraction pattern in (b).}
\end{center}
\end{figure}
Besides ion and electron spectroscopy, XUV photons scattered from the sample can be detected at the MAC end-station. For CDI experiments, an in-vacuum back-illuminated CCD camera is available (PI-MTE, Princeton Instruments, $2048\times2048$ imaging array, 13.5~$\mu$m pixel size, 100\% fill factor, photon energy range 1~eV $- 10$~keV).
Two configurations are possible: (i) a forward focusing geometry (Fig.~\ref{fig_focusing}(a)) for single-harmonic CDI with a monochromatized beam and (ii) a back-focusing geometry with a coated OAP (Fig.~\ref{fig_cdi}(a)), allowing CDI with multiple harmonics depending on the reflectivity of the specific coating.
In both geometries, the sample is mounted on a piezo-driven stage (providing xyz movement and rotation around the vertical axis). Multiple samples can be mounted on the sample holder. An in-line microscope can be used to monitor the sample surface and optimize the focal spot. This microscope has an objective lens with a hole, allowing propagation of the XUV beam through the hole, while observing the sample surface.
The feasibility of single-harmonic CDI in the forward focusing geometry (i) has been verified experimentally \cite{nejdl2021}. A sample resembling the ELI logo with a size of $5\times4\;\mu\rm{m}^2$ was illuminated by the 21st harmonic (photon energy 32.9~eV, wavelength 37.7~nm) selected by the monochromator. A coherent diffraction pattern (Fig.~\ref{fig_cdi}(b)) was captured by the PI-MTE camera. The image was reconstructed using a phase retrieval algorithm based on Fienup’s hybrid input-output \cite{fienup1978,bauschke02} combined with Luke’s relaxed-averaged-alternating-reflections algorithm \cite{luke2004}. Optimized retrieval was achieved after 35 iterations, resulting in good agreement between the reconstructed image (Fig.~\ref{fig_cdi}(c)) and the SEM image of the sample (shown in \cite{nejdl2021}).
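As a hedged sketch of the reconstruction principle, a minimal support-constrained hybrid input-output loop (Fienup's HIO alone, without the relaxed-averaged-alternating-reflections step used in the actual reconstruction above) can be written as follows. The toy object, support and iteration count are illustrative only:

```python
import numpy as np

def hio(measured_magnitude, support, n_iter=200, beta=0.9, seed=0):
    """Minimal hybrid input-output loop: enforce the measured Fourier
    magnitude, keep the estimate inside the support, and damp it outside."""
    rng = np.random.default_rng(seed)
    g = rng.random(measured_magnitude.shape) * support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # Fourier-magnitude projection: keep the phase, impose the data
        G = measured_magnitude * np.exp(1j * np.angle(G))
        g_prime = np.real(np.fft.ifft2(G))
        # HIO update: accept inside the support, feedback term outside
        g = np.where(support > 0, g_prime, g - beta * g_prime)
    return g

# Toy test object: a small square in a 32x32 frame with a loose support
obj = np.zeros((32, 32)); obj[12:20, 12:20] = 1.0
support = np.zeros_like(obj); support[10:22, 10:22] = 1.0
mag = np.abs(np.fft.fft2(obj))
rec = hio(mag, support)
```

Real reconstructions additionally need careful support estimation, averaging over random restarts, and handling of the trivial twin-image and translation ambiguities.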
Other experimental geometries for CDI experiments can be investigated together with the user community. CDI experiments with a high-harmonic source on fixed samples \cite{malm2020,vodungbo2012} or substrate-free nanoscale objects \cite{rupp2017} are foreseen.
\section{Conclusions}
The MAC end-station at the ELI Beamlines facility is a modular station for AMO sciences and CDI, exploiting an XUV beam produced by high harmonic generation. To enable a pump-probe capability, a synchronized auxiliary beam, which can cover a broad range of the electromagnetic spectrum, is available. The station is equipped with state-of-the-art instruments for delivery of low-density samples and jets: atomic, molecular and cluster beams, liquid jets, injected organic or inorganic nanoparticles, and with advanced detectors for ions and electrons as well as scattered photons. The commissioning experiments have demonstrated the functionality of electron and ion spectrometers, of the molecular and cluster beam sources, the aerosol injector and the ability to perform pump-probe measurements. Nanoparticle injection together with ion spectroscopy has been shown. The MAC end-station is thus ready for user experiments in AMO sciences and CDI.
\section{Facility access for users}
The MAC end-station is open to users for scientific investigations in the area of AMO sciences and CDI. Access is obtained through peer-review of applications based on scientific excellence. More information on the user policy and open calls can be found at the ELI Beamlines users' web page (\url{https://www.eli-beams.eu/users/}). Key capabilities, featured upgrades and recent research achievements are also updated at the instrument's web page (\url{https://www.eli-beams.eu/facility/experimental-halls/e1-material-and-biomolecular-applications/mac/}). Users at ELI Beamlines can also access support labs (e.g., Biolab, biochemical and chemical labs) that are equipped with advanced instruments for sample preparation and characterization, microscopes, spectrometers and controlled environment chambers (\url{https://www.eli-beams.eu/facility/laboratories-workshops/laboratories/}).
\section{Acknowledgments}
The authors thank the research groups of Marcel Mudrich (Aarhus University, Denmark), Russell Minns (University of Southampton, United Kingdom), Thomas M\"oller (Technische Universit\"at Berlin, Germany) and Raimund Feifel (University of G\"oteborg, Sweden) as well as Thomas Gebert (Max Planck Institute, Hamburg, Germany), Bernd Sch\"utte (Max Born Institute, Berlin, Germany), Sa\v{s}a Bajt (DESY, Hamburg, Germany), Daniel DePonte and Jake Koralek (SLAC National Accelerator Laboratory, USA), Pamir Nag (J.~Heyrovský Institute of Physical Chemistry, Prague, Czech Republic), Tim Oelze (Technische Universit\"at Berlin, Germany) and Laura Dittrich for support during the commissioning phase and the contributions to the development of the MAC station through scientific discussions.
The authors are grateful to Zden\v{e}k Svoboda, Alexey Sterenzon, Kamil Kropielnicki and Martin P\v{r}e\v{c}ek for their technical help. The authors acknowledge the ELI Beamlines L1 and control system teams and Janos Hajdu for continuous support of the project. The authors thank Rachael Jack for manuscript revision.
This work was supported by the projects Advanced research using high intensity laser produced photons and particles (ADONIS) (CZ.02.1.01/0.0/0.0/16\_019/0000789), Structural dynamics of biomolecular systems (ELIBIO) (CZ.02.1.01/0.0/0.0/15\_003/ 0000447) (both from the European Regional Development Fund and the Ministry of Education, Youth and Sports) and the European Cluster of Advanced Laser Light Sources (EUCALL) via the Horizon 2020 Research and Innovation Programme under grant agreement no.~654220.
\section{Author contributions}
J.A. conceived the project.
M.K. coordinated the development and operation of the MAC end-station. J.N. coordinated the development and operation of the HHG beamline.
E.K., O.K., Z.H., A.H.R., K.P.K., M.R., M.J., M.A., O.F., R.L., O.H., D.D.M., J.N., M.S., D.W., A.W., T.L., F.F., L.P., J.A. and M.K. contributed to the development of instruments and experimental infrastructure.
E.K., O.K., Z.H., A.H.R., K.P.K., M.J., M.A., O.F., O.H., D.D.M., R.B.F., L.B.L, F.F., L.P., J.A. and M.K. contributed to commissioning experiments.
E.K. analyzed the data and prepared figures.
The manuscript was written by E.K. with guidance from M.K. and input from O.K., Z.H., A.H.R., O.H., D.D.M., J.N., A.W. and J.A. All authors reviewed the manuscript.
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[ width=8cm, height=4.5cm]{fig/background.pdf}
\includegraphics[ width=8cm, height=4.5cm]{fig/purpose.pdf}
\caption{Motivation of our privacy-preserving system. (a) represents the risk of privacy leakage in current surveillance systems. The potential abuse of sensitive identity information leads to the deficiency of public datasets, which restricts related research. (b) represents our privacy protection method in surveillance systems. Our anonymized pedestrian images not only protect sensitive information from abuse, but are also suitable for person re-identification research that can be applied to crime investigation. Besides, original raw images cannot be recovered from our anonymized images by unauthorized users.}
\label{fig:motivation}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[ width=17cm, height=6.5cm]{fig/application.pdf}
\caption{{Various utilities of our surveillance system for different targets. Our anonymized images can not only protect privacy from abuse by malicious attackers, but also retain utility for the authorities and researchers. }}
\label{fig:application}
\end{figure*}
With the development of machine learning and the expansion of personal private data, various intelligent applications have emerged, bringing great utility value to individuals and society. However, sensitive personal information raises increasingly prominent privacy issues. Due to these privacy concerns, many organizations have had to compromise; e.g., Meta (Facebook) recently shut down its facial recognition software \cite{facebook}. Moreover, public research datasets are also affected by privacy concerns: the DukeMTMC dataset \cite{ristani2016performance} and the Tiny Image dataset \cite{torralba200880} were taken down, and all facial areas in ImageNet \cite{yang2021study} are blurred. As a result, privacy protection provides security while restricting technological progress. Therefore, finding a privacy-utility trade-off point is of great importance.
In surveillance scenarios, privacy issues are particularly evident, as shown in Fig.~\ref{fig:motivation}(a). Ubiquitous surveillance systems capture huge numbers of raw pedestrian images and videos, which are stored locally or uploaded to cloud servers. On one hand, this is useful for legal users in many scenarios, e.g., crime investigation. On the other hand, it raises serious privacy concerns for individuals and public safety, since the original images or videos contain sensitive information about the pedestrians, e.g., the realistic identity information of a specific person or special community. Without careful protection, this highly sensitive information might be leaked and abused by malicious parties for nefarious purposes. For example, malicious attackers may recognize individual realistic identities and spy on the individuals for further crimes, or even forge them via Deepfake \cite{deepfake}. Moreover, due to potential privacy concerns, public surveillance datasets are scarce and sometimes taken down, as happened to DukeMTMC-reid \cite{ristani2016performance}. The deficiency of public datasets restricts the improvement of related research like person re-identification, and thus limits the development of intelligent video surveillance. Therefore, there is an urgent need to address the privacy issues of pedestrian images while retaining their utility value.
To tackle the above issues, a feasible anonymization solution is illustrated in Fig.~\ref{fig:motivation}(b). To prevent abuse after leakage, such an anonymization method is expected to have a satisfying visual obfuscation effect to ensure malicious parties cannot draw identity information from the anonymized images by human eyes. With the privacy preserved, the anonymized images can be securely used as public datasets. To retain the data utility for various users, the original raw images should be able to be recovered from the protected images for authorized utility, e.g., for police officers to investigate crime. Moreover, considering person re-identification (Re-ID) is imperative in intelligent surveillance systems with significant research impact and practical importance \cite{ye2021deep}, the anonymized images are supposed to retain necessary information for researchers to perform Re-ID tasks. In summary, \textit{an ideal anonymization method for pedestrian images should be reliable for privacy security, reversible for authorized utility and suitable for person re-identification research.}
Lots of research work has been done on image anonymization. \textit{Conventional anonymization methods}, e.g., blurring, pixelation and Gaussian noise adding, suffer from semantic information loss, causing significant declines in utility value. To explore the privacy-utility trade-off, new techniques and mechanisms have been proposed to de-identify images or videos so as to fool identification models while achieving various identity-irrelevant utility goals \cite{maximov2020ciagan, li2019anonymousnet, chen2021perceptual, ren2018learning, proencca2020uu, you2021reversible}, such as reversibility \cite{proencca2020uu}, privacy-preserving action detection \cite{ren2018learning}, smile recognition \cite{you2021reversible} and so on. However, these anonymization techniques change the original individual identity to obtain a low identification rate by recognition models. In a crime investigation scenario, this identity variance is unsuitable for the person re-identification task, which is identity-relevant and requires that original and anonymized images of a specific pedestrian share the same virtual identity. Recently, Dietlmeier et al. \cite{dietlmeier2022improving} proposed an anonymization dataset for Re-ID, which detects and blurs the facial regions. However, non-face regions may also cause privacy leakage, and the original images cannot be reconstructed from their anonymized images.
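The conventional desensitization operations mentioned above are simple local averaging or noise operations. For instance, block pixelation can be written as below; this is a generic sketch of the operation, not the specific operator used in any of the cited works:

```python
import numpy as np

def pixelate(img, block=8):
    """Block-average pixelation: each (block x block) patch is replaced by
    its mean, destroying fine identity cues while keeping coarse color and
    shape statistics. Height and width must be multiples of `block`."""
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0
    x = img.reshape(h // block, block, w // block, block, c)
    means = x.mean(axis=(1, 3), keepdims=True)
    return np.broadcast_to(means, x.shape).reshape(h, w, c)
```

Such an operator is irreversible on its own; the learnable anonymization discussed next uses images like these only as initial supervision while preserving the information needed for recovery and Re-ID.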
To achieve the goal illustrated in Fig.~\ref{fig:motivation}(b), we propose a new reversible anonymization framework for pedestrian images, which reversibly generates full-body anonymous images with little performance drop on the Re-ID task. As shown in Fig.~\ref{fig:application}, the identity information in our anonymized pedestrian images is invisible to attackers, but recoverable for authorized users and computable for researchers performing Re-ID on hybrid domains (raw and anonymized). The core idea of our work is to first desensitize raw images by conventional methods (i.e., blurring, pixelation, or noise adding). These desensitized images are then adopted as initial supervision for an anonymization encoder that translates raw images into privacy-preserving images in a learnable manner. To preserve the features necessary for recovery and person re-identification, we jointly optimize the anonymization encoder with a recovery decoder and a Re-ID model. Through supervised and joint learning, our anonymized images achieve good performance on privacy protection, recovery, and person re-identification. Besides, to further improve Re-ID performance, we propose a progressive training strategy referred to as \textit{supervision upgradation}: the supervision is upgraded by replacing the original desensitized images with the learned anonymized images, which are constrained by both privacy protection and Re-ID performance. Our main contributions are summarized as follows:
\begin{itemize}
\item To the best of our knowledge, we are the first to explore the privacy-utility trade-off for pedestrian images from a Re-ID perspective, in which anonymized images cannot be recognized by third parties, but are recoverable for authorized users and suitable for person re-identification research.
\item We propose a reversible anonymization framework for Re-ID, which jointly optimizes an anonymization encoder with a recovery decoder and achieves the goal of obfuscating the image while keeping the identity for the authorized model.
\item We design a progressive training strategy called \textit{supervision upgradation}, which improves Re-ID performance by progressively upgrading the supervision target of the anonymization model.
\item We experimentally show that our anonymized images achieve good performance for privacy protection, recovery, and person re-identification tasks.
\end{itemize}
\section{Related Work}
\textbf{Person Re-IDentification.}
Re-ID aims at retrieving a person of interest across multiple non-overlapping cameras \cite{ye2021deep}.
With the development of deep neural networks, many works adopt deep convolutional neural networks (CNNs) as the backbone to extract features of person images \cite{ye2021collaborative, ye2021dynamic, ye2022aug}, and incorporate domain generalization \cite{zhou2021domainsurvey, zhou2021domain} to generalize better to unseen domains \cite{zhou2019omni, zhou2021learning}. CNN-based baselines \cite{wang2018learning, luo2019strong, ye2021deep}, such as AGW \cite{ye2021deep}, achieve great success and play a key role in the Re-ID community.
However, public Re-ID datasets face the challenge of privacy concerns, e.g., DukeMTMC-reID \cite{ristani2016performance} dataset was taken down due to privacy issues. To tackle this problem, we propose an anonymization method for Re-ID research, which can protect privacy while retaining necessary features for Re-ID tasks.
\textbf{Image-to-Image Translation.} Image-to-image translation transforms original images into target images with a different style. Isola \textit{et al.} proposed the Pix2pix network \cite{isola2017image}, and Zhu \textit{et al.} its unsupervised variant CycleGAN \cite{zhu2017unpaired}, which achieve impressive performance on paired and unpaired cross-domain image translation, respectively. In this work, we use two Pix2pix networks for anonymization and recovery. Other advanced models can also be applied.
\textbf{Face Anonymization.} Conventional face anonymization methods include pixelation, blurring, and noise adding. However, these methods cause semantic information loss, leading to performance degradation in detection and recognition. Therefore, researchers proposed many learnable anonymization methods based on face swapping \cite{li2019anonymousnet, ren2018learning, maximov2020ciagan, proencca2020uu, chen2021perceptual, sun2018natural, sun2018hybrid, gafni2019live, hukkelaas2019deepprivacy, kuang2021unnoticeable, kuang2021effective, wu2018privacy} to preserve important features for various utilities.
However, in some special cases, generated faces might coincide with the faces of real individuals. Therefore, You \textit{et al.} \cite{you2021reversible} proposed a reversible face privacy-preserving framework based on a learned mosaic for smile recognition. Although smile recognition alone is an easy and limited task, the impressive results show that the semantic information needed for recovery and recognition can be invisibly embedded in protected images. This idea inspires us to anonymize pedestrian images for person re-identification.
\textbf{Privacy-Preserving Methods.}
To tackle privacy concerns, many approaches have been proposed, including differential privacy \cite{dwork2006calibrating, dwork2008differential}, federated learning \cite{mcmahan2017communication, kairouz2019advances}, and so on.
By contrast, our method protects privacy from the source, and thus the protected data can be stored securely and centrally for easy access.
\begin{figure*}[t]
\centering
\includegraphics[ width=17cm, height=5.5cm]{fig/framework.pdf}
\caption{{Framework of the proposed method. The framework consists of four components. The anonymization model produces privacy-preserving images in a learnable manner. The recovery model and the Re-ID model are added to jointly optimize the anonymization model. The supervision (desensitized images) is progressively upgraded with learned anonymized images to further improve Re-ID performance. Colors correspond to the different utilities illustrated in Fig.~\ref{fig:application}.}}
\label{fig:framework}
\end{figure*}
\section{Proposed Method}\label{sec:method}
In this section, we detail the methodology to reversibly anonymize images for Re-ID. As illustrated in Fig.~\ref{fig:framework}, our framework contains the following four components.
1) \textit{Anonymized Image Generation $\S$~\ref{sec:ano}.} We exploit the power of image-to-image translation with conditional adversarial networks to generate anonymized images in a learnable manner. With learnable anonymization, further objectives can be achieved by joint learning.
2) \textit{Raw Image Recovery $\S$~\ref{sec:rec}.} To achieve reversibility, a recovery model is added to embed necessary recovery information into the anonymized images.
3) \textit{Joint Learning with a Re-ID Model $\S$~\ref{sec:reid}.} To preserve features for re-identification, we jointly optimize the anonymization generator with a Re-ID model.
4) \textit{Progressive Supervision Upgradation $\S$~\ref{sec:upgrade}.} In order to further improve Re-ID performance, the supervision images are progressively upgraded according to the performance of anonymized images on privacy protection and Re-ID.
\subsection {Anonymized Image Generation}\label{sec:ano}
To generate anonymized images, we draw inspiration from image-to-image translation, which transforms an input image into an output image of a specified form in a learnable manner. Our goal is to convert the original input image into a protected form, and blurred, pixelated, or noise-added images can serve as initial supervision to guide the generation of privacy-preserving images. Specifically, we adopt the Pix2pix \cite{isola2017image} framework for image translation based on GANs \cite{Goodfellow2014GenerativeAN}.
The training samples are $\{x_i\}_{i=1}^n \subset X$ with labels $\{label_i\}_{i=1}^n$, where $X$ denotes the set of original images. We denote the supervision images by $\{y_i\}_{i=1}^n \subset Y$; they are initialized with conventionally desensitized images and can be further updated.
The goal of the anonymization generator $G_X$ is to learn a mapping function $G : X \rightarrow Y$. To achieve this goal, the generator $G_X$ is trained in an adversarial manner with a discriminator $D_Y$, where $G_X$ tries to generate images $G_X(x)$ similar to $y$ while $D_Y$ aims to distinguish between $G_X(x)$ and $y$. The adversarial objective of $G_X$ can be expressed as
\begin{equation}
\begin{aligned}
\mathcal{L}_{adv_1} = \ & \frac{1}{n} \sum\nolimits_{i=1}^{n} \log (D_Y(x_{i}, y_{i})) \ + \\
& \frac{1}{n} \sum\nolimits_{i=1}^{n} \log (1-D_Y(x_{i}, G_X(x_{i}))),
\end{aligned}
\label{eq:adv1}
\end{equation}
where $n$ is the number of training samples in each batch; $G_X$ tries to minimize this objective against an adversarial $D_Y$ that tries to maximize it.
Besides, an $L_1$ loss between $G_X(X)$ and $Y$ is adopted to guarantee that the learned function maps each individual input $x_{i}$ to the desired output $y_{i}$ \cite{isola2017image}:
\begin{equation}
\label{eq:l1ano}
\mathcal{L}_{1_{ano}} = \frac{1}{n} \sum\nolimits_{i=1}^{n} \|y_{i}-G_X(x_{i})\|_{1}.
\end{equation}
The total loss of the anonymization generator is
\begin{equation}
\label{eq:lano}
\mathcal{L}_{ano} = \mathcal{L}_{adv_1} + \lambda_{L_1} \mathcal{L}_{1_{ano}},
\end{equation}
where $\lambda_{L_1}$ is a hyperparameter to reduce the artifacts \cite{isola2017image}.
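As a concrete sketch, the anonymization losses above can be written in a few lines of NumPy (illustrative only: the actual model is trained with Pix2pix in a deep-learning framework, and the discriminator outputs are assumed here to be probabilities in $[0,1]$; function names are ours):

```python
import numpy as np

def l1_loss(y, g_x):
    # L1 term: mean absolute error between supervision and anonymized images.
    return np.mean(np.abs(y - g_x))

def adversarial_loss(d_real, d_fake, eps=1e-12):
    # Adversarial term: D_Y scores on real pairs (x, y) and fake pairs (x, G_X(x)).
    return np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def anonymization_loss(d_real, d_fake, y, g_x, lambda_l1=100.0):
    # Total anonymization loss: adversarial term plus weighted L1 term.
    return adversarial_loss(d_real, d_fake) + lambda_l1 * l1_loss(y, g_x)
```

With a perfect discriminator score on real pairs and zero on fakes, the adversarial term is close to zero and the total loss is dominated by the weighted $L_1$ term, which is why $\lambda_{L_1}$ controls how tightly the output follows the supervision images.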
\subsection{Raw Image Recovery}\label{sec:rec}
To make the anonymization process reversible, we design a raw image recovery module that reconstructs raw images from anonymized inputs. The Pix2pix \cite{isola2017image} framework is again used and jointly optimized with the anonymization generator.
The raw image recovery is similar to the anonymization process. In contrast to $G_X$, the recovery generator $G_Y$ is trained to learn a mapping function $F : Y \rightarrow X$ and to produce recovered images $G_Y(G_X(x))$ that cannot be distinguished from the original raw images $x$. The adversarial objective is similar to Eq.~\ref{eq:adv1}:
\begin{equation}
\begin{aligned}
\mathcal{L}_{adv_2} \ = \ & \frac{1}{n} \sum\nolimits_{i=1}^{n} \log (D_X(G_X(x_{i}), x_{i})) \ + \\ & \frac{1}{n} \sum\nolimits_{i=1}^{n} \log (1-D_X(G_X(x_{i}), G_Y(G_X(x_{i})))),
\end{aligned}
\end{equation}
where $G_Y$ and $D_X$ oppose each other like $G_X$ and $D_Y$.
To make $G$ and $F$ forward cycle-consistent, i.e., $x \rightarrow G(x) \rightarrow F(G(x)) \approx x$, a cycle consistency loss \cite{zhu2017unpaired} is adopted:
\begin{equation}
\mathcal{L}_{1_{rec}} = \frac{1}{n} \sum\nolimits_{i=1}^{n} \|x_{i}-G_Y(G_X(x_{i}))\|_{1}.
\end{equation}
The total loss of the recovery generator is:
\begin{equation}
\label{eq:lrec}
\mathcal{L}_{rec} = \mathcal{L}_{adv_2} + \lambda_{L_1} \mathcal{L}_{1_{rec}}.
\end{equation}
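The cycle-consistency idea can be made concrete with a toy round trip in NumPy, where a fixed pattern stands in for the learned obfuscation (in the real model $G_X$ and $G_Y$ are Pix2pix generators, so recovery is only approximate rather than exact):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((4, 3, 8, 8))        # toy batch of "raw images" in [0, 1)
key = rng.random((3, 8, 8))         # fixed pattern standing in for learned weights

G_X = lambda img: (img + key) % 1.0   # toy anonymizer: obfuscate with the pattern
G_Y = lambda img: (img - key) % 1.0   # toy recovery generator: undo the obfuscation

recovered = G_Y(G_X(x))
cycle_l1 = np.mean(np.abs(x - recovered))   # cycle-consistency loss on the toy pair
```

Here the round trip is exact up to floating-point error; in the learned setting the cycle loss drives $F(G(x))$ toward $x$ without ever reaching zero.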
\subsection{Joint Learning with a Re-ID Model} \label{sec:reid}
We embed the Re-ID model into our architecture for joint learning, following the powerful AGW baseline \cite{ye2021deep} from existing Re-ID research. In the Re-ID field, anonymization is a solution to privacy issues; however, directly adopting images desensitized by conventional obfuscation methods greatly degrades Re-ID performance. Therefore, we propose to use hybrid images (original and anonymized) to jointly train the Re-ID model and the anonymization generator.
The Re-ID model takes paired inputs: raw images and their corresponding anonymized images with the same labels. By training on paired data, the Re-ID model learns to map original raw images and anonymized images of a specific person to the same virtual identity.
In detail, the Re-ID model contains three main components. \textit{a) Backbone.} ResNet50 \cite{he2016deep} pre-trained on ImageNet \cite{deng2009imagenet} is adopted as the backbone, with the stride of the last spatial down-sampling operation changed from 2 to 1. \textit{b) Generalized-mean (GeM) pooling.} The global average pooling in the original ResNet50 is replaced with GeM \cite{ye2021deep}, whose output is used to compute the center loss and triplet loss during training. \textit{c) BNNeck.} BNNeck \cite{luo2019strong} adds a BN layer between the features and the FC layers.
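The two non-backbone components admit compact NumPy sketches (shapes and defaults are illustrative; the actual AGW implementation is in PyTorch):

```python
import numpy as np

def gem_pool(feat, p=3.0, eps=1e-6):
    # Generalized-mean pooling over the spatial dims of an (N, C, H, W) map.
    # p = 1 recovers average pooling; large p approaches max pooling.
    clipped = np.clip(feat, eps, None)
    return np.mean(clipped ** p, axis=(2, 3)) ** (1.0 / p)

def bnneck(feat, gamma=1.0, beta=0.0, eps=1e-5):
    # Batch-normalize the pooled (N, C) feature before the ID classifier;
    # pre-BN features feed the triplet/center losses, post-BN the ID loss.
    mean, var = feat.mean(axis=0), feat.var(axis=0)
    return gamma * (feat - mean) / np.sqrt(var + eps) + beta
```

Note that for a constant feature map GeM returns that constant regardless of $p$, and that BNNeck's normalization decouples the metric-learning feature space from the classification logits.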
The loss function is denoted by $\mathcal{L}_{AGW}(x)$, which combines three losses commonly used in Re-ID: the identity classification loss ($\mathcal{L}_{id}$), the center loss ($\mathcal{L}_{ct}$) \cite{wen2016discriminative} and the weighted regularization triplet loss ($\mathcal{L}_{wrt}$) \cite{ye2021deep}. To make our Re-ID model adaptive to both raw and privacy-preserving scenarios, both raw and anonymized images are used as input. Therefore, the total loss of the Re-ID model is
\begin{equation}
\label{eq:la}
\mathcal{L}_{reid} = \mathcal{L}_{AGW}(x) + \mathcal{L}_{AGW}(G_X(x)).
\end{equation}
In summary, the final objective of our anonymization model for Re-ID on hybrid images is:
\begin{equation}
\mathcal{L} = \mathcal{L}_{ano} + \mathcal{L}_{rec} + \mathcal{L}_{reid} .
\end{equation}
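Written out as code, the final objective simply sums the three parts, with the Re-ID term evaluated on both domains (a sketch; the internals of $\mathcal{L}_{AGW}$ follow \cite{ye2021deep} and are abstracted here as a callable):

```python
def reid_loss(agw_loss, x, anon):
    # Hybrid Re-ID loss above: the same AGW loss on raw and anonymized inputs.
    return agw_loss(x) + agw_loss(anon)

def total_loss(l_ano, l_rec, agw_loss, x, anon):
    # Final objective: anonymization + recovery + hybrid Re-ID terms.
    return l_ano + l_rec + reid_loss(agw_loss, x, anon)
```

Because all three terms are summed with unit weights, the trade-off between obfuscation, recoverability and identity preservation is mediated entirely by the shared generator parameters rather than by loss weighting.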
\begin{figure}[t]
\centering
\includegraphics[ width=8cm]{fig/upgrade-explanation.pdf}
\caption{{Illustration of supervision upgradation. The upgradation is based on the performance of protection and Re-ID.}}
\label{fig:upgrade-explanation}
\end{figure}
\subsection{Progressive Supervision Upgradation}\label{sec:upgrade}
In $\S$~\ref{sec:reid}, a Re-ID model is added for joint learning so that the anonymized images $G_X(x)$ preserve identity. However, the initial supervision images (i.e., blurred, pixelated or noise-added images) are not optimal as final supervision because their semantic information loss leads to identity variance and thus restricts the improvement of Re-ID performance. Therefore, as shown in Fig.~\ref{fig:framework} briefly and Fig.~\ref{fig:upgrade-explanation} in detail, we progressively upgrade the supervision images during training to satisfy both the privacy protection (unrecognizability) and Re-ID constraints. Specifically, the privacy constraint is met when the PSNR and SSIM computed between anonymized and raw images (i.e., $PSNR_{ano}$ and $SSIM_{ano}$) are lower than those computed between desensitized and raw images (i.e., $PSNR_{des}$ and $SSIM_{des}$) plus a small positive slack ($\epsilon_{psnr}$ and $\epsilon_{ssim}$). The Re-ID constraint is met when the rank-1 accuracy $R1_{raw-ano}$, with raw images as $query$ and anonymized images as $gallery$, exceeds the previous maximum rank-1 value.
To begin with, we train a Re-ID model with raw and desensitized images as input and obtain the rank-1 value $R1_{raw-des}$ with raw images as $query$ and desensitized images as $gallery$. Then the desensitized images are adopted as the initial supervision, and the maximum rank-1 value is initialized to $R1_{raw-des}$ minus a small value $\epsilon_{r1}$.
During training, the supervision images remain the desensitized images, guaranteeing unrecognizability, as long as the privacy constraint is not met; they are upgraded to $G_X(x)$ only when both constraints are satisfied. Through supervision upgradation, the supervision images become increasingly adequate for Re-ID research while preserving privacy.
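The upgradation decision can be summarized in a few lines (a sketch of the rule in Fig.~\ref{fig:upgrade-explanation}; function and variable names are ours):

```python
def should_upgrade(psnr_ano, ssim_ano, psnr_des, ssim_des,
                   r1_raw_ano, r1_max,
                   eps_psnr=1.0, eps_ssim=0.05):
    # Privacy constraint: anonymized images must be (up to a small slack)
    # at least as obfuscated as the conventionally desensitized ones.
    privacy_ok = (psnr_ano < psnr_des + eps_psnr) and (ssim_ano < ssim_des + eps_ssim)
    # Re-ID constraint: raw-vs-anonymized rank-1 must beat the running maximum.
    reid_ok = r1_raw_ano > r1_max
    return privacy_ok and reid_ok
```

Both conditions must hold simultaneously, so an anonymized image set that improves Re-ID at the cost of weaker obfuscation never becomes the new supervision.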
\begin{figure}[t]
\centering
\includegraphics[ width=8cm]{fig/architecture.pdf}
\put(-34.5,146.5){\scriptsize\textcolor{red}{ $\S$~\ref{sec:ano}}}
\put(-34.5,83){\scriptsize\textcolor{red}{ $\S$~\ref{sec:rec}}}
\put(-34.5,119){\scriptsize\textcolor{red}{ $\S$~\ref{sec:reid}}}
\put(-102,145){\scriptsize\textcolor{red}{$\S$~\ref{sec:upgrade}}}
\caption{{Flowchart of our training process. ``R1'' represents rank-1 accuracy with anonymized images as $query$ and desensitized images as $gallery$.}}
\label{fig:architecture}
\end{figure}
\subsection{Scheme of Training Process}\label{sec:scheme}
In Fig.~\ref{fig:architecture}, we show the flowchart of the training process. The input raw images are first desensitized by conventional methods, which initializes the supervision images. Under this supervision, the anonymization model learns to translate raw images into anonymized images. The anonymized images are then fed into the Re-ID model and the recovery model, which jointly learn to preserve the features necessary for recovery and retrieval. Training continues with the supervision images upgraded according to the performance of the anonymized images on privacy protection and person re-identification. After training, the original raw images can be recovered from the output anonymized images with the parameters of the trained recovery model. Meanwhile, proper Re-ID performance can be achieved after supervised learning on these anonymized images.
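The whole procedure can be summarized as Python-style pseudocode (names are illustrative; $G_X$, $G_Y$ and the loss terms are those defined in the preceding subsections, and the actual data flow is shown in Fig.~\ref{fig:architecture}):

```
# Pseudocode; desensitize() is blurring / pixelation / noise adding.
supervision = desensitize(raw_images)
r1_max = r1(raw_val, desensitize(val_images)) - eps_r1
for epoch in range(num_epochs):
    for x, label in loader:
        anon = G_X(x)                                # anonymization (Sec. 3.1)
        rec  = G_Y(anon)                             # recovery (Sec. 3.2)
        loss = L_ano(anon, supervision[x]) + L_rec(rec, x) \
             + L_AGW(x, label) + L_AGW(anon, label)  # hybrid Re-ID (Sec. 3.3)
        update(G_X, G_Y, D_X, D_Y, reid_model, loss)
    # Supervision upgradation (Sec. 3.4): only when both constraints hold.
    if privacy_ok(anon_val) and r1(raw_val, anon_val) > r1_max:
        supervision = G_X(raw_images)
        r1_max = r1(raw_val, anon_val)
```

The pseudocode makes explicit that the supervision set is mutable state checked once per epoch, while the four networks are updated on every mini-batch.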
\begin{figure*}[t]
\centering
\includegraphics[ width=17cm, height=3.5cm]{fig/qualitative.pdf}
\caption{{Qualitative comparison on privacy protection. (a) raw images; (b)/(d)/(f) blurred/pixelated/noise-added images; (c)/(e)/(g) anonymized images guided by blurring/pixelation/noise adding. Best viewed in color. Zoom in for details.}}
\label{fig:qualitative}
\end{figure*}
\section{Experimental Results}
\textbf{Preliminary.} In the following parts, ``OI'' denotes the original raw images, ``PI'' the protected images, and ``w/ U'' / ``w/o U'' anonymization with and without supervision upgradation.
\subsection{Datasets and Evaluation Metrics}
We conduct experiments on three widely used datasets: Market-1501 \cite{zheng2015scalable}, MSMT17 \cite{wei2018person} and CUHK03 \cite{li2014deepreid}. The Market-1501 dataset comprises 32,668 annotated bounding boxes captured under six cameras. The MSMT17 dataset consists of 4,101 identities and 126,441 bounding boxes taken by a 15-camera network. The CUHK03 dataset contains 1,467 identities and 14,097 detected bounding boxes.
We evaluate our model under image quality and re-identification metrics. For privacy protection and recovery, we adopt two widely used metrics: PSNR and SSIM \cite{wang2004image}. For Re-ID performance, the Cumulative Matching Characteristics (\textit{a.k.a.} Rank-$k$ matching accuracy) \cite{wang2007shape}, mean Average Precision (mAP) \cite{zheng2015scalable}, and the recent mean inverse negative penalty (mINP) \cite{ye2021deep} are used in our experiments.
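For reference, PSNR (in dB, for images scaled to $[0,1]$) can be computed as below; SSIM is more involved and we defer to \cite{wang2004image}:

```python
import numpy as np

def psnr(img_a, img_b, max_val=1.0):
    # Peak signal-to-noise ratio: higher means the two images are more alike,
    # so a LOW PSNR against the raw image indicates strong obfuscation, while
    # a HIGH PSNR against the raw image indicates faithful recovery.
    mse = np.mean((np.asarray(img_a, float) - np.asarray(img_b, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

This dual role of the same metric is what allows Table~\ref{tab:recovery} (high is good) and the privacy constraint of $\S$~\ref{sec:upgrade} (low is good) to share one definition.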
\subsection{Implementation Details}
\textbf{Training setup.} We first split the original training set into a new training set and a validation set at a ratio of $4:1$, and further split the validation set into a gallery set and a query set at the same ratio. All performance during training is obtained by testing on the validation set. In all experiments, we jointly train our three models for 120 epochs with batch size 64. All input images are resized to $256\times128$ and then desensitized by blurring with a $12\times 12$ kernel, pixelation with $24\times 24$ blocks, or adding Gaussian noise $N(0,0.5)$. We use the Adam optimizer \cite{kingma2014adam} with $\beta_1 = 0.5, \beta_2 = 0.999$ for the two generators and with the default values $\beta_1 = 0.9, \beta_2 = 0.999$ for the Re-ID model. The learning rate increases linearly from $3.5\times10^{-5}$ to $3.5\times10^{-4}$ over the first 10 epochs, and then decays to $3.5\times10^{-5}$ and $3.5\times10^{-6}$ at the 40th and 80th epochs, respectively.
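Two of the three conventional desensitization operations are straightforward to sketch in NumPy with the parameters above (blurring would typically use an image library such as OpenCV or PIL; we interpret the second argument of $N(0,0.5)$ as the standard deviation):

```python
import numpy as np

def pixelate(img, block=24):
    # Replace each block x block patch of an (H, W, C) image with its mean.
    out = img.astype(float)
    h, w = img.shape[:2]
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = out[i:i + block, j:j + block].mean(axis=(0, 1))
    return out

def add_gaussian_noise(img, sigma=0.5, seed=0):
    # Add N(0, sigma) noise and clip back to the valid [0, 1] range.
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
```

Either output can serve as the initial supervision set $Y$ of $\S$~\ref{sec:ano}.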
\textbf{Models of anonymization, recovery and Re-ID.} Our implementations of the anonymization and recovery models follow the Pix2pix network \cite{isola2017image}, with $\lambda_{L_1}$ set to 100 as suggested in \cite{isola2017image}, while the Re-ID model follows the practice in \cite{ye2021deep} with the same hyperparameters. These components can be replaced by other advanced methods.
\textbf{Supervision Upgradation.} To obtain a competitive privacy protection effect, we set $\epsilon_{psnr}$ and $\epsilon_{ssim}$ (see Fig.~\ref{fig:upgrade-explanation}) to the small values of 1.0 and 0.05, and $\epsilon_{r1}$ to 0.05.
\subsection{Results of Privacy Protection}
\textbf{Qualitative Results.} Fig.~\ref{fig:qualitative} presents a qualitative comparison of privacy protection performance. Compared to raw images, our anonymized images achieve good visual privacy protection: the individual's body contour and the details of face and clothes are all concealed, so identity cannot be inferred from the anonymized images by human eyes. Compared with the desensitized images, our corresponding anonymized images attain a competitive visual obfuscation effect in a different style.
\textbf{Quantitative Results.} Table~\ref{tab:privacy_reid} shows the Re-ID performance of an unprotected AGW model on protected images. Compared with raw images (i.e., Raw), our anonymized images yield extremely low rank-1 and mAP values, showing that a common Re-ID model cannot correctly identify them. Compared with the baselines, our anonymized images achieve similarly low Re-ID performance, indicating that our anonymization method provides comparable privacy protection.
\begin{table}[t]
\caption{\label{tab:privacy_reid}{Re-ID performance of a common AGW model on protected images. ``Base'' denotes conventionally desensitized images.}}
\begin{threeparttable}
\resizebox{8cm}{22mm}{
\begin{tabular}{c|cc|cc|cc}
\hline
\multicolumn{1}{c|}{Dataset} &\multicolumn{2}{c|}{Market-1501} & \multicolumn{2}{c|}{MSMT17} &\multicolumn{2}{c}{CUHK03}\\\hline
\multicolumn{1}{c|}{Images} & rank-1 & mAP & rank-1 & mAP& rank-1 & mAP \\\hline
\multicolumn{7}{l}{\textit{(a) Evaluation of blurring.}} \\\hline
\multirow{1}{*}{{ Base}}
& 20.6 & 8.7& 3.9 & 1.4& 1.9 & 2.5 \\\hline
\multirow{1}{*}{{ Ours}}
& 18.4 & 7.6& 8.3 & 2.6& 3.9 & 3.9 \\\hline
\multicolumn{7}{l}{\textit{(b) Evaluation of pixelation.}} \\\hline
\multirow{1}{*}{{ Base}}
& 20.2 & 9.1& 1.8 & 0.7& 2.1 & 2.3 \\\hline
\multirow{1}{*}{{ Ours}}
& 17.5 & 7.7& 6.7 & 2.1& 1.3 & 1.6 \\\hline
\multicolumn{7}{l}{\textit{(c) Evaluation of Gaussian noise.}} \\\hline
\multirow{1}{*}{{Base}}
& 0.6 & 0.4& 0.2 & 0.1& 0.1 & 0.3 \\\hline
\multirow{1}{*}{{Ours}}
& 1.4 & 0.6& 0.4 & 0.1& 0.2 & 0.4 \\\hline
Raw & 95.7 & 88.6& 68.6 & 49.8& 67.3 & 65.8 \\\hline
\end{tabular}
}
\end{threeparttable}
\end{table}
\begin{table}[t]
\caption{\label{tab:privacy}{Human evaluation results. ``Base'' represents conventional anonymization methods. Privacy value (\%) denotes verification accuracy by human eyes. A lower privacy value indicates better privacy protection, while a higher Re-ID rank-1 accuracy means better Re-ID performance.}}
\begin{threeparttable}
\begin{tabular}{c|cc|cc|c|c}
\hline
\multicolumn{1}{c|}{Image} &\multicolumn{2}{c|}{A} & \multicolumn{2}{c|}{B} &\multicolumn{1}{c|}{\multirow{2}{*}{{ Privacy value $\downarrow$}}}&\multicolumn{1}{c}{\multirow{2}{*}{{Re-ID rank-1 $\uparrow$}}}\\
\cline{1-5}
\multicolumn{1}{c|}{Method} & OI & PI & OI & PI &\multicolumn{1}{c|}{~}& \multicolumn{1}{c}{~} \\\hline
\multicolumn{6}{l}{\textit{(a) Evaluation of blurring.}} \\\hline
\multirow{2}{*}{{ Base}}
& $\checkmark$ & & &$\checkmark$& 79 & 40.1\\
& & $\checkmark$ & &$\checkmark$& 82& 67.3 \\\hline
\multirow{2}{*}{{ Ours}}
& $\checkmark$ & & &$\checkmark$& 83& 88.2\\
& & $\checkmark$ & &$\checkmark$& 82 & 89.2 \\\hline
\multicolumn{6}{l}{\textit{(b) Evaluation of pixelation.}} \\\hline
\multirow{2}{*}{{ Base}}
& $\checkmark$ & & &$\checkmark$& 71& 75.3 \\
& & $\checkmark$ & &$\checkmark$& 75& 64.3 \\\hline
\multirow{2}{*}{{ Ours}}
& $\checkmark$ & & &$\checkmark$& 71& 88.5\\
& & $\checkmark$ & &$\checkmark$& 64& 87.0 \\\hline
\multicolumn{6}{l}{\textit{(c) Evaluation of Gaussian noise.}} \\\hline
\multirow{2}{*}{{ Base}}
& $\checkmark$ & & &$\checkmark$& 84& 50.8 \\
& & $\checkmark$ & &$\checkmark$& 83& 68.7 \\\hline
\multirow{2}{*}{{ Ours}}
& $\checkmark$ & & &$\checkmark$& 88& 83.5\\
& & $\checkmark$ & &$\checkmark$& 84 & 91.2 \\\hline
Upper &$\checkmark$ & & $\checkmark$ & & 92 & 95.7\\\hline
\end{tabular}
\end{threeparttable}
\end{table}
\textbf{Human Evaluation.} Table~\ref{tab:privacy} shows the human evaluation of the privacy protection effects of our method and the baselines (blurring, pixelation and noise adding). We randomly sampled pairs of images from the raw or privacy-preserving Market-1501 testing set and asked participants whether each pair corresponds to the same person. The image pairs were divided into 13 groups (i.e., the 13 rows in Table~\ref{tab:privacy}) according to the protection method. Each group contained 100 sampled pairs distributed equally among 10 participants. The optimal privacy effect corresponds to a privacy value of 50\%, which indicates random guessing. Compared with raw image pairs (i.e., Upper), pairs containing our anonymized images yield a substantially lower privacy value with only a slight decrease in Re-ID accuracy. Compared with the baselines, our method obtains comparable verification accuracy (i.e., privacy value) by human eyes and significantly better Re-ID rank-1 accuracy by trained Re-ID models.
\subsection{Results of Recovery}
\textbf{Qualitative Results.} In Fig.~\ref{fig:recovery}, we qualitatively compare recovered images with raw images. The recovered images achieve visual quality similar to the originals; it is difficult to distinguish them from the original raw images by human eyes.
\begin{figure}[t]
\centering
\includegraphics[ width=8cm, height=1.8cm]{fig/recovery.pdf}
\caption{{Qualitative results of the recovered images. (a) represents raw images; (b) represents recovered images.}}
\label{fig:recovery}
\end{figure}
\begin{table}[t]
\caption{\label{tab:recovery}{Image quality of the recovered images. PSNR and SSIM are reported; higher values indicate better quality.}}
\begin{threeparttable}
\begin{tabular}{c|cc|cc|cc}
\hline
\multicolumn{1}{c|}{Dataset} &\multicolumn{2}{c|}{Market-1501} & \multicolumn{2}{c|}{MSMT17} &\multicolumn{2}{c}{CUHK03}\\\hline
\multicolumn{1}{c|}{Method} & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\\hline
\multicolumn{7}{l}{\textit{(a) Evaluation of blurring.}} \\\hline
\multirow{1}{*}{Ours} & 26.78 & 0.92 & 30.02 & 0.93 & 23.67 & 0.89 \\\hline
\multicolumn{7}{l}{\textit{(b) Evaluation of pixelation.}} \\\hline
\multirow{1}{*}{Ours} & 29.74 & 0.93 & 25.94 & 0.89 & 27.00 & 0.93 \\\hline
\multicolumn{7}{l}{\textit{(c) Evaluation of Gaussian noise.}} \\\hline
\multirow{1}{*}{Ours} & 26.80 & 0.92 & 27.80 & 0.92 & 23.00 & 0.91 \\\hline
\multirow{1}{*}{Upper} & +$\infty$ & 1 & +$\infty$ & 1 & +$\infty$ & 1 \\\hline
\end{tabular}
\end{threeparttable}
\end{table}
\textbf{Quantitative Results.} Table~\ref{tab:recovery} reports the image quality of the recovered images. Across all three datasets, the PSNR and SSIM values of the recovered images are roughly above 25 and 0.9, indicating good image quality. Moreover, as shown in Table~\ref{tab:recovery_reid}, both the common AGW model and our protected Re-ID model obtain Re-ID performance on the recovered images comparable to that on the original raw images. Compared to the AGW model, our model suffers a slight performance degradation on raw and recovered images, since it is also trained to perform well on the anonymized images, whose style differs markedly from raw and recovered images.
\begin{table}[t]
\caption{\label{tab:recovery_reid}{Re-ID performance on the recovered images. Rank-$r$ accuracy (\%) and mAP (\%) are reported. ``RI'' denotes recovered images.}}
\begin{threeparttable}
\resizebox{8cm}{12mm}{
\begin{tabular}{c|cc|cc|cc|cc}
\hline
\multicolumn{1}{c|}{} &\multicolumn{2}{c|}{images}&\multicolumn{2}{c|}{Market-1501} & \multicolumn{2}{c|}{MSMT17} &\multicolumn{2}{c}{CUHK03}\\\hline
\multicolumn{1}{c|}{Model} & OI & RI & r=1 & mAP & r=1 & mAP & r=1 & mAP \\\hline
\multirow{2}{*}{AGW} & \checkmark & & 95.7 & 88.6 & 68.6 & 49.8& 67.3 & 65.8 \\
& & \checkmark & 93.8 & 84.0 & 63.2 & 43.1& 64.6& 62.2 \\\hline
\multirow{2}{*}{Ours} & \checkmark & & 91.7 & 78.2 & 48.6 & 29.8& 38.8 & 42.4 \\
& & \checkmark & 90.4 & 75.5 & 49.8 & 29.5 & 33.3 & 33.1 \\\hline
\end{tabular}
}
\end{threeparttable}
\end{table}
\subsection{Results of Person Re-identification}
\textbf{Experiments under Four Test Settings.} As shown in Table~\ref{tab:reid}, we test the Re-ID performance under four settings with different queries and galleries. These four settings represent different scenarios:
1) \textbf{Original Setting} ($query=OI$, $gallery=OI$): Both query and gallery sets use original raw images. Our proposed model achieves performance comparable to the existing widely used setting (Rank-1: 91.7\% \textit{vs.} 95.7\% on the Market-1501 dataset) with only a minor drop. This demonstrates that a model trained on our anonymized dataset can still be applied to practical scenarios in which the testing pedestrian images are not anonymized. It also opens an interesting research direction: designing algorithms on anonymized datasets without invading privacy, while testing in practical non-anonymized scenarios, which alleviates a major ethical concern of recent research on human subjects.
2) \textbf{Protected Setting} ($query=PI$, $gallery=PI$): Both query and gallery sets use privacy-preserving images. Compared to the baselines of blurring, pixelation and noise adding, our model achieves average Rank-1 improvements of 26.8\%, 26.3\% and 28.0\% on the three datasets. Compared to the original setting, our model under the protected setting achieves comparable results. These results indicate that our anonymized images are suitable for Re-ID research and can be applied to practical scenarios in which the testing pedestrian images are anonymized.
3) \textbf{Crossed Settings} ($query=OI$, $gallery=PI$ and $query=PI$, $gallery=OI$): Query and gallery sets use different types of images. Compared to the baselines, our model significantly improves performance on the three datasets, on average by 34.2\% and 41.1\% for blurring, 22.4\% and 21.7\% for pixelation, and 29.2\% and 33.0\% for noise adding. This indicates that our anonymization model is robust to privacy protection applied to either the query or the gallery set. Besides, the performance is also comparable to the original setting, indicating that our model can be applied to Re-ID on hybrid images; it can be inferred that the feature distribution of our anonymized images is close to that of the raw images. Additionally, our model performs better on average in the protected setting than in the crossed settings, probably because a minor domain gap remains between raw and anonymized images. In summary, our anonymized images are suitable for Re-ID, and the method can be applied to scenarios where the testing images contain both raw and privacy-preserving images.
\begin{figure}[t]
\centering
\includegraphics[ width=8cm, height=2.7cm]{fig/upgrade-results.pdf}
\caption{{Qualitative comparison on supervision upgradation. (a) raw images; (b) pixelated images; (c) anonymized images w/o U; (d) anonymized images w/ U.}}
\label{fig:upgrade-results}
\end{figure}
\renewcommand\arraystretch{0.87}
\begin{table*}[t]
\centering
\caption{\label{tab:reid}Evaluation of Re-ID performance on three Re-ID datasets. ``Base'' means the AGW model trained on desensitized images. ``Upper'' indicates the original AGW model. Rank-$r$ accuracy (\%), mAP (\%) and mINP (\%) are reported.}
\begin{tabular}{c|cc|cc|cccc|cccc|cccc}\hline
& \multicolumn{2}{c|}{query}
& \multicolumn{2}{c|}{gallery} &\multicolumn{4}{c|}{Market-1501} & \multicolumn{4}{c|}{MSMT17} & \multicolumn{4}{c}{CUHK03}\\\hline
& OI & PI & OI & PI & $r=1$ & $r=5 $ & mAP & mINP & $r=1$ & $r=5$ & mAP & mINP & $r=1$ & $r=5$ & mAP & mINP \\\hline
\multicolumn{7}{l}{\textit{(a) Evaluation of blurring.}}\\\hline
\multirow{4}{*}{\shortstack{ Base}}
& $\checkmark$ & & $\checkmark$& & 84.8 & 94.1 & 67.4 & 32.2
& 30.5 & 44.0 & 17.1 & 2.8 & 30.4 & 50.4 & 31.5 & 22.3 \\
& $\checkmark$ & & & $\checkmark$ & 40.1 & 59.4 & 25.4 & 6.3
& 21.3 & 35.1 & 10.7 & 1.4 & 14.6 & 28.4 & 14.8 & 8.6 \\
& & $\checkmark$ & $\checkmark$ & & 18.3 & 31.7 & 15.5 & 5.2
& 16.2 & 28.0 & 9.4 & 1.5 & 10.4 & 20.4 & 12.4 & 8.4 \\
& & $\checkmark$ & & $\checkmark$ & 67.3 & 83.5 & 44.2 & 13.7
& 15.2 & 24.3 & 7.2 & 0.8 & 8.2 & 19.8 & 10.7 & 6.9\\\hline
\multirow{4}{*}{\shortstack{Ours\\(w/o U)}}
& $\checkmark$ & & $\checkmark$& & 83.1 & 93.7 & 61.9 & 24.8
& 43.6 & 57.6 & 24.0 & 3.7 & 31.6& 52.6 & 32.5 & 23.0 \\
& $\checkmark$ & & & $\checkmark$ & 68.3 & 84.9 & 45.9 & 14.6
& 28.4 & 42.3 & 9.1 & 1.2 & 13.8 & 25.8 & 14.8 & 9.0 \\
& & $\checkmark$ & $\checkmark$ & & 46.8 & 67.7 & 34.2 & 11.7
& 10.0 & 19.2 & 5.8 & 0.8 & 8.9 & 19.4 & 10.8 & 7.1 \\
& & $\checkmark$ & & $\checkmark$ & 75.8 & 88.9 & 52.4 & 18.3
& 14.7 & 23.6 & 6.0 & 0.5 & 14.3 & 28.5 & 15.0 & 8.9 \\\hline
\multirow{4}{*}{{ Ours}}
& $\checkmark$ & & $\checkmark$& & \textbf{91.6} & \textbf{97.4} & \textbf{79.4} & \textbf{47.4}
& \textbf{51.5} & \textbf{65.3} & \textbf{31.1} & \textbf{6.0} & \textbf{41.9} & \textbf{62.0} & \textbf{41.7} & \textbf{30.4} \\
& $\checkmark$ & & & $\checkmark$ & 88.2 & 95.8 & 72.0 & 37.0
& 51.1 & 64.9 & 29.7 & 5.2 & 39.2 & 59.3 & 38.4 & 27.2 \\
& & $\checkmark$ & $\checkmark$ & & 82.5 & 93.6 & 67.5 & 36.0
& 50.5 & 64.7 & 30.5 & 5.7 & 35.3 & 55.4 & 35.5 & 25.4 \\
& & $\checkmark$ & & $\checkmark$ & 89.2 & 96.4 & 74.3 & 39.4
& 48.7 & 62.4 & 28.5 & 4.9 & 33.2 & 55.3 & 34.7 & 25.0\\\hline
\multicolumn{7}{l}{\textit{(b) Evaluation of pixelation.}}\\\hline
\multirow{4}{*}{{ Base}}
& $\checkmark$ & & $\checkmark$& & 87.4 & 96.1 & 73.4 & 39.5
& 25.0 & 38.3 & 15.3 & 2.6 & 28.5 & 50.0 & 31.5 & 22.9\\
& $\checkmark$ & & & $\checkmark$ & 75.3 & 91.1 & 53.6 & 17.2
& 16.3 & 29.0 & 8.7 & 1.0 & 17.7 & 36.5 & 17.6 & 9.1 \\
& & $\checkmark$ & $\checkmark$ & & 70.9 & 86.4 & 54.7 & 24.1
& 14.6 & 24.7 & 9.0 & 1.6 & 15.1 & 29.5 & 17.7 & 12.3 \\
& & $\checkmark$ & & $\checkmark$ & 64.3 & 83.5 & 43.4 & 13.0
& 10.6 & 19.7 & 5.7 & 0.7 & 8.8 & 20.3 & 9.9 & 5.3\\\hline
\multirow{4}{*}{\shortstack{Ours\\(w/o U)}}
& $\checkmark$ & & $\checkmark$& & 86.3 & 95.3 & 69.8 & 34.0
& 34.3 & 47.7 & 18.8 & 2.8 & 24.9 & 44.9 & 27.3 & 19.1 \\
& $\checkmark$ & & & $\checkmark$ & 80.1 & 92.6 & 57.4 & 18.6
& 26.2 & 40.4 & 12.3 & 1.3 & 24.2 & 44.8 & 23.3 & 13.6 \\
& & $\checkmark$ & $\checkmark$ & & 75.1 & 89.1 & 57.1 & 24.3
& 24.6 & 37.0 & 12.9 & 1.9 & 19.6 & 35.1 & 20.8 & 14.1 \\
& & $\checkmark $ & & $\checkmark$ & 73.2 & 89.1 & 49.7 & 15.7
& 20.5 & 32.4 & 9.3 & 1.0 & 12.1 & 25.9 & 13.2 & 7.5\\\hline
\multirow{4}{*}{{ Ours}}
& $\checkmark$ & & $\checkmark$& & \textbf{89.4} & \textbf{96.2} & \textbf{75.4} & \textbf{42.4}
& 48.6 & 63.2 & \textbf{29.8} & \textbf{5.9} & \textbf{38.8} & \textbf{64.3} & \textbf{42.4} & 31.6 \\
& $\checkmark$ & & & $\checkmark$ & 88.5 & 95.7 & 71.9 & 35.9
& \textbf{49.1} & \textbf{63.6} & 29.3 & 5.5 & 37.8 & 60.5 & 41.4 & \textbf{32.2} \\
& & $\checkmark$ & $\checkmark$ & & 86.8 & 94.9 & 72.3 & 39.1
& 48.5 & 63.6 & 29.8 & 5.7 & 30.4 & 50.6 & 30.6 & 20.7 \\
& & $\checkmark$ & & $\checkmark$ & 87.0 & 95.6 & 70.5 & 34.8
& 48.1 & 62.7 & 29.3 & 5.6 & 27.6 & 47.3 & 28.5 & 19.2 \\\hline
\multicolumn{7}{l}{\textit{(c) Evaluation of Gaussian noise.}}\\\hline
\multirow{4}{*}{{ Base}}
& $\checkmark$ & & $\checkmark$& & 75.9 & 89.4 & 51.5 & 16.5
& 24.0 & 34.2 & 11.1 & 1.3 & 14.0 & 27.4 & 15.7 & 9.9\\
& $\checkmark$ & & & $\checkmark$ & 50.8 & 70.3 & 30.2 & 5.8
& 20.4 & 32.4 & 8.6 & 0.9 & 9.1 & 19.4 & 9.9 & 5.4 \\
& & $\checkmark$ & $\checkmark$ & & 41.7 & 62.9 & 26.5 & 6.8
& 18.5 & 28.4 & 8.4 & 0.9 & 8.6 & 18.9 & 9.9 & 5.9 \\
& & $\checkmark$ & & $\checkmark$ & 68.7 & 85.9 & 43.2 & 11.9
& 18.2 & 27.6 & 7.8 & 0.8 & 8.1 & 18.7 & 10.2 & 5.9\\\hline
\multirow{4}{*}{\shortstack{Ours\\(w/o U)}}
& $\checkmark$ & & $\checkmark$& & 90.4 & 96.7 & 75.9 & 41.9
& 41.9 & 55.6 & 24.0 & 4.2 & 28.1 & 47.6 & 30.2 & 21.8 \\
& $\checkmark$ & & & $\checkmark$ & 77.5 & 89.7 & 57.3 & 21.0
& 36.0 & 51.5 & 18.4 & 2.7 & 30.4 & 51.1 & 30.2 & 20.0 \\
& & $\checkmark$ & $\checkmark$ & & 67.5 & 82.0 & 50.8 & 20.7
& 29.4 & 41.4 & 15.7 & 2.5 & 30.4 & 50.4 & 29.9 & 20.3 \\
& & $\checkmark $ & & $\checkmark$ & 84.4 & 93.6 & 62.8 & 25.8
& 31.3 & 43.4 & 15.0 & 1.9 & 31.9 & 53.2 & 32.4 & 22.8\\\hline
\multirow{4}{*}{{ Ours}}
& $\checkmark$ & & $\checkmark$& & \textbf{91.7} & 96.8 & \textbf{78.2} & \textbf{45.4} &46.9 & 60.7 & \textbf{27.6} & \textbf{4.9} &35.8 & 56.1 & 36.8 & 26.7 \\
& $\checkmark$ & & & $\checkmark$ & 83.5 & 92.8 & 68.0 & 33.6 &\textbf{48.1} & \textbf{62.7} & 27.3 & 4.4 &36.4 & 57.3 & 36.3 & 25.6 \\
& & $\checkmark$ & $\checkmark$ & & 83.8 & 92.7 & 67.3 & 32.4 &46.2 & 59.4 & 26.1 & 4.4 &37.9 & 57.1 & 36.9 & 25.5 \\
& & $\checkmark$ & & $\checkmark$ & 91.2 & \textbf{96.9} & 77.0 & 44.3 &46.0 & 59.4 & 26.0 & 4.4 &\textbf{41.9} & \textbf{62.4} & \textbf{41.6} & \textbf{30.4}\\\hline
Upper &$\checkmark$ & & $\checkmark$ & & 95.7 & 98.4 & 88.6 & 66.7 & 68.6 & 79.7 & 49.8 & 15.0 & 67.3 & 82.8 & 65.8 & 54.6 \\\hline
\end{tabular}
\end{table*}
\textbf{Effect of Supervision Upgradation.} As shown in Table~\ref{tab:reid}, compared to our model without upgradation, the model with upgradation generally performs better under all metrics and settings; e.g., the average Rank-1 increases by 19.4\% under the evaluation of blurring on the Market-1501 dataset. This shows that supervision upgradation indeed helps to improve Re-ID performance on our anonymized images. Fig.~\ref{fig:upgrade-results} shows that the anonymized images with upgradation retain a strong visual obfuscation effect. Unlike the images without upgradation, which follow the style of pixelation, the upgraded model obfuscates raw images in a different, learned style.
\textbf{Discussion.}
In our privacy-preserving system, given a frame of raw video, pedestrians can be anonymized based on the detected bounding boxes. These anonymized bounding boxes can further be used by police officers to recover the original raw images, and can be adopted as public datasets by researchers. However, given an anonymized frame, anonymized pedestrians cannot be detected by standard person detectors without retraining on such images; this would require further joint learning with pedestrian detection tasks.
\section{Conclusion}
This paper proposes a new reversible anonymization framework to explore the privacy-utility trade-off for pedestrian images from Re-ID perspective, which can reversibly generate full-body anonymous images with little performance degradation in Re-ID tasks. We further propose a progressive training strategy to improve the Re-ID performance. Extensive experiments further demonstrate the effectiveness of our method using anonymized pedestrian images for privacy protection, recovery, and person re-identification.
\section*{ACKNOWLEDGMENTS}
This work is supported by National Natural Science Foundation of China (62176188), Key Research and Development Program of Hubei Province (2021BAA187), Special Fund of Hubei Luojia Laboratory (220100015), Zhejiang Lab (NO.2022NF0AB01).
\section{Appendix}
\begin{figure*}[t]
\centering
\includegraphics[ width=17cm]{fig/tsne.pdf}
\caption{\small{Comparison of the feature distribution extracted by (a) non-protective Re-ID model, (b) the baseline blurring-based model, (c) ours (w/o U), and (d) ours (w/ U) on Market1501 dataset. Features with the same color are from images of the same person.}}
\label{fig:tsne}
\end{figure*}
\subsection*{A. Feature Distribution Comparison}
In order to show more intuitively that our anonymized images are suitable as the public dataset for Re-ID research, we visualize the feature distribution of both raw and protected images in Fig.~\ref{fig:tsne}. The features are produced from the same batch of test samples and are extracted from the current non-protective Re-ID model, the baseline blurring-based model, and our blurring-based Re-ID model with/without supervision upgradation. If the protected feature distribution can be clustered distinctly by classes, and is similar to the original raw feature distribution, then the protected images should be suitable as a public Re-ID dataset.
\textit{a) Non-protective Model}: Fig.~\ref{fig:tsne}(a) illustrates the feature distribution extracted from the current non-protective Re-ID model AGW \cite{ye2021deep}, which is trained only on raw images and tested on both raw and blurred images. It can be clearly seen that raw features are clustered by classes while protected features of different classes are mostly mixed together, indicating that a Re-ID model trained only on raw images performs poorly when the test set contains protected images.
\textit{b) Baseline}: The baseline Re-ID model is trained and tested on paired raw and blurred images. As illustrated in Fig.~\ref{fig:tsne}(b), compared to non-protective model, the protected features are clustered better and the distance between raw and protected feature distribution is narrowed. This indicates that the Re-ID performance is improved when the test set consists of protected images. However, the blurred images are not able to replace raw images as a Re-ID dataset due to the large deviation of feature distribution after blurring.
\textit{c) Ours (w/o U)}: Fig.~\ref{fig:tsne}(c) shows the feature distribution extracted from our model that is jointly trained without supervision upgradation and tested on raw and anonymized images. Compared to the baseline, the raw and protected feature distributions are pulled significantly closer together. However, there is still an observable misalignment between the two feature distributions.
\textit{d) Ours (w/ U)}: To further narrow the distance between these two feature distributions, we propose a training strategy, i.e., progressive supervision upgradation. As shown in Fig.~\ref{fig:tsne}(d), compared to the baseline and our method w/o U, our method with supervision upgradation eliminates the deviation between the two feature distributions while retaining the distinction between classes. It forms clusters of quality comparable to the raw feature distribution from the non-protective model. The results indicate that the model trained on raw and our anonymized images performs well under the original, protected, and crossed settings, and that our anonymized images are suitable as a public Re-ID dataset.
\subsection*{B. Results of Privacy Protection and Recovery}
In Fig.~\ref{fig:qualitative} and Fig.~\ref{fig:recovery} of the main paper, we separately showed qualitative results of privacy protection and recovery. In addition, in Fig.~\ref{fig:upgrade-results} of the main paper, we compared results with and without supervision upgradation. Here, we combine the three qualitative experiments and show more images in Fig.~\ref{fig:results}. It can be seen that our anonymized images achieve a good visual obfuscation effect and our recovered images are visually similar to the raw images.
\begin{figure*}[t]
\centering
\includegraphics[ width=13cm]{fig/results.pdf}
\caption{\small{Qualitative comparison between baseline and our method. (a)/(b) denote raw/recovered images; (c) indicates blurred images; (d)/(e) represent anonymized images guided by blurred images without/with supervision upgradation; (f)/(g)/(h) are similar to (c)/(d)/(e) with blurred images being replaced by pixelated images. }}
\label{fig:results}
\end{figure*}
\bibliographystyle{ACM-Reference-Format}
The Hubble constant $H_0$ is one of the fundamental parameters in
cosmology, but there is a tension\footnote{\url{https://github.com/shsuyu/H0LiCOW-public/tree/master/H0_tension_plots} \citep{bonvin_2020}}
of $>$4$\sigma$ \citep{Verde:2019ivm} between the early Universe measurements inferred
from the cosmic microwave background \citep{Planck:2018vks} and late
Universe measurements from the SH0ES project
\citep[e.g.,][]{Riess:2016jrr,Riess:2018byc,Riess:2019cxk,Riess:2020fzl}, although
results from \cite{Freedman:2019jwv,Freedman_2020} or
\cite{Khetan:2020hmh} are consistent with both.
Lensing time-delay cosmography, as an independent probe, can address
this tension by measuring $H_0$ in a single step. This method, first
envisaged by \cite{Refsdal:1964}, combines the measured time delay
from the multiple images of a variable source, with lens mass modeling
and line-of-sight mass structure to infer $H_0$. The H0LiCOW
\citep{Suyu:2016qxx} and COSMOGRAIL \citep{2017Courbin}
collaborations, together with the SHARP collaboration
\citep{Chen:2019ejq}, have applied this method successfully to lensed
quasar systems
\citep[e.g.,][]{Bonvin:2018dcc,Birrer:2018vtm,2019MNRAS.490..613S,Rusu:2019xrq,Chen:2019ejq}.
The latest $H_0$ measurement from H0LiCOW using physically-motivated
mass models is consistent with measurements from SH0ES but in
$>$3$\sigma$ tension with results from the cosmic microwave background
\citep{Wong:2019kwg}. The STRIDES collaboration has further analyzed a
new lensed quasar system \citep{Shajib:2019toy}. The newly formed
TDCOSMO organization \citep{Millon:2019slk}, consisting of H0LiCOW,
COSMOGRAIL, SHARP and STRIDES, has recently considered a one-parameter
extension to the mass model to allow for the mass-sheet transformation
\citep[e.g.,][]{Falco:1985,Schneider:2013a, Kochanek:2020}. \citet{Birrer+2020} used
the stellar kinematics to constrain this single parameter, resulting
in an $H_0$ value with a larger uncertainty, which is statistically consistent with the previous results using physically-motivated mass models.
In addition to strongly lensed quasars, supernovae (SNe) lensed into
multiple images are promising as a cosmological probe and are in fact
the sources envisioned by \citet{Refsdal:1964}. Even though these
systems are much rarer in comparison to quasars, they have the
advantage that SNe fade away over time, facilitating measurements of
stellar kinematics of the lens galaxy
\citep{Barnabe2011,2017:Yildirim,Shajib:2018,Yildirim:2019vlv} and surface brightness distributions of the lensed-SN host galaxy \citep{DingEtal21} to
break model degeneracies, e.g., the mass-sheet transformation
\citep{Falco:1985,Schneider:2013wga}. Furthermore, strongly
lensed type Ia supernovae (LSNe Ia) are promising given that they are
standardizable candles and therefore provide an additional way to
break model degeneracies, for lens systems where lensing
magnifications are well characterized
\citep{2003MNRAS.338L..25O,Foxley-Marrable:2018dzu}.
So far only three LSNe with resolved multiple images have been observed,
namely SN ``Refsdal" \citep{Kelly:2015xvu,Kelly:2015vjq}, a
core-collapse SN at redshift $z = 1.49$, the LSN Ia iPTF16geu
\citep{Goobar:2016uuf} at $z=0.409$, and AT2016jka \citep{Rodney:2021keu}
at $z = 1.95$, which is most likely a LSN Ia. Nonetheless, with the upcoming Rubin
Observatory Legacy Survey of Space and Time
\citep[LSST;][]{Ivezic:2008fe}, we expect to find $\sim$%
$10^3$ LSNe, of
which 500 to 900 are type Ia SNe
\citep{Quimby:2014,GoldsteinNugent:2017,Goldstein:2017bny,Wojtak:2019hsc}.
Considering only LSNe Ia with spatially resolved images and peak
brightness\footnote{of the fainter image for a double system; for a
quad system, the peak brightness of the third brightest image is considered.} brighter than 22.6 in the
$i$ band, as in the \cite{Oguri:2010} lens catalog (OM10), leads to 40
to 100 LSNe Ia depending on the LSST observing strategy, of which 10
to 25 systems yield accurate time-delay measurements
\citep{Huber:2019ljb}.
To measure time delays between multiple images of LSNe Ia,
\cite{Huber:2019ljb} used the free-knot spline estimator from Python
Curve Shifting \citep[\texttt{PyCS},][]{2013:Tewesb,Bonvin:2015jia}
and therefore the characteristic light curve shape of a SN Ia is not
taken into account. Furthermore, they do not explicitly model the
variability due to microlensing
\citep{Chang_Refsdal_1979,Irwin:1989,Wambsganss:2006nj,2016aagl.book.....M},
an effect where each SN image is separately influenced by lensing
effects from stars in the lens, leading to additional magnification
and distortion of light curves
\citep{Yahalomi:2017ihe,Goldstein:2017bny,Foxley-Marrable:2018dzu,Huber:2019ljb,
PierelRodney+2019,Huber:2020dxc}.
While PyCS has the advantage of being flexible without making assumptions on the light-curve forms, model-based methods are complementary in providing additional information to measure the time delays more precisely.
One such model-based time-delay measurement method is implemented by
\cite{PierelRodney+2019} where template SN light curves are
used. Even though microlensing is taken into account in this work, it
is done in the same way for each filter. A more realistic microlensing
treatment for SNe Ia, with variations in the SN intensity distribution across wavelengths, was first introduced by \cite{Goldstein:2017bny}
using specific intensity profiles from the theoretical W7
\citep{1984:Nomoto} model calculated via the radiative transfer code
\texttt{SEDONA} \citep{Kasen:2006ce}. \cite{Huber:2019ljb,
Huber:2020dxc} have built upon this study, but using the radiative
transfer code \texttt{ARTIS} \citep{Kromer:2009ce} to calculate
synthetic observables for up to four theoretical supernova explosion
models. In this work, we follow the approach of \cite{Huber:2019ljb,
Huber:2020dxc} to calculate realistic microlensed light curves for
LSNe Ia to train a fully connected neural network (FCNN) and a Random Forest (RF) model
for measuring time delays. In addition, this method allows us to
identify the dominant sources of uncertainty and to assess different follow-up strategies.
This paper is organized as follows. In Section \ref{sec: simulated
light curves for LSNe Ia}, we present our calculation of mock light
curves including microlensing and observational uncertainties. The
creation of our training, validation and test set is explained on an
example mock observation in Section \ref{sec: Example data used for machine learning},
followed by an introduction to the machine learning (ML)
techniques used in this work in Section \ref{sec: Machine learning
techniques}. We apply these methods to the example mock
observation in Section \ref{sec: Machine learning on example
mock-observation} where we also test transfer learning.
In Section \ref{sec: Microlensing, observational noise and choice of filters}, based on our example mock observation, we investigate potential filters for follow-up observations and the impact of microlensing and noise on the uncertainty, before we consider further mock observations in Section \ref{sec: Machine learning on further
mock-observation}.
We discuss our results in Section \ref{sec: Discussion}
before we summarize in Section \ref{sec: Summary}.
Magnitudes in this paper are in the AB system.
\section{Simulated light curves for LSNe Ia}
\label{sec: simulated light curves for LSNe Ia}
The goal is to develop software that takes photometric light-curve observations of a
LSN Ia as input and predicts as an output the time delay between the
different images. For a machine-learning approach, we need to
simulate a realistic data set where we account for different sources
of uncertainties. We therefore specify in Section \ref{sec:
microlensing} our calculation of microlensing, and we explain in Section
\ref{sec: observational uncertainty} our determination of
observational uncertainties including estimates of the moon phase.
\subsection{Microlensing and SNe Ia models}
\label{sec: microlensing}
To calculate light curves for a LSN Ia with microlensing, we combine
magnifications maps from {\tt GERLUMPH} \citep{Vernardos:2015wta} with
theoretical SN Ia models, where synthetic observables have been
calculated with \texttt{ARTIS} \citep{Kromer:2009ce}. The basic idea
is to place a SN in a magnification map and solve for the observed flux:
\begin{equation}
F_{\lambda,\mathrm{o}}(t)=\frac{1}{{D_\mathrm{lum}}^2(1+z_{\rm s})}\int \mathrm{d} x \int \mathrm{d} y \, I_{\lambda,\mathrm{e}}(t,p(x,y)) \, \mu(x,y),
\label{eq: microlensed flux}
\end{equation}
where ${D_\mathrm{lum}}$ is the luminosity distance to the source, $z_{\rm s}$ is
the redshift of the source, $\mu(x,y)$ is the magnification factor
depending on the positions $(x,y)$ in the magnification map, and
$I_{\lambda,\mathrm{e}}(t,p)$ is the emitted specific intensity at the source
plane as a function of wavelength $\lambda$, time since explosion $t$
and impact parameter $p$, i.e. the projected distance from the ejecta
center, where we assume spherical symmetry similar to \cite{Huber:2019ljb, Huber:2020dxc}. Lensing magnification maps depend on three main parameters, namely the
convergence $\kappa$, the shear $\gamma$ and the smooth matter
fraction $s=1-\kappa_*/\kappa$, where $\kappa_*$ is the convergence of
the stellar component. Further, our maps have a resolution of 20000
$\times$ 20000 pixels with a total size of 20 $R_\mathrm{Ein}$ $\times$ 20 $
R_\mathrm{Ein}$, where the Einstein Radius $R_\mathrm{Ein}$ is a characteristic size of the map
which depends on the source redshift $z_{\rm s}$, lens redshift
$z_{\rm d}$, and masses of the microlenses. As in \cite{Huber:2020dxc}, we follow \citet{ChanEtal21} for generating the microlensing magnification maps and assume a Salpeter initial mass
function (IMF) with a mean mass of the microlenses of $0.35 M_\odot$; the specifics of the assumed IMF
have negligible impact on our studies. From the flux we obtain the AB-magnitudes via
\begin{equation}
\scalebox{1.13}{$
m_\mathrm{AB,X}(t_i) = -2.5 \log_{10} \left(\frac{\int \mathrm{d}\lambda \, \lambda S_\mathrm{X}(\lambda) \, F_{\lambda,\mathrm{o}}(t) }{\int \mathrm{d} \lambda \, S_\mathrm{X}(\lambda) \, c \, / \lambda} \times \si{\square\mathrm{cm}\over\mathrm{erg}} \right) - 48.6$}
\label{eq: microlensed light curves for ab magnitudes}
\end{equation}
\citep{Bessel:2012}, where $c$ is the speed of light and
$S_\mathrm{X}(\lambda)$ is the transmission function for the filter X
(that can be \textit{u}, \textit{g}, \textit{r}, \textit{i},
\textit{z}, \textit{y}, \textit{J}, or \textit{H} in this work). This
calculation is discussed in much greater detail by
\cite{Huber:2019ljb}, which was initially motivated by the work of
\cite{Goldstein:2017bny}.
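As a concrete illustration of Equations (\ref{eq: microlensed flux}) and (\ref{eq: microlensed light curves for ab magnitudes}), the following Python sketch evaluates both on toy inputs (a uniform-intensity disc, a flat magnification map, and a flat filter). The function names and the CGS unit convention ($\lambda$ in cm, $F_\lambda$ in $\mathrm{erg\,s^{-1}\,cm^{-3}}$, which makes the $\mathrm{cm^2/erg}$ factor implicit) are our assumptions; the actual pipeline uses \texttt{ARTIS} specific intensities and \texttt{GERLUMPH} magnification maps.

```python
import numpy as np

# Toy discretization of the microlensed-flux integral and the AB-magnitude
# formula above. All inputs here are illustrative placeholders, not the
# actual ARTIS / GERLUMPH data products.

def microlensed_flux(I_e, mu, r_grid, xy_grid, D_lum, z_s):
    """Observed flux at one time step and wavelength (flux integral above).

    I_e is the emitted specific intensity I(p) on the radial grid r_grid
    (spherical symmetry); mu is the 2D magnification map on xy_grid x xy_grid.
    """
    x, y = np.meshgrid(xy_grid, xy_grid, indexing="ij")
    p = np.hypot(x, y)                            # impact parameter per pixel
    I_map = np.interp(p, r_grid, I_e, right=0.0)  # project 1D profile onto map
    dA = (xy_grid[1] - xy_grid[0]) ** 2           # pixel area dx dy
    return np.sum(I_map * mu) * dA / (D_lum**2 * (1.0 + z_s))

def ab_magnitude(lam, F_lambda, S_X):
    """AB magnitude for filter transmission S_X (magnitude formula above)."""
    c = 2.998e10                                  # speed of light [cm/s]
    num = np.sum(lam * S_X * F_lambda)            # uniform grid: dlam cancels
    den = np.sum(S_X * c / lam)
    return -2.5 * np.log10(num / den) - 48.6
```

A quick sanity check of this sketch: a source with constant $F_\nu = 3631\,$Jy returns $m_\mathrm{AB} \approx 0$ for any filter, as required by the AB zero point.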
The calculation of microlensing of LSNe Ia requires a theoretical
model for the SN Ia which predicts the specific intensity. To increase
the variety of different light curve shapes we use four SNe Ia
models computed with \texttt{ARTIS} \citep{Kromer:2009ce}. These models have also been used in
\cite{Suyu:2020opl} and \cite{Huber:2020dxc}, and are briefly
summarized in the following: i) the parameterized 1D deflagration model
W7 \citep{1984:Nomoto} with a Chandrasekhar mass ($M_\mathrm{Ch}$) carbon-oxygen (CO) white
dwarf (WD), ii) the delayed detonation model N100
\citep{Seitenzahl:2013} of a $M_\mathrm{Ch}$ CO WD, iii) a
sub-Chandrasekhar (sub-Ch) detonation model of a CO
WD with $1.06 M_\odot$ \citep{Sim:2010}, and iv) a merger model of
two CO WDs of $0.9 M_\odot$ and $1.1 M_\odot$
\citep{Pakmor:2012}.
Figure \ref{fig: SN Ia models vs SNEMO} shows the light curves for the
four SN Ia models in comparison to the empirical \texttt{SNEMO15} model
\citep{Saunders:2018rjn}. The light curves are normalized by the peak.
Magnitude
differences between SN Ia models are within 1 magnitude. To produce
the median and $2\sigma$ (97.5th percentile $-$ 2.5th percentile)
light curves of \texttt{SNEMO15}, we consider all 171 SNe Ia from
\cite{Saunders:2018rjn}. Data of the empirical models covers only $
\SI{3305}{\angstrom}$ to $\SI{8586}{\angstrom}$ and therefore the
\textit{u} band, starting at $\SI{3200}{\angstrom}$, is only an
approximation, but an accurate one since the filter transmission in
the missing region is low. The rest-frame $u$ and $g$ cover
approximately the observed $r$, $i$, and $z$ bands for a system with redshift of
0.76, which we investigate in Sections \ref{sec: Example data used for
machine learning} and \ref{sec: Machine learning on example
mock-observation}. Light curves from theoretical and empirical models
show a similar overall evolution, although there are considerable differences in
the shapes. The variety of theoretical models is helpful to
encapsulate the intrinsic variation of real SNe Ia.
In building our training, validation and test
sets for our machine-learning methods, we also normalize the light curves after the calculation of the observational noise, which we describe next.
\begin{figure}
\centering
\subfigure{\includegraphics[width=0.45\textwidth]{figures/Light_curves_snemo15_filter_u_norm_True}}
\subfigure{\includegraphics[width=0.45\textwidth]{figures/Light_curves_snemo15_filter_g_norm_True}}
\caption{Normalized LSST $u$ and $g$ band rest-frame light curves for four theoretical SN Ia models (merger, N100, sub-Ch, and W7) in comparison to the empirical model \texttt{SNEMO15}.}
\label{fig: SN Ia models vs SNEMO}
\end{figure}
\subsection{Observational uncertainty and the moon phase}
\label{sec: observational uncertainty}
Magnitudes for filter X including observational uncertainties can be calculated via
\begin{equation}
m_\mathrm{data,X} = m_{\mathrm{AB,X}} + r_\mathrm{norm} \sigma_{1,\mathrm{X}},
\label{eq:noise realization random mag including error LSST science book}
\end{equation}
where $m_{\mathrm{AB,X}}$ is the intrinsic magnitude without observational noise, $r_\mathrm{norm}$ is a random number drawn from a Gaussian distribution with zero mean and unit standard deviation, and $\sigma_{1,\mathrm{X}}$ is a
quantity which depends mainly on $m_{\mathrm{AB,X}}$ relative to the
$5\sigma$ depth $m_5$. This calculation is based on
\cite{2009:LSSTscience} and for more details see also Appendix
\ref{sec:Appendix LSST uncertainty}.
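The noise realization of Equation (\ref{eq:noise realization random mag including error LSST science book}) can be sketched as follows. Here $\sigma_{1,\mathrm{X}}$ follows the standard LSST point-source error model; the exact variant and parameter values used in this work are given in Appendix \ref{sec:Appendix LSST uncertainty}, so the defaults below ($\gamma = 0.039$, $\sigma_\mathrm{sys} = 0.005$) are illustrative.

```python
import numpy as np

# Sketch of the noise realization: m_data = m_AB + r_norm * sigma_1.
# sigma_1 uses the standard LSST point-source error model with illustrative
# parameter values; the exact form used in the paper is in its Appendix.

def sigma_1(m_AB, m_5, gamma=0.039, sigma_sys=0.005):
    """Photometric 1-sigma uncertainty relative to the 5-sigma depth m_5."""
    x = 10.0 ** (0.4 * (m_AB - m_5))          # flux ratio w.r.t. 5-sigma depth
    sigma_rand_sq = (0.04 - gamma) * x + gamma * x**2
    return np.sqrt(sigma_sys**2 + sigma_rand_sq)

def noisy_magnitude(m_AB, m_5, rng):
    """One noise realization of the observed magnitude m_data."""
    return m_AB + rng.normal() * sigma_1(m_AB, m_5)
```

By construction, this model gives $\sigma_1 \approx 0.2\,$mag for a point source exactly at the $5\sigma$ depth, and the uncertainty grows rapidly for fainter sources.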
In order to calculate $m_\mathrm{data,X}$, the $5\sigma$ depth of the
corresponding filter X is needed. In this work we consider 8 filters, namely the
six LSST filters \textit{u, g, r, i, z,} and \textit{y}, as well as
two infrared bands \textit{J} and \textit{H}. To estimate the moon
phase dependency of filter X, we use the exposure time calculator (ETC)
of the European Southern Observatory (ESO) with a flat template spectrum. For \textit{ugriz} we use the
ETC of
OmegaCAM\footnote{\url{https://www.eso.org/observing/etc/bin/gen/form?INS.NAME=OMEGACAM+INS.MODE=imaging}},
and for \textit{yJH} we use the ETC of
VIRCAM\footnote{\url{https://www.eso.org/observing/etc/bin/gen/form?INS.NAME=VIRCAM+INS.MODE=imaging}},
where we assume an airmass of 1.2. Further, we use the typical fixed
sky model parameters with seeing $\le$%
$1''$ as provided by the ETC; by testing other sky positions, we found
this to yield a conservative estimate of the $5\sigma$ depth. We
investigate one lunar cycle (25 August 2020 to 24
September 2020) to obtain relative changes of the $5\sigma$ depth with
time, and match these relative changes to the typical mean of the single-epoch LSST-like
$5\sigma$ depth plus one magnitude, given by (23.3+1, 24.7+1, 24.3+1,
23.7+1, 22.8+1, and 22.0+1) for (\textit{u, g, r, i, z,} and \textit{y}), respectively, assuming a fixed exposure time.
These mean values
take into account that in typical LSST observing strategies, redder
bands are preferred around full moon, while bluer bands are used more
around new moon. Going one magnitude deeper than the LSST $5\sigma$
depth provides a better quality of photometric measurements for time-delay measurements, and is feasible even
for a 2\,m telescope \citep{Huber:2019ljb}. The absolute values for
\textit{J} and \textit{H} band are set by the ETC of VIRCAM in
comparison to the \textit{y} band.
The results for one lunar cycle are
shown in Figure \ref{fig: moon phase}, where we find full moon around
day 8 and new moon around day 23. As expected, bluer bands are much
more influenced by the moon phase in comparison to redder bands. As we
are typically interested in getting LSNe Ia with time delays greater
than 20 days \citep{Huber:2019ljb}, it is important to take the moon
phase into account.
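The matching of the ETC-derived relative depth variation to the target mean depths quoted above can be sketched as follows; the helper name and the placeholder relative-change curve are our illustrative choices, and in practice the relative curve comes from the ESO ETCs.

```python
import numpy as np

# Sketch of re-centering the ETC-derived relative 5-sigma-depth variation
# over one lunar cycle onto the target mean depths (typical LSST single-epoch
# mean + 1 mag, as quoted in the text, for the six LSST bands).

MEAN_M5_PLUS_1 = {"u": 24.3, "g": 25.7, "r": 25.3, "i": 24.7, "z": 23.8, "y": 23.0}

def m5_over_cycle(band, rel_change):
    """Absolute 5-sigma depth per night: the relative ETC curve, shifted so
    that its mean matches the target mean depth of the given band."""
    rel = np.asarray(rel_change, dtype=float)
    return MEAN_M5_PLUS_1[band] + (rel - rel.mean())
```

This keeps the moon-phase dependence from the ETC while anchoring each band to its target mean depth.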
Furthermore, we note that our approach on the $5\sigma$ depth assumes
an isolated point source, where in reality we have also contributions
from the host and lens light, which are the lowest for faint hosts and
large image separations. Even though these are the systems we are
interested in targeting, our uncertainties are on the optimistic side. The
construction of light curves in the presence of the lens and host is
deferred to future work, although LSNe have the advantage that the SNe
fade away and afterwards an observation of the lensing system without
the SN can be taken and used as a template for
subtraction.
\begin{figure}
\includegraphics[width=0.48\textwidth]{figures/moon_phase.png}
\caption{Estimated $5\sigma$ depth for 8 different filters \textit{u, g, r, i, z, y, J,} and \textit{H} accounting for the moon phase. Day 0 corresponds to the first quarter in the moon phase. Full moon is around day 8 and new moon is on day 23.}
\label{fig: moon phase}
\end{figure}
\section{Example mock-observation and data set for machine learning}
\label{sec: Example data used for machine learning}
In this section, we present a specific mock observation as an example,
to explain the data structure required for our machine-learning
approaches.
\subsection{Mock observation}
\label{sec: mock observation 187}
As an example, we take a LSN Ia double system of the OM10
catalog \citep{Oguri:2010}, which is a mock lens catalog for strongly
lensed quasars and supernovae. The parameters of the mock LSN Ia are
given in Table \ref{tab: Example double LSNe Ia}, where we have picked
a system with a source redshift close to the median source redshift
$z_{\rm s} = 0.77$ of LSNe Ia in OM10 \citep{Huber:2020dxc}. The
corresponding mock light curves are produced assuming the W7 model, where the $i$-band is shown in Figure \ref{fig: example light
curve 187 system} and all bands ($ugrizyJH$) together are shown in Appendix
\ref{sec:Appendix further bands of mock observation}. To calculate magnitudes with observational noise we use Equation (\ref{eq:noise realization random mag
including error LSST science book}). For the moon phase we assume a configuration where the $i$ band light curve peaks around new moon.
Further moon-phase configurations will be discussed in Section \ref{sec: moon phases}.
To avoid unrealistically noisy data points $m_\mathrm{data,X}$ for our mock system in Figure \ref{fig: example light curve 187 system}, we only take points $m_\mathrm{AB,X}$ brighter than $m_5 + 2\,\mathrm{mag}$ into account before adding noise on top. Furthermore, we assume a two-day cadence with a few random gaps.
\begin{table}
\begin{tabular}{ccccc}
$z_{\rm s}$ & $z_{\rm d}$ & image 1 $(\kappa,\gamma)$ & image 2 $(\kappa,\gamma)$ & time delay [days] \\
\midrule
0.76 & 0.252 & (0.251, 0.275) & (0.825, 0.815) & 32.3
\end{tabular}
\caption{Mock system of the OM10 catalog for generating mock light curves to train our
ML techniques. Further, we assume $s=0.6$, similar to \cite{Huber:2020dxc}. The image separation for this double system is 1.7 arcsec and is therefore typically resolvable with ground-based telescopes under most seeing conditions.}
\label{tab: Example double LSNe Ia}
\end{table}
\begin{figure}
\includegraphics[width=0.48\textwidth]{figures/observed_light_curve_lnc_187_cad_2_magCut_2.0_uncLSSTplus_1.0_startday_0_moonPhase_new_filter_i_.png}\caption{Simulated
observation for which ML models will be trained to measure
the time delay. The gray dashed curve marks the 5$\sigma$
point-source depth accounting for the moon phase.}
\label{fig: example light curve 187 system}
\end{figure}
\subsection{Data set for machine learning}
\label{sec: Data set for machine learning}
Our data contain measurements of light curves in one or more filters of the two SN images.
The input data for our machine-learning approaches are ordered, such that for a
given filter, all magnitude values from image 1 are listed (first
observed to last observed), followed by all magnitude values from
image 2. This structure is illustrated in the following definition and
will be referred to as a single sample:
\begin{equation}
m_{i1, 1} \, \, ... \, m_{i1, N_{i1}} \, m_{i2, 1} \, \, ... \, m_{i2, N_{i2}} \, \, \equiv \, \, d_{1} \, d_{2} ... \, d_{N_\mathrm{d}}
\label{eq: data structure}
\end{equation}
for an example of a double LSN Ia with observations in the \textit{i}
band. There are $N_{i1}$ photometric measurements in the light curve for SN image 1, and $N_{i2}$ photometric measurements for SN image 2. The magnitude value of the second data point in $i$ band from
the first image in Figure \ref{fig: example light curve 187 system} is
denoted as $m_{i1, 2}$ and $m_{i1, N_{i1}}$ represents the last data point. For simplification, we define $N_{\rm d} = N_{i1} + N_{i2}$, and $d_j$ as the $j$-th magnitude value in Equation (\ref{eq: data structure}).
If multiple filters are available, then a ML model
can be trained per band, or multiple bands can be used for a single
ML model, which will be explored in Section \ref{sec: filters used for
training}.
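This ordering can be sketched as follows (a minimal illustration; the function name and example values are ours, not part of the pipeline):

```python
import numpy as np

def build_sample(m_image1, m_image2):
    """Concatenate the magnitude values of SN image 1 and SN image 2
    (each ordered from first to last observation) into one sample
    vector, following the data-structure equation."""
    return np.concatenate([np.asarray(m_image1), np.asarray(m_image2)])

# Hypothetical i-band magnitudes for a double LSN Ia:
sample = build_sample([23.1, 22.8, 22.5],        # N_i1 = 3 points
                      [23.4, 23.0, 22.7, 22.6])  # N_i2 = 4 points
assert sample.shape == (7,)  # N_d = N_i1 + N_i2
```

If several filters are used for a single ML model, the per-filter vectors would simply be concatenated in the same way.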
We will introduce our fully connected neural network (FCNN) and Random Forest (RF) methods in detail in Section \ref{sec: Machine learning techniques}; in the remainder of this section, we describe the data set required for these two approaches. It is important to note that both methods always require
the same input structure as defined in Equation (\ref{eq: data structure}), with exactly the same number of data points\footnote{To avoid unrealistically noisy data points, we limit the maximum amount of noise which is allowed, as described in Appendix \ref{sec:Appendix LSST uncertainty}.}.
From this input, we can then build a FCNN or a RF which
predicts the time delay. As additional
information, the $5\sigma$ depth is required for each data
point, to create noise in a similar way as in our mock observation.
Furthermore, microlensing uncertainties are taken into account
by using the $\kappa, \gamma,$ and $s$ values of each LSN Ia image.
The weakness of this approach is that we need to train a model
individually for each system with its given set of observations, but the advantage is that we can
train very specifically for the observation pattern, noise
and microlensing uncertainties, such that we expect an accurate result
with a realistic account of the uncertainties. Given that the data
production and training of such a system take less than a week and
multiple systems can be trained in parallel, this approach is easily
able to measure the delays of the expected 40 to 100 potentially promising LSNe
in the 10 year LSST survey
\citep{Huber:2019ljb}.
Our ML approaches
require the same number of data points in each sample. We therefore produce our data
set, for training, validation and testing of the ML models,
such that the number of data points is always the same as in our mock
observation in Figure \ref{fig: example light curve 187 system}. We calculate the light curves for the SN images via Equations (\ref{eq: microlensed flux}) and
(\ref{eq: microlensed light curves for ab magnitudes})
where we use random microlensing map positions. We then shift
the light curves for each SN image randomly in time around a first
estimate of the delay. In our example, we use the true observed time
values of the mock observation $t_{\rm obs,1}= 0.0 \, \mathrm{d}$ and $t_{\rm obs,2}= 32.3 \, \mathrm{d}$ as the first estimate for the SN images 1 and 2, respectively. For a real
system, we do not know these time values exactly and therefore probe a range of values around these first estimates in our training, validation and test sets. In particular, for each sample in the training set, we pick random values between
$t_\mathrm{obs} - 10 \, \mathrm{d}$ and $t_\mathrm{obs} + 12 \,
\mathrm{d}$ as the ``ground truth'' (input true time value) for that specific sample. Different samples in the training set have different ground truth values.
We also tested more asymmetric ranges with
$t_\mathrm{obs} - 10 \, \mathrm{d}$ and $t_\mathrm{obs} +
t_\mathrm{est}$, where $t_\mathrm{est} = 16, 18, 22, 30 \,
\mathrm{d}$, and find results in very good agreement, with no
dependency on asymmetries in the initial estimate.
Data points are then created at the same epochs as the initial
observation. Using the $5 \sigma$ depth of each data point of our
observation, we calculate for each random microlensing position
10 random noise realizations following Equation (\ref{eq:noise
realization random mag including error LSST science book}). Since we
are not interested in the overall magnitude values we normalize the resulting light curve
by its maximum. Our total data set used for training has a size of
400000 samples
coming from 4 theoretical SN Ia models, 10000 microlensing map positions and 10 noise
realizations. Each sample has the data structure of Equation (\ref{eq: data structure}). For the validation and test sets, we calculate two
additional microlensing maps with the same $\kappa$, $\gamma$ and $s$
values as the training set, but with different microlensing patterns from random realizations of the stars/microlenses. This provides ``clean'' validation and test sets that the ML methods have not encountered during training
in order to fairly assess the performance of the methods.
Our validation and test
sets each have a size of 40000 samples, from 4 models, 1000 microlensing map
positions and 10 noise realizations.
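The random draws just described can be sketched as follows; all names are ours, and the per-point uncertainty is assumed to come from Equation (\ref{eq:noise realization random mag including error LSST science book}):

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_ground_truth(t_obs):
    """Random 'ground truth' time value for one training sample,
    uniform in [t_obs - 10 d, t_obs + 12 d] around the first
    estimate t_obs."""
    return rng.uniform(t_obs - 10.0, t_obs + 12.0)

def add_noise(m_true, sigma):
    """One Gaussian noise realization, with the per-point
    uncertainty sigma derived from the 5-sigma depth."""
    m_true = np.asarray(m_true, dtype=float)
    return m_true + rng.normal(0.0, sigma, size=m_true.shape)

def normalize(mags):
    """Normalize the light curve by its maximum magnitude value,
    since we are not interested in the overall magnitudes."""
    mags = np.asarray(mags, dtype=float)
    return mags / mags.max()

# Image 2 of our mock system, with first estimate t_obs,2 = 32.3 d:
t2 = draw_ground_truth(32.3)
assert 22.3 <= t2 <= 44.3
```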
Two examples of our training data are shown in Figure \ref{fig: data
to train NN} as open circles. The first panel (sample 5) shows for the first SN image a
good match to the observation (solid circles). The simulated training data are
therefore almost the same as the observation. Differences in fainter
regions (higher normalized magnitudes) come from observational
noise. For the second image the time value $t_2$ of the simulated
training data is larger than the true value $t_{\rm obs,2}$ and therefore the peak is
observed a few data points later. The general idea of providing data
in such a way is that the ML model learns to translate the
location of the peak region into the time value $t$. The difference
between the two time values from the first and second image is then
the time delay we are interested in. The second panel (sample 33) in
Figure \ref{fig: data to train NN} is a nice example illustrating why
predicting the time delay directly does not work well in this
approach. We see that both simulated images for training are offset to
the right by almost the same amount. This would in the end lead to a time delay very similar to that of the initial mock observation,
even though the input time values are very different from those of the initial observation.
Our described approach can be seen as a fitting process, which
has the weakness that if the models used for training are very
different from our real observation, our approach will
fail. From Figure \ref{fig: SN Ia models vs SNEMO} we see that the
four SN Ia models predict different shapes of the light curves and
locations of peaks. Therefore, to compensate for different peak locations,
we randomly shift the four SN Ia models in time by $-5$ to 5
days. Furthermore, to make the noise level more random and to compensate
for different peak brightnesses, we also vary the overall magnitude values by
$-0.4$ to 0.4 mag. The random shifts in time and magnitude are the same for
a single sample, and therefore this approach creates basically a new
model with the same light curve shape, but slightly different peak
location and brightness. Since the ML models do not know the
actual values of the random shifts in time or magnitude the location
of the peak for a certain SN Ia model is smeared out.
Therefore, this approach introduces a much
larger variety in the SN Ia models and Appendix \ref{sec:Appendix Bias
training just on 3 models} shows that this helps to generalize to
light curves from sources that were not used in training the ML model. We also tested
random multiplication factors to stretch or squeeze the light curves
in time (instead of the random constant shift in time as just described), but our approach with the random shifts works slightly better
as discussed in Appendix \ref{sec:Appendix Bias training just on 3 models}. We therefore use the random shifts for the rest of this paper.
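The augmentation with random shifts can be sketched as follows (hypothetical helper; one draw of the time and magnitude shift is applied to every data point of one sample):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_model(times, mags):
    """Apply one random shift in time (-5 to 5 days) and one in
    magnitude (-0.4 to 0.4 mag) to a whole light curve; the same
    draw is used for every data point of the sample, effectively
    creating a new model with the same light curve shape but a
    slightly different peak location and brightness."""
    dt = rng.uniform(-5.0, 5.0)
    dm = rng.uniform(-0.4, 0.4)
    return np.asarray(times) + dt, np.asarray(mags) + dm
```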
\begin{figure}
\subfigure{\includegraphics[trim=14 17 22 16,width=0.45\textwidth]{figures/data_model_ww_norm_obs_vs_sim_filter_i_iter_5.png}}
\subfigure{\includegraphics[trim=14 17 22 16,width=0.45\textwidth]{figures/data_model_ww_norm_obs_vs_sim_filter_i_iter_33.png}}\caption{Simulated data to train a ML model. The filled dots correspond to the mock observation shown in Figure \ref{fig: example light curve 187 system}. The open dots represent the simulated training samples, where two out of the 400000 are shown for the $i$ band in the top and bottom panels.}
\label{fig: data to train NN}
\end{figure}
\section{Machine learning techniques}
\label{sec: Machine learning techniques}
In this section, we explain the two different ML models
used in this work, namely a Deep Learning network using fully connected layers and a Random Forest. We use these simple ML approaches to get started,
because if they work well, then more complicated models might not be necessary.
Results from these simple approaches would also serve as a guide for the development of more complex ML models.
The techniques all use the input data structure as
described in Section \ref{sec: Example data used for machine learning},
and provide for each image of the LSN Ia a time value $t$ as shown in
Figure \ref{fig: example light curve 187 system}. For the first
appearing image, the (ground truth) time $t = 0$ is the time of explosion and for the
next appearing image it is the time of explosion plus the time delay
$\Delta t$. Given our creation of the data set, which is done like a
fitting process for each light curve, we do not train the system to predict only the time
delay, but instead we have as output one time value per image as
described in Section \ref{sec: Data set for machine learning}.
\subsection{Deep Learning - Fully Connected Neural Network}
\label{sec: Deep Learning Network}
Neural networks are a powerful tool with a broad range of
applications.
To solve our regression problem, we use a FCNN,
consisting of an input layer, two hidden layers and one output layer
as shown in Figure \ref{fig: fully connected neural network}. Although universal
approximation results suggest that a FCNN with only one hidden layer of arbitrarily large
width can approximate any continuous function, FCNNs of finite width but with more layers have
been shown to be more useful in practice. We therefore use two hidden layers instead of one, and
test different widths of the networks by introducing a scaling factor $f$ for the
number of nodes in the hidden layers, in order to optimize the number of hidden nodes.
In our FCNN, each
node of the input layer corresponds to a magnitude value of a single
observation for a given filter and image, sorted as in the example of
Equation (\ref{eq: data structure}). Each node of the input layer ($d_j$) is
connected by a weight ($w_{1,jk}$) to each node of the first hidden layer ($h_{1,k}$).
In addition, a bias ($b_{1,k}$) is assumed, and we introduce nonlinearities by using the Rectified Linear Unit
(ReLU) activation function \citep[e.g.,][]{Glorot:2011,Maas:2013},
which is 0 for all negative values and the identity function for all
positive values. The nodes of the first hidden layer can therefore be calculated via:
\begin{equation}
h_{1,k} = \mathrm{ReLU}\big(\sum_{j=1}^{N_{\rm d}} w_{1,jk} \, d_j + b_{1,k}\big), \qquad k=1,2,\dots,10f.
\end{equation}
Further, all nodes in the first hidden layer are connected to all nodes in the second hidden layer in a similar manner:
\begin{equation}
h_{2,k} = \mathrm{ReLU}\big(\sum_{j=1}^{10 f} w_{2,jk} \, h_{1,j} + b_{2,k}\big), \qquad k=1,2,\dots,5f.
\end{equation}
The nodes from the second hidden layer are then finally connected to the output
layer to produce the time values
\begin{equation}
t_{k} = \sum_{j=1}^{5 f} w_{3,jk} \, h_{2,j} + b_{3,k}, \qquad
k=\left\{ \begin{array}{ll}
1,2 \ (\mathrm{double\ system})\\
1,2,3,4 \ (\mathrm{quad\ system}).\\
\end{array} \right.
\end{equation}
The output layer consists of two nodes for a double LSN Ia and four nodes for a quad LSN Ia.
We tested also other
FC network structures like using a different network for each image, using three hidden layers, or
using a linear or leaky ReLU activation function, but our default approach described above works best.
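To make the index conventions of the three equations above explicit, the forward pass can be sketched in \texttt{numpy} (the actual implementation uses \texttt{PyTorch}; the sizes below are hypothetical):

```python
import numpy as np

def relu(x):
    """ReLU: 0 for all negative values, identity for positive values."""
    return np.maximum(x, 0.0)

def fcnn_forward(d, W1, b1, W2, b2, W3, b3):
    """Forward pass of the FCNN: input magnitudes d (length N_d),
    two ReLU hidden layers of size 10f and 5f, and a linear output
    layer with one time value per SN image."""
    h1 = relu(W1 @ d + b1)   # h_{1,k}
    h2 = relu(W2 @ h1 + b2)  # h_{2,k}
    return W3 @ h2 + b3      # t_k

# Hypothetical sizes: N_d = 40 data points, f = 4, double system.
rng = np.random.default_rng(1)
N_d, f = 40, 4
W1, b1 = rng.normal(size=(10 * f, N_d)), np.zeros(10 * f)
W2, b2 = rng.normal(size=(5 * f, 10 * f)), np.zeros(5 * f)
W3, b3 = rng.normal(size=(2, 5 * f)), np.zeros(2)
t = fcnn_forward(rng.normal(size=N_d), W1, b1, W2, b2, W3, b3)
assert t.shape == (2,)  # two time values for a double LSN Ia
```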
We train our system for a certain number of epochs $N_\mathrm{epoch}$,
where we use the ML library \texttt{PyTorch}
\citep{NEURIPS2019_9015}. At each epoch, we subdivide our training data
randomly into mini batches with size $N_\mathrm{batch}$. Each mini batch is propagated through
our network to predict the output
which we compare to the ground-truth values by using the mean squared error
(MSE) loss. To optimize the loss function, we use the Adaptive Moment
Estimation (Adam) algorithm \citep{Kingma:2014} with a learning rate
$\alpha$ on the MSE loss to update the weights in order to improve the
performance of the network\footnote{For the other hyperparameters of
the Adam optimizer we use the PyTorch default values.}. Per epoch, we calculate the MSE
loss of the validation set from our FCNN, and store in the end the network at the epoch with the lowest validation loss. By selecting the epoch with the lowest validation loss, we minimize the chance of
overfitting to the training data. Typically we reach the lowest validation loss around epoch 200 and an example for the training and validation curve for our FCNN is shown in Appendix \ref{sec:Appendix train and validation loss}.
The test data set is used in the end to compare different FCNNs,
which have been trained with different learning rates $\alpha$, sizes
$f$ and mini-batch sizes $N_\mathrm{batch}$.
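The bookkeeping around this training loop, namely random mini-batching per epoch and keeping the network at the epoch with the lowest validation loss, can be sketched as follows (the Adam/MSE optimization itself, done with \texttt{PyTorch}, is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

def minibatches(n_samples, batch_size):
    """Randomly subdivide the training-sample indices into mini
    batches of size N_batch, as done once per epoch."""
    idx = rng.permutation(n_samples)
    return [idx[i:i + batch_size] for i in range(0, n_samples, batch_size)]

def select_best_epoch(val_losses):
    """Return the epoch with the lowest validation MSE loss; the
    network stored at this epoch is the one we keep, to minimize
    the chance of overfitting to the training data."""
    return int(np.argmin(val_losses))

batches = minibatches(400000, 256)  # our training-set size, N_batch = 256
assert sum(len(b) for b in batches) == 400000
```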
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{figures/Deep_Learning_network.png}
\caption{Fully connected neural network, where the input layer has $N_{\rm d}$ data points and $d_{j}$ stands for the magnitude value of the $j$-th data point in Equation (\ref{eq: data structure}). The size of the
two hidden layers scales with the factor $f$, and the output consists of two
(four) time values for a double (quad) LSN Ia.}
\label{fig: fully connected neural network}
\end{figure}
\subsection{Random Forest}
\label{sec: Random Forest network}
The RF \citep[Random Forest;][]{breiman2001random} is a method used for
classification and regression problems, constructing many random
decision trees. In this section, we give a brief introduction to the
idea of a RF and explain the setup we use.
To build a RF, we construct many random regression trees, a type of
decision tree in which each leaf represents numeric values (the outputs).
For our case, we create a total of $N_\mathrm{trees}$ random regression trees where a schematic example for a single regression tree is shown in Figure
\ref{fig: regression tree}. The root node is shown in magenta, the
internal nodes in gray and the leaf nodes in green.
The root node splits our whole data set, containing samples as defined by Equation (\ref{eq: data structure}),
into two groups based on a certain criterion (e.g., $m_{i1,2} < 1.2$): one where the criterion is true, and one where it is not. The internal nodes split the data in the same manner, until no
further splitting is possible and we end up at a leaf node to predict the two time values $t_1$ and $t_2$ as output.
To create random regression trees, we use a bootstrapped data set,
for which samples are drawn randomly
from the whole training data (400000 samples) until a given size
$N_\mathrm{max \, samples}$ is reached. Importantly, an individual sample of
the original training data can be drawn multiple times, and each random regression tree
is built from an individual bootstrapped data set, which is used to create the root, internal and leaf nodes. However, only a random subset of the features (e.g., just
$m_{i1,2}$, $m_{i2,5}$, and $m_{i2,9}$) is considered to construct the root node or a single internal node, where the splitting criterion (e.g., $m_{i1,2} < 0.5$) of a single feature is defined based on the mean value (e.g., $\bar{m}_{i1,2} = 0.5$) from all samples under investigation ($N_\mathrm{max \, samples}$ for the root node and fewer samples for the internal nodes depending on how the data set was split before). The number of available features we pick randomly from all features for the creation of a node is $N_\mathrm{max \, features}$\footnote{A single feature
like $m_{i1,2}$ can be picked multiple times.}.
In the following, we demonstrate the construction of the root node for a regression tree, as shown in
Figure \ref{fig: regression tree}, for the example of $N_\mathrm{max \, features} = 3$. Therefore we pick randomly three features from Equation (\ref{eq: data structure}), which we assume to be $m_{i1,2}$, $m_{i2,5}$, and $m_{i2,9}$. From a bootstrapped data set with $N_\mathrm{max \, samples}$ samples of our training data set we assume to find the mean values $\bar{m}_{i1,2} = 1.2$, $\bar{m}_{i2,5} = 1.0$, and $\bar{m}_{i2,9} = 0.6$. Therefore we investigate the three criteria $m_{i1,2} < 1.2$, $m_{i2,5} < 1.0$, and ${m}_{i2,9} < 0.6$ as potential candidates for the root node, where each of the criteria splits the $N_\mathrm{max \, samples}$ training samples into two groups. We select the best criterion/feature/split as the one that results in the lowest variance in the predictions within each of the groups created by the split. In other words, we can compute through this comparison a residual for $t_1$ and $t_2$ for each sample. From this, we can
calculate the sum of squared residuals for each candidate criterion, and the criterion which predicts the
lowest sum of squared residuals will be picked as our root node, which would be $m_{i1,2} < 1.2$ in our schematic example.
For each of the resulting two groups,
we do exactly the same procedure to construct
internal nodes which split the data further and further until no
further splitting is possible or useful\footnote{Further splitting is not useful if none of the investigated splitting criteria would lead to further improvements of the sum of squared residuals in comparison to not splitting the remaining samples.} and we end up at a leaf node to predict
the output. To avoid that a leaf node contains just a single training sample, we use two parameters, namely, $N_\mathrm{msl}$,
the \textbf{m}inimum number of \textbf{s}amples required to be in a
\textbf{l}eaf node, and $N_\mathrm{mss}$, the \textbf{m}inimum number of
\textbf{s}amples required to \textbf{s}plit an internal node. The $t_1$ and $t_2$ values of a leaf node
are the averages over all training samples in that leaf node.
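The split-selection step just described can be sketched as follows (a simplified helper of our own: thresholds are placed at the feature means, and the best split is the one with the lowest sum of squared residuals of the time values):

```python
import numpy as np

def best_split(X, y, candidate_features):
    """Among the randomly picked candidate features, split each at
    the mean value of that feature over the samples under
    investigation, and select the criterion with the lowest sum of
    squared residuals of the time values y within the two groups."""
    best = None
    for j in candidate_features:
        thresh = X[:, j].mean()
        ssr = 0.0
        for mask in (X[:, j] < thresh, X[:, j] >= thresh):
            if mask.any():
                ssr += ((y[mask] - y[mask].mean(axis=0)) ** 2).sum()
        if best is None or ssr < best[2]:
            best = (j, thresh, ssr)
    return best  # (feature index, threshold, sum of squared residuals)
```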
Following the above
procedure, many random regression trees are built; to create an output for a
single (test) sample, all regression trees are considered and the final output
is created from averaging over all trees.
For this approach we use the object
\texttt{sklearn.ensemble.RandomForestRegressor} of the software \texttt{scikit-learn}
\citep{scikit-learn,sklearn_api}, where we assume the
default parameters except for the previously mentioned
$N_\mathrm{msl}$, $N_\mathrm{mss}$, $N_\mathrm{trees}$, $N_\mathrm{max
\, samples}$ and $N_\mathrm{max \, features}$.
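In terms of \texttt{scikit-learn}, our hyperparameters map onto the \texttt{RandomForestRegressor} arguments roughly as sketched below; the keyword defaults shown here are placeholders, not our tuned values:

```python
from sklearn.ensemble import RandomForestRegressor

def make_rf(n_trees=800, n_mss=4, n_msl=1, n_max_samples=200000,
            max_features="sqrt"):
    """RF regressor; the keyword defaults are placeholders."""
    return RandomForestRegressor(
        n_estimators=n_trees,       # N_trees
        min_samples_split=n_mss,    # N_mss
        min_samples_leaf=n_msl,     # N_msl
        max_samples=n_max_samples,  # size of the bootstrapped data set
        max_features=max_features,  # features considered per node
    )

# rf = make_rf()
# rf.fit(X_train, y_train)  # y_train: one time value per SN image
# t_pred = rf.predict(X_test)
```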
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{figures/Random_Forest.png}
\caption{Schematic example of a regression tree for a double system
predicting two time values for certain input data as in Equation
(\ref{eq: data structure}). The root node is represented by the
magenta box, the internal nodes by gray boxes and the leaf nodes by
green boxes.}
\label{fig: regression tree}
\end{figure}
\section{Machine learning on example mock-observation}
\label{sec: Machine learning on example mock-observation}
In this section, we apply the ML techniques
from Section \ref{sec: Machine learning techniques} to our example
mock observation of a double LSN Ia described in Section \ref{sec:
Example data used for machine learning}. In Section \ref{sec: Best fit, DL vs RF} we
find the best FCNN and RF and compare results from
the corresponding test sets based on the four theoretical models also used in the training process.
In Section \ref{sec: Evaluation on SNEMO test set} we apply the best FCNN and RF
to an empirical data set not used in the training process, to test whether both approaches
are also capable of transfer learning. This final test is very important since in reality we can never ensure
that our assumed light curve shapes in the training process will fully match a real observation.
\subsection{Best fit: fully connected neural network vs. Random Forest}
\label{sec: Best fit, DL vs RF}
To find a FCNN and a RF which provides the best fit to our mock
observation from Figure \ref{fig: example light curve 187 system}, we
explore a set of hyperparameters as listed in Table \ref{tab: DL
varied parameter for training} for the FCNN and Table
\ref{tab: RF varied parameter for training} for the RF.
To find the best ML model, we use the test set to evaluate each set of
hyperparameters. This is just to find an appropriate set of hyperparameters which we will use from
there on. Our final judgment of the performances of the ML models will be based on
the ``SNEMO data set'' where
light curves will be calculated using an empirical model (see Section \ref{sec: Evaluation on SNEMO test set}).
For each sample $i$ of the test set, we get two time
values $t_{1,i}$ and $t_{2,i}$, from which we can calculate the time delay
$\Delta t_i = t_{1,i} - t_{2,i}$, which we compare to the true time delay
$\Delta t_{\mathrm{true},i}$ to calculate the ``time-delay deviation'' of the sample
as
\begin{equation}
\tau_i = \Delta t_i - \Delta t_{\mathrm{true},i}.
\label{eq:td_deviation}
\end{equation}
We investigate here the absolute time-delay deviation instead of the relative one
($\tau_i / \Delta t_{{\rm true},i}$), because this allows us to draw conclusions
about the minimum time delay required to achieve certain goals in precision and accuracy.
From our results we do not find a dependency on the absolute time delay used in the training process,
which is what we expect from the setup of the FCNN and the RF.
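The summary statistics we quote for a test set (the median of $\tau_i$ with the 16th and 84th percentiles) can be computed as, for example:

```python
import numpy as np

def delay_deviation_summary(t1, t2, dt_true):
    """Time-delay deviation tau_i = (t_{1,i} - t_{2,i}) - dt_true_i
    over a test set, summarized as the median with lower/upper
    1-sigma widths (50th - 16th and 84th - 50th percentiles)."""
    tau = np.asarray(t1) - np.asarray(t2) - np.asarray(dt_true)
    lo, med, hi = np.percentile(tau, [16, 50, 84])
    return med, med - lo, hi - med
```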
For the FCNN, we find that ($\alpha, f, N_\mathrm{batch},
N_\mathrm{epoch}) = (0.0001, 40, 256, 400)$ provides the best result,
meaning that the median of $\tau_i$ of the whole test set is lower
than 0.05 days (to reduce the bias) and the 84th$-$16th percentile (1$\sigma$ credible interval) of the test set is
the lowest of all networks considered. For the RF the
hyperparameters $(N_\mathrm{trees}, N_\mathrm{mss}, N_\mathrm{msl},
N_\mathrm{max \, samples}, N_\mathrm{max \, features}) = (800, 4, 1,
200000, \sqrt{N_\mathrm{all \, features}})$ provide the best
result. In the following we always use these two sets of
hyperparameters for the FCNN or the RF, unless specified
otherwise. We note that $N_\mathrm{trees} = 800$ is on the upper
side of what we investigated, but increasing the number of trees
further makes the computation even more costly. Nevertheless, we
tested also $N_\mathrm{trees} = 1000, 1200, 1600, 2000$ and
$N_\mathrm{trees} = 3000$ with $(N_\mathrm{mss}, N_\mathrm{msl},
N_\mathrm{max \, samples}, N_\mathrm{max \, features})$ from the
best fit as listed above. We find results which are basically the
same as for $N_\mathrm{trees} = 800$ or slightly worse (0.02 d at most)
and therefore we stick with $N_\mathrm{trees} = 800$ which is sufficient.
The comparison between the FCNN and the RF is shown in
Figure \ref{fig: best fit DL vs RF for example light curve 187
system}, where we present the median (50th percentile), with the 84th-50th percentile (superscript) and 16th-50th percentile (subscript) of the test set under consideration. The results include microlensing and observational
uncertainties as described in Section \ref{sec: simulated light curves
for LSNe Ia}. For the training and testing, we considered the four SN
Ia models, merger, N100, sub-Ch and W7 (therefore we use the description ``corresponding test set'' in the title of Figure \ref{fig: best fit DL vs RF for example light curve 187
system}). Further, the results are based
on using just the \textit{i} band, assuming the data structure as
defined in Equation (\ref{eq: data structure}).
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/DL_vs_RF.png}
\caption{FCNN and RF for the mock observation in Figure \ref{fig: example light curve 187 system}. The ML models' hyperparameters are set to the values where the test set yields a bias below 0.05 days and the smallest 68\% credible interval of the time-delay deviation (in Equation (\ref{eq:td_deviation})).}
\label{fig: best fit DL vs RF for example light curve 187 system}
\end{figure}
We find that both ML models provide accurate measurements of the time
delay with the $1 \sigma$ uncertainty for the FCNN around 0.7
days and the RF around 0.8 days, where both have low bias ($\leq$%
$0.04$ days). Nevertheless, the training and test sets are produced using the
same SN Ia models. If light curves in the
test sets are different from the ones used for training, this can lead to broadened
uncertainties, and more critically, also to biases (see Appendix \ref{sec:Appendix
Bias training just on 3 models}). Further, we
learn from Appendix \ref{sec:Appendix Bias training just on 3 models}
that, as soon as the different light curves used for training cover a
broad range, the trained ML model can be used for light curve shapes it
has never seen. Therefore in Section \ref{sec: Evaluation on SNEMO
test set}, we evaluate the RF and the FCNN trained on four
theoretical models on a data set based on the empirical
\texttt{SNEMO15} model.
\begin{table}
\centering
\begin{tabular}{cc}
$\alpha$ & 0.01, 0.001, 0.0001, 0.00001 \\
\midrule
$f$ & 5, 10, 20, 40, 80, 160\\
\midrule
$N_\mathrm{batch}$ & 64, 128, 256, 512 \\
\midrule
$N_\mathrm{epoch}$ & 400 \\
\end{tabular}
\caption{Investigated parameters for the training process of the FCNN
(see Figure \ref{fig: fully connected neural network}) for
the system listed in Table \ref{tab: Example double LSNe Ia}. We
vary the learning rate $\alpha$, the size of the hidden layers by a
factor $f$, and the size of the mini batches
$N_\mathrm{batch}$. Furthermore, 400 training epochs
($N_\mathrm{epoch}$) are sufficient, given that the minimum
validation-set loss is typically reached around epoch 200.}
\label{tab: DL varied parameter for training}
\end{table}
\begin{table}
\centering
\begin{tabular}{cc}
$N_\mathrm{trees}$ & 200, 400, 800 \\
\midrule
$N_\mathrm{mss}$ & 2, 4, 8\\
\midrule
$N_\mathrm{msl}$ & 1, 2, 4 \\
\midrule
$N_\mathrm{max \, samples}$ & 50000, 100000, 200000, 300000, 400000 \\
\midrule
$N_\mathrm{max \, features}$ & 1, $\sqrt{N_\mathrm{all \, features}}, N_\mathrm{all \, features}$
\end{tabular}
\caption{Investigated parameters for the training process of the RF
(see Figure \ref{fig: regression tree} for a single
regression tree) for the system listed in Table \ref{tab: Example
double LSNe Ia}. We vary the number of trees $N_\mathrm{trees}$,
the minimum number of samples required to split an internal node
$N_\mathrm{mss}$, the minimum number of samples required to be in a
leaf node $N_\mathrm{msl}$, the size of the bootstrapped data set
$N_\mathrm{max \, samples}$, and the maximum number of features
$N_\mathrm{max \, features}$ considered to create a root or internal
node.}
\label{tab: RF varied parameter for training}
\end{table}
\subsection{Transfer learning: evaluation on \textit{SNEMO15} data set}
\label{sec: Evaluation on SNEMO test set}
To test if the ML models trained on four SN Ia models with the random shifts in time and
magnitude as introduced in Section \ref{sec: Data set for machine
learning} can generalize well enough to real SN Ia data, we created a data
set based on the empirical \texttt{SNEMO15} model, which is shown in
Figure \ref{fig: SN Ia models vs SNEMO}. The empirical model covers
only a wavelength range from 3305\,\AA\ to 8586\,\AA, and
with $z_{\rm s} = 0.76$ (Table \ref{tab: Example double LSNe Ia}), the $i$ band is the bluest band we can
calculate.
To account for macrolensing and brightness deviations for the
\texttt{SNEMO15} model in comparison to the theoretical SN models, we
set the median \texttt{SNEMO15} light curve equal to the mean value of
the four macrolensed SN Ia models. Since the light curves are
normalized before the training process, this is only
important to avoid over- or underestimations of the observational
noise. Furthermore, to include microlensing, we use microlensed light
curves from the four theoretical models, initially created for the
corresponding test set, and subtract the macrolensed light curve, assuming
$\mu_\mathrm{macro} = 1/((1-\kappa)^2-\gamma^2)$. We therefore obtain
from our 4 models 4000 microlensing contributions for the light
curves, which the FCNN and the RF have not seen in their training
process. For each of the microlensing contributions, we then draw
randomly one of the 171 \texttt{SNEMO15} light curves to create a
microlensed \texttt{SNEMO15} light curve. From the 4000 microlensing contributions, we have a sample of 4000 microlensed light curves. For each light curve, we then draw 10
random noise and time delay realizations to create a data set, as described
in Section \ref{sec: Data set for machine learning}. We call this the ``\texttt{SNEMO15} data set''.
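The two ingredients of this construction, the macro-magnification and the microlensing contribution of a theoretical model, can be sketched as follows (hypothetical helpers; magnitudes in mag):

```python
import numpy as np

def macro_magnification(kappa, gamma):
    """mu_macro = 1 / ((1 - kappa)^2 - gamma^2). Note that this is
    negative for saddle images; the absolute value gives the flux
    magnification factor."""
    return 1.0 / ((1.0 - kappa) ** 2 - gamma ** 2)

def microlensing_contribution(m_microlensed, m_macro_only):
    """Microlensing contribution (in mag) of a theoretical model:
    the microlensed light curve minus the purely macrolensed one;
    this contribution is then added to a randomly drawn SNEMO15
    light curve."""
    return np.asarray(m_microlensed) - np.asarray(m_macro_only)
```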
Figure \ref{fig: SENOM15 test for DL and RF using filters iz} shows
the results where we evaluate the FCNN and the RF from Figure \ref{fig: best fit DL vs RF for example light curve 187 system}, trained on four
theoretical SN Ia models, on the corresponding test set (built from the same four theoretical SN Ia models) and on the \texttt{SNEMO15} data set. The first important thing we note is that the RF shows almost
no bias, whereas the FCNN has a higher bias when evaluated on the \texttt{SNEMO15} data set. To investigate this
further, we look at results from the RF and the FCNN for the set of
hyperparameters as listed in Table \ref{tab: DL varied parameter for
training} and \ref{tab: RF varied parameter for training} for three
different cases using $i$ band, $z$ band, or $y$ band.
We find
that the absolute bias of the FCNN for the different hyperparameters and bands ($i$, $z$, and $y$)
is mostly below 0.4 days, but
higher values are also possible. The problem is that these
variations in the bias on the \texttt{SNEMO15} data set are neither related to biases we see in the
corresponding test sets nor attributable to a specific set of hyperparameters. As a result, we cannot identify the underlying source of the bias, apart from that it is due to suboptimal generalization from the theoretical SN Ia models to \texttt{SNEMO15} in the FCNN framework.
The RF works much better in this context, as the absolute bias is always lower than 0.12
days for $i$, $z$, and $y$ band. Only the hyperparameter $N_\mathrm{max
\, features} = 1$ can lead to a higher bias up to 0.22 days, but
this hyperparameter is excluded because of its
much worse performance in precision on the corresponding test set
in comparison to $N_\mathrm{max \,
features} = \sqrt{N_\mathrm{all \, features}}$ or $N_\mathrm{max \,
features} = N_\mathrm{all \, features}$. Therefore, as long as we
restrict ourselves to LSNe Ia with delays longer than 12 days we can
achieve a bias below 1 percent, which allows accurate measurements of
$H_0$. Furthermore, the
bias is not the same in all filters. While the absolute bias in the
$y$ band goes up to 0.12 days, we have a maximum of 0.08 days in $z$
band and 0.03 days in $i$ band. The comparison of multiple bands
therefore helps to identify some outliers.
The bias investigation of the FCNN and the RF is summarized in Figure \ref{fig: DL and RF bias
on corresponding and SNEMO15 test set} using all hyperparameters (except $N_\mathrm{max
\, features} = 1$ which is excluded because of its bad performance on the corresponding test set)
and the $i, z,$ and $y$ bands. From the upper panel we see that the large biases of our FCNN
on the \texttt{SNEMO15} data set are unrelated to the biases in the corresponding test
set; it is therefore not possible to identify, from the corresponding test set alone, a set of
hyperparameters that also works well on the \texttt{SNEMO15} data set. From the lower panel of Figure
\ref{fig: DL and RF bias on corresponding and SNEMO15 test set} we see that the biases of the
RF on the corresponding test set and on the \texttt{SNEMO15} data set are likewise not directly
related, but this is not a problem, as the biases on the \texttt{SNEMO15} data set are low enough for
precision cosmology. From this example we see that the RF is capable of transfer learning,
i.e., training on one kind of data also works on another kind of data, which
is not the case for our FCNN.
In principle this was already suggested by the
investigation in Appendix \ref{sec:Appendix Bias training just on 3 models}; the
random shifts in time we introduced there seemed to improve the transfer-learning behaviour
significantly, but this was still not enough for the final test on the \texttt{SNEMO15} data set. Investigating the
importance of all the input features as listed in Equation (\ref{eq: data structure}),
we find that the FCNN focuses mostly on the peak directly whereas for the RF the features before
and after the peak are the most important ones.
More about this is discussed in Appendix \ref{sec:Appendix Feature importance}.
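A minimal sketch of such a feature-importance check, using scikit-learn's \texttt{feature\_importances\_} on toy data in which the informative epochs are placed by hand (a hypothetical illustration, not our pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy sketch of a feature-importance check: the target is made to depend
# only on epochs 5-9 (a stand-in for the rise before the peak), so these
# input features should dominate the importance ranking.
rng = np.random.default_rng(1)
n_epochs = 30
X = rng.normal(size=(600, n_epochs))
y = X[:, 5:10].sum(axis=1) + 0.05 * rng.normal(size=600)

rf = RandomForestRegressor(n_estimators=100, random_state=1).fit(X, y)
importance = rf.feature_importances_      # normalized: sums to one

# The hand-placed informative epochs carry most of the importance,
# analogous to the pre/post-peak features favoured by our RF.
informative_share = importance[5:10].sum()
```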
In the remainder of the paper, we proceed to present results based on the RF,
because the significant bias in our FCNN makes accurate
cosmology difficult to achieve, especially for LSN Ia systems with short delays.
Using deeper networks would not be enough to improve our FCNN,
as this would merely allow a better fit to the
training data without ensuring any improvement in transfer learning.
It would therefore be necessary
to provide more realistic input light curves for the training process,
as the network struggles to generalize to light-curve shapes it has not
seen. Such an improvement could be achieved by also using the
\texttt{SNEMO15} light curves in the training process, but then
no test set with light-curve shapes the network has never seen would
remain. Another approach would be to incorporate regularization
or dropout into our FCNN; however, since neither was necessary
for good performance on the corresponding test set, adding them now would
amount to fine-tuning to our \texttt{SNEMO15} data set, given that all earlier
tests were encouraging enough to proceed to the final test.
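For illustration only, L2 weight regularization of this kind could be sketched with scikit-learn's \texttt{MLPRegressor} and its \texttt{alpha} parameter (a stand-in on toy data; our actual FCNN, and a true dropout layer, would require a deep-learning framework):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical illustration: a larger alpha (L2 penalty) discourages
# large weights, trading training fit for smoother functions that may
# transfer better to unseen light-curve shapes.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))
y = X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=200)

weak = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-5,
                    max_iter=500, random_state=2).fit(X, y)
strong = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-1,
                      max_iter=500, random_state=2).fit(X, y)
```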
Therefore we postpone further investigations of FCNNs to future studies, especially
since other neural networks, such as recurrent neural networks or long
short-term memory networks \citep{Sherstinsky_2020}, might perform
even better.
Another thing we learn is that the distribution of the recovered time delays from the \texttt{SNEMO15} data
set is $\sim$0.5 days broader than that of the corresponding test
sets. This is not surprising as the RF and the FCNN have never seen
such light curves in the training process. A $\sim$1.4 day precision
on a single LSN Ia is still a very good measurement and allows us to
conduct precision cosmology from a larger sample of LSNe
Ia. Nevertheless, we see in this section that even though the
uncertainties for the RF are larger than those of the FCNN, the RF
provides low bias when used on empirical data and is therefore
preferred.
\begin{figure}
\subfigure{\includegraphics[width=0.49\textwidth]{figures/DL_uncLSSTplus_1.0_justMacroTrain_False_justMacroFinalTestSet_False_40_256_DL_final_test_set_i.png}
}
\subfigure{\includegraphics[width=0.49\textwidth]{figures/RF_uncLSSTplus_1.0_justMacroTrain_False_justMacroFinalTestSet_False_4_1_RF_final_test_set_i.png}
}
\caption{FCNN and RF trained on four theoretical models for the
\textit{i} band evaluated on two data sets. The dashed black line
represents the corresponding test set based on the four theoretical
models and the data set of the blue line is based on the empirical
\texttt{SNEMO15} model.}
\label{fig: SENOM15 test for DL and RF using filters iz}
\end{figure}
\begin{figure}
\subfigure{\includegraphics[width=0.45\textwidth]{figures/20DL_bias_final_test_set0.png}}
\subfigure{\includegraphics[width=0.45\textwidth]{figures/100RF_bias_final_test_set0.png}}
\caption{Bias of FCNN and RF on the corresponding test set, composed of four theoretical SN Ia models used for training, and a data set based on the empirical \texttt{SNEMO15} model, not used during training, for a variety of different hyperparameters and filters ($i, z,$ and $y$). The large biases on the \texttt{SNEMO15} data set of up to 1 day in our FCNN approach arise for the different hyperparameters even though the corresponding test set provides biases within 0.25 days. The RF provides much lower biases in all cases; these depend only weakly on the hyperparameters and are instead mostly set by the filters under consideration.}
\label{fig: DL and RF bias on corresponding and SNEMO15 test set}
\end{figure}
\section{Microlensing, noise and choice of filters}
\label{sec: Microlensing, observational noise and choice of filters}
In this Section we use the RF from Section \ref{sec: Best fit, DL vs RF} and apply it
to the mock observation from Section \ref{sec: Example data used for machine learning} under hypothetical assumptions about microlensing and noise, in order to identify sources of uncertainty (Sections \ref{sec: Microlensing map parameters kappa, gamma, s} and \ref{sec: Uncertainties due to microlensing and noise}).
We further investigate potential bands to target for follow-up observations (Section \ref{sec: filters used for training}). All results presented in this section are based on the RF evaluated on test sets from the four theoretical models; the conclusions would be the same if the results from the FCNN were presented instead.
\subsection{Microlensing map parameters $\kappa, \gamma, s$}
\label{sec: Microlensing map parameters kappa, gamma, s}
To investigate uncertainties in the microlensing characterization,
we use the RF from Section \ref{sec: Best
fit, DL vs RF}, but evaluate it on different test sets with varying
$\kappa, \gamma$, and $s$ values, which deviate from the original
training data.
Figure \ref{fig: evaluated on different kappa, gammas} shows the RF
evaluated on different test sets. The black dashed line
represents the evaluation of the RF on the corresponding test
set, which is calculated according to Section \ref{sec: Data set for
machine learning}. The blue and orange lines represent very similar
test sets, but calculated on a different microlensing map. Instead of
the $\kappa$ and $\gamma$ values listed in Table \ref{tab: Example
double LSNe Ia}, we assume for the first image $(\kappa, \gamma) =
(0.201, 0.225)$ and for the second image $(\kappa, \gamma) = (0.775,
0.765)$ to calculate the test set corresponding to the blue line. The
orange line represents a LSNe Ia where we have for the first image
$(\kappa, \gamma) = (0.301, 0.325)$ and for the second image $(\kappa,
\gamma) = (0.875, 0.865)$. Even though the RF has never seen the $(\kappa, \gamma)$ configurations represented by the orange and blue lines
during training, the results are
very similar to the corresponding test set of the RF and given
that typical model uncertainties are around $0.05$
\citep[e.g.,][]{More:2016sys}, uncertainties in $\kappa$ and $\gamma$ are not
critical for our procedure.
In Figure \ref{fig: evaluated on differen s} we do a similar investigation,
but this time we vary the $s$ value of
the microlensing maps. From the comparison of the black dashed line to
the orange line, which represents almost the same $s$ value, we see
that the uncertainties are comparable. Therefore the much wider
uncertainty for $s=0.3$ (blue line) is not due to variations from
different microlensing maps for the same parameter set, but from the
fact that lower $s$ values provide more micro caustics in the map,
which leads to more events where these caustics are crossed and
therefore to more microlensing events and higher uncertainties. This
also explains the much tighter uncertainties of $s=0.9$, which
corresponds to a much smoother microlensing map. These results are in
good agreement with those of \cite{Huber:2020dxc}, who also showed that higher $s$
values lead to lower microlensing uncertainties.
For a real observation, the $s$ value is often not known very
precisely, which is no problem as the RF still works very
well. One only has to be careful that an underestimation
of the $s$ value leads to an overestimation of the overall
uncertainties. Therefore, assuming a slightly lower $s$ value than one
might expect is a good way to obtain a conservative estimate of the
uncertainties.
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/not_trained_for_test_set_kappa_gamma_RF.png}
\caption{The RF on its corresponding test set (black dashed
line, where training and test sets have the same $\kappa$ and $\gamma$ values)
and on two other test sets (blue and orange), with slightly
different $\kappa$ and $\gamma$ values of the microlensing map in
comparison to that of the training data.}
\label{fig: evaluated on different kappa, gammas}
\end{figure}
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/not_trained_for_test_set_s_RF.png}
\caption{The RF on its corresponding test set (black dashed
line, where training and test sets have the same $s$ value) and on three
other test sets (blue, orange, and green), with different
$s$ values of the microlensing map in comparison to that of the training
data.}
\label{fig: evaluated on differen s}
\end{figure}
\subsection{Uncertainties due to microlensing and noise}
\label{sec: Uncertainties due to microlensing and noise}
In this section, we compare the RF from Section \ref{sec:
Best fit, DL vs RF} to other RF models with various
assumptions about microlensing and noise as shown in Figure \ref{fig:
no micro, no noise, no micro noise}.
From the two cases containing microlensing in comparison to the two
cases without microlensing, we find that microlensing increases the
uncertainties almost by a factor of two. Although this is quite
substantial, we see that the contribution of the observational noise
is much higher and is the dominant source of uncertainty in the time-delay measurement. Therefore, to
achieve lower uncertainties, deeper observations with smaller photometric uncertainties are required. This is
in agreement with \cite{Huber:2019ljb}, who found that a substantial
increase in the number of LSNe Ia with well measured time delays can be
achieved with greater imaging depth.
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/noise_micro_RF.png}
\caption{Comparison of the RF model from Section \ref{sec:
Best fit, DL vs RF} to three other RF models with hypothetical
assumptions about noise and microlensing. For our realistic mock observation, the noise in the light curves dominates over microlensing as the main source of uncertainty for measuring the time delays.}
\label{fig: no micro, no noise, no micro noise}
\end{figure}
\subsection{Filters used for training}
\label{sec: filters used for training}
In this section we investigate eight different filters
(\textit{ugrizyJH}) and possible combinations of them to get more
precise measurements. Figure \ref{fig: 8 different bands, each one
seperatly} shows eight RF models where each is trained and
evaluated on a single band. The $i$ band, presented first in Section
\ref{sec: Best fit, DL vs RF} provides the most precise
measurement. The next promising filters are $r$, $z$, $g$, and $y$ in
that order. For the $u$, $J,$ and $H$ bands, the precision of the
measurement is so poor that they are barely usable. The reason
for the strong variation between different bands is the quality of the
light curves, which becomes clear from Figure \ref{fig: appendix further bands of mock
observations}: only the $g$ to $y$ bands provide observations in which
the peak of the light curves can be identified. The light curves with the
best quality are those in the $r$ and $i$ bands, which therefore work best for
our RF.
There are different ways to combine multiple filters to measure the
time delay. The first possibility would be to construct color curves
to reduce the effect of microlensing in the so-called achromatic phase
\citep{Goldstein:2017bny,Huber:2020dxc}. However, as pointed out by
\cite{Huber:2020dxc}, our best-quality color curve, $r-i$, would not be
ideal, as there are no features for a delay measurement within the
achromatic phase. Further, we saw in Section \ref{sec: Uncertainties
due to microlensing and noise} that our dominant source of
uncertainty is the observational noise instead of
microlensing. Therefore using color curves for this mock example is
not practical. We further see that even though color curves are in
theory a good way to reduce microlensing uncertainties, the approach
may fail for a real detection because not enough bands with high-quality
data are available.
Another way of combining multiple filters is to train a single RF model
for multiple filters.
Generalising Equation (\ref{eq: data structure}) for the $r$ and $i$ bands,
we use as input structure
\begin{equation}
m_{r1, 1} \, m_{r1, 2} \, .. \, m_{r1, N_{r1}} \, m_{r2, 1} \, .. \, m_{r2, N_{r2}} \, m_{i1, 1} \, .. \, m_{i1, N_{i1}} \, m_{i2, 1} \, .. \, m_{i2, N_{i2}},
\label{eq: data structure ri}
\end{equation}
where additional bands are appended in the same way. The results are
summarized in Figure \ref{fig: different bands using in a single
network}, where we see that combining the two most promising bands
improves the uncertainty by about $0.1$ days, but adding more bands
does not help. Comparing these results to Figure \ref{fig: different
bands multiplying distributions from singe networks}, where
different distributions from Figure \ref{fig: 8 different bands, each
one seperatly} are multiplied with each other\footnote{We assume that different filters have independent detector noise.}, we see that a single
RF model for multiple filters does not benefit much from multiple
bands. Therefore it is preferable to use a single RF model per band and
combine them afterwards. Using three or more filters can also help to
identify potential biases in a single band as pointed out in Section
\ref{sec: Evaluation on SNEMO test set}. Combining $r$, $i$, and $z$
band via multiplication helps to reduce the uncertainty by more than a
factor of two in comparison to using just the $i$ band for our system
with $z_{\rm s} = 0.76$. Further bands which might be considered for
follow-up observations are the $g$ and $y$ band.
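Under the independence assumption stated in the footnote, the bin-by-bin multiplication of per-band delay distributions can be sketched as follows (the Gaussian shapes and widths here are hypothetical placeholders, not our measured posteriors):

```python
import numpy as np

# Sketch of the per-band combination: assuming independent noise between
# filters, the per-band time-delay distributions are multiplied bin by
# bin and renormalized.
delays = np.linspace(-10.0, 10.0, 401)       # trial time-delay grid [days]

def gaussian(mu, sigma):
    p = np.exp(-0.5 * ((delays - mu) / sigma) ** 2)
    return p / p.sum()

# Placeholder per-band delay distributions.
p_r, p_i, p_z = gaussian(0.1, 1.2), gaussian(0.0, 0.9), gaussian(-0.2, 1.5)

combined = p_r * p_i * p_z
combined /= combined.sum()

def width(p):                                # standard deviation of a pdf
    mu = (delays * p).sum()
    return np.sqrt(((delays - mu) ** 2 * p).sum())

# The combined distribution is tighter than any single band's.
```

For Gaussian inputs the combined width is $1/\sqrt{\sum_b 1/\sigma_b^2}$, which is why adding a third band of comparable quality can reduce the uncertainty substantially.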
The choice of the ideal filters depends on the source redshift and
therefore we show in Figure \ref{fig: appendix filters for different
redshifts} a similar plot as in Figure \ref{fig: 8 different bands,
each one seperatly} but for $z_{\rm s} = 0.55$ and $z_{\rm s} = 0.99$,
which corresponds to the 16th and 84th percentile of the source
redshift from LSNe Ia in the OM10 catalog. From this we learn that the
three most promising filters are $g, r,$ and $i$ band for $z_{\rm s} \lesssim 0.6$,
whereas for $z_{\rm s} \gtrsim 0.6$ the $r, i,$ and $z$ band are preferred.
The main reason for this behavior is the low rest-frame UV flux of SNe Ia due to line blanketing,
which gets shifted more and more into the $g$ band for higher $z_{\rm s}$.
If four filters could be used, then we have $g, r, i,$ and $z$ for $z_{\rm s} \lesssim 0.8$ and
$r, i, z,$ and $y$ for $z_{\rm s} \gtrsim 0.8$. If resources for five filters are available, we recommend $g, r, i, z,$ and $y$, where only for high source redshifts ($z_{\rm s} > 1.0$) the $J$ band might be preferred over the $g$ band, but given the poor precision in $g$ and $J$ band at such high redshifts it is questionable anyway how useful the fifth band is in these cases.
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/RF_uncLSSTplus_1.0_RF_different_filters_ugrizyJH.png}
\caption{Eight different RF models, each trained on a data set from a single band (as indicated in the legend) and evaluated on the corresponding test set, similar in procedure to Section \ref{sec: Best fit, DL vs RF}.}
\label{fig: 8 different bands, each one seperatly}
\end{figure}
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/RF_uncLSSTplus_1.0_RF_compare_different_filters.png}
\caption{Multiple filters used to train a single RF. Using more than two filters does not improve the results further.}
\label{fig: different bands using in a single network}
\end{figure}
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/RF_uncLSSTplus_1.0_RF_sep_different_filters.png}
\caption{We train a single RF per filter as shown in Figure
\ref{fig: 8 different bands, each one seperatly} and the combination
of multiple filters is done by multiplying the corresponding
distributions. We see that multiple filters drastically help to
reduce the uncertainties. Therefore observing three to four bands
would be ideal.}
\label{fig: different bands multiplying distributions from singe networks}
\end{figure}
\section{Machine learning on further mock observations}
\label{sec: Machine learning on further mock-observation}
In this section we investigate further mock systems. We test systems with different moon phases (Section \ref{sec: moon phases}) and source/lens redshifts (Section \ref{sec: source and lens redshifts}) to investigate the change of the uncertainties in comparison to our mock system from Sections \ref{sec: Example data used for machine learning}, \ref{sec: Machine learning on example mock-observation} and \ref{sec: Microlensing, observational noise and choice of filters}. Furthermore, we test the number of data points required before peak to achieve good time delay measurements (Section \ref{sec: data points before peak}) and a quad system with various different properties in comparison to our previous studies (Section \ref{sec: quad lsne ia and higher microlensing uncertainties}).
\subsection{Different moon phases}
\label{sec: moon phases}
In this section we address the effect of different moon phases. We
assume the same LSN Ia as in Sections \ref{sec: Example data used for
machine learning}, \ref{sec: Machine learning on example
mock-observation} and \ref{sec: Microlensing, observational noise and choice of filters}, but place it differently in time. From Figure
\ref{fig: appendix further bands of mock observations}, we can already
estimate that if we ignore the $u$ band, which has too low a
signal-to-noise ratio anyway, mostly the $g$ band will be influenced,
as the other bands are either significantly brighter than the 5$\sigma$
point-source depth or depend only weakly on the moon phase.
For the LSN Ia presented in Sections \ref{sec: Example data used for
machine learning}, \ref{sec: Machine learning on example
mock-observation} and \ref{sec: Microlensing, observational noise and choice of filters}, we see from Figure \ref{fig: appendix further
bands of mock observations} that for the $g$ band, the observations
before the peak are significantly affected by moon light,
which according to Figure \ref{fig: 8 different bands, each one
seperatly} leads to an uncertainty around $2.1 \, \mathrm{d}$.
For a case where the peak in the $g$ band overlaps with the full moon we find a
similar uncertainty, whereas a case where the peak in the $g$ band
matches the new moon has an uncertainty around $1.7 \,
\mathrm{d}$. For cases where the peak is not significantly brighter
than the 5$\sigma$ point-source depth, the moon phase is important, but
given that our ML models work with a variable 5$\sigma$ point-source
depth, the effect of the moon phase is taken into account in our uncertainties.
In terms of follow-up observations,
one might consider observing longer at full moon, especially in the bluer bands,
to reach a greater depth, or resort to redder bands if the moon is likely to affect
the observations in the bluer bands adversely; apart from that, we generally recommend following up
all LSNe Ia independently of the moon phase.
\subsection{Source and lens redshifts}
\label{sec: source and lens redshifts}
The mock system we have investigated in Sections \ref{sec: Example data
used for machine learning} and \ref{sec: Machine learning on example
mock-observation} has $z_{\rm s} = 0.76$ which corresponds roughly to
the median source redshift of the OM10 catalog. Furthermore, we have
learned from Section \ref{sec: Uncertainties due to microlensing and noise} that
the observational noise is the dominant source of uncertainty and we
therefore expect a large dependency of the time-delay measurement on
$z_{\rm s}$ (assuming a fixed exposure time during observations).
We therefore investigate in this section $z_{\rm s} = 0.55$ and
$z_{\rm s} = 0.99$, which correspond to the 16th and 84th percentiles, respectively,
of the source redshift from LSNe Ia in the OM10 catalog. To probe just
the dependency on $z_{\rm s}$, we leave all other parameters as defined
in Table \ref{tab: Example double LSNe Ia}. We do not scale the absolute time delay
with the source redshift, since this is just a hypothetical experiment to demonstrate
how different brightnesses, related to the source redshift, influence the time-delay measurement.
The two cases are shown in
Figure \ref{fig: example light curve different source and lens
redshifts}, where we see the much better quality of the light curve
for $z_{\rm s} = 0.55$ (upper panel) in comparison to $z_{\rm s} = 0.99$
(lower panel). Further, we also probe the lens redshift by
investigating $z_{\rm d}=0.16$ and $z_{\rm d}=0.48$, which correspond
to the 16th and 84th percentiles of the lens redshift in the OM10
catalog, again leaving all other parameters unchanged.
The results are summarized in Table \ref{tab: source and lens redshift
investigation}. We see that in comparison to $z_{\rm s} = 0.76$, the
case with $z_{\rm s} = 0.55$ has an uncertainty that is smaller by $\sim$%
$0.2 \, \mathrm{d}$, whereas the case $z_{\rm s} = 0.99$ has an uncertainty
that is larger by $\sim$%
$0.7 \, \mathrm{d}$. This trend is expected, and
means that especially for the case of $z_{\rm s} = 0.99$, a greater
depth would improve the results significantly. Comparing the results of
varying lens redshifts, we see a much smaller impact on the
uncertainty. Still there is a slight trend that higher lens redshifts
correspond to larger time-delay uncertainties, which is in good agreement with
\cite{Huber:2020dxc}, who find the tendency that microlensing
uncertainties increase with higher lens redshift if everything else is
fixed. The reason for this is that the
physical size of the microlensing map decreases with higher lens redshift, which makes a SN Ia
appear larger in the microlensing map and therefore events where micro
caustics are crossed are more likely. For more details see
\cite{Huber:2020dxc}.
The impact of the source redshift on the best filters to target is
discussed previously in Section \ref{sec: filters used for training}.
\begin{table}
\begin{tabular}{ccc}
$z_{\rm s}, z_{\rm d}$ & corresponding test set & \texttt{SNEMO15} data set \\
\midrule
0.76, 0.252 (Fig. \ref{fig: example light curve 187 system}) & $0.04^{+0.83}_{-0.87} \, \mathrm{d}$ & $0.02^{+1.38}_{-1.42} \, \mathrm{d}$ \\
\midrule
0.55, 0.252 (Fig. \ref{fig: example light curve different source and lens redshifts})& $0.04^{+0.59}_{-0.67} \, \mathrm{d}$ & $0.01^{+1.29}_{-1.25} \, \mathrm{d}$ \\[0.07cm]
0.99, 0.252 (Fig. \ref{fig: example light curve different source and lens redshifts})& $0.01^{+1.64}_{-1.66} \, \mathrm{d}$ & $0.02^{+2.10}_{-2.16} \, \mathrm{d}$ \\
\midrule
0.76, 0.16 & $0.04^{+0.83}_{-0.89} \, \mathrm{d}$ & $-0.09^{+1.26}_{-1.30} \, \mathrm{d}$ \\[0.07cm]
0.76, 0.48 & $0.06^{+0.97}_{-1.06} \, \mathrm{d}$ & $-0.09^{+1.45}_{-1.51} \, \mathrm{d}$ \\
\end{tabular}
\caption{Time-delay measurement of different LSNe Ia with varying source and lens redshifts.}
\label{tab: source and lens redshift investigation}
\end{table}
\begin{figure}
\includegraphics[width=0.48\textwidth]{figures/observed_light_curve_lnc_18701_cad_2_magCut_2.0_uncLSSTplus_1.0_startday_0_moonPhase_new_filter_i.png}
\includegraphics[width=0.48\textwidth]{figures/observed_light_curve_lnc_18702_cad_2_magCut_2.0_uncLSSTplus_1.0_startday_0_moonPhase_new_filter_i.png}
\caption{Two LSNe Ia similar to Figure \ref{fig: example light curve
187 system} but with different source redshifts. The LSN Ia in
the upper panel has $z_{\rm s} =0.55$ and the one in the lower panel
has $z_{\rm s} =0.99$.}
\label{fig: example light curve different source and lens redshifts}
\end{figure}
\subsection{Data points before peak}
\label{sec: data points before peak}
In this section, we discuss the number of data points required before
peak to achieve a good time-delay measurement. The case presented in
Section \ref{sec: Example data used for machine learning} has a large
number of data points before peak, which is not always achievable in
practice, especially since vetting of transient candidates and triggering of light-curve observations often require additional time. Therefore, we investigate a similar mock system as in Figure
\ref{fig: example light curve 187 system}, but with a later
detection in the first-appearing SN image. In Figure \ref{fig: data points before peak}, we show a case
where we have the first data point at the peak in the $i$ band in
comparison to three other cases where we have four, three, or two data
points before the peak. The at-peak detection provides, as
expected, the worst precision; more worrying is the large bias of
0.83 days. Already two data points before peak improve the results
significantly and allow precision cosmology for LSNe Ia with a
time delay greater than 22 days. Nevertheless, we aim for four data
points before peak as we could achieve a bias below 1 percent already
for a delay greater than 10 days; furthermore, the precision is also
improved substantially, reaching almost the level of the observation in
Figure \ref{fig: example light curve 187 system} and corresponding results in
Figure \ref{fig: SENOM15 test for DL and RF using
filters iz}. This would correspond in the observer frame to a
detection about eight to ten days before the peak in the $i$ band.
Given that a SN Ia typically peaks $\sim$18 rest-frame days after explosion and the typical lensed SN redshift is $\sim$0.7, we would need to detect and start follow-up observations of the first-appearing SN image within $\sim$%
$15$ days (observer frame) in order to measure accurate time delays.
The results presented here are in good agreement with the feature importance investigations shown in Figure \ref{fig: appendix feature importance}, where we find that especially the rise slightly before the peak is very important for the RF.
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/RF_uncLSSTplus_1.0_justMacroTrain_False_4_1_sqrt_800_0.5_new_startday_187_RF_startday_final_test_set_i0.png}
\caption{Mock observation similar as in Figure \ref{fig: example light
curve 187 system}, but with a later detection, meaning fewer data
points before the peak in the $i$ band of the first-appearing SN image. We compare the cases where
we have four, three, or two data points before the peak in
comparison to an at-peak detection.}
\label{fig: data points before peak}
\end{figure}
\subsection{Quad LSNe Ia and higher microlensing uncertainties}
\label{sec: quad lsne ia and higher microlensing uncertainties}
So far we have only discussed double LSNe Ia, but in this section, we
present a LSN Ia with four images. Our mock quad LSN Ia is
similar to the one presented in Section \ref{sec: Example data used
for machine learning}, but we varied the source position for the
double system in the same lensing environment using the GLEE software
\citep{Suyu:2010,Suyu:2012ApJ} such that we get a quad system, where the
parameters are listed in Table \ref{tab: quad LSN Ia mock} and the light curves from the
system are shown in Figure \ref{fig: quad LSN Ia mock}. For images one
to three, the $\kappa$ and $\gamma$ values are closer to 0.5 in
comparison to the double system from Table \ref{tab: Example double
LSNe Ia}, which means that the macro magnification is higher but
microlensing uncertainties are increased as shown in
\cite{Huber:2020dxc}. For image four, we have $\kappa$ and $\gamma$
values far away from 0.5, which leads to lower microlensing
uncertainties but therefore also to a much fainter image which can be
seen in Figure \ref{fig: quad LSN Ia mock}.
\begin{table}
\begin{tabular}{ccccc}
 & $z_{\rm s}$ & $z_{\rm d}$ & $(\kappa,\gamma)$ & $t$ [d] \\
\midrule
image 1 & 0.76 & 0.252 & (0.435, 0.415) & $\equiv0.00$\\[0.07cm]
image 2 & 0.76 & 0.252 & (0.431, 0.424) & 0.01\\[0.07cm]
image 3 & 0.76 & 0.252 & (0.567, 0.537) & 0.34\\[0.07cm]
image 4 & 0.76 & 0.252 & (1.28, 1.253) & 20.76
\end{tabular}
\centering
\caption{Source redshift $z_{\rm s}$, lens redshift $z_{\rm d}$, convergence $\kappa$, shear $\gamma$, and time values $t$ for the four images of a mock quad LSN Ia. The image separation varies between 0.6 and 1.6 arcsec, and therefore it might be challenging to resolve all images with ground-based telescopes, given limits due to seeing.}
\label{tab: quad LSN Ia mock}
\end{table}
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/observed_light_curve_lnc_18707_cad_2_magCut_2.0_uncLSSTplus_1.0_startday_0_moonPhase_new_filter_i.png}
\caption{Light curves of the mock quad LSN Ia from Table \ref{tab: quad LSN Ia mock} for the $i$ band.}
\label{fig: quad LSN Ia mock}
\end{figure}
In principle such a quad system can be investigated in two ways. The
first approach is to train a separate RF per
pair of images, leading to six RF models in total. The other way is to
train a single RF for the whole quad system which takes as input
magnitude values of four images instead of two images, similar to
Equation (\ref{eq: data structure}). The outputs as shown in Figures
\ref{fig: fully connected neural network} and \ref{fig: regression
tree} are then four instead of two time values.
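The two strategies can be sketched with scikit-learn's multi-output \texttt{RandomForestRegressor} on toy data (random stand-ins; not our actual light curves):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy quad system: four images, each with a short magnitude vector,
# and four time values as targets.
rng = np.random.default_rng(3)
n_sys, n_pts = 400, 15
images = [rng.normal(size=(n_sys, n_pts)) for _ in range(4)]   # 4 images
t = np.stack([im[:, 0] for im in images], axis=1)              # toy times

# (a) Single RF for the whole quad: concatenate all four images,
#     predict four time values at once.
rf_quad = RandomForestRegressor(n_estimators=50, random_state=3)
rf_quad.fit(np.hstack(images), t)

# (b) Separate RF per pair of images: six RFs, each predicting
#     the two time values of its pair.
pairs = [(a, b) for a in range(4) for b in range(a + 1, 4)]
rf_pairs = {}
for a, b in pairs:
    rf = RandomForestRegressor(n_estimators=50, random_state=3)
    rf_pairs[(a, b)] = rf.fit(np.hstack([images[a], images[b]]),
                              t[:, [a, b]])
```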
The results for both approaches are summarized in Table \ref{tab: quad
LSN Ia mock uncertainties trained as double vs. quad} and the
correlation plots are shown in Appendix \ref{sec:Appendix correlation
plots}. We find weaker correlations for the approach ``separate RF per pair of images"
than for the approach ``single RF for all images", especially for the cases where the noisy fourth
image is included in the time-delay measurement. This is because in
the first case, six RF models are trained independently from each other,
whereas the second case only uses a single RF that predicts four
time values for the four images. Still the case ``separate RF per pair
of images" is preferred because it provides lower biases and tighter
constraints. This is not surprising, as providing all the data from the
four images at once is a much more complex problem to handle in
comparison to training a RF for just two images. While the time-delay
deviations between both approaches are almost comparable for pairs of images among the first, second and third images, for
the cases where the fourth image is included, the single RF for
the whole quad system performs much worse. This suggests that
noisy data in particular are handled better by a separate RF for each
pair of images, and therefore training a separate RF per pair of images
is always preferred.
In the following we analyze the different uncertainties of
the time-delay measurements from different pairs of images as shown in Table
\ref{tab: quad LSN Ia mock uncertainties trained as double vs. quad}. The most
precise time delay is the one between the first and second image, but
if we compare this uncertainty to the uncertainty of the lower panel
of Figure \ref{fig: SENOM15 test for DL and RF using filters iz} for
the double LSNe Ia from Figure \ref{fig: example light curve 187
system}, we see that the precision is 0.2 days worse. This can be
easily explained by the higher microlensing uncertainties coming from
the $\kappa$ and $\gamma$ values being much closer to $0.5$ as shown in
Table \ref{tab: quad LSN Ia mock} in comparison to Table \ref{tab: Example double LSNe
Ia}. Higher microlensing uncertainties are also the reason why
uncertainties of $\Delta t_{31}$ and $\Delta t_{32}$ are larger than
that of $\Delta t_{21}$, even though the third image is the brightest one and
therefore has the lowest amount of observational noise. The precision
and also accuracy of the time-delay measurements in which image four is
involved are the worst in Table \ref{tab: quad LSN Ia mock
uncertainties trained as double vs. quad}, which is explained by the
very poor quality of the light curve from the fourth image. We further
see that $\Delta t_{31} $ and $\Delta t_{32}$ as well as $\Delta
t_{41}$ and $\Delta t_{42}$ have very similar uncertainties, which is
expected since the light curves of images one and two are almost
identical; this is therefore a good consistency check.
Even though the time-delay measurements between the first
three images have the lowest time-delay deviation in days, the absolute time
delay is very short, which leads to a very high relative deviation. For
this specific mock quad LSN Ia, it would only make sense to measure
time delays with respect to the fourth image, where we would achieve a
precision around 10 percent and an accuracy of 0.7 percent.
\begin{table}
\centering
\begin{tabular}{lcc}
& separate RF & single RF \\
& per pair of images & for all images \\
\midrule
Time-delay dev. of $\Delta t_{21}$ & $0.01^{+1.63}_{-1.63} \, \mathrm{d} $ & $-0.01^{+1.65}_{-1.64} \, \mathrm{d} $\\[0.07cm]
Time-delay dev. of $\Delta t_{31}$ & $-0.05^{+1.85}_{-1.85} \, \mathrm{d} $ & $0.01^{+1.89}_{-1.87} \, \mathrm{d} $\\[0.07cm]
Time-delay dev. of $\Delta t_{41}$ & $0.15^{+1.84}_{-1.96} \, \mathrm{d} $ & $0.26^{+2.24}_{-2.36} \, \mathrm{d} $\\[0.07cm]
Time-delay dev. of $\Delta t_{32}$ & $-0.03^{+1.86}_{-1.89} \, \mathrm{d} $ & $0.01^{+1.89}_{-1.88} \, \mathrm{d} $\\[0.07cm]
Time-delay dev. of $\Delta t_{42}$ & $0.14^{+1.81}_{-1.93} \, \mathrm{d} $ & $0.26^{+2.25}_{-2.33} \, \mathrm{d} $\\[0.07cm]
Time-delay dev. of $\Delta t_{43}$ & $0.15^{+2.07}_{-2.19} \, \mathrm{d} $ & $0.25^{+2.40}_{-2.55} \, \mathrm{d} $\\
\end{tabular}
\caption{Deviations of the time-delay measurements
($\tau_{ij} = \Delta t_{ij} - \Delta t_{\mathrm{true},ij}$)
for the LSNe Ia
quad system shown in Figure \ref{fig: quad LSN Ia mock}. The second
column shows the case where a separate RF is trained per
pair of images, leading to six RF models in total, in comparison
to a single RF (third column) for the whole quad system.}
\label{tab: quad LSN Ia mock uncertainties trained as double vs. quad}
\end{table}
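The deviation statistics in the table can be summarized as sketched below. This assumes (a common convention, but an assumption here) that the quoted value is the median deviation $\tau_{ij}$ with the 16th and 84th percentiles as lower and upper bounds.

```python
import numpy as np

def summarize_deviation(t_pred, t_true):
    """Summarize time-delay deviations tau = Delta t_pred - Delta t_true.

    Assumes the quoted value is the median deviation with the 16th/84th
    percentiles as lower/upper bounds (an assumption of this sketch).
    Returns (median, minus, plus) in the same units as the inputs (days).
    """
    tau = np.asarray(t_pred) - np.asarray(t_true)
    lo, med, hi = np.percentile(tau, [16, 50, 84])
    return med, med - lo, hi - med
```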
\section{Discussion}
\label{sec: Discussion}
We train an FCNN with two hidden layers
and a RF using four theoretical SN Ia models, to measure time delays in LSNe
Ia. We find that both ML models work very well on a test set based on the same four
theoretical models used in the training process, providing uncertainties around 0.7 to 0.9 days for
the $i$ band almost without any bias. Applying the trained ML models to
the \texttt{SNEMO15} data set, which is composed of empirical SN Ia light
curves not used in the training process, we find that the uncertainties increase
by about 0.5 days, but this is not surprising since such light curves were
never used in the training process, and a 1.5 day uncertainty from a
single band is still a very good measurement.
However, when applied to the \texttt{SNEMO15} data set, the FCNN yields
biased results. The biases are mostly within 0.4 days, but larger
ones are also possible, making our FCNN approach not suitable for precision cosmology.
Furthermore, this shows that transfer learning for our FCNN approach is not working,
since biases on the corresponding test set composed of four theoretical models as used in
the training process are negligible.
This was already suggested by results presented in
Figure \ref{fig: three models for training evaluating on single model test sets},
where the training on three theoretical models was not general enough to
perform well on the fourth model not used in the training process.
However, we introduced random shifts in time of the light curves,
which reduced the bias significantly and motivated us to apply our FCNN, trained on four theoretical models with such random time shifts, to the \texttt{SNEMO15} data set as a final test, where we unfortunately find significant biases.
Deeper and larger fully connected networks will not solve this problem, as they will just fit
the training data better without guaranteeing better transfer learning.
To overcome this, regularization and dropout might help, but this would amount to fine-tuning to our
\texttt{SNEMO15} data set, because our investigations up to that
stage (transfer learning working when training on three models and testing on the fourth,
and a very low bias on a test set composed of the four theoretical models)
were very encouraging before the FCNN failed the final test on the
\texttt{SNEMO15} data set. However, we defer further
investigations of FCNNs to future work,
especially since more complex ML approaches like recurrent neural networks or
long short-term memory networks \citep{Sherstinsky_2020} might fit the problem even better.
The RF provides significantly lower biases on the \texttt{SNEMO15} data set
-- with 4 or more data points before peak, which means a detection of the first LSN Ia image
about eight to ten days before peak, the bias can be kept within 0.10
days. If one of the images is very faint as shown in Figure \ref{fig:
quad LSN Ia mock}, we can still reach an accuracy of 0.15 days, and
therefore a delay longer than 15 days already provides a time-delay
measurement better than 1 percent. Given the low bias of the RF,
especially in comparison to the FCNN, the RF is the one to
use for a real application.
\cite{Huber:2019ljb} used the free-knot spline estimator from
\texttt{PyCS} \citep{2013:Tewesb,Bonvin:2015jia} to measure time
delays for LSNe Ia. To compare this approach to our results, we apply
\texttt{PyCS} as used in \cite{Huber:2019ljb} to the \texttt{SNEMO15}
data set. For the system shown in Figure \ref{fig: example light curve
187 system} with a very well sampled light curve, we achieve similar
uncertainties as the RF shown in Figure \ref{fig: SENOM15 test for DL
and RF using filters iz}. However, as soon as we look at cases
where we have a reduced number of data points before peak as shown in
Figure \ref{fig: data points before peak using PyCS} (in comparison to
the RF results in Figure \ref{fig: data points before peak}), we see
that the RF approach achieves a much higher precision. In terms of the
bias, as long as we provide 2 or more data points before peak, the RF
and \texttt{PyCS} provide sufficiently good results. For the case where the
first data point is at the peak of the $i$ band, even though
\texttt{PyCS} provides a much better bias than the RF, the
measurement has substantially poorer
precision. Overall, the RF works better than \texttt{PyCS} for measuring
time delays in LSNe Ia in most cases. However, in a
real application, both approaches could be used to cross-check the time-delay measurements.
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/PyCS_filter_i_0.png}
\caption{Same as Figure \ref{fig: data points before peak} but this time using \texttt{PyCS} on the \texttt{SNEMO15} data.}
\label{fig: data points before peak using PyCS}
\end{figure}
\section{Summary}
\label{sec: Summary}
In this work, we introduced two ML techniques, namely
a Deep Learning network using a fully connected neural network with two hidden layers
and a Random Forest to measure time delays of LSNe Ia. We
simulate LSN Ia light curves for the training process including
observational noise and microlensing uncertainties using four different
theoretical models. Our training set is composed of 400000 LSNe Ia
coming from 4 theoretical models, 10000 microlensing map positions and
10 noise realizations. Our test set has a size of 40000 LSNe Ia where
we draw 1000 microlensing map positions instead of 10000 as for the
training set. We construct a further data set based on the empirical
\texttt{SNEMO15} model to create realistic LSN Ia light curves not
used in the training process to check if our approach is general
enough to handle real observations of LSNe Ia. To add microlensing to
the \texttt{SNEMO15} model, we use the microlensed light curves from
the theoretical models where we subtract the macrolensed light curve
to get the microlensing contribution.
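The microlensing transfer described above can be sketched as follows. All function and variable names are hypothetical, and the light curves are assumed to be magnitudes sampled on a common set of epochs.

```python
import numpy as np

def add_microlensing(snemo_macro_mag, theory_micro_mag, theory_macro_mag):
    """Add a microlensing contribution to a SNEMO15 macrolensed light curve.

    As described in the text: the microlensing contribution is obtained
    by subtracting the macrolensed light curve of a theoretical model
    from its microlensed light curve, and this contribution is then
    added to the SNEMO15 curve. All inputs are magnitudes sampled on
    the same epochs (an assumption of this sketch).
    """
    micro_contribution = np.asarray(theory_micro_mag) - np.asarray(theory_macro_mag)
    return np.asarray(snemo_macro_mag) + micro_contribution
```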
To summarize our results, we look at the more realistic results from
the empirical \texttt{SNEMO15} data set. From the
investigation of the RF and the FCNN, we find that only the RF provides
sufficiently low bias and is therefore the approach to use in a real
application. From all investigated systems where we assumed a two-day cadence with
a few random gaps, we found that we can
achieve an accuracy better than 1\% for the RF if we restrict ourselves
to LSN Ia systems with a delay longer than 15 days, where we obtain
the first data point around eight to ten days before peak in the light curve of the first-appearing
SN image. In terms of precision, we can achieve an uncertainty of 1.5 days from the
$i$ band alone, for the median source redshift $\sim$%
$0.76$ of LSNe Ia in
OM10. Using three bands where the time delay is measured separately
for each RF and combined afterwards, we can reach a $\sim$%
$1.0$ day uncertainty. The three most promising filters to target are $g,
r,$ and $i$ for $z_{\rm s} \lesssim 0.6$ and $r, i,$ and $z$ for higher
source redshifts. As a fourth and fifth band $z$ and $y$ for $z_{\rm s} \lesssim 0.6$
and $g$ and $y$ band for $z_{\rm s} \gtrsim 0.6$ might be considered.
We find that the gain from multiple filters
is the best if a ML model is trained individually per band. The other
bands investigated in this work ($u, J,$ and $H$) provide very poor-quality
light curves and are therefore not useful.
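The per-band time delays mentioned above are measured separately per RF and combined afterwards. The exact combination rule is not specified here, so the sketch below assumes a standard inverse-variance weighting of the per-band estimates purely as an illustration.

```python
import numpy as np

def combine_band_delays(delays, sigmas):
    """Combine per-band time-delay estimates into a single value.

    Assumes (as an illustration; the actual combination may differ)
    inverse-variance weighting of the per-band estimates.
    delays, sigmas: 1D arrays of per-band delay estimates and their
    1-sigma uncertainties (days).
    Returns (combined_delay, combined_sigma).
    """
    w = 1.0 / np.square(sigmas)
    combined = np.sum(w * delays) / np.sum(w)
    sigma = 1.0 / np.sqrt(np.sum(w))
    return combined, sigma
```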
From our investigations, we mainly find that observational noise is
the dominant source of uncertainty in measuring time delays, and to improve on the results
presented here, a greater depth would be required. The depth we assume
for follow-up observations is one magnitude deeper than the
single-epoch LSST-like 5$\sigma$ depth, meaning 25.7, 25.3, 24.7, 23.8
and 23.0 for $g, r, i, z,$ and $y$, respectively. From the investigation of the
source redshifts, we find that in comparison to the median source redshift $\sim$%
$0.76$ of LSNe Ia in OM10, $z_{\rm s} = 0.55$ can
improve the precision in the $i$ band by 0.2 days, whereas $z_{\rm s} =
0.99$ might increase the uncertainty by 0.7 days, which suggests that
especially for higher source redshifts, a greater depth might be
required. Although a greater depth could also compensate for the moon
phase, its impact on the uncertainty is weaker (at most 0.4 days worse
uncertainty in our investigation) and becomes even less relevant the
redder the bands are. We further find that typical uncertainties in
the microlensing parameters ($\kappa, \gamma,$ and $s$) are not
relevant for our training process. Only a significantly overestimated $s$ value
could lead to an underestimation of the uncertainties. Furthermore, we
find that our approach works best if an individual RF is trained per
pair of images.
In comparison to the free-knot spline estimator from \texttt{PyCS}
\citep{2013:Tewesb,Bonvin:2015jia} as used in \cite{Huber:2019ljb}, our
approach works overall better, providing a precision improved by up to
$\sim$0.8 days. We can therefore expect slightly more LSNe Ia with
well measured time delays than the number predicted by \cite{Huber:2019ljb}.
In this work, we have developed a new method to measure time delays of
LSNe Ia. The RF provides accurate and precise time-delay measurements
comparable to or better than current methods and is therefore an
important tool to pave the way for LSNe Ia as a cosmological probe.
The downsides of our approach are that an RF needs to be trained separately for
each individual system's observing pattern, that it depends on the SN Ia models used
in the training process, and that it cannot yet be applied to
other types of LSNe. To overcome this and build an ML network that
is more general, recurrent neural networks or
long short-term memory networks \citep{Sherstinsky_2020} are very
promising and will be investigated in a future study.
\FloatBarrier
\begin{acknowledgements}
We thank F.~Courbin, S.~Schuldt and R. Cañameras for useful discussions.
SH and SHS thank the Max Planck Society for support through the Max
Planck Research Group for SHS. This project has received funding from
the European Research Council (ERC) under the European Union’s Horizon
2020 research and innovation programme (grant agreement No
771776).
This research is supported in part by the Excellence Cluster ORIGINS which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2094 -- 390783311.
DG acknowledges support from the Baden-Württemberg Foundation through the Baden-Württemberg Eliteprogramm for Postdocs.
UMN has been supported by the Transregional Collaborative Research
Center TRR33 ‘The Dark Universe’ of the Deutsche
Forschungsgemeinschaft.
JHHC acknowledges support from the Swiss National Science
Foundation and through European Research Council (ERC) under the European
Union's Horizon 2020 research and innovation programme (COSMICLENS:
grant agreement No 787866).
MK acknowledges support from
the Klaus Tschira Foundation.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\vspace{-1mm}
The kernel size of a convolutional layer defines the region from which features are computed, and is a crucial choice in their design.
Commonly, small kernels are used almost exclusively and are combined with pooling to model long term dependencies \citep{simonyan2014very, szegedy2015going, he2016deep, tan2019efficientnet}. Recent works indicate, however, that CNNs benefit from using convolutional kernels (\emph{i}) of varying size at different layers \citep{pintea2021resolution, tomen2021deep}, and (\emph{ii}) at the same resolution of the data \citep{peng2017large,cordonnier2019relationship, romero2021ckconv}. Unfortunately, most CNNs represent convolutional kernels as tensors of discrete weights and their size must be fixed prior to training. This makes exploring different kernel sizes at different layers difficult and time-consuming due to (\textit{i}) the large search space, and (\textit{ii}) the large~number~of~weights~required~to~construct~large~kernels.
A more efficient way to tune different kernel sizes at different layers is to \textit{learn} them during training.\break
Existing methods define a \textit{discrete} weighted set of basis functions, e.g., shifted Delta-Diracs (Fig.~\ref{fig:dilated_kernel}, \citet{dai2017deformable}) or Gaussian functions (Fig.~\ref{fig:parametric_dilation}, \citet{jacobsen2016structured, Shelhamer2019BlurringTL, pintea2021resolution}). During training they learn dilation factors over the basis functions to increase the kernel size, which crucially limits the bandwidth of the resulting kernels.
In this work, we present the \textit{Flexible Size Continuous Kernel Convolution} (FlexConv), a convolutional layer able to learn \textit{high bandwidth} convolutional kernels of varying size during training (Fig.~\ref{fig:flexconv}). Instead of using discrete weights, we provide a \textit{continuous parameterization} of convolutional kernels via a small neural network \citep{romero2021ckconv}. This parameterization allows us to model continuous functions of arbitrary size with a fixed number of parameters. By multiplying the response of the neural network with a Gaussian mask, the size of the kernel can be learned during training (Fig.~\ref{fig:flexconv_kernel}). This~allows~us~to~produce~detailed~kernels~of~small~sizes~(Fig.~\ref{fig:capacity_tradeoff}),~and~tune~kernel~sizes~efficiently.
FlexConvs can be deployed at higher resolutions than those observed during training, simply by using a more densely sampled grid of kernel indices. However, the high bandwidth of the kernel can lead FlexConv to learn kernels that show aliasing at higher resolutions, if the kernel bandwidth exceeds the Nyquist frequency.
To solve this problem, we propose to parameterize convolutional kernels as \textit{Multiplicative Anisotropic Gabor Networks} (MAGNets). MAGNets are a new class of Multiplicative Filter Networks \citep{fathony2021multiplicative} that allows us to analyze and control the frequency spectrum of the generated kernels. We use this analysis to regularize FlexConv against aliasing. With this regularization, FlexConvs can be directly deployed at higher resolutions with minimal accuracy loss. Furthermore, MAGNets provide higher descriptive power and faster convergence speed than existing continuous kernel parameterizations \citep{schutt2017schnet, finzi2020generalizing, romero2021ckconv}. This leads to important improvements in classification accuracy (Sec.~\ref{sec:experiments}).
\textit{Flexible Size Continuous Kernel CNNs} (FlexNets) learn the size of their convolutional kernels at every layer, and easily model long-term dependencies without the need of pooling.
FlexNets achieve state-of-the-art across several sequential datasets, match the performance of recent works with learnable kernel sizes using less compute, and are competitive with much deeper ResNets \citep{he2016deep} when applied to image benchmark datasets. Thanks to the ability of FlexConvs to generalize across resolutions, FlexNets can be efficiently trained at low resolution, e.g., $16\times16$ CIFAR images, and deployed directly on the original data resolution, e.g., $32\times32$ CIFAR images.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{images/flexconv-with-indices.png}
\vspace{-3mm}
\caption{The Flexible Size Continuous Kernel Convolution (FlexConv). FlexConv defines convolutional kernels as the multiplication of a continuous convolutional kernel \mlp$^{{\boldsymbol{\psi}}}$, with a Gaussian mask of local support $w_{\textrm{gauss}}$: $\boldsymbol{\psi}(x, y) = w_{\textrm{gauss}}( x, y ; \boldsymbol{\theta}_{\mathrm{mask}}) \cdot \boldsymbol{\text{\btt MLP}^{\psi}}(x, y)$. By learning the parameters of the mask, the size of the convolutional kernel can be optimized during training. See also Fig.~\ref{fig:app-flexconvexample}.
\vspace{-2mm}}
\label{fig:flexconv}
\end{figure}
In summary, our \textbf{contributions} are:
\begin{itemize}[topsep=0pt, leftmargin=*]
\item We introduce the \textit{Flexible Size Continuous Kernel Convolution} (FlexConv), a convolution operation able to learn \emph{high bandwidth} convolutional kernels of varying size end-to-end.
\item Our proposed \textit{Multiplicative Anisotropic Gabor Networks} (MAGNets) allow for analytic control of the properties of the generated kernels. This property allows us to construct analytic alias-free convolutional kernels that generalize to higher resolutions. In addition, MAGNets show higher descriptive power and faster convergence speed than existing kernel parameterizations.
\item \textit{Flexible Size Continuous Kernel CNNs} (FlexNets) obtain state-of-the-art across several sequential datasets, and match recent works with learnable kernel size on CIFAR-10 with less compute.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[c]{0.28\textwidth}
\centering
\vspace{-4mm}
\includegraphics[width=\textwidth]{images/flexconv_kernelincrease.png}
\caption{FlexConv kernels (ours)}
\label{fig:flexconv_kernel}
\end{subfigure}
\hfill
\begin{subfigure}[c]{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{images/dilated_kernelincrease.png}
\caption{Dilation / deformation\newline \citep{dai2017deformable}}
\label{fig:dilated_kernel}
\end{subfigure}
\hfill
\begin{subfigure}[c]{0.36\textwidth}
\centering
\includegraphics[width=\textwidth]{images/steerable_kernelincrease.png}
\caption{(Learnable) parametric dilation\newline \citep{pintea2021resolution}}
\label{fig:parametric_dilation}
\end{subfigure}
\vspace{-2.5mm}
\caption{Existing approaches increase the size of convolutional kernels via (learnable) parametric dilations, e.g., by deformation (b) or by Gaussian blur (c).
However, dilation limits the bandwidth of the dilated kernel and with it, the amount of detail it can describe.
Contrarily, FlexNets extend their kernels by passing a larger vector of positions to the neural network parameterizing them. As a result, FlexConvs are able to learn \textit{high bandwidth} convolutional kernels of varying size end-to-end (a).
\vspace{-2mm}}
\label{fig:dilations}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[c]{0.267\textwidth}
\centering
\includegraphics[width=\textwidth]{images/gt_approx.png}
\caption{Ground Truth}
\label{fig:gt_approx}
\end{subfigure}
\hspace{3.5mm}
\begin{subfigure}[c]{0.677\textwidth}
\centering
\includegraphics[width=\textwidth]{images/approximations.png}
\caption{Reconstructions at varying degrees of localization}
\label{fig:ckernel_approx}
\end{subfigure}
\vspace{-2mm}
\caption{\label{fig:capacity_tradeoff} The importance of dynamic kernel sizes in continuous kernel convolutions. Consider a neural network \mlp$^{{\boldsymbol{\psi}}}$ predicting pixel values at each position, like in a CKConv, aiming to reproduce the flower marked in (a). If the entire image is considered, the network must learn to predict zeros outside of the flower region. However, the network must use part of its capacity to zero out these large regions. This in turn degrades the quality of the approximation in the region of interest (b). The better the localization of the flower, the higher the fidelity with which the flower can be approximated. FlexNets learn the size of their convolutional kernels at each layer during training. Consequently, FlexNets \emph{(i)} use the capacity of the network parameterizing the kernel efficiently, \emph{(ii)} converge faster to good approximations, and \emph{(iii)} are faster in execution --via dynamic cropping--.
\vspace{-2mm}}
\end{figure}
\vspace{-1mm}
\section{Related Work}
\vspace{-1mm}
\textbf{Adaptive kernel sizes.}
Adaptive kernel sizes have been proposed via learnable pixel-wise offsets \citep{dai2017deformable}, learnable padding operations \citep{Han_2018_CVPR}, learnable dilated Gaussian functions \citep{Shelhamer2019BlurringTL, Xiong_2020_CVPR, tabernik2020spatially} and scalable Gaussian derivative filters \citep{pintea2021resolution, tomen2021deep, lindeberg2021scale}.
These approaches either dilate conventional discrete kernels (Fig.~\ref{fig:dilated_kernel}), or use discrete weights on dilated basis functions (Fig.~\ref{fig:parametric_dilation}). The ability of these methods to learn the kernel size depends on dilation, which crucially limits the bandwidth of the resulting kernels. In contrast, FlexConvs are able to construct high bandwidth convolutional kernels of varying size with a fixed parameter count. Larger kernels are obtained simply by passing a larger vector of positions to the neural network parameterizing the kernel (Fig.~\ref{fig:flexconv}).
\textbf{Continuous kernel convolutions.} Discrete convolutional kernel parameterizations assign an independent weight to each specific position in the kernel. Continuous convolutional kernels, on the other hand, view convolutional kernels as continuous functions parameterized via a small neural network \mlp$^{{\boldsymbol{\psi}}}$$: {\mathbb{R}}^{\mathrm{D}} \rightarrow {\mathbb{R}}^{\mathrm{N}_{\mathrm{out}} \times \mathrm{N}_{\mathrm{in}}}$, with $\mathrm{D}$ the data dimensionality. This defines a convolutional kernel for which arbitrary input positions can be queried. Continuous kernels have primarily been used to handle irregularly-sampled data \textit{locally}, e.g., molecular data \citep{simonovsky2017dynamic, schutt2017schnet} and point-clouds \citep{thomas2018tensor, wang2018deep, shi2019points}.
Recently, \citet{romero2021ckconv} introduced continuous convolutional kernels as a tool to model long-\break term dependencies. Their Continuous Kernel Convolution (CKConv) uses a continuous kernel parameterization to construct very large convolutional kernels with a constant parameter cost. This leads to important savings in terms of required parameters with respect to equivalent discrete kernels.
CKConvs always use global kernel sizes, i.e., kernels as big as the input signal. In contrast, FlexConvs jointly learn the convolutional kernel and its size. We show this leads to important advantages in terms of~expressivity~(Fig.~\ref{fig:capacity_tradeoff}),~convergence~speed~and~compute~costs~of~the~operation.
\textbf{Implicit neural representations.} Parameterizing a convolutional kernel via a neural network can be seen as learning an implicit neural representation of the underlying convolutional kernel \citep{romero2021ckconv}. Implicit neural representations construct continuous data representations by encoding data in the weights of a neural network \citep{park2019deepsdf, sitzmann2020implicit, fathony2021multiplicative}.
We replace the kernel parameterization via SIRENs \citep{sitzmann2020implicit} used in \cite{romero2021ckconv} with \textit{Multiplicative Anisotropic Gabor Networks} (MAGNets): a new class of Multiplicative Filter Networks (MFNs) \citep{fathony2021multiplicative}. MAGNets allow us to analytically control the properties of their implicit neural representation, which lets us construct analytic alias-free convolutional kernels. Moreover, MAGNets show higher descriptive power and faster convergence than SIRENs and MFNs, and lead to important improvements in classification accuracy.
\vspace{-1mm}
\section{Method}
\vspace{-1mm}
In this section, we introduce our approach. First, we introduce FlexConv and the Gaussian mask. Next, we introduce our Multiplicative Anisotropic Gabor Networks (MAGNets) and provide a description of our regularization technique used to control the spectral components of the generated kernel.
\vspace{-1mm}
\subsection{Flexible Size Continuous Kernel Convolution (FlexConv)}
\label{sec:flexconv}
\vspace{-1mm}
To learn the kernel size during training, FlexConvs define their convolutional kernels $\boldsymbol{\psi}$ as the product of the output of a neural network \mlp$^{{\boldsymbol{\psi}}}$\ with a Gaussian mask of local support. The neural network \mlp$^{{\boldsymbol{\psi}}}$\ parameterizes the kernel, and the Gaussian mask parameterizes its size (Fig.~\ref{fig:flexconv}).
\textbf{Anisotropic Gaussian mask.} Let $G(x ; \mu_{\mathrm{X}}, \sigma^2_{\mathrm{X}}) {\coloneq} \exp\big\{\hspace{-0.5mm}-\frac{1}{2}\sigma_\mathrm{X}^{-2}(x - \mu_{\mathrm{X}})^{2}\big\}$
be a Gaussian function parameterized by a mean-variance tuple $(\mu_{\mathrm{X}}, \sigma^2_{\mathrm{X}})$. The anisotropic Gaussian mask is defined as:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
w_{\mathrm{gauss}}(x, y; \{\mu_{\mathrm{X}}, \sigma^2_{\mathrm{X}}, \mu_{\mathrm{Y}}, \sigma^2_{\mathrm{Y}}\}) = G(x ; \mu_{\mathrm{X}}, \sigma_{\mathrm{X}}^2) G(y ; \mu_{\mathrm{Y}}, \sigma_{\mathrm{Y}}^2). \label{eq:gaussianmask}
\end{equation}
By learning $(\mu_{\mathrm{X}}, \sigma^2_{\mathrm{X}})$ and $(\mu_{\mathrm{Y}}, \sigma^2_{\mathrm{Y}})$ independently, anisotropic non-centered windows can be learned.
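Evaluated on a grid of kernel coordinates, the mask of Eq.~\ref{eq:gaussianmask} can be sketched as follows (function and parameter names are hypothetical):

```python
import numpy as np

def gaussian_mask(x, y, mu_x, sigma_x, mu_y, sigma_y):
    """Anisotropic Gaussian mask of Eq. (gaussianmask) on coordinates (x, y).

    Learning (mu, sigma) independently per axis lets the effective kernel
    size and center vary independently in x and y.
    """
    gx = np.exp(-0.5 * ((x - mu_x) / sigma_x) ** 2)
    gy = np.exp(-0.5 * ((y - mu_y) / sigma_y) ** 2)
    return gx * gy

# A FlexConv kernel is then the pointwise product of this mask with the
# continuous kernel predicted by the MLP on the same coordinate grid.
```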
\vspace{-1mm}
\subsection{Multiplicative Anisotropic Gabor Networks (MAGNets)}
\label{sec:magnets}
\vspace{-1mm}
In this section, we formalize our proposed parameterization for the kernel \mlp$^{{\boldsymbol{\psi}}}$. We start by introducing Multiplicative Filter Networks \citep{fathony2021multiplicative}, and present our MAGNets next.
\textbf{Multiplicative Filter Networks (MFNs).} Recently, \citet{fathony2021multiplicative} proposed to construct implicit neural representations as the linear combination of exponentially many basis functions $\boldsymbol{\mathrm{g}}$:
\vspace{-0.5mm}
\begin{align}
\label{eq:mfn}
&\boldsymbol{\mathrm{h}}^{(1)} = \boldsymbol{\mathrm{g}}\big( [x,y]; \boldsymbol{\theta}^{(1)}\big) && \boldsymbol{\mathrm{g}}: {\mathbb{R}}^{2} \rightarrow {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}}}\\
&\boldsymbol{\mathrm{h}}^{(l)} = \big(\mat{W}^{(l)} \boldsymbol{\mathrm{h}}^{(l-1)} + \boldsymbol{\mathrm{b}}^{(l)}\big) \cdot \boldsymbol{\mathrm{g}}\big( [x,y] ; \boldsymbol{\theta}^{(l)}\big) && \mat{W}^{(l)} \in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}} \times \mathrm{N}_{\mathrm{hid}}}, \boldsymbol{\mathrm{b}}^{(l)} \in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}}} \quad \\
&\boldsymbol{\psi}(x, y) = \mat{W}^{(\mathrm{L})} \boldsymbol{\mathrm{h}}^{(\mathrm{L}-1)} + \boldsymbol{\mathrm{b}}^{(\mathrm{L})} && \mat{W}^{(\mathrm{L})} \in {\mathbb{R}}^{\mathrm{N} \times \mathrm{N}_{\mathrm{hid}}}, \boldsymbol{\mathrm{b}}^{(\mathrm{L})} \in {\mathbb{R}}^{\mathrm{N}}
\end{align}
\vspace{-5.5mm}
where $\big\{\boldsymbol{\theta}^{(l)}, \Wm^{(l)}$, $\boldsymbol{\mathrm{b}}^{(l)}\big\}$ depict the learnable parameters of the bases and the affine transformations, and $\mathrm{N}, \mathrm{N}_{\mathrm{hid}}$ depict the number of output and hidden channels, respectively. Depending on the selection of $\boldsymbol{\mathrm{g}}$, MFNs obtain approximations comparable to those of SIRENs \citep{sitzmann2020implicit} with a faster convergence rate.
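The MFN recursion of Eq.~\ref{eq:mfn} can be sketched as below, with the filters $\boldsymbol{\mathrm{g}}$ passed in as callables; names and the coordinate layout are hypothetical.

```python
import numpy as np

def mfn_forward(coords, bases, Ws, bs, W_out, b_out):
    """Multiplicative Filter Network forward pass (sketch of the MFN recursion).

    coords: (P, 2) array of kernel coordinates [x, y].
    bases:  list of L callables g_l(coords) -> (P, N_hid), the filters.
    Ws, bs: lists of L-1 hidden affine parameters, shapes
            (N_hid, N_hid) and (N_hid,).
    W_out, b_out: final linear map, shapes (N, N_hid) and (N,).
    Returns the kernel values, shape (P, N).
    """
    h = bases[0](coords)                    # h^(1) = g_1([x, y])
    for W, b, g in zip(Ws, bs, bases[1:]):
        h = (h @ W.T + b) * g(coords)       # h^(l): affine map times filter
    return h @ W_out.T + b_out              # psi(x, y)
```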
The most successful instantiation of MFNs is the \textit{Multiplicative Gabor Network} (MGN): an MFN constructed with isotropic Gabor functions as basis $\boldsymbol{\mathrm{g}}$ (in Eq.~\ref{eq:mfn}):
\vspace{-1mm}
\begin{gather}
\boldsymbol{\mathrm{g}}\big( [x,y] ; \boldsymbol{\theta}^{(l)}\big) = \exp \bigg(-\frac{\boldsymbol{\gamma}^{(l)}}{2} \Big[ \big(x-\boldsymbol{\mu}^{(l)}\big)^{2} + \big(y-\boldsymbol{\mu}^{(l)}\big)^{2}\Big] \bigg)\, \mathrm{Sin} \big(\mat{W}_\mathrm{g}^{(l)} \cdot [x, y] + \boldsymbol{\mathrm{b}}_\mathrm{g}^{(l)} \big), \\
\boldsymbol{\theta}^{(l)} {=} \big\{ \boldsymbol{\gamma}^{(l)} \in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}}}, \boldsymbol{\mu}^{(l)} \in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}}},\mat{W}_\mathrm{g}^{(l)} \in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}} \times 2}, \boldsymbol{\mathrm{b}}_\mathrm{g}^{(l)} \in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}}} \big\}.
\end{gather}
Note that, by setting $\mathrm{N}{=}\mathrm{N}_{\textrm{out}}{\times} \mathrm{N}_{\textrm{in}}$, an MFN can parameterize a convolutional kernel with $\mathrm{N}_{\textrm{in}}$ input and $\mathrm{N}_{\textrm{out}}$ output channels.
\citet{fathony2021multiplicative} show that MFNs are equivalent to a linear combination of exponentially many basis functions $\boldsymbol{\mathrm{g}}$. This allows us to analytically derive properties of MFN representations, and plays a crucial role in the derivation of alias-free MAGNets (Sec.~\ref{sec:crtraining}).
\textbf{Multiplicative Anisotropic Gabor Networks (MAGNets).} Our MAGNet formulation is based on the observation that the usage of isotropic Gabor functions, i.e., with equal $\gamma$ for both the horizontal and vertical directions, is undesirable as a basis for the construction of MFNs. Whenever a frequency is required along a certain direction, an isotropic Gabor function automatically introduces that frequency in both directions. As a result, other bases must counteract this frequency in the direction where it is not required, and thus the capacity of the MFN is not used optimally.
We alleviate this limitation by using anisotropic Gabor functions instead:
\vspace{-0.5mm}
\begin{gather}
\label{eq:anisotropicgaussian}
\boldsymbol{\mathrm{g}}\big( [x,y] ; \boldsymbol{\theta}^{(l)}\big) = \exp \bigg(-\frac{1}{2} \Big[\Big(\boldsymbol{\gamma}^{(l)}_\mathrm{X}\big(x - \boldsymbol{\mu}_\mathrm{X}^{(l)}\big)\Big)^2 \hspace{-1mm}+ \Big(\boldsymbol{\gamma}_\mathrm{Y}^{(l)}\big(y - \boldsymbol{\mu}_\mathrm{Y}^{(l)}\big)\Big)^2\Big] \bigg)\, \mathrm{Sin} \big(\mat{W}_\mathrm{g}^{(l)} [x, y] + \boldsymbol{\mathrm{b}}_\mathrm{g}^{(l)} \big)\\
\boldsymbol{\theta}^{(l)} {=} \Big\{ \boldsymbol{\gamma}_{\mathrm{X}}^{(l)}\in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}}}, \boldsymbol{\gamma}_{\mathrm{Y}}^{(l)}\in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}}}, \boldsymbol{\mu}_{\mathrm{X}}^{(l)}\in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}}}, \boldsymbol{\mu}_{\mathrm{Y}}^{(l)}\in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}} },\mat{W}_\mathrm{g}^{(l)}\in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}} \times 2}, \boldsymbol{\mathrm{b}}_\mathrm{g}^{(l)}\in {\mathbb{R}}^{\mathrm{N}_{\mathrm{hid}}} \Big\}.\label{eq:params_anisotropicgaussian}
\end{gather}
\vspace{-5.5mm}
The resulting \textit{Multiplicative Anisotropic Gabor Network} (MAGNet) obtains better control over the frequency components introduced into the approximation, and demonstrates important improvements in terms of descriptive power and convergence speed (Sec.~\ref{sec:experiments}).
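A minimal NumPy sketch of the anisotropic Gabor basis of Eq.~\ref{eq:anisotropicgaussian}; function and variable names are illustrative:

```python
import numpy as np

def anisotropic_gabor(coords, gamma_x, gamma_y, mu_x, mu_y, W, b):
    """Evaluate one layer of anisotropic Gabor bases.

    coords: (P, 2) array of sampled (x, y) positions.
    gamma_x, gamma_y, mu_x, mu_y: (N_hid,) per-unit envelope parameters.
    W: (N_hid, 2) sinusoid frequency matrix; b: (N_hid,) phases.
    Returns (P, N_hid) basis responses.
    """
    x, y = coords[:, :1], coords[:, 1:]  # (P, 1) each, broadcast over units
    envelope = np.exp(-0.5 * ((gamma_x * (x - mu_x)) ** 2
                              + (gamma_y * (y - mu_y)) ** 2))
    carrier = np.sin(coords @ W.T + b)   # (P, N_hid)
    return envelope * carrier
```

Setting `gamma_x = gamma_y` recovers the isotropic Gabor basis of the MGN.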
\textbf{MAGNet initialization.} \citet{fathony2021multiplicative} propose to initialize MGNs by drawing the size of the Gaussian envelopes, i.e., the $\boldsymbol{\gamma}^{(l)}$ term, from a $\mathrm{Gamma(}\alpha \cdot \mathrm{L}^{-1}, \beta{\mathrm{)}}$ distribution at every layer $l \in [1, .., \mathrm{L}-1]$. We observe, however, that this initialization does not provide much variability in the initial extent of the Gaussian envelopes; in fact, most of them cover a large portion of the space at initialization.
To stimulate diversity, we initialize the $\{\boldsymbol{\gamma}_{\mathrm{X}}^{(l)}, \boldsymbol{\gamma}_{\mathrm{Y}}^{(l)}\}$ terms by drawing from a $\mathrm{Gamma(}\alpha l^{-1}, \beta{\mathrm{)}}$ distribution at the $l$-th layer. We observe that our proposed initialization consistently leads to better accuracy than the initialization of \citet{fathony2021multiplicative} across all tasks considered (Sec.~\ref{sec:experiments}).
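A sketch of this layer-dependent initialization, assuming $\beta$ acts as a rate parameter (NumPy takes a scale, i.e., $1/\beta$); the values of $\alpha$, $\beta$, and the network sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_hid, alpha, beta = 4, 32, 6.0, 1.0

# Layer-dependent draw: Gamma(alpha / l, beta) at layer l. Deeper layers get
# smaller gamma on average, i.e., wider Gaussian envelopes at initialization,
# which increases the diversity of envelope extents across the network.
gammas = [rng.gamma(alpha / l, 1.0 / beta, size=(n_hid,))
          for l in range(1, n_layers)]
```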
\vspace{-1mm}
\subsection{Analytic Alias-free MAGNets}
\label{sec:crtraining}
\vspace{-1mm}
FlexConvs can be deployed at higher resolutions than those observed during training, simply by sampling the underlying continuous representation of the kernel more densely, and accounting for the change in sampling rate. Consider a $\mathrm{D}$-dimensional input signal $f_{\mathrm{r}^{(1)}}$ with resolution $\mathrm{r}^{(1)}$. FlexConv learns a kernel $\boldsymbol{\psi}_{\mathrm{r}^{(1)}}$ that can be inferred at a higher resolution $\mathrm{r}^{(2)}$ via:
\begin{equation}
\setlength{\abovedisplayskip}{0pt}
\setlength{\belowdisplayskip}{1pt}
\Big(f_{\mathrm{r}^{(2)}} * \boldsymbol{\psi}_{\mathrm{r}^{(2)}}\Big) \approx \left(\frac{\mathrm{r}^{(1)}}{\mathrm{r}^{(2)}}\right)^{\mathrm{D}} \Big(f_{\mathrm{r}^{(1)}} * \boldsymbol{\psi}_{\mathrm{r}^{(1)}}\Big). \label{eq:multires}
\end{equation}
Note, however, that Eq.~\ref{eq:multires} holds only \textit{approximately}. This is due to aliasing artifacts, which can appear if the frequencies in the learned kernel surpass the Nyquist criterion of the target resolution. Consequently, an anti-aliased parameterization is vital to construct kernels that generalize well to high resolutions.
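The resampling of Eq.~\ref{eq:multires} can be illustrated in one dimension; in this sketch the resolution is taken to be the number of samples over a fixed spatial extent, which is an assumption made for simplicity:

```python
import numpy as np

def resample_kernel(kernel_fn, size_r1, size_r2, extent=1.0):
    """Sample a continuous 1D kernel at two resolutions.

    kernel_fn maps positions in [-extent, extent] to kernel values.
    Following the rescaling factor (r1 / r2)**D with D = 1, the
    higher-resolution kernel is scaled so that convolutions at both
    resolutions approximate the same continuous operator.
    """
    xs1 = np.linspace(-extent, extent, size_r1)
    xs2 = np.linspace(-extent, extent, size_r2)
    k1 = kernel_fn(xs1)
    k2 = kernel_fn(xs2) * (size_r1 / size_r2)  # D = 1
    return k1, k2
```

The discrete sums of both sampled kernels then agree up to discretization error, mirroring the approximate equality of the two convolutions.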
\textbf{Towards alias-free implicit neural representations.} We observe that SIRENs as well as unconstrained MFNs and MAGNets exhibit aliasing when deployed on resolutions higher than the training resolution, which hurts the performance of the model. An example kernel with~aliasing~is~shown~in~Fig.~\ref{fig:app-cifar10kernelfrequencies}.
To combat aliasing, we would like to control the representation learned by MAGNets. MAGNets --and MFNs in general-- construct implicit neural representations that can be seen as a \textit{linear combination of basis functions}. This property allows us to analytically derive and study the properties of the resulting neural representation. Here, we use this property to derive the maximum frequency of MAGNet-generated kernels, and penalize it during training whenever it exceeds the Nyquist frequency of the training resolution, so as to regularize MAGNets against aliasing. We note that such analytic derivations are difficult for other implicit neural representations, e.g., SIRENs, due to their stacked layer-wise nonlinearities.
\textbf{Maximum frequency of MAGNets.}
The maximum frequency component of a MAGNet~is~given~by:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{2pt}
f^+_{\textrm{MAGNet}} = \sum_{l=1}^{\mathrm{L}} \max_{i_l}\left( \left( \max_{j} \frac{\mat{W}^{(l)}_{\mathrm{g}, i_l,j}}{2 \pi} \right) + \frac{\sigma_\mathrm{cut} \min\{\boldsymbol{\gamma}^{(l)}_{\mathrm{X}, i_l}, \boldsymbol{\gamma}^{(l)}_{\mathrm{Y},i_l}\}}{2 \pi}\right), \tag{\ref{eq:magnetfreq}}
\end{equation}
where $\mathrm{L}$ corresponds to the number of layers, $\mat{W}^{(l)}_{\mathrm{g}}, \boldsymbol{\gamma}^{(l)}_{\mathrm{X}}, \boldsymbol{\gamma}^{(l)}_{\mathrm{Y}}$ to the MAGNet parameters as defined in Eq.~\ref{eq:params_anisotropicgaussian}, and $\sigma_\mathrm{cut}{=}2\cdot \mathtt{stdev}$ to the cut-off frequency of the Gaussian envelopes in the Gabor filters. A formal treatment as well as the derivations can be found in Appx.~\ref{sec:magnetanalysis}.
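Eq.~\ref{eq:magnetfreq} translates directly into code; the following NumPy sketch uses illustrative parameter containers (one array per layer):

```python
import numpy as np

def magnet_max_freq(W_list, gx_list, gy_list, sigma_cut=2.0):
    """Upper bound on the spatial frequency of a MAGNet (Eq. magnetfreq).

    W_list:  per-layer (N_hid, 2) sinusoid frequency matrices W_g.
    gx_list, gy_list: per-layer (N_hid,) envelope parameters gamma_X, gamma_Y.
    sigma_cut: cut-off (in standard deviations) of the Gaussian envelopes.
    """
    total = 0.0
    for W, gx, gy in zip(W_list, gx_list, gy_list):
        carrier = W.max(axis=1) / (2 * np.pi)              # max_j W[i, j] / 2pi
        envelope = sigma_cut * np.minimum(gx, gy) / (2 * np.pi)
        total += (carrier + envelope).max()                # max over hidden units
    return total
```

The per-layer maxima add up because each layer's output multiplies the previous one, which convolves (and thus shifts and widens) the spectra in the frequency domain.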
\textbf{Effect of the FlexConv mask.} The Gaussian mask used to localize the response of the MAGNet
also has an effect on the frequency spectrum. Hence, the maximum frequency of a FlexConv kernel is:
\begin{equation}
\setlength{\abovedisplayskip}{2pt}
\setlength{\belowdisplayskip}{3pt}
f^+_{\textrm{FlexConv}} = f^+_{\textrm{MAGNet}} + f^+_{w_\textrm{gauss}}, \ \ \text{with}\ \ f^+_{w_\textrm{gauss}}=
\frac{\sigma_\mathrm{cut}}{\max\{\sigma_\mathrm{X}, \sigma_\mathrm{Y}\} 2 \pi}. \tag{\ref{eq:flexconvfreq}}
\end{equation}
Here, $\sigma_{\mathrm{X}}, \sigma_{\mathrm{Y}}$ correspond to the mask parameters (Eq.~\ref{eq:gaussianmask}). Intuitively, multiplication with the mask blurs in the frequency domain, as it is equivalent to~convolution~with~the~Fourier~transform~of~the~mask.
\textbf{Aliasing regularization of FlexConv kernels.} Using the analytic derivation of $f^+_{\textrm{FlexConv}}$, we penalize generated kernels whose frequencies exceed their Nyquist frequency $f_{\mathrm{Nyq}}(k)$ via:
\begin{equation}
\setlength{\abovedisplayskip}{4pt}
\setlength{\belowdisplayskip}{5pt}
\mathcal{L}_{\mathrm{HF}} = ||\max\{f^+_{\textrm{FlexConv}}, f_{\mathrm{Nyq}}(k)\} - f_{\mathrm{Nyq}}(k)||^2, \ \ \text{with} \ \ f_{\textrm{Nyq}}(k) = \tfrac{k-1}{4}. \tag{\ref{eq:regularizeflexconv}}
\end{equation}
Here, $k$ denotes the size of the FlexConv kernel before applying the Gaussian mask, and is equal to the size of the input signal. Fig.~\ref{fig:app-cifar10kernelfrequencies} (Appx.~\ref{sec:magnetanalysis}) shows frequency spectra of some example kernels to visualize that the frequency components of FlexConv kernels are properly regularized against aliasing.
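A minimal sketch of this penalty (Eq.~\ref{eq:regularizeflexconv}); in practice $f^+_{\textrm{FlexConv}}$ is a differentiable function of the kernel parameters, which this scalar sketch ignores:

```python
def nyquist_freq(k):
    """Nyquist frequency of a k-sized kernel sampled on [-1, 1]: (k - 1) / 4."""
    return (k - 1) / 4.0

def aliasing_loss(f_flexconv, k):
    """Squared penalty on the frequency excess above Nyquist; zero if below."""
    excess = max(f_flexconv, nyquist_freq(k)) - nyquist_freq(k)
    return excess ** 2
```

The `max` construction makes the loss one-sided: kernels below the Nyquist frequency incur no penalty, so the regularizer only activates when aliasing would occur.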
\vspace{-1mm}
\section{Experiments}\label{sec:experiments}
\vspace{-1mm}
\subsection{Bandwidth of Kernels with Learnable Size}
\label{sec:type1-experiment}
\vspace{-1mm}
\begin{figure}
\centering
\begin{subfigure}[c]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{images/type1-gabor.pdf}
\label{fig:exp-type1-gabor-graph}
\end{subfigure}
\hfill
\begin{subfigure}[c]{0.49\textwidth}
\centering
\vspace{-8mm}
\includegraphics[width=\textwidth]{images/type1-allfunctions.pdf}
\label{fig:exp-type1-gabor-kernels}
\end{subfigure}
\hfill
\vspace{-7mm}
\caption{N-Jets cannot fit high frequency signals in large kernels, as the predefined Gaussian derivative order constrains the kernel bandwidth. Left: Final MSE after fitting each model to Gabor filters of different frequencies. N-Jets cannot fit high frequencies. Right: Kernels learned by each model. SIREN and MAGNet can fit all targets. MAGNet-S is a variant of MAGNet approximately the size of N-Jet. It still does well on the Gabor and AlexNet targets.
\vspace{-2mm}}
\label{fig:exp-type1-gabor}
\end{figure}
We compare the bandwidth of MAGNet against N-Jet \citep{pintea2021resolution} and SIREN \citep{sitzmann2020implicit} by optimizing each to fit a target image. We minimize the mean squared error on (i) Gabor filters of known frequency, (ii) random noise and (iii) an AlexNet kernel \citep{krizhevsky_imagenet_2012}.\break
\vspace{-4mm}
Fig.~\ref{fig:exp-type1-gabor} shows that, even with 9 orders of Gaussian derivatives, N-Jets cannot fit high frequency signals in large kernels. Crucially, to model high frequency signals in large kernels, N-Jet models need many Gaussian derivative orders, while the inference time and parameter count of N-Jets scale with the Gaussian derivative order. In addition, the order of Gaussian derivatives is a hyperparameter and must be selected prior to training. MAGNets, on the other hand, model high frequency signals accurately. This allows FlexNets to learn large kernels with high frequency components.
\vspace{-1mm}
\subsection{Classification Tasks}
\label{sec:classification}
\vspace{-1mm}
We evaluate FlexNets across classification tasks on sequential and image benchmark datasets. A complete description of the datasets used is provided in Appx.~\ref{sec:datasets}.
\begin{table}
\centering
\caption{Test accuracy and ablation studies on sMNIST, pMNIST, sCIFAR10 and npCIFAR10.}
\label{tab:smnist}
\vspace{-2mm}
\begin{small}
\scalebox{0.75}{
\begin{tabular}{cccccc}
\toprule
\sc{Model} & \sc{Size} & \sc{sMNIST} & \sc{pMNIST} & \sc{sCIFAR10} & \sc{npCIFAR10} \\
\midrule
DilRNN \citep{chang2017dilated} & 44\sc{k} & 98.0 & 96.1 & - & -\\
IndRNN \citep{li2018independently} & 83\sc{k} & 99.0 & 96.0& - & - \\
TCN \citep{bai2018empirical} &70\sc{k}& 99.0 & 97.2 & - & - \\
r-LSTM \citep{trinh2018learning} & 0.5\sc{m} & 98.4 & 95.2 & 72.2 & -\\
Self-Att. \citep{trinh2018learning} &0.5\sc{m} & 98.9 & 97.9 & 62.2 & -\\
TrellisNet \citep{bai2018trellis}& 8\sc{m} & 99.20 & 98.13 & 73.42 & - \\
URLSTM \citep{gu2020improving} & - & 99.28 & 96.96 & 71.00 & - \\
URGRU + Zoneout \citep{gu2020improving} & - & 99.27 & 96.51 & \textbf{74.40} & - \\
HiPPO \citep{gu2020hippo} & 0.5\sc{m} & - & \textbf{98.30} & - & - \\
Lipschitz RNN \citep{erichson2020lipschitz} & 158\sc{k} & 99.4 & 97.3 & 64.2 & 59.0 \\
coRNN \citep{rusch2020coupled} & 134\sc{k} & \textbf{99.4} & 97.3 & - & 59.0 \\
UnICORNN \citep{rusch2021unicornn} & 135\sc{k} & - & 98.4 & - & \textbf{62.4} \\
pLMU \citep{chilkuri2021parallelizing} & 165\sc{k} & - & 98.49 & - & -\\
\midrule
CKCNN-2 & 98\sc{k} & 99.31 & 98.00 & 62.25 & 60.5\\
CKCNN-2-Big & 1\sc{m} & 99.32 & 98.54 & 63.74 & 62.2 \\
CKTCN$_{\text{\sc{Fourier}}}$-2 & 105\sc{k} & 99.44 & 98.40 & 68.28 & 66.26 \\
CKTCN$_{\text{\sc{Gabor}}}$-2 & 106\sc{k} & 99.52 & 98.38 & 69.26 & 67.37 \\
CKTCN$_{\text{\sc{MAGNet}}}$-2 & 105\sc{k} & \textbf{99.55} &\textbf{ 98.57} & \textbf{74.58} & \textbf{67.52} \\
\midrule
FlexTCN-2 & 108\sc{k} & \textbf{99.60} & \textbf{98.61} & \textbf{78.99} & \textbf{67.11}\\
FlexTCN-4 & 241\sc{k} & \textbf{99.60} & \textbf{98.72} & \textbf{80.26} & \textbf{67.42}\\
FlexTCN-6 & 375\sc{k} & \textbf{99.62} & \textbf{98.63} & \textbf{80.82} & \textbf{69.87} \\
\midrule
FlexTCN$_{\text{SIREN}}$-6 & 343\sc{k} & 99.03 & 95.36 & 69.24 & 57.27 \\
FlexTCN$_{\text{Fourier}}$-6 & 370\sc{k} & 99.49 & 97.97 & 74.79 & 67.35 \\
FlexTCN$_{\text{Gabor}}$-6 & 373\sc{k} & 99.50 & 98.37 & 78.36 & 67.56 \\
FlexTCN$_{\text{MAGNet}}$-6 & 375\sc{k} & \textbf{99.62} & \textbf{98.63} & \textbf{80.82} & \textbf{69.87} \\
\bottomrule
\end{tabular}}
\end{small}
\vspace{-2mm}
\end{table}
\textbf{Network specifications.} Here, we specify the configuration of our networks for all our classification experiments. We parameterize all our convolutional kernels as the superposition of a 3-layer MAGNet and a learnable anisotropic Gaussian mask. We construct two network instances for sequential and image datasets respectively: FlexTCNs and FlexNets. Both are constructed by taking the structure of a baseline network --TCN \citep{bai2018empirical} or ResNet \citep{he2016deep}--, removing all internal pooling layers, and replacing convolutional kernels by FlexConvs. The FlexNet architecture is shown in Fig.~\ref{fig:flexnet-architecture} and varies only in the number of blocks (e.g. FlexNet-7 has 7 blocks) and channels. Hyperparameter values for all experiments are reported in Appx.~\ref{sec:flexnet-optimization}. Akin to \citet{romero2021ckconv}, we utilize the Fourier theorem to speed up convolutions with large kernels.
\footnote{Our code is publicly available at \url{https://github.com/rjbruin/flexconv}.}
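The speed-up from the convolution theorem can be sketched as follows (a 1D toy version, not the paper's implementation):

```python
import numpy as np

def fft_conv1d(signal, kernel):
    """Full linear convolution via the convolution theorem.

    Cost is O(n log n) versus O(n * k) for direct convolution, which pays
    off once kernels grow to the size of the input, as they can in FlexConv.
    Zero-padding to length n avoids the circular wrap-around of the FFT.
    """
    n = len(signal) + len(kernel) - 1
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)
```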
\textbf{Mask initialization.} We initialize the FlexConv masks to be small. Preliminary experiments show this leads to better performance, faster execution, and faster training convergence. For sequences, the mask center is initialized at the last kernel position to prioritize the last information seen.
\textbf{Time series and sequential data.} First we evaluate FlexTCNs on sequential classification datasets, for which long-term dependencies play an important role.
We validate our approach on intrinsically discrete data: \textit{sequential MNIST}, \textit{permuted MNIST} \citep{le2015simple}, \textit{sequential CIFAR10} \citep{chang2017dilated}, \textit{noise-padded CIFAR10} \citep{chang2019antisymmetricrnn}, as well as time-series data: \textit{CharacterTrajectories} (CT) \citep{bagnall2018uea}, and \textit{SpeechCommands} \citep{warden2018speech} with raw waveform (SC\_raw) and MFCC input representations (SC).
Our results are summarized in Tables~\ref{tab:smnist}~and~\ref{tab:time-series}. FlexTCNs with two residual blocks obtain state-of-the-art results on all tasks considered. In addition, depth further improves performance. FlexTCN-6 improves the current state-of-the-art on sCIFAR10 and npCIFAR10 by more than 6\%. On the difficult SC\_raw dataset --with sequences of length 16000--, FlexTCN-6 outperforms the previous state-of-the-art by 20.07\%: a remarkable improvement.
\begin{table}
\RawFloats
\centering
\begin{minipage}{0.48 \textwidth}
\centering
\caption{Test accuracy on CT, SC and SC\_raw}
\label{tab:time-series}
\vspace{-2mm}
\begin{small}
\scalebox{0.75}{
\begin{tabular}{ccccc}
\toprule
\sc{Model} & \sc{Size} & \sc{CT} & \sc{SC} & \sc{SC\_raw} \\
\midrule
GRU-ODE & 89\sc{k} & 96.2 & 44.8 & $\sim$10.0 \\
GRU-$\Delta t$ & 89\sc{k} & 97.8 & 20.0 & $\sim$10.0 \\
GRU-D & 89\sc{k} & 95.9 & 23.9 & $\sim$10.0 \\
ODE-RNN & 89\sc{k} & 97.1 & 93.2 & $\sim$10.0 \\
NCDE & 89\sc{k} & 98.8 & 88.5 & $\sim$10.0 \\
\midrule
CKCNN & 100\sc{k} &\textbf{ 99.53 }& 95.27 & 71.66 \\
CKTCN$_{\text{Fourier}}$ & & - & 95.65 & 74.90 \\
CKTCN$_{\text{Gabor}}$ & & - & 96.66 & 78.10 \\
CKTCN$_{\text{MAGNet}}$ & 105\sc{k} & \textbf{99.53} & \textbf{97.01} & \textbf{80.69} \\
\midrule
FlexTCN-2 & 105\sc{k} & \textbf{99.53} & \textbf{97.10} & \textbf{88.03} \\
FlexTCN-4 & 239\sc{k} & \textbf{99.53} & \textbf{97.73} & \textbf{90.45} \\
FlexTCN-6 & 373\sc{k} & \textbf{99.53} & \textbf{97.67} & \textbf{91.73} \\
\midrule
FlexTCN$_{\text{SIREN}}$-6 & 370\sc{k} & - & 95.83 & 85.73\\
FlexTCN$_{\text{Fourier}}$-6 & 342\sc{k} & - & 97.62 & 91.02 \\
FlexTCN$_{\text{Gabor}}$-6 & 373\sc{k} & - & 97.35 & 91.50 \\
FlexTCN$_{\text{MAGNet}}$-6 & 373\sc{k} & - & \textbf{97.67} & \textbf{91.73} \\
\bottomrule
\end{tabular}}
\end{small}
\end{minipage}%
\hfill
\begin{minipage}{0.50 \textwidth}
\centering
\caption{Results on CIFAR-10. Time in sec/epoch. Results marked * are taken from the original works; $\dagger$ denotes a single run.}
\label{tab:cifar-10}
\vspace{-2mm}
\begin{small}
\scalebox{0.75}{
\begin{tabular}{cccc}
\toprule
\multirow{2}{*}{\sc{Model}} & \multirow{2}{*}{\sc{Size}} & \sc{CIFAR-10} & \multirow{2}{*}{\sc{Time}} \\
& & \sc{Acc.} & \\ \midrule
ResNet-44 & 0.66\sc{m} & 92.9*\!\dagger & - \\
DCN-$\sigma^{ji}$ & 0.47\sc{m} & 89.7 $\pm$ 0.3* & - \\
N-Jet-Resnet32 & 0.52\sc{m} & 92.3 $\pm$ 0.3* & - \\
N-Jet-ALLCNN & 1.07\sc{m} & 92.5 $\pm$ 0.1* & - \\ \midrule
FlexNet-7 w/ conv. ($k = 3$) & 0.17\sc{m} & 89.5 $\pm$ 0.3 & 41s \\
FlexNet-7 w/ conv. ($k = 33$) & 20.0\sc{m} & 78.0 $\pm$ 0.3 & 242s \\
FlexNet-7 w/ N-Jet & 0.70\sc{m} & 91.7 $\pm$ 0.1 & 409s \\ \midrule
CKCNN-7 & 0.63\sc{m} & 71.7\!\dagger & 266s \\
CKCNN$_{\text{MAGNet}}$-7 & 0.67\sc{m} & 85.9\!\dagger & 299s \\
FlexNet$_{\text{SIREN}}$-7 & 0.63\sc{m} & 88.9\!\dagger & 105s \\
FlexNet$_{\text{Gabor}}$-7 & 0.67\sc{m} & 92.0\!\dagger & 178s \\
\midrule
FlexNet-7 & 0.67\sc{m} & 92.2 $\pm$ 0.1 & 166s \\
\bottomrule
\end{tabular}}
\end{small}
\end{minipage}
\vspace{-2mm}
\end{table}
Furthermore, we conduct ablation studies by changing the parameterization of \mlp$^{{\boldsymbol{\psi}}}$, and switching off the learnable kernel size (``CKTCNs'') and considering global kernel sizes instead. CKTCNs and FlexTCNs with MAGNet kernels outperform corresponding models with all other kernel parameterizations: SIRENs \citep{sitzmann2020implicit}, MGNs and MFNs \citep{fathony2021multiplicative}. Moreover, we see a consistent improvement with respect to CKCNNs \citep{romero2021ckconv} by using learnable kernel sizes. This shows that both MAGNets and learnable kernel sizes contribute to the performance of FlexTCNs.
Note that in 1D, MAGNets are equivalent to MGNs. However, MAGNets consistently perform better than MGNs. This improvement in accuracy is a result of our MAGNet initialization.
\textbf{Image classification.}
Next, we evaluate FlexNets for image classification on CIFAR-10 \citep{krizhevsky2009learning}. Additional experiments on Imagenet-32, MNIST and STL-10 can be found in Appx.~\ref{sec:appx-experiments}.
Table~\ref{tab:cifar-10} shows our results on CIFAR-10.
FlexNets are competitive with pooling-based methods such as ResNets \citep{he2016deep} and outperform learnable kernel size method DCNs \citep{tomen2021deep}.
In addition, we compare using N-Jet layers of order three (as in \citet{pintea2021resolution}) in FlexNets against using MAGNet kernels. We observe that N-Jet layers lead to worse performance, and are significantly slower than FlexConv layers with MAGNet kernels. The low accuracy of N-Jet layers is likely linked to the fact that FlexNets do not use pooling. Consequently, N-Jets are forced to learn large kernels with high-frequency components, which they struggle to model (Sec.~\ref{sec:type1-experiment}).
To illustrate the effect of learning kernel sizes, we also compare FlexNets against FlexNets with large and small discrete convolutional kernels (Tab.~\ref{tab:cifar-10}).
Using small kernel sizes is parameter efficient, but is not competitive with FlexNets. Large discrete kernels on the other hand require a copious amount of parameters and lead to significantly worse performance. These results indicate that the best solution is somewhere in the middle and varying kernel sizes can~learn~the~optimal~kernel~size~for~the~task~at~hand.
Similar to the sequential case, we conduct ablation studies on image data by not learning kernel sizes, and different kernel parameterizations: SIRENs, MGNs and MFNs. Table~\ref{tab:cifar-10} shows that FlexNets outperform CKCNNs with corresponding kernel parameterizations, and need less compute time to do so. In addition, a clear difference in performance is apparent for MAGNets with respect to other parameterizations. These results corroborate that both MAGNets and FlexConvs contribute to the performance of FlexNets.
\vspace{-1mm}
\subsection{Alias-free FlexNets}
\label{sec:crossresexperiments}
\vspace{-1mm}
In this section we evaluate the importance of alias-free kernels to deploy FlexNets across resolutions.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/cifar-10-crossres.pdf}
\vspace{-6mm}
\caption{Alias-free FlexNet-7 on CIFAR-10. We report change in accuracy between source and target resolutions and accuracy after finetuning on the target resolution (means over five runs).}
\vspace{-2mm}
\label{fig:c10-crossres}
\end{figure}
\textbf{Regularizing the FlexConv mask.} Though including $f^+_{w_{\mathrm{gauss}}}$ in the frequency analysis of MAGNets is crucial for the accuracy of the derivation, including the FlexConv mask in aliasing regularization is undesirable, as it steers the model to learn large kernels in order to minimize the loss (see Eq.~\ref{eq:regularizeflexconv}). However, excluding the mask from regularization could compromise the ability of FlexNet to generalize to higher resolutions. Here, we experiment with this trade-off.
\begin{wraptable}{r}{7.4cm}
\centering
\caption{Alias-free FlexNets on CIFAR-10.}
\label{tab:crossres}
\vspace{-5mm}
\begin{center}
\scalebox{0.75}{
\begin{tabular}{cccc}
\toprule
\multirow{2}{*}{\sc{Model}} & \multirow{2}{*}{\sc{Size}} & \multicolumn{2}{c}{\sc{CIFAR-10 Acc.}} \\
& & 16 px & $\Delta_{16 \textrm{px}}$ 32 px \\ \midrule
ResNet-44 & 0.66\sc{m} & 85.8 $\pm$ 0.2 & -31.6 $\pm$ 1.3 \\ \midrule
FlexNet-7 w/ conv. ($k = 3$) & 0.17\sc{m} & 85.3 $\pm$ 0.2 & -21.2 $\pm$ 1.0 \\
FlexNet-7 w/ conv. ($k = 33$) & 20.0\sc{m} & 67.7 $\pm$ 0.6 & -57.1 $\pm$ 1.6 \\
FlexNet-7 w/ N-Jets & 0.70\sc{m} & \textbf{86.4} $\pm$ 0.2 & -5.5 $\pm$ 1.3 \\ \midrule
CKCNN-7$_{\textrm{SIREN}}$ & 0.63\sc{m} & 45.9 $\pm$ 1.0 & -15.8 $\pm$ 1.2 \\
FlexNet-7$_{\textrm{SIREN}}$ & 0.63\sc{m} & 70.4 $\pm$ 0.8 & -50.0 $\pm$ 16.9 \\ \midrule
FlexNet-7 w/o reg. & 0.67\sc{m} & \textbf{86.4} $\pm$ 0.4 & -34.4 $\pm$ 14.3 \\ \midrule
FlexNet-7 w/ reg. $f^+_{\textrm{MAGNet}}$ & 0.67\sc{m} & \textbf{86.5} $\pm$ 0.1 & -3.8 $\pm$ 2.0 \\
FlexNet-7 w/ reg. $f^+_{\textrm{FlexConv}}$ & 0.67\sc{m} & 85.1 $\pm$ 0.3 & \textbf{-3.3} $\pm$ 0.3 \\
\bottomrule
\end{tabular}}
\end{center}
\end{wraptable}
Figure~\ref{fig:c10-crossres} shows the effect of including the FlexConv mask in the aliasing regularization for different combinations of resolutions on CIFAR-10. We train at the source resolution for 100 epochs, before testing the model at the target resolution with the upsampling described in Sec.~\ref{sec:crtraining}. Next, we adjust $f_{\textrm{Nyq}}(k)$ to the target resolution, and finetune each model for 100 epochs at the target resolution.
We find that regularizing just $f^+_{\textrm{MAGNet}}$ yields a trade-off: it increases the accuracy difference between low and high resolution inference, but also increases the fine-tuned accuracy at the target resolution.
We therefore choose to, by default, regularize $f^+_{\textrm{MAGNet}}$ only.
Results of our alias-free FlexNet training on CIFAR-10 are in Table~\ref{tab:crossres}. We observe that the performance of a FlexNet trained without aliasing regularization largely breaks down when the dataset is upscaled.
However, with our aliasing regularization most of the performance is retained.
Comparatively, FlexConv retains more of the source resolution performance than FlexNets with N-Jet layers, while ResNet-44, FlexNets with regular convolutions, and FlexNets with SIRENs degrade drastically at the target resolution. Fig.~\ref{fig:app-cifar10kernelfrequencies} visualizes the effect of our aliasing regularization on the frequency components of FlexConv, showing example kernels and their frequency spectra for regularized and unregularized models.
\vspace{-1mm}
\section{Discussion}
\label{sec:discussion}
\vspace{-1mm}
\textbf{Learned kernel sizes match conventional priors.} Commonly, CNNs use architectures of small kernels and pooling layers. This allows convolutions to build a progressively growing receptive field. With learnable kernel sizes, FlexNet could learn a different prior over receptive fields, e.g., large kernels first, and small kernels next. However, FlexNets learn to increase kernel sizes progressively (Fig.~\ref{fig:c10-kernels}), and match the network design that has been popular since AlexNet \citep{krizhevsky_imagenet_2012}.
\textbf{Mask initialization as a prior for feature importance.} The initial values of the FlexConv mask can be used to prioritize information at particular input regions. For instance, initializing the center of the mask on the first element of sequential FlexConvs prioritizes information from the far past. This prior is advantageous for tasks such as npCIFAR10. We observe that using this prior on npCIFAR10 leads to much faster convergence and better results (68.33\% acc. w/ FlexNet-2).
\textbf{Regularization as prior induction.} MAGNets allow for analytic control of the properties of the generated kernels. We use this property to generate alias-free kernels, but other desiderata could be induced, e.g., smoothness.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/kernel_sizes_flexnet.pdf}
\vspace{-8mm}
\caption{Learned FlexConv masks for FlexNets with 3, 5 and 7 residual blocks. FlexNets learn very small kernels at shallow layers, which become larger as a function of depth.
\vspace{-4mm}}
\label{fig:c10-kernels}
\end{figure}
\textbf{FlexNets are able to exploit depth.} \citet{romero2021ckconv} indicate that CKCNNs cannot efficiently take advantage of depth, and thus CKCNNs contain only two residual blocks. Our work shows that learnable kernel sizes allow FlexNets to take advantage of depth analogously to conventional CNNs.
\vspace{-1mm}
\section{Limitations}
\label{sec:limitations}
\vspace{-1mm}
\textbf{Computation and memory costs of convolutions with large kernels.} Performing convolutions with large convolutional kernels is a compute-intensive operation. FlexConvs are initialized with small kernel sizes and their inference cost is relatively small at the start of training. However, despite the cropping operations used to improve computational efficiency (Figs.~\ref{fig:flexconv},~\ref{fig:capacity_tradeoff}, Tab.~\ref{tab:cifar-10}), the inference time may as much as double as the learned masks increase in size.
At the cost of larger memory requirements, convolutions can be performed in the frequency domain to speed up convolutions with large convolutional kernels. However, we observe that this does not bring large efficiency gains for image data, because convolutions in the frequency domain only pay off for very large convolutional kernels, on the order of hundreds of pixels \textit{in each dimension of the data}.
\textbf{Remaining accuracy drop in alias-free FlexNets.} Some drop in accuracy is still observed when using alias-free FlexNets at a higher target resolution (Tab.~\ref{tab:crossres}). Although more evidence is needed to draw conclusions, this may be caused by aliasing effects due to $\mathrm{ReLU}$ \citep{vasconcelos2021impact}, or changes in the activation statistics of the feature maps passed to global average pooling \citep{Touvron2019FixingTT}.
\vspace{-1mm}
\section{Conclusion}
\vspace{-1mm}
We propose FlexConv, a convolutional operation able to learn high bandwidth convolutional kernels\break of varying size during training at a fixed parameter cost. We demonstrate that FlexConvs are able to model long-term dependencies without the need for pooling, and that shallow pooling-free FlexNets achieve state-of-the-art performance on several sequential datasets, match the performance of recent works with learned kernel sizes at lower compute, and are competitive with much deeper ResNets on image benchmark datasets. In addition, we show that our alias-free convolutional kernels allow FlexNets to be deployed at higher resolutions than seen during training with minimal precision loss.
\textbf{Future work.} The formulation of MAGNet gives control over the bandwidth of the kernel. We anticipate that this control has more uses, such as fighting sub-sampling aliasing \citep{zhang2019making,Kayhan_2020_CVPR,karras2021alias}. With the ability to upscale FlexNets to different input image sizes comes the possibility of exploring transfer learning with FlexNets between previously incompatible datasets, such as CIFAR-10 and ImageNet.
In a similar vein, the automatic adaptation of FlexConv to the kernel sizes required for the task at hand may make it possible to generalize the FlexNet architecture across different tasks and datasets. Neural architecture search \citep{zoph2016neural} could see benefits from narrowing the search space to exclude kernel size and pooling layers. In addition, we envisage further improvements from structural developments of FlexConvs such as attentive FlexNets.
\section*{Reproducibility Statement}
We hope to inspire others to use and reproduce our work. We publish the source code of this work, for which the link is provided in Sec.~\ref{sec:classification}. Sec.~\ref{sec:experiments} and Appx.~\ref{sec:appx-flexnet} detail FlexNet, its hyperparameters and optimization procedure. The full derivation of the aliasing regularization objective is included in Appx.~\ref{sec:magnetanalysis}. We report means over multiple runs for many experiments, to ensure the reported results are fair and reproducible, and do not rely on tuning of the random seed. All datasets used in our experiments are publicly available. If any questions remain, we welcome one and all to contact the corresponding author.
\section*{Acknowledgments}
We thank Nergis Tömen for her valuable insights regarding signal processing principles for FlexConv, and Silvia-Laura Pintea for explanations and access to code of her work \cite{pintea2021resolution}. We thank Yerlan Idelbayev for the use of the \href{https://github.com/akamaster/pytorch_resnet_cifar10}{CIFARResNet code}.
This work is supported by the \href{https://www.qualcomm.com/research/research/university-relations/innovation-fellowship/2021-europe}{Qualcomm Innovation Fellowship (2021)} granted to David W. Romero. David W. Romero sincerely thanks Qualcomm for its support. David W. Romero is financed as part of the Efficient Deep Learning (EDL) programme (grant number P16-25), partly funded by the Dutch Research Council (NWO). Robert-Jan Bruintjes is financed by the Dutch Research Council (NWO) (project VI.Vidi.192.100). All authors sincerely thank everyone involved in funding this work.
This work was partially carried out on the Dutch national infrastructure with the support of SURF Cooperative. We used Weights \& Biases \citep{wandb} for experiment tracking and visualizations.
\section{Introduction}
Recently, a novel theory of dark matter (DM) superfluidity \cite{Berezhiani:2015pia,Berezhiani:2015bqa} was proposed to combine the success of modified Newtonian dynamics (MOND) \cite{Milgrom:1983ca,Milgrom:1983pn,Milgrom:1983zz} on galactic scales with the triumph of $\Lambda$ cold dark matter ($\Lambda$CDM) on cosmic scales. MOND turns out to be an emergent phenomenon of DM itself on galactic scales, arising from a MOND-like force between baryons mediated by superfluid phonons of axionlike particles condensed as a superfluid with a coherence length of order the galactic size and a critical temperature of order micro-Kelvin. The $\Lambda$CDM model is eventually recovered beyond galactic scales, where the fraction of particles in the condensate decreases with increasing temperature due to the larger velocity dispersion, and hence larger DM temperature, in galaxy clusters.
It was known as the galactic coincidence \cite{Famaey:2011kh} that a critical acceleration scale appears in various seemingly unrelated Kepler-like laws of galactic dynamics, which cannot be explained in a common way within the cold dark matter (CDM) scenario. However, MOND predicts such a universal acceleration scale, $a_0\approx10^{-10} \mathrm{m/s^2}$, which intriguingly happens to be of order the present Hubble scale, $H_0\sim a_0$, or more boldly the cosmological constant scale, $\Lambda^4\sim M_{\mathrm{Pl}}^2a_0^2$. Although MOND now emerges from DM itself on galactic scales in the context of DM superfluidity, the galactic coincidence still manifests itself as an input parameter needed to fix other parameters to their preferred values. It is in any case striking that the dark matter and dark energy sectors share such a common scale, even though it is currently unclear whether this is just a coincidence or a smoking gun for new physics.
It was also known as the cosmic coincidence that the energy density invoked to account for the late-time cosmic acceleration happens to be of the same order of magnitude as that of the matter components today. As an alternative to the standard cosmological constant scenario, one may consider a slowly rolling scalar field, known as dynamical dark energy (DE), with proper screening mechanisms \cite{Joyce:2014kja} to hide the fifth force from local tests of gravity. To at least alleviate the cosmic coincidence, the energy density of the scalar field should track \cite{Zlatev:1998tr,Steinhardt:1999nw} the background energy density and only grow to dominate the energy budget at late times. Either the screening mechanism or the tracking behavior can be realized if general interactions between dark energy and the matter components are considered.
In this paper, we propose a very simple explanation for the galactic coincidence problem by conformally coupling a Dirac-Born-Infeld (DBI) scalar field to the local matter components. Requiring that the fifth force mediated by the DBI scalar field be effectively screened relative to the MONDian force mediated by the DM superfluid phonons on galactic scales, the galactic coincidence $a_0=\Lambda^2/2gM_{\mathrm{Pl}}\sim H_0$ is derived, provided that the DBI characteristic scale $\Lambda^4\sim M_{\mathrm{Pl}}^2H_0^2\sim(\mathrm{meV})^4$ coincides with the current critical energy density for a conformal coupling $g\sim\mathcal{O}(1)$. This allows us to interpret the DBI scalar field as dynamical DE in the presence of the conformal coupling term. The equation of state (EOS) of our DBI dark energy mimics that of a Chaplygin gas.
This paper is organized as follows. In Sec. \ref{sec:2}, we review the DM superfluidity and define the MOND transition scale. In Sec. \ref{sec:3}, we propose a DBI-like scalar conformally coupled with the matter component to solve the galactic coincidence problem. In Sec. \ref{sec:4}, the possibility of our DBI scalar playing the role of DE is explored. The final section is devoted to conclusions and discussions.
\section{Dark matter superfluid}\label{sec:2}
In the nonrelativistic regime, DM superfluid \cite{Berezhiani:2015pia,Berezhiani:2015bqa} is effectively described by the MOND Lagrangian with a conformal coupling term to baryons,
\begin{equation}\label{eq:MONDTb}
\mathcal{L}_{\mathrm{MOND}T_\mathrm{b}}=\frac{2}{3}\Lambda(2m)^{3/2}X\sqrt{|X|}+\frac{\alpha\Lambda\theta}{M_{\mathrm{Pl}}}T_{\mathrm{b}},
\end{equation}
where the DM particle mass $m$ is of order $\mathrm{eV}$ to ensure the formation of Bose-Einstein condensation, and the phonon excitation $X=\dot{\theta}-m\Phi-(\vec{\nabla}\theta)^2/2m$ is described by the Goldstone boson $\theta$ of a spontaneously broken global $U(1)$ symmetry in the external gravitational potential $\Phi$. The dimensionless parameter $\alpha$ and the dimensionful parameter $\Lambda$ are fixed later by inputting the MOND critical acceleration $a_0$ in order to reproduce the MONDian profile. For a static, spherically symmetric profile $\theta=\mu t+\varphi(r)$ at constant chemical potential $\mu$ and a baryon distribution $T_{\mathrm{b}}=-\rho_{\mathrm{b}}(r)$, the equation of motion (EOM)
\begin{equation}
\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\sqrt{2m|X|}\varphi'(r)\right)=\frac{\alpha\rho_{\mathrm{b}}(r)}{2M_{\mathrm{Pl}}}
\end{equation}
can be integrated for the $X<0$ branch to obtain
\begin{equation}
\varphi'(r)\simeq\sqrt{\frac{\alpha M_{\mathrm{b}}(r)}{8\pi M_{\mathrm{Pl}}r^2}}\equiv\sqrt{\kappa}
\end{equation}
for $\kappa\gg\mu-m\Phi$ with $M_{\mathrm{b}}(r)\equiv4\pi\int_0^rr'^2\mathrm{d}r'\rho_{\mathrm{b}}(r')$, which admits a MONDian acceleration,
\begin{equation}
a_{\varphi}=\alpha\frac{\Lambda}{M_{\mathrm{Pl}}}\varphi'\simeq\sqrt{\frac{\alpha^3\Lambda^2}{M_{\mathrm{Pl}}}\frac{GM_{\mathrm{b}}(r)}{r^2}},
\end{equation}
if one identifies
\begin{equation}
\frac{\alpha^3\Lambda^2}{M_{\mathrm{Pl}}}\equiv a_0,
\end{equation}
hence $\alpha\sim\mathcal{O}(1)$ for $\Lambda\sim\mathrm{meV}$.
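The identification above can be checked symbolically. The following sketch (in Python with \texttt{sympy}; the symbol names are ours, and we use the reduced-Planck-mass convention $G=1/8\pi M_{\mathrm{Pl}}^2$) verifies that the phonon-mediated acceleration is indeed of the MOND form:

```python
# Symbolic check that a_phi = alpha*(Lambda/M_Pl)*phi'(r), with
# phi'(r) = sqrt(alpha*M_b/(8*pi*M_Pl*r^2)) and G = 1/(8*pi*M_Pl^2),
# equals the MONDian form sqrt((alpha^3*Lambda^2/M_Pl)*G*M_b/r^2).
import sympy as sp

alpha, Lam, MPl, Mb, r = sp.symbols('alpha Lambda M_Pl M_b r', positive=True)
G = 1 / (8 * sp.pi * MPl**2)              # reduced Planck mass convention

phi_prime = sp.sqrt(alpha * Mb / (8 * sp.pi * MPl * r**2))
a_phi = alpha * Lam / MPl * phi_prime     # phonon-mediated acceleration
a_MOND = sp.sqrt(alpha**3 * Lam**2 / MPl * G * Mb / r**2)

# both sides are positive, so comparing squares suffices
assert sp.simplify(a_phi**2 - a_MOND**2) == 0
print('MOND form verified')
```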
The general picture of DM superfluidity is that the DM halo core where galaxies are located is almost entirely condensed and the dynamics is dominated by the MONDian force mediated by the DM superfluid phonons, whereas galaxy clusters are either in a mixed phase or entirely in the normal phase just as those on cosmic scales. Therefore, it is natural to define a MONDian transition radius
\begin{equation}\label{eq:MONDr}
r_{\mathrm{MOND}}=\sqrt{\frac{MG}{a_0}}
\end{equation}
in the context of a DM superfluid core with core radius $r_{\mathrm{MOND}}$ containing a total mass $M$. To see that this is a reasonable definition, consider a DM halo with central density $\rho_0\sim M_{r_0}/r_0^3$ and core radius $r_0=\sqrt{M_{r_0}G/a_0}$; one then obtains a constant surface density $\rho_0r_0\sim M_{r_0}/r_0^2\sim a_0/G$ independent of galaxy luminosity, as found recently in several astrophysical observations \cite{Kormendy:2004se,Spano:2007nt,Donato:2009ab,Gentile:2009bw}. One can even reproduce a sort of baryonic Tully-Fisher relation (BTFR) \cite{McGaugh:2000sr,McGaugh:2005qe,McGaugh:2011ac}, $M_{r_0}\sim\rho_0r_0^3\sim(a_0/G)r_0^2\sim v^4/Ga_0$, by using $\rho_0r_0\sim a_0/G$ and $a_0\sim v^2/r_0$. The MONDian transition radius thus serves as a natural separation between the MOND regime $r<r_0$ with $a_{\mathrm{N}}<a_0$ and the Newtonian regime $r>r_0$ with $a_{\mathrm{N}}>a_0$, where $a_{\mathrm{N}}=GM_r/r^2$.
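As a rough numerical illustration (the input values below are ours, not taken from the references), a Milky-Way-like baryonic mass of $\sim10^{11}M_{\odot}$ gives a transition radius of order $10\,\mathrm{kpc}$, i.e., a galactic scale:

```python
# Milky-Way-scale estimate of r_MOND = sqrt(M*G/a0)  (illustrative inputs).
import math

G    = 6.674e-11          # m^3 kg^-1 s^-2
a0   = 1.2e-10            # m/s^2, MOND critical acceleration
Msun = 1.989e30           # kg
kpc  = 3.086e19           # m

M = 1e11 * Msun           # assumed baryonic mass ~ 10^11 solar masses
r_MOND = math.sqrt(M * G / a0)
print(r_MOND / kpc)       # about 10 kpc: a galactic scale
```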
\section{DBIonic screening}\label{sec:3}
The action of the scalar field we propose in this paper has the form
\begin{align}\label{eq:DBITm}
\nonumber S_{\mathrm{DBI}T_{\mathrm{m}}}=&\int\mathrm{d}^4x \sqrt{-f}\left(-\Lambda^4\sqrt{1-\Lambda^{-4}(\partial\phi)^2}\right)\\
&+\int\mathrm{d}^4x \sqrt{-f}\frac{g\phi}{M_{\mathrm{Pl}}}T_{\mathrm{m}},
\end{align}
which will be referred to as the $\mathrm{DBI}T_{\mathrm{m}}$ action for short. It should be kept in mind that the symbol $\Lambda$ used in our action (\ref{eq:DBITm}) is a priori unrelated to that in the action (\ref{eq:MONDTb}), although the two scales will turn out to coincide, as we will see later. Here, $f$ is the determinant of the Friedmann-Robertson-Walker (FRW) metric of a 3-brane moving in a five-dimensional Minkowski space with two time dimensions,
\begin{equation}
\mathrm{d}s_5^2=-\mathrm{d}w^2+f_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}.
\end{equation}
Here, the Gaussian normal transverse coordinate $w(x)=\Lambda^{-2}\phi(x)$ is written in terms of the DBI scalar field $\phi(x)$. The first term in the $\mathrm{DBI}T_{\mathrm{m}}$ action (\ref{eq:DBITm}) can thus be interpreted as a cosmological constant term,
\begin{equation}
S=\int\mathrm{d}^4x\sqrt{-g}(-\Lambda^4)=\int\mathrm{d}^4x\sqrt{-f}(-\Lambda^4\gamma^{-1}),
\end{equation}
in terms of the induced metric $g_{\mu\nu}=f_{\mu\nu}-\Lambda^{-4}\partial_{\mu}\phi\partial_{\nu}\phi$ on the brane, and the inverse of the induced metric is just $g^{\mu\nu}=f^{\mu\nu}+\Lambda^{-4}\gamma^2\partial^{\mu}\phi\partial^{\nu}\phi$ with an abbreviation $\gamma\equiv1/\sqrt{1-\Lambda^{-4}(\partial\phi)^2}$.
The first term in (\ref{eq:DBITm}) differs from the standard DBI action \begin{equation}
S_{\mathrm{DBI}}=\int\mathrm{d}^4x \sqrt{-f}\left(-\Lambda^4\sqrt{1+\Lambda^{-4}(\partial\phi)^2}\right)
\end{equation}
by a flipped sign in front of the derivative term, which as we will see is essential for the so-called DBIonic screening mechanism \cite{Burrage:2014uwa}. It is worth noting that the first term in (\ref{eq:DBITm}) also differs from
\begin{equation}
S_{\mathrm{DBIonic}}=\int\mathrm{d}^4x \sqrt{-f}\left(\Lambda^4\sqrt{1-\Lambda^{-4}(\partial\phi)^2}\right)
\end{equation}
in standard DBIonic screening by an overall sign of the action, which, as we will see, is also essential for the scalar field to mediate a repulsive fifth force and to drive the late-time acceleration. The second term in the $\mathrm{DBI}T_{\mathrm{m}}$ action (\ref{eq:DBITm}) describes a conformal coupling of the DBI scalar to the trace of the energy-momentum tensor of the background matter fields, with a strength $g\sim\mathcal{O}(1)$ natural from the stringy perspective.
Suppose the DBI scalar field $\phi(r)$ with a static and spherically symmetric profile is coupled to a static local source $T_{\mathrm{m}}=-\rho_{\mathrm{m}}(r)$; then, the EOM
\begin{equation}
\frac{1}{r^2}\frac{\partial}{\partial r}\left(\frac{r^2\phi'(r)}{\sqrt{1-\Lambda^{-4}\phi'(r)^2}}\right)=-\frac{g}{M_{\mathrm{Pl}}}\rho_{\mathrm{m}}(r)
\end{equation}
can be integrated to give
\begin{equation}
\phi'(r)=-\frac{\Lambda^2}{\sqrt{1+\left(\frac{r}{r_{\mathrm{DBI}}}\right)^4}},
\end{equation}
where a DBI transition radius \cite{Burrage:2014uwa}
\begin{equation}\label{eq:DBIr}
r_{\mathrm{DBI}}=\frac{1}{\Lambda}\left(\frac{gM}{4\pi M_{\mathrm{Pl}}}\right)^{1/2}
\end{equation}
is introduced to separate the DBI regime $r\gg r_{\mathrm{DBI}}$ with repulsive force
\begin{equation}
\vec{a}_{\phi}=-\frac{g}{M_{\mathrm{Pl}}}\phi'(r)\hat{r}\simeq2g^2 G\frac{M}{r^2}\hat{r}=-2g^2\vec{a}_{\mathrm{N}}
\end{equation}
from the Newtonian regime $r\ll r_{\mathrm{DBI}}$ with screened force
\begin{equation}
\vec{a}_{\phi}=-\frac{g}{M_{\mathrm{Pl}}}\phi'(r)\hat{r}\simeq-2g^2\left(\frac{r}{r_{\mathrm{DBI}}}\right)^2\vec{a}_{\mathrm{N}}.
\end{equation}
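The interpolation between these two regimes can be verified directly from the profile $\phi'(r)$; the following sketch (in Python, with $\Lambda$ and $r_{\mathrm{DBI}}$ set to unity for illustration) confirms the screened and unscreened asymptotics quoted above:

```python
# Asymptotics of phi'(r) = -Lambda^2/sqrt(1 + (r/r_DBI)^4),
# in units where Lambda = r_DBI = 1.
import math

def slope(r):
    return -1.0 / math.sqrt(1.0 + r**4)

inner = slope(1e-3)       # r << r_DBI: slope saturates at -Lambda^2
outer = slope(1e3)        # r >> r_DBI: slope falls off as (r_DBI/r)^2
print(inner, outer * 1e6) # both -> -1
```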
To retain the success of the MOND phenomenon of DM superfluidity on galactic scales, the DBI force should also be screened from the MOND force on the galactic scale, which renders an identification of the DBI transition radius (\ref{eq:DBIr}) with the MOND transition radius (\ref{eq:MONDr}),
\begin{equation}
r_{\mathrm{DBI}}^2=\frac{1}{\Lambda^2}\frac{gM}{4\pi M_{\mathrm{Pl}}}\Leftrightarrow r_{\mathrm{MOND}}^2=\frac{MG}{a_0}.
\end{equation}
Therefore, the galactic coincidence
\begin{equation}
a_0=\frac{\Lambda^2}{2gM_{\mathrm{Pl}}}\simeq H_0
\end{equation}
is derived, provided that
\begin{equation}
\Lambda^4\simeq M_{\mathrm{Pl}}^2H_0^2\simeq(\mathrm{meV})^4
\end{equation}
for a conformal coupling $g$ of order unity. It comes as a nice surprise that $\Lambda^4$ coincides with the current critical energy density and that $\Lambda$ in the $\mathrm{DBI}T_{\mathrm{m}}$ action (\ref{eq:DBITm}) matches that in the $\mathrm{MOND}T_{\mathrm{b}}$ action (\ref{eq:MONDTb}). This is why we use the same symbol for the scale $\Lambda$ in both actions (\ref{eq:MONDTb}) and (\ref{eq:DBITm}), which, in turn, shares the scale of the cosmological constant.
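As an order-of-magnitude check (with our fiducial values $M_{\mathrm{Pl}}\simeq2.4\times10^{27}\,\mathrm{eV}$ and $H_0\simeq1.4\times10^{-33}\,\mathrm{eV}$), $\Lambda=(M_{\mathrm{Pl}}H_0)^{1/2}$ indeed lands at the meV scale, and $a_0=\Lambda^2/2gM_{\mathrm{Pl}}=H_0/2$ for $g=1$:

```python
# Order-of-magnitude check of Lambda = (M_Pl*H0)^(1/2) and a0 = Lambda^2/(2g*M_Pl).
import math

M_Pl = 2.435e27           # reduced Planck mass in eV
H0   = 1.4e-33            # Hubble constant in eV (~70 km/s/Mpc)
g    = 1.0                # fiducial conformal coupling

Lam = math.sqrt(M_Pl * H0)        # eV
a0  = Lam**2 / (2 * g * M_Pl)     # in natural units, an acceleration ~ H0
print(Lam * 1e3)                  # ~ 1.8 (meV)
print(a0 / H0)                    # exactly 0.5 for g = 1
```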
\section{DBI dark energy}\label{sec:4}
The repulsive feature of the DBI force and the unexpected match of $\Lambda^4$ with the current critical energy density inspire us to explore the possibility of our DBI scalar field playing the role of dark energy.
We start with the total Lagrangian
\begin{equation}
\sqrt{-f}\mathcal{L}=\sqrt{-f}\mathcal{L}_{\phi}+\sqrt{-f}\mathcal{L}_{\phi T}+\sqrt{-f}\mathcal{L}_{\mathrm{m}},
\end{equation}
where
\begin{align}
\mathcal{L}_{\phi}&=-\Lambda^4\sqrt{1-\Lambda^{-4}(\partial\phi)^2};\\
\mathcal{L}_{\phi T}&=\frac{g\phi}{M_{\mathrm{Pl}}}T_{\mathrm{m}};\\
\mathcal{L}_{\mathrm{m}}&=\mathcal{L}_{\mathrm{m}}(f_{\mu\nu},\psi).
\end{align}
\subsection{Backreaction on matter}
In the absence of the conformal coupling term, the matter component is supposed to behave as a pressureless fluid with trace $T_{\mathrm{m}}=-\rho_{\mathrm{m}}$ of the energy-momentum tensor $T_{\mu\nu}^{\mathrm{m}}=(2/\sqrt{-f})\delta(\sqrt{-f}\mathcal{L}_{\mathrm{m}})/\delta f^{\mu\nu}$. In the presence of the conformal coupling term, the matter fields can exchange energy and momentum with the DBI scalar field. The conformal coupling term therefore necessarily introduces an effective pressure in the matter fluid, and the effective EOS parameter of matter can in principle deviate from zero. We will show below that such a deviation from a pressureless fluid can be made arbitrarily small for a sub-Planckian DBI scalar.
The EOM of the DBI scalar field for a spatial homogenous profile $\phi(t)$ is simply
\begin{equation}\label{eq:EOM}
\ddot{\phi}+3H\dot{\phi}\gamma^{-2}+\frac{gT_{\mathrm{m}}}{M_{\mathrm{Pl}}\gamma^3}=0,
\end{equation}
according to the Euler-Lagrange equation
\begin{equation}
\frac{\partial{(\sqrt{-f}\mathcal{L}_{\phi}+\sqrt{-f}\mathcal{L}_{\phi T})}}{\partial\phi}=\partial_{\mu}\frac{\partial(\sqrt{-f}\mathcal{L}_{\phi}+\sqrt{-f}\mathcal{L}_{\phi T})}{\partial(\partial_{\mu}\phi)}.
\end{equation}
In the absence of the conformal coupling term, the energy-momentum tensor of the DBI scalar field can be computed as
\begin{equation}\label{eq:T1}
T_{\mu\nu}^{\phi}=f_{\mu\nu}\mathcal{L}_{\phi}-\frac{\partial\mathcal{L}_{\phi}}{\partial(\partial^{\mu}\phi)}\partial_{\nu}\phi
\end{equation}
with its energy density and pressure of the form
\begin{align}
\rho_{\phi}=&\Lambda^4\gamma;\\
p_{\phi}=&-\Lambda^4\gamma^{-1}.
\end{align}
In the presence of the conformal coupling term, the conservation equation of the above energy-momentum tensor should be written as
\begin{equation}\label{eq:conservation1}
\nabla^{\mu}T_{\mu\nu}^{\phi}=-\frac{gT_{\mathrm{m}}}{M_{\mathrm{Pl}}}\partial_{\nu}\phi,
\end{equation}
where the temporal component of the above equation reads
\begin{equation}\label{eq:conservation10}
\dot{\rho}_{\phi}+3H(\rho_{\phi}+p_{\phi})=-\frac{g\rho_{\mathrm{m}}}{M_{\mathrm{Pl}}}\dot{\phi},
\end{equation}
which is consistent with the EOM (\ref{eq:EOM}).
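This consistency can be verified symbolically. The sketch below (Python with \texttt{sympy}; we trade $\dot{\phi},\ddot{\phi}$ for plain symbols, which is legitimate since $\rho_{\phi}$ depends only on $\dot{\phi}$) checks that the continuity equation (\ref{eq:conservation10}) is $-\gamma^3\dot{\phi}$ times the EOM (\ref{eq:EOM}):

```python
# Algebraic check: rhodot_phi + 3H(rho_phi + p_phi) + (g*rho_m/M_Pl)*phidot
# equals -gamma^3*phidot times the EOM. Since rho_phi depends only on phidot,
# we use plain symbols pd = phidot, pdd = phiddot (chain rule: rhodot = drho/dpd * pdd).
import sympy as sp

Lam, H, g, MPl, rho_m = sp.symbols('Lambda H g M_Pl rho_m', positive=True)
pd, pdd = sp.symbols('phidot phiddot', real=True)

gamma = 1 / sp.sqrt(1 + pd**2 / Lam**4)   # homogeneous field: (dphi)^2 = -phidot^2
rho_phi = Lam**4 * gamma
p_phi = -Lam**4 / gamma

continuity = sp.diff(rho_phi, pd) * pdd + 3*H*(rho_phi + p_phi) + g*rho_m/MPl*pd
eom = pdd + 3*H*pd/gamma**2 - g*rho_m/(MPl*gamma**3)   # EOM with T_m = -rho_m

assert sp.simplify(continuity + gamma**3 * pd * eom) == 0
print('continuity equation consistent with the EOM')
```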
In the absence of the conformal coupling term, the EOM (\ref{eq:EOM}) has a trivial solution $\dot{\phi}=0$, and the EOS parameter
\begin{equation}
w_{\phi}=\frac{p_{\phi}}{\rho_{\phi}}=-\gamma^{-2}\equiv-1-\Lambda^{-4}\dot{\phi}^2
\end{equation}
would simply imply a cosmological constant with $w_{\phi}=-1$. In the presence of the conformal coupling term, the EOM (\ref{eq:EOM}) cannot admit such a trivial solution $\dot{\phi}=0$ unless the source $T_{\mathrm{m}}$ vanishes identically, which is of little physical interest. Therefore, our DBI scalar should generally behave as a dynamical Chaplygin gas \cite{Kamenshchik:2001cp}, $p_{\phi}=-\Lambda^8/\rho_{\phi}$, with a phantomlike EOS parameter and a superluminal sound speed \cite{Mukhanov:2005bu} $c_s^2=\dot{p}/\dot{\rho}=\gamma^{-2}$, where closed timelike curves are argued to be evaded within the regime of validity of the effective field theory (EFT) due to chronology protection \cite{Burrage:2011cr,Babichev:2007dw}. With the slow-roll condition $\dot{\phi}\ll\Lambda^2$, our DBI scalar could serve as a candidate for the DE sector. We will show below that this slow-roll condition can also be satisfied for a sub-Planckian DBI scalar.
To derive the conservation equation for the matter component, we start with an alternative definition of the energy-momentum tensor for the DBI scalar,
\begin{equation}\label{eq:T2}
T_{\mu\nu}^{\phi+\phi T}=f_{\mu\nu}(\mathcal{L}_{\phi}+\mathcal{L}_{\phi T})-\frac{\partial(\mathcal{L}_{\phi}+\mathcal{L}_{\phi T})}{\partial(\partial^{\mu}\phi)}\partial_{\nu}\phi,
\end{equation}
with its energy density and pressure of the form
\begin{align}
\rho_{\phi T}=&\Lambda^4\gamma+\frac{g\phi}{M_{\mathrm{Pl}}}\rho_{\mathrm{m}};\\
p_{\phi T}=&-\Lambda^4\gamma^{-1}-\frac{g\phi}{M_{\mathrm{Pl}}}\rho_{\mathrm{m}}.
\end{align}
In the presence of the conformal coupling term, the conservation equation of the above energy-momentum tensor should be written as
\begin{equation}\label{eq:conservation2}
\nabla^{\mu}T_{\mu\nu}^{\phi+\phi T}=\frac{g\phi}{M_{\mathrm{Pl}}}\partial_{\nu}T_{\mathrm{m}},
\end{equation}
where the temporal component of the above equation reads
\begin{equation}\label{eq:conservation20}
\dot{\rho}_{\phi T}+3H(\rho_{\phi T}+p_{\phi T})=\frac{g\phi}{M_{\mathrm{Pl}}}\dot{\rho}_{\mathrm{m}},
\end{equation}
which is also consistent with the EOM (\ref{eq:EOM}).
Since the total energy-momentum tensor is conserved, the conservation equation of the energy-momentum tensor of the matter component is thus
\begin{equation}\label{eq:conservation3}
\nabla^{\mu}T_{\mu\nu}^{\mathrm{m}}=-\frac{g\phi}{M_{\mathrm{Pl}}}\partial_{\nu}T_{\mathrm{m}},
\end{equation}
where the temporal component of the above equation reads
\begin{equation}\label{eq:conservation30}
\dot{\rho}_{\mathrm{m}}+3H\rho_{\mathrm{m}}=-\frac{g\phi}{M_{\mathrm{Pl}}}\dot{\rho}_{\mathrm{m}}.
\end{equation}
The source term on the right-hand side of the above equation can be accounted for by recognizing the effective EOS parameter of the matter component as
\begin{equation}
w_{\mathrm{m}}=\frac{1}{1+\frac{g\phi}{M_{\mathrm{Pl}}}}-1.
\end{equation}
Therefore, the backreaction of the DBI field on the matter component due to the conformal coupling term can be safely neglected in the field region $\phi\ll M_{\mathrm{Pl}}$ of the DBI scalar for conformal coupling of order unity. From now on, we will take a fiducial value $g=1$ for the conformal coupling in order to solve the galactic coincidence problem.
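As a numerical illustration (the sample field values are ours), the deviation of the effective matter EOS from zero is indeed at the level of $g\phi/M_{\mathrm{Pl}}$:

```python
# Effective matter EOS w_m = 1/(1 + g*phi/M_Pl) - 1 for sample sub-Planckian
# field values (g = 1); the deviation from pressureless dust is ~ -g*phi/M_Pl.
for x in (1e-1, 1e-2, 1e-3):          # x = g*phi/M_Pl
    w_m = 1.0 / (1.0 + x) - 1.0
    print(x, w_m)
```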
\subsection{Steady flow assumption}
In the rest of this section, we will work with the assumption, called the \emph{steady flow} assumption, that the energy flow from the DBI scalar to the matter component is conserved. We define the energy flow as the energy-momentum tensor associated with the conformal coupling term
\begin{equation}
T_{\mu\nu}^{\phi T}=T_{\mu\nu}^{\phi+\phi T}-T_{\mu\nu}^{\phi}=f_{\mu\nu}\mathcal{L}_{\phi T};
\end{equation}
then the steady flow assumption is expressed as
\begin{equation}\label{eq:conservation4}
\nabla^{\mu}T_{\mu\nu}^{\phi T}=\frac{g}{M_{\mathrm{Pl}}}\partial_{\nu}(\phi T_{\mathrm{m}})=0,
\end{equation}
where the temporal component of the above equation reads
\begin{equation}\label{eq:conservation40}
\dot{\phi}\rho_{\mathrm{m}}+\phi\dot{\rho}_{\mathrm{m}}=0.
\end{equation}
The steady flow assumption simply states that, although the energy-momentum tensors of the DBI field and matter field are not separately conserved as indicated in Eqs. (\ref{eq:conservation1}) and (\ref{eq:conservation3}), there is no loss during the energy transfer from the DBI scalar to the matter component and the total energy-momentum tensor of the DBI field and the matter field is conserved, namely, $\nabla^{\mu}T_{\mu\nu}^{\phi}+\nabla^{\mu}T_{\mu\nu}^{\mathrm{m}}=-\nabla^{\mu}T_{\mu\nu}^{\phi T}=0$. We will justify numerically the steady flow assumption below.
With the steady flow assumption, one can solve the DBI field
\begin{equation}
\phi(a)=\frac{M_{\mathrm{Pl}}}{g}W\left(\frac{g\phi_0}{M_{\mathrm{Pl}}}e^{\frac{g\phi_0}{M_{\mathrm{Pl}}}}\left(\frac{a}{a_0}\right)^3\right)
\end{equation}
analytically by combining Eq. (\ref{eq:conservation30}) with Eq. (\ref{eq:conservation40}), where $\phi_0\equiv\phi(a=a_0)$ with present-day scale factor $a_0\equiv1$ and $W(z)$ is the Lambert W function defined by $z=W(z)\exp[W(z)]$. Hence, the evolution equation (\ref{eq:conservation30}) of the matter component can be directly integrated to give
\begin{equation}
\rho_{\mathrm{m}}(a)=\rho_{\mathrm{m}0}\exp\left(-3\int_{a_0}^{a}\frac{\mathrm{d}\ln a'}{1+W\left(\frac{g\phi_0}{M_{\mathrm{Pl}}}e^{\frac{g\phi_0}{M_{\mathrm{Pl}}}}\left(\frac{a'}{a_0}\right)^3\right)}\right).
\end{equation}
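The defining property $W(xe^x)=x$ guarantees $\phi(a{=}1)=\phi_0$, and $W(z)\simeq z$ for $z\ll1$ gives $\phi\propto a^3$ deep in the matter era; both limits can be checked numerically (Python with \texttt{scipy}; the initial value $\phi_0/M_{\mathrm{Pl}}=10^{-2}$ is one of the fiducial choices used in the figures):

```python
# Limits of the steady-flow solution phi(a) built from the Lambert W function.
import numpy as np
from scipy.special import lambertw

g, MPl = 1.0, 1.0
phi0 = 1e-2                            # fiducial phi_0/M_Pl

def phi(a):
    x = g * phi0 / MPl
    return MPl / g * np.real(lambertw(x * np.exp(x) * a**3))

assert np.isclose(phi(1.0), phi0)      # W(x e^x) = x  =>  phi(a=1) = phi_0
a = 1e-3
assert np.isclose(phi(a), phi0 * np.exp(phi0) * a**3, rtol=1e-2)  # W(z) ~ z
print(phi(1.0), phi(a))
```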
The evolutions of the DBI field, the effective EOS parameter of the matter component, the matter energy density, and the conformal coupling term are presented in Fig. \ref{fig:phi and so on}.
\begin{figure*}
\includegraphics[width=8cm]{phi.pdf}
\includegraphics[width=8cm]{wmeff.pdf}\\
\includegraphics[width=8cm]{rhom.pdf}
\includegraphics[width=8cm]{phirho.pdf}\\
\caption{The evolutions of the DBI field, the effective EOS parameter of the matter component, the matter energy density, and the conformal coupling term with respect to the scale factor for initial conditions $\phi_0/M_{\mathrm{Pl}}=10^{-1},10^{-2},10^{-3}.$}\label{fig:phi and so on}
\end{figure*}
The backreaction of the DBI field on the matter component is negligible during the matter-dominated era as long as a sub-Planckian field value for the DBI field at present is specified. However, the effective EOS parameter of the matter component will eventually approach $-1$ in the future, causing an unavoidable vacuum decay into matter, which, as we will see, saves us from a big rip singularity. The steady flow assumption is justified by a constant conformal coupling term. At small scale factor $a\ll1$, the evolution of the Lambert W function $W(a^3)\sim a^3$ compensates the evolution of the matter component $\rho_{\mathrm{m}}\sim a^{-3}$, rendering a constant conformal coupling term $\phi T_{\mathrm{m}}\sim W(a^3)\rho_{\mathrm{m}}\sim\mathrm{const}$. At large scale factor, the constant behavior of the conformal coupling term is nontrivial.
The evolution of the energy density of the DBI field can be solved numerically by rewriting Eq. (\ref{eq:conservation10}) as
\begin{equation}
\rho'_{\phi}(a)+\frac{3}{a}\left(\rho_{\phi}(a)-\frac{\Lambda^8}{\rho_{\phi}(a)}\right)=-\frac{g\rho_{\mathrm{m}}(a)}{M_{\mathrm{Pl}}}\phi'(a).
\end{equation}
With numerical solution $\rho_{\phi}(a)$, one can evaluate all other quantities like
\begin{align}
w_{\phi}(a)&=-\left(\Lambda^{-4}\rho_{\phi}(a)\right)^{-2};\\
w_{\phi}^{\mathrm{eff}}(a)&=w_{\phi}(a)+\frac{g a}{3M_{\mathrm{Pl}}}\phi'(a)\frac{\rho_{\mathrm{m}}(a)}{\rho_{\phi}(a)};\\
\rho_{\phi T}(a)&=\rho_{\phi}(a)+\frac{g}{M_{\mathrm{Pl}}}\phi(a)\rho_{\mathrm{m}}(a);\\
w_{\phi T}(a)&=\frac{-\frac{\Lambda^8}{\rho_{\phi}(a)}-\frac{g}{M_{\mathrm{Pl}}}\phi(a)\rho_{\mathrm{m}}(a)}{\rho_{\phi}(a)+\frac{g}{M_{\mathrm{Pl}}}\phi(a)\rho_{\mathrm{m}}(a)};\\
w_{\phi T}^{\mathrm{eff}}(a)&=w_{\phi T}(a)-\frac{g a}{3M_{\mathrm{Pl}}}\phi(a)\frac{\rho'_{\mathrm{m}}(a)}{\rho_{\phi T}(a)},
\end{align}
where the effective EOS parameters $w_{\phi}^{\mathrm{eff}}(a)$ and $w_{\phi T}^{\mathrm{eff}}(a)$ of the DBI scalar field are defined by rewriting Eqs. (\ref{eq:conservation10}) and (\ref{eq:conservation20}) in a form without the interacting term,
\begin{align}
&\dot{\rho}_{\phi}+3H(1+w_{\phi}^{\mathrm{eff}})\rho_{\phi}=0;\\
&\dot{\rho}_{\phi T}+3H(1+w_{\phi T}^{\mathrm{eff}})\rho_{\phi T}=0.
\end{align}
The evolutions of the above quantities are plotted in Fig. \ref{fig:rhophi and so on}.
\begin{figure*}
\includegraphics[width=8cm]{rho.pdf}
\includegraphics[width=8cm]{rhoT.pdf}\\
\includegraphics[width=8cm]{wphi.pdf}
\includegraphics[width=8cm]{wphiT.pdf}\\
\includegraphics[width=8cm]{wphieff.pdf}
\includegraphics[width=8cm]{wphiTeff.pdf}\\
\caption{The evolutions of the energy density of the DBI field and their effective EOS parameters with respect to the scale factor for initial conditions $\phi_0/M_{\mathrm{Pl}}=10^{-1},10^{-2},10^{-3}.$}\label{fig:rhophi and so on}
\end{figure*}
The division of the DBI fluid from the matter fluid is somewhat artificial, since the DBI scalar and the matter component are coupled together. However, the difference between the definitions (\ref{eq:T1}) and (\ref{eq:T2}) of the energy-momentum tensor of the DBI scalar is shown to be negligible in Fig. \ref{fig:rhophi and so on}; therefore, we will just stick to Eq. (\ref{eq:T1}) for the sake of simplicity. We also compute the evolution of the Hubble parameter from $3M_{\mathrm{Pl}}^2H(a)^2=\rho_{\phi T}(a)+\rho_{\mathrm{m}}(a)+\rho_{\mathrm{r}}(a)$ and the fractions of energy density $\Omega_i(a)=\rho_i(a)/3M_{\mathrm{Pl}}^2H(a)^2$ in Fig. \ref{fig:Hubble and Omega}.
\begin{figure*}
\includegraphics[width=8cm]{Hubble.pdf}
\includegraphics[width=8cm]{Omega.pdf}\\
\caption{The evolutions of the Hubble parameter and fractions of energy density with respect to the scale factor for initial conditions $\phi_0/M_{\mathrm{Pl}}=10^{-1},10^{-2},10^{-3}.$}\label{fig:Hubble and Omega}
\end{figure*}
It is worth noting that the DBI scalar relaxes its phantom nature by vacuum decay into matter, preventing the matter component from being diluted away and leading to a constant Hubble parameter in the asymptotic future, free of a big rip singularity.
\subsection{Slow-roll conditions}
Last but not least, it is the slow-roll condition
\begin{equation}\label{eq:slow-roll 1}
\frac{\dot{\phi}^2}{\Lambda^4}\ll1
\end{equation}
that allows us to interpret our DBI scalar as a candidate for the dark energy sector.
To evaluate analytically the EOS parameter of our DBI DE, we propose a second slow-roll condition,
\begin{equation}\label{eq:slow-roll 2}
\left|\frac{\ddot{\phi}}{3H\dot{\phi}\gamma^{-2}}\right|\ll1,\qquad\left|\frac{\ddot{\phi}}{\frac{g\rho_{\mathrm{m}}}{M_{\mathrm{Pl}}\gamma^3}}\right|\ll1,
\end{equation}
on the EOM (\ref{eq:EOM}) and find that
\begin{equation}
\dot{\phi}^2\simeq\frac{g^2T_{\mathrm{m}}^2}{9M_{\mathrm{Pl}}^2H^2\gamma^2}.
\end{equation}
Recalling that $\gamma\equiv1/\sqrt{1+\Lambda^{-4}\dot{\phi}^2}$ for a homogeneous field, that the matter component obeys $T_{\mathrm{m}}=-\rho_{\mathrm{m}}=-3M_{\mathrm{Pl}}^2H^2\Omega_{\mathrm{m}}$, and that the galactic coincidence fixes $\Lambda^4=4g^2M_{\mathrm{Pl}}^2H_0^2$, one immediately derives from the above equation the EOS parameter
\begin{equation}\label{eq:EOS}
w_{\phi}=-\gamma^{-2}\simeq\frac{1}{-1+E^2\Omega_{\mathrm{m}}^2/4},
\end{equation}
where the reduced Hubble parameter $E=H/H_0$ is understood and the conformal coupling $g$ surprisingly cancels out. Evaluating Eq. (\ref{eq:EOS}) with the present value of the matter fraction, $\Omega_{\mathrm{m}0}\approx0.3$, one finds the present value of the EOS of our DBI DE,
\begin{equation}\label{eq:w0}
w_{\phi0}\simeq\frac{1}{-1+\Omega_{\mathrm{m}0}^2/4}\approx-1.023,
\end{equation}
perfectly matching the Planck 2015 constraints \cite{Ade:2015xua}. A distinct feature of our DBI DE is that $w_{\phi0}$ and $\Omega_{\mathrm{m}0}$ are strongly correlated, with no other free parameters involved. Although behaving mildly phantomlike at present, our DBI DE will relax its phantom nature by vacuum decay into matter, preventing matter from being diluted away, resulting in a constant Hubble parameter, and leading to a de Sitter future free of a big rip singularity. The validity of the first and second slow-roll conditions (\ref{eq:slow-roll 1}) and (\ref{eq:slow-roll 2}) is presented in Fig. \ref{fig:slow-roll}.
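The quoted number follows directly from Eq. (\ref{eq:w0}):

```python
# Present EOS of the DBI dark energy, w_phi0 = 1/(-1 + Omega_m0^2/4).
Om0 = 0.3
w0 = 1.0 / (-1.0 + Om0**2 / 4.0)
print(round(w0, 3))   # -1.023
```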
\begin{figure}
\includegraphics[width=8cm]{phidot.pdf}\\
\includegraphics[width=8cm]{onebytwo.pdf}\\
\includegraphics[width=8cm]{onebythree.pdf}\\
\caption{The evolutions of the first slow-roll condition $\dot{\phi}^2\ll\Lambda^4$ and the second slow-roll condition $|\ddot{\phi}|\ll3H\dot{\phi}\gamma^{-2}, |\ddot{\phi}|\ll\frac{g\rho_{\mathrm{m}}}{M_{\mathrm{Pl}}\gamma^3}$, with respect to the scale factor for initial conditions $\phi_0/M_{\mathrm{Pl}}=10^{-1},10^{-2},10^{-3}.$}\label{fig:slow-roll}
\end{figure}
\section{Conclusions and discussions}\label{sec:5}
It was recently claimed that axionlike dark matter particles can condense on galactic scales as a superfluid, whose phonons mediate a MONDian force between baryons, so that MOND arises as an emergent phenomenon of dark matter itself. The standard $\Lambda$CDM model is recovered on cosmic scales, where the dark matter particles are in the normal phase instead of the condensed phase. We have proposed to study the possible origin of the MOND critical acceleration scale in the context of dark matter superfluidity. We have introduced a DBI-like scalar field conformally coupled to the matter components. It turns out that the MOND critical acceleration is roughly of the same magnitude as the present Hubble scale, provided that the conformally coupled DBI scalar plays the role of dark energy.
However, one might be concerned with a possible ghost problem in our proposal. In canonical quantum field theory, a Lagrangian with a wrong-sign kinetic term, after canonical quantization, usually admits negative-norm states with negative energy, namely, ghost states. If no other fields couple directly to the ghost field, it causes no trouble. However, if fields with a correct-sign kinetic term couple directly to the ghost field, the vacuum becomes unstable, because it can generate a pair of ghost particles with negative energy together with a pair of normal particles with positive energy. We argue that the possible ghost problem might not be as pronounced as it appears, due to the following three features of our model. First, the Hamiltonian density turns out to be positive and bounded below, which suggests that there might be a stable vacuum where ghost particles can condense. Second, the equation of motion is second order in time derivatives, which might evade the ghost problem from the viewpoint of Ostrogradsky's theorem. Third, even if the ghosts indeed exist, they couple to the matter fields only indirectly, via the trace of the energy-momentum tensor. Since the matter fields act as a source term, there are simply no sources for ghosts to be generated when the DBI-like scalar field comes to dominate. This might explain why the equation of state of our DBI dark energy approaches $-1$ in the end. Therefore, our model should be treated as a phenomenological model requiring further study in the future.
\begin{acknowledgments}
S.J.W. would like to thank Lasha Berezhiani and Alexander Vikman for helpful correspondences and Bin Hu, Jian-Wei Hu, Qi Guo, and Run-Qiu Yang for helpful discussions. We would like to thank an anonymous referee for greatly improving the presentation and validity of the paper.
R.G.C. is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB09000000.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction: The Meaning of Structure}
Distributed controller design concerns the imposition of architectural constraints on a feedback controller while attempting to stabilize, and possibly optimize, the closed-loop performance of a given system, called the {\em plant}. The problem only arises when the plant is multi-input and multi-output, and the standard notion of architectural constraints implies that certain elements of the controller transfer function matrix are forced to be zero.
Distributed controller design concerns the imposition of architectural constraints on a feedback controller while attempting to stabilize, and possibly optimize, the closed-loop performance of a given system, called the {\em plant}. The problem only arises when the plant is multi-input and multi-output, and the standard notion of architectural constraints implies that certain elements of the controller transfer function matrix are forced to be zero.
Although the sparsity pattern of a transfer function is certainly one notion of a system's structure, it is typically the weakest form of system structure considered. There are other notions of system structure, such as the interconnection pattern of subsystems or the sparsity pattern of a state space realization that are stronger structural concepts \cite{enoch:math_rel1,enoch:math_rel2}. Here we say they are stronger structural concepts in the sense that the interconnection of subsystems or a particular state space realization determines the sparsity pattern of the associated transfer function, but not the other way around.
In this paper we consider another notion of structure, called the signal structure of the system, that is both stronger than the sparsity pattern of the transfer function and weaker than the sparsity pattern of the system's state space realization. If we take these two system representations as extremes, with the sparsity pattern of the state realization being the {\em complete computational structure} of the system while the sparsity pattern of the transfer function may contain little (if any) structural information, then the signal structure sits squarely between the two in terms of its structural informativeness.
The signal structure is encoded by a representation of linear time invariant systems called the dynamical structure function (DSF) \cite{TAC08}. Since all representations of the system, whether a state realization, DSF, or transfer function, describe the system's behavior or dynamic response to inputs equally well, these representations really differ in how much structural information they convey about the system. As a result, the DSF is a {\em partial-structure} representation of the system.
Although these ideas will be made precise in the sequel, intuitively the system's DSF describes the {\em open-loop} causal dependencies among manifest variables (inputs and outputs), whereas the transfer function describes the {\em closed-loop} dependencies from inputs to outputs. Thus, while a DSF may be intricately structured, its corresponding transfer function may be fully connected, essentially exhibiting no particular structure (see Figure 1). This is why many interesting distributed control problems are not described well by imposing sparsity constraints on the controller's transfer function.
\begin{figure}[ht]
\centering
\subfigure[]{
\includegraphics[trim = 0 .4in 0 .2in, scale=.33]{zpcyclicweak.pdf}
\label{fig:ctrl_tf}
}
\subfigure[]{
\includegraphics[trim = 0 .4in 0 .2in, scale=.43]{cycle.pdf}
\label{fig:ctrl_dsf}
}
\caption{Two distinct notions of structure for the same system. The top figure indicates that the transfer function, evidently a $3\times 3$ matrix $G(s)$, is full and unstructured, while the bottom figure indicates that the signal structure, represented by the dynamical structure function with two $3\times 3$ matrices $Q(s)$ and $P(s)$ where $G(s)=(I-Q(s))^{-1}P(s)$, is sparse and definitively structured. Note that the bottom figure may represent communication links, and since there is a pathway from every input to every output, the associated transfer function may be full, as in the top figure.}
\label{fig:ctrl_required}
\end{figure}
This paper describes a technique for designing stabilizing controllers with a particular signal structure for a given plant, or demonstrating that no such controller exists. The next section discusses related work, while the following section details mathematical preliminaries regarding dynamical structure functions as a partial structure representation of linear time invariant systems. We then present the design procedure and the main result, which proves that the design procedure delivers a stabilizing controller with the desired structure if possible. Examples and conclusions follow.
\section{Related Work}
One of the first results on the existence of a decentralized controller was given in \cite{wang_davison}. It developed the idea of {\em fixed modes} and showed that a decentralized controller exists if and only if the system had no unstable fixed modes. More precisely, it showed that a system $(A,B,C)$ is stabilizable with a diagonal or block diagonal controller $K$ if and only if $A-BKC$ does not have any unstable eigenvalues that cannot be moved by changing the nonzero entries of $K$. This result was extended in \cite{siljak:decentralized_control} by showing that this is in fact true for any distributed controller $K$, not just for diagonal and block diagonal. The authors also present methods to synthesize the decentralized stabilizing controller.
In \cite{lall:qi} the authors show that if the structure of the transfer function matrices of the plant and the controller meets the {\em quadratic invariance} condition then the problem of synthesizing the optimal controller is convex. In \cite{lall:qi2} the authors show that the quadratic invariance condition is necessary and sufficient for the problem of synthesizing the optimal controller to be convex. This method requires a decentralized stabilizing controller to initialize the convex optimization problem, so to complete the process, an algorithm to obtain such a controller is provided in \cite{nuno:qi}.
A different type of distributed controller design has been proposed in \cite{nicola:reliazable_ctrl}. The approach taken in this paper enforces the controller to have the same network structure as the plant. The structure in this paper is defined as the constraint on the interconnection of sub-systems, or the subsystem structure. Hence, the plant and the controller can share the same communication network reducing the implementation cost. An algorithm to synthesize a sub-optimal controller with such structure is also provided in this paper.
In this paper we introduce a similar, but a more general controller design problem. Instead of the controller having to have the same structure as the plant, we allow it to have any structure. Also, the structure is defined as a constraint on the signal structure. In Figure \ref{fig:ctrl_required} we show an example of a plant and a corresponding controller structure that we might want to have. When a controller has such a structure, we can see that all the controller units affect each other directly or indirectly, hence, the controller transfer function matrix is completely full. As a result, using the usual approach of placing binary constraint on the controller transfer function will produce a centralized controller as shown in \ref{fig:ctrl_obtained}. Also, most of these setups do not meet the quadratic invariance criterion. In this paper, we will show that these controllers can be obtained by placing binary constraints on the dynamical structure function of the controller.
\begin{figure}[ht]
\centering
\includegraphics[scale=.5]{ring_plant_controller.pdf}
\caption{Plant with the signal structure as in Figure \ref{fig:ctrl_dsf} interconnected with controller with a particular desired distributed structure.}
\label{fig:ctrl_required}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale = .5]{ring_tf.pdf}
\caption{Since the desired signal structure for the controller in Figure \ref{fig:ctrl_required} yields a full transfer function, other design methods yield a centralized controller.}
\label{fig:ctrl_obtained}
\end{figure}
In \cite{sequential1}, \cite{sequential2}, etc., sequential design methods have been used to construct decentralized controllers. Although these methods do not produce the optimal controller, they provide an efficient method to synthesize a nominal stabilizing controller with a desired decentralized sparsity pattern in its transfer function. We will use a similar strategy to design a stabilizing controller with constraints on the signal structure in Section \ref{sec:main}. In the event that this process cannot produce a stabilizing controller, we will show that there is no controller of the given signal structure that stabilizes the plant.
\section{dynamical structure functions}
dynamical structure functions is a representation for linear time invariant systems developed in \cite{sean:dsf}. It gives a partial representation of the structure of the system, namely how the inputs affect the manifest states and how the manifest states affect each other. We also call it this representation the signal structure of the system. A brief derivation is provided below.
Let us consider a state-space LTI system
\begin{align}\label{eqn:trans_sys}\begin{bmatrix}\dot{y} \\ \dot{x} \end{bmatrix} &= \begin{bmatrix}A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix}y \\ x \end{bmatrix}+ \begin{bmatrix}B_1 \\ B_2 \end{bmatrix} u \\
y &= \begin{bmatrix}I & 0 \end{bmatrix} \begin{bmatrix}y\\ x \end{bmatrix}, \nonumber
\end{align}
Here $y$ are the states that are measured, and $x$ are the hidden states. Note that the assumption in the second equation is made for notational convenience. For a detailed derivation please see \cite{sean:csm}.
Now, taking Laplace Transforms of the signals in (\ref{eqn:trans_sys}), we get
\begin{align}\label{eqn:laplace_sys}\begin{bmatrix}sY \\ sX \end{bmatrix} &= \begin{bmatrix}A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix}Y \\ X \end{bmatrix}+ \begin{bmatrix}B_1 \\ B_2 \end{bmatrix} U.
\end{align}
Solving for X in the second equation of \ref{eqn:laplace_sys} gives $$X=(sI-A_{22})^{-1} A_{21}Y + (sI-A_{22})^{-1}B_2U$$
Substituting into the first equation of (\ref{eqn:laplace_sys}) we get,
$$sY = WY + VU,$$ where $W=A_{11} + A_{12}(sI-A_{22})^{-1}A_{21}$ and $V=A_{12}(sI-A_{22})^{-1}B_2 + B_1$.
Let $D$ be a diagonal matrix with the diagonal entries of $W$. Then, $$(sI-D)Y = (W-D)Y+VU.$$Now we can rewrite this equation as, \begin{equation}\label{eqn:dsf} Y=QY+PU, \end{equation} where
$$Q = (sI-D)^{-1}(W-D)$$ and $$P=(sI-D)^{-1}V.$$
The matrix $Q$ is a matrix of transfer functions from $Y_i$ to $Y_j$, $i\ne j$, or relating each measured signal to the other measured signals. A nonzero entry in $Q_{ji}$ says that the signal $Y_i$ affects the signal $Y_j$ either directly or through some hidden states. Note that $Q$ is zero on the diagonal and either zero or a strictly proper transfer function on the off diagonal. The matrix $P$ is a matrix of zeros or strictly proper transfer functions from each input to each output without depending on any additional measured states. Together, the pair $(Q(s),P(s))$ is called the {\em dynamical structure function} for system (\ref{eqn:trans_sys}). The transfer function matrix for this system is given by $$G = (I-Q)^{-1}P = C(sI-A)^{-1}B.$$ Hence, DSF can also be seen as an interconnection of the systems $Q$ and $P$ as shown in Figure \ref{fig:dsf as feedback}. Also, note that if $Q=0$, $G=P$.
\begin{figure}[ht]
\centering
\includegraphics[scale=.7]{qp_interconnection.pdf}
\caption{DSF can be viewed as an interconnection of two systems characterized by the transfer function matrices $Q$ and $P$, where $Q$ is a hollow transfer function matrix. The transfer function from $u$ to $y$ is given by $G = (I-Q)^{-1}P$. }
\label{fig:dsf as feedback}
\end{figure}
\begin{example}
Let us consider a system given by the following state space equations
\begin{align*}
\dot{\begin{bmatrix}x_1 \\ x_2 \\ x_3\end{bmatrix}} &= \begin{bmatrix}1 & 0 & 3 \\ 0 & 2 & 3 \\1 &3 &2\end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \\ x_3\end{bmatrix} + \begin{bmatrix} 1 & 0\\0 & 1\\0 & 0\end{bmatrix}\\
y &= \begin{bmatrix}1 & 0 & 0\\0 & 1 & 0\end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \\ x_3\end{bmatrix}
\end{align*}
The corresponding DSF is given by
\begin{align*}Q &= \left(\begin{array}{cc} 0 & -\frac{9}{ - s^2 + 3\, s + 1}\\ \frac{3}{\left(s + 1\right)\, \left(s - 5\right)} & 0 \end{array}\right) \text{ and } \\P &= \left(\begin{array}{cc} -\frac{s - 2}{ - s^2 + 3\, s + 1} & 0\\ 0 & \frac{1}{2\, \left(s + 1\right)} + \frac{1}{2\, \left(s - 5\right)} \end{array}\right).\end{align*}
Here, $x_1$ and $x_3$ are the manifest states, and $x_3$ is the hidden shared state. $x_3$ is called the shared state because it is shared between the two links in $Q$.
\end{example}
In this paper, the structure of a controller is defined as a sparsity constraint on the $Q$ matrix; we assume, for the ease of exposition, that the $P$ matrix to be diagonal. We will use the binary matrices $(Q^{bin}, P^{bin})$ to represent the sparsity of the desired controller. The $(i,j)^{th}$ element of $Q^{bin}$, $q^{bin}_{ij} = 1$ if the $j^{th}$ controller unit can communicate with the $i^{th}$ controller unit. Similarly, $p^{bin}_{ij}=1$ if the $j^{th}$ plant unit communicates with the $i^{th}$ controller unit. $K^{bin}$ represents a structural constraint on the transfer function of the controller.
\begin{example}
Using this notation, the desired controller in Figure 1 is given by:
$$P^{bin}=\begin{bmatrix}1& 0 & 0 \\ 0 & 1& 0 \\ 0 & 0 & 1 \end{bmatrix}$$ and
$$Q^{bin}=
\begin{bmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{bmatrix}.
$$
\renewcommand\arraystretch{1.2}
Let us assume that the transfer function $Q_{ij} = q_{ij}$ if $Q^{bin}_{ij}$ = 1, and $Q_{ij} =0$ otherwise, and similarly $P_{ij} = p_{ij}$ if $P^{bin}_{ij}$ = 1, and $P_{ij} =0$ otherwise. The corresponding transfer function matrix for this controller is given by, $(Q^{bin}, P^{bin})$
\begin{align*}
K &= (I-Q_k)^{-1}P_k \\
&=\left[\begin{array}{ccc}
-\frac{{p_{11}}}{{q_{13}}\, {q_{21}}\, {q_{32}} - 1} & -\frac{{p_{12}}\, {q_{13}}\, {q_{32}}}{{q_{13}}\, {q_{21}}\, {q_{32}} - 1} & -\frac{{p_{13}}\, {q_{13}}}{{q_{13}}\, {q_{21}}\, {q_{32}} - 1}\\
-\frac{{p_{11}}\, {q_{21}}}{{q_{13}}\, {q_{21}}\, {q_{32}} - 1} & -\frac{{p_{12}}}{{q_{13}}\, {q_{21}}\, {q_{32}} - 1} & -\frac{{p_{13}}\, {q_{13}}\, {q_{21}}}{{q_{13}}\, {q_{21}}\, {q_{32}} - 1}\\
-\frac{{p_{11}}\, {q_{21}}\, {q_{32}}}{{q_{13}}\, {q_{21}}\, {q_{32}} - 1} & -\frac{{p_{12}}\, {q_{32}}}{{q_{13}}\, {q_{21}}\, {q_{32}} - 1} & -\frac{{p_{13}}}{{q_{13}}\, {q_{21}}\, {q_{32}} - 1}
\end{array}\right]
\end{align*}
We can see that this transfer function matrix is full, hence this controller cannot be obtained by placing binary constraints on the transfer function matrix.
Quadratic Invariance results presented in \cite{lall:qi} provide a method to place other types of constraint on the transfer function. For the structure given in this example the constraints are as follows:
\begin{align} \label{constraint}
\frac{k_{21}}{k_{11}} = \frac{k_{32}}{k_{13}}, \frac{k_{31}}{k_{21}} = \frac{k_{32}}{k_{22}}, \text{ and } \frac{k_{12}}{k_{32}} = \frac{k_{13}}{k_{33}}
\end{align}
Let us assume that plant has the structure as shown in Figure \ref{fig:ctrl_required}. If $\bar{p}_{ij}$ and $\overline{q}_{ij}$ represents the transfer functions on the DSF of the plant, the transfer function matrix for the plant is given by
$$G = \left[\begin{array}{ccc} -\frac{\bar{p}_{11}\bar{q}_{12}\bar{q}_{32}}{\bar{q}_{12}\bar{q}_{31}\bar{q}_{32} - 1} & -\frac{\bar{p}_{22}}{\bar{q}_{12}\bar{q}_{31}\bar{q}_{32} - 1} & - \frac{\bar{p}_{33}\bar{q}_{12}}{\bar{q}_{12}\bar{q}_{31}\bar{q}_{32} - 1}\\
-\frac{\bar{p}_{11}\bar{q}_{32}}{\bar{q}_{12}\bar{q}_{31}\bar{q}_{32} - 1} & -\frac{\bar{p}_{22}\bar{q}_{31}\bar{q}_{32}}{\bar{q}_{12}\bar{q}_{31}\bar{q}_{32} - 1} & -\frac{\bar{p}_{33}}{\bar{q}_{12}\bar{q}_{31}\bar{q}_{32} - 1}\\
-\frac{\bar{p}_{11}}{\bar{q}_{12}\bar{q}_{31}\bar{q}_{32} - 1} & -\frac{\bar{p}_{22}\bar{q}_{31}}{\bar{q}_{12}\bar{q}_{31}\bar{q}_{32} - 1} & -\frac{\bar{p}_{33}\bar{q}_{12}\bar{q}_{31}}{\bar{q}_{12}\bar{q}_{31}\bar{q}_{32} - 1}\end{array}\right].$$
By computing the product $Z = KGK$ we can see that $$\frac{z_{21}}{z_{11}} \ne \frac{z_{32}}{z_{13}}.$$ This violates the constraints given in Equation (\ref{constraint}), hence, the plant and the controller are not quadratically invariant and the algorithm in \cite{lall:qi} cannot be used to construct such controllers.
\end{example}
\renewcommand\arraystretch{1}
\section{Main Result}\label{sec:main}
In this section, we present a procedure to design a controller $(Q,P)$ with a structure given by $(Q^{bin}, P^{bin})$ to stabilize a plant with the transfer function matrix $G$. The procedure is as follows:
\\{\bf Procedure $\mathbb{P}$}
\begin{enumerate}
\item Choose an undesigned link $p_{ij}$ such that $p^{bin}_{ij} = 1$
\item Design $p_{ij}$ to stabilize $g_{ji}$ such that there is no pole zero cancellation in $PG$. That is, the controller link is designed such that it stabilizes the transfer function it sees, and there is no pole-zero cancellation.
\item After adding $p_{ij}$, if the closed loop system $(G,P)$ is still unstable, repeat for all $p_{xy}$, $p^{bin}_{xy}=1$.
\item If the closed loop system $S$, formed by adding $P$ in feedback with $G$, is still unstable, add links in $Q^{bin}$ such that there is no pole-zero cancellation between $Q$ and $S$.
\end{enumerate}
\begin{theorem}
Given a transfer function matrix, $G$, and a desired signal structure for a feedback controller characterized by $(Q^{bin}, P^{bin})$, Procedure $\mathbb{P}$ either delivers a stabilizing controller with the desired structure or no such controller exists.
\end{theorem}
This theorem says that if the controller obtained using this procedure does not stabilize the plant, then there is no controller of the given structure that can stabilize it. Hence, this procedure provides a test for the existence of a structured stabilizing controller, and if such a controller exists, it synthesizes a nominal stabilizing controller that meets the structural constraint. Before proving this theorem, we will prove some lemmata.
\begin{lemma}
\label{lem:cannotAffect}
Let $K$ be the controller transfer function. A link $k_{ij}$ cannot affect a mode of the plant $G$ that is not observable or controllable from this link.
\end{lemma}
\begin{proof}
Let, $$G=\left[\begin{array}{c|c} A & B \\ \hline C & D \\ \end{array}\right] \text{ and } k_{ij}=\left[\begin{array}{c|c}A_k & B_k \\ \hline C_k & 0 \end{array}\right].$$ Since we are only adding one link, both of these systems are SISO.
Using the Kalman decomposition on $G$, we can transform it such that
$$A = \begin{bmatrix} A_{co} & 0 & A_{\times o} & 0 \\
A_{c \times} & A_{c\bar{o}} & A_{\times \times} & A_{\times \bar{o}} \\
0 & 0 & A_{\bar{c}o} & 0 \\
0 & 0 & A_{\bar{c} \times} & A_{\bar{c} \bar{o}} \end{bmatrix},
B = \begin{bmatrix}B_{co} \\ B_{\bar{c}o} \\ 0 \\ 0\end{bmatrix}$$
$$C = \begin{bmatrix} C_{co} & 0 & C_{c \bar{o}} & 0\end{bmatrix}, \text{ and } D=d.$$ Here, the eigenvalues of $A_{c\bar{o}}$, $A_{\bar{c}o}$, and $A_{\bar{c}\bar{o}}$ are the modes of $G$ that are unobservable, uncontrollable, and both respectively from feedback link $k_{ij}$.
The closed loop modes are given by the eigenvalues of the following matrix:
\begin{align*}A_{cl} &= \begin{bmatrix}A & BC_k \\ B_kC & A_k+B_kDC_k\end{bmatrix}\\
&= \left[\begin{array}{ccccc}
A_{co} & 0 & A_{\times o} & 0 & B_{co}C_k \\
A_{c \times} & A_{c\bar{o}} & A_{\times \times} & A_{\times \bar{o}} & B_{c \bar{o}}C_k \\
0 & 0 & A_{\bar{c}o} & 0 & 0\\
0 & 0 & A_{\bar{c} \times} & A_{\bar{c} \bar{o}} & 0 \\
B_kC_{co} & 0 & B_k C_{c\bar{o}} & 0 & A_k+B_kDC_k
\end{array}\right] \end{align*}
Transforming this matrix using the permutation $$T=\begin{bmatrix}0 & 1 & 0 & 0 & 0\\1 & 0 & 0 & 0 & 0\\0& 0 &0 & 0& 1 \\0 & 0 & 0 & 1 & 0\\0 & 0 & 1 & 0 & 0\end{bmatrix},$$ we get,
\begin{align*}
A_{clT} &= TA_{cl}T'\\ &=
\left[\begin{array}{ccccc}
A_{c\bar{o}}&A_{c \times} & B_{c \bar{o}}C_k & A_{\times \bar{o}} & A_{\times \times}\\
0 & A_{co} & B_{co}C_k & 0 & A_{\times o} \\
0 & B_kC_{co} & A_k+B_kDC_k & 0 & B_k C_{c\bar{o}} \\
0 & 0 & 0 & A_{\bar{c} \bar{o}} & A_{\bar{c} \times} \\
0 & 0 & 0 & 0 & A_{\bar{c}o}
\end{array}\right]
\end{align*}
We can see that $A_{clT}$ is block triangular, and the uncontrollable or unobservable modes, namely the eigenvalues of $A_{\bar{c}o}, A_{c\bar{o}}$, and $A_{\bar{c}\bar{o}}$, are not affected by the choices of $A_k, B_k$, or $C_k$.
\end{proof}
This result shows that when a controller link is added to the system such that it stabilizes all the modes that it can control and observe, it cannot destabilize other modes of the system that are already stable. Now, the following lemma gives a necessary and sufficient condition for the existence of the controller with transfer function structure $K^{bin}$.
\begin{lemma}
\label{lem:existence}
There exists a controller with pattern $K^{bin}$ that stabilizes a plant $G$ if and only if every unstable mode of $G$ is controllable and observable from at least one link $k_{ij}$, $k^{bin}_{ij} = 1$.
\end{lemma}
\begin{proof}
From Lemma \ref{lem:cannotAffect}, we know that a link in the feedback controller cannot affect the uncontrollable or unobservable modes. Hence, any controller that stabilizes a given $G$ must have links such that all the unstable modes are both controllable and observable from at least one of the controller link. Also, if every unstable mode is controllable and observable from some controller links, these links can stabilize the plant.
\end{proof}
lemmata \ref{lem:cannotAffect} and \ref{lem:existence} allow us to add links in $P$, since adding a link in $P$ cannot change the controllability/observability of the plant for the other links in $P$. However, adding these links might cause the links in $Q$ to lose controllability or observability of some of the modes, because links in $Q$ are added on top of the links in $P$. Also, the links in $Q$ themselves can create controllability/observability issues for subsequent links in $Q$.
Loss of observability/controllability can happen for two reasons: structurally or by exact cancellations. If it happens because of structural reasons, the system stays uncontrollable/unobservable for any choice of $P$ or $Q$ as long as it has the same structure. However, if the problem occurs because of exact cancellations, we can avoid these issues by a proper choice of the transfer function. Lemma \ref{lem:pzcancellation} provides a methodology to design $P$ and $Q$ such that these cancellations are prevented. We will use the following result from \cite{mimostability} to prove the lemma.
\begin{theorem} \label{mimo_stability}
Let $G$, $H$ be proper rational transfer function matrices and suppose that $det[I+G(\infty)H(\infty)]\ne0$. Then all the poles of the transfer function matrix $$W=\begin{bmatrix} (I+HG)^{-1} & -H(I+GH)^{-1} \\ G(I+HG)^{-1} & (I+GH)^{-1}\end{bmatrix}$$ are stable if and only if
\begin{itemize}
\item $GH$ has no unstable pole-zero cancellation, and
\item all the poles of $(I+GH)^{-1}$ are stable.
\end{itemize}
\end{theorem}
\begin{proof}
See \cite{mimostability} Theorem 5.
\end{proof}
\begin{lemma}\label{lem:pzcancellation}
Loss of controllability/observability can be prevented from each link in $Q$ if pole-zero cancellations are avoided in $PG$ and $QS$. Here, $S$ is the closed loop transfer function that $Q$ observes and controls.
\end{lemma}
\begin{figure}[ht]
\centering
\includegraphics[scale=.8]{Qcl.pdf}
\caption{After designing $P$, the plant as seen by $Q$ is given by $S=(I-PG)^{-1}$.}
\label{fig:qcl}
\end{figure}
\begin{proof}
The transfer function that $Q$ observes for the closed loop system formed by adding $P$ in feedback with $G$ is given by $S=(I-PG)^{-1}$ as shown in Figure \ref{fig:qcl}.
Using the Theorem \ref{mimo_stability}, since there is no pole zero cancellations in $PG$, the closed loop system is stable if and only if $S$ is stable. Which says that this transfer function has all the poles of the system. Hence $Q$ observes and controls all the poles of the system after adding all the links in $P$ if there is no pole zero cancellation in $PG$.
Similarly, when adding the links in $Q$ if there is no pole zero cancellation in $QS$ the controllability and observability properties are maintained. That is, if a mode is observable/controllable from a link $Q_{ij}$ for some choices of the other links in the controller, then choosing the links in this fashion will keep the mode observable/controllable from $Q_{ij}.$
\end{proof}
Now we will present the proof of Theorem 1:
\begin{proof}
For every controller link that is added, either in $P$ or $Q$, it stabilizes all the modes that are controllable and observable. Also, by Lemma \ref{lem:cannotAffect}, a newly added link cannot destabilize a mode that was already stable. Hence with every new link added to the system, the number of unstable modes either decreases or stays the same.
If every unstable mode in the system is controllable and observable by some link, it gets stabilized. If the plant has an unstable mode that is uncontrollable and unobservable from every link in $P$ and $Q$, then by Lemma \ref{lem:existence}, there is no controller with the given pattern that stabilizes the plant. Also, since the added links satisfy the conditions in Lemma \ref{lem:pzcancellation}, if a mode is controllable or observable from a link for any choices previously added links, then it is controllable and observable.
\end{proof}
\section{Specific Examples}
In this section we use Procedure $\mathbb{P}$ to identify plants that are stabilizable or not stabilizable by controllers with some specific structural constraints.
\subsection{Controllers with a cyclic structure}
A cycle in the controller can be represented by the following binary constraints: \begin{align*}
P_{cyl}^{bin}&=\begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & \vdots\\0 & 0 & \ddots & 0 \\ 0 & \cdots & 0& 1 \end{bmatrix}_{n\times n} \text{ and },\end{align*}
\begin{align*}Q_{cyl}^{bin}&=\begin{bmatrix}0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & \vdots\\ 0 & 0 & 0 & \ddots & 0\\ 0 & 0 & \cdots & 0 & 1\\ 1 & 0 & 0 & \cdots & 0\end{bmatrix}_{n \times n}.\end{align*} For such constraints on the controller we can prove the following result.
\begin{corollary}
If an $n \times n$ plant is detectable and stabilizable, there always exists a stabilizing controller with the structure $(Q_{cyl}^{bin}, P_{cyl}^{bin})$ .
\end{corollary}
\begin{proof}
When all the links in $P$, and all but the last one in $Q$ is added, all the remaining unstable modes of the system must be observable and controllable from th last link in $Q$. This happens because when adding links in the controller we satisfy the conditions in \ref{lem:pzcancellation} avoiding any pole zero cancellations. Hence, if a link $Q_{i+1,i}$ is added then all the modes that are observable at $y_i$ are also observable at $y_{i+1}$, and all the modes that are controllable from $u_{i+1}$ are also controllable from $u_i$.
\end{proof}
\subsection{Systems that are not stabilizable by a diagonal controller}
We know that not all plants can be stabilized by a diagonal controllers. To study these systems one might want to generate plants that fall in this category. We can use our results to design such systems.
From Lemma \ref{lem:existence}, we know that a detectable and stabilizable plant can be stabilized by a diagonal controller if and only if a mode of the system that is controllable from input $i$ is also observable at the output $i$. Hence, a plant cannot be stabilized by a diagonal controller if there is a node that is observable only at output $i$ and controllable only from input $j$, $i\ne j$. For example, the following system cannot be stabilized by a diagonal controller:
\begin{align*}
\dot{x} &= \begin{bmatrix}1 & 0 & 0\\ 1 & 2 & 3 \\ 1 & 0 & 3\end{bmatrix}x + \begin{bmatrix}1 & 0\\0 & 1 \\ 0 & 0\end{bmatrix}\begin{bmatrix}u_1 \\ u_2\end{bmatrix}\\
\begin{bmatrix}y_1 \\ y_2\end{bmatrix}&=\begin{bmatrix}1 & 0 & 0\\0 & 1 & 0\end{bmatrix} x
\end{align*}
This system has the modes at \{1,2,3\}. Using the Popov-Belevitch-Hautus (PBH) tests for controllability and observability, we can see that the mode 3 is controllable only from input $u_1$ and observable only at output $y_2$. Hence a diagonal controller cannot satisfy the condition given in Lemma \ref{lem:existence}.
\section{Conclusion}
In this paper, we presented an algorithm to construct stabilizing controllers with a given signal structure. We also showed that if the procedure fails to produce a stabilizing controller, the plant cannot be stabilized with a controller with the given structure.
We note that this procedure might not be a practical method for generating stabilizing controllers. This method does not provide any optimality guarantees. Also, if synthesis techniques LQG is used to construct the controller links, the order of the transfer function on these links grows exponentially. Hence, we need to develop a controller synthesis technique that produce a low order controller. These issues will be addressed in the future research. Nevertheless, this paper introduces a new kind of decentralized control problem which is very important for networked systems, and gives a nominal solution for it.
\section{Acknowledgment}
We gratefully acknowledge the generous support of AFRL grants FA8750-09-2-0219 and FA8750-11-1-0236.
|
2,877,628,088,330 | arxiv | \section{Introduction}
\label{sec:intro}
The area of complex networks
\cite{Albert2002,Dorogovtsev2002,Newman2003}, which can be viewed as
an intersection between graph theory and statistical mechanics, has
been marked by many theoretical advances and relevant applications
over the last few years. New concepts such as the hubs, i.e. nodes
with particularly high degree, had major impact for understanding and
re-interpreting problems such as essentiality \cite{Jeong2001} and
resilience to attacks \cite{Schwartz2002}. Applications of complex
networks have appeared in widely diverse areas, ranging from the
Internet \cite{Albert1999} to networks of Jazz artists
\cite{Gleiser2003}. Because of its special importance to human
communication, culture, and even intelligence, the representation and
analysis of written texts in terms of graphs and complex networks
offers a promising (and challenging) research opportunity for the
forthcoming years. The application of concepts and tools from
mathematics, physics and computer science to the analysis of texts is
not new and includes approaches generally associated with first-order
statistics of words and other elements obtained from texts. With the
availability of databases accessible through the Internet,
unprecedented possibilities for such investigations are now open. For
instance, considering first-order statistics, Gon\c{c}alves \& Gon\c{c}alves
have identified the works of renowned English writers
\cite{Goncalves2005}, Montemurro \& Zanette have grouped words based
on their linguistic role in a corpus \cite{Montemurro2002}, and Zhou
\& Slater have proposed a method to measure the relevance of words in
a text \cite{Zhou2003}. Indeed, first-order analysis does provide
valuable information about the global and specific features of most
texts.
We believe that further insights can be obtained by using higher-order
statistics in order to enhance the context representation, to which
the concept of complex networks is closely related. More specifically,
each word in a text can be represented as a node, while subsequent
words define associations, or edges, between such nodes (this model is
known as word adjacency/co-ocurrence network). Typically, pairs of
subsequent words, excluding articles and other connecting words, are
considered, implying a Markov model with unity length memory. Larger
Markov memory lengths can be obtained by considering tuples of
subsequent words. Because the networks incorporate the most immediate
associations between words and concepts, their topology - quantified
by several measurements~\cite{Costa2005a} such as node degree,
clustering coefficient and shortest path - can provide information on
some properties of the text, such as style and authorship. A series of
studies indicate that word adjacency networks
\cite{Cancho2001,Dorogovtsev2001,Allegrini2003,Milo2004}, semantic
networks
\cite{Albert2002,Steyvers2001,Kinouchi2001,Sigman2002,Motter2002,Holanda2004,Dorow2005},
word association networks \cite{Steyvers2001,Costa2004,Capocci2004}
and syntactic networks \cite{Cancho2004,Cancho2005} are graphs that
show features present in classical examples of complex networks, such
as the World Wide Web and social networks. One of the important
consequences of such studies is the presence of hubs in linguistic
networks.
In this study we investigate the possibility of automated evaluation
of text quality using topological measurements extracted from the
corresponding complex networks. We consider three criteria for scoring
texts which are related to text quality, namely i) coherence and
cohesion, (ii) adherence to standard writing conventions and (iii)
theme adequacy/development. These are the criteria generally employed
to mark essays for high-school students applying to enter the
university in Brazil. Complex networks are obtained from such texts by
considering the proximity between words, and the indegree and
outdegree, the clustering coefficient and shortest path distributions
are estimated for each text. Such measurements are estimated after the
full construction of the networks, while the number of connected
components is monitored during their growth, yielding a topological
feature which is a function of the number of added word
associations. All the measurements are correlated with grades assigned
by human experts. The results indicate that, despite the many
parameters and unavoidable subjectivity of human language, such an
approach presents potential to be used as a subsidy for a more
objective and reproducible means to evaluate text quality.
\section{Text assessment by human subjects}
\label{sec:assess}
One set of 40 pieces of text has been selected, which comprise essays
on the same subject - influence from TV on our society - written in
Brazilian Portuguese by high school students. All pieces of text have
approximately the same size, with an average of 228 words. A panel of
five human judges, all of which are computational linguists, analyzed
the texts using three criteria to mark them, namely (i) coherence and
cohesion, (ii) adherence to standard writing conventions and (iii)
theme adequacy/development, henceforth referred to as $CC$, $SWC$ and
$TAD$, respectively. The judges assigned marks from 0 to 10 to each
text for the three criteria, and did not receive any instruction as to
reference values or how these criteria should be rated. Not
surprisingly, there is large dispersion among the marks given, as
illustrated in Fig.~\ref{fig:scores}, where the five marks are shown
in the vertical axes for each of the 40 texts (horizontal axes). The
texts were sorted from left to right according to an increasing
dispersion in the scores assigned by the judges. The numbering of the
texts may therefore vary from one figure to another, as different
criteria were analyzed. Note also that for some texts fewer than five
points may appear in the picture because equal scores could have been
given. Because of the large dispersion of the marks, in this paper we
shall concentrate on data obtained with the 20 texts with lowest score
dispersion. The results obtained with the full set of 40 texts will be
briefly commented upon in the Concluding section.
\begin{figure}
\centering
\resizebox{0.9\columnwidth}{!}{
\includegraphics{scores-cc.eps}
}
\resizebox{0.9\columnwidth}{!}{
\includegraphics{scores-swc.eps}
}
\resizebox{0.9\columnwidth}{!}{
\includegraphics{scores-tad.eps}
}
\caption{40 texts vs. corresponding scores (from 0 to 10)
according to the three quality criteria, identified as $CC$
for coherence and cohesion, $SWC$ for adherence to standard
writing conventions and $TAD$ for theme
adequacy/development. The horizontal axes are ordered from the
text with the lowest score dispersion between the five judges
to the text with the highest dispersion. The sequences of
texts in the horizontal axes are not necessarily equal for the
different criteria.}
\label{fig:scores}
\end{figure}
\section{Measurements of text features using complex networks}
\label{sec:measur}
Two word adjacency networks were obtained from a given text. In the
first one, called \mbox{NET-A}, each different pair $(w_1,w_2)$ of subsequent
words (at distance one from each other) defines a directed weighted
edge in the network, whose weight represents the frequency of the
association from word $w_1$ to word $w_2$. The association network was
obtained similarly to that described in \cite{Costa2004}, i.e. each of
the $N$ different words was represented as a node and each connection
between words as a weighted edge between the corresponding nodes
representing those words. The stopwords have been removed and the
remaining words have been lemmatized. Removing stopwords eliminates
very common words, such as the verb ``to be'' and some adverbs, as well
as words from closed classes (articles, pronouns, prepositions and
conjunctions). Lemmatization is the process of reducing a word to
its base form, e.g. the verb ``passed'' to the infinitive
``pass''. Therefore, different occurrences of meaning-related words are
represented by the same node in the network. The second word adjacency
network, called \mbox{NET-B}, is almost the same as \mbox{NET-A}, but also connects
subsequent words at distance two, i.e. $w_1$ is also connected to
$w_3$, although there is a word $w_2$ between them. In other words,
each sequence of three words $(\ldots, w_1, w_2, w_3, \ldots)$ implies
a directed edge from $w_1$ to $w_3$ and another directed edge from
$w_2$ to $w_3$. Note that the two adopted types of networks, namely
\mbox{NET-A} and \mbox{NET-B}, represent Markov models of memory one and two,
respectively. The choice of these two models has been aimed at
providing subsidies for investigating the effect of the extent of the
considered context into the measurements and results.
All network measurements adopted were extracted from the weight matrix
$W$ representing the network. This $N \times N$ matrix was obtained by
starting with all elements as zero and making $W(j,i)=W(j,i)+1$
whenever there was the association $i \rightarrow j$. Because of the
directed edges, the matrix $W$ is not symmetric. It is also possible
to obtain an adjacency matrix $K$ from $W$ by making $K(j,i)=1$
whenever $W(j,i) > 0$. The measurements obtained from such networks
are described in the remainder of this section.
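As an illustration, the weight matrix $W$ for \mbox{NET-A} or \mbox{NET-B} can be built from a token sequence as in the following sketch; the function and variable names are ours, and the input is assumed to be already stopword-free and lemmatized:

```python
import numpy as np

def build_weight_matrix(tokens, max_dist=1):
    """Build the directed weight matrix W of a word-adjacency network.

    max_dist=1 corresponds to NET-A (only adjacent words); max_dist=2
    corresponds to NET-B, which also links words at distance two.
    """
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    W = np.zeros((len(vocab), len(vocab)), dtype=int)
    for d in range(1, max_dist + 1):
        for w1, w2 in zip(tokens, tokens[d:]):
            # the association w1 -> w2 increments W(j, i), with i = w1, j = w2
            W[index[w2], index[w1]] += 1
    return W, vocab
```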
\subsection{Indegree and outdegree}
The indegree and outdegree of node $i$ are defined, respectively, as
\begin{equation}
ID(i)=\sum^{N}_{j=1}{W(i,j)} \label{eq:indegree}
\end{equation}
and
\begin{equation} OD(i)=\sum^{N}_{j=1}{W(j,i)} \label{eq:outdegree} .
\end{equation}
We adopt the network outdegree $OD$ as the arithmetic mean of every
$OD(i)$ (the network indegree $ID$ is obtained similarly). Because the
average value of the indegrees coincides with that obtained for the
outdegrees, only the latter will be considered henceforth.
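Equations~\ref{eq:indegree} and \ref{eq:outdegree} translate directly into row and column sums of $W$; the sketch below uses our own naming:

```python
import numpy as np

def degrees(W):
    """In- and outdegree of each node from the weight matrix W,
    where W[j, i] is the weight of the association i -> j."""
    ID = W.sum(axis=1)  # ID(i) = sum_j W(i, j): weight flowing into node i
    OD = W.sum(axis=0)  # OD(i) = sum_j W(j, i): weight flowing out of node i
    return ID, OD
```

The network-level $OD$ is the arithmetic mean of the $OD(i)$; the mean indegree always coincides with it, since both sums visit every entry of $W$ exactly once.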
\subsection{Clustering coefficient}
The clustering coefficient of node $i$ is calculated as
follows. First, all nodes receiving an edge from node $i$ are
identified and included into the set $R$, with $N_c = |R|$. If $B$ is
the total number of edges between all the nodes in $R$ (taking into
account the edges directions, i.e. edge $i \rightarrow j$ is different
from edge $j \rightarrow i$), the clustering coefficient of node $i$
is obtained as (for an example, see Fig.~\ref{fig:clc})
\begin{equation} CLC(i) = \frac{B}{N_c(N_c-1)} . \label{eq:clustcoeff}
\end{equation} If $N_c$ is smaller than or equal to 1, then $CLC(i) =
0$. The network clustering coefficient $CLC$ is the arithmetic mean of
all individual clustering coefficients $CLC(i)$.
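A minimal implementation of Equation~\ref{eq:clustcoeff}, assuming the network has no self-loops (as in the example of Fig.~\ref{fig:clc}):

```python
import numpy as np

def clustering_coefficient(K):
    """Network clustering coefficient CLC from the binary adjacency
    matrix K, where K[j, i] = 1 iff there is an edge i -> j.
    Edge weights are ignored, as stated in the text."""
    n = K.shape[0]
    clc = np.zeros(n)
    for i in range(n):
        R = np.nonzero(K[:, i])[0]       # nodes receiving an edge from i
        Nc = len(R)
        if Nc <= 1:
            continue                      # CLC(i) = 0 by definition
        B = K[np.ix_(R, R)].sum()         # directed edges among nodes in R
        clc[i] = B / (Nc * (Nc - 1))
    return clc.mean()
```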
\begin{figure}
\centering
\resizebox{0.8\columnwidth}{!}{
\includegraphics{CLC.eps}
}
\caption{Computation of the clustering coefficient of node~4
($CLC(4)$). In this particular case, $N_c = 3$, since node~4
is connected to the nodes belonging to the set $R = \{1,2,5\}$
(node~3 has an edge shared with node~4, but this edge does not
come from node~4). If the nodes 1, 2 and 5 formed a fully
connected subnetwork, there would be $N_c(N_c-1) = 3(3-1) = 6$
edges between them, but in fact there is only $B =
3$. Finally, the clustering coefficient of node~4 is $CLC(4) =
B / (N_c(N_c-1)) = 3/6 = 0.5$. This definition of clustering
coefficient does not take into account the edge weights.}
\label{fig:clc}
\end{figure}
\subsection{Network dynamics}
We also took measurements of the growth dynamics of the
complex network as a given text was analyzed. The number of connected
components (or clusters) was calculated after adding each word
association to the network, yielding a topological feature which is a
function of the number of associations and, consequently, of the
evolution of the text construction. For each text, the network was
initiated with all $N$ words, each one representing a single
component, and the connections were established by each word
association that occurred along the text. When a word association was
read, a new edge was created in the network or the weight of an
already existing edge was increased. As a consequence of the word
adjacency model, the number of connected components always converged
to one after all words had been introduced. Fig.~\ref{fig:components}
shows how the number of components evolves with the number of edges
for three texts extracted from the selected corpus, being therefore
representative of the evolution of connectivity in a given text. In
each graph, the straight line was included to guide the eye, and
represents the special case of uniform variation of the number of
components, while the other curve indicates the real variation as the
word associations were read. A quantitative treatment of the data in
Fig.~\ref{fig:components} was carried out by calculating the extent to
which the real plot departed from the straight line. For short, this
measurement will be referred to in the remainder of this article as
``components dynamics deviation'' ($CDD$). Let $f_a(x)$ be the actual
function that associates the number of components with the number $x$
of word associations already inserted into the network, $f_s(x)$ be
the reference straight line, $L$ be the total number of word
associations in the text and $N$ be the total number of vertices in
the network. The deviation in the network dynamics is calculated as
\begin{equation} CDD = \frac{\sum_{x=1}^{L}{|f_a(x)-f_s(x)|/N}}{L}
\label{eq:compdyn} . \end{equation} Texts A, B and C, whose dynamics
are represented in Fig.~\ref{fig:components}, have $CDD$ values of
0.014, 0.045 and 0.064, respectively. A visual inspection of these
three texts in Fig.~\ref{fig:components} corroborates these increasing
values obtained for texts from A to C.
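Equation~\ref{eq:compdyn} can be computed as in the sketch below; we assume the reference line $f_s$ runs from $N$ components before any association down to 1 after the last one, which matches the description above but is not stated explicitly:

```python
import numpy as np

def components_dynamics_deviation(f_actual, N):
    """CDD of Equation (compdyn). f_actual[x-1] is the number of connected
    components after inserting the x-th word association; N is the number
    of vertices. The reference straight line (an assumption about its
    endpoints) decreases uniformly from N components down to 1."""
    L = len(f_actual)
    x = np.arange(1, L + 1)
    f_s = N - (N - 1) * x / L
    return np.abs(np.asarray(f_actual, dtype=float) - f_s).sum() / (N * L)
```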
\begin{figure*}
\centering
\resizebox{1.8\columnwidth}{!}{
\includegraphics{components-A.eps}
\includegraphics{components-B.eps}
\includegraphics{components-C.eps}
}
\caption{Dynamics of the network for three texts extracted
from the selected set of 40 pieces of text. In the horizontal
axes, $WA$ stands for the number of word associations already
inserted into the network. The straight dotted line is a
reference that assumes uniform variation of the number of
components as the edges are inserted or as their corresponding
weights are modified in the network. The other curve is the
real one, which reflects the actual variation of the number of
components. The deviation in the network dynamics, according
to Equation~\ref{eq:compdyn}, for the three texts above are
0.014 (A), 0.045 (B) and 0.064 (C).} \label{fig:components}
\end{figure*}
\subsection{Shortest path}
Distances between pairs of nodes, which also consider the edges
directions, were calculated with the Floyd-Warshall algorithm
\cite{Cormen2001}. We consider the complement of each weight,
$W_{max}-W(j,i)+1$, where $W_{max}$ is the maximum edge weight present
in the network, to compute the shortest paths $SP(i,j)$ between any
two nodes $i$ and $j$. $SP$ is defined in this way because it is not
desirable that the shortest-path algorithm give low priority to the
strongest edges, which are those representing more frequent and
possibly more important associations between words. Whenever there is
no path between two nodes $i$ and $j$, we take $SP(i,j) = N W_{mean}$,
where $N$ is the number of vertices and $W_{mean}$ is the arithmetic
mean of all edge weights. The $SP$ measurement for a whole network is
the arithmetic mean of every $SP(i,j)$, provided that $i \neq j$.
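A sketch of the $SP$ computation with complemented weights and the Floyd-Warshall recursion; it assumes the network has at least one edge, and the names are ours:

```python
import numpy as np

def mean_shortest_path(W):
    """Mean SP over ordered pairs i != j, computed on the complemented
    weights W_max - W(j,i) + 1, so that stronger associations become
    shorter edges; unreachable pairs receive the penalty N * W_mean."""
    N = W.shape[0]
    w_max = W.max()
    w_mean = W[W > 0].mean()                  # assumes at least one edge
    D = np.where(W > 0, w_max - W + 1, np.inf)  # D[j, i]: cost of i -> j
    np.fill_diagonal(D, 0)
    # Floyd-Warshall: relax all pairs through each intermediate node k
    for k in range(N):
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    D[np.isinf(D)] = N * w_mean
    off_diagonal = ~np.eye(N, dtype=bool)
    return D[off_diagonal].mean()
```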
\section{Results and discussion}
\label{sec:results}
In a previous report \cite{Antiqueira2005a}, we have shown that the
measurements associated with complex networks could be used to
distinguish between low-quality and high-quality texts, selected from
two different sources. However, a limitation to that study was that
the differences emerging from the analysis could arise from the source
of the text, age and background of the writers and even subject of the
essays. In order to avoid such possible interferences, in the present
study we took texts from only one source, namely essays written by
high-school students, with approximately the same age and academic
background, on a single topic (the influence of TV on our
society). Firstly, we illustrate in Fig.~\ref{fig:scale-free} for three
texts from the set that the outdegree distributions of the
investigated data suggest the scale-free property, indicated by the
linear log$\times$log plots for the outdegree, which is consistent with
previous reports in the literature
\cite{Cancho2001,Dorogovtsev2001}. Similar results were obtained for
the indegree and for the other texts (not shown here).
\begin{figure*}
\centering
\resizebox{1.8\columnwidth}{!}{
\includegraphics{scalefree-A.eps}
\includegraphics{scalefree-B.eps}
\includegraphics{scalefree-C.eps}
}
\caption{Log$\times$log outdegree ($OD$) distributions for
three texts extracted from the corpus of 40 texts. A
scale-free behavior is suggested by these examples.}
\label{fig:scale-free}
\end{figure*}
We now attempt to correlate the measurements using complex networks
concepts with the scores assigned by the human judges. Because of the
large score dispersion for some texts, we perform the analysis taking
only the 20 texts with the lowest dispersion for each criterion. This
analysis results in a set of 24 plots
(Figs.~\ref{fig:corr-od}--\ref{fig:corr-sp}) which correlate the four
network measurements with the three types of scores for each of the
two types of networks. Figs.~\ref{fig:corr-od}--\ref{fig:corr-sp} are
organized with the measurements distributed along the horizontal axes,
while the scores assigned by the human judges are positioned in the
vertical axes. The labels A and B refer to the measurements taken from
the networks constructed following the models \mbox{NET-A} and \mbox{NET-B},
respectively. The values from both the measurements and scores were
standardized into a standard normal distribution $N(0,1)$ and a linear
regression was performed for each correlation plot. The corresponding
straight line, the Pearson correlation coefficient and the p-value are
also given in the figure as a guide for the strength of the linear
correlations \cite{Neter1996}.
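The standardization and Pearson coefficient used in the correlation plots can be sketched as follows (the p-values, obtained from the regression, are omitted here):

```python
import numpy as np

def standardize(x):
    """Map values onto a standard normal scale N(0, 1) (z-scores)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def pearson(x, y):
    """Pearson correlation between a network measurement and the
    judges' scores, both standardized first."""
    zx, zy = standardize(x), standardize(y)
    return (zx * zy).mean()
```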
Figs.~\ref{fig:corr-od}A and \ref{fig:corr-od}B indicate that the
scores assigned by the human judges, for all three criteria,
decrease with increasing outdegree. Most significant are
the results for coherence and cohesion ($CC$) and adherence to
standard writing conventions ($SWC$). Large Pearson coefficients were
obtained with very low p-values, which indicates that the linear
correlations were not obtained by chance. The scores corresponding to
theme adequacy/development ($TAD$) are less sensitive to the
outdegree, though they also tend to decrease. It appears then that
an analysis of the outdegrees allows one to capture the
quality of a text, with large outdegrees causing the
text to lose quality. There is practically no difference in behavior
in the results using \mbox{NET-A} and \mbox{NET-B}, i.e. considering a larger
context in \mbox{NET-B} did not affect the results significantly. It should
be mentioned that we have also calculated the indegrees for all of the
texts separately. Because averages were taken, the results were
identical to those of the outdegrees and were therefore omitted.
\begin{figure*}
\centering
\resizebox{1.7\columnwidth}{!}{
\includegraphics{net1-OD.eps}
}
\resizebox{1.7\columnwidth}{!}{
\includegraphics{net2-OD.eps}
}
\caption{Correlations between the outdegrees ($OD$, horizontal
axes) and the scores (vertical axes) for the 20 texts with the
lowest score dispersion. In the vertical axes, $SWC$ stands
for standard writing conventions, $TAD$ for theme
adequacy/development and $CC$ for coherence and cohesion. Both
axes are standardized into a standard normal distribution
$N(0,1)$. Measurements obtained from the two types of
networks, \mbox{NET-A} and \mbox{NET-B}, are discriminated by the labels A
and B, respectively.}
\label{fig:corr-od}
\end{figure*}
Similar conclusions can be drawn from Figs.~\ref{fig:corr-clc}A and
\ref{fig:corr-clc}B, which show that text quality decreases with an
increasing clustering coefficient ($CLC$). Now correlations appeared
stronger for the data with \mbox{NET-A} than for \mbox{NET-B}, particularly for the
$CC$ and $SWC$ scores. In fact, from all measurements those of $CLC$ gave
the highest correlations (cf. Pearson coefficient) with text
quality. From a linguistic point of view, one may infer that texts
lose quality if the concepts are highly interconnected, probably
excessively interconnected.
\begin{figure*}
\centering
\resizebox{1.7\columnwidth}{!}{
\includegraphics{net1-CLC.eps}
}
\resizebox{1.7\columnwidth}{!}{
\includegraphics{net2-CLC.eps}
}
\caption{Correlations between the clustering coefficients
($CLC$, horizontal axes) and the scores (vertical axes) for the
20 texts with the lowest score dispersion. In the vertical
axes, $SWC$ stands for standard writing conventions, $TAD$ for
theme adequacy/development and $CC$ for coherence and
cohesion. Both axes are standardized into a standard normal
distribution $N(0,1)$. Measurements obtained from the two
types of networks, \mbox{NET-A} and \mbox{NET-B}, are discriminated by the
labels A and B, respectively.}
\label{fig:corr-clc}
\end{figure*}
As for the deviation from a linear dynamics for the network growth
($CDD$), an inspection of Figs.~\ref{fig:corr-cdd}A and
\ref{fig:corr-cdd}B points to the text quality decreasing with
increasing deviations, with little difference between data for \mbox{NET-A}
and \mbox{NET-B}. This corroborates our earlier finding with texts from two
different sources (see the first version of this paper
\cite{Antiqueira2005a}). In the latter study, a threshold in the $CDD$
value was used to distinguish between low- and high-quality texts. A
large deviation indicates that the concepts were first introduced at
an early stage of the text construction, thus causing the total number
of components to decrease fast. As a result, the writer probably kept
repeating the arguments in the remainder of the writing process,
leading to a low quality text. As an example of this correlation,
consider the texts whose dynamics are illustrated in
Fig.~\ref{fig:components}. These texts received average scores of 7.9
(A), 5.2 (B) and 3.7 (C), according to the coherence and cohesion
criterion, while the $CDD$ values were 0.014 (A), 0.045 (B) and 0.064
(C), respectively. From a linguistic point of view, $CDD$ appears to
capture whether the flow of the prose is adequate, which is reflected
especially in the cohesion and coherence.
\begin{figure*}
\centering
\resizebox{1.7\columnwidth}{!}{
\includegraphics{net1-CDD.eps}
}
\resizebox{1.7\columnwidth}{!}{
\includegraphics{net2-CDD.eps}
}
\caption{Correlations between the components dynamics
deviations ($CDD$, horizontal axes) and the scores (vertical
axes) for the 20 texts with the lowest score dispersion. In
the vertical axes, $SWC$ stands for standard writing
conventions, $TAD$ for theme adequacy/development and $CC$ for
coherence and cohesion. Both axes are standardized into a
standard normal distribution $N(0,1)$. Measurements obtained
from the two types of networks, \mbox{NET-A} and \mbox{NET-B}, are
discriminated by the labels A and B, respectively.}
\label{fig:corr-cdd}
\end{figure*}
The correlation between the scores used to assess quality and the
measurements of shortest paths is weaker than for the other
measurements obtained with \mbox{NET-A} and \mbox{NET-B}, as shown in
Figs.~\ref{fig:corr-sp}A and \ref{fig:corr-sp}B. There is a slight
increase in the quality scores with increasing shortest paths,
especially with the $SWC$. The reason for a weaker correlation may be
found in the results from our previous work with texts of different
sources~\cite{Antiqueira2005a}. There, we found that text quality
appeared to increase slightly with the shortest path when all texts
were considered. However, when analyzing only the low-quality texts,
we observed text quality to decrease with increasing shortest
paths. We interpreted the latter result as being due to the
difficulties faced by poor writers in establishing long sequences of
connections among different concepts. This discrepancy between low and
high-quality texts calls for further, more detailed research into the
possible correlation between shortest paths and quality.
\begin{figure*}
\centering
\resizebox{1.7\columnwidth}{!}{
\includegraphics{net1-SP.eps}
}
\centering
\resizebox{1.7\columnwidth}{!}{
\includegraphics{net2-SP.eps}
}
\caption{Correlations between the shortest paths ($SP$,
horizontal axes) and the scores (vertical axes) for the 20
texts with the lowest score dispersion. In the vertical axes,
$SWC$ stands for standard writing conventions, $TAD$ for theme
adequacy/development and $CC$ for coherence and cohesion. Both
axes are standardized into a standard normal distribution
$N(0,1)$. Measurements obtained from the two types of
networks, \mbox{NET-A} and \mbox{NET-B}, are discriminated by the labels A
and B, respectively.}
\label{fig:corr-sp}
\end{figure*}
\section{Conclusions and perspectives}
\label{sec:concl}
We have applied the concepts of complex networks to one set of texts
which comprises essays of variable quality (as confirmed by human
judges) written by high-school students on the same topic. A
correlation could be established between the measurements,
i.e. outdegrees, clustering coefficient and deviation from a linear
dynamics in the network growth, and the scores assigned by the human
judges. The influence of shortest paths on text quality could not be
established unequivocally, probably because the effects may differ for
low and high-quality texts. Among the criteria employed, cohesion and
coherence was the one showing strongest correlation between the scores
and the network measurements. One may argue that this correlation
indicates that the measurements are able to capture how the text is
developed in terms of the concepts represented by the nodes in the
networks. We should not expect these measurements to capture the text quality in terms of the
adherence to standard writing conventions ($SWC$), as there is no deep
analysis of the texts. Essays performing well in this criterion were
those with a small or negligible number of spelling and grammatical
mistakes. However, writers who produce good-quality texts in terms of
cohesion and coherence normally write grammatically correct texts. We
believe this to be the reason for the good correlation between the
scores of $SWC$ and the network measurements. The third criterion, theme
adequacy/development ($TAD$), is more subjective because human judges
assess whether the writer addressed the expected issues for the given
topic. It is not uncommon for the assigned score to be related to
whether the examiner agrees with the ideas put forward in the
essay. Not surprisingly then, the correlation between the network
measurements and the scores was weak.
The conclusions above hold for the two types of analysis performed, both
with \mbox{NET-A} and \mbox{NET-B}. Therefore, the context captured with only
adjacent words appears to be sufficient to correlate with text
quality. In addition, in subsidiary experiments we observed that
essentially the same conclusions and trends apply for the full set of
40 texts, which also included those with large dispersions in the
scores assigned by the human judges (results not shown here). The
trend toward decreasing scores with increasing outdegree and
clustering coefficient suggests that texts lose quality if the concepts
are highly interconnected. From the analysis of the network dynamics,
one infers that the faster a writer introduces all the new concepts
of a text, the worse the text tends to be.
Though based on a particular set of texts and specific language, the
results presented here point to potential applications in other
instances of text analysis. Indeed, the relatively high correlations
obtained between human assessment and network measurements are to some
extent surprising because of the potential complexity and subjectivity
underlying text quality and human language. One can now envisage, for
instance, an expert system that automatically marks essays, based on
machine learning methods. This will require golden standards, with a
panel of human judges agreeing on scores for a given set of texts (say
100 texts), with very little dispersion. If the network measurements
are taken for these manually-marked essays and associated with the
corresponding scores, machine learning algorithms may be used to
classify the remaining texts. This is an interesting scenario for
exams involving thousands of essays. Moreover, for essays with a
pre-defined specific topic the expert system could be further
sophisticated to consider the use of expected concepts and
associations among these concepts. Finally, the approach presented
here paves the way for the concepts of complex networks to be applied
to other types of text, as in the identification of text genres and
authorships, in addition to systems of information retrieval and
automatic summarization. This may have a large impact in areas such as
natural language processing \cite{Joshi1991}, in particular, and
linguistic studies in general.
\vspace{1cm}
The authors are grateful to FAPESP and CNPq (Brazil) and the Human
Frontier Science Program (RGP39/2002) for financial support. Thanks
are also due to several students from NILC for their invaluable help
in the experiment with human judges.
\bibliographystyle{epj}
\section{Introduction}
Orthogonal frequency division multiplexing with index modulation (OFDM-IM) \cite{bacsar2013orthogonal} is a novel multicarrier technique, which extends the concept of spatial modulation (SM) \cite{mesleh2008spatial} into frequency domain. In OFDM-IM, the subcarriers are partitioned into a series of subblocks.
Also, the information bits are conveyed by not only the modulated symbols but also the subcarrier indices unlike the conventional OFDM. That is, the subcarriers have two states, active and inactive, and the indices of the active subcarriers carry information.
The special design of OFDM-IM reduces inter-carrier interference (ICI) and gives better bit error rate (BER) performance than the conventional OFDM in the low-to-medium data rate region \cite{bacsar2013orthogonal}. It also makes it possible to generate more energy-efficient signals than the conventional OFDM \cite{zhao2012high}.
For detecting the subcarrier activation pattern (SAP) at the receiver, the optimal method is maximum likelihood (ML) detection, which jointly detects the indices of the active subcarriers and the modulated symbols carried on them. However, a naive implementation of the ML detector requires a huge computational complexity.
In \cite{zheng2015low, zhang2017dual}, by using the fact that each symbol can be demodulated independently, an equivalent ML detector is proposed, which only needs to search through all possible realizations of the SAP and the $M$-ary signal space
for each symbol, leading to a reduced computational complexity. Despite the investigations in \cite{zheng2015low, zhang2017dual}, this ML detector still becomes impractical if the number of possible SAPs is large.
To solve this problem, one can practically employ a low-complexity near ML detector which simply picks up $k$ active indices that have $k$ largest values of active likelihood metrics, called a $k$ largest values ($k$lv) detector in this letter.
However, the $k$lv detector may also decide on an illegal SAP that does not belong to the set of the legal SAPs, resulting in degraded detection performance.
The authors in \cite{zheng2015low} mentioned that the probability of this event is very small and thus the performance loss is negligible. However, as the ratio of illegal SAPs to SAPs increases, the degradation of the detection performance of this $k$lv detector cannot be ignored.
In this letter, a suboptimal ML detector for OFDM-IM is proposed. The suboptimal ML detector is a slight modification of the $k$lv detector and thus has a similarly low complexity. However, its detection performance is almost the same as that of the ML detector, as verified through probabilistic analysis and simulation results. By using the proposed suboptimal ML detector, OFDM-IM systems can be implemented with low complexity and near-ML detection performance.
\subsection{OFDM-IM}
In the OFDM-IM system using $N$ subcarriers, $m$ information bits enter the OFDM-IM transmitter for transmission of one OFDM-IM block. These $m$ bits are divided into $G$ groups, where each contains $p$ bits, i.e., $m=pG$. The $p$ bits in each group are mapped to one subblock of length $n$ in frequency domain, where $n = N/G$.
Unlike the conventional OFDM, this mapping procedure is not only performed by assigning the corresponding modulated symbols, but also by the indices of the subcarriers \cite{bacsar2013orthogonal}.
Specifically, for each subblock, only $k$ out of $n$ subcarriers are activated and the pattern is determined based on the first $p_1$ bits of the $p$ bits in the group. The remaining $p_2=k \log_2M$ bits of the $p$ bits, i.e., $p = p_1 + p_2$, are mapped onto the $M$-ary signal constellation to determine the symbols in the active subcarriers. We set the symbols in the inactive subcarriers to zero. In other words, in the OFDM-IM system, the information is conveyed by both of the $M$-ary modulated symbols and the indices of the active subcarriers \cite{bacsar2013orthogonal}.
Since the number of possible patterns is $\binom{n}{k}$, there are $\binom{n}{k}-2^{p_1}$ redundant, i.e. illegal, SAPs.
We denote the set of the $\binom{n}{k}$ possible SAPs as $\mathcal{I}$.
Also we denote the set of the $2^{p_1}$ legal SAPs as $\mathcal{I}_l$ and denote the set of the $\binom{n}{k}-2^{p_1}$ illegal SAPs as $\mathcal{I}_i$. Clearly, $\mathcal{I} = \mathcal{I}_l \cup \mathcal{I}_i$.
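The bookkeeping of legal and illegal SAPs can be sketched as follows; the choice $p_1 = \lfloor \log_2 \binom{n}{k} \rfloor$ is the usual one in OFDM-IM, but the letter only states that $2^{p_1} \le \binom{n}{k}$ patterns are legal, so this formula is our assumption:

```python
from math import comb, floor, log2

def sap_counts(n, k):
    """Numbers of possible, legal and illegal SAPs for a subblock with
    n subcarriers, k of them active. p1 = floor(log2(C(n, k))) is the
    customary OFDM-IM choice (an assumption here)."""
    total = comb(n, k)
    p1 = floor(log2(total))
    legal = 2 ** p1
    return total, legal, total - legal
```

For example, with $n=4$ and $k=2$ there are $\binom{4}{2}=6$ patterns, of which $2^{2}=4$ are legal and 2 are illegal.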
Denote the set of the indices of the $k$ active subcarriers in the transmitted $g$-th OFDM-IM subblock, $g=1,2,\cdots,G$, as
\begin{equation}
I_{g} = \{i_{g,1},i_{g,2},\cdots,i_{g,k}\}
\end{equation}
with $i_{g,m} \in \{1,2,\cdots,n\}$ for $m = 1,2,\cdots,k$. Clearly, $I_{g} \in \mathcal{I}_l$.
Correspondingly, the set of $k$ modulated symbols is denoted by
\begin{equation}
S_{g} =\{S_{g,1},S_{g,2},\cdots,S_{g,k}\},
\end{equation}
where $S_{g,m} \in \mathcal{S}$ and $\mathcal{S}$ is the used signal constellation.
Then the $g$-th OFDM-IM subblock can be constructed as
\begin{equation}
\mathbf{X}_{g} = [X_{g,1}~X_{g,2}~\cdots~X_{g,n}]^T,
\end{equation}
where the $i$-th OFDM-IM symbol $X_{g,i} \in \mathcal{S}$ only if $i \in I_{g}$ and otherwise $X_{g,i} = 0$.
The OFDM-IM transmitter creates $\mathbf{X}_{g}$ for all $g$.
Then the $G$ subblocks are concatenated to generate the $N\times 1$ OFDM-IM symbol sequence. For achieving frequency diversity gain as much as possible, concatenation in an interleaved manner is employed.
After these point, the same procedure as the conventional OFDM is applied. The symbol sequence in frequency domain is processed by the inverse discrete Fourier transform (IDFT) to generate the OFDM-IM signal in time domain. Then cyclic prefix (CP) is appended followed by parallel-to-serial (P/S) and digital-to-analog (D/A) conversion.
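The mapping of $p = p_1 + p_2$ bits onto one subblock can be sketched as below. The simple lookup table of SAPs, the Gray-like QPSK mapping, and $p_1 = \lfloor \log_2 \binom{n}{k} \rfloor$ are illustrative choices of ours, not the combinatorial index mapper of \cite{bacsar2013orthogonal}:

```python
import numpy as np
from itertools import combinations
from math import comb, floor, log2

# unit-energy QPSK constellation (the bit-to-symbol map is an assumption)
QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def build_subblock(bits, n, k):
    """Map p = p1 + p2 bits onto one OFDM-IM subblock of length n with
    k active subcarriers; inactive subcarriers carry zero."""
    p1 = floor(log2(comb(n, k)))
    saps = list(combinations(range(n), k))[:2 ** p1]   # legal SAPs
    idx = int("".join(map(str, bits[:p1])), 2)          # p1 index bits
    X = np.zeros(n, dtype=complex)
    sym_bits = bits[p1:]                                # p2 = k*log2(M) bits
    for m, i in enumerate(saps[idx]):
        b = sym_bits[2 * m:2 * m + 2]                   # 2 bits per QPSK symbol
        X[i] = QPSK[2 * b[0] + b[1]]
    return X
```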
\subsection{Detection for OFDM-IM}
Let us consider the detection of the $g$-th subblock. We omit the subblock index $g$ for simplicity. By considering a joint detection for the indices of the active subcarriers and the modulated symbols carried on, the ML detector for OFDM-IM is given by
\begin{align}\label{eq:MLd}
\{\hat{I}_\mathrm{ML}, \hat{S}\}
&= \arg\min_{\tilde{I} \in \mathcal{I}_l,\tilde{S}} \sum_{i=1}^{n}|Y_{i}-H_{i}X_{i}|^2\nonumber\\
&= \arg\min_{\tilde{I} \in \mathcal{I}_l ,\tilde{S}} \sum_{i=1}^{n} |H_{i}|^2 |R_i-X_{i}|^2,
\end{align}
where $Y_{i} = H_iX_i + Z_i$ is the $i$-th received OFDM-IM symbol, $H_{i}$ is the $i$-th channel frequency response (CFR), $Z_i$ is the Gaussian noise with $\mathcal{CN}(0,2\sigma^2)$, and $R_i = H^{-1}_{i}Y_{i}$ for $i=1,\cdots,n$.
It is remarkable that the symbol detection can be independently performed for each subcarrier \cite{zheng2015low, zhang2017dual}. Then, the symbol detection is separately performed as
\begin{equation}
\hat{s}_{i} = \arg\min_{s \in \mathcal{S}}|R_i-s|^2
\end{equation}
for $i = 1,\cdots, n$. Then, (\ref{eq:MLd}) becomes
\begin{equation}\label{eq:Iselect}
\hat{I}_\mathrm{ML} = \arg\min_{\tilde{I} \in \mathcal{I}_l} \left\{ \sum_{i\in \tilde{I} } |H_{i}|^2 |R_i-\hat{s}_{i}|^2 + \sum_{j\notin \tilde{I} } |H_{j}|^2 |R_j|^2\right\}.
\end{equation}
Since $\sum_{i=1}^{n} |H_{i}|^2 |R_i|^2$ is not related to the realizations of $\tilde{I}$, we subtract it from (\ref{eq:Iselect}). Then we have
\begin{align}\label{eq:MLdetect}
\hat{I}_\mathrm{ML}
&= \arg\min_{\tilde{I}\in \mathcal{I}_l} \sum_{i\in \tilde{I}} |H_{i}|^2 (|R_{i}-\hat{s}_{i}|^2 - |R_{i}|^2)\nonumber\\
&= \arg\max_{\tilde{I}\in \mathcal{I}_l} \sum_{i\in \tilde{I}} |H_{i}|^2 (|R_{i}|^2 - |R_{i}-\hat{s}_{i}|^2)\nonumber\\
&= \arg\max_{\tilde{I}\in \mathcal{I}_l} \sum_{i\in \tilde{I}} A_i,
\end{align}
where
\begin{align}\label{eq:almA}
A_i &= |H_{i}|^2 (|R_{i}|^2 - |R_{i}-\hat{s}_{i}|^2)\nonumber\\
&= |H_{i}|^2 (2\mathrm{Re}\{R_i^*\hat{s}_{i}\} -|\hat{s}_{i}|^2)
\end{align}
is an active likelihood metric for the $i$-th subcarrier.
Since the ML detector calculates $2^{p_1}$ combinations of $A_i$ in (\ref{eq:MLdetect}), it becomes impractical for large $p_1$, as $2^{p_1}$ grows exponentially. Therefore, the $k$lv detector, which chooses the indices with the $k$ largest values of $A_i$, may be preferred in practical systems.
That is, the $k$lv detector is
\begin{equation}\label{eq:klvdetect}
\hat{I}_{k\mathrm{lv}} = \arg\max_{\tilde{I}\in \mathcal{I}} \sum_{i\in \tilde{I}} A_i.
\end{equation}
This $k$lv detector may also decide on illegal SAPs that do not belong to $\mathcal{I}_l$, resulting in degraded detection performance. Although the probability of this error event is small unless the ratio of illegal SAPs to SAPs is large \cite{zheng2015low}, this constraint prevents the flexible implementation of OFDM-IM systems with various parameters $n$ and $k$.
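A minimal sketch of the $k$lv rule in (\ref{eq:klvdetect}), simply the indices of the $k$ largest metrics with no legality check, might look as follows (the function name is illustrative):

```python
import numpy as np

def klv_detect(A, k):
    # indices of the k largest metrics A_i; the result may be an illegal SAP
    order = np.argsort(-np.asarray(A))   # descending order of A_i
    return frozenset(int(i) for i in order[:k])
```

For example, `klv_detect([3.0, -1.0, 2.0, 0.5], 2)` returns $\{0, 2\}$ (0-based indices), regardless of whether that pattern is legal.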
\section{The Proposed Suboptimal ML Detection}
\subsection{Active Likelihood Metric $A_i$}\label{sec:ALM}
If we employ quadrature phase shift keying (QPSK) for modulating symbols, $A_i$ in (\ref{eq:almA}) becomes
\begin{equation}
A_i = 2|H_i|^2(|\mathrm{Re}\{R_i\}|+|\mathrm{Im}\{R_i\}|-1).
\end{equation}
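This closed form can be checked numerically; note that it presumes the unnormalized QPSK constellation $\{\pm1\pm j\}$ with $|\hat{s}_i|^2=2$, an assumption consistent with the expression above.

```python
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])   # |s|^2 = 2 for every point

for _ in range(1000):
    R = complex(rng.normal(), rng.normal())
    H2 = rng.rayleigh() ** 2                           # a sample value of |H_i|^2
    # brute-force maximization of |H|^2 (2 Re{R* s} - |s|^2) over the four points
    brute = max(H2 * (2 * (np.conj(R) * s).real - abs(s) ** 2) for s in qpsk)
    closed = 2 * H2 * (abs(R.real) + abs(R.imag) - 1)
    assert np.isclose(brute, closed)
```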
For a given $H_i$, $A_i$ follows a Gaussian distribution $\mathcal{N}(|H_i|^2,2|H_i|^4\sigma^2)$ if the $i$-th subcarrier is active. Otherwise, $A_i$ follows $\mathcal{N}(-|H_i|^2,2|H_i|^4\sigma^2)$.
Assume that the $i$-th subcarrier is active and the $j$-th subcarrier
is inactive. Since the means of $A_i$ and $A_j$ have opposite signs, confused detection of the $i$-th and $j$-th
subcarriers occurs only when $A_i$ and $A_j$ are both close to zero. This means that bad channel qualities $H_i$ and $H_j$ are simultaneously required for confused detection of the $i$-th and $j$-th
subcarriers. This phenomenon is also noted in \cite{bacsar2013orthogonal}, where it is shown that the index demodulation error event has a diversity order of two.
For future use, we denote the indices of $A_i$ as $\hat{i}_{1},\cdots, \hat{i}_{n}$ when $A_i$ are sorted in descending order. That is,
\begin{equation}\label{eq:met}
A_{\hat{i}_{1}} > A_{\hat{i}_{2}} > \cdots > A_{\hat{i}_{n}}.
\end{equation}
Then, the set constructed by the indices of the $k$ largest values of $A_i$ becomes the best SAP of the $k$lv detector in (\ref{eq:klvdetect}) as
\begin{equation}
\hat{I}_{k\mathrm{lv}} =\hat{I}_1 = \{ \hat{i}_{1}, \hat{i}_{2}, \cdots, \hat{i}_{k}\}.
\end{equation}
We may also denote $\hat{I}_v$ for $v=2,\cdots,\binom{n}{k}$, which denotes the $v$-th best SAP based on the metrics in (\ref{eq:met}).
Clearly, the second best SAP $\hat{I}_2$ is
\begin{equation}\label{eq:secSAP}
\hat{I}_2 = \{ \hat{i}_{1}, \hat{i}_{2}, \cdots, \hat{i}_{k-1}, \hat{i}_{k+1}\}.
\end{equation}
Note that the other $v$-th best SAPs ($v\geq 3$) are not fixed and can vary according to the specific values of the $A_i$'s.
For example, the third best SAP $\hat{I}_3$ is either $\{ \hat{i}_{1}, \hat{i}_{2}, \cdots, \hat{i}_{k-1}, \hat{i}_{k+2}\}$ or $\{ \hat{i}_{1}, \hat{i}_{2}, \cdots, \hat{i}_{k-2}, \hat{i}_{k}, \hat{i}_{k+1}\}$ according to the values of $A_i$.
\subsection{Correct Detection Probabilities of $\hat{I}_{k\mathrm{lv}}$ and $\hat{I}_{\mathrm{ML}}$}
Consider the sample space of the received OFDM-IM subblock, i.e., the set of all its possible realizations.
This sample space can be partitioned into three sets according to the best SAP $\hat{I}_1$, as in Fig. \ref{fig:ss}.
\begin{figure}[htbp]
\centering
\includegraphics[width=.7\linewidth]{space.pdf}
\caption{A sample space of a received OFDM-IM subblock.}
\label{fig:ss}
\end{figure}
Specifically, the sets are separated by the following criteria:
\begin{itemize}
\item $\Omega(c)$: The best SAP is correct. ($\hat{I}_1 = I$)
\item $\Omega(l)$: The best SAP is incorrect and legal. ($\hat{I}_1 \neq I$ and $\hat{I}_1 \in \mathcal{I}_l$)
\item $\Omega(i)$: The best SAP is incorrect and illegal. ($\hat{I}_1 \in \mathcal{I}_i$)
\end{itemize}
Moreover, according to the second best SAP $\hat{I}_2$, $\Omega(i)$ can be separated into three subsets as
\begin{itemize}
\item $\Omega(i,c)$: $\hat{I}_1 \in \mathcal{I}_i$ and the second best SAP $\hat{I}_2$ is correct.
\item $\Omega(i,l)$: $\hat{I}_1 \in \mathcal{I}_i$ and the second best SAP $\hat{I}_2$ is incorrect and legal.
\item $\Omega(i,i)$: $\hat{I}_1 \in \mathcal{I}_i$ and the second best SAP $\hat{I}_2$ is incorrect and illegal.
\end{itemize}
Likewise, $\Omega(i,i)$ can be further separated into three subsets $\Omega(i,i,c), \Omega(i,i,l)$, and $\Omega(i,i,i)$ according to the third best SAP $\hat{I}_3$.
For example, $\Omega(i,i,c)$ means $\hat{I}_1 \in \mathcal{I}_i$, $\hat{I}_2 \in \mathcal{I}_i$, and $\hat{I}_3 = I$.
In the same manner, this separation can be performed until we have $\Omega(\underbrace{i,i,\cdots,i}_{\binom{n}{k}-2^{p_1}},c)$.
Clearly, the correct detection probability of the $k$lv detector is
\begin{equation}\label{eq:klv}
P_{k\mathrm{lv}} = P(\Omega(c)).
\end{equation}
The ML detector in (\ref{eq:MLdetect}) finds the SAP having the largest sum of $A_i$ within the set of legal SAPs $\mathcal{I}_l$. Therefore, the ML detector correctly detects not only the case in $\Omega(c)$ but also the cases in $\Omega(i,c) + \cdots + \Omega(\underbrace{i,i,\cdots,i}_{\binom{n}{k}-2^{p_1}},c)$.
That is, the correct detection probability of the ML detector is
\begin{equation}\label{eq:ML}
P_{\mathrm{ML}} = P(\Omega(c)) + P(\Omega(i,c)) + \cdots + P(\Omega(\underbrace{i,i,\cdots,i}_{\binom{n}{k}-2^{p_1}},c)).
\end{equation}
Therefore, the ML detector is superior to the $k$lv detector.
Also, from (\ref{eq:klv}) and (\ref{eq:ML}), the probability gap becomes
\begin{align}\label{eq:gapMLklv}
P_{\mathrm{ML}} - P_{k\mathrm{lv}}
&= P(\Omega(i,c)) + \cdots + P(\Omega(\underbrace{i,i,\cdots,i}_{\binom{n}{k}-2^{p_1}},c))\nonumber\\
&\leq P(\Omega(i))\nonumber\\
&=\frac{\binom{n}{k} - 2^{p_1}}{\binom{n}{k}-1}\cdot(1-P(\Omega(c)))\nonumber\\
&=r\cdot(1-P(\Omega(c))),
\end{align}
where $r$ is the ratio of the illegal SAPs to all incorrect SAPs, given by
\begin{equation}
r = \frac{\binom{n}{k} - 2^{p_1}}{\binom{n}{k}-1}.
\end{equation}
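Assuming the usual OFDM-IM choice $p_1=\lfloor\log_2\binom{n}{k}\rfloor$ (implied by the $2^{p_1}$ legal SAPs, though not restated here), the ratio $r$ for the two configurations simulated later can be computed directly:

```python
from math import comb

def illegal_ratio(n, k):
    # floor(log2 C(n,k)), an assumed but standard choice of p1
    p1 = comb(n, k).bit_length() - 1
    return (comb(n, k) - 2 ** p1) / (comb(n, k) - 1)

print(illegal_ratio(8, 4))    # 6/69   (about 0.087)
print(illegal_ratio(10, 5))   # 124/251 (about 0.494)
```

These values agree with the ratios $r\approx0.086$ and $r\approx0.49$ quoted in the simulation section.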
Without loss of generality, we consider the transmitted SAP $I=\{1,2,\cdots,k\}$.
Then,
\begin{equation}
P(\Omega(c)) = P(\min(A_1,\cdots,A_{k})>\max(A_{k+1},\cdots,A_n)),
\end{equation}
where the probability $P(\Omega(c))$ does not depend on $r$. Therefore, the gap $P_{\mathrm{ML}} - P_{k\mathrm{lv}}$ in (\ref{eq:gapMLklv}) becomes larger as $r$ increases.
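A rough Monte Carlo check of $P(\Omega(c))$ under the Gaussian model of $A_i$ from the previous subsection is sketched below; the i.i.d. Rayleigh CFR model and the explicit $\sigma^2$ parametrization are assumptions made only for this illustration.

```python
import numpy as np

def p_correct_mc(n, k, sigma2, trials=100_000, seed=1):
    rng = np.random.default_rng(seed)
    # i.i.d. Rayleigh CFRs with E|H_i|^2 = 1; WLOG the transmitted SAP is {1,...,k}
    H2 = rng.rayleigh(scale=np.sqrt(0.5), size=(trials, n)) ** 2
    mean = np.where(np.arange(n) < k, H2, -H2)       # +|H|^2 if active, -|H|^2 if not
    A = rng.normal(mean, H2 * np.sqrt(2 * sigma2))   # Var(A_i) = 2 |H_i|^4 sigma^2
    return float((A[:, :k].min(axis=1) > A[:, k:].max(axis=1)).mean())
```

As expected, the estimate approaches 1 as $\sigma^2\to0$, and it involves only the metrics, not $r$.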
\subsection{The Proposed Suboptimal ML Detector}
We exploit the fact that the first and second terms in (\ref{eq:ML}) are dominant, and that both of them are captured when we test the second best SAP in addition to the best SAP.
Fortunately, the second best SAP is fixed as in (\ref{eq:secSAP}).
Using these, we propose the suboptimal ML detector in Algorithm \ref{al:subML}.
\begin{algorithm}
\caption{Suboptimal ML Detection}\label{al:subML}
\begin{algorithmic}[1]
\State $\hat{I}_1 = \{\hat{i}_1, \hat{i}_2, \cdots, \hat{i}_k\} $
\State $\hat{I}_2 = \{\hat{i}_1, \hat{i}_2, \cdots, \hat{i}_{k-1}, \hat{i}_{k+1}\} $\Comment{Newly added}
\If{$\hat{I}_1 \in \mathcal{I}_l$}
\State $\hat{I}_{\mathrm{subML}} \gets \hat{I}_1$
\ElsIf{$\hat{I}_2 \in \mathcal{I}_l$}\Comment{Newly added}
\State $\hat{I}_{\mathrm{subML}} \gets \hat{I}_2$\Comment{Newly added}
\EndIf
\State \textbf{return} $\hat{I}_{\mathrm{subML}}$
\end{algorithmic}
\end{algorithm}
Note that the proposed suboptimal ML detector is a slight modification of the $k$lv detector in (\ref{eq:klvdetect}) and the parts newly added are marked in Algorithm \ref{al:subML}.
After calculating and sorting the values of $A_i$ for $i=1,\cdots,n$, the only remaining procedure of the $k$lv detector is to check whether $\hat{I}_1 \in \mathcal{I}_l$, as in the third line of Algorithm \ref{al:subML}. The computational complexity of this check is negligible because the SAP $\hat{I}_1$ can be handled as a binary representation.
Clearly, the newly added parts of the proposed detector induce no additional complexity burden because $\hat{I}_2$ is fixed as in (\ref{eq:secSAP}) and the computational complexity of checking $\hat{I}_2 \in \mathcal{I}_l$ is as negligible as that of checking $\hat{I}_1 \in \mathcal{I}_l$.
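A compact sketch of Algorithm \ref{al:subML} in code form is given below; representing the legal set $\mathcal{I}_l$ as a hash set of index sets, and the erasure handling when both tested SAPs are illegal, are implementation assumptions left open by the algorithm.

```python
import numpy as np

def subml_detect(A, k, legal_saps):
    order = np.argsort(-np.asarray(A))                  # indices sorted by descending A_i
    I1 = frozenset(int(i) for i in order[:k])           # best SAP
    # second best SAP: swap the k-th and (k+1)-th ranked indices (fixed by construction)
    I2 = frozenset(int(i) for i in order[:k - 1]) | {int(order[k])}
    if I1 in legal_saps:
        return I1
    if I2 in legal_saps:
        return I2
    return None   # both illegal: a rare event; its handling is a design choice
```

When $\hat{I}_1$ is illegal but $\hat{I}_2$ is legal, the detector now returns $\hat{I}_2$ instead of failing, at essentially no extra cost.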
If we use the proposed suboptimal ML detector, then the received OFDM-IM subblock in
$\Omega(c)$ and $\Omega(i,c)$ in Fig. \ref{fig:ss}
can be correctly detected. Then its correct detection probability is
\begin{equation}\label{eq:subML}
P_{\mathrm{subML}} = P(\Omega(c)) + P(\Omega(i,c)).
\end{equation}
The difference between (\ref{eq:subML}) and (\ref{eq:ML}) is
\begin{align}\label{eq:diff}
P_{\mathrm{ML}} - P_{\mathrm{subML}} &= P(\Omega(i,i,c) + \cdots + \Omega(i,i,\cdots,i,c))\nonumber\\
&\leq P(\Omega(i,i))\nonumber\\
&= \frac{\binom{n}{k} - 2^{p_1}-1}{\binom{n}{k}-1}\cdot (P(\Omega(i))-P(\Omega(i,c))).
\end{align}
Now we consider $P(\Omega(i))$ and $P(\Omega(i,c))$ in (\ref{eq:diff}).
Without loss of generality, we assume that the transmitted SAP is $I=\{1,2,\cdots,k\}$.
First, $P(\Omega(i))$ becomes
\begin{align}\label{eq:Omegai}
&P(\Omega(i))\nonumber\\
&= P(\hat{I}_1\in \mathcal{I}_i)\nonumber\\
&= P(\hat{I}_1\in \mathcal{I}_i \cap |\hat{I}_1-I| = 2) + P(\hat{I}_1\in \mathcal{I}_i \cap |\hat{I}_1-I| = 4) + \cdots\nonumber\\
&\simeq P(\hat{I}_1\in \mathcal{I}_i \cap |\hat{I}_1-I| = 2)\nonumber\\
&= r\cdot P(|\hat{I}_1-I| = 2)\nonumber\\
&= r \cdot k(n-k)\cdot P(\hat{I}_1 = \{1,\cdots,k-1,k+1\}) \nonumber\\
&= r \cdot k(n-k)\nonumber\\
&\cdot P(\min(A_1,\cdots,A_{k-1},A_{k+1})>\max(A_k,A_{k+2},\cdots,A_n)),
\end{align}
where the approximation in the third line is reasonable because the event $|\hat{I}_1-I|=2$ occurs far more frequently than the other events, and the factor $k(n-k)$ in the fifth line is the number of $\hat{I}_1$'s satisfying $|\hat{I}_1-I| = 2$.
In a similar way, we also have
\begin{align}\label{eq:Omegaic}
&P(\Omega(i,c))\nonumber\\
& = P(\hat{I}_1 \in \mathcal{I}_i \cap \hat{I}_2 = I)\nonumber\\
& \simeq P(\hat{I}_1 \in \mathcal{I}_i \cap \hat{I}_2 = I \cap |\hat{I}_1-I| = 2)\nonumber\\
& = r\cdot P(\hat{I}_2 = I \cap |\hat{I}_1-I| = 2)\nonumber\\
& = r\cdot k(n-k)\cdot P(\hat{I}_1 = \{1,\cdots,{k-1},{k+1}\} \cap \hat{I}_2 = \{1,\cdots,k\})\nonumber\\
&= r\cdot k(n-k)\nonumber\\
&\cdot P(\min(A_1,\cdots,A_{k-1})>A_{k+1}>A_k>\max(A_{k+2},\cdots,A_n)).
\end{align}
From (\ref{eq:Omegai}) and (\ref{eq:Omegaic}), $P(\Omega(i))-P(\Omega(i,c))$ becomes
\begin{align}\label{eq:UVA}
& P(\Omega(i))-P(\Omega(i,c))\nonumber\\
&\simeq r\cdot k(n-k)\cdot (P(\min(U,A_{k+1})>\max(A_k, V))\nonumber\\
&~~~~~~~~~~~~~~~~~~~~- P(U>A_{k+1}>A_k>V))\nonumber\\
&= r\cdot k(n-k)\cdot(P(A_{k+1}>U>A_k>V)\nonumber\\
&~~~~~~~~~~~~~~~~~~~~ + P(U>A_{k+1}>V>A_k) \nonumber\\
&~~~~~~~~~~~~~~~~~~~~ + P(A_{k+1}>U>V>A_k)),
\end{align}
where
\begin{align}
U &= \min(A_1,\cdots,A_{k-1})\\
V &= \max(A_{k+2},\cdots,A_n).
\end{align}
Let us consider the three probabilities in (\ref{eq:UVA}).
Note that bad channel qualities are a necessary condition for
the confusion of active subcarriers, as explained in Subsection \ref{sec:ALM}.
Then the event $A_{k+1}>U>A_k>V$ in (\ref{eq:UVA}) occurs rarely because it requires that the $k$-th, $(k{+}1)$-th, and $z$-th ($1\leq z \leq k-1$) CFRs are bad at the same time. That is, this event has a frequency diversity order of three.
Likewise, the other two events $U>A_{k+1}>V>A_k$ and $A_{k+1}>U>V>A_k$ require three and four bad CFRs, respectively.
Therefore, $P(\Omega(i))-P(\Omega(i,c))$ in (\ref{eq:UVA}) is small and thus, from (\ref{eq:diff}), we expect that the detection performance gap between the ML detector and the proposed suboptimal ML detector is also small, especially in the high signal-to-noise ratio (SNR) region.
\section{Simulation Results}
To verify the performance of the proposed suboptimal ML detector, we simulate two OFDM-IM systems with two different illegal SAPs ratios $r$. For modulating the symbols on the active subcarriers, QPSK is used, since OFDM-IM gives better BER performance than conventional OFDM in the low to medium data rate region \cite{bacsar2013orthogonal}. Also, we consider a Rayleigh fading channel of length eight with an exponential power-delay profile. Since interleaved concatenation is employed, the elements within an OFDM-IM subblock experience nearly independent CFRs in the frequency domain.
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\linewidth]{n8k4.pdf}
\caption{BER performance of the three detectors, where we use $N=128$, $n=8$, and $k=4$.}
\label{fig:n8k4}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\linewidth]{n10k5.pdf}
\caption{BER performance of the three detectors, where we use $N=100$, $n=10$, and $k=5$.}
\label{fig:n10k5}
\end{figure}
Fig. \ref{fig:n8k4} shows the BER performance of the three detectors, where we use $N=128$, $n=8$, and $k=4$.
In this case, the illegal SAPs ratio is only $r= 0.086$ and thus there is only a small gap between the ML detector and the $k$lv detector, as described by (\ref{eq:gapMLklv}). Also, the proposed suboptimal ML detector shows BER performance similar to that of the ML detector, as explained by (\ref{eq:UVA}).
The error events within different subblocks are statistically identical, so it is sufficient to investigate the error events within a single subblock to determine the overall system performance. Therefore, it is enough to verify the OFDM-IM systems with a small $N$.
Fig. \ref{fig:n10k5} shows the BER performance of the three detectors, where we use $N=100$, $n=10$, and $k=5$.
In this case, the illegal SAPs ratio is $r= 0.49$ and thus there is a visible gap between the ML detector and the $k$lv detector. The proposed suboptimal ML detector shows almost the same BER performance as the ML detector. As explained by (\ref{eq:UVA}), the performance gap between the ML detector and the proposed detector becomes smaller as the SNR increases.
\section{Conclusion}
In this letter, the suboptimal ML detection for OFDM-IM systems is proposed, where the second best SAP is subsequently tested after the test of the first best SAP.
This simple modification can significantly enhance the detection performance because, as analyzed above, testing only the first and second best SAPs is sufficient to capture most of the gain of the ML detector.
Using the proposed low-complexity suboptimal ML detector, we obtain almost the same detection performance as the ML detector.
This leads to the flexible and unconstrained implementation of OFDM-IM systems.
\bibliographystyle{IEEEtran}
\section{Introduction}
Direct and scattered Cherenkov light (CL) is one of the relevant contributions to the uncertainty of the measured flux of fluorescent light (FL) from extensive air showers (EAS). The impact of backscattered CL was noted when modelling the response of several detectors~\cite{EUSO2015},\cite{Auger2010},\cite{ Unger2008}. The problem of reliable CL and FL separation is relevant for a better detection of high-energy EAS and estimation of the primary particle parameters. This study is a first step in the search for both theoretical approaches to the problem and a possible electronics design for current and future detectors.
\section{The method of separating FL and CL}
A separation method based on the simultaneous recording of light from one `optical pixel’ by two or more pairs of silicon photomultipliers (SiPMs) is proposed. The first SiPM detects the incoming light flux in the wavelength band of its maximal sensitivity. The second SiPM detects it through an optical ultraviolet (UV) filter for fluorescent light (FL) separation. If the SiPMs' sensitivity characteristics, absorption characteristics of the filter elements and the spectra of fluorescent~\cite{AIRFLY2016} and Cherenkov light are known, one can calculate the contribution of each component to the total light flux. Upon the completion of this work it will be possible to separate fluorescent and Cherenkov light at the stage of on-board primary processing of recorded data. This method will increase the methodological accuracy of fluorescent light measurements.
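Quantitatively, with one SiPM reading the unfiltered flux and its partner reading the flux through the filter, the two components follow from a $2\times2$ linear system. The sketch below assumes equal SiPM sensitivities and signals already expressed in common units; the numerical values are purely illustrative, with the UFS-5 transparencies taken from table~\ref{table} below.

```python
import numpy as np

def separate_fl_cl(s_open, s_filtered, t_fl, t_cl):
    # model: s_open = FL + CL;  s_filtered = t_fl * FL + t_cl * CL
    M = np.array([[1.0, 1.0], [t_fl, t_cl]])
    fl, cl = np.linalg.solve(M, np.array([s_open, s_filtered]))
    return fl, cl

# illustrative readings with the measured UFS-5 coefficients t_fl = 0.65, t_cl = 0.36
fl, cl = separate_fl_cl(100.0, 50.2, 0.65, 0.36)
```

The system is well conditioned only when the two transparencies differ appreciably, which is exactly what the separation efficiency introduced later quantifies.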
In this work both ordinary colored-glass filters and interference filters were studied. In theory the interference filters should allow a better FL separation due to their narrow pass-through band in the UV region. The spectral characteristics of all studied filters are shown in fig.~\ref{Spect_filters}.
Fig.~\ref{method} shows a possible modification of the conventional photomultiplier tube (PMT) mosaic, using the Telescope Array (TA)~\cite{TA2012} and Pierre Auger Observatory~\cite{Auger2015} detectors as examples. In this case the signal loss on the filters is minimal since only one third of the mosaic area is covered by the filters. However, it is worth noting that the proposed method is effective only when the light spot from the remote point source on the mosaic is two or more times larger than the SiPM mosaic subpixel size.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{Spect_filters.pdf}
\end{center}
\caption{Spectral characteristics of light filters.}
\label{Spect_filters}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{method.pdf}
\end{center}
\caption{PMT mosaic modification example. The areas ratio (with filter)/(without filter) is equal to 1/2.}
\label{method}
\end{figure}
\section{FL and CL test stands}
The right side of fig.~\ref{FL_CL_stands} shows the general scheme and a photo of the stand for testing the optical filters with CL. CL photons are generated by cosmic ray muons passing through a $15\times15\times50$~mm acrylic radiator (acrylic glass without a UV stabilizer) with 5 reflective sides (the bottom side was transparent). The radiator's bottom side was matted to create an isotropic CL flux. The selection of vertical muons passing through the whole radiator is done using two $15\times15\times15$~mm scintillator blocks with a SiPM (shown in blue on the scheme) placed above and below the radiator. Signals from the SiPMs are routed to the coincidence scheme that in turn triggers the recording on a digital oscilloscope. The tested filters are installed between the radiator and 4 measuring SiPMs using different fittings. Each muon passing through the radiator gives around 1800 CL photons, but due to scattering, multiple reflections and bottom-side absorption only about 25\% of the photons reach the measuring SiPMs. For more reliable measurements the SiPMs were connected in parallel in diagonal pairs. The resulting 2 signals were amplified by fast operational amplifiers~AD8011 and then recorded by the oscilloscope. This scheme yields an average pulse amplitude of 42~mV per channel with a noise level below 5~mV.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\textwidth]{FL_CL_stands.pdf}
\end{center}
\caption{\label{FL_CL_stands}Schemes and photos of the stands for filter (UFS-1, UFS-5, FS6, SL 360$\setminus$50, SL 280-380, FF01-375/110) attenuation coefficient measurements. The FL stand is on the left and the CL one is on the right.}
\end{figure}
The left side of fig.~\ref{FL_CL_stands} shows the general scheme and a photo of the stand for testing optical filters with FL. The FL source was manufactured~\cite{MELZ_PMT} on request and is described in detail in~\cite{FL_CL_2017}. The stand consists of a FL source, a light flux attenuator, a diffuser, a filter housing and~4~SiPMs. The light attenuator consists of two parallel foils with multiple pinholes, situated approximately 12~mm apart from each other. The pinholes are arranged so that direct light from the FL source does not reach the filters and SiPMs. The diffuser consists of a $15\times15\times50$~mm light guide made of matted foil to match the measuring geometry of the CL test stand. The optical filters, their housing and the measuring SiPM arrangement and circuitry were the same as on the CL stand described above. The only difference is that FL is continuous, so in this case it is more convenient to measure and compare the SiPM anode currents. For the SiPM linearity test two identical diodes with a main wavelength of 403~nm were used. The diodes illuminated the SiPMs one by one at 5 different voltages and in combination. The SiPMs proved to be highly linear.
\section{Preliminary results}
Using the stands described above, a series of tests was carried out to determine the attenuation coefficients of 6 filters (see fig.~\ref{Spect_filters}) for both CL and FL. The preliminary results are shown in table~\ref{table}.
\begin{table}[t]
\caption{Preliminary results on the transparency coefficient measurements for FL and CL of conventional optical filters (UFS-1, UFS-5, FS6) and interference filters (SL 360$\setminus$50, SL 280-380, FF01-375/110).}
\label{table}
\begin{center}
\begin{tabular}{lccc}
\br
Filter &FL &CL &Arbitrary separation efficiency\\
\mr
FF01-375/110 & $0.73\pm 0.07$ & $0.39\pm0.15$ & 1.87\\
UFS-5 & $0.65\pm 0.01$ & $0.36\pm0.26$ & 1.81\\
UFS-1 & $0.71\pm 0.01$ & $0.48\pm0.25$ & 1.48\\
FS6 (BG3) & $0.81\pm 0.01$ & $0.55\pm0.18$ & 1.47\\
SL 280-380 & $0.36\pm 0.01$ & $0.33\pm0.11$ & 1.09\\
SL 360$\setminus$50 & $0.28\pm 0.01$ & $0.28\pm0.09$ & 1.00\\
\br
\end{tabular}
\end{center}
\end{table}
For filter comparison the `Arbitrary separation efficiency' characteristic, defined as the ratio of FL transparency to CL transparency, was introduced. The best characteristics are shown by the interference filter~FF01-375/110 and the colored-glass filter UFS-5. The FF01-375/110 filter has a good FL transparency, but its high cost is a drawback. Moreover, the filters' angular properties need more careful analysis, since in many actual experiments the light falls on the sensitive part of the detector over a wide range of angles, while in this study it was collected within a narrow angle.
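The last column of table~\ref{table} is simply the ratio of the two measured transparencies, which is straightforward to reproduce:

```python
# (FL transparency, CL transparency) from table 1; efficiency = FL / CL
filters = {
    "FF01-375/110": (0.73, 0.39),
    "UFS-5":        (0.65, 0.36),
    "UFS-1":        (0.71, 0.48),
    "FS6 (BG3)":    (0.81, 0.55),
    "SL 280-380":   (0.36, 0.33),
    "SL 360/50":    (0.28, 0.28),
}
for name, (t_fl, t_cl) in filters.items():
    print(f"{name}: {t_fl / t_cl:.2f}")
```

This reproduces the table's last column: 1.87, 1.81, 1.48, 1.47, 1.09 and 1.00.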
Since the filters were installed over the SiPMs without optical contact, the transparency coefficients in table~\ref{table} also include reflection losses on the SiPM surface. Since FF01-375/110, unlike the other filters, has a `perfectly' polished surface that produces visible flares, its measured transparency coefficient has a higher uncertainty. In the next planned measurement series this will be corrected by using a different diffuser design.
A significant uncertainty in CL measurements comes from two major sources:
\begin{enumerate}
\item muon trajectory in the radiator and
\item low muon flux --- $\sim$3~muons per hour.
\end{enumerate}
The first factor introduces a difference of up to a factor of 2 between the two measuring channels, but does not change the average value. The second factor does not allow gathering enough statistics in a reasonable time to reduce the statistical error. While the exposure can be increased tenfold, it would still not be nearly enough to reach the precision of the FL measurements. To solve this issue, the measurement of the CL filter attenuation coefficients is planned to be carried out using CL from EAS. The measurements are planned to take place at a test site with low light pollution and aerosol levels, using a 0.3~m$^2$ mirror with a 470~mm curvature radius and a set of 7 SiPMs with light collectors. Compared with the stand described above, the new setup will increase the light flux per SiPM by more than 15 times and raise the event rate to about 150 events per hour.
\section{Development of the detector}
The detector is designed for ground- and space-based optical experiments for high and ultrahigh energy cosmic ray studies.
It is planned to create a sensitive module utilizing light collectors. The module will consist of a board with 7 SiPMs \cite{SensL} and a set of light collectors (fig.~\ref{Seven_SiPM}, right and left respectively), amplifiers, and a set of additional sensors (temperature, pressure, etc.). The light collectors effectively enlarge the pixel area to $\sim$3~cm$^2$. Unlike the commercially available ones, the new matrix allows setting the supply voltage individually for each SiPM for sensitivity equalization across the module (and between modules). It is planned that the whole sensitive matrix will consist of 7 modules, i.e. 49 pixels in total. The effective sensitive area should be 4--5 times larger than for a single module (due to the partial shadowing of the mirror by the matrix).
Fig.~\ref{Electronic_signal} shows the 7-channel photoelectron counting board prototype. The board houses a debug module with an FPGA chip and receives commands via an ethernet interface. It also houses the SiPM power source ($-24$ \ldots$-29$~V), digital-to-analogue converters (DAC) for individual SiPM sensitivity control, comparators for photoelectron counting, DACs for their level selection and DACs for sensor readings. The 7-SiPM module is connected to the board. The white cable is used for powering the module and 7 gray micro-coaxial cables connect the SiPM amplifiers to the comparators.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{Seven_SiPM.pdf}
\end{center}
\caption{\label{Seven_SiPM} Matrix of seven SiPM SensL MicroFC-60035-SMT with light collectors and amplifiers.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{Electronic_signal.pdf}
\end{center}
\caption{\label{Electronic_signal} Left panel: an electronic board for counting photoelectrons with a Zynq-7020 FPGA debugger. Right panel: An amplified signal of SiPM's fast output.}
\end{figure}
\section{Conclusion}
A method for separating the Cherenkov and fluorescent light from EAS is being developed. Preliminary measurement results indicate good prospects for the application of the proposed method. The first elements of a detector using silicon photomultipliers for the realization of this method have been produced.
\ack
This work was funded by Russian Foundation for Basic Research (RFBR) project No.16-02-00777.
\section*{References}
\section{Abstract.}
In [\ref{rigid}] the authors asked (open question 2): is the size of the USS of a rigid pseudo-Anosov braid bounded above by some polynomial in the number of strands and the braid length? We answer this question in the negative.
\section{Introduction.}
The conjugacy problem in the braid group was solved by F.~Garside in 1969. The problem splits into two parts: to decide whether two braids are conjugate, and to find a braid conjugating one to the other. Advances in the solutions of these two problems go hand in hand, so we do not distinguish them. In the papers [\ref{rigid},\ref{periodic},\ref{bkl},\ref{elrifai},\ref{processing},\ref{franco},\ref{gebhardt}] the algorithm was improved, but a polynomial algorithm is still unknown.
Garside's algorithm calculates the Summit Set of a braid, which we define in Section 3. In short, the Summit Set is a finite canonical subset of the conjugacy class of the braid.
Later W.~Thurston [\ref{processing}] improved the algorithm by introducing the left normal form of a braid, which can be calculated in time polynomial in the number of strands and the braid length. In this way W.~Thurston obtained a polynomial algorithm for the word problem in the braid group.
In 1994 E.~El-Rifai and H.~Morton [\ref{elrifai}] improved the algorithm by replacing the Summit Set with its subset, the Super Summit Set, and showed how to find at least one element of the Super Summit Set in polynomial time.
In 2003 J.~Gonz\'alez-Meneses and N.~Franco [\ref{franco}] showed how one can find the rest of the Super Summit Set in time bounded above by the product of the size of the Super Summit Set and a polynomial in the number of strands and the braid length. However, the size of the Super Summit Set is not bounded above by a polynomial.
In 2003 V.~Gebhardt [\ref{gebhardt}] replaced the Super Summit Set with its subset, the Ultra Summit Set. The time needed to find at least one braid in the Ultra Summit Set is not known, but one can find the rest of the Ultra Summit Set in time bounded above by the product of the size of the Ultra Summit Set and a polynomial in the number of strands and the braid length. However, the size of the Ultra Summit Set is not bounded above by a polynomial.
J.~Birman, V.~Gebhardt and J.~Gonz\'alez-Meneses [\ref{rigid}] suggested reducing the problem to so-called rigid pseudo-Anosov braids. They asked (open question 2): is the size of the USS of a rigid pseudo-Anosov braid bounded above by some polynomial in the number of strands and the braid length? In this paper we answer this question in the negative.
\smallskip
\noindent{\bf Theorem 1.} The braid $\alpha_n:=\sigma_1 \sigma_2^{-1} \sigma_3 \sigma_4^{-1}\dots \sigma_{n-1}^{(-1)^n}$ on $n$ strands is rigid and the size of its Ultra Summit Set is at least $2^{[(n-2)/2]}.$
\noindent{\bf Theorem 3.} The braid $\alpha_n$ is pseudo-Anosov for odd number $n\ge3.$
\smallskip
\noindent{\bf Note.} The statement of Theorem 3 holds true for arbitrary $n$: one can construct an invariant train track (for the notion of train track see [\ref{bestvina},\ref{thurston}]) of the braid $\alpha_n.$ But for simplicity we present here a direct proof for odd $n$.
This counter-example was discovered experimentally by I.~A.~Dynnikov with the help of the program by J.~Gonz\'alez-Meneses, which implements the latest version of the algorithm introduced in [\ref{gebhardt}]. According to these calculations the size of the Ultra Summit Set of the braid $\alpha_n$ is equal to $$(3-(-1)^n)\cdot 3^{n-3},$$ for $n=3,4,\dots,11.$
\smallskip
J.~Birman, K.~H.~Ko and S.~J.~Lee [\ref{bkl}] introduced a new presentation of the braid group with so-called {\it band generators}. Using it, they obtained a new solution to the word and conjugacy problems which retains most of the desirable features of the solution considered above (it uses the notions of left normal form and Ultra Summit Set) and at the same time makes certain computational improvements possible. However, the Birman-Ko-Lee-presentation version of open question 2 in [\ref{rigid}] has the same answer.
\smallskip
\noindent{\bf Theorem 2.} The Ultra Summit Set in the Birman-Ko-Lee presentation of the braid $\alpha_n$ contains at least $2^{(n{-}1)/2}$ rigid braids for odd $n$.
\smallskip
According to the computations of my program, the size of the USS of $\alpha_n$ is equal to $$\frac{3-(-1)^n}2\cdot n\cdot3^{n-3},$$ for $n=3,4,\dots,9.$
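The observed counts and the lower bounds of Theorems 1 and 2 are easy to tabulate; the functions below merely restate the formulas quoted above.

```python
def uss_artin_observed(n):      # (3 - (-1)^n) * 3^(n-3), observed for n = 3..11
    return (3 - (-1) ** n) * 3 ** (n - 3)

def uss_artin_bound(n):         # Theorem 1: at least 2^[(n-2)/2]
    return 2 ** ((n - 2) // 2)

def uss_bkl_observed(n):        # (3 - (-1)^n)/2 * n * 3^(n-3), observed for n = 3..9
    return (3 - (-1) ** n) // 2 * n * 3 ** (n - 3)

for n in range(3, 10):
    assert uss_artin_observed(n) >= uss_artin_bound(n)
    if n % 2 == 1:              # Theorem 2 is stated for odd n
        assert uss_bkl_observed(n) >= 2 ** ((n - 1) // 2)
```

Both observed counts grow exponentially with $n$, consistent with the negative answer to the open question.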
\section{Definitions.}
The following definitions refer to theorem 1.
\noindent{\bf Definition.} A braid with positive crossings is a {\it permutation braid} if every pair of its strands crosses at most once.
A permutation braid is uniquely defined by its permutation of strands, and the number of permutation braids equals $n!$.
\noindent{\bf Definition.} The {\it Garside element} is the permutation braid $\Delta$ whose permutation of strands is $p (i) = n{+}1{-}i$.
For every permutation braid one can find a permutation braid such that their product equals the Garside element.
\noindent{\bf Theorem-definition.} [\ref{processing}] Any braid has a unique representative in {\it left normal form} $\Delta^k\cdot b_1\cdot b_2\cdot\dots\cdot b_m$ where $b_i$ is a permutation braid not equal to $\Delta$ and, for $i=1,2,\dots,m{-}1$ and $j=1,2,\dots,n{-}1$, if the $j$th and the $(j{+}1)$th strands of $b_{i+1}$ cross, then the two strands of $b_i$ which end at the $j$th and the $(j{+}1)$th points also cross.
\noindent{\bf Definition.} [\ref{elrifai}] Let {\it Super Summit Set} of a braid $b$ be the set of braids which are conjugate to $b$ and have maximal power of $\Delta$ and minimal number of permutation braids in their left normal form.
\noindent{\bf Definition.} Consider a braid $b$ in left normal form. Let ${\bf c}(b)=\Delta^k\cdot b_2\cdot b_3\cdot\dots\cdot b_m\cdot(\Delta^k b_1 \Delta^{-k}) =(\Delta^k b_1 \Delta^{-k})^{-1}b(\Delta^k b_1 \Delta^{-k})$ and ${\bf d}(b)=\Delta^k\cdot(\Delta^{-k}b_m\Delta^k)\cdot b_1\cdot b_2\cdot\dots\cdot b_{m-1}=b_m b b_m^{-1}.$ We call ${\bf c}(b)$ and ${\bf d}(b)$ the {\it cycling} and {\it decycling} of $b$, respectively.
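Schematically, on a braid stored as $(k, [b_1,\dots,b_m])$ with each $b_i$ a permutation tuple, cycling and decycling are rotations of the factor list combined with conjugation by $\Delta^k$; since $\Delta^2$ is central, that conjugation is either trivial or the involution $\tau(b)=\Delta^{-1}b\Delta$. The sketch below is only an illustration: it omits the re-normalization of the result into left normal form, which a real implementation must perform.

```python
def tau(p):
    # tau(b) = Delta^{-1} b Delta on a permutation braid given as a permutation
    # tuple p (1-based images); as permutations, tau(p) = w o p o w with w(i) = n+1-i
    n = len(p)
    w = lambda i: n + 1 - i
    return tuple(w(p[w(i) - 1]) for i in range(1, n + 1))

def cycling(k, factors):
    # c(b): move Delta^k b_1 Delta^{-k} = tau^k(b_1) to the tail (tau^2 = id)
    b1 = factors[0] if k % 2 == 0 else tau(factors[0])
    return k, factors[1:] + [b1]

def decycling(k, factors):
    # d(b): move Delta^{-k} b_m Delta^k = tau^k(b_m) to the front
    bm = factors[-1] if k % 2 == 0 else tau(factors[-1])
    return k, [bm] + factors[:-1]
```

For example, in $B_3$ one has $\tau(\sigma_1)=\sigma_2$ on the level of permutations, and applying $\tau$ twice returns the original factor.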
\noindent{\bf Theorem.} [\ref{elrifai}] Let $l$ be the word length of a braid $b$. Then a sequence of at most $l\cdot n^2$ cyclings and decyclings applied to $b$ produces a representative of the Super Summit Set.
\noindent{\bf Definition.} [\ref{gebhardt}] Let the {\it Ultra Summit Set} ($U_A$) be the subset of the Super Summit Set consisting of those braids $b$ such that ${\bf c}^d(b){=}b$ for some natural number $d$.
\noindent{\bf Definition.} If ${\bf c}(b)$ in the definition of the cycling is already in left normal form, then we call $b$ {\it rigid}.
The following definitions refer to Theorem 2. Here the main reference is [\ref{bkl}].
For $t>s$ denote the braid $\sigma_{t-1} \sigma_{t-2}\dots\sigma_{s}\sigma_{s+1}^{-1}\sigma_{s+2}^{-1}\dots\sigma_{t-1}^{-1}$ by $(t\ s)$. The braids $(t\ s)$ are called {\it band generators}.
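The underlying permutation of the band generator $(t\ s)$ is the transposition exchanging the strands $s$ and $t$; this can be checked mechanically by tracking strand positions through the Artin word above (an illustrative sketch; the function names are ours):

```python
def induced_permutation(word, n):
    # word: list of nonzero ints, +i / -i standing for sigma_i^{+1} / sigma_i^{-1}
    # p[k] = current position of the strand that started at position k (1-based)
    p = list(range(n + 1))  # index 0 unused
    for g in word:
        i = abs(g)
        a, b = p.index(i), p.index(i + 1)  # strands sitting at positions i, i+1
        p[a], p[b] = p[b], p[a]            # the crossing swaps their positions
    return p[1:]

def band(t, s):
    # (t s) = sigma_{t-1} sigma_{t-2} ... sigma_s sigma_{s+1}^{-1} ... sigma_{t-1}^{-1}
    return list(range(t - 1, s - 1, -1)) + [-i for i in range(s + 1, t)]
```

For example, `induced_permutation(band(4, 2), 5)` returns `[1, 4, 3, 2, 5]`, the transposition of strands $2$ and $4$.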
\noindent{\bf Definition.} Let $n_1,n_2,\dots,n_m$ be a decreasing sequence. Denote by $(n_1\ n_2\ \dots\ n_m)$ the braid $(n_1\ n_2)(n_2\ n_3)\dots(n_{m-1}\ n_m)$. This braid is called a {\it descending cycle}.
\noindent{\bf Definition.} The braid $\delta:=(n\ n{-}1\dots2\ 1)$ is called the {\it fundamental word}.
\noindent{\bf Definition.} Two descending cycles $(n_1\ n_2\ \dots\ n_p)$ and $(m_1\ m_2\ \dots\ m_q)$ are {\it parallel} if for any $i=1,2,\dots,p{-}1$ and $j=1,2,\dots,q{-}1$ we have $(n_i-m_j)(n_i-m_{j+1})(n_{i+1}-m_j)(n_{i+1}-m_{j+1})>0.$
\noindent{\bf Definition.} The product of pairwise parallel descending cycles is called a {\it canonical factor.}
\noindent{\bf Theorem-definition.} Any braid has a unique representation in {\it left normal form} $\delta^k\cdot b_1\cdot b_2\cdot\dots\cdot b_m$, where $b_i$ is a canonical factor not equal to $\delta$ and, for $i=1,2,\dots,m{-}1$ and every generator $(j\ k)$, the braids $b_i\cdot (j\ k)$ and $(j\ k)^{-1}\cdot b_{i+1}$ are not simultaneously canonical factors.
The definitions of cycling, decycling, Super Summit Set and Ultra Summit Set are similar to the Artin generator case. Denote the Ultra Summit Set in the Birman-Ko-Lee presentation by $U_{BKL}$. For the definitions of periodic, reducible and pseudo-Anosov braids see [\ref{bestvina},\ref{rigid},\ref{thurston}].
\section{Main result.}
\noindent{\bf Theorem 1.} The braid $\alpha_{n+1}$ on $n{+}1$ strands is rigid and the size of its $U_A$ is at least $2^{[(n-1)/2]}.$
Permutations of the generators in the word of the braid $\alpha_{n+1}$ produce $2^{n-1}$ braids (Proposition 2) which are conjugate to $\alpha_{n+1}$ (Proposition 1). We will prove that $2^{[(n-1)/2]}$ braids among them are rigid. Note that the cycling of a rigid braid is also rigid. So, by the theorem of [\ref{elrifai}] mentioned above, a rigid braid belongs to its $U_A$. Thus we obtain $2^{[(n-1)/2]}$ elements of $U_A$.
\noindent{\bf Proposition 1.} A braid produced by a permutation of generators applied to the word of the braid $\alpha_{n+1}$ is conjugate to $\alpha_{n+1}$.
\noindent{\bf Proof.} Assume that a generator $\sigma_i$ is to the right of the generator $\sigma_1$ in that braid word. Transpose them if $i\neq2$. Repeat this operation while it is possible. If $\sigma_1$ is at the right end of the word, then conjugate the braid by $\sigma_1$ and obtain a braid with $\sigma_1$ at the left end. Then apply the above operation until $\sigma_1$ meets $\sigma_2^{-1}$. Then similarly move $\sigma_1$ and $\sigma_2^{-1}$ together to the right until $\sigma_2^{-1}$ meets $\sigma_3$. Thus at most $n^2$ transpositions and conjugations produce $\alpha_{n+1}$.
\noindent{\bf Proposition 2.} The number of braids obtained equals $2^{n-1}$.
\noindent{\bf Proof.} The order of the generators with neighbouring indices in the braid word (i.e., which of the generators $\sigma_i^{\pm1}$ and $\sigma_{i+1}^{\mp1}$ is on the left) determines the braid, due to the commutativity relations. We will prove that this order is determined by the braid.
\noindent{\bf Definition.} To a braid obtained by permuting the generators in the word of $\alpha_{n+1}$ we associate a sequence $n_1,m_1,n_2,m_2,\dots,n_r,m_r$ of natural numbers such that $\sigma_i^{\pm1}$ is to the left of $\sigma_{i+1}^{\mp1}$ for $i=1,2,\dots,n_1$, $\sigma_i^{\pm1}$ is to the right of $\sigma_{i+1}^{\mp1}$ for $i=n_1+1,n_1+2,\dots,n_1+m_1$, etc.
The $(n_1{+}m_1{+}\dots{+}n_k{+}m_k{+}1)$th strand ends at the $(n_1{+}m_1{+}\dots{+}n_k{+}m_k{+}n_{k+1}{+}2)$th endpoint and the $(n_1{+}m_1{+}\dots{+}n_k{+}1)$th strand ends at the $(n_1{+}m_1{+}\dots{+}n_k{+}m_k{+}2)$th endpoint. The number of a strand decreases by one if it is between $n_1{+}m_1{+}\dots{+}n_k{+}m_k{+}1$ and $n_1{+}m_1{+}\dots{+}n_k{+}m_k{+}n_{k+1}{+}1$. The other strands increase their numbers by one. So the order of the generators with neighbouring indices is determined by the sequence $n_1,m_1,\dots,n_r,m_r$, which is in turn determined by the braid. So we have $2^{n-1}$ braids.
\noindent{\bf Corollary from the proof.} A braid is determined by the sequence $n_1,m_1,n_2,m_2,\dots,n_r,m_r$.
\smallskip
\smallskip
According to computations with J.Gonz\'alez-Meneses' program, all these braids are rigid (and therefore belong to the $U_A$ of $\alpha_{n+1}$) for $2\leqslant n\leqslant5$. For simplicity we will consider only those braids among them such that if a generator is to the left of both generators with neighbouring indices, then it has an even index, and if a generator is to the right of both generators with neighbouring indices, then it has an odd index. In the notation of the previous paragraph, we require that all $n_i$ be even and all $m_i$ except the last be odd. We also require that $m_r\neq0$.
\smallskip
\smallskip
\noindent{\bf Proposition 3.} The number of braids we consider is at least $2^{[\frac{n-1}2]}.$
\noindent{\bf Proof.} Without loss of generality we assume that $n$ is odd. By the corollary from Proposition 2, the number of braids we consider equals the number of decompositions of $n{-}1$ into a sum of $2r$ numbers $n_1,n_2,\dots,n_r,m_1,m_2,\dots,m_r$ over all $r$, where the $n_i$ are even and the $m_i$ except the last are odd. Assume that $m_r$ is odd. Then the quantity of decompositions equals the quantity of decompositions of $n{-}1{+}r$ into a sum of $2r$ even numbers over all $r$, which equals the quantity of decompositions of $(n{-}1{+}r)/2$ into a sum of $2r$ numbers over all $r$, which equals $\sum\limits_{r=1}^{n}C_{(n{-}1{+}r)/2+2r-1}^{2r-1}\geqslant\sum\limits_{r=1}^{n}C_{(n{-}1)/2}^{2r-1}=2^{(n-1)/2-1}.$ For even $m_r$ we obtain the other $2^{(n-1)/2-1}$ braids.
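The final equality uses the standard identity $\sum_{r\geqslant1}C_m^{2r-1}=2^{m-1}$ (with $m=(n{-}1)/2$); a quick numerical check of this identity, purely illustrative:

```python
from math import comb

# Sum of the odd-index binomial coefficients of m: sum_{r>=1} C(m, 2r-1) = 2^(m-1).
def odd_binomial_sum(m):
    return sum(comb(m, k) for k in range(1, m + 1, 2))
```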
\smallskip
\noindent{\bf Proposition 4.} The braids considered are rigid.
\noindent{\bf Proof.} First we compute a left normal form of the braids considered.
\begin{figure}[h]
\center{\includegraphics[scale=2]{braid2.eps} \Huge \ \includegraphics[scale=2]{braid1.eps} \ \ \ \ \ \ \includegraphics[scale=2]{braid3.eps}}
\end{figure}
First assume that $r=1$ and $n$ is odd, i.e., we have the braid in the middle of the picture $$\sigma_n\sigma_{n-1}^{-1}\dots\sigma_{n_1+2}^{-1}\sigma_1\sigma_2^{-1}\dots\sigma_{n_1+1}.$$ Note that the crossings of the first strand with the strands of numbers $3, 5, 7, \dots, n_1{+}1$ are negative. We can get rid of this: multiply our braid on the left by the permutation braid having $(3\ 5\ 7\ \dots\ n_1{+}1\ 1\ 2\ 4\ \dots\ n_1\ n_1{+}3\ n_1{+}5\ \dots\ n_1{+}m_1{+}1\ n_1{+}m_1{+}2\ n_1{+}2\ n_1{+}4 \dots n_1{+}m_1)$ as its permutation of strands. In the obtained product the first $n_1/2$ strands cross the other strands from above, which means that we can move them above the other strands to the top, so that the negative crossings between the first $n_1{+}1$ strands disappear and any two of these strands cross at most once (in the picture, the right braid is the product of the braids on the left). Similarly move down to the bottom the strands of numbers from $n_1{+}2{+}m_1/2$ to $n$. We obtain a permutation braid having $$(2\ 4\dots n_1\ n_1{+}2\ 1\ 3\ 5\ \dots n_1{+}1\ n_1{+}4\ n_1{+}6 \dots n_1{+}m_1{+}2\ n_1{+}1\ n_1{+}3\ n_1{+}5 \dots n_1{+}m_1{+}1)$$ as its permutation of strands.
Now consider the general case. First reduce our braid by the generators which are to the left of both generators with neighbouring indices: multiply our braid on the left by the product $\sigma_{1+n_1+m_1}\sigma_{1+n_1+m_1+n_2+m_2}\dots\sigma_{1+n_1+m_1+\dots+n_{r-1}+m_{r-1}}$. If $n$ is odd, also multiply our braid by $\sigma_n$ on the left. By our assumption these generators have even indices, therefore they cancel. Denote this product of generators (including $\sigma_n$ if necessary) by $c$. After the multiplication our braid divides into a product of braids. Each braid among them has all strands standing still except the strands whose numbers are between $1+n_1+m_1+\dots+n_k+m_k$ and $n_1+m_1+\dots+n_{k+1}+m_{k+1}$. Note that the case of such a braid was analyzed in the previous paragraph, providing us with a permutation braid to multiply on the left; hence we can multiply our braid on the left by the product $d$ of the corresponding permutation braids and obtain a permutation braid $b$. Denote the permutation braid $d\cdot c$ by $a$ and our initial braid by $\beta$. So we have $\beta=a^{-1}b=\Delta^{-1} a^* b$ where $a^*=\Delta a^{-1}.$
Now we prove that $\Delta^{-1} a^* b$ is a left normal form. In the braid $b$ the $i$th and the $(i+1)$th strands cross for $i=n_1/2+1,n_1+m_1/2+2,n_1+m_1+3+n_2/2,n_1+m_1+4+n_2+m_2/2,\dots,n_1+m_1+\dots+n_{r-1}+m_{r-1}+3+n_r/2,n_1+m_1+\dots+n_{r-1}+m_{r-1}+4+n_r+m_r/2.$ But in the braid $a$, for such values of $i$, the $i$th and the $(i+1)$th strands do not cross. Therefore the strands ending at the $i$th and the $(i+1)$th endpoints cross in the braid $a^*$. So we have a left normal form.
Now we prove that the braid $\beta$ is rigid. It suffices to check that $\Delta^{-1} b (\Delta^{-1} a^* \Delta)$ is a left normal form. The strands ending at the $i$th and the $(i+1)$th endpoints do not cross for odd $i$ in the braid $a$. Note that $a (\Delta^{-1} a^* \Delta)=\Delta$. Therefore the $i$th and the $(i+1)$th strands cross in the braid $(\Delta^{-1} a^* \Delta)$ for odd $i$. But in the braid $b$ the strands ending at the $i$th and the $(i+1)$th endpoints cross for odd $i$. So $\beta$ is rigid.
\bigskip
\noindent{\bf Theorem 2.} The Ultra Summit Set in the Birman-Ko-Lee presentation of the braid $\alpha_n$ contains at least $2^{(n{-}1)/2}$ rigid braids for odd $n$.
\noindent{\bf Proposition 1.} The braid $\beta=\delta^{-1}((n{-}1\ 1)(n{-}2\ 2)\dots(\frac{n+1}2\ \frac{n-1}2))^2$ is conjugate to $\alpha_n$.
\noindent{\bf Proof.} The braid $\delta^{-1} \beta \delta=\delta^{-1}( (n\ 2)(n{-}1\ 3)\dots(\frac{n+3}2\ \frac{n+1}2))^2$ is conjugate to the braid $\beta$. So it suffices to prove that this braid is conjugate to $\alpha_n.$
Using $(n\ 2)(n\ n{-}1\ n{-}2\ \dots 2)=(n\ n{-}1\ n{-}2\ \dots\ 2)(3\ 2)$ we have
\begin{multline*} \bigl((n\ n{-}1\ \dots\ 2) (n\ n{-}1\ \dots\ 4) \dots (n\ n{-}1)\bigr)^{-1} (\delta^{-1}\beta\delta) \times \\ \bigl((n\ n{-}1\ \dots\ 2) (n\ n{-}1\ \dots\ 4) \dots (n\ n{-}1)\bigr) = \bigl((n\ n{-}1\ \dots\ 2) (n\ n{-}1\ \dots\ 4) \dots (n\ n{-}1)\bigr)^{-1} \times \\ \times \delta^{-1} \times \bigl((n\ 2)(n{-}1\ 3)\dots(\frac{n+3}2\ \frac{n+1}2)\bigr)^2 \times \bigl( (n\ n{-}1\ \dots\ 2) (n\ n{-}1\ \dots\ 4) \dots (n\ n{-}1) \bigr) = \\ = \bigl((n\ n{-}1\ \dots\ 2) (n\ n{-}1\ \dots\ 4) \dots (n\ n{-}1)\bigr)^{-1} \delta^{-1}\times \\ \times \bigl( (n\ n{-}1\ \dots\ 2) (n\ n{-}1\ \dots\ 4) \dots (n\ n{-}1) \bigr) \times \bigl((n\ n{-}1)(n{-}2\ n{-}3)\dots(3\ 2)\bigr)^2 = \\ = \delta^{-1} \bigl((n{-}1\ n{-}2)^{-1} (n{-}1\ n{-}2\ n{-}3\ n{-}4)^{-1}\dots(n{-}1\ n{-}2\ \dots\ 1)^{-1}\bigr)\times \\ \times \bigl( (n\ n{-}1\ \dots\ 2) (n\ n{-}1\ \dots\ 4) \dots (n\ n{-}1) \bigr) \times \bigl((n\ n{-}1)(n{-}2\ n{-}3)\dots(3\ 2)\bigr)^2.\end{multline*}
Then note that $(n{-}1\ n{-}2\ \dots\ 1)^{-1} (n\ n{-}1\ \dots\ 2)= \\ \bigl((2\ 1)^{-1} (n{-}1\ n{-}2\dots\ 2)^{-1}\bigr) \bigl((n{-}1\ n{-}2\ \dots2)(n\ 2)\bigr)=(1\ 2)(n\ 2)$, and $(n\ 2)(n\ n{-}1\ \dots\ 4) = (n\ n{-}1\ \dots\ 4)(4\ 2)$. Using this, we continue the equality \begin{multline*}\delta^{-1} \bigl( (n{-}1\ n{-}2)^{-1} (n{-}1\ n{-}2\ n{-}3\ n{-}4)^{-1}\dots(n{-}1\ n{-}2\dots1)^{-1}\bigr) \times \\ \times \bigl( (n\ n{-}1\ \dots 2) (n\ n{-}1\ \dots\ 4) \dots (n\ n{-}1) \bigr) \times \\ \times \bigl((n\ n{-}1)(n{-}2\ n{-}3)\dots(3\ 2)\bigr)^2 = \delta^{-1}\bigl( (n{-}2\ n{-}1)(n{-}3\ n{-}2)\dots(1\ 2)\bigr)\times\\\times \bigl((n\ n{-}1)(n{-}1\ n{-}3) (n{-}3\ n{-}5)\dots (4\ 2)\bigr) \times \\ \times \bigl((n\ n{-}1)(n{-}2\ n{-}3)\dots(3\ 2)\bigr)^2 = \bigl((2\ 3)(4\ 5)\dots(n{-}1\ n)\bigr) \delta^{-1} \times \\ \times (n\ n{-}1\ n{-}3\ n{-}5\dots 2) \bigl((n\ n{-}1)(n{-}2\ n{-}3)\dots(3\ 2)\bigr)^2. \\ \end{multline*}
Conjugating the obtained braid by the braid $(n\ n{-}1)(n{-}2\ n{-}3)\dots(3\ 2)$, we obtain that $\beta$ is conjugate to the braid
\begin{multline*}\bigl(\delta^{-1} (n\ n{-}1\ n{-}3\ n{-}5\ \dots\ 2)\bigr) \bigl((n\ n{-}1)(n{-}2\ n{-}3)\dots(3\ 2)\bigr) = \\ = \Bigl(\bigl((1\ 2)(2\ 3)\dots(n{-}1\ n)\bigr)\bigl((n\ n{-}1)(n{-}1\ n{-}3)(n{-}3\ n{-}5)\dots(4\ 2)\bigr)\Bigr)\times \\ \times\bigl((n\ n{-}1)(n{-}2\ n{-}3)\dots(3\ 2)\bigr)=\\=\bigl((n{-}2\ n{-}1)(n{-}4\ n{-}3)\dots(1\ 2)\bigr)\bigl((n\ n{-}1)(n{-}2\ n{-}3)\dots(3\ 2)\bigr).\end{multline*}
Write the obtained braid in the Artin presentation: $\sigma_{n{-}2}^{-1}\sigma_{n-4}^{-1}\dots \sigma_{1}^{-1}\sigma_{n-1}\sigma_{n-3}\dots\sigma_2$. Do the same for $\Delta\alpha_n\Delta^{-1}$: $\sigma_{n-1}\sigma_{n-2}^{-1}\sigma_{n-3}\sigma_{n-4}^{-1}\dots\sigma_2\sigma_{1}^{-1}$. Therefore these two braids are conjugate by Proposition 1 in the proof of Theorem 1.
\smallskip
\noindent{\bf Proposition 2.} Consider the set of braids $\{(n{-}1\ 1), (n{-}2\ 2),\dots,(\frac{n+1}2\ \frac{n-1}2)\}$. Choose a subset $S$. Denote by $s$ the product of the braids in $S$. Then the braid $(\delta^{-1}s\delta)^{-1} \beta (\delta^{-1}s\delta)$ belongs to its $U_{BKL}$ (and therefore to the $U_{BKL}$ of $\beta$ and of $\alpha_n$).
\noindent{\bf Proof.} Denote by $T$ the complement of $S$: $\{(n{-}1\ 1), (n{-}2\ 2),\dots,(\frac{n+1}2\ \frac{n-1}2)\}\setminus S.$ Denote by $t$ the product of the braids in $T$. Note that $(\delta^{-1}s\delta)^{-1} \beta (\delta^{-1}s\delta)=\delta^{-1} (ts) (t \delta^{-1}s\delta )$.
\noindent{\bf Step 1.} First we prove that $t \delta^{-1}s\delta $ is a canonical factor.
Denote $a_i=(n{-}i\ i)$ if $(n{-}i\ i)\notin S$ and $a_i=(n{-}i{+}1\ i{+}1)$ if $(n{-}i\ i)\in S$, for $i=1,2,\dots,\frac{n-1}2$. Consider the braids $(n{-}i\ i)$ belonging to $S$. Then the product of the corresponding $a_i$ equals $\delta^{-1}s\delta$.
Let us analyse the product $(t \delta^{-1}s\delta )$. First separate it into commuting multipliers. Assume that $(\frac{n+1}2\ \frac{n-1}2)\in T$. If $(\frac{n+1}2{+}1\ \frac{n-1}2{-}1)\in T$, then $(\frac{n+1}2\ \frac{n-1}2)$ commutes with the rest of the product, so separate it. If $(\frac{n+1}2{+}1\ \frac{n-1}2{-}1)\in S$ and $(\frac{n+1}2{+}2\ \frac{n-1}2{-}2)\in S$, then $(\frac{n+1}2\ \frac{n-1}2)\cdot(\frac{n+1}2{+}1\ \frac{n-1}2{-}1)$ commutes with the rest of the product, so separate it. Then continue in the same way. Let us now formalize this observation. We say that $a_i$ and $a_{i+1}$ belong to the same multiplier if $(n{-}i\ i)$ and $(n{-}i-1\ i+1)$ belong to the same subset ($S$ or $T$). Indeed, the product divides into commuting multipliers in this way.
Now compute each multiplier. It equals $a_i\cdot a_{i+1}\cdot\dots\cdot a_j$. Its form depends on whether $(n{-}i\ i)$ and $(n{-}j\ j)$ belong to $S$. So we have four cases. Consider two of them; the other cases are similar. Assume that $(n{-}i\ i)\in T$ and $(n{-}j\ j)\in T$. Then, using two formulas ($(r_1\ r_2\ \dots\ r_p)(r_q\ r)=(r_1\ r_2\dots\ r_q\ r\ r_{q+1}\ \dots\ r_p)$ if $r_q < r< r_{q+1}$, and $(r\ r_q)(r_1\ r_2\ \dots\ r_p)=(r_1\ r_2\dots\ r_{q-1}\ r\ r_q\ \dots\ r_p)$ if $r_{q-1}< r < r_q$) and induction, we obtain that the multiplier considered equals $(n{-}i\ n{-}i{-}2\ \dots\ n{-}j\ j\ j{+}2\ \dots\ i).$ If $(n{-}i\ i)\in S$ and $(n{-}j\ j)\in T$, then it equals $(n{-}i{+}1\ n{-}i{-}1\dots n{-}j\ j\ j{+}2\ \dots\ i{+}1).$ In both cases we obtain a descending cycle. Note that the numbers in the cycle form two arithmetic progressions.
Note that the descending cycles in our product are parallel, so the statement of the first step is proved.
\noindent{\bf Step 2.} Now we prove that $\delta^{-1}\cdot(ts)\cdot(t\delta^{-1}s\delta)$ is a left normal form. First we introduce some notation and recall some results from [\ref{bkl}].
We say that a canonical factor $c$ is divisible by a generator $(i\ j)$ if $(j\ i)\cdot c$ is also a canonical factor. Corollary 3.7 of [\ref{bkl}]: a canonical factor is divisible by a generator $(i\ j)$ if and only if one of the descending cycles in its decomposition includes $i$ and $j$. By the same corollary, $(j\ i)\cdot c$ is a canonical factor if and only if $c\cdot(j\ i)$ is a canonical factor.
For every canonical factor $a$, the factors $(\delta^{-1}a)$ and $(a\delta^{-1})$ are also canonical ([\ref{bkl}]). Therefore we have: if $ab=\delta$, where $a$ and $b$ are canonical factors, then $a\cdot(i\ j)$ is a canonical factor if and only if $b$ is divisible by $(i\ j)$. Recall that $ts=((n{-}1\ 1)(n{-}2\ 2)\dots(\frac{n+1}2\ \frac{n-1}2))$ and notice that $$((n{-}1\ 1)(n{-}2\ 2)\dots(\frac{n+1}2\ \frac{n-1}2))\cdot((n\ 1)(n{-}1\ 2)\dots(\frac{n+3}2\ \frac{n-1}2))=\delta.$$
So we can reformulate the statement of step 2: the canonical factor $(t\delta^{-1}s\delta)$ and the factor $((n\ 1)(n{-}1\ 2)\dots(\frac{n+3}2\ \frac{n-1}2))$ are not divisible by the same generator simultaneously. Indeed, $((n\ 1)(n{-}1\ 2)\dots(\frac{n+3}2\ \frac{n-1}2))$ is divisible only by $(n\ 1),(n{-}1\ 2),\dots,(\frac{n+3}2\ \frac{n-1}2)$. A descending cycle of $(t\delta^{-1}s\delta)$ consists of two arithmetic progressions. The numbers in the first progression are at least $\frac{n+1}2$, and those in the second are at most $\frac{n+1}2$. Therefore if $n{-}i$ and $i{+}1$ belong to some descending cycle, then these numbers belong to different progressions. But recall that the sum of two numbers from different progressions has the same parity as $n$. So the statement of the second step is proved.
\noindent{\bf Step 3.} Now we prove that $(\delta^{-1}ts\delta)\cdot(\delta^{-1}t\delta^{-1}s\delta^2)\cdot\delta^{-1}$ is a right normal form.
We have to prove that if $(\delta^{-1}ts\delta)$ is divisible by $(i\ j)$, then $(i\ j)\cdot t\delta^{-1}s\delta$ is not a canonical factor. By the decomposition into descending cycles of $(\delta^{-1}ts\delta)$ and Corollary 3.7 of [\ref{bkl}], if $(\delta^{-1}ts\delta)$ is divisible by $(i\ j)$, then $(i\ j)=(n{-}i\ i)$. So denote $r:=(n{-}i\ i)\cdot t\delta^{-1}s\delta$. Assume that $(n{-}i\ i)\in T$. Then the braid $t\delta^{-1}s\delta$ is divisible by $(n{-}i\ i)$, and so $(n{-}i\ i)^{-2} r$ is also a canonical factor. Therefore $r$ is not a canonical factor by Lemma 3.3 of [\ref{bkl}]. Assume that $(n{-}i\ i)\in S$. Then $((n{-}i\ i)(n{-}i{+}1\ i{+}1))^{-1}r$ is also a canonical factor. So $r$ is not a canonical factor by the same lemma.
\noindent{\bf Step 4.} Let $\delta^{-1}\cdot a \cdot b$ and $(\delta^{-1} a \delta)\cdot(\delta^{-1} b \delta)\cdot\delta^{-1}$ be a left normal form and a right normal form, respectively. Then the braid $\gamma$ they represent is rigid and so belongs to $U_{BKL}$.
\noindent{\bf Proof.} We use the arguments of the second step. By the definition of cycling, ${\bf c}(\gamma)=\delta^{-1} b\cdot (\delta^{-1}a\delta)$. We have that $(\delta^{-1} a \delta)\cdot(\delta^{-1} b \delta)\cdot\delta^{-1}$ is a right normal form. Then the canonical factors $(\delta^{-1} a \delta)$ and $b^*$ have no common divisors, where $b^* (\delta^{-1} b \delta)=\delta$. Note that $b\cdot b^* = \delta$. Then $\delta^{-1} b\cdot (\delta^{-1}a\delta)$ is a left normal form and $\gamma$ is rigid.
Note that the cycling of a rigid braid is also rigid. So, by the theorem of [\ref{elrifai}] mentioned in the Definitions section, a rigid braid belongs to its $U_{BKL}$.
Now let us finish the proof of Proposition 2. In steps 1 and 2 we found a left normal form of the braid $(\delta^{-1}s\delta)^{-1} \beta (\delta^{-1}s\delta)$. In steps 3 and 4 we proved that it is rigid and belongs to $U_{BKL}$.
To finish the proof of Theorem 2, count the number of braids $(\delta^{-1}s\delta)^{-1} \beta (\delta^{-1}s\delta)$. They are all distinct because they have distinct left normal forms, and their number equals the number of subsets $S$, i.e., $2^{\frac{n-1}2}$. Therefore the size of $U_{BKL}$ is at least $2^{\frac{n-1}2}$.
\bigskip
\noindent{\bf Theorem 3.} The braid $\alpha_n$ is pseudo-Anosov for odd $n\ge3.$
\noindent{\bf Proof.} Assume that $\alpha_n$ is a periodic braid, i.e., $\alpha_n^k$ belongs to the center of $B_n$, i.e., it equals some power of the braid $(n\ n{-}1\ n{-}2\ \dots\ 1)^n$ [Chow, 1948]. But the difference between the number of positive generators and the number of negative generators in the word of a braid is invariant with respect to the braid group relations. For the braid $\alpha_n^k$ this difference equals zero, but for $(n\ n{-}1\ n{-}2\ \dots\ 1)^n$ it is a positive number. Therefore the braid $\alpha_n^k$ is trivial. But this is impossible because the braid group is torsion-free.
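The exponent-sum argument is elementary to check by machine. The sketch below (helper names are ours) uses the word $\sigma_{n-1}\sigma_{n-2}^{-1}\dots\sigma_2\sigma_1^{-1}$ of a conjugate of $\alpha_n$ computed in the proof of Proposition 1 above, and the Artin word $\sigma_{n-1}\sigma_{n-2}\dots\sigma_1$ for $\delta=(n\ n{-}1\ \dots\ 1)$:

```python
def exp_sum(word):
    # difference between the numbers of positive and negative letters;
    # this is invariant under the braid group relations
    return sum(1 if g > 0 else -1 for g in word)

def alpha_conjugate_word(n):
    # sigma_{n-1} sigma_{n-2}^{-1} ... sigma_2 sigma_1^{-1}, as in Proposition 1
    return [i if (n - 1 - i) % 2 == 0 else -i for i in range(n - 1, 0, -1)]

def delta_power_word(n):
    # delta = sigma_{n-1} sigma_{n-2} ... sigma_1; take its n-th power
    return list(range(n - 1, 0, -1)) * n
```

For odd $n$ the exponent sum of the $\alpha_n$-word is $0$, while that of $\delta^n=(n\ n{-}1\ \dots\ 1)^n$ is $n(n-1)>0$, as used in the proof.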
Assume that $\alpha_n$ is reducible, i.e., the braid $\alpha_n$ is obtained by nontrivially substituting braids for the strands of some nontrivial braid. Denote by $\gamma$ the braid whose strands are substituted. Assume that the $i$th strand is substituted by a braid $\gamma_i$.
Consider the special case $n=3$. Note that $\gamma$ has two strands. No pair of strands in $\alpha_3^3$ is linked. Therefore $\gamma^3$ is trivial. Therefore each $\gamma_i$ is trivial, which means that $\alpha_3^3$ is trivial. Contradiction.
Consider the general case. Let $k,l,m$ be natural numbers with $1\leqslant k< l< m\leqslant n$ such that the numbers $l{-}k$ and $m{-}l$ are odd. Delete from $\alpha_n^n$ all strands except the $k$th, $l$th and $m$th. Note that we obtain $\alpha_3^3$. Therefore the $k$th, $l$th and $m$th strands belong either to the same $\gamma_i$ or to three distinct braids $\gamma_i$.
Assume that the $i$th and the $j$th strands belong to the same braid $\gamma_s$ and $i<j$.
Assume that $j{-}i$ is even. In the set of three numbers $i,i{+}1,j$ neighbouring numbers have odd differences. Therefore the $(i{+}1)$th strand belongs to $\gamma_s$. Considering the sets of three numbers $(i,i{+}1,i{+}2),(i{+}1,i{+}2,i{+}3),\dots,(n{-}2,n{-}1,n)$ and $(i{-}1,i,i{+}1),(i{-}2,i{-}1,i),\dots,(1,2,3)$ we obtain that all strands belong to $\gamma_s$. This is a contradiction.
Assume that $j{-}i$ is odd. If $i>1$, then consider the set $i{-}1,i,j$ and obtain that the $(i{-}1)$th strand belongs to $\gamma_s$. If $j<n$, consider the set $i,j,j{+}1$ and obtain that the $(j{+}1)$th strand belongs to $\gamma_s$. Then we similarly obtain that all strands belong to $\gamma_s$. This is a contradiction, and so Theorem 3 is proved.
\newpage
\newcounter{num}
\setcounter{num}0
\refstepcounter{num}
[\arabic{num}\label{bestvina}] M. Bestvina, M. Handel, Train-tracks for surface homeomorphisms, Topology 34 (1995), no. 1, pp. 1-51.

\refstepcounter{num}
[\arabic{num}\label{rigid}] J. Birman, V. Gebhardt, J. Gonz\'alez-Meneses, Conjugacy in Garside groups I: cyclings, powers and rigidity, Groups, Geometry and Dynamics 1 (2007), pp. 221-279.

\refstepcounter{num}
[\arabic{num}\label{periodic}] J. Birman, V. Gebhardt, J. Gonz\'alez-Meneses, Conjugacy in Garside groups III: periodic braids, Journal of Algebra 316 (2007), no. 2, pp. 746-776.

\refstepcounter{num}
[\arabic{num}\label{bkl}] J. Birman, K. H. Ko, S. J. Lee, A new approach to the word and conjugacy problems in the braid groups, Advances in Mathematics 139 (1998), no. 2.

\refstepcounter{num}
[\arabic{num}\label{elrifai}] E. A. Elrifai, H. R. Morton, Algorithms for positive braids, Quart. J. Math. Oxford (2) 45 (1994), pp. 479-497.

\refstepcounter{num}
[\arabic{num}\label{processing}] D. Epstein, J. Cannon, D. Holt, S. Levy, M. Paterson, W. Thurston, Word Processing in Groups, Jones and Bartlett Publishers, Boston, MA, 1992.

\refstepcounter{num}
[\arabic{num}\label{franco}] N. Franco, J. Gonz\'alez-Meneses, Conjugacy problem for braid groups and Garside groups, Journal of Algebra 266 (2003), no. 1, pp. 112-132.

\refstepcounter{num}
[\arabic{num}\label{garside}] F. A. Garside, The braid group and other groups, Quart. J. Math. Oxford Ser. (2) 20 (1969), pp. 235-254.

\refstepcounter{num}
[\arabic{num}\label{gebhardt}] V. Gebhardt, A new approach to the conjugacy problem in Garside groups, J. Algebra 292 (2005), no. 1, pp. 282-302.

\refstepcounter{num}
[\arabic{num}\label{thurston}] W. P. Thurston, On the geometry and dynamics of diffeomorphisms of surfaces, Bull. AMS 19 (1988), no. 2, pp. 417-431.
\end{document}
Symbolic dynamics originated as a tool to investigate various natural and physical phenomena around us. The convenience of symbolic representation and the easier computability of such systems have attracted the attention of several researchers around the globe, and the topic has found applications in various branches of science and engineering. In particular, the area has found applications in data storage, data transmission and communication systems, to name a few \cite{bruce,shanon,lind1}. The structure and dynamics of a symbolic system can be used to investigate the dynamics of a general dynamical system. In fact, it is known that every discrete dynamical system can be embedded in a symbolic dynamical system (with an appropriate number of symbols) \cite{fu}. Consequently, it is sufficient to study shift spaces and their subsystems to investigate the dynamics of a general discrete dynamical system.
Let $A = \{a_i : i \in I\}$ be a finite set and let $d$ be a positive integer. Let the set $A$ be equipped with the discrete metric and let $A^{\mathbb{Z}^d}$, the collection of all functions $c : \mathbb{Z}^d \rightarrow A$, be equipped with the product topology. Any such function $c$ is called a configuration over $A$. A configuration $c$ is called periodic if there exists $u\in\mathbb{Z}^d~~(u \neq 0)$ such that $c(v+u)=c(v)~~\forall v\in\mathbb{Z}^d$. The set $\Gamma_c= \{w\in {\mathbb{Z}}^{d} : c(v+w)=c(v)~~\forall v\in\mathbb{Z}^d\}$ is called the lattice of periods of the configuration $c$. The function $\mathcal{D} : A^{\mathbb{Z}^d} \times A^{\mathbb{Z}^d} \rightarrow \mathbb{R}^+$ defined as $\mathcal{D} (x,y) = \frac{1}{n+1}$, where $n$ is the least non-negative integer such that $x \neq y$ on $R_n = [-n,n]^d$, is a metric on $A^{\mathbb{Z}^d}$ and generates the product topology. For any $a\in \mathbb{Z}^d$, the map $\sigma_a : A^{\mathbb{Z}^d} \rightarrow A^{\mathbb{Z}^d}$ defined as $(\sigma_a (x))(k)= x(k+a)$ is a $d$-dimensional shift and is a homeomorphism. For any $a,b\in \mathbb{Z}^d$, $\sigma_a \circ \sigma_b = \sigma_b \circ \sigma_a$, and hence $\mathbb{Z}^d$ acts on $A^{\mathbb{Z}^d}$ through commuting homeomorphisms. For any nonempty $S\subset \mathbb{Z}^d$, any element of $A^S$ is called a pattern over $S$. A pattern is said to be finite if it is defined over a finite subset of $\mathbb{Z}^d$. A pattern $q$ over $S$ is said to be an extension of a pattern $p$ over $T$ if $T\subset S$ and $q|_T=p$. The extension $q$ is said to be a proper extension if $T\cap Bd(S)=\phi$, where $Bd(S)$ denotes the boundary of $S$. It may be noted that any $k$-dimensional pattern can be visualized as an adjacent placement of some $(k{-}1)$-dimensional patterns. For $(k{-}1)$-dimensional patterns $B_1,B_2,\ldots,B_r$, let $B=[B_1 B_2 \ldots B_r]_i$ denote the $k$-dimensional pattern obtained by placing $B_1,B_2,\ldots,B_r$ adjacently in the $i$-th direction.
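As a concrete illustration of the metric $\mathcal{D}$ for $d=2$, the following Python sketch (our own function names and conventions; configurations are stored as dictionaries with a default symbol $0$) finds the least $n$ for which two configurations differ on $R_n$:

```python
def D(x, y, max_n=50):
    # x, y: configurations Z^2 -> symbols, given as dicts with default symbol 0.
    # Returns 1/(n+1) for the least n such that x and y differ on R_n = [-n, n]^2,
    # and 0.0 if no difference is found up to max_n (x == y for our finite data).
    for n in range(max_n + 1):
        if any(x.get((i, j), 0) != y.get((i, j), 0)
               for i in range(-n, n + 1) for j in range(-n, n + 1)):
            return 1.0 / (n + 1)
    return 0.0
```

For instance, two configurations that first differ at the site $(2,0)$ are at distance $1/3$, since $(2,0)$ first appears in the window $R_2$.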
We say that a pattern $C=[C_1 C_2\ldots C_r]_i$ overlaps progressively with $B=[B_1 B_2 \ldots B_r]_i$ in the $i$-th direction if $B_2B_3\ldots B_r=C_1C_2\ldots C_{r-1}$. Let $\mathcal{F}$ be a given set of finite patterns (possibly over different subsets of $\mathbb{Z}^d$) and let $X=\overline{\{x\in A^{\mathbb{Z}^d}: \text{no pattern from~~} \mathcal{F} \text{~~appears in~~} x \}}$. The set $X$ is the subshift of $A^{\mathbb{Z}^d}$ generated by the set of forbidden patterns $\mathcal{F}$. If the shift space $X$ can be generated by a finite set of finite patterns, we say that $X$ is a shift of finite type. We say that a pattern is allowed if it is not an extension of any forbidden pattern. We denote the shift space generated by the set of forbidden patterns $\mathcal{F}$ by $X_{\mathcal{F}}$. Two forbidden sets $\mathcal{F}_1$ and $\mathcal{F}_2$ are said to be equivalent if they generate the same shift space, i.e. $X_{\mathcal{F}_1}= X_{\mathcal{F}_2}$. Refer to \cite{bruce, lind1} for details.\\
Let $X$ be a two dimensional shift space over an alphabet $\mathcal{A}$ and let $\mathcal{B}_{(M,N)}(X)$ denote the collection of all $M \times N $ patterns allowed for the shift space $X$. Then $\beta_{(M,N)}: X \rightarrow (\mathcal{B}_{(M,N)}(X))^{\mathbb{Z}^2}$ defined as $(\beta_{(M,N)}(x))_{(i,j)}= x_{[i,i+M-1] \times [j,j+N-1]}$ is called the $(M,N)$-higher block code. It can be proved that $\beta_{(M,N)}(X)$ is a shift space (Proposition \ref{hbc}). Further, it may be noted that for any configuration $c$ in the shift space $X$, any rectangular patterns of size $M \times N$ appearing in $\beta_{(M,N)}(c)$ placed adjacently (in any direction) overlap progressively (in that direction). A two dimensional shift of finite type $X_{\mathcal{F}}$ is said to be an $(m,n)$-step shift if it can be described by a forbidden set consisting of rectangles of size $(m+1) \times (n+1)$. If the shift space can be described by a forbidden set consisting of blocks of size $1 \times (m+1)$ or $(m+1)\times 1$, then the shift space $X_{{\mathcal{F}}}$ is called an $m$-step shift. Analogously, for $P=(P_1,P_2,\ldots, P_k)\in\mathbb{N}^k$, let $\mathcal{B}_{P}(X)$ denote the collection of all $ P_1\times P_2\times \ldots \times P_k$ patterns allowed for a $k$-dimensional shift space $X$. Then $\beta_{P}: X \rightarrow (\mathcal{B}_{P}(X))^{\mathbb{Z}^k}$ defined as $(\beta_{P}(x))_{(i_1,i_2,\ldots,i_k)}= x_{[i_1,i_1+P_1-1] \times [i_2,i_2+P_2-1]\times\ldots \times [i_k,i_k+P_k-1]}$ is called the $(P_1,P_2,\ldots, P_k)$-higher block code (or $P$-higher block code). Once again, it can be proved that $\beta_{P}(X)$ is a shift space (Corollary \ref{cc}), and the results (observations) made for the two dimensional case extend analogously to a $k$-dimensional shift space. A shift space $X_{\mathcal{F}} $ is said to be aperiodic if it does not contain any periodic points.\\
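On a finite array the action of the $(M,N)$-higher block code, together with the progressive-overlap property of adjacent image symbols, can be sketched as follows (illustrative Python with our own helper names; boundary blocks are simply omitted in the finite setting):

```python
def higher_block(x, M, N):
    # x: finite 2-D array (list of rows); entry (i, j) of the image is the
    # M x N sub-block of x with top-left corner (i, j), stored as a tuple of rows
    R, C = len(x), len(x[0])
    block = lambda i, j: tuple(tuple(x[i + a][j + b] for b in range(N))
                               for a in range(M))
    return [[block(i, j) for j in range(C - N + 1)] for i in range(R - M + 1)]

def overlaps_progressively(b1, b2, direction):
    # adjacent image symbols share an M x (N-1) or (M-1) x N sub-block
    if direction == "horizontal":
        return tuple(r[1:] for r in b1) == tuple(r[:-1] for r in b2)
    return b1[1:] == b2[:-1]  # vertical
```

By construction, every pair of adjacent image symbols overlaps progressively, which is exactly the observation made above for $\beta_{(M,N)}(c)$.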
Let $G$ be a graph with a finite set of vertices $V$ and a finite set of edges $E$. It can be seen that the set of bi-infinite walks on a graph is a $1$-step shift of finite type. Also, for any given shift of finite type $X$, there exists a higher block shift (conjugate to $X$) which can be generated by a finite graph $G$. Consequently, every one dimensional shift of finite type can be visualized as a shift generated by some graph \cite{bruce,lind1}.\\
For multidimensional shifts of finite type, it is known that, given a set of forbidden patterns, the non-emptiness problem is undecidable \cite{ber}. In \cite{emma}, the authors show that the sets of periods of multidimensional shifts of finite type are exactly the sets of integers of the complexity class NE. They also give characterizations of general sofic and effective subshifts. In \cite{coven}, the authors prove that a multidimensional shift of finite type has a power that can be realized as the same power of a tiling system. They show that the set of entropies of tiling systems equals the set of entropies of shifts of finite type. It is known that multidimensional shifts of finite type with positive topological entropy cannot be minimal \cite{quas}. In fact, if $X$ is a subshift of finite type with positive topological entropy, then $X$ contains a subshift which is not of finite type, and hence contains infinitely many subshifts of finite type \cite{quas}. In \cite{hoch4}, Hochman proved that $h\geq 0$ is the entropy of a $\mathbb{Z}^d$ effective dynamical system if and only if it is the lim inf of a recursive sequence of rational numbers. For two dimensional shifts, Lightwood proved that strongly irreducible shifts of finite type have a dense set of periodic points \cite{sam}. In \cite{pd}, the authors characterized a multidimensional shift of finite type using an infinite matrix. In \cite{pd1}, the authors gave an algorithmic approach to the non-emptiness problem for multidimensional shift spaces. They give an algorithm to generate the elements of the shift space using finite matrices. In the process, they prove that the elements of a $d$-dimensional shift of finite type can be characterized by a sequence of finite matrices.\\
Although a lot of work on multidimensional shift spaces has been done, graph induced multidimensional shifts have not been investigated. If $\{G_1,G_2,\ldots,G_d\}$ is a set of $d$ graphs with a common set of vertices $V$, the collection naturally induces a $d$-dimensional shift of finite type (where the $i$-th graph determines the compatibility of the vertices in the $i$-th direction). In this paper, we investigate the relation between the structure of the generating graphs $G_i$ and the shift space generated. In particular, we answer some of the questions relating the structure of the underlying graphs with the non-emptiness problem for the shift space and the existence of periodic points. For example, can every shift of finite type $X$ be generated by a finite set of graphs? When does a given collection $\{G_1,G_2,\ldots,G_d\}$ of graphs generate a non-empty shift space? When does a multidimensional shift generated by $\{G_1,G_2,\ldots,G_d\}$ exhibit periodic points? Does existence of periodicity in one direction ensure periodicity in the other directions? We now give answers to some of these questions relating the multidimensional shift space and the generating set of graphs.
\section{Main Results}
\begin{Proposition} \label{odg}
For any two dimensional one step shift of finite type $X$, there exists a two dimensional graph $G$ such that $X=X_{G}$.
\end{Proposition}
\begin{proof}
Let $X$ be a two dimensional one step shift of finite type over the finite alphabet $\mathcal{A}$. As $X$ is one step, $X$ is generated by a forbidden set $\mathcal{F}$ such that any element of $\mathcal{F}$ is of the form $^{b}_{a}$ or $ab$ (where $a,b\in \mathcal{A}$). Define a graph $H$ (respectively $V$) with $\mathcal{A}$ as the set of vertices such that there exists a directed edge from vertex $a$ to vertex $b$ in $H$ (respectively $V$) if and only if $ab$ (respectively $^{b}_{a}$) does not belong to $\mathcal{F}$. Then, as $G=(H,V)$ is a two dimensional graph that captures horizontal and vertical compatibility of the elements of $\mathcal{A}$, $G$ generates any arbitrary element of $X$. Consequently, $X=X_G$ and the proof is complete.
\end{proof}
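The graph construction in the proof is mechanical and can be sketched in code. The following Python fragment is an illustrative sketch only (the function name and the representation of forbidden patterns as symbol pairs are our own); it builds the adjacency matrices of $H$ and $V$ from a one step forbidden set:

```python
import numpy as np

def one_step_graphs(alphabet, horiz_forbidden, vert_forbidden):
    """Adjacency matrices of the graphs H and V.
    H[a][b] = 1 iff the horizontal word ab is allowed;
    V[a][b] = 1 iff b may be placed directly above a."""
    n = len(alphabet)
    idx = {s: i for i, s in enumerate(alphabet)}
    H = np.ones((n, n), dtype=int)
    V = np.ones((n, n), dtype=int)
    for a, b in horiz_forbidden:
        H[idx[a], idx[b]] = 0
    for a, b in vert_forbidden:
        V[idx[a], idx[b]] = 0
    return H, V

# the two dimensional golden mean shift: the word 11 is forbidden
# both horizontally and vertically
H, V = one_step_graphs(['0', '1'], [('1', '1')], [('1', '1')])
```

For the two dimensional golden mean shift the construction returns the matrix $\left(\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right)$ in both directions.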
\begin{Remark}
The above result establishes that any two dimensional one step shift of finite type can be generated by a two dimensional graph. It may be noted that for a $k$-dimensional one step shift of finite type $X$, if $H_i$ is the graph that captures the compatibility of the symbols in the $i$-th direction, then similar arguments establish that $G=(H_1,H_2,\ldots,H_k)$ generates an arbitrary element of $X$ (and conversely). Consequently, the above result holds for any higher dimensional one step shift and we get the following corollary.
\end{Remark}
\begin{Cor}\label{kdg}
For any $k$-dimensional one step shift of finite type $X$, there exists a $k$- dimensional graph $G$ such that $X=X_{G}$.
\end{Cor}
\begin{proof}
The proof follows from discussions in Remark 1.
\end{proof}
\begin{Proposition} \label{hbc}
For any two dimensional shift space $X_{\mathcal{F}}$, $X^{(M,N)} $ is a shift space conjugate to $X_{\mathcal{F}}$.
\end{Proposition}
\begin{proof}
Let $X_{\mathcal{F}}$ be a shift space generated by the forbidden set $\mathcal{F}$ and let ${(M,N)} \in \mathbb{N}^{2} $. Let $\mathcal{F}^*$ be the set obtained by replacing any forbidden pattern $P$ of size less than $M\times N$ by all its $M\times N$ extensions. Then, $X_{\mathcal{F}}= X_{\mathcal{F}^*}$ and hence we obtain a modified forbidden set generating $X_{\mathcal{F}}$ such that all the forbidden patterns in the generating forbidden set are at least of size $M\times N$. Further, as all the forbidden patterns can be extended to rectangles of uniform size to generate the same space, we may assume all the elements of the forbidden set to be rectangles of size $R \times S$ (for some integers $R,S\in\mathbb{N}$).
For any $P\in \mathcal{F}$, define $P^{(M,N)}$ to be a pattern of size $(R-M+1)\times (S-N+1)$ over $\mathcal{B}_{(M,N)}(X)$ defined as $P^{(M,N)}_{[(k,l)]}= P_{[k,k+M-1]\times [l,l+N-1]}$, i.e. the $M\times N$ rectangle with bottom left corner at $(k,l)$ is placed at $(k,l)$. Let $\mathcal{F}_{1}=\{P^{(M,N)}: P\in \mathcal{F}\}$. Further, let $\mathcal{F}_{2}= \{P_{1}P_{2}:P_{1},P_{2} \in \mathcal{B}_{(M,N)}(X) \ \text{such that} \ P_{1}\ \text{and}\ P_{2} \ \text{do not overlap progressively horizontally}\}$ and let $\mathcal{F}_{3}= \{^{P_{2}}_{P_{1}} \ : \ P_{1},P_{2}\in \mathcal{B}_{(M,N)}(X) \ \text{such that} \ P_{1}\ \text{and} \ P_{2} \ \text{do not overlap progressively vertically} \}$.
Note that as elements of $\mathcal{F}$ are forbidden for $X$, elements of $\mathcal{F}_{1}$ are forbidden for $X^{(M,N)}$. Also, as any two blocks placed adjacently for $X^{(M,N)}$ must overlap progressively, $\mathcal{F}_{2}$ and $\mathcal{F}_{3}$ are also forbidden for $X^{(M,N)}$ and thus $X^{(M,N)} \subseteq \cap_{i=1}^3 X_{\mathcal{F}_{i}}$, i.e. $ X^{(M,N)} \subseteq X_{{\mathcal{F}_{1}} \cup {\mathcal{F}_{2}} \cup {\mathcal{F}_{3}} } $. Conversely, for any element $x$ in $X_{{\mathcal{F}_{1}}\cup {\mathcal{F}_{2}}\cup {\mathcal{F}_{3}} }$, as adjacent placement of blocks not overlapping progressively is forbidden, any two adjacent blocks overlap progressively. Further, as elements of $\mathcal{F}_1$ are forbidden for $X_{{\mathcal{F}_{1}} \cup {\mathcal{F}_{2}} \cup {\mathcal{F}_{3}}}$, any block forbidden for $X$ does not appear in $x$. Consequently, $x\in X^{(M,N)} $ and the proof of $X^{(M,N)}= X_{{\mathcal{F}_{1}}\cup {\mathcal{F}_{2}}\cup {\mathcal{F}_{3}}}$ is complete.
Further, for any $x\in X$, as $((\beta_{(M,N)})(x))_{(i,j)}$ is the $M\times N$ pattern of $x$ with bottom left corner at $x_{(i,j)}$, the map $\beta_{(M,N)}$ defines a conjugacy between the shift space $X$ and $X^{(M,N)}$.
\end{proof}
\begin{Remark}\label{kde}
The above result establishes that any two dimensional shift is conjugate to its higher block code $X^{(M,N)}$. The proof uses the fact that slicing any given configuration into patterns of size $M\times N$ at each $(r,s)\in \mathbb{Z}^2$ (and placing the slice at each $(r,s)\in \mathbb{Z}^2$) yields an element of $(\mathcal{B}_{(M,N)}(X))^{\mathbb{Z}^2}$. The correspondence is natural and indeed is a conjugacy between $X$ and $X^{(M,N)}$. Note that if $X$ is a $k$-dimensional shift space and $P\in\mathbb{N}^k$, then slicing any configuration in $X$ into patterns of size $P$ at each point in $\mathbb{Z}^k$ (and placing the slice at each point in $\mathbb{Z}^k$) extends the above result to a $k$-dimensional shift space. Thus we get the following corollary.
\end{Remark}
\begin{Cor} \label{cc}
For any $k$-dimensional shift space $X_{\mathcal{F}}$ and $P\in\mathbb{N}^k$, $X^{P} $ is a shift space conjugate to $X_{\mathcal{F}}$.
\end{Cor}
\begin{proof}
The proof follows from discussions in Remark \ref{kde}.
\end{proof}
\begin{Proposition}
For any two dimensional shift space of finite type $X_{\mathcal{F}}$, there exists a graph $G$ such that $X_{\mathcal{F}} = X_{G}$.
\end{Proposition}
\begin{proof}
Let $X_{\mathcal{F}}$ be a shift space of finite type generated by the forbidden set $\mathcal{F}$. If all the elements of $\mathcal{F}$ are of the type $\{\alpha \beta\} $ or $\{^{\alpha}_{\beta}\}$, then $X_{\mathcal{F}}$ is a one step shift of finite type. If not, let all the elements of $\mathcal{F}$ be rectangles of size $M\times N$.
By the previous proposition, since $ X^{(M,N)}$ can be viewed as a one step shift of finite type over the alphabet $\mathcal{B}_{(M,N)}(X)$, the shift space $X_{\mathcal{F}}$ can be expressed as a one step shift of finite type. For $\mathcal{V}= \mathcal{B}_{(M,N)}(X)$, define the graph $H_1=(\mathcal{V},E_1)$ as a graph with set of vertices $\mathcal{V}$ where any two elements of $\mathcal{V}$ are connected if they overlap progressively horizontally. Let $H_2=(\mathcal{V},E_2)$ be the graph with $\mathcal{V}$ as the set of vertices where any two elements of $\mathcal{V}$ are connected if they overlap progressively vertically. Then $G=(H_1,H_2)$ generates $X^{(M,N)}$ and the proof is complete.
\end{proof}
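The vertex set and edges of the graphs $H_1$ and $H_2$ in the proof can be enumerated mechanically. In the following Python sketch (illustrative only; a block is a numpy array whose rows index the vertical direction, with row $0$ at the bottom), two blocks are joined by an edge exactly when they overlap progressively:

```python
import numpy as np

def overlaps_h(P, Q):
    """Q continues P one step to the right."""
    return bool(np.array_equal(P[:, 1:], Q[:, :-1]))

def overlaps_v(P, Q):
    """Q continues P one step upward."""
    return bool(np.array_equal(P[1:, :], Q[:-1, :]))

def block_graphs(blocks):
    """Adjacency matrices of H_1 (horizontal) and H_2 (vertical)
    on the vertex set of allowed blocks."""
    n = len(blocks)
    H1 = np.zeros((n, n), dtype=int)
    H2 = np.zeros((n, n), dtype=int)
    for i, P in enumerate(blocks):
        for j, Q in enumerate(blocks):
            H1[i, j] = overlaps_h(P, Q)
            H2[i, j] = overlaps_v(P, Q)
    return H1, H2

# the three allowed 1 x 2 blocks of the golden mean shift
blocks = [np.array([[0, 0]]), np.array([[0, 1]]), np.array([[1, 0]])]
H1, H2 = block_graphs(blocks)
```

For blocks of height one the vertical overlap condition is vacuous, so $H_2$ here is the full graph.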
\begin{Remark} \label{kdc}
The above result establishes that any two dimensional shift of finite type can be generated by a two dimensional graph. The proof uses the fact that, as any shift of finite type is conjugate to $X^{(M,N)}$, the shift space can be visualized as a one step shift of finite type. Further, as any one step shift of finite type can be generated through a graph, any two dimensional shift of finite type can be generated by some graph $G=(H,V)$. As any $k$-dimensional one step shift of finite type can be generated through a graph (Corollary \ref{kdg}), any $k$-dimensional shift of finite type can be generated by a $k$-dimensional graph. Consequently, an analogous extension of the above result is true and we get the following corollary.
\end{Remark}
\begin{Cor}
Every $k$-dimensional shift of finite type $X_{\mathcal{F}}$ can be generated by some $k$-dimensional graph $G$.
\end{Cor}
\begin{proof}
The proof follows from discussions in Remark \ref{kdc}.
\end{proof}
\begin{ex}
Let $X$ be a two dimensional shift space with alphabet $\{e,f,g\}$ and forbidden pattern set $\mathcal{F}= \{ff,gg,fe,eg, \ ^{f}_{f}, \ ^{e}_{e}, \ ^{g}_{g}, \ ^{e}_{f} , \ ^{g}_{e} \}$. Then, the graph $G$ for this shift space is given by Figure 1.
\begin{figure}[h]
\includegraphics[height=5.0cm, width=10.0cm]{1.png}
\caption{}
\end{figure}
Then, as there exist $2\times 2$ patterns whose infinite repetition (in both directions) tiles the plane in an allowed manner, the shift space is non-empty and exhibits periodic points. Further, as an arbitrarily large central block of a given configuration can be infinitely repeated to obtain an element of $X$, the shift space exhibits a dense set of periodic points.
\end{ex}
\begin{ex}
Let $X$ be the two dimensional Golden Mean shift space over the alphabet $\{0,1\}$ with forbidden pattern set $\mathcal{F}= \{11, \ ^{1}_{1} \}$. Then, $X= X_{G}$, where the graph $G$ is given by Figure 2.
\begin{figure}[h]
\includegraphics[height=5.0cm, width=10.0cm]{GMS.png}
\caption{}
\end{figure}
Then, the appearance of two consecutive $1$'s is forbidden in any direction. As the configuration comprising all $0$'s is a valid element of $X$, the shift space $X$ is indeed non-empty. Note that any allowed $2\times 2$ pattern can be extended to a valid element of the shift space $X$. Once again, as arbitrarily large central blocks of a given configuration can be infinitely repeated to obtain an element of $X$, the shift space exhibits a dense set of periodic points.
\end{ex}
\begin{Proposition}
For any one step $2$-dimensional shift of finite type $X$, $X$ has a horizontally periodic point if and only if $X$ has an $(m,n)$ periodic point (for some $m,n\in \mathbb{Z}\setminus \{0\}$).
\end{Proposition}
\begin{proof}
Let $X$ be a one step shift of finite type and let $x\in X$ be an $(m,0)$ periodic point. Then, note that $x$ is an infinite horizontal repetition of an infinite vertical strip of width $m$ (say $\mathbb{S}$). Further, as $\mathbb{S}$ can be realized as a vertical arrangement of one dimensional strips of length $m$, there exists a $1\times m$ block $a_1 a_2\ldots a_m$ which appears twice in $\mathbb{S}$ (say at heights $u$ and $v$). Consequently, infinite repetition of the block $x_{[0,m-1]\times [u,v-1]}$ is an element of $X$ and is periodic of period $(m,v-u)$. \\
Conversely, if $X$ has an $(m,n)$ periodic point $x$, then there exists an infinite (horizontal) strip $\mathbb{S}$ such that $x$ is a vertical arrangement of shifts of $\mathbb{S}$ (where $\sigma^{(-m,0)}(\mathbb{S}), \mathbb{S}, \sigma^{(m,0)}(\mathbb{S}), \sigma^{(2m,0)}(\mathbb{S}), \ldots$ are placed vertically one over the other to obtain $x$). As the blocks of size $m\times n$ are finite in number, there exists a block $B_0$ of size $m\times n$ that appears in $x$ at $(u,0)$ and $(v,0)$. Consequently, if $B_0 B_1\ldots B_k B_0$ is a block appearing in $X$, then the $k \times k$ rectangular arrangement of $B_0,B_1,\ldots,B_k$ where $B_{(k-j+i+1) \text{mod}(k+1)}$ is placed at the $(i,j)$-th position is an allowed rectangular block. Further, as infinite repetition of the block generated yields an allowed configuration of $X$, the shift space exhibits a horizontally periodic point and the proof is complete.
\end{proof}
\begin{Remark}\label{pp}
The above proof establishes equivalence of existence of periodic points with existence of horizontally periodic points for a shift of finite type. The proof uses the fact that any $(m,n)$ periodic point (with $m,n\neq 0$) can be realized as a vertical arrangement of shifts of an infinite horizontal strip of height $n$. The periodic point generated is also vertically periodic and hence the proof establishes equivalence of existence of periodic points with existence of vertically periodic points for a shift of finite type. Thus we get the following corollary.
\end{Remark}
\begin{Cor}
For any one step $2$-dimensional shift of finite type $X$, $X$ has a vertically periodic point if and only if $X$ has an $(m,n)$ periodic point (for some $m,n\in \mathbb{Z}\setminus \{0\}$).
\end{Cor}
\begin{proof}
The proof follows from discussions in Remark \ref{pp}.
\end{proof}
\begin{Proposition}
A two dimensional shift space $X_{G}$ is finite iff it can be generated by a pair of permutation matrices.
\end{Proposition}
\begin{proof}
Firstly note that any finite shift space is a union of finitely many periodic points (with finite orbits). Also, if $X$ itself is a single periodic orbit then $X$ can be visualized as an infinite repetition (both horizontal and vertical) of an $m\times n$ rectangle. Then, if $H$ and $V$ are indexed by the allowed rectangles of size $m\times n$ capturing horizontal and vertical compatibility of the indices, then $H$ and $V$ are permutation matrices and the graph $G=(H,V)$ generates the shift space $X$. Finally, if $X$ is a union of periodic orbits, a similar argument applied to each periodic orbit (and collating the sets of indices to generate $H$ and $V$) generates a pair of permutation matrices that generate $X$ and the proof of the forward part is complete.
Conversely, if the generating matrices are permutation matrices then fixing the entry at the origin fixes the entries in the immediate neighborhood and hence fixes all the entries at other coordinates. Consequently, the shift space $X$ is finite and the proof is complete.
\end{proof}
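Whether a given generating pair consists of permutation matrices can be tested entry-wise. The following Python sketch (illustrative; the function name is our own) checks that a $0$-$1$ matrix has exactly one nonzero entry in every row and every column:

```python
import numpy as np

def is_permutation(M):
    """True iff M is a square 0-1 matrix with exactly one
    nonzero entry in every row and every column."""
    M = np.asarray(M)
    return (M.ndim == 2 and M.shape[0] == M.shape[1]
            and set(M.ravel().tolist()) <= {0, 1}
            and bool((M.sum(axis=0) == 1).all())
            and bool((M.sum(axis=1) == 1).all()))

# the adjacency matrices associated with Figure 3: the shift they
# generate is finite, yet neither matrix is a permutation matrix
H = [[1, 1, 0], [0, 0, 1], [1, 0, 0]]
V = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
```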
\begin{Remark}
The above result establishes that a two dimensional shift space is finite if and only if it can be generated by a pair of permutation matrices. However, finiteness of the shift space $X_{G}$ does not force the generating matrices $H$ and $V$ to be permutation matrices. To establish our claim, let $X$ be the shift space generated by the graph shown in Figure $3$. Then, it can be seen that although the shift generated by the graph is finite, the associated adjacency matrices $H$ and $V$ are not permutation matrices and hence the claim is indeed true.
\begin{figure}[h]
\includegraphics[height=6.0cm, width=12.0cm]{exxx.png}
\caption{}
\end{figure}
$$\textit{H}= \bordermatrix{ & 0 & 1 & 2 \cr
0 & 1 & 1 & 0 \cr
1 & 0 & 0 & 1 \cr
2 & 1 & 0 & 0 \cr
}
\ \ \ \ \ \ \ \ \
\textit{V}= \bordermatrix{ & 0 & 1 & 2 \cr
0 & 0 & 1 & 1 \cr
1 & 1 & 0 & 0 \cr
2 & 1 & 0 & 0 \cr
}$$
But $X_{G}$ is finite as it is the orbit of a single periodic point (given below):
$$ {\begin{array}{ccccccccccccccccccccccc}
\ldots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ldots \\
\ldots & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & \ldots \\
\ldots & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & \ldots \\
\ldots & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & \ldots \\
\ldots & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & \ldots \\
\ldots & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & 0 & 0 & \ldots \\
\ldots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots &\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ldots \\
\end{array} } $$
Consequently, finiteness of the shift space $X$ does not guarantee the generating matrices to be permutation matrices.\\
\end{Remark}
We now discuss non-emptiness of shift spaces using the adjacency matrices $H$ and $V$. Note that while $(HV)_{ij}$ computes the number of ways the pattern $^{\ \ \ {j}}_{ \ {i} \ \ \ }$ can be extended to a triangular pattern of the form $^{\ \ \ {j}}_{ \ {i} \ {k} } \in \mathcal{B}(X_{G})$, $(VH)_{ij}$ computes the number of ways the pattern $^{\ \ \ {j}}_{ \ {i} \ \ \ }$ can be extended to a triangular pattern of the form $^{\ {l} \ \ {j}}_{ \ {i} \ } \in \mathcal{B}(X_{G})$. As removing the vertices with no incoming (or outgoing) edge (either horizontally or vertically) does not affect the shift space generated, we assume that the generating matrices do not contain any zero row or zero column.
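This interpretation reduces the comparison of $HV$ and $VH$ to an elementary matrix computation. A Python sketch (illustrative only; the function name is our own) of the check $(HV)_{ij} \neq 0 \Leftrightarrow (VH)_{ij} \neq 0$ for all $i,j$:

```python
import numpy as np

def nonzero_patterns_agree(H, V):
    """True iff (HV)_ij != 0  <=>  (VH)_ij != 0 for all i, j."""
    H, V = np.asarray(H), np.asarray(V)
    return bool(np.array_equal(H @ V != 0, V @ H != 0))

# golden mean shift: H = V, so both products coincide and the condition holds
H = V = np.array([[1, 1], [1, 0]])
```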
\begin{Proposition}
Let $G$ be a graph with associated adjacency matrices $H$ and $V$. If $H$ and $V$ are irreducible permutation matrices, then $HV = VH$ iff $X_{G} \neq \phi$.
\end{Proposition}
\begin{proof}
Let $X_G$ be the shift space generated by $G=(H,V)$. If $H$ and $V$ are permutation matrices then fixing an entry at the origin uniquely determines the immediate neighbors of any symbol (in both horizontal and vertical directions). Further, as $HV$ and $VH$ are permutation matrices (characterizing blocks of the form ${\begin{array}{cc} & b \\ a & *\\ \end{array} }$ and ${\begin{array}{cc} * & b \\ a & \\ \end{array} }$ respectively), any block ${\begin{array}{cc} & b \\ a & *\\ \end{array} }$ can be extended to a $2\times 2$ square if and only if $HV=VH$. As the possible choices for the immediate neighbors are unique, $X_G$ is non-empty if and only if $HV=VH$ and the proof is complete.
\end{proof}
\begin{Remark}\label{ne}
It may be noted that for any shift generated by permutation matrices, as the immediate neighborhood of a symbol is uniquely determined, the shift space generated by permutation matrices is always finite (possibly empty). The above result establishes that a two dimensional shift space generated by a pair of irreducible permutation matrices is non-empty if and only if the generating matrices commute with each other. The proof follows from the fact that if $HV=VH$, any pattern of the form ${\begin{array}{cc} & c \\ a & b\\ \end{array} }$ can be extended to a $2\times 2$ square and hence the shift space generated is non-empty (in fact, it is a finite shift space comprising a single periodic orbit). Note that if $(HV)_{ij} \neq 0 \Leftrightarrow (VH)_{ij} \neq 0~~\forall i,j$ then the shift space is non-empty and hence a more general form of the above result is true. Further, note that if $(HV)_{ij}\neq 0\implies (VH)_{ij}\neq 0$ (or $(VH)_{ij}\neq 0\implies (HV)_{ij}\neq 0$), the shift space generated does not contain any forbidden pattern of the form ${\begin{array}{cc} & c \\ a & b\\ \end{array} }$ (or ${\begin{array}{cc} a & b \\ c & \\ \end{array} }$) and hence the shift space generated is once again non-empty. Thus we get the following corollaries.
\end{Remark}
\begin{Cor}
Let $X_{G}$ be a shift space generated by $G=(H,V)$. If $(HV)_{ij} \neq 0 \iff (VH)_{ij} \neq 0~~\forall i,j$ then $X_{G} \neq \phi$.
\end{Cor}
\begin{proof}
The proof follows from the discussions in Remark \ref{ne}.
\end{proof}
\begin{Cor}
Let $X_{G}$ be a shift space generated by $G=(H,V)$. If $(HV)_{ij} \neq 0 \Rightarrow (VH)_{ij} \neq 0~~\forall i,j$, then $X_{G} \neq \phi$.
\end{Cor}
\begin{proof}
The proof follows from the fact that if $(HV)_{ij} \neq 0 \Rightarrow (VH)_{ij} \neq 0~~\forall i,j$, the shift space does not contain a forbidden pattern of the form ${\begin{array}{cc} & c \\ a & b\\ \end{array} }$. Consequently, any arbitrarily large $1\times r$ pattern can be extended to an $r\times r$ square. As the shift space contains valid arbitrarily large squares, the shift space is non-empty and the proof is complete.
\end{proof}
\begin{Remark}
The above proposition proves that if $H$ and $V$ are irreducible permutation matrices, the shift space is non-empty if $HV=VH$. Further, as $HV=VH$ ensures that every pattern of the form ${\begin{array}{cc} & c \\ a & b\\ \end{array} }$ (or ${\begin{array}{cc} a & b \\ c & \\ \end{array} }$) is extendable to a $2\times 2$ square, the shift space is non-empty if $HV=VH$. However, if $H$ and $V$ are not irreducible, any element of the shift space can possibly be generated by sub-matrices of $H$ and $V$ and hence the shift space can be non-empty even when $HV=VH$ does not hold good. We now give an example in support of our claim.
\end{Remark}
\begin{ex}
Let $X$ be a shift space generated by the graph in Figure 4.
\begin{figure}[h]
\includegraphics[height=5.0cm, width=10.0cm]{2.png}
\caption{}
\end{figure}
Then,
$$\textit{H}= \bordermatrix{ & 0 & 1 & 2 & 3 & 4 & 5 \cr
0 & 0 & 0 & 1 & 0 & 0 & 0\cr
1 & 1 & 0 & 0 & 0 & 0 & 0 \cr
2 & 0 & 1 & 0 & 0 & 0 & 0\cr
3 & 0 & 0 & 0 & 0 & 0 & 1\cr
4 & 0 & 0 & 0 & 1 & 0 & 0\cr
5 & 0 & 0 & 0 & 0 & 1 & 0\cr
}
\ \ \ \ \ \ \ \ \
\textit{V}= \bordermatrix{ & 0 & 1 & 2 & 3 & 4 & 5 \cr
0 & 0 & 1 & 0 & 0 & 0 & 0\cr
1 & 0 & 0 & 1 & 0 & 0 & 0\cr
2 & 1 & 0 & 0 & 0 & 0 & 0\cr
3 & 0 & 0 & 0 & 0 & 0 & 1\cr
4 & 0 & 0 & 0 & 0 & 1 & 0\cr
5 & 0 & 0 & 0 & 1 & 0 & 0\cr
}$$
Note that $G$ can be written as a union of disjoint graphs $G_1$ and $G_2$ indexed by the symbols $0,1,2$ and $3,4,5$ respectively. Further, while the matrices capturing horizontal and vertical compatibility of $G_1$ commute, the matrices capturing horizontal and vertical compatibility of $G_2$ do not commute and hence $X_{G_1}\neq \phi$ but $X_{G_2}=\phi$. Consequently, $X_G=X_{G_1}$ and the shift space is indeed non-empty. Thus, a non-empty shift space may be generated by a non-commuting pair of permutation matrices.\\
\end{ex}
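The claims in this example can be verified numerically. In the following Python sketch (illustrative only), the two diagonal $3\times 3$ blocks of $H$ and $V$ are the matrices of the subgraphs $G_1$ and $G_2$:

```python
import numpy as np

H = np.array([[0, 0, 1, 0, 0, 0],
              [1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 1, 0]])
V = np.array([[0, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0],
              [1, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 1],
              [0, 0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0, 0]])

H1, V1 = H[:3, :3], V[:3, :3]   # subgraph G_1 on the symbols 0, 1, 2
H2, V2 = H[3:, 3:], V[3:, 3:]   # subgraph G_2 on the symbols 3, 4, 5
```

Here $H_1V_1=V_1H_1$ while $H_2V_2\neq V_2H_2$, so the full pair does not commute although the shift it generates is non-empty.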
\begin{Remark}
The above results investigate the non-emptiness of the shift space using the matrices $HV$ and $VH$. However, as $HV^T$ and $V^TH$ characterize allowed patterns of the form ${\begin{array}{cc} a & b \\ & c\\ \end{array} }$ and ${\begin{array}{cc} a & \\ b & c \\ \end{array} }$ respectively, the non-emptiness problem and the existence of periodic points can also be investigated using the matrices $HV^T$ and $V^TH$. It is worth mentioning that the two conditions are indeed independent and hence can be used independently to investigate the shift space under discussion. We now give an example in support of our claim.
\end{Remark}
\begin{ex}
Let $X$ be the shift space arising from the graph in Figure 5 over the symbol set $ \{1,2,3 \}$.
\begin{figure}[h]
\includegraphics[height=5.0cm, width=10.0cm]{ex1.png}
\caption{}
\end{figure}
Then, the generating matrices corresponding to the given graph are:\\
$$\textit{H}= \bordermatrix{ & 1 & 2 & 3 \cr
1 & 0 & 1 & 0 \cr
2 & 0 & 0 & 1 \cr
3 & 1 & 1 & 0 \cr
}
\ \ \ \ \ \ \ \ \
\textit{V}= \bordermatrix{ & 1 & 2 & 3 \cr
1 & 0 & 0 & 1 \cr
2 & 1 & 0 & 1 \cr
3 & 0 & 1 & 0 \cr
}$$
Then,
$$\textit{HV}= \bordermatrix{ & 1 & 2 & 3 \cr
1 & 1 & 0 & 1 \cr
2 & 0 & 1 & 0 \cr
3 & 1 & 0 & 2 \cr
}
\ \ \ \ \ \ \ \ \
\textit{VH}= \bordermatrix{ & 1 & 2 & 3 \cr
1 & 1 & 1 & 0 \cr
2 & 1 & 2 & 0 \cr
3 & 0 & 0 & 1 \cr
}$$
For the above example, one can find indices $i,j$ such that $(HV)_{ij} \neq 0$ but $(VH)_{ij} =0 $ (and indices $k,l$ such that $(VH)_{kl} \neq 0 $ but $(HV)_{kl} =0$). Consequently, the condition $(HV)_{ij}=0 $ iff $ (VH)_{ij}=0$ does not hold good and the derived results cannot be used to investigate the non-emptiness of the shift space. However, as $H=V^{T}$, we have $HV^{T}=V^{T}H$ and the shift space is indeed non-empty (and possesses periodic points).
\end{ex}
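The independence of the two criteria in this example can be confirmed directly (Python sketch, illustrative only; the variable names are our own):

```python
import numpy as np

H = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 0]])
V = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 0]])

# the HV / VH criterion fails for this pair ...
hv_vh_agree = bool(np.array_equal(H @ V != 0, V @ H != 0))
# ... while the HV^T / V^T H criterion holds, since H = V^T
hvt_vth_agree = bool(np.array_equal(H @ V.T != 0, V.T @ H != 0))
```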
\begin{Proposition}
Let $X_{G}$ be a shift space generated by $G=(H,V)$. If $(HV)_{ij} \neq 0 \iff (VH)_{ij} \neq 0~~\forall i,j$ then $X_{G}$ possesses periodic points (of arbitrarily large periods).
\end{Proposition}
\begin{proof}
Let $X_{G}$ be a shift space generated by $G=(H,V)$ and let $m\in \mathbb{N}$. Let $u$ be a block of size $1\times m$. As $(HV)_{ij} \neq 0 \iff (VH)_{ij} \neq 0~~\forall i,j$, $u$ can be extended to a pattern of size $k\times m$ (for any $k\in \mathbb{N}$). Without loss of generality, let $u$ be extended to a rectangle $v$ of size $s\times m$ such that $v_{00}=v_{ms}$. As $v_{00}=v_{ms}$, the block $v$ can be further extended to the block ${\begin{array}{cc} & v \\ v & \\ \end{array} }$ (along the line $sx-my=0$) to obtain a valid pattern of $X$. Further, as $(HV)_{ij} \neq 0 \iff (VH)_{ij} \neq 0~~\forall i,j$, the pattern can be extended to a valid $2m\times 2s$ pattern for the shift space. Finally, note that infinite such repetition of $v$ (along the line $sx-my=0$), extending the pattern with the same choices (as in the previous step), yields a valid periodic point for the shift space. As the proof holds for any $m\in \mathbb{N}$, the shift space contains periodic points of arbitrarily large periods and the proof is complete.
\end{proof}
\begin{Remark}\label{hvp}
The above result establishes the existence of periodic points under the condition $(HV)_{ij} \neq 0 \iff (VH)_{ij} \neq 0~~\forall i,j$. The proof uses the condition to extend the pattern of the form ${\begin{array}{cc} & v \\ v & \\ \end{array} }$ to a valid $2m\times 2s$ pattern. As such a repetition can be made infinitely often, filling the choices in a unique manner at each step yields a periodic point for the shift space. Note that as such an extension is possible under $(HV)_{ij} \neq 0 \implies (VH)_{ij} \neq 0~~\forall i,j$, the result holds good under a weaker condition. Further, as similar arguments establish the result under the condition $(VH)_{ij} \neq 0 \implies (HV)_{ij} \neq 0~~\forall i,j$, we get the following corollary.
\end{Remark}
\begin{Cor}
Let $X_{G}$ be a shift space generated by $G=(H,V)$. If $(HV)_{ij} \neq 0 \implies (VH)_{ij} \neq 0~~\forall i,j$ (or $(VH)_{ij} \neq 0 \implies (HV)_{ij} \neq 0~~\forall i,j$) then $X_{G}$ possesses periodic points (of arbitrarily large periods).
\end{Cor}
\begin{proof}
The proof follows from discussions in Remark \ref{hvp}.
\end{proof}
\vskip 0.5cm
Let $X$ be a shift space generated by a graph $G$. It may be noted that if $(HV)_{ij}=0$ then any block of the form ${\begin{array}{cc} & j \\ i & *\\ \end{array} }$ is forbidden for the shift space $X$. Consequently, the set $\{(i,j): (HV)_{ij}=0 \text{~but~} (VH)_{ij}\neq 0\}$ characterizes all patterns of the form ${\begin{array}{cc} * & j \\ i & \\ \end{array} }$ which cannot be extended to a $2\times 2$ square. Similarly, the set $\{(i,j): (VH)_{ij}=0 \text{~but~} (HV)_{ij}\neq 0\}$ characterizes all patterns of the form ${\begin{array}{cc} & j \\ i & *\\ \end{array} }$ which cannot be extended to a $2\times 2$ square. As such patterns do not contribute towards the generation of elements of $X$, discarding them reduces the complexity of generating the elements of the shift space. Thus let us set $(HV)_{ij}=0$ if $(VH)_{ij}=0$ (and conversely). \\
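The update just described amounts to intersecting the nonzero patterns of the two products. A Python sketch (illustrative; the function name and sample matrices are our own):

```python
import numpy as np

def prune(HV, VH):
    """Zero out the entries (i, j) at which the triangular patterns
    counted by HV and VH cannot both be completed to a 2 x 2 square."""
    HV, VH = np.asarray(HV), np.asarray(VH)
    mask = (HV != 0) & (VH != 0)
    return HV * mask, VH * mask

# an illustrative pair of products with mismatched zero patterns
HV = [[0, 0, 1], [1, 1, 0], [1, 1, 1]]
VH = [[0, 1, 1], [1, 1, 0], [0, 1, 1]]
HV2, VH2 = prune(HV, VH)
```

After pruning, the two matrices have identical zero patterns.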
Let $A_{1} = \{ ^{\ \ \ {c}}_{ \ {a} \ {b} } \ : \ \exists~ d \in \mathcal{V}(G) \ \text{such that} \ ^{d \ \ c}_ { a \ \ b} \ \in \mathcal{B}({X}_{G}) \}$ and $A_{2} = \{ {^{{y} \ {z}}_{x} } \ : \ \exists~ w \in \mathcal{V}(G) \ \text{such that} \ ^{y \ \ z}_ { x \ \ w} \ \in \mathcal{B}({X}_{G}) \}$. Let $M$ and $N$ be matrices indexed by the elements of $A_1$ and $A_2$ in the following manner:\\
For $I \ = \ ^{\ \ \ \ {a_{3}}}_{ \ {a_{1}} \ {a_{2}} }~~$, $~~J = \ ^{\ \ \ \ {a_{5 }}}_{ \ {a_{3}} \ {a_{4}} }, ~~R = {^{{b_{2}} \ {b_{3}}}_{b_{1}}} $ and $~~S = {^{{b_{4}} \ {b_{5}}}_{b_{3}}}$\\
$M_{IJ}=
\begin{cases}
1, & if \ \ {^{{a_{3}} \ {a_{4}}}_{a_{2}} } \in {A}_{2} \\
0, & otherwise
\end{cases}
~~$
and
$~~N_{RS}=
\begin{cases}
1, & if \ \ ^{\ \ \ \ {b_{4}}}_{ \ {b_{2}} \ {b_{3}} } \in {A}_{1} \\
0, & otherwise
\end{cases}
$
We identify the pair of indices $I= ^{\ \ \ \ {a_{3}}}_{ \ {a_{1}} \ {a_{2}} }$ and $J= {^{{a_{4}} \ {a_{3}}}_{a_{1}}}$ as an E-pair. We now investigate the non-emptiness of the shift space using the notion of an E-pair.
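The sets $A_1$ and $A_2$ can be enumerated from the allowed $2\times 2$ squares of the shift. A Python sketch (illustrative only; symbols are integer indices and a square is recorded as the tuple (bottom-left, bottom-right, top-right, top-left)):

```python
from itertools import product

def e_pair_sets(H, V):
    """A1: L-shaped patterns (a bottom-left, b bottom-right, c top-right);
    A2: L-shaped patterns (a bottom-left, d top-left, c top-right);
    each extendable to an allowed 2 x 2 square."""
    n = len(H)
    squares = [(a, b, c, d)
               for a, b, c, d in product(range(n), repeat=4)
               if H[a][b] and H[d][c] and V[a][d] and V[b][c]]
    A1 = {(a, b, c) for a, b, c, d in squares}
    A2 = {(a, d, c) for a, b, c, d in squares}
    return A1, A2

# the golden mean shift has exactly seven allowed 2 x 2 squares,
# projecting onto five patterns in each of A1 and A2
A1, A2 = e_pair_sets([[1, 1], [1, 0]], [[1, 1], [1, 0]])
```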
\begin{Proposition}
Let $X$ be a two dimensional shift of finite type and let the sequence spaces generated by $M$ and $N$ be non-empty. If for every $M_{ij} \neq 0$ and for every E-pair $i_{1}$ of $i$, $\exists$ an E-pair $j_{1}$ of $j$ such that $N_{i_{1}j_{1}} \neq 0$, then $X_{G} \neq \phi $.
\end{Proposition}
\begin{proof}
Let $X$ be a shift of finite type such that the sequence spaces generated by $M$ and $N$ are non-empty. Let $M_{ij} \neq 0$ and let $i_1$ be an E-pair of $i$. If there exists an E-pair $j_{1}$ of $j$ such that $N_{i_{1}j_{1}} \neq 0$, then the pattern $^{\ \ \ \ {a_{3}} \ ^{ \ a_{5} }_{ \ a_{4}}}_{ \ {a_{1}} \ {a_{2}} }$ can be extended to a $3\times 3$ pattern. As the shift spaces generated by $M$ and $N$ are non-empty, any finite pattern generated by $M$ can be extended to an allowed rectangle of arbitrarily large size and hence can be extended to an element of the shift space. Consequently, the shift space is non-empty and the proof is complete.
\end{proof}
\begin{Remark}\label{ne2}
The above result establishes the non-emptiness of the shift space using the notion of an E-pair. In particular, the proof establishes that if for every $M_{ij} \neq 0$ and for every E-pair $i_{1}$ of $i$, $\exists$ an E-pair $j_{1}$ of $j$ such that $N_{i_{1}j_{1}}\neq 0$, then the shift space is non-empty. It may be noted that the proof ensures the extension of compatible E-pairs into a $3\times 3$ square and hence the stated condition is sufficient (but not necessary) to ensure non-emptiness of the shift space. A similar argument proves that if for every $N_{kl} \neq 0$ and for every E-pair $k_{1}$ of $k$, $\exists$ an E-pair $l_{1}$ of $l$ such that $M_{k_{1}l_{1}} \neq 0$, then the shift space is non-empty and hence we get the following corollary.
\end{Remark}
\begin{Cor}
Let $X$ be a two dimensional shift of finite type and let the sequence spaces generated by $M$ and $N$ be non-empty. If for every $N_{kl} \neq 0$ and for every E-pair $k_{1}$ of $k$, $\exists$ an E-pair $l_{1}$ of $l$ such that $M_{k_{1}l_{1}} \neq 0$, then $X_{G} \neq \phi $.
\end{Cor}
\begin{proof}
The proof follows from discussions in Remark \ref{ne2}.
\end{proof}
\begin{Proposition}
A shift space $X_{G}$ is finite if it satisfies the following two conditions: \
\begin{enumerate}
\item $M$ and $N$ are permutation matrices.
\item Every pattern in $A_{1}$ and $A_{2}$ has a unique E-pair.
\end{enumerate}
\end{Proposition}
\begin{proof}
Let $X$ be a shift space and let $(1)$ and $(2)$ hold good. Firstly, it may be noted that as $M$ and $N$ are permutation matrices, the shift spaces generated by $M$ and $N$ are finite (unions of periodic orbits). Further, as every triangular pattern is uniquely extendable to a $2\times 2$ pattern, every infinite pattern generated by $M$ is uniquely extendable to an element of the shift space. Consequently, the shift space is finite and the proof is complete.
\end{proof}
We now give examples to show that the shift space may not be finite if either of the above two conditions is dropped.
\begin{ex}
Let $X$ be the shift space arising from Figure 6. Then, the adjacency matrices associated with the graph are:
\begin{figure}[h]
\includegraphics[height=5.0cm, width=10.0cm]{cex1.png}
\caption{}
\end{figure}
$$\textit{H}= \bordermatrix{ & 0 & 1 & 2 \cr
0 & 0 & 1 & 0 \cr
1 & 0 & 0 & 1 \cr
2 & 1 & 1 & 0 \cr
}
\ \ \ \ \ \ \ \ \
\textit{V}= \bordermatrix{ & 0 & 1 & 2 \cr
0 & 1 & 1 & 0 \cr
1 & 0 & 0 & 1 \cr
2 & 1 & 1 & 0 \cr
}$$
Then,
$$\textit{HV}= \bordermatrix{ & 0 & 1 & 2 \cr
0 & 0 & 0 & 1 \cr
1 & 1 & 1 & 0 \cr
2 & 1 & 1 & 1 \cr
}
\ \ \ \ \ \ \ \ \
\textit{VH}= \bordermatrix{ & 0 & 1 & 2 \cr
0 & 0 & 1 & 1 \cr
1 & 1 & 1 & 0 \cr
2 & 0 & 1 & 1 \cr
}$$
Note that there exist indices $i,j$ such that $(HV)_{ij} \neq 0$ but $(VH)_{ij} =0 $ (and there exist $k, l$ such that $(VH)_{kl} \neq 0 $ but $(HV)_{kl} =0$). Updating the matrices $HV$ and $VH$ we obtain
$$\textit{HV}= \bordermatrix{ & 0 & 1 & 2 \cr
0 & 0 & 0 & 1 \cr
1 & 1 & 1 & 0 \cr
2 & 0 & 1 & 1 \cr
}
\ \ \ \ \ \ \ \ \
\textit{VH}= \bordermatrix{ & 0 & 1 & 2 \cr
0 & 0 & 0 & 1 \cr
1 & 1 & 1 & 0 \cr
2 & 0 & 1 & 1 \cr
}$$
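The matrix products and the updating step can be reproduced with a few lines of NumPy. The update rule coded below — keep an entry of each product only if the corresponding entry of the other product is also nonzero — is inferred from the updated matrices displayed above:

```python
import numpy as np

# Adjacency matrices H and V of Figure 6 (symbols 0, 1, 2)
H = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 0]])
V = np.array([[1, 1, 0],
              [0, 0, 1],
              [1, 1, 0]])

HV, VH = H @ V, V @ H

# Update: retain only entries that are nonzero in BOTH products
# (rule inferred from the updated matrices shown in the text).
mask = (HV != 0) & (VH != 0)
HV_upd, VH_upd = HV * mask, VH * mask

# Both updated matrices equal [[0,0,1],[1,1,0],[0,1,1]], as in the text.
```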
Using the above matrices, we obtain: \\
$\mathcal{A}_{1} = \{ ^{\ \ \ {2}}_{ \ {0} \ {1} } , ^{\ \ \ {0}}_{ \ {1} \ {2} } , ^{\ \ \ {1}}_{ \ {1} \ {2} } , ^{\ \ \ {1}}_{ \ {2} \ {0} } , ^{\ \ \ {2}}_{ \ {2} \ {1} } \}$ and
$\mathcal{A}_{2} =\{ {^{{1} \ {2}}_{0} } , \ {^{{2} \ {0}}_{1} } , \ {^{{2} \ {1}}_{1} } , \ {^{{0} \ {1}}_{2} } , \ {^{{1} \ {2}}_{2} } \}$ \\
It can be verified that every element of $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$ extends uniquely to a $2 \times 2$ square, and hence condition (2) holds. The matrices $M$ and $N$ are:
$$\textit{M}= \bordermatrix{ & ^{\ \ \ {2}}_{ \ {0} \ {1} } & ^{\ \ \ {0}}_{ \ {1} \ {2} } & ^{\ \ \ {1}}_{ \ {1} \ {2} } & ^{\ \ \ {1}}_{ \ {2} \ {0} } & ^{\ \ \ {2}}_{ \ {2} \ {1} } \cr
^{\ \ \ {2}}_{ \ {0} \ {1} } & 0 & 0 & 0 & 1 & 1 \cr
^{\ \ \ {0}}_{ \ {1} \ {2} } & 1 & 0 & 0 & 0 & 0 \cr
^{\ \ \ {1}}_{ \ {1} \ {2} } & 0 & 1 & 1 & 0 & 0 \cr
^{\ \ \ {1}}_{ \ {2} \ {0} } & 0 & 1 & 1 & 0 & 0\cr
^{\ \ \ {2}}_{ \ {2} \ {1} } & 0 & 0 & 0 & 1 & 1 \cr
}$$
$$\textit{N}= \bordermatrix{ & {^{{1} \ {2}}_{0} } &{^{{2} \ {0}}_{1} } &{^{{2} \ {1}}_{1} } & {^{{0} \ {1}}_{2} } & {^{{1} \ {2}}_{2} } \cr
{^{{1} \ {2}}_{0} } & 0 & 0 & 0 & 1 & 1 \cr
{^{{2} \ {0}}_{1} } & 1 & 0 & 0 & 0 & 0 \cr
{^{{2} \ {1}}_{1} } & 0 & 1 & 1 & 0 & 0 \cr
{^{{0} \ {1}}_{2} } & 0 & 1 & 1 & 0 & 0\cr
{^{{1} \ {2}}_{2} } & 0 & 0 & 0 & 1 & 1 \cr
}$$
It can be seen that $M$ and $N$ are not permutation matrices and the shift space $X$ is not finite. Consequently, Proposition $8$ does not hold if $M$ and $N$ are not permutation matrices.
\end{ex}
\begin{ex}
Let $X$ be the shift space arising from the graph $G$ in Figure 7. The adjacency matrices corresponding to the graph $G$ are:\\
\begin{figure}[h]
\includegraphics[height=5.0cm, width=10.0cm]{ex1.png}
\caption{}
\end{figure}
$$\textit{H}= \bordermatrix{ & 1 & 2 & 3 \cr
1 & 0 & 1 & 0 \cr
2 & 0 & 0 & 1 \cr
3 & 1 & 1 & 0 \cr
}
\ \ \ \ \ \ \ \ \
\textit{V}= \bordermatrix{ & 1 & 2 & 3 \cr
1 & 0 & 0 & 1 \cr
2 & 1 & 0 & 1 \cr
3 & 0 & 1 & 0 \cr
}$$
Further,
$$\textit{HV}= \bordermatrix{ & 1 & 2 & 3 \cr
1 & 1 & 0 & 1 \cr
2 & 0 & 1 & 0 \cr
3 & 1 & 0 & 2 \cr
}
\ \ \ \ \ \ \ \ \
\textit{VH}= \bordermatrix{ & 1 & 2 & 3 \cr
1 & 1 & 1 & 0 \cr
2 & 1 & 2 & 0 \cr
3 & 0 & 0 & 1 \cr
}$$
Once again, note that there exist indices $i,j$ such that $(HV)_{ij} \neq 0$ but $(VH)_{ij} = 0$ (and indices $k, l$ such that $(VH)_{kl} \neq 0 $ but $(HV)_{kl} = 0$). Updating the matrices $HV$ and $VH$ we obtain
$$\textit{HV}= \bordermatrix{ & 1 & 2 & 3 \cr
1 & 1 & 0 & 0 \cr
2 & 0 & 1 & 0 \cr
3 & 0 & 0 & 2 \cr
}
\ \ \ \ \ \ \ \ \
\textit{VH}= \bordermatrix{ & 1 & 2 & 3 \cr
1 & 1 & 0 & 0 \cr
2 & 0 & 2 & 0 \cr
3 & 0 & 0 & 1 \cr
}$$
Consequently, \\
$\mathcal{A}_{1} = \{ ^{\ \ \ {1}}_{ \ {1} \ {2} } , ^{\ \ \ {2}}_{ \ {2} \ {3} } , ^{\ \ \ {3}}_{ \ {3} \ {1} } , ^{\ \ \ {3}}_{ \ {3} \ {2} } \}$ and
$\mathcal{A}_{2} =\{ {^{{3} \ {1}}_{1} } , \ {^{{3} \ {2}}_{2} } , \ {^{{1} \ {2}}_{2} } , \ {^{{2} \ {3}}_{3} } \}$ \\
and \\
$$\textit{M}= \bordermatrix{ & ^{\ \ \ {1}}_{ \ {1} \ {2} } & ^{\ \ \ {2}}_{ \ {2} \ {3} } & ^{\ \ \ {3}}_{ \ {3} \ {1} } & ^{\ \ \ {3}}_{ \ {3} \ {2} } \cr
^{\ \ \ {1}}_{ \ {1} \ {2} } & 1 & 0 & 0 & 0 \cr
^{\ \ \ {2}}_{ \ {2} \ {3} } & 0 & 1 & 0 & 0 \cr
^{\ \ \ {3}}_{ \ {3} \ {1} } & 0 & 0 & 1 & 0 \cr
^{\ \ \ {3}}_{ \ {3} \ {2} } & 0 & 0 & 0 & 1 \cr
}$$
$$\textit{N}= \bordermatrix{ & {^{{3} \ {1}}_{1} } & {^{{3} \ {2}}_{2} } &{^{{1} \ {2}}_{2} } & {^{{2} \ {3}}_{3} } \cr
{^{{3} \ {1}}_{1} } & 1 & 0 & 0 & 0 \cr
{^{{3} \ {2}}_{2} } & 0 & 1 & 0 & 0 \cr
{^{{1} \ {2}}_{2} } & 0 & 0 & 1 & 0 \cr
{^{{2} \ {3}}_{3} } & 0 & 0 & 0 & 1 \cr
}$$
Clearly, $M$ and $N$ are permutation matrices, but not every triangular pattern extends uniquely to a $2 \times 2$ pattern. It can be seen that the shift space generated is not finite, and hence Proposition $8$ does not hold if either of the two conditions is dropped.
\end{ex}
\section{\label{sec:level1}Introduction}
Attainment of boson and fermion condensates~\cite{bei} of ultracold neutral atoms
has presented an unprecedented opportunity to study properties of
quantum many-particle systems. {\it Fermionic} atoms in {\it optical lattices}~\cite{greiner02,kohl05}
constitute yet another intriguing set of systems.
While these are by themselves interesting to study, they may also provide a way to
gain useful insight into properties of correlated electrons
in solids. Jaksch et al.~\cite{jaksch98} suggested that atoms in optical lattices, confined
to the lowest Bloch band, can be represented by the Hubbard model with
hopping kinetic energy $t$ between neighboring sites,
and on-site interaction $U$.
Hubbard model calculations~\cite{micnas,theory,kotliar88} predict that the attractive-$U$
Hubbard model gives rise to s-wave superconductivity, while the repulsive-$U$ model results
in an antiferromagnetic or a d-wave superconducting phase depending on filling (number of
fermions per lattice site). Owing to the continuous tunability of model parameters, such as
density, hopping, or interactions,
optical lattices can serve as testing grounds for such models.
This has led, for example, to the suggestion~\cite{hofstetter02} that the
underlying physics of the high $T_c$ superconductors may be understood by
studying these systems.
Recent works~\cite{kohl05,duan05,multiband} have pointed out the possible
role of additional Bloch bands and multi-band couplings
in optical lattices. In solids this would correspond to having multiple orbitals and
near-neighbor interactions. Duan~\cite{duan05} has shown that on different sides of a
broad Feshbach resonance, the effective Hamiltonian can be reduced to a t-J model, familiar
in correlated electron systems, wherein it has been suggested~\cite{kotliar88} that the t-J model
can give rise to d-wave pairing.
Fermionic atoms subjected to positive and negative detuning using Feshbach resonance
technique provide realizations of BEC-BCS crossover behavior.
It has been recently suggested~\cite{euro} that it should also
be possible to study superfluid properties
of fermions in {\it optical lattices} around the BEC-BCS crossover regime.
Starting with the seminal work of Eagles~\cite{eagles} and Leggett~\cite{leggett},
the BEC-BCS crossover problem received considerable theoretical
attention~\cite{micnas,VQ,becbcshtccont,melo,becbcshtclat,derhertog99,andrenacci99}
due to the possibility that high $T_c$ superconductors, possessing short coherence lengths,
could fall in the BEC-BCS crossover region. Several authors employed
continuum models~\cite{micnas,VQ,becbcshtccont,melo,andrenacci99},
focussing mostly on conventional s-wave pair symmetry. Lattice models with on-site
or nearest-neighbor attractions have also been
considered~\cite{micnas,VQ,becbcshtclat,derhertog99,andrenacci99}.
More recent theory work~\cite{becbcscold}
are in the context of cold fermions.
Motivated by these issues, {\it in this paper},
we study superfluid properties of fermions in a 2D square lattice
in the BEC-BCS crossover regime using a finite-range {\it pairing} interaction,
obtainable from a multi-band {\it extended} Hubbard model.
As representative cases of unconventional pair symmetry,
we consider two even-parity representations of the
cubic group, namely the $\ell =2$ $d_{x^2-y^2}$-wave, and the
$\ell=0$ extended s-wave ($s^*$).
There has been work~\cite{derhertog99,andrenacci99,chen} employing similar pairing interaction;
however these have focussed on different systems and issues. We present several new results,
including specific signatures of superfluid states with
{\it unconventional pairing gap} symmetry as one goes between the BEC and BCS regimes.
This could provide a way to distinguish between different gap symmetry states
in systems that allow for tuning into the BEC-BCS crossover regime, such as fermionic atoms in
2D optical lattices, and possibly high $T_c$ cuprates.
One of our key results is
the remarkable behavior of the fermion distribution function, $v_k^2$,
(related to momentum distribution, $n_k$):
For the d-wave gap function, $v_k^2$ changes {\it abruptly} from
having a peak at the Brillouin zone (BZ) center (0,0)
to a vanishing central peak accompanied by a
redistribution of the weight around other parts of the
BZ ($(0,\pm \pi)$,$(\pm \pi, 0)$) as the system crosses from
the weak-coupling BCS to the strong-coupling BEC regime.
By contrast, $v_k^2$ changes smoothly in the $s^*$-wave case.
Similar signatures are also found in the ratio of Bogoliubov coefficients $v_k/u_k$,
related to the phase of the superfluid wavefunction. The Fourier transform of $v_k^2$
in real space exhibits a ``checkerboard'' type pattern that could have consequences for experiments.
The extended Hubbard model for two equal species population system on a 2D square lattice
is given by:
\begin{eqnarray}
H&=&\sum_{<ij>\sigma}(-t c_{i\sigma}^+c_{j\sigma}+\rm{H.c.}) + U\sum_{i}n_{i\sigma}n_{i-\sigma}\nonumber\\
&-&V\sum_{<ij>\sigma\sigma^{\prime}}n_{i\sigma} n_{j\sigma^{\prime}} - \mu_{o}\sum_{i}n_{i},
\end{eqnarray}
where $t$ is the kinetic energy hopping,
$\mu_{o}$ the unrenormalized chemical potential,
$U$ the on-site repulsion and $V$ the nearest-neighbor attraction. In the case of
cold fermions on a lattice, $V$ would be related to inter-band coupling.
$\sigma$ is the ``pseudo-spin'' index, which could refer to equally populated hyperfine states
in the case of optical lattices.
At the mean-field level, the Hartree self-energy terms
renormalize $\mu_{o}$ such that $\mu = \mu_o + \mu_U(f) +\mu_V(f)$
where $\mu_U(f)$ and $\mu_V(f)$ are filling-dependent corrections
to $\mu$. We work with the
renormalized $\mu$ so as to properly deal with weak and strong couplings, and
take $\mu_{J_i}(f) = J_if$, where $J_i = U$, $- V$.
The filling $f=N/2M$, with $N$ the number of particles, $M$
the number of lattice sites, and the pseudo-spin degeneracy factor 2.
On Fourier transforming and retaining interactions between
particles with equal and opposite momentum, as in BCS theory,
the reduced {\it pairing Hamiltonian} assumes the form:
\begin{equation}
H_{pair}=\sum_{k}(\epsilon_k-\mu) c^{+}_{k}c_k+\sum_{kk'}V_{kk'}c^{+}_{k'}c^{+}_{-k'}
c_{-k}c_{k}
\end{equation}
where in the tight-binding approximation,
$\epsilon_k=- 2t (\cos k_x+\cos k_y)$; $V_{kk'}=V_0 (\cos(k_x-k'_x)+\cos(k_y-k'_y))$, which
is {\it non-separable}.
Using the standard BCS variational ansatz,
$|\Phi_{BCS}> = \prod_{\bf k} (u_{\bf k} + v_{\bf k}c_{\bf k}^{{\dag}}
c_{\bf - k}^{{\dag}})|0>$,
we obtain the $T=0$ gap equations
for the gap functions $\Delta_k^{d,s}=\Delta_o (f) (\cos k_x \pm \cos k_y)$
with $d_{x^2-y^2}$(-) and $s^*$(+) symmetries,
\begin{equation}
\frac{1}{V_o} =
\frac{1}{2 M} \sum_{k}^{\rm{BZ}}\frac{\cos k_x (\cos k_x \pm \cos k_y)}
{E_k^{d,s^*}},
\end{equation}
where
$E_k^{d,s^*} =
((\epsilon_k-\mu)^2+\Delta_{o}^2 (\cos k_x \pm \cos k_y)^2)^{1/2}$.
The Bogoliubov coefficients are given by,
\begin{equation}
|u_{\bf k}|^2; \;\; |v_{\bf k}|^2 = \frac{1}{2}
(1 \pm \frac{\epsilon_k -\mu}{E_k^{d,s^*}}).
\end{equation}
The ratio $v_{\bf k}/u_{\bf k} = - (E_k^{d,s^*} - (\epsilon_k - \mu))/
\Delta_k^{d,s^*}$.
Following Leggett\cite{leggett}, we readjust $\mu$ for strong
attractions by supplementing the T=0 gap equation with the number equation:
\begin{equation}
N= \sum_{k}^{\rm{BZ}} \left(1-\frac{\epsilon_{k}-\mu}{E_k^{d,s^*}}\right).
\end{equation}
This determines the self-consistently readjusted $\mu$, which is no longer
fixed at the Fermi level, and makes the gap
equation applicable over the entire range of filling, thereby the BCS and BEC regimes.
To allow for strong scattering, the sums are performed over the entire BZ.
The natural momentum cut-off afforded by the lattice avoids any possible
ultraviolet divergences.
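The structure of Eqs.~(3) and (5) can be made concrete with a short NumPy sketch (this is not the code behind the figures; the grid size and parameter values are arbitrary illustrative choices). For fixed $\mu$ and gap amplitude $\Delta_o$, both BZ sums can be evaluated directly, yielding the implied coupling $V_o$ and filling $f = N/2M$; constant-$\mu$ curves like those in Fig.~2 can be traced this way:

```python
import numpy as np

t = 1.0                                 # hopping sets the energy scale
L = 128                                 # BZ sampled on an L x L grid (illustrative)
k = -np.pi + 2*np.pi*(np.arange(L) + 0.5)/L   # half-integer offsets avoid E = 0
KX, KY = np.meshgrid(k, k)
eps = -2*t*(np.cos(KX) + np.cos(KY))    # tight-binding dispersion

def implied_V_and_f(mu, Delta0, d_wave=True):
    """Evaluate gap Eq. (3) and number Eq. (5) on the grid for fixed
    (mu, Delta0); returns the implied coupling V0 and filling f = N/(2M)."""
    s = -1.0 if d_wave else 1.0         # d-wave: cos kx - cos ky; s*: +
    gap = Delta0*(np.cos(KX) + s*np.cos(KY))
    E = np.sqrt((eps - mu)**2 + gap**2)
    M = KX.size                         # number of lattice sites
    inv_V = np.sum(np.cos(KX)*(np.cos(KX) + s*np.cos(KY))/E)/(2*M)
    f = np.sum(1.0 - (eps - mu)/E)/(2*M)
    return 1.0/inv_V, f

# Example: d-wave on the BEC side (mu below the band bottom -4t) vs BCS side
V_bec, f_bec = implied_V_and_f(mu=-6.0*t, Delta0=0.5*t)
V_bcs, f_bcs = implied_V_and_f(mu=-3.0*t, Delta0=0.5*t)
# Both couplings come out positive, and f grows with mu at fixed Delta0.
```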
\begin{figure}
\centering
\includegraphics[scale=0.85]{Fig1.eps}
\caption{Chemical potential $\mu$ vs. coupling V at different
fillings $f$ for the $d$-wave case.
BEC pairs appear where $\mu(V)$ crosses the $\mu/2t =-2$ line.
The \underline{inset} shows $\mu(V)$ for the $s$- (dash-short dashed line),
$s^*$ (dashed line), and d-wave (solid line) at
$f=0.2$.}
\label{fig:Fig.1}
\end{figure}
Remarkable differences in features stem
in an essential way from differences in gap symmetry.
The $d_{x^2-y^2}$ gap $\Delta_k^d$ vanishes
along the lines $\pm k_x = \pm k_y$
in the 2D BZ, i.e. at {\sl four} points on the Fermi surface (fs), the location
of which depends upon filling.
The node line of the $s^*$ gap $\Delta_k^{s^*}$
coincides with the tight-binding fs at exact 1/2-filling,
and the gap is nodeless on the fs otherwise. Here, $\mu \le 0$,
with $\mu = -4t$ at the bottom of the band. Owing to particle-hole
symmetry, it is sufficient to consider $0 \leq f \leq 1/2$.
Upon examination of the gap functions and Eqs. (3-5), the
following {\it distinctions} become apparent:
(a) For very low fillings
($f \rightarrow 0, \mu \rightarrow -4t$), a threshold coupling is required
for pairing in the d-wave case, while in the \underline{$s^*$ case}
$\Delta^{s^*} \rightarrow 0$ as $V \rightarrow 0$ due to a
{\sl weak singularity}
at $\mu =-4t$. On the other hand, at 1/2-filling,
due to a {\sl weak singularity} at $\mu = 0$
in the \underline{d-wave case}, $\Delta^d \rightarrow 0$ as $V \rightarrow 0$.
In the $s^*$ case such a singularity is not present and
as $\Delta^{s^*} \rightarrow 0$, $V/4t \rightarrow \pi^2/8$, i.e.
a minimum coupling is needed for pairing.
In contrast with $\Delta_o^{s^*}(V)$, $\Delta_o^{d}(V)$ changes
slope at $\mu = -4t$, and hence is not smooth everywhere (though it is continuous).
(b) For small $k$ (where $\epsilon_k \rightarrow -4t$), we have the following limiting behavior:
(i) $\mu < -4t$ (so that $\epsilon_k > \mu$); $\;\; |u_k| \rightarrow 1, |v_k| \rightarrow 0$;
this is the {\it strong-coupling} BEC limit. Here
the ratio $v_k/u_k \sim \Delta_k/2|\mu| \rightarrow (k_x^2 - k_y^2)/2|\mu|$,
i.e. {\it analytic}.
(ii) $\mu > -4t$ (so that $\epsilon_k < \mu$); $\;\; |u_k| \rightarrow 0, |v_k| \rightarrow 1$;
this is the {\it weak-coupling} BCS limit. Here
$v_k/u_k \rightarrow 1/(k_x - k_y)$, i.e. {\it non-analytic}.
(iii) $\mu = -4t$ (so that $\epsilon_k \rightarrow \mu$); $\;\; |u_k| \ne 0, |v_k| \ne 0$,
when $E_k \rightarrow 0$.
Then $v_k/u_k \sim (k_x - k_y)/(k_x + k_y)$, i.e. intermediate between
(i) and (ii).
It may be noted that for d-wave, the quasiparticle excitations
in the BCS limit (ii) are ``gapless'' for some values
of $k$, while in the BEC limit (i),
$E_{k} \ne 0$, even for gaps with nodes~\cite{Mohit}.
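The contrast between the regimes can be checked point-by-point from Eq.~(4). In the sketch below (parameter values are illustrative, not taken from the figures), the d-wave $v_k^2$ is evaluated at the zone center and at $(\pi,0)$ for $\mu$ above and below $-4t$:

```python
import numpy as np

t, Delta0 = 1.0, 0.5   # illustrative values in units of the hopping

def v2_dwave(kx, ky, mu):
    """Fermion distribution v_k^2 of Eq. (4) for the d-wave gap."""
    eps = -2*t*(np.cos(kx) + np.cos(ky))
    gap = Delta0*(np.cos(kx) - np.cos(ky))
    E = np.sqrt((eps - mu)**2 + gap**2)
    return 0.5*(1.0 - (eps - mu)/E)

# BCS-like regime (mu above the band bottom -4t): peak at the zone center,
# since eps - mu < 0 and the gap vanishes at (0,0)
v2_bcs_center = v2_dwave(0.0, 0.0, mu=-3.0*t)

# BEC regime (mu below -4t): the central weight vanishes identically,
# since the gap is zero and eps - mu > 0 at (0,0) ...
v2_bec_center = v2_dwave(0.0, 0.0, mu=-6.0*t)

# ... while some weight survives near (0, +-pi) and (+-pi, 0)
v2_bec_edge = v2_dwave(np.pi, 0.0, mu=-6.0*t)
```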
Self-consistent numerical solutions of Eqs.(3-5) bear out the above features in
detail, and also reveal a number of {\it other features}. We scale $\mu$, $V$, and $\Delta$
by the hopping parameter $t$.
At a given filling $f$, both $\Delta_k^d$ and $\Delta_k^{s^*}$ increase
with increasing $V$. While for d-wave it is
easier to pair electrons at
higher fillings, this is not necessarily the case for
$s^*$-wave for the weaker couplings $V/4t \leq 1.5$ and
small gaps $\Delta^{s^*}/2t \leq 0.5$.
In Fig.1 we show $\mu(V)$ for different fillings f. At a fixed $f$,
in both the $d$- and $s^*$-wave cases, $\mu$ decreases with increasing
coupling $V$, changing less rapidly for progressively larger $f$.
However in the $s^*$ case, $\mu(V)$ exhibits a small ``bump'' for
weaker couplings $V/4t \leq 1.5$.
The drop in $\mu$ with increasing attraction is
significantly more rapid in the uniform
$s$-wave case; see inset in Fig. 1.
{\it Crossover to the BEC regime} here is signalled by
$\mu(V)$ going below the bottom of the band,
i.e. crossing the $\mu = - 4t$ line.
As Fig. 1 shows, for the d-wave case, this develops at both
low and high fillings at some minimum value
$V_b/4t$ of the coupling.
It is interesting to note that as
$f \rightarrow 0$, $V_b/4t \rightarrow 1.8$.
At exactly 1/2-filling this coupling tends to infinitely
large values.
For couplings $V > V_b$, the system is
conducive to BEC pairing;
for $V < V_b$, the system exhibits BCS-like features.
\begin{figure}
\centering
\includegraphics[scale=0.85]{Fig2.eps}
\caption{d-wave gap functions $\Delta/2t$ vs. nearest-neighbor coupling
$V/4t$ for different chemical potential $\mu$. \underline{Inset}: Results for the $s^*$ case.
$\mu = - 4t$ demarcates BEC and BCS regimes.}
\label{fig:Fig.2}
\end{figure}
Fig. 2 shows the behavior of the d-wave gaps as a function of coupling
$V$ for different values of the chemical potential $\mu$.
The $\mu=-4t$ curve represents the locus of $V_b/4t$ for different
fillings (see Fig. 1), and demarcates BEC and BCS -pair regimes.
To the left is the $\mu>-4t$ region wherein finite gaps of the BCS or intermediate
BCS-BEC types exist. On a given
constant-$\mu$ curve it may not be possible to have solutions for
any arbitrary filling,
but only those that satisfy Eqs. (3) and (4) self-consistently.
The {\it inset} in Fig. 2 shows the corresponding $\Delta^{s^*}(V)$ curves
for the $s^*$ case. There are interesting differences with the d-wave results
in that the boundary ($\mu=-4t$) separating BEC/BCS regimes
is not as clear-cut for the weaker couplings $V/4t \leq 1.5$ and
the smaller gaps $\Delta/2t \leq 0.5$, however the $\mu<-4t$ region
lies to the right of the $\mu=-4t$ curve as in the $d$-wave case.
\begin{figure}
\centering
\includegraphics[scale=0.40]{Fig3.eps}
\caption{(a), (b): 3D plots of the d-wave electron distribution
function $v_k^2$ in the $(k_x,k_y)$ plane at filling $f=0.1$, showing the
abrupt ``jump'' in $v_k^2$. In
the BCS regime (a), $\mu = - 3t$, $\Delta^d = 5.2t$, $ V = 18.7t$, and in
the BEC regime (b), $\mu = - 6t$, $\Delta^d = 0.6t$,
$ V = 5.2t$.
(c), (d): 3D plots of the d-wave ratio $v_k/u_k$ in the $(k_x,k_y)$ plane for the same parameters
as in (a), (b), respectively. In the BCS regime (c)
it can be seen to be non-analytic; in the BEC regime (d)
it is analytic. (e), (f): The same as in (a), (b), but for
the $s^*$-wave; the behavior is smooth.}
\label{fig:Fig 3}
\end{figure}
Differences in the gap symmetry manifest in a
striking manner in the momentum distribution function, $v_k^2$, and
the ratio $v_k/u_k$.
For d-wave, for a given filling, in the {\sl weak-coupling} BCS regime
($V < V_b(f)$, $\mu > -4t$), $v_k^2$ exhibits a peak centered
around the zone center (0,0),
that becomes progressively narrower with decreasing filling.
Then at the crossover point
at $V_b(f)$ ($\mu=-4t$), $v_k^2$ {\it abruptly}
goes to zero around (0,0), and shows a drastic redistribution in a different
region of the BZ, namely, along $(0, \pm \pi)$, and $(\pm \pi, 0)$.
The abruptness is evident from the ``jump''
in $v_k^2$ as the chemical potential
goes from just above the bottom of the band ($\mu > -4t$) to just
below ($\mu < -4t$), i.e. from BCS to BEC regime. A representative case is shown in Figs. 3a,3b.
In marked contrast, in the $s^*$-wave case (Figs 3e,3f), the zone center peak
in $v_k^2$ decreases {\it smoothly} as one goes from the BCS regime
to the BEC regime; only a slight redistribution occurs at $(\pm \pi, \pm \pi)$.
We find this behavior to be replicated at all fillings f.
As observed above in the limiting cases, the numerical
calculations show (Fig 3c,3d) that for d-wave,
in the {\it weak-coupling} BCS regime, $v_k/u_k$ is {\it non-analytic}
at $\pm k_x = \pm k_y$; in the {\it strong-coupling} BEC regime,
$v_k/u_k$ is {\it analytic},
vanishing along the zone diagonals and peaking about
$(\pm \pi,0)$, $(0, \pm \pi)$.
In the $s^*$ case (not shown), $v_k/u_k$ is analytic in both regimes.
Similar behavior in $n_k$ has also been reported in other work~\cite{melo,VQ,derhertog99}.
\begin{figure}
\centering
\includegraphics[scale=0.30]{Fig4.eps}
\caption{(a) Fourier transform $\rho_v(x,y)$ of a typical d-wave
electron distribution function, $v_k^2$. Here, filling $f=0.01$,
$\mu = - 4.2t$ (strong-coupling regime), gap $\Delta = 0.76t$. (b) Projection
of (a) to show contrast ratio of $\rho_v(x,y)$.}\label{fig:Fig 4}
\end{figure}
Our findings suggest that experiments that may
be able to directly or indirectly
probe $v_k^2$ or combinations of $u_k$ and $v_k$ could
reveal novel aspects of the paired states. For example,
it may be possible to decipher the OP symmetry (e.g. d- or $s^*$- wave)
by measuring $v_k^2$ as a function of
filling (especially at low-fillings),
and/or for different interaction strengths, both of which can be controlled in optical
lattices.
At the BCS-BEC crossover, we expect the behavior
to be quite different depending
on whether the OP is d- or $s^*$ wave.
Also, in the case of d-wave pairs,
quantities sensitive to $v_k^2$ or to ($u_k, v_k$) should be very different
depending on whether the paired state is BEC or BCS like.
A possible probe may be ARPES. Information may also be obtained
from experiments that sample
the quasiparticle energy $E_k = (\Delta_k^2 + (\epsilon_k-\mu)^2)^{1/2}$ (related
to $u_k,v_k$), the quasiparticle density of states, or
coherence factors, $u_kv_k +u_k'v_k'$.
Angle-dependent or transverse ultrasound attenuation\cite{coleman},
or quasiparticle tunneling at low fillings
are possible experiments.
The Fourier transform of $v_k^2(k_x,k_y)$, namely, $\rho_v(x,y)$ may provide
yet another interesting way to test our results. In the {\it d-wave case}, in marked contrast
with its behavior in the BCS regime,
$\rho_v(x,y)$ is {\it oscillatory} in the BEC regime, and exhibits an inhomogeneous
``checkerboard-type'' pattern as shown in Fig 4(a,b).
For the chosen parameters of Fig. 3, the contrast ratio
of the lowest density to the peak is roughly 50\%, being
most sensitive to the location of $\mu(V)$. The length scale
is of the order of fractions of lattice spacing.
$\rho_v(x,y)$ is fairly uniform in the {\it $s^*$ case} in both regimes.
Highly sensitive STM may be able to pick up such distinctions\cite{STM}.
Most of the phenomena we have discussed
occur away from exact 1/2-filling, and at
relatively strong coupling, where possible effects of spin density
wave (SDW) and charge density wave (CDW) instabilities are expected to
be suppressed. Addition of a next-near-neighbor hopping
would also stabilize the paired state, as well as lower the minimum
near-neighbor interaction necessary for a bound-state;
we have checked this\cite{VQ}. We have not explored here the issues
of collective modes or phase separation\cite{phasesep}.
It may be interesting to
extend this work to, for example, finite-T, or to explore whether
the inhomogeneous density that we find bears a relationship to
the range/strength of the interaction, or to possible phase separation.
We thank E. Abrahams, S. Davis, and H. Neuberger for discussions and comments.
The work is supported in part by ICAM.
\section{TMDs and the Light-Cone CQM}
In recent years much work has been devoted to the study of semi-inclusive deep
inelastic scattering (SIDIS), Drell-Yan dilepton production, and hadron production in $e^+ e^-$ annihilation as
powerful tools to understand the nucleon structure. According to the
factorization theorem,
the physical
observables of such processes can be expressed as convolution of hard partonic
scattering cross sections, parton distribution functions (PDFs) and/or
fragmentation functions
(FFs)~\cite{Collins:2003fm,Collins:1992kk,Mulders:1995dh,Boer:1997nt,Brodsky:2002cx,Collins:2002kn}.
With respect to the usual
inclusive deep inelastic scattering (DIS) where PDFs only depend on the
longitudinal momentum fraction carried by the parton, now PDFs, as well as FFs,
also depend on the transverse momentum.
At leading twist there are eight transverse momentum dependent
PDFs (TMDs)~\cite{Mulders:1995dh,Boer:1997nt}, three of them
surviving when integrated over the transverse momentum and giving rise to
the familiar parton density, helicity and transversity distributions.
Data~\cite{Arneodo:1986cf,Diefenthaler:2005gx,Airapetian:1999tv,Airapetian:2001eg,Avakian:2003pk,Airapetian:2002mf,Airapetian:2006rx,Airapetian:2004tw,Alexakhin:2005iw,Kotzinian:2007uv} on SIDIS are available and many more are expected to come in the future,
giving first insights on the TMDs
\cite{Efremov:2002ut,Efremov:2004tp,Vogelsang:2005cs,Efremov:2006qm,Anselmino:2007fs,Arnold:2008ap,Anselmino:2008sg,Zhang:2008nu}.
However, model calculations
\cite{Jakob:1997wg,Avakian:2008dz,Pasquini:2008ax,Bacchetta:2008af,Meissner:2007rx,Yuan:2003wk,Gamberg:2007gb,Schweitzer:2001sr,Pasquini:2005dk,Efremov:2004tz} play an important role
for unraveling the information on the quark dynamics encoded
in these novel functions. In this contribution we will review the results for the TMDs in a light-cone constituent quark model (CQM) which was successfully applied also for the calculation of the electroweak properties of the nucleon~\cite{Pasquini:2007iz} and
generalized parton distributions~\cite{Boffi:2007yc}.
\newline\noindent
A convenient way to describe parton distributions is to
use the representation in terms of overlaps of light-cone wave functions
(LCWFs), which are the probability amplitudes to find a given $N$-parton configuration in the Fock-space expansion of the hadron state.
This representation becomes useful in
phenomenological applications where one can reasonably truncate the
expansion of the hadron state to the Fock components with a few partons.
In our approach, we consider the minimum Fock sector with just three valence quarks. This truncation allows one to describe the parton distributions in those kinematical regions where the valence degrees of freedom are effective, while
the contributions from sea quarks and gluons
are suppressed.
The three-quark component of the LCWF, keeping the full transverse momentum dependence of the partons, can be classified in a model independent way in terms
of six independent light-cone amplitudes~\cite{Ji:2002xn}, which serve to parametrize the
contribution
from the four different orbital angular momentum components $L_z$ compatible
with total angular momentum conservation, i.e. $L_z=0,\pm 1 , 2$.
In Ref.~\cite{Pasquini:2008ax}, these six amplitudes have been explicitly derived
in a light-cone CQM,
considering the relativistic spin dynamics arising from the boost of
the instant-form wave function to the light cone.
The instant-form wave function is constructed as a product of a momentum
wave function which is in a pure S-wave state and invariant under permutations,
and a spin-isospin wave function determined by SU(6)
symmetry.
The corresponding solution in light-cone dynamics is obtained through the
unitary Melosh rotations acting
on the spin of the individual quarks.
The relativistic effects of the Melosh rotations
are evident in the presence
of spin-flip terms generating non-zero orbital
angular momentum components which fit the
model-independent classification of the three-quark LCWF~\cite{Pasquini:2008ax,Ji:2002xn}.
The explicit expressions of these light-cone amplitudes can be found in
Ref.~\cite{Pasquini:2008ax}, while the corresponding results for the time-even
TMDs are
\begin{eqnarray}
\label{eq:f1}
f^a_1(x,p_T)&=&
N^a \int{\rm d}[X]\
\delta(x-x_3)\delta({\bf p}_{T}-{\bf p}_{\perp\,3})\
\vert \psi(\{x_i,{\bf p}_{\perp\,i}\})\vert^2,
\nonumber\\
\label{eq:g1}
g^a_{1L}(x,p_T)&=&
P^a\int{\rm d}[X]\
\delta(x-x_3)\delta({\bf p}_{T}-{\bf p}_{\perp\,3})\
\frac{(m+ x M_0)^2 -{\bf p}^2_{T}}{(m+ xM_0)^2 + {\bf p}^2_{T}}\;
\vert \psi(\{x_i,{\bf p}_{\perp\,i}\})\vert^2,
\nonumber\\
\label{eq:g1T}
g^{a}_{1T}(x,p_T)&=&
P^a
\int{\rm d}[X]\
\delta(x-x_3)\delta({\bf p}_{T}-{\bf p}_{\perp\,3})\
\frac{2M(m+ xM_0)}{(m+ xM_0)^2 + {\bf p}^2_{T}}\;
\vert \psi(\{x_i,{\bf p}_{\perp\,i}\})\vert^2,
\nonumber\\
h^{\perp\, a}_{1L}(x,p_T)&=&
- P^a
\int{\rm d}[X]\
\delta(x-x_3)\delta({\bf p}_{T}-{\bf p}_{\perp\,3})\
\frac{2M(m+ xM_0)}{(m+ xM_0)^2 + {\bf p}^2_T}\;
\vert \psi(\{x_i,{\bf p}_{\perp\,i}\})\vert^2,\nonumber
\label{eq:h1L}
\\
\label{eq:h1T}
h^{\perp\,a}_{1T}(x,p_T)&=&-
P^a
\int{\rm d}[X]\
\delta(x-x_3)\delta({\bf p}_{T}-{\bf p}_{\perp\,3})\
\frac{2M^2}{(m+ xM_0)^2 + {\bf p}^2_{T}}\;
\vert \psi(\{x_i,{\bf p}_{\perp\,i}\})\vert^2,
\nonumber\\
h^a_1(x,p_T)&=&
P^a
\int{\rm d}[X]
\delta(x-x_3)\delta({\bf p}_{T}-{\bf p}_{\perp\,3})
\frac{(m+ xM_0)^2}{(m+ xM_0)^2 + {\bf p}^2_{T}}
\vert \psi(\{x_i,{\bf p}_{\perp\,i}\})\vert^2,
\label{eq:h1}
\end{eqnarray}
where
the integration measure is defined as in Ref.~\cite{Pasquini:2008ax},
$M_0$ is
the mass of the non-interacting three-quark system, and
the flavor dependence is given by
the factors $N^u=2$, $N^d=1,$ and $P^u=4/3$,
$P^d=-1/3$, as dictated by SU(6) symmetry.
A further consequence of the assumed SU(6) symmetry is the factorization
in Eqs.~(\ref{eq:h1}) of the momentum-dependent wave function
$\psi(\{x_i,{\bf p}_{\perp\,i}\})$
from the spin-dependent factor arising from the Melosh rotations.
As a result one finds the following relations
\begin{eqnarray}
\label{eq:61}
&&2h^a_1(x,p_T)
=g^a_{1L}(x,p_T)+\frac{P^a}{N^a}f^a_1(x,p_T),
\qquad
h_{1L}^{\perp a}(x,p_T)
=-g_{1T}^a(x,p_T),
\\
&&
\frac{P^a}{N^a}f^a_1(x,p_T)
=h_1^a(x,p_T) -\frac{p_T^2}{2M^2}h_{1T}^{\perp \,a}(x,p_T).
\label{eq:61a}
\end{eqnarray}
These relations are common to several quark model calculations~\cite{Avakian:2008dz,Pasquini:2008ax,Jakob:1997wg,Efremov:2004tz},
though not to all~\cite{Bacchetta:2008af}.
The common feature of such models is that gluon degrees of freedom are
neglected. On the other hand,
the recent model calculation of Ref.~\cite{Efremov:2004tz}
found the interesting result that SU(6) symmetry is not a necessary condition
for relation~(\ref{eq:61a}).
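Because the SU(6) wave function factorizes, relations~(\ref{eq:61}) and (\ref{eq:61a}) can be verified directly on the Melosh-rotation factors multiplying the common density $|\psi|^2$ in Eqs.~(\ref{eq:h1}). Below is a quick numerical check; the kinematic values of $m$, $M$, $M_0$, $x$, $p_T$ are arbitrary test inputs (any positive choice works):

```python
import numpy as np

# Arbitrary kinematic test inputs (units of GeV; any positive values work)
m, M, M0, x, pT = 0.26, 0.94, 1.5, 0.3, 0.4
a = m + x*M0
D = a**2 + pT**2                  # common denominator in Eqs. (1)

# Melosh factors multiplying |psi|^2, flavor factors N^a, P^a stripped off
f1      = 1.0
g1L     = (a**2 - pT**2)/D
g1T     = 2*M*a/D
h1Lperp = -2*M*a/D
h1Tperp = -2*M**2/D
h1      = a**2/D

assert np.isclose(2*h1, g1L + f1)                      # 2 h1 = g1L + (P^a/N^a) f1
assert np.isclose(h1Lperp, -g1T)                       # h1L-perp = -g1T
assert np.isclose(f1, h1 - pT**2/(2*M**2)*h1Tperp)     # (P^a/N^a) f1 = h1 - pT^2/(2M^2) h1T-perp
```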
\begin{figure}[t]
\includegraphics[width=12.1 truecm]{ssa_spin_2008_evolution.eps}
\caption{The SSAs $A_{UT}^{\sin(\phi+\phi_S)}$ (left column),
$A_{UT}^{\sin(3\phi-\phi_S)}$ (middle column), and
$A_{UL}^{\sin(2\phi)}$ (right column) in DIS production of charged pions
off proton target, as function of $x$. The solid curves show the results on the basis of the light-cone CQM evolved to the scale $Q^2=2.5 $ GeV$^2$,
and the dashed
curves correspond to the predictions at the low scale of the model.
The experimental data are from Refs.~\cite{Diefenthaler:2005gx,Airapetian:1999tv}.
\label{fig1}}
\end{figure}
\section{Results for azimuthal SSAs}
The results in Eqs.~(\ref{eq:h1}) are general and can be applied
to any CQM adopting the appropriate nucleon wave function.
In the following we will take the momentum wave-function
from Schlumpf~\cite{Schlumpf:1992ce}.
In Fig.~\ref{fig1} we show the results for the single spin asymmetries
(SSAs) with unpolarized (U) beam
and transversely (T) polarized proton target in SIDIS of positive (upper panels) and negative (lower panels) pions.
The asymmetries $A_{UT}^{\sin(\phi+\phi_S)}$ (left column),
$A_{UT}^{\sin(3\phi-\phi_S)}$ (middle column),
and
$A_{UL}^{\sin(2\phi)}$ (right column) are due to the Collins function and to the three chirally-odd TMDs
$h_1$, $h_{1T}^\perp$, and $h_{1L}^\perp$, respectively.
For the Collins function we use the results extracted in \cite{Efremov:2006qm}.
In the denominator of the asymmetries we take $f_1$ from~\cite{Gluck:1998xa}
and the unpolarized FF from~\cite{Kretzer:2000yf}, both valid at the scale $Q^2=2.5$ GeV$^2$.
The model results for $h_1$ evolved from the low hadronic scale
of the model to $Q^2=2.5$ GeV$^2$ describe well the HERMES
data~\cite{Diefenthaler:2005gx} for
$A_{UT}^{\sin(\phi+\phi_S)}$.
This is in line with the favourable comparison
between our model predictions~\cite{Pasquini:2005dk} and the phenomenological extraction
of the transversity and the tensor charges in Ref.~\cite{Anselmino:2007fs}.
In the case of $A_{UL}^{\sin(2\phi)}$ and $A_{UT}^{\sin(3\phi-\phi_S)}$ we compare the results obtained using the TMDs
at the scale of the model (dashed curves) and the TMDs evolved
at leading order to $Q^2=2.5$ GeV$^2$ (solid curves)
using the evolution pattern of the transversity.
Although this is not the correct evolution pattern,
it may give us a rough insight into the possible size of effects
due to evolution
(for a more detailed discussion we refer to~\cite{BEPS09}).
In the case of $A_{UT}^{\sin(3\phi-\phi_S)}$, the evolution effects give
smaller asymmetries in absolute value and shift the peak at lower $x$ values.
Measurements in the range $0.1\lesssim x \lesssim 0.6$ are planned with the CLAS 12 GeV upgrade~\cite{Avakian-LOI-CLAS12}
and will be able to discriminate between the two scenarios.
In the region $x\lesssim 0.2$, there also exist preliminary
deuteron-target data~\cite{Kotzinian:2007uv}
which are compatible, within error bars,
with the model predictions both at the hadronic and the evolved scale.
Similar conclusions can also be drawn in the case of $A_{UL}^{\sin(2\phi)}$,
where we compare our results with HERMES data~\cite{Airapetian:1999tv}.
\bibliographystyle{aipproc}
\section{Introduction}
Regions of the interstellar medium that are (partly) ionized play an important role in a number of effects
\corr{such as pulse dispersion and scattering, and Faraday rotation. }
Additionally, ionized parts of the interstellar medium emit radiation through free-free emission and $\mathrm{H}_\alpha$ emission. The magnitude of these effects depends on the distribution of free electrons, the \textit{free electron density}. It is therefore of great interest to model or reconstruct the free electron density as accurately as possible.
Reconstruction and modeling of the Milky Way has been an ongoing topic of research for many years. The free electron density has been modeled by \cite{Taylor-1993}, \cite{cordes-2002}, and \cite{Gaensler-2008} among others. For a comparison and discussion of various existing models see \cite{Schnitzeler-2012} and for a review of the mapping of HI regions see \cite{Kalberla-2009}. The interstellar magnetic \corr{field} has been modeled by \cite{Sun-2008,Sun-2010} and \cite{Jansson-2012,Jansson-2012b}. The dust distribution has been modeled by e.g.~\cite{Berry-2012} and even \corr{nonparametric} tomography has been performed by \cite{Lallement-2014} and \cite{Sale-2014}.
\corr{We plan to use the dispersion measures ($D\!M$) of pulsar signals together with accurate pulsar distances to map the distribution of ionized gas in the Milky Way.} The dispersion measure is defined as the line of sight integral over the free electron density between the observer and the pulsar,
\begin{equation}
D\!M = \int\limits_{\mathrm{pulsar}}^{\mathrm{observer}}\!\!\mathrm{d}r\, n_\mathrm{e},
\end{equation}
where $n_\mathrm{e}$ is the three-dimensional free electron density. $D\!M$ can be estimated by measuring the arrival time of a pulse at different frequencies, since the time delay is proportional to $D\!M/\nu^2$. While there is a vast number of known dispersion measures very few of them are complemented by an independent distance estimate.
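As a numerical aside (all numbers hypothetical, not taken from the paper), the line-of-sight integral defining $D\!M$ can be approximated by a midpoint-rule quadrature and checked against the analytic result for a toy exponential density:

```python
# Minimal sketch: DM as a numerical line-of-sight integral of a toy
# free electron density.  The exponential profile n0*exp(-r/h) and all
# parameter values are illustrative assumptions, not the paper's model.
import math

def dispersion_measure(n_e, d, n_steps=10000):
    """Midpoint-rule approximation of DM = int_0^d dr n_e(r)."""
    dr = d / n_steps
    return sum(n_e((i + 0.5) * dr) for i in range(n_steps)) * dr

n0, h = 0.03, 1.0                     # cm^-3 and kpc (hypothetical)
n_e = lambda r: n0 * math.exp(-r / h)

d = 5.0                               # pulsar distance in kpc
dm = dispersion_measure(n_e, d)       # in kpc cm^-3
analytic = n0 * h * (1.0 - math.exp(-d / h))
```

Since $D\!M$ is conventionally quoted in $\mathrm{pc\,cm^{-3}}$, a result in $\mathrm{kpc\,cm^{-3}}$ would be multiplied by $10^3$.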
The NE2001 model by \cite{cordes-2002} is currently the most popular model for the free electron density of the Milky Way. \corr{It uses 1143 $D\!M$ measurements of which 112 were complemented by distance estimates of varying quality. Additionally it uses 269 pulsar scattering measurements, which only provide very indirect distance constraints.}
In this paper, we \corr{perform nonparametric tomography of a simulation of }the Galactic free electron density from pulsar dispersion measures complemented by independent distance estimates. \corr{By nonparametric tomography we mean a reconstruction with a virtually infinite\footnote{
In numerical practice, the number of degrees of freedom is the number of pixels used. However, the reconstruction will be resolution independent once the resolution is high enough.}
number of degrees of freedom using a close to minimal set of prior assumptions that only resolves structures which are supported by the data.
Our assumptions are that the electron density is positive and spatially correlated and that the large-scale electron distribution only shows a variation with distance from the Galactic Centre and height above the Galactic Plane. Both the correlation structure and the scaling behavior have to be inferred from the data. As a consequence, our reconstruction is focused on the large ($\mathrm{kpc}$) scales of the Galactic} free electron density. \corr{Small-scale structures such as HII regions and supernova remnants as well as spiral arms are only recovered if they are sufficiently probed and constrained by the data.}
\corr{Our tomography algorithm is derived from first principles in a Bayesian setting. This has the advantage that all assumptions are clearly stated as priors. Additionally, it allows us to provide uncertainty maps of our reconstructions, which are important for any subsequent scientific analysis.}
To get a meaningful map with minimal assumptions, one of course needs a data set of \corr{high} quality. Currently, there are \corr{around} 100 pulsars known with reliable (independent) distance estimates. This only allows for a \corr{nonparametric} reconstruction of the largest features in the Milky Way. \corr{New measurements with the Very Long Baseline Array will soon double the number of pulsars with accurate distances (see \cite{Deller-2011}).} However, with the planned Square Kilometer Array radio interferometer \corr{(SKA)} the number of pulsars with parallax distance estimates might increase to around 10000 (see \cite{Smits-2011}).
In this paper we therefore investigate the feasibility of \corr{nonparametric} tomography of the free electron density and demonstrate the performance of our algorithm by applying it to mock data sets similar to what the SKA might deliver. To that end, we create \corr{four} Galaxy models from the NE2001 code by \cite{cordes-2002} with varying degrees of fluctuations \corr{and contrast} as well as observational mock data sets for up to 10000 pulsars with distance estimates of varying quality and apply our algorithm to these data sets.
The remainder of this paper is \corr{structured} as follows: First, we \corr{derive} our tomography algorithm in Sec.~\ref{sec:algorithm}, explaining our notation, our underlying assumptions as well as all probability density functions involved. Second, we explain our Galaxy models and mock observations in detail in Sec.~\ref{sec:simulation}. In Sec.~\ref{sec:reconstructions}, we compare the electron density distributions reconstructed from mock observations with those from the Galaxy models used to produce them. We summarize our discussion in Sec.~\ref{sec:discussion}.
\section{Reconstruction algorithm}
\label{sec:algorithm}
The reconstruction algorithm applied in this work was derived within the framework of \textit{information field theory} introduced by \cite{IFT-2009}. We also follow -- for most parts -- the notation used by them.
To reconstruct the Galactic free electron density from pulsar dispersion measurements we use a very similar filter formalism \corr{to} the one presented by \cite{Junklewitz-2013}, which in turn is based on the critical filter formalism developed by \cite{Gibbs-2010}, \cite{Crit-2011}, and refined by \cite{niels-smoothness}.
\subsection{Signal model}
In the inference formalism we aim to reconstruct the free electron density field $\rho$, a three-dimensional scalar field. We assume it is related to the observed dispersion measure data $D\!M$ by a linear measurement equation subject to additive and signal independent measurement noise,
\begin{equation}
D\!M = R\rho + n,
\label{eq:data_model}
\end{equation}
where $n$ is the measurement noise and $R\rho$ is the application of the linear response operator $R$ on the field $\rho$,
\begin{equation}
\left(R\rho\right)_i \equiv \int\!\!\mathrm{d}^3x\ R(i,\vec{x})\, \rho(\vec{x}).
\end{equation}
The response operator $R$ describes line-of-sight integrals through the density. It can be defined as
\begin{equation}
R(i,\vec{x}) = \int\limits_{0}^{\left|\vec{d}_i\right|}\!\!\mathrm{d}r\ \delta\!\left( \vec{x} - r\boldsymbol{\hat{d}}_i \right),
\end{equation}
where $\vec{d}_i$ is the position of pulsar $i$ in a coordinate system centered on the Sun, $\delta(\cdot)$ is the three-dimensional Dirac delta distribution, and $\boldsymbol{\hat{d}}_i := \vec{d}_i/|\vec{d}_i|$.
Formally, the free electron density is a continuous field. In practice, we reconstruct a discretized version of this field, e.g.\ a three-dimensional map with some pixel size. One can think of the discretized density field as a vector of dimension $N_{\mathrm{pix}}$ with each component containing the field value in a specific pixel. The \corr{dispersion data} $D\!M$ and the noise $n$ can be regarded as vectors of dimension $N_{\mathrm{data}}$, where each component of $D\!M$ contains a specific measurement result and the corresponding component of $n$ the noise contribution to it. Thus, the response operator becomes a matrix with $N_{\mathrm{pix}}$ columns and $N_{\mathrm{data}}$ rows.
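The discretization can be sketched as follows; the box size, 2D geometry, nearest-grid-point sampling, step size, and pulsar positions are illustrative assumptions rather than the authors' implementation:

```python
# Illustrative sketch: building rows of the discretized response matrix
# R on a 2D grid with the Sun at the origin.  Each row integrates from
# the observer to one pulsar by stepping along the ray and accumulating
# the step length dr into the pixel the step falls in.
import math

L, n_pix = 10.0, 64                   # half box size (kpc), pixels/axis
dx = 2 * L / n_pix

def pixel_index(x, y):
    ix = min(int((x + L) / dx), n_pix - 1)
    iy = min(int((y + L) / dx), n_pix - 1)
    return ix * n_pix + iy

def response_row(theta, d, dr=0.01):
    """One row of R: line-of-sight weights for a pulsar at (theta, d)."""
    row = [0.0] * (n_pix * n_pix)
    r = 0.5 * dr
    while r < d:
        row[pixel_index(r * math.cos(theta), r * math.sin(theta))] += dr
        r += dr
    return row

pulsars = [(0.3, 4.0), (2.1, 7.5), (4.0, 2.0)]   # hypothetical (angle, d)
R = [response_row(theta, d) for theta, d in pulsars]

rho = [1.0] * (n_pix * n_pix)                    # uniform toy density
dms = [sum(w * v for w, v in zip(row, rho)) for row in R]
# for a uniform unit density each DM equals the path length d
```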
\corr{We parametrize the density as
\begin{equation}
\rho(\vec{x}) = \Delta(\vec{x}) \tilde{\rho}(\vec{x}),
\end{equation}
where $\Delta$ is the Galactic profile field, which describes the disk shape of the Milky Way.
All deviations from the Galactic profile are described by $\tilde{\rho}$ for which we assume no distinguished direction or position \textit{a~priori}.
To ensure positivity of the density these fields are in turn parametrized as
\begin{equation}
\begin{split}
\Delta(x,y,z) & = \exp\!\left( \alpha\!\left(\sqrt{x^2+y^2}\right) + \beta\left(|z|\right) \right),\\
\tilde{\rho}(x,y,z) & = \exp(s(x,y,z)).
\label{eq:parametrization}
\end{split}
\end{equation}
Thus, $\Delta$ can only represent the vertical and radial scaling behavior of the density and has the degrees of freedom of two one-dimensional functions. On the other hand, $\tilde{\rho}$ retains all degrees of freedom of a three-dimensional field and can represent arbitrary structures. Both, $\Delta$ and $\tilde{\rho}$ are unknown \textit{a~priori} and will} be inferred from the data.
\corr{We summarize our modeling in Fig.~\ref{fig:model_diagram}. The logarithmic density $\rho$ is parametrized by three additive components, one 3D field and two 1D fields. As we outline in Secs.~\ref{sec:prior}~and~\ref{sec:profile}
all three fields are assumed to follow Gaussian statistics \textit{a priori}. For the 1D fields a specific correlation structure is assumed while the correlation structure of the 3D field is unknown, but assumed to be homogeneous and isotropic. Therefore, our modeling prefers smooth structures, fluctuations that scale with the density, and exponential scaling in radial and vertical directions.
Of course, this is a strong simplification of the Galaxy, where the behaviour of the fluctuations can depend on, e.g., the phase of the interstellar medium or the position within the Galaxy.
However, all of these properties can be recovered if the data demand it, since all degrees of freedom are retained. They are just not part of the prior knowledge entering our inference.}
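A minimal sketch of this parametrization with hypothetical scale lengths: linear $\alpha$ and $\beta$ (the behavior preferred by the profile prior of Sec.~\ref{sec:profile}) give an exponential disk, positivity is automatic, and the fluctuation field $s$ enters multiplicatively:

```python
# Sketch of rho = exp(s + alpha(r) + beta(|z|)).  The scale lengths and
# midplane density are illustrative assumptions, not fitted values.
import math

log_n0  = math.log(0.03)   # hypothetical midplane log-density at R = 0
R_scale = 15.0             # hypothetical radial scale length (kpc)
z_scale = 1.0              # hypothetical vertical scale height (kpc)

alpha = lambda r: log_n0 - r / R_scale
beta  = lambda z: -abs(z) / z_scale

def rho(x, y, z, s=0.0):
    r = math.hypot(x, y)
    return math.exp(s + alpha(r) + beta(z))

# positivity holds for any s; one vertical scale height suppresses the
# density by a factor e, and s shifts log(rho) additively
```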
\begin{figure}
\begin{overpic}[width=0.5\textwidth]{flowchart.pdf}
\put(36,86){\small{\textbf{parametrization}}}
\put(13,80){\small{$\ln \rho(x,y,z) = s(x,y,z) + \alpha(\sqrt{x^2\!+\!y^2}) + \beta(|z|)$}}
\put(25.5,68.5){\small{3D field}}
\put(57,68.5){\small{1D fields}}
\put(39,55){\small{\textbf{assumptions}}}
\put(2,42){\parbox{0.5\linewidth}{\centering
$s$ is Gaussian with position\\
and orientation independent\\
correlation structure}}
\put(51,42){\parbox{0.5\linewidth}{\centering
$\alpha$ and $\beta$ are Gaussian\\
in their second derivatives}}
\put(36,22){\small{\textbf{implied preferences}}}
\put(0,10){\parbox{0.25\linewidth}{\centering
fluctuations\\
scale with\\
magnitude}}
\put(25,10){\parbox{0.35\linewidth}{\centering
smooth structures\\
without\\
sharp edges}}
\put(62,10){\parbox{0.35\linewidth}{\centering
exponential\\
scaling behavior\\
in $\sqrt{x^2\!+\!y^2}$ and $|z|$}}
\end{overpic}
\caption{A diagram outlining the structure of our modeling.
}
\label{fig:model_diagram}
\end{figure}
\subsection{Necessary probability density functions}
Our goal is to derive an algorithm that yields an estimate of the logarithm of the Galactic free electron density. Hence, we construct the posterior probability density function (PDF) $\mathcal{P}(s|\mathrm{data})$, which is the PDF for the signal given the data set $\{D\!M,\vec{d}_\mathrm{obs(erved)}\}$, using Bayes' theorem,
\begin{equation}
\mathcal{P}(s|\mathrm{data}) = \frac{\mathcal{P}(s,D\!M|\vec{d}_\mathrm{obs})}{\mathcal{P}(D\!M|\vec{d}_\mathrm{obs})} = \frac{\mathcal{P}(s|\vec{d}_\mathrm{obs}) \mathcal{P}(D\!M|s,\vec{d}_\mathrm{obs})}{\mathcal{P}(D\!M|\vec{d}_\mathrm{obs})}.
\label{eq:Bayes}
\end{equation}
On the \corr{right-hand side}, we have three PDFs: the prior $\mathcal{P}(s|\vec{d}_\mathrm{obs}) = \mathcal{P}(s)$, the likelihood $\mathcal{P}(D\!M|s,\vec{d}_\mathrm{obs})$, and the evidence $\mathcal{P}(D\!M|\vec{d}_\mathrm{obs})$.
The evidence is independent from the signal and therefore automatically determined by the normalization of the posterior.
The prior and the likelihood will be addressed in the following sections. \corr{For notational convenience we will drop the dependence on the observed pulsar positions $\vec{d}_\mathrm{obs}$ throughout the rest of this paper.}
\corr{Throughout this section we will assume the Galactic profile field to be given. We will address its inference in Sec.~\ref{sec:profile}.}
\subsubsection{The likelihood}
\label{sec:likelihood}
The likelihood $\mathcal{P}(D\!M|s)$ is the PDF that an observation yields \corr{dispersion measures} $D\!M$ assuming a specific realization of the underlying signal field $s$. If both the noise $n$ and the pulsar distances $d_i \equiv |\vec{d}_i|$ were known, the relation between the dispersion measure data and signal would be deterministic,
\begin{equation}
\mathcal{P}(D\!M|s,n,d) = \delta(D\!M-R\rho-n),
\end{equation}
with $\rho(\vec{x}) = \Delta(\vec{x}) \mathrm{e}^{s(\vec{x})}$.
We do not know the realization of the noise, nor do we aim to reconstruct it. It is assumed to follow Gaussian statistics with zero mean and known covariance structure\footnote{We denote expectation values with respect to the underlying PDF as $\left\langle f(x) \right\rangle_{\mathcal{P}(x)} := \int\!\mathcal{D}x\, f(x)\, \mathcal{P}(x)$.},
\begin{equation}
\left\langle n_i n_j \right\rangle_{\mathcal{P}(n)} = N_{ij} = \delta_{ij} \sigma_i^2,
\end{equation}
where $\sigma_i$ is the \corr{root mean square error} of the observation $i$ and we assumed independent measurements.
Distance information is usually given in the form of parallaxes from which distance estimates can be derived. As all observables these are subject to uncertainties which is why the information about the distances of the pulsars is described by a PDF\footnote{
We assume here that the distance PDF is correctly derived from the parallax PDF taking Lutz-Kelker bias into account (see \cite{Verbiest-2010}).
}, $\mathcal{P}(d) \equiv \mathcal{P}(d|\mathrm{parallaxes})$, which can be non-Gaussian. Since we are doing inference on $s$, we need the noise and distance\footnote{Technically, we also need to marginalize over the position on the sky (i.e. the direction of the line of sight). But since the angular error of the pulsar position is small compared to the error in distance, we can neglect it and treat the direction as an exact value.} marginalized likelihood
\begin{equation}
\mathcal{P}(D\!M|s) = \int\!\!\mathcal{D}n\mathcal{D}d\ \mathcal{P}(D\!M|s,n,d) \mathcal{P}(n) \mathcal{P}(d),
\label{eq:exact-likelihood}
\end{equation}
where we assumed $n$ and $d$ to be independent from $s$ and each other. \corr{The symbols $\mathcal{D}n$ and $\mathcal{D}d$ denote integration over the full phase space of $n$ and $d$, i.e.~the space of all possible configurations ($\mathcal{D}n \equiv \Pi_i \mathrm{d}n_i$).}
Integration over $n$ in Eq.\ \eqref{eq:exact-likelihood} is trivial and yields
\begin{equation}
\mathcal{P}(D\!M|s) = \int\!\!\mathcal{D}d\ \mathcal{G}(D\!M-R\rho,N) \mathcal{P}(d),
\end{equation}
\corr{where $\mathcal{G}$ indicates a Gaussian PDF, $\mathcal{G}(x,X) := |2\pi X|^{-\frac{1}{2}} \mathrm{e}^{-\frac{1}{2} x^\dagger X^{-1} x}$. }
Integration over $d$, however, cannot be done analytically, but one can approximate the marginalized likelihood by a Gaussian characterized by its first two moments in $D\!M$. The first moment is
\begin{equation}
\left\langle D\!M \right\rangle_{\mathcal{P}(D\!M|s)} = \tilde{R}\rho,
\end{equation}
with
\begin{equation}
\tilde{R}_i(\vec{x}) = \left\langle R_i(\vec{x}) \right\rangle_{\mathcal{P}(d)} = \int\limits_{0}^{\infty}\!\!\mathrm{d}r\ \delta\!\left( \vec{x} - r\boldsymbol{\hat{d}}_i \right)\, P[d_i>r],
\end{equation}
where $P[d_i>r]$ is the probability that the pulsar distance $d_i$ is larger than $r$.
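As an illustration, the following sketch discretizes one row of $\tilde{R}$ along its line of sight, assuming (purely for this example) a Gaussian distance estimate; since $\int_0^\infty \mathrm{d}r\, P[d>r] = \langle d \rangle$ for a nonnegative distance, the weights provide a simple consistency check:

```python
# Sketch of the effective response: the hard cutoff at the pulsar
# distance is replaced by the survival probability P[d > r].  The
# Gaussian distance PDF is an assumption for illustration only; the
# formalism allows any distance PDF.
import math

def survival(r, mu, sigma):
    """P[d > r] for a distance estimate d ~ N(mu, sigma^2)."""
    if sigma == 0.0:
        return 1.0 if r < mu else 0.0
    return 0.5 * math.erfc((r - mu) / (sigma * math.sqrt(2.0)))

def effective_weights(mu, sigma, r_max=20.0, dr=0.001):
    """Discretized line-of-sight weights dr * P[d > r_j] of R-tilde."""
    return [dr * survival((j + 0.5) * dr, mu, sigma)
            for j in range(int(r_max / dr))]

w_fuzzy = effective_weights(5.0, 1.0)   # 20% distance uncertainty
w_sharp = effective_weights(5.0, 0.0)   # perfectly known distance
# both weight sets integrate to the mean distance (here 5 kpc, since
# mu >> sigma makes the d < 0 tail negligible)
```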
The second moment is
\begin{equation}
\left\langle D\!M\,D\!M^\dagger \right\rangle_{\mathcal{P}(D\!M|s)} = N+\left\langle \left(R\rho\right)\left(R\rho\right)^\dagger \right\rangle_{\mathcal{P}(d)}.
\end{equation}
For non-diagonal elements the second term on the right hand side decouples,
\begin{equation}
\begin{split}
\left\langle \left(R\rho\right)_i\left(R\rho\right)_j \right\rangle_{\mathcal{P}(d)} & = \left\langle \left(R\rho\right)_{i\!\!\phantom{j}} \right\rangle_{\mathcal{P}(d)} \left\langle\left(R\rho\right)_j \right\rangle_{\mathcal{P}(d)}\\
& = \left( \tilde{R}\rho \right)_i\left( \tilde{R}\rho \right)_j \quad \mathrm{for}\quad i\neq j.
\end{split}
\end{equation}
Diagonal elements yield
\begin{equation}
\begin{split}
\left\langle \left(R\rho\right)_i\left(R\rho\right)_i \right\rangle_{\mathcal{P}(d)} = & \int\limits_{\mathbb{R}^3}\!\!\mathrm{d}^3x\int\limits_{\mathbb{R}^3}\!\!\mathrm{d}^3y \ \rho(\vec{x})\rho(\vec{y})\times \\ & \left\langle R_i(\vec{x}) R_i(\vec{y}) \right\rangle_{\mathcal{P}(d)},
\end{split}
\end{equation}
with
\begin{equation}
\begin{split}
\left\langle R_i(\vec{x}) R_i(\vec{y}) \right\rangle_{\mathcal{P}(d)} = & \int\limits_{0}^{\infty}\mathrm{d}r\int\limits_{0}^{\infty}\mathrm{d}r'\ \delta(\vec{x}-r\boldsymbol{\hat{d}}_i) \delta(\vec{y}-r'\boldsymbol{\hat{d}}_i) \times \\ & P[d_i>\max(r,r')].
\end{split}
\end{equation}
Using these first two moments, we can approximate\footnote{This corresponds to characterizing the likelihood by its cumulants and setting all but the first two cumulants to zero.} the likelihood $\mathcal{P}(D\!M|s)$ by a Gaussian $\mathcal{G}(D\!M-\tilde{R}\rho,\tilde{N})$ with
\begin{equation}
\tilde{N}_{ii} = N_{ii} + \rho^{\dagger} F^{(i)} \rho,
\label{eq:effective_noise}
\end{equation}
where\footnote{In this work we abbreviate \mbox{$\xi^\dagger \zeta := \int\!\mathrm{d}^3x\, \xi^*(\vec{x})\,\zeta(\vec{x})$} and \mbox{$\Xi\, \xi := \int\!\mathrm{d}^3y\, \Xi(\vec{x},\vec{y})\,\xi(\vec{y})$} for continuous quantities.}
\begin{equation}
\begin{split}
F^{(i)}(\vec{x},\vec{y}) & := \left\langle R_i(\vec{x})R_i(\vec{y})\right\rangle _{\mathcal{P}(d_i)} - \tilde{R}_i(\vec{x})\tilde{R}_i(\vec{y}) \\
&\ = \int\limits_{0}^{\infty}\mathrm{d}r\int\limits_{0}^{\infty}\mathrm{d}r'\ \delta(\vec{x}-r\boldsymbol{\hat{d}}_i) \delta(\vec{y}-r'\boldsymbol{\hat{d}}_i) \times \\ & \quad \ \ P[d_i>\max(r,r')]P[d_i<\min(r,r')].
\end{split}
\end{equation}
The noise covariance matrix of this effective likelihood is signal dependent, which increases the complexity of the reconstruction problem.
Therefore, we approximate the density in Eq.\ \eqref{eq:effective_noise} by its posterior mean,
\begin{equation}
\rho^{\dagger} F^{(i)} \rho = \mathrm{tr}\left( \rho \rho^\dagger F^{(i)} \right) \approx \mathrm{tr}\left( \left\langle\rho\right\rangle_{\mathcal{P}(\rho|D\!M)} \left\langle\rho\right\rangle_{\mathcal{P}(\rho|D\!M)}^\dagger F^{(i)} \right).
\label{eq:noise_addition}
\end{equation}
Since $\left\langle\rho \right\rangle_{\mathcal{P}(\rho|D\!M)}$ depends on $\tilde{N}$ this yields a set of equations that need to be solved self-consistently (see Sec.~\ref{sec:filter_equations}).
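The structure of $F^{(i)}$ can be verified in a fully discrete toy setting (random toy density and distance PDF below): with pixels of size $\mathrm{d}r$ along the line of sight, the quadratic form $\rho^\dagger F^{(i)} \rho$ built from $P[d>\max]\,P[d<\min]$ reproduces the variance of $D\!M$ over the unknown distance exactly:

```python
# Discrete check of the F^{(i)} identity.  The pulsar sits in pixel k
# (distance (k+1)*dr) with probability p[k]; pixel j is traversed iff
# k >= j.  Then Var[DM] over the distance equals the quadratic form
# with F_{jj'} = dr^2 * P[k >= max(j,j')] * (1 - P[k >= min(j,j')]).
import random

random.seed(3)
n, dr = 20, 0.5
rho = [random.uniform(0.01, 0.05) for _ in range(n)]   # toy density
p = [random.random() for _ in range(n)]                # toy distance PDF
norm = sum(p)
p = [pi / norm for pi in p]

P_gt = [sum(p[j:]) for j in range(n)]   # P[pixel j is traversed]

def dm(k):                              # DM if the pulsar sits in pixel k
    return dr * sum(rho[:k + 1])

mean_dm = sum(p[k] * dm(k) for k in range(n))
var_dm = sum(p[k] * dm(k) ** 2 for k in range(n)) - mean_dm ** 2

mean_formula = dr * sum(rho[j] * P_gt[j] for j in range(n))
var_formula = sum(rho[j] * rho[jp] * dr * dr
                  * P_gt[max(j, jp)] * (1.0 - P_gt[min(j, jp)])
                  for j in range(n) for jp in range(n))
```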
\subsubsection{The priors}
\label{sec:prior}
The signal field $s$ is unknown \textit{a~priori}, but we assume that it has some correlation structure. We describe this correlation structure by moments up to second order in $s$. The principle of maximum entropy therefore \corr{requires} that our prior probability distribution has a Gaussian form,
\begin{equation}
\mathcal{P}(s|S) = \mathcal{G}(s,S) := \left| 2\pi S \right|^{-\frac{1}{2}} \exp\!\left( -\frac{1}{2} s^\dagger S^{-1} s \right),
\end{equation}
with some unknown correlation structure,
\begin{equation}
S(\vec{x},\vec{y}) = \left\langle s(\vec{x}) s(\vec{y}) \right\rangle_{\mathcal{P}(s)}.
\end{equation}
The first moment of $s$ is set to zero, since any nonzero mean can be absorbed into $\Delta(\vec{x})$; the \textit{a~priori} mean of $s$ is thus contained in $\Delta(\vec{x})$.
\textit{A~priori}, our algorithm has no \corr{preferred} direction or position for $s$. This reduces the number of degrees of freedom of the correlation structure $S$. It is fully described by a power spectrum $p(k)$,
\begin{equation}
S(\vec{x},\vec{y}) = \sum\limits_k\,S^{(k)}(\vec{x},\vec{y})\, p(k),
\end{equation}
where $S^{(k)}$ is the projection operator onto the spectral band $k$ with its Fourier transform defined as
\begin{equation}
S^{(k)}(\vec{q},\vec{q'}) = (2 \pi)^3 \delta(\vec{q} - \vec{q'}) \mathbb{1}_k\!\left(|\vec{q}|\right),
\end{equation}
with
\begin{equation}
\mathbb{1}_k\!\left(|\vec{q}|\right) = \begin{cases}
1 & \mathrm{for}\ \ |\vec{q}|=k \\
0 & \mathrm{otherwise}
\end{cases}.
\end{equation}
The power spectrum $p(k)$, however, is still unknown. The prior for the power spectrum is constructed out of two parts: first, an inverse Gamma distribution $\mathcal{I}(p(k);\alpha_k,q_k)$ for each $k$-bin (see Appendix~\ref{app:parameters}), which is a conjugate prior for a Gaussian PDF; second, a Gaussian cost function that punishes deviations from power-law spectra (see \cite{niels-smoothness}),
\begin{equation}
\mathcal{P}(p) \propto \left\{\prod_k \mathcal{I}(p(k);\alpha_k,q_k)\right\} \exp\!\left( -\frac{1}{2} (\log p)^\dagger T (\log p) \right).
\end{equation}
$T$ is an operator that fulfills
\begin{equation}
(\log p)^\dagger T (\log p) = \frac{1}{\sigma_p^2} \int\!\!\mathrm{d}(\log k) \left(\frac{\partial^2\log p(k)}{\partial (\log k)^2} \right)^2,
\label{eq:smoothness-prior}
\end{equation}
and $\sigma_p$ is a parameter that dictates how smooth the power spectrum is expected to be. In \corr{our paper} $\log$ refers to the natural logarithm.
We explain our choice of the parameters $\alpha_k$, $q_k$, and $\sigma_p$ in Appendix~\ref{app:parameters}.
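Numerically, the penalty of Eq.~\eqref{eq:smoothness-prior} can be approximated by second differences on a logarithmic $k$-grid; as intended, a pure power law is not penalized while a curved spectrum is (the spectra below are illustrative):

```python
# Sketch of the smoothness penalty: squared second derivative of
# log p with respect to log k, integrated over log k.  The grid and
# test spectra are illustrative choices.
import math

def smoothness_penalty(p, k, sigma_p=1.0):
    """(1/sigma_p^2) * integral of squared second log-log differences."""
    lk = [math.log(ki) for ki in k]
    lp = [math.log(pi) for pi in p]
    h = lk[1] - lk[0]                  # uniform log-k spacing assumed
    total = 0.0
    for i in range(1, len(p) - 1):
        d2 = (lp[i - 1] - 2 * lp[i] + lp[i + 1]) / h ** 2
        total += d2 ** 2 * h           # midpoint approximation
    return total / sigma_p ** 2

k = [2.0 ** j for j in range(1, 11)]   # log-uniform grid
power_law = [ki ** -2.5 for ki in k]
curved = [ki ** -2.5 * math.exp(0.3 * math.log(ki) ** 2) for ki in k]
```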
\subsubsection{The power spectrum posterior}
\label{sec:p_prior}
With the signal and power spectrum priors, we can derive a posterior for the power spectrum,
\begin{equation}
\begin{split}
\mathcal{P}(p|D\!M) & \propto \int\!\!\mathcal{D}s\ \mathcal{P}(D\!M|s,p)\,\mathcal{P}(s|p)\,\mathcal{P}(p)\\
& = \int\!\!\mathcal{D}s\ \mathcal{P}(D\!M|s)\,\mathcal{G}(s,S)\,\mathcal{P}(p).
\end{split}
\end{equation}
We calculate the integral using a saddle point approximation up to second order around the maximum for the $s$-dependent part,
\begin{equation}
\begin{split}
\mathcal{P}(D\!M|s)\,\mathcal{G}(s,S) & \approx \mathcal{P}(D\!M|m)\,\mathcal{G}(m,S)\,\mathrm{e}^{-\frac{1}{2}(s-m)^\dagger D^{-1}(s-m)}\\
& \propto \mathcal{G}(m,S)\,\mathrm{e}^{-\frac{1}{2}(s-m)^\dagger D^{-1}(s-m)}
\end{split}
\end{equation}
where $m$ and $D$ are defined as $m^{(s)}$ and $D^{(s)}$ in Sec.\ \ref{sec:posterior} and only $s$ and $p$-dependent factors are kept after the proportionality sign.
With this approximation we arrive at
\begin{equation}
\mathcal{P}(p|D\!M) \propto \left| 2\pi D \right|^{\frac{1}{2}}\,\left| 2\pi S \right|^{-\frac{1}{2}}\,\mathrm{e}^{-\frac{1}{2}m^\dagger S^{-1} m}\, \mathcal{P}(p).
\end{equation}
Maximizing this PDF with respect to $\log(p)$ (see \cite{niels-smoothness}) leads to
\begin{equation}
p(k) = \frac{q_k + \frac{1}{2}\mathrm{tr}\!\left(S^{(k)}(mm^\dagger + D) \right)}{\alpha_k - 1 +\frac{1}{2}\varrho_k+(T\log p)_k} ,
\label{eq:powspec_approx}
\end{equation}
where $\varrho_k = \mathrm{tr}\left(S^{(k)}\right)$ is the number of degrees of freedom in the spectral band $k$.
This formula for the power spectrum $p(k)$ should be solved self-consistently, since $m$ and $D$ depend on $p(k)$ as well. Thus we arrive at an iterative scheme, where we look for a fixed point of Eq.\ \eqref{eq:powspec_approx}.
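The fixed-point iteration can be illustrated in a deliberately oversimplified setting: identity response, a single spectral band containing all modes, no smoothness term ($T=0$), and the non-informative limit $\alpha_k = 1$, $q_k = 0$. In this toy limit the fixed point is the classical estimate $p = \langle d_k^2 \rangle - \sigma_n^2$:

```python
# Toy fixed-point iteration for the power spectrum formula; all
# simplifications above are assumptions for illustration, not the
# paper's general setting.
import math, random

random.seed(7)
n_modes, sigma_n, p_true = 64, 0.5, 2.0
signal = [random.gauss(0.0, math.sqrt(p_true)) for _ in range(n_modes)]
data = [s + random.gauss(0.0, sigma_n) for s in signal]

alpha_k, q_k = 1.0, 0.0            # non-informative limit of the prior
p = 1.0                            # initial power spectrum guess
for _ in range(200):
    D_k = 1.0 / (1.0 / p + 1.0 / sigma_n ** 2)     # Wiener variance
    m = [D_k * d / sigma_n ** 2 for d in data]     # Wiener mean
    p_new = (q_k + 0.5 * sum(mi * mi + D_k for mi in m)) \
            / (alpha_k - 1.0 + 0.5 * n_modes)
    if abs(p_new - p) < 1e-13:
        break
    p = p_new
```

Here $m$ and $D_k$ are recomputed from the current $p$ in every step, mirroring the self-consistent scheme of Sec.~\ref{sec:filter_equations}.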
\subsubsection{The signal posterior}
\label{sec:posterior}
The signal posterior can be expressed as
\begin{equation}
\mathcal{P}(s|D\!M) = \int\!\!\mathcal{D}(\log p)\, \mathcal{P}(\log p|D\!M)\, \mathcal{P}(s|p,D\!M),
\end{equation}
where $\mathcal{P}(s|p,D\!M)$ is the signal posterior with a given power spectrum. Instead of calculating the marginalization over $\log p$ we use Eq.~\eqref{eq:powspec_approx} for the power spectrum; i.e., we approximate $\mathcal{P}(\log p|D\!M)$ by a Dirac peak at its maximum. This procedure is known as the Empirical Bayes method.
The signal posterior with a given power spectrum is proportional to the product of the signal prior and the likelihood (see Eq.\ \eqref{eq:Bayes}),
\begin{equation}
\begin{split}
\mathcal{P}(s,D\!M|p,\tilde{N}) \propto & \exp\!\left( -\frac{1}{2} s^\dagger S^{-1} s \right)\times\\ & \exp\!\left( -\frac{1}{2} (D\!M-\tilde{R}\rho)^\dagger \tilde{N}^{-1} (D\!M-\tilde{R}\rho) \right)
\end{split}
\end{equation}
However, as has been demonstrated in Secs.~\ref{sec:p_prior} and \ref{sec:likelihood}, $S$ and $\tilde{N}$ depend on the mean and the covariance of $\mathcal{P}(s|D\!M,p,\tilde{N})$, leading to a circular dependence that needs to be solved self-consistently.
We approximate the mean of the posterior by minimizing the joint Hamiltonian $\mathcal{H}(s,D\!M|p,\tilde{N}) := -\log \mathcal{P}(s,D\!M|p,\tilde{N})$ with respect to $s$,
\begin{equation}
m^{(s)} \approx \underset{s}{\mathrm{arg\,min}}\ \mathcal{H}(s,D\!M|p,\tilde{N}),
\end{equation}
and its covariance by the inverse Hessian at that minimum,
\begin{equation}
D^{(s)} \approx \left(\left. \frac{\delta^2}{\delta s \delta s^\dagger} \mathcal{H}(s,D\!M|p,\tilde{N}) \right|_{s=m}\right)^{-1}.
\label{eq:inverse_Hessian}
\end{equation}
These estimates are the \textit{maximum~a~posteriori} (MAP) estimates of $s$.
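A one-pixel toy version of this MAP estimate (all numbers hypothetical): Newton's method minimizes $\mathcal{H}(s) = s^2/(2p) + (D\!M - R\Delta\mathrm{e}^s)^2/(2\sigma_n^2)$, and the inverse Hessian at the minimum serves as the covariance estimate:

```python
# One-pixel sketch of the MAP estimate and its inverse-Hessian
# covariance.  Prior power, noise level, response, profile, and datum
# are illustrative assumptions.
import math

p, sN = 1.0, 0.05 ** 2             # prior power, noise variance
R, Delta, DM = 4.0, 0.03, 0.15     # response, profile, datum

def grad_hess(s):
    model = R * Delta * math.exp(s)
    g = s / p - (DM - model) * model / sN
    h = 1.0 / p + model * model / sN - (DM - model) * model / sN
    return g, h

s = 0.0
for _ in range(100):               # Newton iteration on the gradient
    g, h = grad_hess(s)
    step = g / h
    s -= step
    if abs(step) < 1e-12:
        break

m_s = s
D_s = 1.0 / grad_hess(m_s)[1]      # inverse Hessian = covariance
m_rho = Delta * math.exp(m_s)      # corresponding density estimate
```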
Consequently $m^{(\rho)}$ and $D^{(\rho)}$ are estimated as
\begin{equation}
m^{(\rho)}(\vec{x}) \approx \Delta(\vec{x})\mathrm{e}^{m^{(s)}(\vec{x})}
\end{equation}
and
\begin{equation}
\begin{split}
D^{(\rho)}(\vec{x},\vec{y}) & \approx \Delta(\vec{x})\mathrm{e}^{m^{(s)}(\vec{x})}\left( \mathrm{e}^{D^{(s)}(\vec{x},\vec{y})} - 1 \right)\mathrm{e}^{m^{(s)}(\vec{y})}\Delta(\vec{y}).\\
\end{split}
\end{equation}
$S$ is then constructed as
\begin{equation}
S(\vec{x},\vec{y}) = \sum\limits_k\,S^{(k)}(\vec{x},\vec{y})\, p(k),
\end{equation}
with $p(k)$ given by Eq.~\eqref{eq:powspec_approx}.
$\tilde{N}$ is constructed using Eqs.~\eqref{eq:effective_noise} and \eqref{eq:noise_addition} as
\begin{equation}
(\tilde{N})_{ij} = (N)_{ij} + \delta_{ij}\, \mathrm{tr}\left( m^{(\rho)} m^{(\rho)\dagger} F^{(i)} \right),
\end{equation}
where $\delta_{ij}$ is the Kronecker delta.
\subsection{Galactic profile inference}
\label{sec:profile}
To infer the Galactic profile field $\Delta$ we introduce $\tilde{s} \equiv s + \log(\Delta) \equiv \log(\rho)$. The Galactic profile is meant to capture the most prominent symmetries of a disk galaxy, namely its rotational symmetry and the scaling \corr{behaviour with radial distance from the Galactic center and} vertical distance from the \corr{Galactic plane.
Using Eq.~\eqref{eq:parametrization} $\mu \equiv \log(\Delta)$ becomes}
\begin{equation}
\mu(x,y,z) = \alpha(r) + \beta(|z|),\quad \mathrm{with} \quad r\equiv\sqrt{x^2+y^2},
\label{eq:profiles}
\end{equation}
where $\alpha$ and $\beta$ are one-dimensional functions describing the average behavior with respect to the radial distance from the Galactic center and the vertical distance from the Galactic plane, respectively.
\corr{Including the shift by $\mu$ from $s$ to $\tilde{s}$} yields the signal prior
\begin{equation}
\mathcal{P}(\tilde{s}|\alpha,\beta) = \mathcal{G}(\tilde{s}-\mu,S).
\end{equation}
We do not want to assume specific functions $\alpha$ and $\beta$ but to infer them. To that end we choose a Gaussian prior,
\begin{equation}
\mathcal{P}(\alpha,\beta) \propto \exp\!\left(- \frac{1}{2 \sigma_\alpha^2} \left(\frac{\partial^2 \alpha}{\partial r^2} \right)^2 - \frac{1}{2 \sigma_\beta^2} \left(\frac{\partial^2 \beta}{\partial |z|^2} \right)^2 \right),
\end{equation}
with the second derivative of $\alpha$ (or $\beta$ respectively) as the argument. This prior prefers linear functions for $\alpha$ and $\beta$ and thus Galactic profile fields with an exponential fall-off (or rise).
To simplify the notation we define $\xi(r,|z|) = \left(\alpha(r),\beta(|z|)\right)^T$ and introduce the linear operators $\Xi$ and $X$, where
\begin{equation}
X\xi = \alpha + \beta \equiv \mu
\label{eq:X_operator}
\end{equation}
and
\begin{equation}
\xi^\dagger \Xi \xi = \frac{1}{\sigma_\alpha^2} \left(\frac{\partial^2 \alpha}{\partial r^2} \right)^2 + \frac{1}{\sigma_\beta^2} \left(\frac{\partial^2 \beta}{\partial |z|^2} \right)^2.
\end{equation}
Now we can write the Hamiltonian of $\xi$ given a specific electron density as
\begin{equation}
\begin{split}
\mathcal{H}(\xi|\tilde{s}) & = \frac{1}{2} \left( \tilde{s} - X\xi\right)^\dagger S^{-1} \left( \tilde{s} - X\xi\right) + \frac{1}{2} \xi^\dagger \Xi \xi + \mathrm{const.} \\
& = \frac{1}{2} \xi^\dagger \left( X^\dagger S^{-1} X + \Xi \right) \xi - \tilde{s}^\dagger S^{-1} X \xi + \mathrm{const.} \\
& \equiv \frac{1}{2} \xi^\dagger D_{(\xi)}^{-1} \xi - j^\dagger_{(\xi)} \xi + \mathrm{const.}
\label{eq:profile_Hamiltonian}
\end{split}
\end{equation}
with $D_{(\xi)} = \left( X^\dagger S^{-1} X + \Xi \right)^{-1}$ and $j_{(\xi)} = X^\dagger S^{-1} \tilde{s}$.
Since this Hamiltonian is a quadratic form in $\xi$, the mean of the corresponding Gaussian PDF is
\begin{equation}
\left\langle \xi \right\rangle_{(\xi|\tilde{s})} = D_{(\xi)} j_{(\xi)}.
\end{equation}
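As a drastically reduced illustration of this estimate (not the nonparametric case treated in the text): restricting $\alpha$ and $\beta$ to linear functions $a\,r$ and $b\,|z|$ with scalar unknowns, taking $S \propto \mathbb{1}$, and dropping $\Xi$, the expression $D_{(\xi)} j_{(\xi)}$ collapses to ordinary least squares on $\log\rho$, which recovers the input slopes exactly for noise-free data:

```python
# Two-parameter toy version of the profile estimate; the grid and
# "true" inverse scale lengths are illustrative assumptions.
import math

pts = [(x, y, z) for x in range(-3, 4) for y in range(-3, 4)
       for z in range(-2, 3)]
a_true, b_true = -1.0 / 15.0, -1.0   # hypothetical slopes of alpha, beta

def log_rho(x, y, z):                # noise-free log-density on the grid
    return a_true * math.hypot(x, y) + b_true * abs(z)

# normal equations (X^T X)(a,b)^T = X^T log_rho for features (r, |z|)
Srr = sum(math.hypot(x, y) ** 2 for x, y, z in pts)
Szz = sum(z * z for x, y, z in pts)
Srz = sum(math.hypot(x, y) * abs(z) for x, y, z in pts)
jr = sum(math.hypot(x, y) * log_rho(x, y, z) for x, y, z in pts)
jz = sum(abs(z) * log_rho(x, y, z) for x, y, z in pts)
det = Srr * Szz - Srz * Srz
a_est = (Szz * jr - Srz * jz) / det
b_est = (Srr * jz - Srz * jr) / det
```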
\subsection{Filter equations}
\label{sec:filter_equations}
Using the posterior estimates presented in the previous section, we arrive at the following iterative scheme to reconstruct the density $\rho$:
\begin{enumerate}
\item Make an initial guess for the power spectrum (e.g.\ some power law) and the additive term in the noise covariance (e.g.\ simple relative error propagation).
\item With the current estimates for $p$ and $\tilde{N}$ the Hamiltonian, \mbox{$\mathcal{H}(s,D\!M|p,\tilde{N}) \equiv -\log\mathcal{P}(s,D\!M|p,\tilde{N}) + \mathrm{const.}$}, is
\begin{equation}
\begin{split}
\mathcal{H}(s,D\!M|p,\tilde{N}) & = \frac{1}{2} s^\dagger \left( \sum\limits_k S^{(k)} p_k^{-1} \right) s \\
&\quad + \frac{1}{2}\left(\mathrm{e}^s * \Delta \right)^\dagger \tilde{R}^\dagger \tilde{N}^{-1}\tilde{R} \left(\mathrm{e}^s * \Delta \right)\\
&\quad - D\!M^\dagger \tilde{N}^{-1} \tilde{R} \left(\mathrm{e}^s * \Delta \right),
\end{split}
\end{equation}
where $*$ denotes point-wise multiplication in position space.
\label{final-filter1}
\item The MAP estimate of this Hamiltonian is calculated as
\begin{equation}
m^{(s)} = \underset{s}{\mathrm{arg\,min}}\ \mathcal{H}(s,D\!M|p,\tilde{N}),
\end{equation}
with the covariance estimate (see Appendix~\ref{app:derivatives})
\begin{equation}
D^{(s)} = \left( \left. \frac{\delta^2}{\delta s \delta s^\dagger} \mathcal{H}(s,D\!M|p,\tilde{N}) \right|_{s=m} \right)^{-1}.
\label{eq:cov_estimate}
\end{equation}
\label{final-filter2}
\item The updated power spectrum is the solution (with respect to $p(k)$) of the equation
\begin{equation}
p(k) = \frac{q_k + \frac{1}{2}\mathrm{tr}\!\left(S^{(k)}(mm^\dagger + D) \right)}{\alpha_k - 1 +\frac{1}{2}\varrho_k+(T\log p)_k}.
\end{equation}
\label{final-filter3}
\item The updated effective noise covariance is calculated as
\begin{equation}
(\tilde{N})_{ii} = (N)_{ii} + \mathrm{tr}\left( m^{(\rho)} m^{(\rho)\dagger} F^{(i)} \right),
\end{equation}
with
\begin{equation}
\begin{split}
m^{(\rho)}(\vec{x}) & = \Delta(\vec{x})\exp\!\left(m^{(s)}(\vec{x})\right).\\
\end{split}
\end{equation}
\label{final-filter4}
\item The updated Galactic profile field is
\begin{equation}
\begin{split}
\Delta & = \exp\!\left(X m^{(\xi)}\right)\qquad \mathrm{with} \\
m^{(\xi)} & = \left( X^\dagger S^{-1} X + \Xi \right)^{-1} X^\dagger S^{-1} \log(m^{(\rho)})
\end{split}
\end{equation}
\item Repeat from step \ref{final-filter1} until convergence is reached.
\end{enumerate}
\vspace{0.2cm}
When the solution of this set of equations has converged, the estimate of the density $\rho$ is
\begin{equation}
\rho(\vec{x}) \approx m^{(\rho)}(\vec{x}) \pm \sigma^{(\rho)}(\vec{x}),
\end{equation}
where the confidence interval $\sigma^{(\rho)}$ is defined as
\begin{equation}
\sigma^{(\rho)}(\vec{x}) := \sqrt{m^{(\rho)}(\vec{x})\left( \mathrm{e}^{D^{(s)}(\vec{x},\vec{x})} - 1 \right)m^{(\rho)}(\vec{x})}.
\label{eq:uncertainty}
\end{equation}
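The final mapping from the Gaussian estimate $(m^{(s)}, D^{(s)})$ to the density estimate $m^{(\rho)}$ and the confidence interval of Eq.~\eqref{eq:uncertainty} can be sketched in a few lines of numpy. This is a toy illustration only; the full algorithm operates on three-dimensional fields with a non-diagonal $D^{(s)}$, of which only the diagonal enters the interval.

```python
import numpy as np

def lognormal_estimate(m_s, D_diag, delta):
    """Map the Gaussian MAP estimate m^(s), its pointwise posterior
    variance D_diag = D^(s)(x, x), and the profile field Delta onto the
    density estimate m^(rho) and the confidence interval sigma^(rho)."""
    m_rho = delta * np.exp(m_s)                      # m^(rho) = Delta * exp(m^(s))
    sigma_rho = m_rho * np.sqrt(np.exp(D_diag) - 1.0)  # Eq. for sigma^(rho)
    return m_rho, sigma_rho
```

For small $D^{(s)}(\vec{x},\vec{x})$ the interval reduces to the familiar Gaussian limit $\sigma^{(\rho)} \approx m^{(\rho)}\sqrt{D^{(s)}(\vec{x},\vec{x})}$.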
\section{Application to simulated data}
\label{sec:simulation}
To test the reconstruction of the Galactic free electron density distribution with the SKA we generate mock data sets of \corr{pulsars with various distance} uncertainties. We simulate pulsar \corr{populations} using the PSRPOPpy package by \cite{Bates-2014}, which is based on the pulsar population model by \cite{Lorimer-2006}. \corr{The generated populations take into account the observational thresholds of the SKA (mid-frequency). These data sets} sample modified versions of the NE2001 model by \cite{cordes-2002} \corr{through dispersion measures.}
\subsection{Galaxy model}
\label{sec:galaxy_model}
We deactivated\footnote{
This is achieved by modifying the ``nelism.inp'', ``neclumpN.NE2001.dat'', and ``nevoidN.NE2001.dat'' files provided with the \corr{NE2001} code.
}
all local ISM components as well as all clumps and voids in the NE2001 model. \corr{We keep the clump in the Galactic center, since it is the only one at a distinguished position.}
We evaluated\footnote{
To get the three-dimensional free electron density from the compiled NE2001 code, we evaluate two positions in each pixel that have parallel line-of-sight vectors. The difference between their dispersion measures divided by the difference of their distances to the Sun is then taken as the free electron density in that pixel.
}
the \corr{resulting free electron density model} in a $512\times512\times64$ pixel grid centered on the Galactic center with a pixel edge length of $75\,\mathrm{pc}$. This means that our model \corr{extends out to} $2400\,\mathrm{pc}$ from the Galactic plane. We assume a density of zero outside of this regime when calculating the dispersion measures.
The resulting density field is very smooth.
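The finite-difference sampling described in the footnote can be sketched as follows. Here \texttt{dm\_func} is a stand-in for a call to the compiled NE2001 code returning the dispersion measure integrated from the Sun to a given position; the name and interface are hypothetical.

```python
import numpy as np

def density_from_dm(dm_func, x_hat, d1, d2):
    """Recover the free electron density between two points on one line
    of sight by finite-differencing dispersion measures:
    n_e ~ (DM(d2) - DM(d1)) / (d2 - d1).
    `x_hat` is the unit line-of-sight vector, d1 < d2 are distances."""
    dm1 = dm_func(d1 * x_hat)
    dm2 = dm_func(d2 * x_hat)
    return (dm2 - dm1) / (d2 - d1)
```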
\corr{We generate three Gaussian random fields which follow a power-law distribution with a spectral index\footnote{
There is no physical reason for that choice, but a power law with this index seems to follow the spectrum of the log-density in the original NE2001 model rather well on large to medium scales.
} of $-4.66$ but have different fluctuation amplitudes. We make sure that the} Sun sits in an underdensity \corr{in these random fields. Then we add these three random field maps to our smooth map of $\log(n_\mathrm{e})$ to create three different modified versions of NE2001.}
In Fig.~\ref{fig:fluct_power} we depict the power spectra of the smooth NE2001 field (without local features, clumps and voids) and the power spectra of the \corr{three modified versions of it.}
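Such a power-law Gaussian random field can be generated by filtering white noise in Fourier space; the sketch below illustrates the idea on a small grid. Normalization, physical units, and the check that the Sun ends up in an underdensity are omitted for brevity.

```python
import numpy as np

def power_law_grf(shape, index=-4.66, seed=0):
    """Draw a Gaussian random field whose power spectrum follows
    P(k) ~ k**index (index = -4.66 as in the text) by multiplying the
    Fourier transform of white noise with sqrt(P(k))."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(shape)
    kgrids = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k = np.sqrt(sum(g**2 for g in kgrids))
    amp = np.zeros_like(k)
    amp[k > 0] = k[k > 0] ** (index / 2.0)   # sqrt of the power spectrum; k=0 mode zeroed
    field = np.fft.ifftn(amp * np.fft.fftn(white)).real
    return field - field.mean()
```

In the text, three such realizations with different amplitudes are added to the smooth $\log(n_\mathrm{e})$ map.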
\begin{figure}
\includegraphics[width=\linewidth]{NE_power_fluct_power2.pdf}
\caption{The power spectrum of the NE2001 field without local features, clumps, and voids compared to the three \corr{unenhanced} Galaxy models. The thick solid line depicts the NE2001 spectrum, the thin dashed lines depict the spectra of the models with strong, medium and weak (from top to bottom) fluctuations. For the calculation of the spectra, the density peak in the Galactic center is masked.}
\label{fig:fluct_power}
\end{figure}
\subsubsection{Contrast enhanced model}
\label{sec:enhanced_contrast}
\corr{
The three Galaxy models we generated from NE2001 have relatively little contrast in the sense that under- and overdense regions differ by relatively moderate factors. For example, the density in the region between the Perseus and the Carina-Sagittarius arm where the Sun is located is only a factor of three lower than in the Perseus arm itself. Since the Perseus arm is less than $1\,\mathrm{kpc}$ in width, any excess dispersion measure due to the arm can also be explained by an underestimated pulsar distance for many lines of sight. In consequence, we expect the reconstruction quality to improve if the input model has higher contrast. Therefore, we prepare one additional model with enhanced contrast. To that end, we take the input model with medium strength fluctuations as described above. We divide out the scaling behavior in radial and vertical directions using the scale heights from NE2001. We square the density and divide it by a constant to ensure that the mean density in the Galactic plane remains unchanged}\footnote{
\corr{The bulge in the Galactic center is kept unchanged by the whole procedure.}
}\corr{. Finally we multiply the resulting density with the scaling functions to restore the original scaling in radial and vertical directions.}
\corr{This procedure yields a Galaxy model sharing the same morphology and scaling behavior as the input model. Averaged over the lines of sight, the dispersion measures are roughly unchanged.
But the contrast is twice as strong, i.e., the previously mentioned factor between the density in the Perseus arm and the inter-arm region is now squared from 3 to 9. We will show a picture of the density in the Galactic plane of this model in Sec.~\ref{sec:midplane_comp}, where we compare it with its reconstruction.}
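The squaring step preserves the mean density if the normalization constant is chosen as $c = \langle\rho^2\rangle/\langle\rho\rangle$; a minimal sketch of this part of the procedure (the radial and vertical scaling functions are assumed to be divided out beforehand):

```python
import numpy as np

def enhance_contrast(rho_plane):
    """Square the scale-height-corrected density and renormalize so the
    mean in the Galactic plane is unchanged; density ratios between
    regions are thereby squared, i.e., contrast doubles in log-space."""
    c = np.mean(rho_plane**2) / np.mean(rho_plane)
    return rho_plane**2 / c
```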
\subsection{Simulated population and survey}
\label{sec:mock_population}
We use the ``SKA'' template in the PSRPOPpy package, but reduce the maximum declination in equatorial coordinates to $50^\circ$ (due to the SKA's position in the Southern Hemisphere, see e.g.~\cite{Smits-2009}). This yields a detected population of roughly 14000 pulsars. Out of these, we take the first 1000, 5000, or 10000 pulsars as our test populations. The population is not ordered in any sense, so the first, e.g., 1000 pulsars represent a random sample from the whole detected population. In reality, Malmquist bias will preferentially select pulsars that lie close to the Sun. We choose, however, a random selection in order to see the effect of the population size on the quality of the reconstruction more clearly. In Fig.~\ref{fig:sky_10000} we depict the population of 10000 pulsars projected onto the sky. The pulsars are concentrated towards the center of the Galaxy. The gap in the equatorial Northern Hemisphere is clearly evident in the left part of the plot.
\begin{figure}
\includegraphics[width=0.46\textwidth]{10000}
\caption{The positions of the simulated 10000 pulsars on the sky in Galactic coordinates.}
\label{fig:sky_10000}
\end{figure}
\subsection{Simulated dispersion measures and distances}
\label{sec:mock_data}
We calculate the line integrals through the Galaxy models from the positions generated by the PSRPOPpy package to the Sun to generate simulated dispersion measures.
We add Gaussian random variables to the pulsar distances to simulate measurement uncertainties \corr{of the distances}; for each pulsar we generate one random number and scale this to $5\%$, $15\%$, or $25\%$ of the distance of the pulsar.
\corr{In reality the distance PDF would be non-Gaussian. The exact form depends on the combination of observables which are used to infer the distance. We use Gaussian PDFs to keep things simple. As long as the real distance PDFs are unimodal we do not expect this choice to have a significant effect on our study.}
We do not simulate additional measurement noise \corr{for the dispersion measures}, as it is expected to be small compared to the distance uncertainty. This leaves us with a number of data sets described in Table~\ref{table:data_sets}. As can be seen in this table we omit the combinations of $1000$ pulsars at $25\%$ distance error (as we do not hope for a good reconstruction in that case) and $10000$ pulsars at $5\%$ distance error (as we deem it to be too unrealistic).
\begin{table}
\caption{The types of data sets simulated for all Galaxy models. The columns indicate the number of pulsars, the rows the relative distance uncertainties.}
\label{table:data_sets}
\centering
\begin{tabular}{r c c c}
\hline\hline
& 1000 pulsars & 5000 pulsars & 10000 pulsars \\
\hline
$25\%$ unc. & & \checkmark & \checkmark \\
$15\%$ unc. & \checkmark & \checkmark & \checkmark \\
$5\%$ unc. & \checkmark & \checkmark & \\
\hline
\end{tabular}
\end{table}
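The distance perturbation described above, one standard normal draw per pulsar scaled to each relative error level, can be sketched as follows; the three scenarios then share the same underlying noise realization.

```python
import numpy as np

def perturb_distances(d_true, rel_errors=(0.05, 0.15, 0.25), seed=0):
    """Simulate measured pulsar distances: draw one standard normal
    number per pulsar and scale it to each relative error level."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(len(d_true))
    return {f: d_true * (1.0 + f * z) for f in rel_errors}
```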
The aforementioned measurement scenarios are chosen to see the effect of the population size and the distance error on the reconstruction in isolation. A more realistic setting is of course a mix of distance uncertainties where more distant pulsars on average have larger distance errors. We therefore create one additional measurement scenario for 10000 pulsars, where we assign the uncertainty magnitude of each pulsar randomly\footnote{Each pulsar is assigned probabilities to belong to either the $5\%$, the $15\%$ or the $25\%$ set. The probabilities depend on its distance, making more distant pulsars more likely to have higher uncertainties. The pulsar is then randomly assigned to an uncertainty set according to the probabilities.}. The distance uncertainties are distributed as shown in Fig.~\ref{fig:comb-hist}. In this measurement set, 2969 pulsars have a $5\%$ distance error, 3400 pulsars have a $15\%$ distance error, and 3631 pulsars have a $25\%$ distance error. Throughout the rest of this paper we refer to this data set as the ``mixed data set''.
\begin{figure}
\includegraphics[width=0.49\textwidth]{histogram2}
\caption{A histogram showing the distribution of distance uncertainties with respect to the distance from the Sun in the mixed measurement set.}
\label{fig:comb-hist}
\end{figure}
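The distance-dependent class assignment described in the footnote can be sketched as follows. The paper does not spell out the exact probability weighting, so the binomial-style weights below are a hypothetical, illustrative choice.

```python
import numpy as np

def assign_uncertainty_classes(d, seed=0):
    """Randomly assign each pulsar to the 5%, 15%, or 25% uncertainty
    class, with probabilities that shift towards larger errors as the
    distance grows (illustrative weighting only)."""
    rng = np.random.default_rng(seed)
    w = np.clip(d / d.max(), 0.05, 1.0)               # 0..1 proxy for "far away"
    probs = np.stack([(1 - w)**2, 2 * w * (1 - w), w**2], axis=1)
    probs /= probs.sum(axis=1, keepdims=True)         # safety; rows already sum to 1
    classes = np.array([0.05, 0.15, 0.25])
    idx = np.array([rng.choice(3, p=p) for p in probs])
    return classes[idx]
```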
\corr{A very rough estimate of the scales that we can hope to resolve is given by the mean distance between neighboring pulsars and the average misplacement due to distance errors. The mean distance between neighboring pulsars is $490\,\mathrm{pc}$ for 1000, $290\,\mathrm{pc}$ for 5000, and $230\,\mathrm{pc}$ for 10000 pulsars. The average misplacement is $380\,\mathrm{pc}$ for $5\%$, $1100\,\mathrm{pc}$ for $15\%$, and $1900\,\mathrm{pc}$ for $25\%$ distance errors and $1300\,\mathrm{pc}$ for the mixed data set. Interpreting these distances as independent uncertainties, one can combine them by adding the squares and taking the square root. This provides us with a rough estimate of sampling distances. In Table~\ref{table:distances} we list these distances for each data set.}
\begin{table}
\caption{The estimated sampling distances for each data set. The columns indicate the number of pulsars, the rows the relative distance uncertainties.}
\label{table:distances}
\centering
\begin{tabular}{r r r r}
\hline\hline
& 1000 pulsars & 5000 pulsars & 10000 pulsars \\
\hline
$25\%$ unc. & & $1900\,\mathrm{pc}$ & $1900\,\mathrm{pc}$ \\
$15\%$ unc. & $1200\,\mathrm{pc}$ & $1100\,\mathrm{pc}$ & $1100\,\mathrm{pc}$ \\
$5\%$ unc. & $600\,\mathrm{pc}$ & $500\,\mathrm{pc}$ & \\
\hline
\end{tabular}
\end{table}
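The entries of Table~\ref{table:distances} are simply the quadrature sums of the two scales quoted above:

```python
import numpy as np

def sampling_distance(mean_separation, mean_misplacement):
    """Combine the mean inter-pulsar distance and the mean radial
    misplacement in quadrature to estimate the resolvable scale."""
    return np.hypot(mean_separation, mean_misplacement)
```

Rounded to the nearest $100\,\mathrm{pc}$, this reproduces the tabulated values, e.g.\ $\sqrt{490^2 + 1100^2}\,\mathrm{pc} \approx 1200\,\mathrm{pc}$ for 1000 pulsars at $15\%$.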
\subsection{Algorithm setup}
\label{sec:setup_algorithm}
The algorithm is set up in a $128 \times 128 \times 48$ pixel grid centered on the Galactic center with pixel dimensions\footnote{
\corr{We note that the pixels of our algorithm setup are significantly larger than those of the input models. This is on purpose, since in reality there will always be structure smaller than the chosen pixel size.}
} of $281.25\,\mathrm{pc} \times 281.25\,\mathrm{pc} \times 250\,\mathrm{pc}$.
\corr{While the dispersion measures in our data sets are free from instrumental noise, it is assumed to be $2\%$ in the algorithm. This provides a lower limit for the effective noise covariance (Eq.~\ref{eq:effective_noise}) and thus ensures stability of the inference without losing a significant amount of precision.}
The initial guess for the power spectrum is a broken power law with an exponent of $-3.66$\footnote{\corr{We could in principle use any power spectrum as an initial guess. The choice here comes from no particular reasoning. It has negligible influence on the final result (see Appendix~\ref{sec:convergence}).}}. The initial guess for the propagated distance uncertainty is
\begin{equation}
\sigma_i = \frac{\sqrt{\mathrm{Var}[d_i]}}{d_i} D\!M_i,
\end{equation}
where $d_i$ is the distance of the pulsar (see Sec.~\ref{sec:likelihood}). \corr{The} initial guesses of the Galactic profile functions\footnote{
We note that while the priors for the profile functions prefer linear forms, all functional forms are allowed in principle.
} are
\begin{equation}
\alpha(r) = \frac{-r}{28000\,\mathrm{pc}}\quad \mathrm{and} \quad \beta(|z|) = \frac{-|z|}{1600\,\mathrm{pc}}.
\end{equation}
We discuss the convergence and final values of the power spectrum, effective errors, and profile functions in Appendix~\ref{sec:convergence}.
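The noise-related initial guesses above can be sketched as follows. We assume here that the $2\%$ instrumental floor and the propagated distance uncertainty $\sigma_i = (\sqrt{\mathrm{Var}[d_i]}/d_i)\,D\!M_i$ enter the initial diagonal of the effective noise covariance in quadrature; the text only states that the floor provides a lower limit for Eq.~\ref{eq:effective_noise}, so this combination is our assumption.

```python
import numpy as np

def initial_noise_diag(dm, rel_dist_err, instrumental=0.02):
    """Initial diagonal of the effective noise covariance: a 2%
    instrumental floor plus the propagated distance uncertainty
    sigma_i = rel_dist_err * DM_i, combined in quadrature (assumption)."""
    sigma_inst = instrumental * dm
    sigma_dist = rel_dist_err * dm
    return sigma_inst**2 + sigma_dist**2
```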
\section{Simulation evaluation}
\label{sec:reconstructions}
\corr{Our algorithm accounts for most of the variance in the data while regularizing the result to avoid overfitting. Most of the reconstructions shown in this section have corresponding reduced $\chi^2$ values close to 1, indicating that they show all structures which are sufficiently constrained by the data.
We discuss the reduced $\chi^2$ values in detail in Appendix~\ref{sec:chisquared}.}
\subsection{Density in the \corr{midplane}}
\label{sec:midplane_comp}
The simulations show that with the number of pulsars \corr{with} reliable distance estimates \corr{that the SKA should deliver,} reconstruction of the free electron density in the vicinity of the Sun becomes feasible (see Fig.~\ref{fig:compilation}). \corr{However, small-scale features are difficult to identify in the reconstruction. Identifying} spiral arms remains challenging \corr{as well}, especially beyond the Galactic center. \corr{To resolve the spiral arms in the vicinity of the Sun, between 5000 and 10000 pulsars with distance accuracies between $5\%$ and $15\%$ are needed.}
As is evident from the figure, \corr{small} distance uncertainties increase the quality of the reconstruction significantly. The reconstruction from 5000 pulsars with $5\%$ distance uncertainty is better in quality than the one from 10000 pulsars with $15\%$ distance uncertainty\footnote{
In principle, this behavior is not surprising, as one measurement of a scalar quantity $a$ with standard deviation $\sigma$ contains the same amount of information as 9 independent measurements with standard deviation $3\sigma$ (assuming Gaussian PDFs).
}.
All reconstructions \corr{smooth out small-scale structure in the electron density, for example at the Galactic center.}
\corr{If an overdensity appears at the wrong location, this indicates that the data do not constrain the overdensity well.
For completeness we also show the recovered Galactic profile in the Galactic plane for} 5000 pulsars with $5\%$ distance uncertainty \corr{in Fig.~\ref{fig:profile-Gplane}. For other data sets the plot would look very similar.}
\corr{In Appendix~\ref{sec:cheated} we show a reconstruction where the Galactic profile and the correlation structure are known \textit{a priori}, and in Appendix~\ref{sec:uncertainty} we show and discuss} the uncertainty estimate of the algorithm.
In Fig.~\ref{fig:compilation_fluct} we compare the performance of the reconstruction algorithm for the three input model
fluctuation strengths using 5000 pulsars \corr{with} $5\%$ and $15\%$ distance uncertainty.
One can see that the strength of the fluctuations does not influence the quality of the reconstructions by a great amount. The reconstructions of the models with \corr{stronger fluctuations exhibit stronger} fluctuations as well, while all reconstructions omit/smear features to a similar degree.
However, one can see that it becomes more difficult to reconstruct the Perseus arm towards the Galactic \corr{anticenter} if the fluctuations in the electron density are strong. This is to be expected, as the spiral arm is also harder to recognize in the original model as the fluctuations become stronger.
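The information-equivalence argument of the footnote above can be checked directly: the inverse-variance-weighted combination of nine independent measurements with standard deviation $3\sigma$ has the same variance as a single measurement with standard deviation $\sigma$.

```python
import numpy as np

def combined_variance(sigmas):
    """Variance of the optimal (inverse-variance weighted) average of
    independent Gaussian measurements with the given standard deviations."""
    return 1.0 / np.sum(1.0 / np.asarray(sigmas)**2)
```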
\begin{figure*}[p]
\begin{tabular}{c c c}
\begin{overpic}[width=0.33\textwidth]{mid_original}
\put(35,87){\boldmath\color{white}\textbf{original}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_5000_25}
\put(33,87){\boldmath\color{white}$5000$ $25\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_10000_25}
\put(30,87){\boldmath\color{white}$10000$ $25\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
\\
\begin{overpic}[width=0.33\textwidth]{mid_1000_15}
\put(33,87){\boldmath\color{white}$1000$ $15\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_5000_15}
\put(33,87){\boldmath\color{white}$5000$ $15\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_10000_15}
\put(30,87){\boldmath\color{white}$10000$ $15\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
\\
\begin{overpic}[width=0.33\textwidth]{mid_1000_05}
\put(35,87){\boldmath\color{white}$1000$ $5\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_5000_05}
\put(35,87){\boldmath\color{white}$5000$ $5\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_10000_mixed}
\put(30,87){\boldmath\color{white}$10000$\ \textbf{mixed}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
\\
\end{tabular}
\caption{Several reconstructions of the Galaxy model with medium strength fluctuations. All panels show top-down views of the \corr{electron} density in the Galactic plane \corr{using a} linear color scale in units of $\mathrm{cm}^{-3}$. \corr{The panels span $36000\,\mathrm{pc}$ in each dimension. The} Sun is located at the white dot depicted in \corr{each} panel.
The rows show reconstructions with distance errors \corr{of} $25\%$, $15\%$, and $5\%$ respectively (from top to bottom). The columns show reconstructions with $1000$, $5000$, and $10000$ pulsars respectively (from left to right). The layout follows Table~\ref{table:data_sets}. The top left panel shows the original input model (modified NE2001). The bottom right panel shows the reconstruction of the mixed measurement set.
}
\label{fig:compilation}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.33\textwidth]{mid_5000_05perc_wp-Gplane}
\caption{The recovered Galactic profile in the Galactic plane. Shown is a top-down view of the Galaxy as in Fig.~\ref{fig:compilation}, but here with a logarithmic color scale. The input model had medium strength fluctuations; it was recovered using 5000 pulsars with $5\%$ distance uncertainty (corresponding to the bottom middle panel in Fig.~\ref{fig:compilation}). Other fluctuation strengths and data sets would yield a very similar image.}
\label{fig:profile-Gplane}
\end{figure}
\begin{figure*}[p]
\begin{tabular}{c c c}
\begin{overpic}[width=0.33\textwidth]{low_5000_15}
\put(33,87){\boldmath\color{white}\textbf{weak} $15\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_5000_15}
\put(30,87){\boldmath\color{white}\textbf{medium} $15\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{high_5000_15}
\put(31,87){\boldmath\color{white}\textbf{strong} $15\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
\\
\begin{overpic}[width=0.33\textwidth]{low_5000_05}
\put(35,87){\boldmath\color{white}\textbf{weak} $5\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_5000_05}
\put(32,87){\boldmath\color{white}\textbf{medium} $5\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{high_5000_05}
\put(33,87){\boldmath\color{white}\textbf{strong} $5\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
\\
\begin{overpic}[width=0.33\textwidth]{low_original}
\put(28,87){\boldmath\color{white}\textbf{weak original}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_original}
\put(25,87){\boldmath\color{white}\textbf{medium original}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{high_original}
\put(26,87){\boldmath\color{white}\textbf{strong original}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
\\
\end{tabular}
\caption{Reconstructions of the three Galaxy models with different fluctuation strengths using 5000 pulsars. All panels show top-down views of the \corr{electron} density in the Galactic plane \corr{using a} linear color scale in units of $\mathrm{cm}^{-3}$. \corr{The panels span $36000\,\mathrm{pc}$ in each dimension.}
The rows show reconstructions with distance errors \corr{of} $15\%$ and $5\%$ respectively (from top to bottom). The bottom row shows the original Galaxy models. The columns show reconstructions and original input models (modified NE2001) with weak, medium and strong fluctuations respectively (from left to right).
}
\label{fig:compilation_fluct}
\end{figure*}
\corr{
In Fig.~\ref{fig:contrast} we show the contrast enhanced Galaxy model as well as its reconstruction using 5000 pulsars with distance uncertainties of $5\%$. As is clear from the figure, the algorithm is able to resolve much more detailed structure compared to the reconstruction of the unenhanced Galaxy model (bottom middle panel in Fig.~\ref{fig:compilation}). We want to stress that the pulsar population and their distance uncertainties are exactly the same for both cases. The increase in quality comes merely from the increased contrast and the resulting stronger imprint of under- and overdensities in the dispersion data. Therefore, we conclude that if the contrast of the real Galaxy is much stronger than in NE2001, our algorithm could resolve the Galaxy much better than this study on NE2001 indicates.
}
\begin{figure*}[p]
\centering
\begin{tabular}{c c}
\begin{overpic}[width=0.45\textwidth]{contrast_original}
\put(29,87){\boldmath\color{white}\textbf{contrast original}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.45\textwidth]{contrast_5000_05perc}
\put(30,87){\boldmath\color{white}\textbf{contrast} $5000$ $5\%$}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
\\
\end{tabular}
\caption{\corr{Input model (left) and reconstruction (right) of the contrast enhanced Galaxy model using 5000 pulsars with distance uncertainties of $5\%$. Both panels show top-down views of the electron density in the Galactic plane using a linear color scale in units of $\mathrm{cm}^{-3}$. The panels span $36000\,\mathrm{pc}$ in each dimension.}
}
\label{fig:contrast}
\end{figure*}
\subsection{Vertical fall-off}
\label{sec:scale-heights}
A quantity of interest in any model of the Galactic free electron density is the drop-off of the average density with respect to distance from the Galactic plane. \corr{This behavior can be seen in Fig.~\ref{fig:profile-vert}, which displays a vertical cut through the Galactic profile reconstructed using 5000 pulsars with $5\%$ distance uncertainty.}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth,trim=0mm 25mm 0mm 80mm,clip=true]{mid_5000_05perc_wp-vertical}
\caption{A vertical cut through the same Galactic profile as in Fig.~\ref{fig:profile-Gplane}. Shown is the slice containing the Sun (white dot) and the Galactic center (middle). The image spans $36\,\mathrm{kpc}\times12\,\mathrm{kpc}$. The color scale is logarithmic.}
\label{fig:profile-vert}
\end{figure}
\corr{In} our parametrization the function $\beta$ in Eq.~\eqref{eq:profiles} describes the average log-density at a certain distance from the Galactic plane. In Fig.~\ref{fig:compilation_zprof} we show the estimates for $\beta$ corresponding to the reconstructions shown in Fig.~\ref{fig:compilation} along with their uncertainties (see Appendix~\ref{sec:z_profile_uncertainity} for their calculation). The uncertainty regions reflect that a vertical fall-off can be explained by a global profile as well as by density fluctuations \corr{close to the Sun. This uncertainty is nearly independent of the quality of the data set, but depends on the strength of fluctuations on kpc scales. These are always present unless the data probe a simplistic disk. Therefore, there is a lower bound of precision to which our algorithm can determine the vertical fall-off behavior.}
\corr{We compare the reconstructed vertical scaling to a global} and a local estimate \corr{generated from the original input model.} The global estimate describes the vertical fall-off throughout the whole model whereas the local estimate describes the vertical fall-off close to the Sun\footnote{
The global estimate is calculated by averaging the logarithmic density at fixed vertical distances over the whole horizontal plane. The local estimate is calculated by averaging the logarithmic density at fixed vertical distances in a sub-area of the horizontal plane, which is centered on the Sun and has a size of $1500\,\mathrm{pc}\times1500\,\mathrm{pc}$.
}. \corr{For completeness we also provide best fitting scale heights for exponential fall-offs in the figure, i.e., we fit the vertical scaling to $n_\mathrm{e} \propto e^{-|z|/H_z}$. The uncertainties of these estimates are calculated by performing the fit on multiple posterior samples of $\beta$. Both the local and the global estimates have significantly lower scale heights than the $950\,\mathrm{pc}$ from NE2001 (thick disk). This is probably due to the combination of the thick disk with the thin disk of NE2001 (which has a scale height of $140\,\mathrm{pc}$).}
As is evident from the figure, the reconstructed z-profile is dominated by the local behavior of the density and agrees with it within the error bars throughout all data sets\footnote{
The reconstructed vertical fall-off is dominated by the near-Sun region since this is the part of the Galaxy where the density is reconstructed best.
}, for the regime $|z| < 2400\,\mathrm{pc}$. \corr{However, the width of the uncertainty region prohibits a clear decision whether the vertical fall-off follows a single exponential function or a thick disk and a thin disk, as is the case for NE2001.
In our input model we set $n_\mathrm{e}$ to zero for $|z|>2400\,\mathrm{pc}$. In that regime our reconstruction is unreliable.}
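Since $\beta$ is already a log-density, fitting the exponential scale height $H_z$ reduces to a linear least-squares fit, $\beta(z) = \mathrm{const} - |z|/H_z$; a sketch of the fit we assume behind the quoted $H_z$ values:

```python
import numpy as np

def fit_scale_height(z, beta):
    """Fit n_e ~ exp(-|z|/H_z) to a vertical log-density profile beta(z)
    by linear least squares and return the scale height H_z."""
    slope, _ = np.polyfit(np.abs(z), beta, 1)   # beta = intercept + slope * |z|
    return -1.0 / slope
```

Repeating the fit on posterior samples of $\beta$ yields the quoted $1\sigma$ uncertainties of $H_z$.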
\begin{figure*}[p]
\begin{tabular}{c c c}
\begin{overpic}[width=0.33\textwidth,trim=20mm 0mm 15mm 5mm,clip=true]{mid_real_comp.pdf}
\put(40,70){\boldmath\color{black}\textbf{original}}
\put(9,22){\color{black}$H_z=610\,\mathrm{pc}$ (local)}
\put(9,15){\color{black}$H_z=880\,\mathrm{pc}$ (global)}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth,trim=20mm 0mm 15mm 5mm,clip=true]{mid_5000_25_comp.pdf}
\put(40,70){\boldmath\color{black}$5000$ $25\%$}
\put(9,15){\color{black}$H_z=(740\pm120)\,\mathrm{pc}$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth,trim=20mm 0mm 15mm 5mm,clip=true]{mid_10000_25_comp.pdf}
\put(40,70){\boldmath\color{black}$10000$ $25\%$}
\put(9,15){\color{black}$H_z=(630\pm80)\,\mathrm{pc}$}
\end{overpic}
\\
\begin{overpic}[width=0.33\textwidth,trim=20mm 0mm 15mm 5mm,clip=true]{mid_1000_15_comp.pdf}
\put(40,70){\boldmath\color{black}$1000$ $15\%$}
\put(9,15){\color{black}$H_z=(650\pm110)\,\mathrm{pc}$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth,trim=20mm 0mm 15mm 5mm,clip=true]{mid_5000_15_comp.pdf}
\put(40,70){\boldmath\color{black}$5000$ $15\%$}
\put(9,15){\color{black}$H_z=(590\pm70)\,\mathrm{pc}$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth,trim=20mm 0mm 15mm 5mm,clip=true]{mid_10000_15_comp.pdf}
\put(40,70){\boldmath\color{black}$10000$ $15\%$}
\put(9,15){\color{black}$H_z=(690\pm120)\,\mathrm{pc}$}
\end{overpic}
\\
\begin{overpic}[width=0.33\textwidth,trim=20mm 0mm 15mm 5mm,clip=true]{mid_1000_05_comp.pdf}
\put(40,70){\boldmath\color{black}$1000$ $5\%$}
\put(9,15){\color{black}$H_z=(550\pm60)\,\mathrm{pc}$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth,trim=20mm 0mm 15mm 5mm,clip=true]{mid_5000_05_comp.pdf}
\put(40,70){\boldmath\color{black}$5000$ $5\%$}
\put(9,15){\color{black}$H_z=(740\pm130)\,\mathrm{pc}$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth,trim=20mm 0mm 15mm 5mm,clip=true]{mid_10000_mixed_comp.pdf}
\put(35,70){\boldmath\color{black}$10000$\ \textbf{mixed}}
\put(9,15){\color{black}$H_z=(570\pm80)\,\mathrm{pc}$}
\end{overpic}
\\
\end{tabular}
\caption{The recovered z-dependent fall-off in logarithmic units (function $\beta$ in Eq.~\eqref{eq:profiles}), input model with medium strength fluctuations. The top left panel shows the global z-profile (dashed line) as well as the local z-profile (dotted line). In all other panels the solid line is the recovered z-profile while the global and local z-profile are replotted (in dashed and dotted respectively). The gray areas indicate the $1\sigma$ uncertainty around the recovered z-profile. In the bottom left corner of each panel we show the best fitting exponential scale height (and its 1$\sigma$ uncertainty for the reconstructions).}
\label{fig:compilation_zprof}
\end{figure*}
\section{Summary and conclusions}
\label{sec:discussion}
We presented an algorithm that performs \corr{nonparametric tomography of the} Galactic free electron density using pulsar dispersion measures and distances. The algorithm produces a three-dimensional map and a corresponding uncertainty map. It estimates the correlation structure and the scales of the disk shape automatically, requiring only approximate initial guesses for them. The uncertainties of pulsar distance estimates are consistently propagated.
Using our algorithm we investigated the feasibility of \corr{nonparametric} tomography with the upcoming Square Kilometer Array. To that end, we created three Galaxy models \corr{with various} fluctuation strengths \corr{and one with enhanced contrast} and \corr{simulated mock observations of these models} using between 1000 and 10000 pulsars. Our results indicate that with the number of pulsars \corr{that the SKA should deliver, nonparametric tomography becomes feasible.} However, detecting spiral arms in the free electron density from pulsar dispersion measures alone remains challenging \corr{if the input model has unenhanced contrast}. We find that to distinguish the spiral arms in the vicinity of the Sun, between 5000 and 10000 pulsars with distance accuracies between $5\%$ and $15\%$ are needed. The vertical fall-off behavior of the free electron density was recovered for all mock data sets we investigated. However, a clear decision whether the vertical fall-off of free electron density is best described by a single exponential function or a thick disk and a thin disk \corr{could not be made by} our algorithm.
\corr{One way to increase the sensitivity of the algorithm for Galactic features would be to include them into the prior description. Including higher order statistics (non-Gaussian priors) one could make the inference more sensitive for spiral arm structures in the electron density. In cosmology, for example, higher order statistics allowed for a better recovery of cosmological filaments (e.g. \cite{Jasche-2013}). Modeling of HII regions and supernova remnants is also beyond the scope of Gaussian statistics.
Another approach would be to include parametrized structures which are known from stellar observations, e.g., spatial templates for spiral arm locations. This would connect data driven tomography (with infinite degrees of freedom) with classical model fitting.}
The algorithm can also be used for other tomography problems with line-of-sight measurements, such as stellar absorption coefficients. It could also be extended to infer vector fields, enabling inference of the Galactic magnetic field from pulsar rotation measures. Furthermore, a joint reconstruction of the Galactic free electron density and the magnetic field using pulsar dispersion measures, pulsar rotation measures as well as extragalactic Faraday sources should be investigated.
\begin{acknowledgements}
We want to thank Niels Oppermann and Marco Selig for fruitful collaboration and advice. We also want to thank Henrik Junklewitz, Jimi Green, Jens Jasche and Sebastian Dorn for useful discussions. The calculations were realized using the \textsc{NIFTY}\footnote{\url{http://www.mpa-garching.mpg.de/ift/nifty/}} package by \cite{nifty-2013}. Some of the minimizations described in Sec.~\ref{sec:filter_equations} were performed using the \mbox{L-BFGS-B} algorithm (\cite{LBFGS-1995}). We acknowledge the support by the DFG Cluster of Excellence ``Origin and Structure of the Universe''. The simulations have been carried out on the computing facilities of the Computational Center for Particle and Astrophysics (C2PAP). We are grateful for the support by Frederik Beaujean through C2PAP.
This research has been partly supported by the DFG Research Unit 1254 and has made use of NASA's Astrophysics Data System.
\end{acknowledgements}
\begin{appendix}
\section{Parameters of the power spectrum prior}
\label{app:parameters}
The inverse Gamma distribution is defined as
\begin{equation}
\mathcal{I}(p_k;\alpha_k,q_k) = \frac{1}{q_k\Gamma(\alpha_k-1)} \left( \frac{p_k}{q_k} \right)^{-\alpha_k}\, \exp\!\left( - \frac{q_k}{p_k} \right).
\end{equation}
The mean and variance of this distribution are
\begin{equation}
\begin{split}
\left\langle p_k \right\rangle_{(p_k)} & = q_k/(\alpha_k-2) \ \ && \mathrm{for}\ \ \alpha_k>2\\
\left\langle p_k^2 \right\rangle_{(p_k)} -\left\langle p_k \right\rangle_{(p_k)}^2 & = \frac{q_k^2}{(\alpha_k-3)(\alpha_k-2)^2} \ \ && \mathrm{for}\ \ \alpha_k>3.\\
\end{split}
\end{equation}
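As a quick numerical sanity check of these moment formulas, note that the density above is a standard inverse-Gamma distribution with shape $\alpha_k-1$ and scale $q_k$, so it can be sampled as the reciprocal of a Gamma variable. The sketch below uses illustrative hyperparameter values (not taken from the paper) and compares empirical moments against the formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, q = 6.0, 2.0      # illustrative hyperparameters with alpha > 3

# p ~ p^{-alpha} exp(-q/p) is inverse-Gamma with shape alpha-1 and scale q;
# draw it as the reciprocal of a Gamma(alpha-1, scale=1/q) variable.
p = 1.0 / rng.gamma(shape=alpha - 1.0, scale=1.0 / q, size=400_000)

mean_theory = q / (alpha - 2.0)                            # = 0.5 here
var_theory = q**2 / ((alpha - 3.0) * (alpha - 2.0) ** 2)   # = 1/12 here
```

For these values the empirical mean and variance of `p` agree with `mean_theory` and `var_theory` to well below a percent.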
There are three properties of the prior that we want to fulfill by choosing $\alpha_k$ and $q_k$. The prior of the monopole $p_0$, which corresponds to the variance of a global prefactor of the density, should be close to Jeffreys prior, i.e. the limit $\alpha_0 \rightarrow 1$, $q_0 \rightarrow 0$. The reason for this is that we want the algorithm to stay consistent if one changes the units, say from $\mathrm{pc}$ to $\mathrm{kpc}$. Such changes introduce a global prefactor in front of the density. Jeffreys prior has no preferred scale as it is flat for $\log p_0$ and therefore all prefactors are equally likely \textit{a priori}.
For other $k$-bins the parameters should favor, but not enforce falling power spectra. Furthermore, since $p(k)$ is the average power of many independent Fourier components, its \textit{a priori} variance should be inversely proportional to $\varrho_k$ (the amount of degrees of freedom in the respective $k$-bin), while the \textit{a priori} mean should be independent of $\varrho_k$.
We therefore set the parameters as
\begin{equation}
\begin{split}
q_k = f \,\varrho_k \qquad \mathrm{and} \qquad
\alpha_k = 1 + \frac{k}{100 k_\mathrm{min}} \varrho_k,
\end{split}
\end{equation}
where $k_\mathrm{min}$ is the first non-zero $k$-value and $f$ is a prefactor, which defines a lower cut-off of the power spectrum calculated in Eq.~\eqref{eq:powspec_approx}. The choice of $f$ does not influence the result as long as it is suitably low, but higher values of $f$ accelerate the convergence of the algorithm. The denominator of $100 k_\mathrm{min}$ before $\varrho_k$ is chosen to introduce a preference for falling power spectra starting two orders of magnitude above the fundamental mode $k_\mathrm{min}$ (note that the 1 is subtracted in Eq.~\eqref{eq:smoothness-prior}). As long as the denominator is not too small it has very little influence on the result of the algorithm, but smaller denominators increase the convergence speed. We found $100 k_\mathrm{min}$ to be a good compromise.
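This parameter choice can be sketched in a few lines; the $k$-grid, the mode counts $\varrho_k$, and the prefactor $f$ below are made-up illustrative values, not those of an actual reconstruction:

```python
import numpy as np

# Illustrative k-grid and mode counts rho_k (made up); f is the cut-off prefactor.
k = np.array([0.0, 1e-4, 2e-4, 4e-4, 8e-4])
rho = np.array([1.0, 6.0, 12.0, 24.0, 48.0])     # degrees of freedom per k-bin
f = 1e-6

k_min = k[1]                                     # first non-zero k-value
q = f * rho                                      # q_k = f * rho_k
alpha = 1.0 + k / (100.0 * k_min) * rho          # alpha_k = 1 + k/(100 k_min) * rho_k
```

For the monopole ($k=0$) this yields $\alpha_0 = 1$ with small $q_0$, i.e. the near-Jeffreys limit, while $\alpha_k$ grows with $k$, encoding the preference for falling spectra.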
The parameter $\sigma_p$ in Eq.~\eqref{eq:smoothness-prior} describes how much the power spectrum is expected to deviate from a power law. We choose $\sigma_p = 1$. If the power spectrum is locally described by a power law, $\sigma_p = 1$ means that the typical change of the exponent within a factor of $e$ in $k$ should be of order $1$.
\section{Functional derivatives of the Hamiltonian}
\label{app:derivatives}
To minimize the Hamiltonian in Sec.~\ref{sec:filter_equations} the first derivative with respect to $s$ is needed. It is
\begin{equation}
\begin{split}
\frac{\delta}{\delta s^\dagger} \mathcal{H}(s,D\!M|p,\tilde{N}) & = S^{-1}s + \widehat{\left(\mathrm{e}^s \right)} M \left(\mathrm{e}^s \right) - \widehat{\left(\mathrm{e}^s \right)} j,
\end{split}
\end{equation}
where the hat converts a field to a diagonal operator in position space, e.g.\ \mbox{$\widehat{\xi}(\vec{x},\vec{y}) = \xi(\vec{x})\delta(\vec{x}-\vec{y})$}, and we used the shorthand notations
\begin{equation}
\begin{split}
\qquad S^{-1} & \equiv \sum\limits_k S^{(k)} p_k^{-1}\\
M & \equiv \widehat{\Delta}\tilde{R}^\dagger \tilde{N}^{-1}\tilde{R}\widehat{\Delta}\\
j & \equiv \widehat{\Delta}\tilde{R}^\dagger \tilde{N}^{-1} D\!M.
\end{split}
\end{equation}
The second derivative in Eq.~\eqref{eq:cov_estimate} is
\begin{equation}
\begin{split}
\frac{\delta^2}{\delta s \delta s^\dagger} \mathcal{H}(s,D\!M|p,\tilde{N}) & = S^{-1} + \widehat{\left(\mathrm{e}^s \right)} M \widehat{\left(\mathrm{e}^s \right)} \\
& \quad+ \widehat{\left(\mathrm{e}^s \right)} \widehat{M\left(\mathrm{e}^s \right)} - \widehat{\left(\mathrm{e}^s \right)}\,\widehat{j}.
\end{split}
\end{equation}
The last term in the second derivative can be problematic as it can break the positive definiteness\footnote{Mathematically, the second derivative has to be positive definite at the minimum, but in high dimensional parameter spaces this is not guaranteed in numerical practice.} of the second derivative, which is crucial to apply inversion techniques such as the conjugate gradient method efficiently.
However, a closer inspection of the last two terms (omitting the hats for readability),
\begin{equation}
M\left(\mathrm{e}^s \right) - j = \widehat{\Delta}\tilde{R}^\dagger \tilde{N}^{-1} \left( \tilde{R}\widehat{\Delta}\mathrm{e}^s - D\!M \right) \propto \widetilde{D\!M} - D\!M,
\end{equation}
shows that their contribution is proportional to the difference between the real dispersion data $D\!M$ and the idealized data generated by the map, $\widetilde{D\!M}=\tilde{R}\widehat{\Delta}\mathrm{e}^s$. These two terms counteract each other at the minimum and we therefore omit them to gain numerical stability. Hence the second derivative is approximated as
\begin{equation}
\frac{\delta^2}{\delta s \delta s^\dagger} \mathcal{H}(s,D\!M|p,\tilde{N}) \approx S^{-1} + \widehat{\left(\mathrm{e}^s \right)} M \widehat{\left(\mathrm{e}^s \right)}.
\end{equation}
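For a small finite-dimensional toy problem with a Gaussian likelihood in the data, the gradient formula above can be verified against finite differences of the Hamiltonian, and the positive definiteness of the approximated curvature can be checked directly. All operators below (prior precision, profile field, response, noise) are random stand-ins, not the ones used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
npix, ndata = 8, 5

# Small random stand-ins for the operators (purely illustrative).
Sinv = np.diag(1.0 / rng.uniform(0.5, 2.0, npix))   # prior precision S^{-1}
Delta = np.diag(rng.uniform(0.5, 1.5, npix))        # profile field \hat{Delta}
R = rng.uniform(0.0, 1.0, (ndata, npix))            # line-of-sight response
Ninv = np.diag(1.0 / rng.uniform(0.1, 0.5, ndata))  # noise precision
DM = rng.uniform(1.0, 3.0, ndata)                   # data

M = Delta @ R.T @ Ninv @ R @ Delta                  # shorthand M
j = Delta @ R.T @ Ninv @ DM                         # shorthand j

def hamiltonian(s):
    resid = R @ Delta @ np.exp(s) - DM
    return 0.5 * s @ Sinv @ s + 0.5 * resid @ Ninv @ resid

def gradient(s):
    # S^{-1}s + e^s * (M e^s) - e^s * j, as in the first derivative above
    es = np.exp(s)
    return Sinv @ s + es * (M @ es) - es * j

def curvature(s):
    # positive definite approximation with the last two terms dropped
    es = np.exp(s)
    return Sinv + np.diag(es) @ M @ np.diag(es)
```

A central finite difference of `hamiltonian` reproduces `gradient` to numerical precision, and the eigenvalues of `curvature` are strictly positive, as required for conjugate-gradient inversion.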
\section{Convergence}
\label{sec:convergence}
In this section, we display the convergence behavior of the power spectrum, effective errors, and profile functions. For the sake of brevity, we will limit the discussion to the reconstruction of the Galaxy model with fluctuations of medium strength using the data set of 5000 pulsars with a distance error of $5\%$. In Appendix~\ref{sec:cheated} we show the reconstruction of this model and data set using the real power spectrum and profile functions as a benchmark on how well our iterative estimation of them does.
In Fig.~\ref{fig:power_convergence} we show the convergence of the power spectrum. As is evident, the power spectrum moves away from the initial guess to a fixed point. Compared to the spectrum of the logarithmic input model, the converged spectrum misses power in both the large-scale and the small-scale regime. The loss of power in the large-scale regime is due to the profile field absorbing large features; in the small-scale regime it is due to the general loss of small-scale power.
The loss of small-scale power comes from two effects. First, the dispersion measure data sample the density sparsely and irregularly. Without the regularization imposed by the prior, this would lead to severe aliasing, as is commonly known from Fourier analysis. The prior typically suppresses aliasing from large scales to small scales, and the algorithm consequently interprets the power as noise. Aliasing from small scales to large scales is negligible, since the input model is spatially correlated and thus has a falling power spectrum. Second, there is the loss of power due to the distance uncertainties. These make the likelihood less informative about small-scale structures, which are in consequence suppressed by the prior. This effect yields no aliasing but smooths the resulting map (which is desired to avoid overfitting). For further details about the loss of power in filtering algorithms such as the one in this paper we refer to \cite{Crit-2011}.
The fixed point power spectrum falls as a power law with index $-5.5$ for $k>2\times10^{-4}$. \corr{Our algorithm allows for spectral indices up to $-5.5$.} Without this limit, the power spectrum would fall\footnote{
This means that the algorithm underestimates the power on scales which are not sufficiently probed by the data set. This does not influence the quality of the density map too much, but it makes the algorithm underestimate the posterior uncertainty.
} to a minimal value of $q_k/\varrho_k$ for $k\gtrsim 3\times 10^{-4}$. However, introducing a hard limit speeds up the convergence, and in our tests limiting the slope to $-5.5$ made no difference for the resulting maps compared to letting the spectrum fall to the lower limit. One can think of this hard limit as being part of the power spectrum prior.
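Such a hard slope limit can be realized as a simple monotone pass over the spectrum bins, raising each bin to the steepest admissible power-law continuation of its predecessor. The function below is an illustrative reimplementation, not the paper's code:

```python
import numpy as np

def limit_spectral_slope(k, p, max_fall=5.5):
    """Forbid the power spectrum from falling steeper than k^(-max_fall)."""
    p = np.asarray(p, dtype=float).copy()
    for i in range(1, len(p)):
        # steepest admissible continuation from the previous bin
        floor = p[i - 1] * (k[i] / k[i - 1]) ** (-max_fall)
        p[i] = max(p[i], floor)
    return p
```

A spectrum falling like $k^{-8}$ is flattened to an exact $k^{-5.5}$ power law, while spectra shallower than the limit pass through unchanged.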
\begin{figure}
\includegraphics[width=0.5\textwidth]{mid_5000_05perc_power_withreal2.pdf}
\caption{A plot of the power spectrum changing with the iterations. The thick dashed line is the initial guess, the bulge of black lines is where the algorithm converges. The thick solid red line is the power spectrum of the logarithmic Galaxy model with medium fluctuations. The power spectrum is in arbitrary units.}
\label{fig:power_convergence}
\end{figure}
In Fig.~\ref{fig:error_convergence} we show the convergence of the propagated distance variance (Eq.~\eqref{eq:noise_addition}) of a random selection of 10 data points. The behavior seen in this plot is qualitatively the same for all data points we investigated. As one can see, most data points reach convergence rather quickly, but there are also outliers. In this plot, the lowest line exhibits a kink after it had seemingly already converged. Such behavior is unfortunately not entirely suppressible, but it appears to have very little effect on the resulting map, as only a small fraction of data points behaves this way.
\begin{figure}
\includegraphics[width=0.5\textwidth]{mid_5000_05perc_vari}
\caption{The propagated distance variance of 10 data points changing with the iterations. The units of the variances are $\left(\frac{\mathrm{pc}}{\mathrm{cm}^3}\right)^2$.}
\label{fig:error_convergence}
\end{figure}
In Figs.~\ref{fig:zfunc_convergence}~and~\ref{fig:rfunc_convergence} we show the convergence of the profile functions, where we shifted the functions by a global value to line them up at $\beta(|z|\!=\!0)$ and $\alpha(r\!=\!0)$. We note that the functions $\alpha$ and $\beta$ are degenerate with respect to a global addition in their effect on the Galactic profile field and degenerate with the monopole of $s$ as well. This is why a shift by a constant for plotting purposes is reasonable. The z-profile function $\beta$ seems to reach a fixed point for $|z|<2400\,\mathrm{pc}$. For higher $|z|$ the profile function reaches no clear fixed point. However, for the Galactic profile, where the profile function is exponentiated, this makes only a small difference, since $\beta$ is already three $\mathrm{e}$-foldings below its values at $|z|=0$. The radial profile function $\alpha$ seems to only correct the initial guess mildly and it is not clear whether the result is independent from the initial guess. However, it appears that
$\alpha$ does reach a fixed point.
\begin{figure}
\includegraphics[width=0.5\textwidth]{mid_5000_05perc_zfunc}
\caption{The z-profile function of $\log n_\mathrm{e}$ changing with the iterations. The thick dashed line is the initial guess.} \label{fig:zfunc_convergence}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{mid_5000_05perc_rfunc}
\caption{The radial profile function of $\log n_\mathrm{e}$ changing with the iterations. The thick dashed line is the initial guess.}
\label{fig:rfunc_convergence}
\end{figure}
\section{Goodness of fit ($\chi^2$) of the reconstructions}
\label{sec:chisquared}
In this section, we discuss the goodness of fit characterized by the reduced $\chi^2$ value,
\begin{equation}
\chi^2 = \frac{1}{N_\mathrm{data}}\sum\limits_{i=1}^{N_\mathrm{data}} \frac{\left(D\!M_i - \widetilde{D\!M}_i\right)^2}{\sigma^2_i},
\end{equation}
where $\widetilde{D\!M}$ is the dispersion data reproduced by applying the response to our reconstruction,
\begin{equation}
\widetilde{D\!M}_i = \left(\tilde{R}m^{(\rho)}\right)_i,
\end{equation}
and $\sigma_i$ is the crudely propagated distance uncertainty given by
\begin{equation}
\sigma_i = \frac{\sqrt{\mathrm{Var}[d_i]}}{d_i} D\!M_i.
\end{equation}
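The reduced $\chi^2$ defined above can be sketched in a few lines. By construction, a model that reproduces the data exactly yields $\chi^2=0$, and a model offset from the data by exactly one crudely propagated $\sigma_i$ per point yields $\chi^2=1$; the numbers below are illustrative:

```python
import numpy as np

def reduced_chi2(DM, DM_model, d, d_var):
    """Reduced chi^2 with crudely propagated distance uncertainties."""
    sigma = np.sqrt(d_var) / d * DM      # sigma_i = sqrt(Var[d_i])/d_i * DM_i
    return np.mean(((DM - DM_model) / sigma) ** 2)

# Illustrative data: three pulsars with 5% distance errors.
DM = np.array([10.0, 20.0, 40.0])
d = np.array([1.0, 2.0, 4.0])
d_var = (0.05 * d) ** 2
```

With these inputs, `reduced_chi2(DM, DM, d, d_var)` returns `0.0`, and shifting every model prediction by one $\sigma_i$ returns `1.0`.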
\corr{To put the $\chi^2$ values into perspective, we compare them with the $\chi^2$ values of the null model ($m^{(\rho)} = 0$) and the Galactic profile only ($m^{(\rho)} = \Delta$).}
The reduced $\chi^2$ values corresponding to the maps shown in Fig.~\ref{fig:compilation} are shown in Table~\ref{tab:compilation_chi}. The reconstruction of the mixed data set has a reduced $\chi^2$ of 1.31.
In Table~\ref{tab:compilation2_chi}, we show the reduced $\chi^2$ values of the maps shown in Fig.~\ref{fig:compilation_fluct}. \corr{For the reconstruction of the contrast-enhanced input model shown in Fig.~\ref{fig:contrast} the reduced $\chi^2$ value is $345$ for the null model, $110$ for the profile only and $2.6$ for the full reconstruction.}
\begin{table}
\caption{The reduced $\chi^2$ values corresponding to the maps shown in Fig.~\ref{fig:compilation}.}
\label{tab:compilation_chi}
\begin{tabular}{ r c c c }
\hline\hline
data set & null model & profile only & full map\\
\hline
5000 PSR @ 25\% & 17.0 & 1.53 & 0.89 \\
10000 PSR @ 25\% & 16.9 & 1.48 & 0.89 \\
1000 PSR @ 15\% & 44.4 & 3.20 & 1.21 \\
5000 PSR @ 15\% & 44.7 & 3.78 & 1.24 \\
10000 PSR @ 15\% & 44.6 & 3.48 & 1.11 \\
1000 PSR @ \phantom{1}5\% & 345 & 23.2 & 3.34 \\
5000 PSR @ \phantom{1}5\% & 345 & 23.5 & 2.56 \\
\end{tabular}
\end{table}
\begin{table}
\caption{The reduced $\chi^2$ values corresponding to the maps shown in Fig.~\ref{fig:compilation_fluct}.}
\label{tab:compilation2_chi}
\begin{tabular}{ r c c c }
\hline\hline
data set & null model & profile only & full map\\
\hline
weak @ 15\% & 44.7 & 2.67 & 1.05 \\
medium @ 15\% & 44.7 & 3.78 & 1.24 \\
strong @ 15\% & 44.7 & 6.44 & 1.54 \\
weak @ \phantom{1}5\% & 345 & 16.2 & 2.34 \\
medium @ \phantom{1}5\% & 345 & 23.5 & 2.56 \\
strong @ \phantom{1}5\% & 345 & 47.1 & 3.01 \\
\end{tabular}
\end{table}
\corr{It is evident from the tables that our reconstructions account for a large fraction of the data variance in all cases. The Galactic profile without local fluctuations also accounts for a large fraction of the variance, especially if the distance uncertainties are high and the fluctuation strength of the input model is weak.
For our reconstructions} the $\chi^2$ values are close to 1 for the $25\%$ and $15\%$ data sets. Therefore, we assume that our inference mechanism resolved the most relevant information in the data sets and that the prior assumptions are not too restrictive for these data sets. For the $5\%$ reconstructions the $\chi^2$ values are around $3$. This is a hint that the data might contain more information than the reconstruction resolves and that more elaborate prior assumptions might yield a better map. However, how to achieve this is a non-trivial question and we do not aim to answer it in this work.
\section{Reconstruction with real power spectrum and profile functions}
\label{sec:cheated}
The posterior map our algorithm finds depends on the prior power spectrum, the effective errors, and the profile functions, all of which are simultaneously estimated from the data. To benchmark the efficiency of this joint estimation, we investigate the case where the real power spectrum as well as the real profile functions are known, i.e.~only iterating the effective errors. The resulting map serves as an indicator whether our Ansatz with unknown hyper parameters is sensible or whether the problem is too unrestricted in that setting.
We depict the map resulting from the real hyper parameters in Fig.~\ref{fig:cheated}. As one can see, the morphology of the result does not change. \corr{More of the small-scale structure is resolved} and the intensity of the overdensity between the Sun and the Galactic center, which belongs to the ring in the original model, is more pronounced. Consequently, this map has a better reduced $\chi^2$ value of $1.95$ compared to the value of $2.56$ of our map with unknown hyper parameters. Considering the amount of unknowns, however, this is a satisfactory result. We therefore regard our estimation procedure for the hyper parameters as sensible.
\begin{figure*}
\begin{tabular}{c c c}
\begin{overpic}[width=0.33\textwidth]{mid_5000_05perc_cheat}
\put(34,87){\boldmath\color{white}\textbf{cheated}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_5000_05}
\put(34,87){\boldmath\color{white}\textbf{inferred}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_original}
\put(34,87){\boldmath\color{white}\textbf{original}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
\\
\end{tabular}
\caption{Top-down view of the reconstructed electron densities in the Galactic plane (in units of $\mathrm{cm}^{-3}$), if we use the real power spectrum and Galactic profile functions (``cheated''), the results from our algorithm (``inferred''), and the original input model.}
\label{fig:cheated}
\end{figure*}
\section{Uncertainty map}
\label{sec:uncertainty}
Here, we discuss the $1\sigma$ uncertainty map of the reconstruction of the Galaxy model with medium fluctuations using the data set with 5000 pulsars and $5\%$ distance uncertainty. We compare the uncertainty map for unknown profile and power spectrum and the uncertainty map for known profile and power spectrum (see Appendix~\ref{sec:cheated}) with the corresponding absolute errors.
The density maps can be seen in Fig.~\ref{fig:cheated}.
The uncertainty estimates $\sigma^{(\rho)}$ (see Eq.~\eqref{eq:uncertainty}) are shown in Fig.~\ref{fig:uncertainty}. These uncertainties are underestimated as they are calculated from the curvature of the negative log-posterior around its minimum (see Eq.~\eqref{eq:inverse_Hessian}), not from the full distribution. By visual comparison with the absolute error\footnote{
To calculate the absolute error the original Galaxy model is downsampled to the resolution the algorithm uses.
}, $|m^{(\rho)}-\rho|$, we estimate that the uncertainty estimates are underestimated by a factor of roughly 3. However, their morphology seems to be reliable.
\begin{figure*}
\begin{tabular}{c c}
\begin{overpic}[width=0.33\textwidth]{mid_5000_05_abs}
\put(35,97){\boldmath\color{black}{\textbf{inferred}}}
\put(40,87){\boldmath\color{black}\large{\textbf{$\sigma^{(\rho)}$}}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_5000_05_err}
\put(35,97){\boldmath\color{black}{\textbf{inferred}}}
\put(32,87){\boldmath\color{white}\large{\textbf{$|m^{(\rho)}-\rho|$}}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
\\
\begin{overpic}[width=0.33\textwidth]{mid_5000_05_cheat_abs}
\put(35,97){\boldmath\color{black}{\textbf{cheated}}}
\put(40,87){\boldmath\color{white}\large{\textbf{$\sigma^{(\rho)}$}}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
&
\begin{overpic}[width=0.33\textwidth]{mid_5000_05_cheat_err}
\put(35,97){\boldmath\color{black}{\textbf{cheated}}}
\put(32,87){\boldmath\color{white}\large{\textbf{$|m^{(\rho)}-\rho|$}}}
\put(24.1,55.65){\boldmath\color{white}$\bullet$}
\put(24.1,55.65){\boldmath\color{black}$\circ$}
\end{overpic}
\\
\end{tabular}
\caption{Top-down view on the Galactic plane, showing the uncertainty estimate ($\sigma^{(\rho)}$, left panels) and the absolute error ($|m^{(\rho)}-\rho|$, right panels) for our reconstruction $m^{(\rho)}$ in units of $\mathrm{cm}^{-3}$. The input density $\rho$ has fluctuations of medium strength and is sampled by 5000 pulsars with $5\%$ distance uncertainty.
The top row shows the scenario with unknown power spectrum and Galactic profile, the bottom row shows the scenario with known power spectrum and profile. We note that the left and right panels have different color bars.}
\label{fig:uncertainty}
\end{figure*}
\section{Uncertainties of the vertical fall-off}
\label{sec:z_profile_uncertainity}
In principle the posterior variance of $\alpha$ and $\beta$ is the diagonal of the operator $D_{(\xi)}$ (see Eq.~\eqref{eq:profile_Hamiltonian}). However, this diagonal is too large, since $\alpha$ and $\beta$ are completely degenerate with respect to a constant shift ($\alpha+c$ and $\beta-c$ yield the same profile as $\alpha$ and $\beta$). This degeneracy yields a large point variance, which is not instructive for quantifying the uncertainty of the vertical fall-off. Therefore, we project out the eigenvector corresponding to the constant shift before calculating the diagonal of $D_{(\xi)}$. This corrected diagonal is the squared $1\sigma$ uncertainty that we plot in Fig.~\ref{fig:compilation_zprof}.
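The projection can be sketched with the orthogonal projector $P = \mathbbm{1} - vv^\dagger$, where $v$ is the normalized constant vector: the diagonal of $PDP$ is then insensitive to any amount of constant-shift variance contained in $D$. The toy matrices below are illustrative stand-ins for $D_{(\xi)}$:

```python
import numpy as np

def shift_corrected_diagonal(D):
    """Diagonal of P D P, with P projecting out the constant-shift mode."""
    n = D.shape[0]
    v = np.ones(n) / np.sqrt(n)       # normalized constant-shift eigenvector
    P = np.eye(n) - np.outer(v, v)    # orthogonal projector onto its complement
    return np.diag(P @ D @ P)

# Toy covariance plus a huge constant-shift contribution.
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
D0 = A @ A.T                          # some positive semi-definite covariance
```

Adding an arbitrarily large multiple of the all-ones matrix to `D0` leaves the corrected diagonal unchanged, which is exactly the desired insensitivity to the degenerate mode.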
\end{appendix}
\bibliographystyle{aa}
\section{Introduction}
In \cite{FGV13} the authors constructed via Dirichlet form techniques a reflected distorted Brownian motion in $E:=[0,\infty)^n$, $n\in\mathbb{N}$, with sticky boundary behavior which solves the system of stochastic differential equations
\begin{align} \label{sde!}
d\mathbf{X}_{t}^i=\mathbbm{1}_{(0,\infty)}\big(\mathbf{X}^i_t\big)\,\Big(\sqrt{2}\,dB^i_t+\partial_i\ln \varrho \big(\mathbf{X}_t\big)\,dt\Big)+\frac{1}{\beta}\,\mathbbm{1}_{\{0\}}\big(\mathbf{X}^i_t\big)\,dt,\quad i\in I,
\end{align}
or equivalently
\begin{multline}
d\mathbf{X}_{t}^i=\mathbbm{1}_{(0,\infty)}\big(\mathbf{X}^i_t\big)\,\Big(\sqrt{2}\,dB^i_t+\partial_i\ln \varrho \big(\mathbf{X}_t\big)\,dt\Big)+d\ell_t^{0,i},\\
\text{with}\quad \ell_t^{0,i}:=\frac{1}{\beta}\int_0^t\mathbbm{1}_{\{0\}}\big(\mathbf{X}_{s}^i\big)\,ds,\quad i\in I,
\end{multline}
weakly for quasi-every starting point with respect to the underlying Dirichlet form. Here $I:=\{1,\ldots,n\}$, $\beta$ is a positive real constant and $(B^i_t)_{t\ge 0}$ are one-dimensional independent standard Brownian motions, $i\in I$. $\varrho$ is a continuously differentiable density on $E$ such that for all $B \subset I$, $\varrho$ is almost everywhere positive on $E_+(B)$ with respect to the Lebesgue measure and for all $\varnothing\not=B\subset I$, $\sqrt{\varrho\big|_{E_{+}(B)}}$ is in the Sobolev space of weakly differentiable functions on $E_{+}(B)$, square integrable together with its derivative, where $E_{+}(B):=\{ x \in E \big|~x_i >0 \text{ for all } i \in B, ~x_i=0 \text{ for all } i \in I \setminus B \}$. $\varrho$ continuously differentiable on $E$ implies that the drift part $\big(\partial_i\ln \varrho \big)_{i\in I}$ is continuous on $\{ \varrho > 0 \}$. Moreover, $\ell_t^{0,i}$ is the central local time of the solution to (\ref{sde!}), i.e., it holds almost surely
\begin{align*}
\ell^{0,i}_t=\frac{1}{\beta}\int_0^t\mathbbm{1}_{\{0\}}\big(\mathbf{X}_{s}^i\big)\,ds
&= \lim_{\varepsilon \downarrow 0} \frac{1}{2 \varepsilon} \int_0^t \mathbbm{1}_{(-\varepsilon,\varepsilon)} \big(\mathbf{X}_{s}^i\big)\, d\langle \mathbf{X}^i \rangle_{s}.
\end{align*}
A solution to the associated martingale problem is even provided under the weaker assumptions that $\varrho$ is almost everywhere positive, integrable on each set $E_+(B)$ with respect to the Lebesgue measure and that the respective Hamza condition is fulfilled.\\
This kind of stochastic differential equation is strongly related to the sticky Brownian motion on the half-line $[0,\infty)$ (which is occasionally also called Brownian motion with delayed reflection or slowly reflecting Brownian motion). In \cite{EP12} the authors study Brownian motion on $[0,\infty)$ with sticky boundary behavior and provide existence and uniqueness of solutions to the SDE system
\begin{align}\label{equEP12}
\left\{
\begin{array}{l}
dX_t=\frac{1}{2}d\ell_t^{0+}\big(X\big)+\mathbbm{1}_{(0,\infty)}\big(X_t\big)\,dB_t\\
\mathbbm{1}_{\{0\}}\,dt=\frac{1}{\mu}\,d\ell_t^{0+}\big(X\big),
\end{array}
\right.
\end{align}
for reflecting Brownian motion $X$ in $[0,\infty)$ sticky at $0$, where $X:=\big(X_t\big)_{t\ge 0}$ starts at $x\in [0,\infty)$, $\mu\in (0,\infty)$ is a given constant, $\ell^{0+}\big(X\big)$ is the right local time of $X$ at $0$ and $B:=\big(B_t\big)_{t\ge 0}$ is the standard Brownian motion. In particular, H.-J.~Engelbert and G.~Peskir show that the system (\ref{equEP12}) has a jointly unique weak solution and moreover, they prove that the system (\ref{equEP12}) has no strong solution, thus verifying Skorokhod's conjecture of the non-existence of a strong solution in this case. For an outline of the historical evolution in the study of sticky Brownian motion we refer to the references given in \cite{EP12} and also to \cite{KPS10}.\\
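Before turning to the precise statements, it may help to see sticky reflected Brownian motion numerically via the classical time-change construction (closely related to the random time changes discussed below): reflected Brownian motion and its local time are obtained from a free Brownian path through Lévy's identity $(|B|,\ell)\sim(B-\min B,-\min B)$, and time is then slowed down at the boundary via $A_t = t + \beta\,\ell_t$. The sketch below is only a toy illustration with made-up step sizes and stickiness parameter, not the construction used in this paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n_steps, dt, beta = 100_000, 1e-5, 0.5   # illustrative discretization and stickiness

# Free Brownian path; Levy: (reflected BM, local time) ~ (B - min B, -min B).
B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n_steps))))
m = np.minimum.accumulate(B)
Y = B - m            # reflected Brownian motion
ell = -m             # its local time at 0

# Random time change A_t = t + beta * ell_t; the sticky process is Y at A^{-1}.
t = dt * np.arange(n_steps + 1)
A = t + beta * ell
s_grid = np.linspace(0.0, A[-1], n_steps)
idx = np.minimum(np.searchsorted(A, s_grid), n_steps)
X = Y[idx]           # sticky reflected Brownian motion sampled on s_grid
```

In the sampled path the Lebesgue time spent exactly at $0$ is approximately $\beta$ times the accumulated local time, which is precisely the sticky boundary behavior encoded in (\ref{sde!}).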
In view of the results provided in \cite{EP12}, the construction of a weak solution as given in \cite{FGV13} is the only reasonable one. However, the construction via Dirichlet form techniques has the well-known disadvantage that the constructed process solves the underlying stochastic differential equation only for quasi-every starting point with respect to the Dirichlet form. Hence, in the present paper we construct a transition semigroup by Girsanov transformations and investigate its properties in order to strengthen the results of \cite{FGV13}. In this way, we obtain a diffusion with strong Feller transition function which solves (\ref{sde!}) for {\em every} starting point in the state space $E$ and furthermore, we also show an ergodicity theorem for {\em every} starting point in the state space $E$ under the assumptions on the density given in Condition \ref{conditions}. Moreover, we establish connections between the analytic Dirichlet form construction and classical probabilistic methods. Using these relations, we additionally prove uniqueness of weak solutions to (\ref{sde!}).\\
In the theory of Dirichlet forms it is a common approach to use results of the regularity theory of elliptic partial differential equations in order to deduce that the associated resolvent and semigroup admit a certain regularity and thereby, it is possible to construct a pointwise solution to the underlying martingale problem or stochastic differential equation for an explicitly known set of starting points under very weak assumptions on the density $\varrho$. For example, this has recently been realized in case of distorted Brownian motion on $\mathbb{R}^d$, $d\in \mathbb{N}$, in \cite{AKR03}, in case of absorbing distorted Brownian motion on $\Omega \subset \mathbb{R}^d$, $d\in \mathbb{N}$, in \cite{BGS13}, in the case of reflecting Brownian motion on Lipschitz domains in \cite{FT96} and in case of reflecting distorted Brownian motion on $\Omega \subset \mathbb{R}^d$, $d\in \mathbb{N}$, under some smoothness condition on the boundary $\partial \Omega$ in \cite{FG07} and \cite{BG14}. However, in the present setting which involves not only the Lebesgue measure but also multiple measures on the boundary of the state space $E$ due to the product structure of the problem, the elliptic regularity theory is not yet investigated and from our present point of view the required results are out of reach. For this reason, we choose the probabilistic approach of random time changes and Girsanov transformations in order to obtain a strong Feller transition semigroup which seems to be a new approach in this area.\\
Our results apply to the so-called wetting model (also referred to as the Ginzburg-Landau $\nabla\phi$ interface model with entropic repulsion and pinning). More precisely, in a finite volume $\Lambda\subset \mathbb{Z}^d$, $d\in\mathbb{N}$, the scalar field $\boldsymbol{\phi}_t:=\big(\boldsymbol{\phi}_t(x)\big)_{x\in\Lambda}$, $t\ge 0$, is described by the stochastic differential equations
\begin{multline}\label{sde}
d\boldsymbol{\phi}_t(x)=-\mathbbm{1}_{(0,\infty)}\big(\boldsymbol{\phi}_t(x)\big)\sum_{\stackunder{\scriptscriptstyle{|x-y|=1}}{\scriptscriptstyle{y\in\Lambda}}}V'\big(\boldsymbol{\phi}_t(x)-\boldsymbol{\phi}_t(y)\big)\,dt\\
+\mathbbm{1}_{(0,\infty)}\big(\boldsymbol{\phi}_t(x)\big)\sqrt{2}dB_t(x)+d\ell_{t}^{\scriptscriptstyle{0}}(x),\quad x\in\Lambda,
\end{multline}
subject to the conditions:
\begin{align*}
&\boldsymbol{\phi}_t(x)\ge 0,\quad \ell_{t}^{\scriptscriptstyle{0}}(x)\mbox{ is non-decreasing with respect to }t,\quad \ell^{\scriptscriptstyle{0}}_{0}(x)=0,\\
&\int_0^\infty \boldsymbol{\phi}_t(x)\,d\ell_{t}^{\scriptscriptstyle{0}}(x)=0,\\
&\beta \ell_{t}^{\scriptscriptstyle{0}}(x)=\int_0^t\mathbbm{1}_{\{0\}}\big(\boldsymbol{\phi}_s(x)\big)\,ds\quad\mbox{for fixed }\beta> 0,\nonumber\\
\end{align*}
where $\ell_{t}^{\scriptscriptstyle{0}}(x)$ denotes the \emph{local time} of $\boldsymbol{\phi}_t(x)\mbox{ at }0$. Here $|\cdot|$ denotes the norm induced by the euclidean scalar product on $\mathbb{R}^d$, $V\in C^2(\mathbb{R})$ is a symmetric, strictly convex potential and $\big\{(B_t(x))_{t\ge 0}\,|\,x\in\Lambda\big\}$ are independent standard Brownian motions. In dimension $d=2$ this model describes the wetting of a solid surface by a fluid. More details on interface models are e.g. presented in~\cite{Ga02}, \cite{Fu05}.\\
In \cite[Sect.~15.1]{Fu05} J.-D.~Deuschel and T.~Funaki investigated (\ref{sde}) and gave reference to classical solution techniques as developed e.g. in \cite{WaIk89}. The methods provided therein require more restrictive assumptions on the drift part than in our situation (e.g., the drift is assumed to be bounded and Lipschitz continuous) and, moreover, do not apply directly (the geometry and the boundary behavior differ). First steps in the direction of applying \cite{WaIk89} are discussed in \cite{Fu05} by J.-D. Deuschel and T. Funaki.\\
As far as we know the only reference that applies to the system of stochastic differential equations (\ref{sde}) is \cite{Gra88}. By means of a suitable choice of the coefficients the system of equations given by \cite[(II.1)]{Gra88} coincides with (\ref{sde}), but amongst others the drift part is also assumed to be Lipschitz continuous and bounded. For this reason, it is not possible to apply the results of \cite{Gra88} to the setting investigated by J.-D.~Deuschel and T.~Funaki, since the potential $V$ naturally causes an unbounded drift (see also Example \ref{example}).\\
Our paper is organized as follows. In Section \ref{secmain} we state the required conditions on the density as well as our main results. In Section \ref{DFtrans} we recall some facts about sticky Brownian motion and present the connections of the Dirichlet form constructed in \cite{FGV13} to classical methods from probability theory. In particular, we establish relations to random time changes and Girsanov transformations. In Section \ref{Feller} a Feller transition semigroup is constructed under the conditions given in Section \ref{secmain}. This semigroup is used to construct a pointwise solution to (\ref{sde!}) and the corresponding Dirichlet form is identified. Moreover, in Section \ref{appl} the setting is applied to the dynamical wetting model. Finally, we prove uniqueness of weak solutions to (\ref{sde!}) in Section \ref{uniqueness}.
\section{Main results}\label{secmain}
In the following we denote by $dx_i$ the one dimensional Lebesgue measure and by $\delta_0^i$ the Dirac measure in $0$, where $i=1,\dots,n$ gives reference to the component of $x=(x_1,\dots, x_n) \in E=[0,\infty)^n$. Define the product measure $d\mu_n:= \prod_{i=1}^n (dx_i + \beta \delta_0^i)$ on $(E,\mathcal{B}(E))$. We denote by $d_{\text{euc}}$ the Euclidean metric.\\
First, we would like to note that the proofs of the results in \cite{FGV13} are still valid under the following weaker assumptions:
\begin{theorem} \label{thmimpro}
All results of \cite{FGV13} hold true under the assumption that $\varrho$ fulfills
\begin{enumerate}
\item $\varrho$ is $\mu_n$-a.e. positive on $E$ such that $\varrho \in L^1(E;\mu_n)$,
\item $\varrho \in C(E)$,
\item $\sqrt{\varrho_{|E_+(B)}} \in H^{1,2}_{\text{loc}}(E_+(B))$ for every $\emptyset \neq B \subset I$,
\item $\text{cap}(\{\varrho=0\})=0$ (with respect to the form $(\mathcal{E}^{\varrho},D(\mathcal{E}^{\varrho}))$ defined below).
\end{enumerate}
\end{theorem}
We state the following proposition in order to be able to formulate suitable conditions on the density $\varrho$ afterwards:
\begin{proposition} \label{propindep}
There exists a diffusion process $\mathbb{M}=(\Omega,\mathcal{F},(\mathcal{F}_t)_{t \geq 0}, (X_t)_{t \geq 0}, (\mathbb{P}_x)_{x \in E})$ (called $n$ independent sticky Brownian motions on $[0,\infty)$) solving the SDE
\begin{align*} dX^i_t= \mathbbm{1}_{(0,\infty)}(X^i_t) \sqrt{2} dB^i_t + \frac{1}{\beta}~ \mathbbm{1}_{\{0\}}(X^i_t)dt, \quad i=1,\dots,n,
\end{align*}
where $(B_t)_{t \geq 0}$ is an $n$-dimensional standard Brownian motion, and the transition semigroup $(p_t^{\beta,n})_{t >0}$ of $\mathbb{M}$ has the doubly Feller property, i.e. it is a Feller transition semigroup which admits additionally the strong Feller property (see Definition \ref{defDF}). Moreover,
the Dirichlet form associated to $n$ independent sticky Brownian motions on $[0,\infty)$ is given by the conservative, strongly local, strongly regular Dirichlet form $(\mathcal{E}^n,D(\mathcal{E}^n))$, i.e., the closure on $L^2(E;\mu_n)$ of the bilinear form
\[ \mathcal{E}^n(f,g)= \int_E \sum_{i=1}^n \mathbbm{1}_{\{ x_i \neq 0 \}} ~ \partial_i f ~\partial_i g ~ d\mu_n \ \ \ \text{ for } f,g \in C_c^1(E). \]
\end{proposition}
\begin{condition} \label{conditions}
$\varrho=\phi^2$ is strictly positive, fulfills the conditions (i)-(iii) of Theorem \ref{thmimpro} and
\begin{align} \label{condlnphi} \nabla \ln \phi= \frac{\nabla \phi}{\phi} \in L^{\infty}_{\text{loc}}(E;\mu_n).
\end{align}
Moreover, for every $t >0$ and every compact set $D \subset E$ holds
\begin{align} \lim_{k \rightarrow \infty} \sup_{x \in D} ~\mathbb{E}_x(\mathbbm{1}_{\{ \tau_k \leq t \}}~Z_t) =0, \label{condZ}
\end{align}
where $(Z_t)_{t \geq 0}$ is given by
\[ Z_t=\exp\big(\sqrt{2} \sum_{i=1}^n \int_0^t \partial_i \ln \phi(X_s) \mathbbm{1}_{(0,\infty)}(X_s^i) dB_s^i - \sum_{i=1}^n \int_0^t (\partial_i \ln \phi(X_s))^2 \mathbbm{1}_{(0,\infty)}(X_s^i) ds\big) \]
and $\tau_k:= \inf \{ t>0|~ X_t \notin [0,k)^n \}$ with $(X_t)_{t \geq 0}$ and $(B_t)_{t \geq 0}$ as stated in Proposition \ref{propindep}.
\end{condition}
\begin{remark}
\begin{enumerate}
\item All proofs of the following results are still valid if $\varrho$ is not necessarily strictly positive, but Condition \ref{conditions} additionally requires (iv) of Theorem \ref{thmimpro},
\begin{align} \nabla \ln \phi= \frac{\nabla \phi}{\phi} \in L^{\infty}_{\text{loc}}(E \backslash \{ \varrho=0\};\mu_n),
\end{align}
$D$ is an arbitrary compact subset of $E \backslash \{ \varrho =0\}$ and $\tau_k$ is defined by
\[ \tau_k:= \inf \{ t>0|~ X_t \notin [0,k)^n \backslash B_{\frac{1}{k}}(\{ \varrho=0\}) \},\]
where $B_{\frac{1}{k}}(\{ \varrho=0\}):=\{ x \in E|~\inf_{y \in \{\varrho=0\}}~ d_{\text{euc}}(x,y) \leq \frac{1}{k} \}$. In this case, a strong Feller process on the state space $E_1:= E \backslash \{ \varrho=0\}$ can be constructed and the corresponding Dirichlet form is defined analogously but on the space $L^2(E_1;\varrho \mu)$. The additional condition guarantees that the constructed process (using a Girsanov transformation by $(Z_t)_{t \geq 0}$) never hits the set $\{ \varrho=0\}$.
\item (\ref{condlnphi}) is equivalent to $\nabla \phi \in L^{\infty}_{\text{loc}}(E;\mu_n)$, since $\phi$ is assumed to be strictly positive and continuous.
\item (\ref{condZ}) holds for example if $\sup_{x \in D} \mathbb{E}_x(Z_t^p) < \infty$ for some $p>1$ (see Remark \ref{remZ}).
\end{enumerate}
\end{remark}
Under the above assumptions on $\varrho$ it holds:
\begin{theorem} \label{thmmain1}
There exists a conservative diffusion process $\mathbb{M}^{\varrho}=(\Omega,\mathcal{F},(\mathcal{F}_t)_{t \geq 0}, (X_t)_{t \geq 0}, (\mathbb{P}^{\varrho}_x)_{x \in E})$ on $E$ with strong Feller transition function $(p_t)_{t \geq 0}$, i.e., $p_t(\mathcal{B}_b(E)) \subset C_b(E)$, such that the associated Dirichlet form is given by the closure of the symmetric bilinear form $(\mathcal{E}^{\varrho},\mathcal{D})$ on $L^2(E;\varrho \mu_n)$, where
\begin{align*} \mathcal{E}^{\varrho}(f,g)&:= \sum_{\emptyset \neq B \subset \{1,\dots,n\}} \mathcal{E}_B(f,g) \\
&= \int_E \sum_{i=1}^n \mathbbm{1}_{\{ x_i \neq 0 \}} ~ \partial_i f ~\partial_i g ~ \varrho d\mu_n \ \ \ \text{ for } f,g \in \mathcal{D}:=C_c^1(E)
\end{align*}
with
\[ \mathcal{E}_B(f,g):= \int_{E} \sum_{i \in B} \partial_i f ~ \partial_i g~\varrho d\lambda_B^{n,\beta}, \]
where $d\lambda_B^{n,\beta}:=\beta^{n-\#B} \prod_{j \in B} dx_j \prod_{j \in B^c} \delta_0^j$. In particular, $(p_t)_{t \geq 0}$ fulfills the absolute continuity condition \cite[(4.2.9)]{FOT94}, i.e., the transition probabilities $p_t(x,\cdot)$, $x \in E$, $t>0$, given by $p_t(x,A):=\mathbb{P}_x(X_t \in A)$, $A \in \mathcal{B}(E)$, are absolutely continuous with respect to $\varrho \mu_n$.
\end{theorem}
\begin{theorem} \label{thmmain2}
Let $\mathbb{M}^{\varrho}$ be the diffusion process of Theorem \ref{thmmain1}. It holds for each $i=1,\dots,n$
\begin{align} \label{main} X_t^i=X_0^i + \sqrt{2} \int_0^t \mathbbm{1}_{(0,\infty)}(X_s^i) dB_s^i + \int_0^t \mathbbm{1}_{(0,\infty)}(X_s^i)~ \partial_i \ln \varrho(X_s) ds + \frac{1}{\beta} \int_0^t \mathbbm{1}_{\{0\}}(X_s^i) ds
\end{align}
$\mathbb{P}^{\varrho}_x$-a.s. for every $x \in E$, where $(B_t^i)_{t \geq 0}$, $i=1,\dots,n$, are independent standard Brownian motions. Moreover, it holds
\begin{align} \label{eqnergo} \lim_{t \rightarrow \infty} \frac{1}{t} \int_0^t F(X_s) ds= \frac{\int_E F \varrho d\mu_n}{\int_E \varrho d\mu_n} \end{align}
$\mathbb{P}^{\varrho}_x$-a.s. for every $x \in E$ and $F \in L^1(E; \varrho \mu_n)$.
\end{theorem}
\begin{remark}
Let $\Gamma \subset \partial E$ such that $\int_{\Gamma} \varrho d\mu_n >0$. Then it follows by (\ref{eqnergo}) that
\[ \lim_{t \rightarrow \infty} \frac{1}{t} \int_0^t \mathbbm{1}_{\Gamma}(X_s) ds= \frac{\int_{\Gamma} \varrho d\mu_n}{\int_E \varrho d\mu_n} >0 \]
$\mathbb{P}^{\varrho}_x$-a.s. for every $x \in E$. This confirms the sticky behavior of the process on the boundary.
\end{remark}
\begin{theorem}
The solution to (\ref{main}) is unique in law.
\end{theorem}
\begin{remark} Let $\varrho:E \rightarrow (0,\infty)$, $\varrho=\exp(-2H)$, be defined by a potential $V$ with nearest neighbor pair interaction, i.e., $H$ is given by
\begin{align} \label{hamilt} H(x_1,\cdots,x_n)= \frac{1}{4} \sum_{\stackunder{|i-j|=1}{i,j \in \{0,\dots,n+1\}}} V(x_i-x_j),
\end{align}
where $x_0:=x_{n+1}:=0$ and $V:\mathbb{R} \rightarrow [-b,\infty)$, $b \in [0, \infty)$, fulfills the conditions of \cite[(2.2)]{Fu05}:
\begin{enumerate}
\item[(i)] $V \in C^2(\mathbb{R})$,
\item[(ii)] $V$ is symmetric, i.e., $V(r)=V(-r)$ for all $r \in \mathbb{R}$,
\item[(iii)] $V$ is strictly convex, i.e., $c_{-} \leq V^{\prime \prime}(r) \leq c_{+}$ for all $r \in \mathbb{R}$ and some constants $c_{-},c_{+} >0$.
\end{enumerate}
Denote by $\phi:=\sqrt{\varrho}=\exp(-H)$ the square root of $\varrho$.\\
Define $\mathbb{V}^{\prime}(i,x)$ for $i=1,\dots,n$ and $x \in E$ by
\[ \mathbb{V}^{\prime}(i,x):= \sum_{\stackunder{|i-j|=1}{j \in \{0,\dots,n+1\}}} V^{\prime}(x_i-x_j). \]
In this case, Condition \ref{conditions} is fulfilled and the stated results hold accordingly with the drift function given by $\partial_i \ln \varrho=- \mathbb{V}^{\prime}(i,\cdot)$, $i=1,\dots,n$.
\end{remark}
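For a concrete potential the drift identity $\partial_i \ln \varrho = -\mathbb{V}^{\prime}(i,\cdot)$ can be checked by finite differences. The following Python sketch does so for the illustrative choice $V(r)=r^2$, which satisfies (i)-(iii); the evaluation point and the difference step are arbitrary.

```python
import math

n = 4

def V(r):      # illustrative potential: symmetric, strictly convex, C^2
    return r * r

def dV(r):
    return 2.0 * r

def H(x):      # Hamiltonian (\ref{hamilt}) with boundary condition x_0 = x_{n+1} = 0
    y = [0.0] + list(x) + [0.0]
    return 0.25 * sum(V(y[i] - y[j])
                      for i in range(n + 2) for j in range(n + 2)
                      if abs(i - j) == 1)

def log_rho(x):                     # ln(rho) = -2 H
    return -2.0 * H(x)

x = [0.3, 1.1, 0.7, 0.2]
h = 1e-6
max_err = 0.0
for i in range(n):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    numeric = (log_rho(xp) - log_rho(xm)) / (2.0 * h)   # central difference for d_i ln(rho)
    y = [0.0] + x + [0.0]
    # -V'(i,x) = -(V'(x_i - x_{i-1}) + V'(x_i - x_{i+1}))
    analytic = -(dV(y[i + 1] - y[i]) + dV(y[i + 1] - y[i + 2]))
    max_err = max(max_err, abs(numeric - analytic))
# max_err is tiny: for a quadratic V the central difference is exact up to rounding
```

The factor $\frac{1}{4}$ in (\ref{hamilt}) (which counts every nearest-neighbor pair twice) combines with $\varrho=\exp(-2H)$ to give exactly the drift $-\mathbb{V}^{\prime}(i,\cdot)$ without further constants.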
\section{Sticky Brownian motion and Dirichlet form transformations} \label{DFtrans}
\subsection{Sticky Brownian motion on the halfline} \label{secsticky}
Define the Dirichlet form $(\hat{\mathcal{E}},D(\hat{\mathcal{E}}))$ as the closure of
\[ \hat{\mathcal{E}}(f,g):= \int_{[0,\infty)} f^{\prime}(x) g^{\prime}(x)dx, \ \ f,g \in C_c^1([0,\infty)), \]
on $L^2([0,\infty);dx)$. It is well-known that reflecting Brownian motion is associated to $(\hat{\mathcal{E}},D(\hat{\mathcal{E}}))$ and $D(\hat{\mathcal{E}})=H^{1,2}((0,\infty))$ is the Sobolev space of order one.\\
Let $(\tilde{B}_t)_{t \geq 0}$ be a standard Brownian motion defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Then $\hat{X}_t:=|x + \sqrt{2} \tilde{B}_t|$, $t \geq 0$, yields reflecting Brownian motion on $[0,\infty)$ starting at $x \in [0,\infty)$ and by Tanaka's formula
\begin{align} \label{refltanaka} \hat{X}_t=x + \sqrt{2} \hat{B}_t + L_t^{0+}, \ \ t \geq 0,
\end{align}
where $\hat{B}_t:=\int_0^t \text{sgn}(x+ \sqrt{2}\tilde{B}_s) d\tilde{B}_s$, $t \geq 0$, is a standard Brownian motion and $(L_t^{0+})_{t \geq 0}$ is the right local time in $0$, i.e.,
\[ L_t^{0+}= \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \int_0^t \mathbbm{1}_{[0,\epsilon)}(\hat{X}_s) ds \]
in probability. Here, we differ from classical notation by the factor $\sqrt{2}$ (see also Remark \ref{remsqrt2}). The Dirichlet form associated to $(\hat{X}_t)_{t\geq 0}$ is $(\hat{\mathcal{E}},D(\hat{\mathcal{E}}))$ and $(L_t^{0+})_{t \geq 0}$ is an additive functional which is in Revuz correspondence with the Dirac measure $\delta_0$ in $0$. Consider the additive functional $A_t:= t + \beta L_t^{0+}$, $t \geq 0$, for some real constant $\beta >0$. Note that $A_0=0$ and $A_t \rightarrow \infty$ a.s. as $t \rightarrow \infty$. Then sticky Brownian motion on $[0, \infty)$ is usually constructed by a random time change using the inverse $\tau(t)$ of $A_t$. More precisely,
$X_t:= \hat{X}_{\tau(t)}$ (starting in $x$) solves the stochastic differential equation
\begin{align} \label{1dstickysde} dX_t= \mathbbm{1}_{(0,\infty)}(X_t) \sqrt{2} dB_t + \frac{1}{\beta}~ \mathbbm{1}_{\{0\}}(X_t)dt,
\end{align}
where $(B_t)_{t \geq 0}$ is a standard Brownian motion.
For details on Feller's Brownian motions and in particular, sticky Brownian motion and its transition semigroup, see e.g. \cite{EP12}, \cite{KPS10}, \cite{GS72} or \cite{Kni81}.\\
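The time change construction can be illustrated by a naive discretization. The following Python sketch is purely heuristic: the local time is approximated by an $\epsilon$-occupation time, and the grid parameters, the window $\epsilon$ and the seed are arbitrary choices without any convergence statement.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt, beta = 10.0, 1e-4, 1.0
n = int(T / dt)
eps = 0.05   # window used to approximate the local time (illustrative choice)

# reflecting Brownian motion \hat{X}_t = |x + sqrt(2) B_t| started at x = 0
b = np.concatenate(([0.0], np.cumsum(np.sqrt(2.0 * dt) * rng.standard_normal(n))))
xhat = np.abs(b)

# L_t ~ (1/eps) * Leb{s <= t : \hat{X}_s in [0, eps)} and A_t = t + beta L_t
occ = np.concatenate(([0.0], np.cumsum((xhat[:-1] < eps) * dt)))
A = dt * np.arange(n + 1) + beta * occ / eps
monotone = bool(np.all(np.diff(A) > 0.0))   # A is strictly increasing by construction

# time change: X_s := \hat{X}_{tau(s)} with tau the generalized inverse of A
s_grid = np.linspace(0.0, A[-1], n + 1)
tau_idx = np.minimum(np.searchsorted(A, s_grid), n)
X = xhat[tau_idx]

# sticky behavior: the time-changed path spends a positive fraction of time near 0
frac_near_zero = float(np.mean(X < eps))
```

Intervals on which $A$ grows at the fast rate $1+\beta/\epsilon$ are stretched by the inverse time change, which is precisely how the positive occupation time at the boundary arises.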
In \cite[Chapter 6]{FOT94} and \cite[Chapter 5]{ChFu11} it is presented how a random time change by an additive functional affects the underlying Dirichlet form. Let $\mu$ denote the Revuz measure corresponding to $(A_t)_{t \geq 0}$. Clearly, $d\mu=dx + \beta \delta_0$. In particular, $\mu$ has full support $[0,\infty)$. Thus, the Dirichlet form $(\mathcal{E},D(\mathcal{E}))$ on $L^2([0,\infty);\mu)$ associated to $(X_t)_{t \geq 0}$ has the representation
\[ \mathcal{E}(f,g)=\hat{\mathcal{E}}(f,g) \ \ f,g \in D(\mathcal{E})=D(\hat{\mathcal{E}}) \cap L^2([0,\infty);\mu). \]
In particular, $D(\mathcal{E})=H^{1,2}((0,\infty)) \cap L^2([0,\infty);\mu) = H^{1,2}((0,\infty))$ by Sobolev embedding. Moreover, $C_c^1([0,\infty))$ is dense in $D(\mathcal{E})$ by \cite[Theorem 5.2.8(i)]{ChFu11} and thus, it is a special standard core of $(\mathcal{E},D(\mathcal{E}))$. Hence, the closure of
\begin{align} \label{1dform}
\mathcal{E}(f,g)= \int_{[0,\infty)} f^{\prime}(x) g^{\prime}(x)dx= \int_{[0,\infty)} \mathbbm{1}_{(0,\infty)}(x)~f^{\prime}(x) g^{\prime}(x)d\mu, \ \ f,g \in C_c^1([0,\infty)),
\end{align}
on $L^2([0,\infty);\mu)$ is the Dirichlet form associated to $(X_t)_{t \geq 0}$.
\begin{remark} \label{remsqrt2}
Note that our notion for the solution to the equations (\ref{refltanaka}) and (\ref{1dstickysde}) as reflecting Brownian motion and sticky reflecting Brownian motion on $[0,\infty)$ respectively differs by the factor $\sqrt{2}$ from classical literature in view of the underlying SDE (\ref{sde!}). If $(Y^{\gamma}_t)_{t \geq 0}$ solves
\[ dY^{\gamma}_t = \mathbbm{1}_{(0,\infty)}(Y^{\gamma}_t) dB_t + \frac{1}{\gamma} \mathbbm{1}_{\{0\}}(Y_t^{\gamma}) dt ~\text{ for } \gamma >0, \]
we obtain the solution to (\ref{1dstickysde}) by setting $X_t:=\sqrt{2}~ Y^{\sqrt{2}\beta}_t$. This identity is useful in order to derive the resolvent density and transition density for the solution to (\ref{1dstickysde}).
\end{remark}
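The scaling identity of the preceding remark also holds step by step for a naive Euler scheme driven by the same Brownian increments. The scheme below, which clamps negative values to $0$, is only an illustration and not a convergent discretization of the sticky behavior; all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, dt, n = 0.7, 1e-3, 20_000
gamma = np.sqrt(2.0) * beta          # Y carries the parameter sqrt(2) * beta
dB = np.sqrt(dt) * rng.standard_normal(n)

x = np.zeros(n + 1)   # naive Euler for dX = 1_{(0,oo)}(X) sqrt(2) dB + (1/beta) 1_{{0}}(X) dt
y = np.zeros(n + 1)   # naive Euler for dY = 1_{(0,oo)}(Y) dB + (1/gamma) 1_{{0}}(Y) dt
for k in range(n):
    x[k + 1] = max(x[k] + (np.sqrt(2.0) * dB[k] if x[k] > 0.0 else dt / beta), 0.0)
    y[k + 1] = max(y[k] + (dB[k] if y[k] > 0.0 else dt / gamma), 0.0)

# X and sqrt(2) * Y^{sqrt(2) beta} agree pathwise up to floating-point rounding
max_dev = float(np.max(np.abs(x - np.sqrt(2.0) * y)))
```

Indeed, multiplying the $Y$-recursion by $\sqrt{2}$ reproduces the $X$-recursion exactly, since $\sqrt{2}/\gamma = 1/\beta$ and the indicator functions agree.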
Let $F$ be a locally compact separable metric space and denote by $C_0(F):=\{ f \in C(F)|~ \forall \epsilon >0~ \exists K \subset F \text{ compact }: |f(x)| < \epsilon ~ \forall x \in F \backslash K \}$ the space of continuous functions on $F$ vanishing at infinity. We can specify the resolvent and transition semigroup of sticky Brownian motion on $[0,\infty)$. \cite[Corollary 3.10, Corollary 3.11]{KPS10} state the following (see also \cite[Section 6.1]{Kni81}):
\begin{theorem} \label{thmdensity}
The transition function of sticky Brownian motion on $[0,\infty)$ yields a Feller semigroup on $C_0([0,\infty))$, i.e., $p_t(C_0([0,\infty))) \subset C_0([0,\infty))$ and $\lim_{t \downarrow 0} \Vert p_t f -f \Vert_{\infty} =0$ for each $f \in C_0([0,\infty))$. For $\lambda >0$, $x,y \in [0,\infty)$, the resolvent kernel $r_{\lambda}^{\beta}(x,dy)$ of the Brownian motion with sticky origin (i.e., the solution to (\ref{1dstickysde})) is given by
\begin{align}
r_{\lambda}^{\beta}(x,dy)= \frac{r_{\lambda}^D(\frac{x}{\sqrt{2}},\frac{y}{\sqrt{2}})}{\sqrt{2}}\, dy + \frac{1}{\sqrt{\lambda}+ \beta \lambda } \big(e^{-\sqrt{\lambda}(x+y)}\,dy + \beta ~e^{-\sqrt{\lambda} x}~ \delta_0(dy)\big),
\end{align}
where $r_{\lambda}^D(x,y)= \frac{1}{\sqrt{2\lambda}} (e^{-\sqrt{2\lambda}|x-y|}-e^{-\sqrt{2\lambda}(x+y)})$ is the resolvent density of Brownian motion with Dirichlet boundary conditions.
Furthermore, by the inverse Laplace transform it follows that, for $t >0$, the transition kernel $p^{\beta}(t,x,dy)$ of the Brownian motion with sticky origin is given by
\begin{align} \label{semigroup}
p^{\beta}(t,x,dy)=\frac{p^D(t,\frac{x}{\sqrt{2}},\frac{y}{\sqrt{2}})}{\sqrt{2}}\, dy + \sqrt{2}~ g_{0,\sqrt{2} \beta}\big(t,\tfrac{x+y}{\sqrt{2}}\big)\,dy + \sqrt{2} \beta~ g_{0,\sqrt{2} \beta}\big(t,\tfrac{x}{\sqrt{2}}\big)~ \delta_0(dy),
\end{align}
where $p^D(t,x,y)=p(t,x,y)-p(t,x,-y)$ is the transition density for Brownian motion with Dirichlet boundary conditions, $p(t,x,y)= \frac{1}{\sqrt{2\pi t}} e^{-\frac{(x-y)^2}{2t}}$ and
\[ g_{0,\gamma}(t,x)= \frac{1}{\gamma} \exp(\frac{2x}{\gamma}+\frac{2t}{{\gamma}^2})~ \textnormal{erfc}(\frac{x}{\sqrt{2t}} + \frac{\sqrt{2 t}}{\gamma}), \ \ \text{for } \gamma >0,~ t >0,~ x \geq 0,\]
with the complementary error function $\textnormal{erfc}(x)=\frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-z^2} dz$, $x \in \mathbb{R}$.
\end{theorem}
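As a plausibility check, the total mass of $p^{\beta}(t,x,\cdot)$ can be evaluated numerically. In the Python sketch below the starting point enters in the scaled form $\frac{x}{\sqrt{2}}$, in accordance with the scaling relation of Remark \ref{remsqrt2}; the parameters and the truncation of the integration domain are arbitrary illustrative choices.

```python
import math

def p_free(t, x, y):       # Gaussian kernel p(t,x,y)
    return math.exp(-((x - y) ** 2) / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def p_D(t, x, y):          # p^D(t,x,y) = p(t,x,y) - p(t,x,-y)
    return p_free(t, x, y) - p_free(t, x, -y)

def g(gamma, t, z):        # g_{0,gamma}(t,z)
    return (math.exp(2.0 * z / gamma + 2.0 * t / gamma ** 2)
            * math.erfc(z / math.sqrt(2.0 * t) + math.sqrt(2.0 * t) / gamma) / gamma)

beta, t, x = 1.3, 0.7, 0.9           # arbitrary parameters
s2 = math.sqrt(2.0)
gamma = s2 * beta

def cont_density(y):                 # continuous part of p^beta(t,x,dy)
    return p_D(t, x / s2, y / s2) / s2 + s2 * g(gamma, t, (x + y) / s2)

# trapezoidal rule on [0, 30]; the Gaussian tails beyond 30 are negligible
N, Y = 100_000, 30.0
h = Y / N
integral = 0.5 * h * (cont_density(0.0) + cont_density(Y))
integral += h * sum(cont_density(k * h) for k in range(1, N))
atom = s2 * beta * g(gamma, t, x / s2)   # mass of the Dirac part at 0
mass = integral + atom                   # approximately 1
```

The atom mass is strictly positive for every $t>0$, reflecting the sticky behavior at the origin.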
\begin{remark}
Note that (\ref{semigroup}) implies that $p^{\beta}(t,x,\cdot)$ is absolutely continuous with respect to the measure $d\mu=dx + \beta \delta_0$ for each $x \in [0,\infty)$, $t>0$. Therefore, the so-called {\em absolute continuity condition} \cite[(4.2.9)]{FOT94} is fulfilled. In the following we see that the transition semigroup possesses even stronger properties.
\end{remark}
Thus, with $p^{\beta}(t,x,dy)$ as above and $p^{\beta}_t$, $t >0$, the transition semigroup of sticky Brownian motion starting in $x \in [0,\infty)$, it holds
\[ \mathbb{E}_x(f(X_t))=p^{\beta}_tf(x)= \int_{[0,\infty)} f(y) ~p^{\beta}(t,x,dy) \]
for each $f \in C_0([0,\infty))$. Furthermore, the resolvent $r_{\lambda}^{\beta}$ is given by
\[ \mathbb{E}_x \big( \int_0^{\infty} e^{-\lambda s} f(X_s) ds \big) = \int_0^{\infty} e^{-\lambda s} p_s^{\beta}f(x) ds= r_{\lambda}^{\beta} f(x)=\int_{[0,\infty)} f(y)~ r^{\beta}_{\lambda}(x,dy).\]
The proof of Theorem \ref{thmdensity} is based on the so-called first passage time formula (see \cite[(6.4)]{Kni81}).\\
Let $\lambda >0$ and define $A^{\beta}:=\lambda - (r_{\lambda}^{\beta})^{-1}$ on $\mathcal{D}:=r_{\lambda}^{\beta} (C_0([0,\infty)))$ (which is independent of $\lambda$). By \cite[Theorem 6.2, Theorem 6.4]{Kni81} it holds that
\begin{align} \label{genC} A^{\beta} f= f^{\prime \prime}, \ \ \ f \in \mathcal{D}=\{ f \in C_0([0,\infty)) \cap C^2([0,\infty))|~ f^{\prime \prime} \in C_0([0,\infty)) \text{ and } \beta f^{\prime \prime}(0)=f^{\prime}(0) \}. \end{align}
The condition $\beta f^{\prime \prime}(0)=f^{\prime}(0)$ for $f \in C^2([0,\infty))$ is called a Wentzell boundary condition.
\begin{definition} \label{defDF}
Let $F$ be a locally compact separable metric space. A transition semigroup $p_t$, $t >0$, of an $F$-valued Markov process is said to have the {\em Feller property} if $p_t (C_0(F)) \subset C_0(F)$ and $\lim_{t \downarrow 0} \Vert p_t f - f \Vert_{\infty}=0$ for each $f \in C_0(F)$. Furthermore, it is called {\em strong Feller} if $p_t(\mathcal{B}_b(F)) \subset C_b(F)$ for each $t >0$. If the transition semigroup has both Feller and strong Feller property, we say that it possesses the {\em doubly Feller property}.
\end{definition}
We can also deduce the following:
\begin{proposition} \label{1ddoublyfeller}
The transition semigroup $(p_t^{\beta})_{t >0}$ of sticky Brownian motion on $[0,\infty)$ has the doubly Feller property.
\end{proposition}
\begin{proof}
In view of Theorem \ref{thmdensity} it remains to show that $p_t^{\beta}(\mathcal{B}_b([0,\infty))) \subset C_b([0,\infty))$. Let $f \in \mathcal{B}_b([0,\infty))$ and $t >0$. It is well-known that
\[ \frac{2}{\sqrt{\pi}}\, \frac{e^{-x^2}}{x + \sqrt{x^2 +2}} < \text{erfc}(x) \leq \frac{2}{\sqrt{\pi}}\, \frac{e^{-x^2}}{x + \sqrt{x^2 + \frac{4}{\pi}}} \]
for each $x \geq 0$ (see \cite[7.1.13]{AS64}). Let $x \in [0, \infty)$ and let $(x_n)_{n \in \mathbb{N}}$ be a sequence in $[0,\infty)$ such that $x_n \rightarrow x$ as $n \rightarrow \infty$. Then $G_n(y):=f(y)\, g_{0,\sqrt{2} \beta}(t,\frac{x_n+y}{\sqrt{2}})$ converges for each fixed $y \in [0,\infty)$ to $G(y):=f(y)\, g_{0,\sqrt{2} \beta}(t,\frac{x+y}{\sqrt{2}})$ as $n \rightarrow \infty$ by continuity of $g_{0,\sqrt{2} \beta}$ in the second variable. Moreover, for each $y \in [0,\infty)$ it holds
\begin{align*}
|G_n(y)| &\leq \Vert f \Vert_{\infty}\, K_1 \exp\Big(\frac{x_n +y}{\beta}\Big)\, \text{erfc}\Big(\frac{x_n+y}{2\sqrt{t}}\Big) \\
&\leq \Vert f \Vert_{\infty}\, K_2 \exp\Big(\frac{y}{\beta}\Big)\, \text{erfc}\Big(\frac{y}{2 \sqrt{t}}\Big) \\
&\leq \Vert f \Vert_{\infty}\, K_3 \exp\Big(\frac{y}{\beta}\Big) \exp\Big(-\frac{y^2}{4t}\Big)=:H(y)
\end{align*}
for suitable constants $K_1$, $K_2$ and $K_3$. Note that the function $H$ is integrable with respect to the Lebesgue measure on $[0,\infty)$. Thus, dominated convergence yields
\[ \int_{[0,\infty)} G_n(y) dy \rightarrow \int_{[0,\infty)} G(y) dy \]
and hence $p^{\beta}_tf$ is continuous and bounded.
\end{proof}
\begin{remark}
Denote by $(T_t^{\beta})_{t \geq 0}$ the $L^2([0,\infty);\mu)$-semigroup of $(\mathcal{E},D(\mathcal{E}))$ defined in (\ref{1dform}). Then, by the previous considerations, for all $f \in \mathcal{B}_b([0,\infty)) \cap L^2([0,\infty);\mu)$ it holds that $p_t^{\beta} f$ is a $\mu$-version of $T_t^{\beta} f$. Note also that the $L^2([0,\infty);\mu)$-generator $(L,D(L))$ is given by
\[ Lf(x)= \mathbbm{1}_{(0,\infty)}(x) f^{\prime \prime}(x) + \mathbbm{1}_{\{0\}}(x) \frac{1}{\beta} f^{\prime}(x) \ \ \ \text{ for } f \in D(L)= H^{2,2}((0,\infty)),\]
where $H^{2,2}((0,\infty))$ denotes the Sobolev space of order two.
This can be shown using integration by parts, the fact that $D(\mathcal{E})=H^{1,2}((0,\infty))$ and the definition of the space $H^{2,2}((0,\infty))$. For $f \in C_c^2([0,\infty)) \subset D(L)$ such that the Wentzell boundary condition $\beta f^{\prime \prime}(0)=f^{\prime}(0)$ is fulfilled, it holds $Lf=f^{\prime \prime}$ similar to the generator of the $C_0([0,\infty))$-semigroup given in (\ref{genC}). However, in the $L^2$-setting the boundary behavior is rather described by the measure $\mu$ instead of the domain of the generator.
\end{remark}
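The boundary behavior encoded in $\mu$ can be made visible by checking the identity $\mathcal{E}(f,g)=-\int Lf\, g\, d\mu$ numerically: the atom $\beta \delta_0$ combined with $Lf(0)=\frac{1}{\beta}f^{\prime}(0)$ produces exactly the boundary term $-f^{\prime}(0)g(0)$ of the classical integration by parts formula. The functions in the following sketch are arbitrary smooth choices (decaying instead of compactly supported, which suffices for the illustration).

```python
import math

beta = 0.8   # the identity is independent of beta: the atom weight cancels against 1/beta

f   = lambda u: math.exp(-u)
fp  = lambda u: -math.exp(-u)        # f'
fpp = lambda u: math.exp(-u)         # f''
g   = lambda u: math.exp(-2.0 * u)
gp  = lambda u: -2.0 * math.exp(-2.0 * u)

def trap(fun, b, N):                 # trapezoidal rule for \int_0^b fun(u) du
    h = b / N
    return h * (0.5 * fun(0.0) + 0.5 * fun(b) + sum(fun(k * h) for k in range(1, N)))

lhs = trap(lambda u: fp(u) * gp(u), 40.0, 100_000)   # E(f,g) = \int_0^oo f' g' dx

# -\int Lf g dmu = -\int_0^oo f'' g dx - beta * (1/beta) * f'(0) * g(0)
rhs = -trap(lambda u: fpp(u) * g(u), 40.0, 100_000) - beta * (1.0 / beta) * fp(0.0) * g(0.0)
# both sides are approximately 2/3 for these particular f and g
```

In the $L^2$-setting the boundary condition thus sits in the measure $\mu$, not in the domain of the generator, in accordance with the preceding remark.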
Next we will construct the Dirichlet form corresponding to $n$ independent sticky Brownian motions on $[0,\infty)$, $n \in \mathbb{N}$. In \cite[Chapter V, Section 2.1]{BH91} it is shown how to construct finite tensor products of Dirichlet spaces. Moreover, the corresponding semigroup of the product Dirichlet form has an explicit representation. In our setting this construction yields the semigroup of an $n$-dimensional process on $E=[0,\infty)^n$, $n \in \mathbb{N}$, such that the components are independent sticky Brownian motions on $[0,\infty)$. In particular, this approach justifies the choice of the Dirichlet form structure used in \cite{FGV13}.\\
Let $(\mathcal{E}_i,D(\mathcal{E}_i))$, $i=1,\dots,n$, be $n$ copies of the Dirichlet form in (\ref{1dform}). Note that each such form is defined on the space $L^2([0,\infty);\mu)$. In accordance with \cite[Definition 2.1.1]{BH91} we define the product Dirichlet form $(\mathcal{E}^n,D(\mathcal{E}^n))$ on $L^2([0,\infty)^n; \mu_n)$ with $d\mu_n=\prod_{i=1}^n (dx_i + \beta \delta_0^i)$ by
\begin{small}
\begin{align} \label{ndform}
\mathcal{E}^n(f,g):=\sum_{i=1}^n \int_{[0,\infty)^{n-1}} \mathcal{E}_i(f(x_1,\dots, x_{i-1},\cdot,x_{i+1},\dots,x_n),g(x_1,\dots ,x_{i-1},\cdot ,x_{i+1},\dots ,x_n)) \prod_{j \neq i} (dx_j + \beta \delta_0^j)
\end{align}
\end{small}
for $f,g \in D(\mathcal{E}^n)$, where
\begin{align*} D(\mathcal{E}^n):=\{ &f \in L^2([0,\infty)^n;\mu_n)\big|~\text{for each } i=1,\dots,n \text{ and for } \prod_{j \neq i}(dx_j+\beta \delta^j_0)-{a.e. } \\
&(x_1,\dots ,x_{i-1},x_{i+1},\dots ,x_n) \in [0,\infty)^{n-1}: f(x_1,\dots, x_{i-1},\cdot,x_{i+1},\dots,x_n) \in D(\mathcal{E}_i) \}
\end{align*}
First, we prove the following:
\begin{lemma} \label{lemdense}
$C_c^1([0,\infty)^n)$ is dense in $D(\mathcal{E}^n)$.
\end{lemma}
\begin{proof}
Note that $C_c^1([0,\infty)^n) \subset D(\mathcal{E}^n)$ by definition of $D(\mathcal{E}^n)$.\\
W.l.o.g. let $n=2$. By \cite[Proposition 2.1.3 b)]{BH91} $D(\mathcal{E}_1) \otimes D(\mathcal{E}_2)$ is dense in $D(\mathcal{E}^2)$. We show that $C_c^1([0,\infty)) \otimes C_c^1([0,\infty)) \subset C_c^1([0,\infty)^2)$ is dense in $D(\mathcal{E}_1) \otimes D(\mathcal{E}_2)$. Then the assertion follows by a diagonal sequence argument. So let $h \in D(\mathcal{E}_1) \otimes D(\mathcal{E}_2)$ such that $h(x_1,x_2)=f(x_1)g(x_2)$ for $\mu_2$-a.e. $(x_1,x_2) \in [0,\infty)^2$, $f \in D(\mathcal{E}_1)$ and $g \in D(\mathcal{E}_2)$. Choose sequences $(f_k)_{k \in \mathbb{N}}$ and $(g_k)_{k \in \mathbb{N}}$ in $C_c^1([0,\infty))$ such that $f_k \rightarrow f$ in $D(\mathcal{E}_1)$ and $g_k \rightarrow g$ in $D(\mathcal{E}_2)$ as $k \rightarrow \infty$ and define, for $k \in \mathbb{N}$, $h_k \in C_c^1([0,\infty)) \otimes C_c^1([0,\infty))$ by $h_k(x_1,x_2):=f_k(x_1)g_k(x_2)$, $x_1,x_2 \in [0,\infty)$.
Then it follows immediately by assumption and the product structure that $h_k \rightarrow h$ as $k \rightarrow \infty$ in $L^2([0,\infty)^2; d\mu)$. Moreover, for $k,l \in \mathbb{N}$
\begin{align*}
\mathcal{E}^2(h_k-h_l)&=\int_{[0,\infty)} \mathcal{E}_1((h_k-h_l)(\cdot,x_2)) (dx_2 + \beta \delta_0^2) + \int_{[0,\infty)} \mathcal{E}_2((h_k-h_l)(x_1, \cdot)) (dx_1 + \beta \delta_0^1)\\
&\leq 2\,\mathcal{E}_1(f_k-f_l)~ \Vert g_k \Vert^2_{L^2([0,\infty);dx+\beta \delta_0)} + 2\,\mathcal{E}_1(f_l)~ \Vert g_k - g_l \Vert^2_{L^2([0,\infty);dx+\beta \delta_0)} \\
&\quad + 2\,\mathcal{E}_2(g_k-g_l)~ \Vert f_k \Vert^2_{L^2([0,\infty);dx+\beta \delta_0)} + 2\,\mathcal{E}_2(g_l)~ \Vert f_k - f_l \Vert^2_{L^2([0,\infty);dx+\beta \delta_0)}.
\end{align*}
Hence, $\mathcal{E}^2(h_k-h_l) \rightarrow 0$ as $k,l \rightarrow \infty$ and thus, $h_k \rightarrow h$ as $k \rightarrow \infty$ in $D(\mathcal{E}^2)$.
\end{proof}
Let $f,g \in C_c^1([0,\infty)^n)$. Then for each $i=1,\dots,n$ and fixed $(x_1,\dots ,x_{i-1},x_{i+1},\dots ,x_n) \in [0,\infty)^{n-1}$ we have
\begin{align*}
\mathcal{E}_i&(f(x_1,\dots, x_{i-1},\cdot,x_{i+1},\dots,x_n),g(x_1,\dots ,x_{i-1},\cdot ,x_{i+1},\dots ,x_n)) \\
&= \int_{[0,\infty)} \partial_i f(x_1,\dots ,x_n)~ \partial_i g(x_1,\dots ,x_n) ~dx_i.
\end{align*}
Set $\{ j \neq i \}:=\{1,\dots,i-1,i+1,\dots,n \}$. If $A$ is a subset of some set $I$, we denote by $A^c$ the set $I \backslash A$. Due to the identity
\[ \prod_{j \neq i} (dx_j + \beta \delta_0^j)= \sum_{A \subset \{j \neq i\}} \beta^{\#A^c} \prod_{j \in A} dx_j \prod_{j \in A^c} \delta_0^j \]
we get by rearranging the terms that
\[ \mathcal{E}^n(f,g)= \sum_{\emptyset \neq B \subset \{1,\dots,n\}} \mathcal{E}_B(f,g) \]
with
\[ \mathcal{E}_B(f,g):= \int_{[0,\infty)^n} \sum_{i \in B} \partial_i f ~ \partial_i g~d\lambda_B^{n,\beta}, \]
where $d\lambda_B^{n,\beta}:=\beta^{n-\#B} \prod_{j \in B} dx_j \prod_{j \in B^c} \delta_0^j$. In other words, $(\mathcal{E}^n,D(\mathcal{E}^n))$ defined in (\ref{ndform}) coincides with the form defined in \cite[(2.3)]{FGV13}, up to the fact that in our present setting the density function $\varrho$ is identically one. Moreover, (\ref{ndform}) can also be rewritten in the form
\[ \mathcal{E}^n(f,g)= \int_E \sum_{i=1}^n \mathbbm{1}_{\{ x_i \neq 0 \}} ~ \partial_i f ~\partial_i g ~ d\mu_n \ \ \ \text{ for } f,g \in C_c^1(E). \]
From the present point of view $(\mathcal{E}^n,D(\mathcal{E}^n))$, defined as in (\ref{ndform}), is the sum of $n$ subforms and each such form for $i=1,\dots,n$ describes the dynamics of the process on $[0,\infty)^n$ for all configurations where the $i$-th component is not pinned to zero. In contrast, the forms $\mathcal{E}_B$, $\emptyset \neq B \subset \{1,\dots,n\}$ describe the dynamics of the process for all configurations where exactly the components specified by $B$ are non-zero.\\
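The decomposition of $\mu_n$ into the measures $\lambda_B^{n,\beta}$ can be illustrated numerically for $n=2$: integrating a product function against $\mu_2$ directly and via the four measures $\lambda_B^{2,\beta}$, $B \subset \{1,2\}$, gives the same value. The exponential integrand and the parameters below are arbitrary.

```python
import math

beta = 0.4

def trap(fun, b, N):                       # trapezoidal rule for \int_0^b fun(u) du
    h = b / N
    return h * (0.5 * fun(0.0) + 0.5 * fun(b) + sum(fun(k * h) for k in range(1, N)))

I = trap(lambda u: math.exp(-u), 40.0, 50_000)   # approximately \int_0^oo e^{-u} du = 1

f = lambda x1, x2: math.exp(-x1 - x2)
# \int f d\lambda_B^{2,beta} for the four subsets B of {1,2}
subset_sum = (beta ** 2 * f(0.0, 0.0)  # B = {}:     beta^2 delta_0 (x) delta_0
              + beta * I               # B = {1}:    beta dx_1 (x) delta_0
              + beta * I               # B = {2}:    beta delta_0 (x) dx_2
              + I * I)                 # B = {1,2}:  dx_1 (x) dx_2

direct = (1.0 + beta) ** 2             # \int e^{-u} (du + beta delta_0) factorizes
```

Each subset $B$ contributes the configurations where exactly the components in $B$ are off the boundary, matching the interpretation of the subforms $\mathcal{E}_B$ given above.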
By a minor generalization of the results in \cite{FGV13} we get the following lemma:
\begin{lemma}
The Dirichlet form $(\mathcal{E}^n,D(\mathcal{E}^n))$ on $L^2([0,\infty)^n;\mu_n)$, $n \in \mathbb{N}$, is conservative, strongly local, strongly regular and symmetric.
\end{lemma}
Let $x=(x_1,\dots,x_n),y=(y_1,\dots,y_n) \in [0,\infty)^n$, $n \in \mathbb{N}$. Then the transition kernel $p_t^{\beta,n}(x,dy)$ of $n$ independent sticky Brownian motions on $[0,\infty)$ is given by
\[ p_t^{\beta,n}(x,dy)=\prod_{i=1}^n p_t^{\beta}(x_i,dy_i). \]
Thus, for $f \in C_0([0,\infty)^n)$ we have
\[ p_t^{\beta,n}f(x)=\int_{[0,\infty)^n} f(y_1,\dots,y_n) \prod_{i=1}^n p_t^{\beta}(x_i,dy_i). \]
By Theorem \ref{thmdensity} we have an explicit representation of $p_t^{\beta,n}(x,dy)$ and by the same arguments as in Proposition \ref{1ddoublyfeller} the doubly Feller property holds also for $p_t^{\beta,n}$:
\begin{proposition}
The transition semigroup $(p_t^{\beta,n})_{t >0}$ of $n$ independent sticky Brownian motions on $[0,\infty)$ has the doubly Feller property.
\end{proposition}
Let $(T_t^i)_{t \geq 0}$ be the $L^2([0,\infty);\mu)$-semigroup of the forms $(\mathcal{E}_i,D(\mathcal{E}_i))$, $i=1,\dots,n$. Set for $f \in L^2([0,\infty)^n;\mu_n)$, $i=1,\dots,n$, and $\mu_n$-a.e. $(x_1,\dots,x_n) \in [0,\infty)^n$
\[ \hat{T}_t^{\beta,i} f(x_1,\dots,x_n):= T_t^i f(x_1,\dots,x_{i-1},\cdot,x_{i+1},\dots,x_n)(x_i) \]
and
\[ T_t^{\beta,n} f= \hat{T}_t^{\beta,1} \cdots \hat{T}_t^{\beta,n} f.\]
By \cite[Proposition 2.1.3 a)]{BH91} $(T_t^{\beta,n})_{t \geq 0}$ is the $L^2([0,\infty)^n;\mu_n)$-semigroup associated to the form $(\mathcal{E}^n,D(\mathcal{E}^n))$ defined in (\ref{ndform}) and the order of the $\hat{T}_t ^{\beta,i}$, $i=1,\dots,n$, is arbitrary.\\
Let $f \in \mathcal{B}_b([0,\infty)^n) \cap L^2([0,\infty)^n;\mu_n)$. Then we have for $\mu_n$-a.e. $x=(x_1,\dots,x_n) \in [0,\infty)^n$
\begin{align} \label{iter1}
\hat{T}_t^{\beta,n} f(x_1,\dots,x_n)=T_t^n f(x_1,\dots,x_{n-1},\cdot)(x_n) &=p_t^{\beta} f(x_1,\dots,x_{n-1},\cdot)(x_n) \\
&=\int_{[0,\infty)} f(x_1,\dots,x_{n-1},y_n) p^{\beta}_t(x_n,dy_n) \notag
\end{align}
and similarly
\begin{align} \label{iter2}
\hat{T}_t^{\beta,n-1} \hat{T}_t^{\beta,n} f(x_1,\dots,x_n)= \int_{[0,\infty)} \int_{[0,\infty)} f(x_1,\dots,x_{n-2},y_{n-1},y_n) p^{\beta}_t(x_n,dy_n) p^{\beta}_t(x_{n-1},dy_{n-1}).
\end{align}
Proceeding successively as in (\ref{iter1}) and (\ref{iter2}), together with the preceding considerations, proves Proposition \ref{propindep}.
\subsection{Girsanov transformations} \label{secgirsanov}
We summarize some results on Girsanov transformations of a Markov process and the associated Dirichlet form. The statements can be found in \cite{Ebe96} and \cite[Chapter 6]{FOT94}. In some cases we do not state the results in full generality, since for our purposes it is sufficient to simplify the assumptions.\\
Let $\mathbb{M}=(\Omega,\mathcal{F},(\mathcal{F}_t)_{t \geq 0}, (X_t)_{t \geq 0}, (\mathbb{P}_x)_{x \in F})$ be a $\mu$-symmetric strong Markov process with state space $F \subset \mathbb{R}^n$, $n \in \mathbb{N}$, continuous sample paths and infinite lifetime, where $\mu$ is a positive Radon measure on $(F,\mathcal{B}(F))$ with full support. We suppose that the process is canonical, i.e., $\Omega=C([0,\infty),F)$ and $X_t(\omega)=\omega(t)$ for $\omega \in \Omega$ and $t \geq 0$. Moreover, assume that its Dirichlet form $(\mathcal{E},D(\mathcal{E}))$ on $L^2(F;\mu)$ is strongly regular, strongly local, conservative and that it possesses a square field operator $\Gamma$. We denote its generator by $(L,D(L))$. Suppose that $\mathcal{D}:=C_c^1(F)$ is a dense subspace of $D(\mathcal{E})$, that $D(L) \cap \mathcal{D}$ is dense in $D(\mathcal{E})$ and that $f,\Gamma(f) \in L^{\infty}(F;\mu)$ for every $f \in \mathcal{D}$. Denote by $(p_t)_{ t>0}$ the transition semigroup of $\mathbb{M}$, i.e., for $f \in \mathcal{B}_b(F)$,
\[ p_tf(x):= \mathbb{E}_x(f(X_t)),\]
and we suppose that the transition probabilities $p_t(x,\cdot)$, $x \in F$, $t >0$, satisfy the absolute continuity condition \cite[(4.2.9)]{FOT94}. \\
A function $f$ is said to be in $D(\mathcal{E})_{\text{loc}}$ if for any relatively compact open set $G$ there exists a function $g \in D(\mathcal{E})$ such that $f=g$ $\mu$-a.e. on $G$. Fix some $\phi \in D(\mathcal{E})_{\text{loc}} \cap C(F)$ such that $\phi >0$ $\mu$-a.e.. Define $\varrho:=\phi^2$ and the symmetric bilinear form $(\mathcal{E}^{\varrho},\mathcal{D}^{\varrho})$ on $L^2(F; \varrho \mu)$ by
\begin{align} &\mathcal{D}^{\varrho}:=\{ f \in D(\mathcal{E})|~\int_F (\Gamma(f)+f^2) \varrho d\mu < \infty \}, \\
&\mathcal{E}^{\varrho}(f,g):= \int_F \Gamma(f,g)~ \varrho d\mu. \notag
\end{align}
In particular, $\mathcal{D}^{\varrho}=D(\mathcal{E})$ if $\varrho$ is bounded. \\
Under the above assumptions the conditions (D1)-(D3) of \cite{Ebe96} are fulfilled using the strong regularity of $(\mathcal{E},D(\mathcal{E}))$ and moreover, $\varrho$ is locally bounded. Thus, by \cite[Theorem 1.1, Corollary 1.3]{Ebe96} we can conclude the following:
\begin{lemma} The symmetric bilinear form $(\mathcal{E}^{\varrho},D(\mathcal{E}))$ is densely defined and closable on \linebreak $L^2(F;\varrho \mu)$ and its closure $(\mathcal{E}^{\varrho}, D(\mathcal{E}^{\varrho}))$ is a strongly local Dirichlet form.
Moreover, $(\mathcal{E}^{\varrho}, D(\mathcal{E}^{\varrho}))=(\mathcal{E}^{\varrho},\overline{\mathcal{D}}))$, i.e., $\mathcal{D}$ is a dense subset of $D(\mathcal{E}^{\varrho})$.
\end{lemma}
Due to \cite[Theorem 5.5.1]{FOT94} it is possible to give a Fukushima decomposition of the process $\mathbb{M}$ of the form
\begin{align} \label{fukudec} \ln \phi (X_t) - \ln \phi(X_0)= M_t^{[\ln \phi]} + N_t^{[\ln \phi]} \ \ \mathbb{P}_x-\text{a.s. for each } x \in F,
\end{align}
where $M_t^{[\ln \phi]}$ is a martingale additive functional and $N_t^{[\ln \phi]}$ is a continuous additive functional. The function $\ln \phi$ is possibly unbounded. In this case, the decomposition (\ref{fukudec}) requires some localization argument (see e.g. \cite[(6.3.19)]{FOT94}). Define the positive multiplicative functional $(Z_t)_{t \geq 0}$ by
\begin{align} \label{MF} Z_t=\exp( M_t^{[\ln \phi]} - \frac{1}{2} \langle M^{[\ln \phi]} \rangle_t).
\end{align}
Furthermore, let $(\tilde{p}_t)_{t > 0}$ be defined by
\[ \tilde{p}_t f(x):= \mathbb{E}_x(Z_t f(X_t)) \]
for $f \in \mathcal{B}_b(F)$. \\
By \cite[Section 6.3]{FOT94} $(\tilde{p}_t)_{t > 0}$ is a transition function and there exists a corresponding $\varrho \mu$-symmetric right process $\mathbb{M}^{\varrho}=(\Omega,(X_t)_{t \geq 0},(\mathbb{P}^{\varrho}_x)_{x \in F})$. Moreover, the Dirichlet form of $\mathbb{M}^{\varrho}$ is given by $(\mathcal{E}^{\varrho},D(\mathcal{E}^{\varrho}))$. We say that the process $\mathbb{M}^{\varrho}$ and the transition semigroup $(\tilde{p}_t)_{t >0}$ are the Girsanov transformations of $\mathbb{M}$ and $(p_t)_{t >0}$, respectively, by the multiplicative functional $(Z_t)_{t \geq 0}$.
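As an illustrative aside (not part of the formal development), the defining identity $\tilde{p}_t f(x)=\mathbb{E}_x(Z_tf(X_t))$ can be visualized by Monte Carlo in the simplest non-sticky situation: for a standard Brownian motion on $\mathbb{R}$ and $\ln \phi(x)=\theta x$ one has $Z_t=\exp(\theta B_t - \tfrac{\theta^2}{2}t)$, and the transformed process is a Brownian motion with drift $\theta$. The sketch below (the value of $\theta$, the sample size and the seed are arbitrary choices) checks $\mathbb{E}[Z_t]=1$ and $\tilde{\mathbb{E}}[B_t]=\theta t$.

```python
import numpy as np

rng = np.random.default_rng(42)
theta, t, n = 0.7, 1.0, 400_000

# endpoint of a standard Brownian motion started at 0
B = rng.normal(0.0, np.sqrt(t), size=n)

# multiplicative functional Z_t = exp(theta*B_t - theta^2 t / 2);
# here Z_t depends on the path only through B_t
Z = np.exp(theta * B - 0.5 * theta**2 * t)

mean_Z = Z.mean()            # should be close to 1 (Z is a martingale)
tilde_mean = (Z * B).mean()  # tilde-E[B_t], should be close to theta * t
```

The agreement of $\tilde{\mathbb{E}}[B_t]$ with $\theta t$ is exactly the drift produced by the Girsanov transformation.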
\section{Construction of the strong Feller transition semigroup} \label{Feller}
In \cite{Chu85} criteria are given under which the doubly Feller property is preserved under the transformation by a multiplicative functional $(Z_t)_{t \geq 0}$. This concept is extended in \cite{CK08}, where it is shown that the conditions on $(Z_t)_{t \geq 0}$ can be weakened. Moreover, the setting is applied to Feynman-Kac and Girsanov transformations. In particular, precise conditions on the Revuz measure of the underlying additive functionals are given. We quote a result of \cite{CK08} concerning the preservation of the doubly Feller property under Girsanov transformations. Since we deal with strong Markov processes with {\em continuous sample paths}, we restrict the results to this setting instead of stating them in full generality.\\
Let $\mathbb{M}=(\Omega,\mathcal{F},(\mathcal{F}_t)_{t \geq 0}, (X_t)_{t \geq 0}, (\mathbb{P}_x)_{x \in F})$ be again a $\mu$-symmetric strong Markov process with state space $F \subset \mathbb{R}^n$, $n \in \mathbb{N}$, continuous sample paths and infinite lifetime, where $\mu$ is a positive Radon measure on $(F,\mathcal{B}(F))$ with full support. As before, denote by $(p_t)_{ t>0}$ the transition semigroup of $\mathbb{M}$. Assume that $(p_t)_{ t>0}$ possesses the doubly Feller property. \\
Let $r_{\lambda}(x,y)$, $\lambda >0$, $x,y \in F$, be the resolvent kernel of $\mathbb{M}$, i.e., the resolvent $(r_{\lambda})_{\lambda>0}$ of $\mathbb{M}$ is given by
\[ r_{\lambda}f(x)=\int_F f(y) r_{\lambda}(x,y) d\mu(y) \]
for $f \in \mathcal{B}_b(F)$, $\lambda >0$ and $x \in F$. For a Borel measure $\nu$ on $\mathcal{B}(F)$ we define the $\lambda$-\textit{potential of} $\nu$ by $R_{\lambda}\nu(x):= \int_F r_{\lambda}(x,y) d\nu(y)$, $\lambda >0$.\\
Let $B$ be a non-empty open subset of $F$ and denote by $B_{\Delta_B}:=B \cup \{\Delta_B\}$ the one-point compactification of $B$. Define $(X_t^B)_{ t \geq 0}$ by
\begin{align*}
X_t^B:=
\left\{
\begin{array}{l}
X_t \ \ \text{ if } t < \tau_B \\
\Delta_B \ \text{ if } t \geq \tau_B
\end{array}
\right.
\end{align*}
where $\tau_B :=\inf \{ t>0 |~ X_t \notin B \}$.
The transition semigroup of $(X_t^B)_{ t \geq 0}$ is given by
\[ p_t^B(x,A)=\mathbb{P}_x(X_t \in A,~ t < \tau_B) \]
and
\[ p_t^B(x,\{\Delta_B\}):=1-p_t^B(x,B), \ \ p_t^B(\Delta_B, \{ \Delta_B \}):=1, \]
for $x \in B$, $A \in \mathcal{B}(B)$. A function $f \in \mathcal{B}_b(F)$ is extended to $\Delta_B$ by setting $f(\Delta_B)=0$. For functions of this form, the transition semigroup of $(X_t^B)_{ t \geq 0}$ reads
\[ p_t^B f(x)=\mathbb{E}_x (f(X_t) \mathbbm{1}_{\{t < \tau_B\}}). \]
The set $B$ is called \textit{regular} if for each $x \in F \backslash B$, we have $\mathbb{P}_x(\tau_B=0)=1$.\\
Let $(M_t)_{t \geq 0}$ be a continuous locally square integrable martingale additive functional and denote by $\mu_{\langle M \rangle}$ the Revuz measure of $(\langle M \rangle_t)_{t \geq 0}$. Furthermore, the transition semigroup $(\tilde{p}_t^B)_{t \geq 0}$ is given by
\[ \tilde{p}_t^Bf(x):=\mathbb{E}_x(Z_t f(X_t) \mathbbm{1}_{\{t < \tau_B\}}), \]
where $Z_t:=\exp (M_t-\frac{1}{2} \langle M \rangle_t )$, $t \geq 0$; it corresponds to the process obtained from $\mathbb{M}^{\varrho}$ (see Section \ref{secgirsanov}) by killing upon leaving $B$. In the special case $B=F$ this definition reduces to the transition semigroup of $\mathbb{M}^{\varrho}$.
\begin{definition}
A Borel measure $\nu$ on $\mathcal{B}(F)$ is said to be of
\begin{enumerate}
\item[(i)] {\em Kato class} if $\lim_{\lambda \rightarrow \infty} \sup_{x \in F} R_{\lambda} \nu(x)=0$,
\item[(ii)] {\em extended Kato class} if $\lim_{\lambda \rightarrow \infty} \sup_{x \in F} R_{\lambda} \nu(x) < 1$,
\item[(iii)] {\em local Kato class} if $\mathbbm{1}_K \nu$ is of Kato class for every compact set $K \subset F$.
\end{enumerate}
\end{definition}
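For orientation, a standard example (not needed later): for Brownian motion on $\mathbb{R}$, whose resolvent kernel is $r_\lambda(x,y)=(2\lambda)^{-1/2}e^{-\sqrt{2\lambda}|x-y|}$, Lebesgue measure is of Kato class, since $R_\lambda(\mathrm{Leb})(x)=1/\lambda$ uniformly in $x$. The following numerical sketch verifies this identity (the integration window and grid are ad-hoc choices).

```python
import numpy as np

def R_lambda_leb(x, lam, half_width=60.0, n=2_000_001):
    """Trapezoidal approximation of int_R r_lambda(x, y) dy with
    r_lambda(x, y) = exp(-sqrt(2*lam)*|x - y|) / sqrt(2*lam)."""
    a = np.sqrt(2.0 * lam)
    y = np.linspace(x - half_width / a, x + half_width / a, n)
    f = np.exp(-a * np.abs(x - y)) / a
    h = y[1] - y[0]
    return (f[:-1] + f[1:]).sum() * h / 2.0

# the lambda-potential of Lebesgue measure equals 1/lambda for every x,
# so it tends to 0 uniformly as lambda -> infinity (Kato class)
vals = {lam: R_lambda_leb(3.0, lam) for lam in (0.5, 1.0, 4.0)}
```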
\begin{theorem} \label{thmGT}
Assume that $\frac{1}{2} \mu_{\langle M \rangle}$ is a positive Radon measure of local and extended Kato class and let $B$ be a regular open subset of $F$. Then $(\tilde{p}_t^B)_{t \geq 0}$ has the doubly Feller property. Moreover, $(Z_t)_{t \geq 0}$ is a martingale and
\begin{align*}
&\lim_{t \rightarrow 0} \sup_{x \in D} ~\mathbb{E}_x( |Z_t-1| \mathbbm{1}_{\{t < \tau_D\}}) =0 \text{ for any relatively compact open set } D \subset B, \\
& \sup_{0 \leq s \leq t}~ \sup_{x \in B} ~\mathbb{E}_x ( Z_s^p \mathbbm{1}_{\{s < \tau_B \}} ) < \infty \text{ for some } p>1 \text{ and each } t>0.
\end{align*}
\end{theorem}
\begin{proof}
See \cite[Theorem 3.3]{CK08}.
\end{proof}
Consider again the $n$ independent sticky Brownian motions on $[0,\infty)$ discussed in Section \ref{secsticky} with transition function $(p_t^{\beta,n})_{t >0}$ and Dirichlet form $(\mathcal{E}^n,D(\mathcal{E}^n))$ on $L^2(E;\mu_n)$. In the following, we introduce a density function $\varrho=\phi^2$. Under suitable conditions on $\phi$ it is possible to perform a Girsanov transformation such that the transition semigroup of the transformed process $\mathbb{M}^{\varrho}$ still possesses the strong Feller property (or even the doubly Feller property). By the preceding section the transformed Dirichlet form is of the form considered in \cite{FGV13}. In this way, we are able to strengthen the results in \cite{FGV13}.
\begin{remark}
For functions $\phi$ such that the conditions of Theorem \ref{thmGT} are fulfilled for $(Z_t)_{t \geq 0}$ as in (\ref{MF}) and $B=E$, we immediately get that the transition function has the doubly Feller property and the process $\mathbb{M}^{\varrho}$ solves (\ref{sde!}) for every starting point in $E$. Unfortunately, we are also interested in densities $\varrho$ such that the corresponding Revuz measure is not of extended Kato class. Such potentials are of particular interest for the application to the so-called wetting model in the theory of stochastic interface models. For this reason, we construct a strong Feller transition semigroup for a larger class of densities using Theorem \ref{thmGT} and an approximation argument. A direct application fails, since the Kato condition on $\mu_{\langle M \rangle}$ ensures that the drift caused by the Girsanov transformation does not ``explode''. However, this criterion only takes into account the variation of the drift, but not its direction, which is of particular importance in our setting.
\end{remark}
\begin{example} \label{example}
Let $n=1$ and $\phi(x):=\exp(-\frac{1}{2} x^2)$. In this case, $(\ln \phi)^{\prime}(x)=-x$. Hence, we expect that the process $\mathbb{M}^{\varrho}$ has the representation
\[ dX_t= \sqrt{2}~ \mathbbm{1}_{(0,\infty)}(X_t) dB_t - 2 X_t~ \mathbbm{1}_{(0,\infty)}(X_t) dt + \frac{1}{\beta} \mathbbm{1}_{\{0\}}(X_t) dt. \]
Note that the additional drift term is always non-positive, since $X_t \in [0,\infty)$ for all $t >0$, and thus it attracts the process to $0$. However, the logarithmic derivative of $\phi$ is unbounded and the energy measure is not even of extended Kato class. Indeed,
\begin{align*}
R_{\lambda} \mu_{\langle \ln \phi \rangle}(x) &= \int_{[0,\infty)} r^{\beta}_{\lambda}(x,y) d\mu_{\langle \ln \phi \rangle}(y) \\
&=2 \int_{[0,\infty)} \big( \frac{1}{\sqrt{2\lambda}} (e^{-\sqrt{2\lambda}|x-y|}-e^{-\sqrt{2\lambda}(x+y)}) + \frac{1}{\sqrt{2 \lambda}+\beta \lambda} 2 e^{-\sqrt{2 \lambda}(x+y)} \big)~ y^2 dy
\end{align*}
is unbounded in $x$ for each fixed $\lambda >0$, since
\[ \int_{[0,\infty)} \frac{1}{\sqrt{2\lambda}} e^{-\sqrt{2\lambda}|x-y|}~ y^2 dy = \frac{1}{\lambda} x^2 - \frac{1}{2 \lambda^2} e^{-\sqrt{2 \lambda} x} + \frac{1}{\lambda^2} \rightarrow \infty \ \text{ as } x \rightarrow \infty,
\]
whereas the remaining terms converge to $0$ as $x \rightarrow \infty$. Thus, it is not possible to apply Theorem \ref{thmGT} to this specific choice of $\phi$.
\end{example}
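The closed-form value of the integral used in the example can be double-checked numerically. In the sketch below (integration window and grid are ad-hoc choices) a trapezoidal approximation of $\int_0^\infty (2\lambda)^{-1/2}e^{-\sqrt{2\lambda}|x-y|}\,y^2\,dy$ is compared with the stated expression $\frac{1}{\lambda}x^2-\frac{1}{2\lambda^2}e^{-\sqrt{2\lambda}x}+\frac{1}{\lambda^2}$.

```python
import numpy as np

def lhs(x, lam, pad=60.0, n=1_500_001):
    # trapezoidal approximation of the integral on a truncated window
    a = np.sqrt(2.0 * lam)
    y = np.linspace(0.0, x + pad / a, n)
    f = np.exp(-a * np.abs(x - y)) * y**2 / a
    h = y[1] - y[0]
    return (f[:-1] + f[1:]).sum() * h / 2.0

def rhs(x, lam):
    # closed-form value from the example
    return x**2 / lam - np.exp(-np.sqrt(2.0 * lam) * x) / (2.0 * lam**2) + 1.0 / lam**2

checks = [(lhs(x, lam), rhs(x, lam)) for x in (0.0, 1.0, 5.0) for lam in (0.5, 2.0)]
```

The quadratic growth $x^2/\lambda$ visible in the closed form is precisely the unboundedness in $x$ exploited in the example.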
Assume that $\phi$ is given such that Condition \ref{conditions} is fulfilled. Then $\phi \in D(\mathcal{E}^n)_{\text{loc}}$ and the energy measure $\mu_{\langle \ln \phi \rangle}=\mu_{\langle M^{[\ln \phi]} \rangle}$ is given by
\begin{align}
d\mu_{\langle \ln \phi \rangle}(x)= 2 \sum_{i=1}^n (\partial_i \ln \phi(x))^2~ dx_i \prod_{j \neq i} (dx_j + \beta \delta_0^j) =2 \sum_{i=1}^n \mathbbm{1}_{(0,\infty)}(x_i)~(\partial_i \ln \phi(x))^2~ d\mu_n(x)
\end{align}
and thus, by Revuz correspondence we see that
\begin{align} \label{variationAF}
\langle M^{[\ln \phi]} \rangle_t = 2 \sum_{i=1}^n \int_0^t (\partial_i \ln \phi(X_s))^2~ \mathbbm{1}_{(0,\infty)}(X_s^i) ds.
\end{align}
By this we can deduce that $(M^{[\ln \phi]}_t)_{t \geq 0}$ has the representation
\begin{align} \label{AF}
M^{[\ln \phi]}_t= \sqrt{2}~ \sum_{i=1}^n \int_0^t \partial_i \ln \phi(X_s)~ \mathbbm{1}_{(0,\infty)}(X_s^i) dB_s^i.
\end{align}
\begin{example} \label{exbounded} Let $\nabla \ln \phi$ additionally be essentially bounded w.r.t. $\mu_n$. Then $\frac{1}{2} \mu_{\langle \ln \phi \rangle}=\frac{1}{2} \mu_{\langle M^{[\ln \phi]} \rangle}$ is of local and extended Kato class.
\end{example}
Let $k \in \mathbb{N}$ and $K:=[0,k)^n$ as well as $\tau_k:=\inf \{ t >0 |~ X_t \notin K \}$. Let $\phi_k$ be given such that $\phi_k=\phi$ on $K$, Condition \ref{conditions} is fulfilled for $\phi_k$ and $\nabla \ln \phi_k \in L^{\infty}(E;\mu_n)$. We define the exponential functional $(Z^k_t)_{t \geq 0}$ by
\[ Z^k_t := \exp(M^{[\ln \phi_k]}_t - \frac{1}{2} \langle M^{[\ln \phi_k]} \rangle_t ).\] Note that we are in fact only interested in the restriction of $\phi$ to the set $K$, since the function is used to define a Girsanov transformation of $(p_t^{\beta,n})_{t >0}$ which is killed when leaving $K$. Nevertheless, in order to give meaning to $Z_t^k$ for $t \geq \tau_k$, we extend $\phi_k$ to $E$.
\begin{theorem} \label{strongfeller}
Let $\varrho=\phi^2$ be given as in Condition \ref{conditions} and $Z_t=\exp(M_t^{[\ln \phi]}- \frac{1}{2} \langle M^{[\ln \phi]} \rangle_t)$, $t \geq 0$. Then the transition function $(p_t)_{t \geq 0}$ defined by $p_tf(x)=\mathbb{E}_x(Z_tf(X_t))$ for $f \in \mathcal{B}_b(E)$ and $x \in E$ which corresponds to the strong Markov process $\mathbb{M}^{\varrho}$ has the strong Feller property.
\end{theorem}
\begin{proof}
Let $k \in \mathbb{N}$ and $K=[0,k)^n$. The set $K$ is regular, i.e., $\mathbb{P}_x(\tau_K =0)=1$ for each $x \in E \backslash K$. We define the transition function $(p^k_t)_{t \geq 0}$ analogously to $(p_t)_{t \geq 0}$ by $p_t^k f(x):=\mathbb{E}_x(Z_t^k f(X_t) \mathbbm{1}_{\{t <\tau_K\}})$. By the assumptions on $\phi_k$, Example \ref{exbounded} and Theorem \ref{thmGT}, $(p_t^k)_{t \geq 0}$ has the doubly Feller property for each $k \in \mathbb{N}$. Let $f \in \mathcal{B}_b(E)$ and choose a constant $C(f) < \infty$ such that $|f(x)| \leq C(f)$ for all $x \in E$. Clearly, $p_tf \in \mathcal{B}_b(E)$. Hence, it suffices to show that $p_tf$ is continuous. We have for $x \in D:=[0,d]^n$, $d>0$,
\begin{align*}
| p_tf(x) - p_t^kf(x)| &=| \mathbb{E}_x(Z_t f(X_t)) -\mathbb{E}_x(Z_t^k f(X_t) \mathbbm{1}_{\{t < \tau_K\}}) | \\
&=|\mathbb{E}_x(Z_t f(X_t) \mathbbm{1}_{\{ t \geq \tau_K\}})| \\
&\leq C(f)~ |\mathbb{E}_x(Z_t \mathbbm{1}_{\{ t \geq \tau_K\}})| \\
&\leq C(f)~ \sup_{x \in D} |\mathbb{E}_x(Z_t \mathbbm{1}_{\{ t \geq \tau_K\}})|
\rightarrow 0 \quad \text{as } k \rightarrow \infty
\end{align*}
uniformly on $D$ by (\ref{condZ}). Hence, $p_t f$ is the uniform limit on $D$ of the continuous functions $p_t^k f$ and therefore continuous on $D$ for each $d >0$, so $p_t f \in C_b(E)$.
\end{proof}
\begin{remark} \label{remZ}
Let $D \subset E$ be compact. Then $\lim_{k \rightarrow \infty} \sup_{x \in D} ~\mathbb{E}_x(\mathbbm{1}_{\{ \tau_k \leq t \}}~Z_t) =0$ holds for example if there exists some $p>1$ such that $\sup_{x \in D} \mathbb{E}_x(Z_t^p) < \infty$. Indeed, let $1<q<\infty$ such that $\frac{1}{p}+\frac{1}{q}=1$. Then
\[ \sup_{x \in D} \mathbb{E}_x(\mathbbm{1}_{\{\tau_k \leq t\}} Z_t) \leq \sup_{x \in D} \mathbb{E}_x(Z_t^p)^{\frac{1}{p}}~ \sup_{x \in D} \big(\mathbb{P}_x(\tau_k \leq t)\big)^{\frac{1}{q}}.\]
Define $C_t:= \max_{i=1,\dots,n} \max_{0 \leq s \leq t} X_s^i$ for $t \geq 0$. Then for $x \in D$ and $k >d$
\[ \mathbb{P}_x( \tau_k \leq t) \leq \mathbb{P}_0( C_t \geq k-d ) \leq n~ \sqrt{\frac{t}{2 \pi}} \frac{4}{k-d} \exp(-\frac{(k-d)^2}{2t}) =: C(k) \rightarrow 0 \ \text{ as } k \rightarrow \infty \]
due to \cite[p.96,(8.3)']{KS98}, since $C_t \leq \max_{i=1,\dots,n} \max_{0 \leq s \leq t} |B_s^i|$ almost surely with respect to $\mathbb{P}_0$.
\end{remark}
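The Gaussian tail estimate from \cite[p.96,(8.3)']{KS98} invoked above can be checked by simulation. The sketch below (discretization and sample size are arbitrary choices) verifies for $n=1$ that the empirical probability $\mathbb{P}_0(\max_{0\le s\le t}|B_s|\ge c)$ stays below the bound $4\sqrt{t/(2\pi)}\,c^{-1}e^{-c^2/(2t)}$; note that the discretized maximum slightly underestimates the true running maximum, which only strengthens the check.

```python
import numpy as np

rng = np.random.default_rng(7)
t, c = 1.0, 1.5
n_paths, n_steps, chunk = 50_000, 1_000, 5_000
dt = t / n_steps

# simulate discretized Brownian paths in chunks to limit memory use
hits = 0
for _ in range(n_paths // chunk):
    inc = rng.normal(0.0, np.sqrt(dt), size=(chunk, n_steps))
    run_max = np.abs(np.cumsum(inc, axis=1)).max(axis=1)
    hits += int((run_max >= c).sum())
p_emp = hits / n_paths

# reflection-principle tail bound for the running maximum of |B| (n = 1)
bound = 4.0 * np.sqrt(t / (2.0 * np.pi)) * np.exp(-c**2 / (2.0 * t)) / c
```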
\begin{proof}[Proof of Theorem \ref{thmmain1}]
By Section \ref{secgirsanov} there exists a strong Markov process $\mathbb{M}^{\varrho}$ with transition semigroup $(p_t)_{t \geq 0}$ and the Dirichlet form associated to $\mathbb{M}^{\varrho}$ is given by the closure of $(\mathcal{E}^{\varrho},\mathcal{D})$ on $L^2(E;\varrho \mu_n)$. Note that in this case $\mathcal{D} \cap D(L) \supset C_c^2(E)$ and $C_c^2(E)$ is also dense in $D(\mathcal{E}^n)$. Indeed, Lemma \ref{lemdense} is based on the fact that $C_c^1([0,\infty))$ is dense for the one dimensional form which also holds for $C_c^2([0,\infty))$ (and even $C_c^{\infty}([0,\infty))$) by \cite[Theorem 5.2.8(i)]{ChFu11}. The strong Feller property is shown in Theorem \ref{strongfeller} and the last statement holds by \cite[Exercise 4.2.1]{FOT94}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thmmain2}]
The statement follows by the results proven in \cite[Corollary 4.18, Theorem 5.6]{FGV13} considering that the absolute continuity condition \cite[(4.2.9)]{FOT94} is fulfilled.
\end{proof}
\section{Application to the dynamical wetting model} \label{appl}
\subsection{Densities corresponding to potential energies}
In the following, let $\phi \in C^2(E)$ be strictly positive such that $\phi \in L^2(E;\mu_n)$. Set $H:=- \ln \phi$ (thus, $\phi=\exp(-H)$) and assume additionally that there exist real constants $K_1 \geq 0,~K_2$ and $K_3$ such that
\begin{enumerate}
\item[(i)] $H(x) \geq -K_1$ for all $x \in E$,
\item[(ii)] $\partial_i H(x) \leq K_2$ for all $x \in \{ x_i=0\}:=\{ x \in E|~x_i=0\}$, $i=1,\dots,n$,
\item[(iii)] $\partial_i^2 H(x) \leq K_3$ for all $x \in E$, $i=1,\dots,n$.
\end{enumerate}
If we can verify (\ref{condZ}), Condition \ref{conditions} is fulfilled and thus, the results of Theorem \ref{thmmain1} and Theorem \ref{thmmain2} hold accordingly.\\
Using (\ref{AF}), (\ref{variationAF}) and It\^o's formula we see that
\begin{align}
M_t^{[\ln \phi]} - \frac{1}{2} \langle M^{[\ln \phi]} \rangle_t &= \sqrt{2} \sum_{i=1}^n \int_0^t \partial_i \ln \phi (X_s) \mathbbm{1}_{(0,\infty)} (X_s^i) dB_s^i -\sum_{i=1}^n \int_0^t \big(\partial_i \ln \phi(X_s) \big)^2 \mathbbm{1}_{(0,\infty)}(X_s^i) ds \notag \\
&=H(X_0)-H(X_t)+ \frac{1}{\beta} \sum_{i=1}^n \int_0^t \partial_i H(X_s) \mathbbm{1}_{\{0\}}(X_s^i) ds \label{AFdecomp} \\
&\ \ + \sum_{i=1}^n \int_0^t \big( \partial_i^2 H(X_s) - (\partial_i H(X_s))^2 \big) \mathbbm{1}_{(0,\infty)}(X_s^i) ds \notag \\
&\leq H(x) + K_1 + \frac{n}{\beta} K_2 t + n K_3 t \notag
\end{align}
$\mathbb{P}_x$-a.s. for each $x \in E$.\\
Let $p>1$ be arbitrary and let $D \subset E$ be compact. Then
\[ \sup_{x \in D}~ \mathbb{E}_x(Z_t^p) \leq \exp\big(p~( \sup_{x \in D} H(x) + K_1 + \frac{n}{\beta} K_2 t + n K_3 t)\big) < \infty \quad \text{for every } t>0.\]
Thus, in view of Remark \ref{remZ}, (\ref{condZ}) holds true.
\subsection{Densities corresponding to potential energies given by pair potentials}
Assume that $H$ is given by a potential with nearest neighbor pair interaction, i.e., $H$ is defined as in (\ref{hamilt}). In particular, $\kappa:=\int_{\mathbb{R}} \exp(-V(r)) dr < \infty$, $V$ is convex, $V^{\prime}(0)=0$ and $V^{\prime}$ is non-decreasing. Then, we have $H(x) \geq -\frac{n}{2} b$,
\[ \partial_i H (x)= \frac{1}{2} \sum_{\stackunder{|i-j|=1}{j \in \{0,\dots,n+1\}}} V^{\prime}(x_i-x_j) \ \ \big(= \frac{1}{2} V^{\prime}(x_i-x_{i-1}) + \frac{1}{2} V^{\prime}(x_i-x_{i+1}) \big)
\]
for $i=1,\dots,n$ and moreover,
\[ \partial^2_i H (x)=\frac{1}{2} \sum_{\stackunder{|i-j|=1}{j \in \{0,\dots,n+1\}}} V^{\prime \prime}(x_i-x_j) \ \ \big(= \frac{1}{2} V^{\prime \prime}(x_i-x_{i-1}) + \frac{1}{2} V^{\prime \prime}(x_i-x_{i+1}) \big).
\]
Since $\partial_i H (x)=\frac{1}{2} (V^{\prime}(-x_{i-1}) + V^{\prime}(-x_{i+1})) \leq 0$ if $x_i=0$ (recall that $V^{\prime}$ is non-decreasing with $V^{\prime}(0)=0$ and $x_{i \pm 1} \geq 0$), we get $\partial_i H(x) \leq 0$ for all $x \in \{ x_i=0\}$ and furthermore, $\partial_i^2 H(x) \leq c_+$, $i=1,\dots,n$. Thus, (i)-(iii) above are fulfilled with $K_1:=\frac{n}{2}b$, $K_2:=0$ and $K_3:=c_+$. Note that $\phi \in L^2(E;\mu_n)$.
\section{Uniqueness of weak solutions} \label{uniqueness}
\begin{theorem}
Let $\varrho=\phi^2$ be given as in Condition \ref{conditions}. Then the solution to (\ref{main}) is unique in law.
\end{theorem}
\begin{proof}
By \cite[\S 24, Theorem 1, Corollary 1]{GS72} the one dimensional sticky Brownian motion on $[0,\infty)$ is unique in law. Thus, the same holds true for $n$ independent sticky Brownian motions for each $n \in \mathbb{N}$. Finally, we can conclude that the solution to (\ref{main}) is unique in law due to \cite[Chapter IV, Theorem 4.2]{WaIk89}, since its law is constructed by a Girsanov transformation.
\end{proof}
\subsection*{Acknowledgment}
We thank Torben Fattler for helpful comments and discussions. R.~Vo{\ss}hall gratefully acknowledges financial support in the form of a fellowship of the German state Rhineland-Palatine.
\section{Introduction}
\blfootnote{This work was supported by DFG grant {TR 1223/2-1}.}
Switched differential-algebraic equations (DAEs) form an important class of switched systems, where the dynamics are not only discontinuous with respect to time, but also the state trajectories are constrained by certain algebraic equations which may also change as the system switches from one mode to another.
Such dynamical models have found utility e.g.\ in the analysis of electrical power distribution \citep{GrosTren14} and of electrical circuits \citep{Tren12}.
We consider switched DAEs of the following form
\begin{equation}\label{eq:sysLin}
\begin{aligned}
E_\sigma \dot x &= A_\sigma x + B_\sigma u \\
y &= C_\sigma x
\end{aligned}
\end{equation}
where $x, u , y$ denote the state (with dimension $n\in\mathbb{N}$), input (with dimension $\mathtt{u}\in\mathbb{N}$) and output (with dimension $\mathtt{y}\in\mathbb{N}$) of the system, respectively. The switching signal $\sigma: (t_0,\infty) \rightarrow \mathbb{N}$ is a piecewise constant, right-continuous function of time and in our notation it changes its value at time instants $t_1<t_2<\ldots$ called the switching times.
For a fixed $\sigma$, the triplet $(x,u,y)$ is used to denote signals satisfying \eqref{eq:sysLin}.
We adopt the convention that over the interval $[t_p, t_{p+1})$ of length $\tau_p:=t_{p+1}-t_p$, the active mode is defined by the quadruple $(E_p,A_p,B_p,C_p)\in\mathbb{R}^{n\times n} \times \mathbb{R}^{n\times n} \times \mathbb{R}^{n\times \mathtt{u}} \times \mathbb{R}^{\mathtt{y}\times n}$, $p \in \mathbb{N}$.
Over the interval $(t_0,t_1)$, it is assumed that the system has some past which may be described by $(E_0,A_0,B_0,C_0)$.
The solution concept for switched DAEs is studied in \citep{Tren09d} and a brief discussion is also included in Section~\ref{sec:sol}.
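To fix ideas, the following hypothetical numerical sketch (the matrices are illustrative choices, not taken from later sections) shows the characteristic behavior of \eqref{eq:sysLin}: when the active mode changes from a pure ODE to a DAE whose algebraic part enforces $x_2 = 0$, the state jumps onto the new consistency space. The consistency projector used here is read off by inspection; the general construction appears in Section~\ref{sec:prelim}.

```python
import numpy as np

# hypothetical two-mode switched DAE:
# mode 1:  I * xdot = A1 x            (pure ODE: a rotation)
# mode 2:  diag(1,0) * xdot = A2 x    (xdot_1 = -x_1 and 0 = x_2)
A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
A2 = np.array([[-1.0, 0.0], [0.0, 1.0]])   # only its structure is used below
Pi2 = np.diag([1.0, 0.0])                  # consistency projector of mode 2

def mode1_flow(v, t):
    # closed-form matrix exponential of the rotation generator A1
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s], [-s, c]]) @ v

def mode2_flow(v, t):
    # on the consistency space {x_2 = 0}: x_1 decays, x_2 stays 0
    return np.array([v[0] * np.exp(-t), 0.0])

x0 = np.array([1.0, 0.0])
x_pre = mode1_flow(x0, 1.0)   # state just before the switch at t_1 = 1
x_post = Pi2 @ x_pre          # jump onto the constraint set of mode 2
x_end = mode2_flow(x_post, 0.5)
```

The discontinuity $x(t_1^+)=\Pi_2 x(t_1^-)$ is exactly the kind of state jump that the solution concept of Section~\ref{sec:sol} has to accommodate.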
Dynamical systems with discontinuous or constrained trajectories have gathered a lot of interest in the control community, as they form an important class of hybrid, or discontinuous, dynamical systems, see e.g.\ \citep{GoebSanf09}.
One direction of research for these systems includes the study of structural properties that could be used for control design problems, and in this regard the problem of state reconstruction and estimation is of particular interest \citep{TanwPhD11}.
In the current literature, one finds that the earlier work on observability and observers for switched/hybrid systems was aimed at using classical Luenberger observers for continuous dynamics and treating the switching or discontinuity as a perturbation that can be overcome by certain strong assumptions on system data.
This line of work also requires that the underlying observability notion allows instantaneous recovery of the state from the measured output \citep{BabaPapp05,VidaChiu03}.
Modified Luenberger observers have also been used for constrained dynamical systems using similar observability concepts: see \citep{TanwBrog14} when the constraint sets are convex (or, mildly nonconvex), and \citep{TanwBrog16} when the constraints result in impacts.
However, for switched dynamical systems, several different notions of observability can be defined.
In \citep{SunGe02, XieWang03}, a switched system comprising ordinary differential equations (ODEs) is called observable if there exists a switching signal that allows reconstruction of the initial state from the output measurements over an interval.
This concept also appears in the observer construction (for continuous state and switching signal) proposed in \citep{BallBenv03}.
However, in our recent work, a more relaxed notion of observability has been proposed for switched ODEs \citep{TanwShim13} and switched DAEs \citep{TanwTren12}.
The switching signal in this case is assumed to be known and fixed (i.e.\ playing the same role as the input $u$ which is also assumed to be known and not influenced by the observer). By measuring the outputs and inputs over an interval, and using the data of subsystems activated during that interval, it is determined whether state reconstruction is possible or not.
Several variants of this notion are also collected in the survey \citep{PetrTanw15}.
A state estimation algorithm based on this generalized observability concept can be found in \citep{ShimTanw14, TanwShim13, TanwShim15} for switched ODEs.
The main contribution of this paper is to address the observer design for switched DAEs which, apart from our preliminary work \citep{TanwTren13}, has not been addressed in the literature.
Already in the context of nonswitched systems several challenges arise in the study of DAEs.
The difference basically arises due to the presence of algebraic equations (static relations) in the description of the system because of which the state trajectories can only evolve on the sets defined by the algebraic equations of the active mode.
Observer designs have been studied for (nonswitched) DAEs since 1980's, e.g. \citep{Dai89a,FahmOrei89}.
Unlike ODEs, the observer design in DAEs requires additional structural assumptions and, furthermore, the order of the observer may depend on the design method.
Because of these added generalities, observer design for nonswitched DAEs is still an active research field \citep{BobiCamp11, Daro12}, and the recent survey articles summarize the development of this field \citep{BergReis15ppb, BobiCamp14}.
In studying switched DAEs, our modeling framework allows for time-varying algebraic relations.
The changes in algebraic constraints due to switching introduce jumps in the state of the system, and because of the possibility of a higher-index DAE, these jumps may get differentiated and generate impulsive solutions.
The notion of observability studied in this paper takes into account the additional structure due to algebraic constraints, and the added information from the outputs in case there are impulses observed in the measurements.
This observability concept is then used to construct a mapping from the output space to the state space, which allows us to theoretically reconstruct the state.
The key element of constructing this mapping is to show how the structure of a linear DAE is exploited to recover the information about the state in individual subsystems.
This structural decomposition is then combined with the expressions used for evolution of states in switched DAEs to accumulate all possible recoverable information from past measurements about the state at one time instant.
The construction then yields a systematic procedure for writing the value of the state at a time instant in terms of outputs measured over an interval for which the system is observable.
The theoretical mappings constructed in the process are often not realizable in practice, but the derivation is used to describe a general class of state observers that generate asymptotically converging state estimates.
The main result states that if the observable components of the individual subsystems can be estimated well-enough, and the required observability assumption persistently holds with time, then the estimates converge asymptotically.
Our initial work on observers for switched DAEs \citep{TanwTren13}, and even the observers proposed for switched ODEs \citep{ShimTanw14, TanwShim13} can be seen as a special case of the general class of state estimators studied in this paper.
\section{Contribution and Layout}
This section provides a summary of all the technical results that are developed in this article, and a coherent view of how the different components are connected together to solve the state estimation problem for switched DAEs. The reader may also refer to this section as an index for finding the appropriate section for technical terms.
The main contribution of this article is to provide a systematic procedure for designing observers for system class~\eqref{eq:sysLin} which generate asymptotically convergent state estimates.
The flow-diagram which shows the working of the proposed observer is given in Figure~\ref{fig:obsAll}.
\begin{figure}[hbt]
\centering
\begin{tikzpicture}
\draw (0,0) node [rectangle, rounded corners, draw, align=center, minimum height =0.65cm, text centered] (sys) {System dynamics\\[0.4\baselineskip] $E_\sigma \dot x = A_\sigma x + B_\sigma u$};
\draw (0,-2) node [rectangle, rounded corners, draw, align=center, minimum height =0.65cm, text centered] (copy) {System copy with resets \\[0.4\baselineskip] $E_\sigma \dot{\widehat x} = A_\sigma \widehat x + B_\sigma u$\\ $\widehat x(t_p^+) = \Pi_p(\widehat x(t_p^-) - \xi_p)$};
\draw (0,-4.5) node [rectangle, rounded corners, draw, align=center, minimum height =0.65cm] (estimate) {Partial state estimations\\[0.5ex] $\begin{aligned}
\widehat{z}_k &\approx U_k[0/z^\text{\normalfont diff}_k/z_k^\text{\normalfont imp}]\\[0.2ex]
z^\text{\normalfont diff}_k &= \mathcal{O}_{(t_{k-1},t_k)}^\text{\normalfont diff}(y^e_{(t_{k-1},t_k)})\\
z^\text{\normalfont imp}_k &= \mathcal{O}_{[t_k]}^\text{\normalfont imp} (y^e[t_k])
\end{aligned}$};
\draw (0,-7) node [rectangle, rounded corners, draw, align=center, minimum height =0.65cm] (corr) {Error Correction\\ $\xi_p = \mathcal{O}_{q}^p(\widehat{\mathbf{z}}_{q+1}^p) - \Xi_{q}^{p-1}(\boldsymbol{\xi}_{q+1}^{p-1})$};
\coordinate (tr) at ([xshift=2.5cm]sys.east);
\coordinate (mr) at ([yshift=-2cm]tr);
\coordinate (br) at ([yshift=-4cm]tr);
\coordinate (tl) at ([xshift=-2cm]sys.west);
\coordinate (tml) at ([xshift=0.5cm]tl);
\coordinate (ml) at ([yshift=-2cm]tml);
\coordinate (bl) at ([yshift=-4cm]tml);
\draw [thick,black] (mr) node [draw, circle,inner sep = 0.25mm] (sum) {$-$};
\draw [ultra thick, ->] (sys.east) node[anchor = south west] {$\ \ y = C_\sigma x$} -- (tr) -- (sum.north);
\draw [ultra thick, ->] (copy.east) node[anchor = south west]{$\ \widehat y = C_\sigma \widehat x$} -- (sum.west);
\draw [ultra thick, ->] (sum.south) node[anchor = north west]{$ y^e$} |- (estimate.east);
\draw [thick, ->] (tl) node[anchor = south west] {$(\sigma,u)$} -- (sys.west);
\draw [thick, ->] (tml) -- ([yshift=0.25cm]ml) -- ([yshift=0.25cm]copy.west);
\draw [ultra thick, ->] (corr.west) -| ([xshift = -0.5cm, yshift=-0.25cm]ml) -- ([yshift=-0.25cm]copy.west) node [anchor=north east] {$\xi_1, \xi_2, \dots\!$};
\draw [thick, ->] ([xshift=-0.5cm]corr.west) -- ([xshift=-0.5cm, yshift = -0.75cm]corr.west) -- ([xshift=0.75cm, yshift = -0.75cm]corr.east) -- ([xshift=0.75cm]corr.east) -- (corr.east);
\draw[ultra thick,->] ([xshift=-1.8cm]estimate.south) node[anchor = north west] {$\widehat{z}_{q+1}$} -- ([xshift=-1.8cm]corr.north);
\draw[ultra thick,->] ([xshift=1.8cm]estimate.south) node[anchor = north west] {$\widehat{z}_{p}$} -- ([xshift=1.8cm]corr.north);
\draw[ultra thick,->] ([xshift=0.6cm]estimate.south) node[anchor = north west] {$\widehat{z}_{p-1}$} -- ([xshift=0.6cm]corr.north);
\draw ([xshift=-0.6cm]estimate.south) node[anchor = north west] {$\cdots$};
\end{tikzpicture}
\caption{A schematic representation of the state estimator.}
\label{fig:obsAll}
\end{figure}
Our proposed observer basically comprises two layers. In the first layer, a system copy with variable $\widehat x$ (also the state estimate) is simulated with same time scale as the actual plant, and $\widehat x$ is reset at some switching times $t_p$ by an error correction vector $\xi_p$.
The second layer comprises an algorithm to compute $\xi_p$ in very short (negligibly small) time. This article (in Sections~\ref{sec:obsCond}--\ref{sec:obsDesign}) develops the theoretical tools and numerical certificates that guarantee efficient implementation of this algorithm. The central idea is that $\xi_p$ closely approximates the estimation error prior to the reset times $e(t_p^-):=\widehat x(t_p^-) - x(t_p^-)$ by using the output measurements over the interval $(t_q,t_p]$, $q<p$.
If this approximation can be achieved to a desired accuracy, and these sufficiently rich approximations are injected into the estimate's dynamics by resetting $\widehat x$ often enough, then our algorithms ensure that $\widehat x(t)$ converges to $x(t)$ as $t$ tends to infinity.
There are three primary mechanisms involved in achieving this desired goal:
\begin{itemize}[leftmargin=*]
\item Determinability notion in Section~\ref{sec:obsCond} captures the property that, by measuring the output of \eqref{eq:sysLin} over an interval $(t_q,t_p]$, we must be able to recover the state of the system $x(t_p^+)$ exactly; and Theorem~\ref{thm:DAEDetMult} provides conditions on system data to achieve this property.
\item In Section~\ref{sec:outMaps}, under the assumption that system~\eqref{eq:sysLin} is determinable over the interval $(t_q,t_p]$, we build the theoretical engine to compute $\xi_p$.
This involves looking at the observable components of the individual subsystems for the error dynamics denoted by $z_k$, $k = q+1, \cdots, p$. Using the structure of a DAE, we define the maps $\mathcal{O}_k^\text{\normalfont diff}$, $\mathcal{O}_k^\text{\normalfont imp}$, and decompose $z_k$ into $z_k^\text{\normalfont cons}$, $z_k^\text{\normalfont diff} := \mathcal{O}_k^\text{\normalfont diff}(y^e_{(t_{k-1},t_k]})$, $z_k^\text{\normalfont imp} := \mathcal{O}_k^\text{\normalfont imp}(y^e_{[t_k]})$ based on whether they can be recovered from algebraic constraints, smooth output measurements, or impulsive output measurements, respectively.
Theorem~\ref{thm:mapZs} then provides the construction of a matrix $\mathcal{O}_q^p$ such that
\[
e(t_p^+) = \Pi_p\mathcal{O}_q^p \mathbf{z}_{q+1}^p,
\]
where $\mathbf{z}_{q+1}^p:=(z_{q+1}, \dots, z_p)$, and $\Pi_p$ is a projector depending on the system data $(E_p,A_p)$.
\item In Section~\ref{sec:obsDesign}, we show how to construct the estimate $\widehat z_k$ for the observable component of the error dynamics. These estimates are then used to define the vector $\xi_p$ recursively (see also Fig.~\ref{fig:obsAll}). The final result, Theorem~\ref{thm:obsGenConv}, rigorously proves the asymptotic convergence of $\widehat x(t)$ toward $x(t)$ as $t \rightarrow \infty$.
\end{itemize}
\section{Preliminaries}\label{sec:prelim}
Before addressing the problem of observability, or state estimation, it helps to recall the solution framework associated with \eqref{eq:sysLin}, and certain algebraic properties of the system matrices, that will form the foundation of the analysis carried out in this paper.
In Section~\ref{sec:sol}, we provide a motivation for the solution concept that is adopted for system class~\eqref{eq:sysLin}.
Then, to analyze such solutions, certain properties of the matrix pairs $(E_p, A_p)$, $p \in \mathbb{N}$, are described in Section~\ref{sec:basicProp}.
\subsection{Solution Concept for Switched DAEs}\label{sec:sol}
To discuss the solution concept for the switched DAEs described in \eqref{eq:sysLin}, let us first consider a matrix pair $(E,A)$ and the problem of finding a trajectory $x$ which solves the following initial-trajectory problem (ITP):
\begin{subequations}\label{eq:ITP}
\begin{align}
x_{(-\infty,0)}&=x^0_{(-\infty,0)}\label{eq:ITPa}\\
(E\dot{x})_{[0,\infty)} &= (Ax)_{[0,\infty)}, \label{eq:ITPb}
\end{align}
\end{subequations}
in some appropriate sense.
In this equation, $x^0$ is some initial trajectory, and $f_\mathcal{I}$ denotes the restriction of $f$ to an interval $\mathcal{I}$ and will be defined formally later.
To avoid certain complications (such as nonuniqueness of solutions), we limit our attention to \emph{regular} matrix pairs $(E,A)$, i.e.\ we assume that $\det(sE-A)$ is not the zero polynomial.
A fundamental tool in studying \eqref{eq:ITP} is the notion of {\em consistency space}, denoted $\mathfrak{C}_{(E,A)}$, and defined as
\[
\mathfrak{C}_{(E,A)}:=\left\{x_0\in\mathbb{R}^n \, \vert \, \exists \, x \in\mathcal{C}^1: E\dot{x}=Ax \wedge x(0)=x_0 \right \}
\]
where $\mathcal{C}^1$ is the space of differentiable functions $x:\mathbb{R}\to\mathbb{R}^n$.
Intuitively speaking, $\mathfrak{C}_{(E,A)}$ corresponds to the set of initial conditions which are consistent with the algebraic equations encoded in \eqref{eq:ITPb}, and thus a smooth solution evolving within $\mathfrak{C}_{(E,A)}$ can be obtained.
An algebraic characterization of $\mathfrak{C}_{(E,A)}$ in terms of the matrices $E,A$ will be given later in this section.
At this point, it follows from the definition that if $x^0(\cdot)$ is absolutely continuous and $x^0(0^-) \in \mathfrak{C}_{(E,A)}$, then an absolutely continuous trajectory $x$ can be defined, such that \eqref{eq:ITP} is satisfied, and furthermore $x(t) \in \mathfrak{C}_{(E,A)}$, for each $t \ge 0$. It is also shown in \citep{Tren09d} that such a solution is unique if, and only if, the matrix pair $(E,A)$ is regular.
However, if $x^0(0^-) \not\in \mathfrak{C}_{(E,A)}$, then it is not so obvious how the solution to \eqref{eq:ITP} must be obtained.
A valid solution $x$ with $x(0^-) = x^0(0^-)$ must ``jump'' to $x(0^+) \in \mathfrak{C}_{(E,A)}$, and from there onwards, the solution can be extended uniquely over the interval $(0,\infty)$.
In contrast to systems with ODEs which only comprise operations involving integration, the DAE \eqref{eq:ITPb} allows the possibility of (higher-order) differentiation.
For this reason, if a jump is introduced at time instant $t = 0$, then the notion of derivatives of jumps must also be introduced in the generalized solution concept associated with \eqref{eq:ITP}.
This motivates us to enlarge the solution space from functions to {\em distributions}, because the derivative of a jump is then formally defined as the {\em Dirac impulse} (or Dirac delta).
But the restriction to an interval as required in \eqref{eq:ITP} cannot be defined for general distributions \citep[Lem.~5.1]{Tren13a}, and hence we consider the smaller space of {\em piecewise smooth distributions}, denoted by $\D_{\pw\cC^\infty}$, for which \eqref{eq:ITP} is indeed well defined.
We refer the reader to \citep{Tren09d} for formal details, but for this paper, it suffices to recall that $x \in (\D_{\pw\cC^\infty})^n$ is written as
\begin{subequations}\label{eq:xGenDist}
\begin{align}
x &= x^f_\mathbb{D} + x[\cdot],
\intertext{where $x_\mathbb{D}^{f}$ denotes the distribution induced by the piecewise smooth function $x^f:\mathbb{R}\to\mathbb{R}^n$ and $x[\cdot]$ denotes the impulsive part of $x$ given by}
x[\cdot] &= \sum_{k\in\mathbb{Z}} x[t_k] = \sum_{k \in \mathbb{Z}} \sum_{i=0}^{n_k} a_k^i \delta_{t_k}^{(i)},
\end{align}
\end{subequations}
where $\setdef{t_k\in\mathbb{R}}{k\in\mathbb{Z}}$ is a strictly increasing set without finite accumulation and $\delta_{t_k}^{(i)}$ denotes the $i$-th derivative of the Dirac impulse with support at $t_k$. The $n_k+1$ coefficients $a_k^0, a_k^1,\ldots, a_k^{n_k}\in\mathbb{R}^n$ parameterize the \emph{impulsive part} of $x$ at $t_k$, denoted by $x[t_k]$. Furthermore, the \emph{left-} and \emph{right-sided evaluations} $x(t^-)$, $x(t^+)$ are well defined for each $t\in\mathbb{R}$. Finally, a \emph{distributional restriction} is well defined; in fact for $x$ as in \eqref{eq:xGenDist} the restriction to some interval $\mathcal{I}\subseteq\mathbb{R}$ is defined via
\begin{equation}\label{eq:distr_restr}
x_{\mathcal{I}} := (x^f_\mathcal{I})_\mathbb{D} + \sum_{t_k\in\mathcal{I}} x[t_k],
\end{equation}
where $x^f_\mathcal{I}(t)=x^f(t)$ for $t\in\mathcal{I}$ and $x^f_\mathcal{I}(t)=0$ otherwise.
If in \eqref{eq:ITP} it is assumed that $x^0 \in (\D_{\pw\cC^\infty})^n$, then we seek a solution $x$ of the form \eqref{eq:xGenDist}; in this particular case, the impulse times $t_k$ either correspond to the impulse times of $x^0$ or to $t = 0$, where a Dirac impulse may occur due to the jump from an inconsistent initial condition to the consistency space.
When studying switched DAEs of the form \eqref{eq:sysLin}, the consistency spaces for individual subsystems, in general, are different and affected by the presence of inputs as well.
Hence, the switch from one subsystem to another may introduce jumps, and their higher-order derivatives, which we represent formally using distributions.
One can draw an analogy between \eqref{eq:ITP} and how the solution of a switched DAE~\eqref{eq:sysLin} is extended beyond a switching time instant: The initial trajectory $x^0 \in \D_{\pw\cC^\infty}$ corresponds to some past over the interval $(t_0,t_1)$, and then at $t_1$, the system first switches to a new subsystem described by a DAE, which would introduce a jump to the active consistency space, along with the possibility of Dirac impulses.
In general, the state $x(t_p^-)$ may not be consistent with the consistency space of the subsystem $(E_p,A_p,B_p,C_p)$ which is active over the interval $[t_p, t_{p+1})$, and the solution is extended by introducing a jump to the consistency space of the new subsystem, and introducing Dirac impulses at the switching times.
The foregoing discussion motivates us to adopt the framework of piecewise smooth distributions to express solutions of a switched DAE.
\begin{Definition}
The triplet $(x,u,y)$ is called a \emph{(distributional) solution} of the switched DAE \eqref{eq:sysLin} on some interval $\mathcal{I}\subseteq (t_0,\infty)$ if, and only if, $x \in (\D_{\pw\cC^\infty})^n$, $u\in (\D_{\pw\cC^\infty})^{\mathtt{u}}$, $y \in (\D_{\pw\cC^\infty})^{\mathtt{y}}$, and \eqref{eq:sysLin} is satisfied within $\D_{\pw\cC^\infty}$ on $\mathcal{I}$, i.e.\
\[
(E_\sigma\dot{x})_{\mathcal{I}} = (A_\sigma x+B_\sigma u)_{\mathcal{I}},\quad y_{\mathcal{I}} = (C_\sigma x+D_\sigma u)_{\mathcal{I}}.
\]
If $\mathcal{I} = (t_0,\infty)$ we call $(x,u,y)$ a \emph{global} solution.
\end{Definition}
In \citep{Tren09d} it is shown that \eqref{eq:sysLin} always has a distributional solution which is uniquely determined on $(t_0,\infty)$ by $x(t_0^+)$ and the input $u$, provided \emph{all matrix pairs $(E_k,A_k)$ are regular}. Throughout this paper we always make this assumption.
\begin{Remark}\label{rem:local_sol}
Under the regularity assumption and for a fixed input $u$, any solution of \eqref{eq:sysLin} on some interval $(s,t)$ can be uniquely extended to a solution on $(s,\infty)$; however, it is not always possible to extend this solution backward in time, as the following simple example shows:
\[
0 = x \text{ on }(t_0,t_1),\quad\text{and}\quad \dot{x} = 0 \text{ on }[t_1,\infty).
\]
The only global solution is $x=0$, but $x=c$ for any constant $c\in\mathbb{R}^n$ is a solution on $[t_1,\infty)$ which for $c\neq 0$ cannot be extended on the whole interval $(t_0,\infty)$.
\end{Remark}
\subsection{Properties of a matrix pair $(E,A)$} \label{sec:basicProp}
We now collect some important properties and definitions for a regular matrix pair $(E,A)$, which can then of course be applied to each subsystem in \eqref{eq:sysLin}. The properties, and the notations introduced in the process, are then used throughout the paper.
A very useful characterization of regularity is the following well-known result (see e.g.\ \citep{BergIlch12a}).
\begin{Proposition}[Regularity and quasi-Weierstra\ss\ form]
A matrix pair $(E,A)\in\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n}$ is regular if, and only if, there exist invertible matrices $S,T\in\mathbb{R}^{n\times n}$ such that
\begin{equation}\label{eq:QWF}
(SET,SAT) = \left( \begin{bmatrix} I & 0 \\ 0 & N \end{bmatrix}, \begin{bmatrix} J & 0 \\ 0 & I \end{bmatrix} \right) ,
\end{equation}
where $J \in \mathbb{R}^{n_1\times n_1}$, $0\leq n_1\leq n$, is some matrix and $N\in\mathbb{R}^{n_2 \times n_2} $, $n_2:=n-n_1$, is a nilpotent matrix.
\end{Proposition}
The most useful aspect of describing a DAE in quasi-Weierstra\ss\ form \eqref{eq:QWF} is that it decomposes the differential and algebraic part.
If we partition the state $x$ of the system $E\dot x = A x$ in the new coordinates $(v^\top,w^\top)^\top = T^{-1} x$, then \eqref{eq:QWF} reads as follows: $\dot v = J v$ and $N\dot w = w$.
The smooth part of the solution is obtained by solving the ODE $\dot v = J v$, and the only classical solution for the equation $N\dot w = w$ is $w=0$. For the latter, the distributional response to an inconsistent initial value is given by
\[
w[0] = -\sum_{i=0}^{n_2-2}N^{i+1} w(0^-) \delta_0^{(i)}.
\]
We thus obtain an alternative interpretation of the solution of a DAE: the ODE component continues to evolve smoothly, whereas the purely algebraic component jumps to the origin, possibly producing Dirac impulses at the time the system is switched on.
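To make the impulsive response concrete, the coefficients $-N^{i+1}w(0^-)$ of the Dirac impulses can be computed directly. The following is a minimal numerical sketch (assuming numpy is available; the nilpotent $N$ below is an illustrative choice, not taken from the paper):

```python
import numpy as np

def impulse_coefficients(N, w0):
    """Coefficients a_i of w[0] = sum_i a_i * delta_0^(i) for N w' = w.

    Implements a_i = -N^(i+1) w0 for i = 0, ..., n2 - 2 (formula above).
    """
    n2 = N.shape[0]
    return [-np.linalg.matrix_power(N, i + 1) @ w0 for i in range(max(n2 - 1, 0))]

# Nilpotent N of index 2: an inconsistent value in the second component
# produces a single Dirac impulse in the first component.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
w0 = np.array([0.0, 1.0])
coeffs = impulse_coefficients(N, w0)   # [a_0] with a_0 = -N w0
```

Since $N^2 = 0$ here, only the coefficient $a_0 = -Nw(0^-)$ survives, i.e.\ the response contains a Dirac impulse but none of its derivatives.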
To describe the system in the form \eqref{eq:QWF}, one can calculate the matrices $S,T$ by constructing the so called Wong-sequences from the matrices $E,A$, see \citep{BergIlch12a} for details.
Based on these transformations, we define now the following important matrices.
\begin{Definition}\label{def:proj}
Consider the regular matrix pair $(E,A)$ with corresponding quasi-Weierstra\ss\ form \eqref{eq:QWF}.
The \emph{consistency projector} of $(E,A)$ is given by
\[
\Pi_{(E,A)} = T \begin{bmatrix} I & 0\\ 0 & 0 \end{bmatrix}T^{-1}.
\]
Furthermore, let
\[\begin{aligned}
A^\text{\normalfont diff} := T \begin{bmatrix} J & 0 \\ 0 & 0 \end{bmatrix} T^{-1}, \text{ and } \
E^\text{\normalfont imp} := T \begin{bmatrix} 0 & 0 \\ 0 & N \end{bmatrix} T^{-1}.
\end{aligned}
\]
Finally, if also an output matrix $C$ is considered let
\[
C^\text{\normalfont diff} := C \Pi_{(E,A)}.
\]
\end{Definition}
Note that none of the above matrices depends on the specific choice of the (non-unique) quasi-Weierstra\ss\ form \eqref{eq:QWF}.
To give an intuitive interpretation of the objects introduced in Definition~\ref{def:proj}, it is noted that the definition of the consistency projector yields $\im \Pi_{(E,A)}=\mathfrak{C}_{(E,A)}$, and the mapping $\Pi_{(E,A)}$ defines the jump rule onto the consistency space for inconsistent initial conditions, see Lemma~\ref{lem:consProj}.
After the jump, the state evolves within the consistency space, and the matrices $A^\text{\normalfont diff}$ and $C^\text{\normalfont diff}$ are used to describe this evolution process, see Lemma~\ref{lem:diffProj}.
The impulsive part of the solution \eqref{eq:ITP} resulting from differentiation of the jump at time $t=0$, denoted by $x[0]$, is described by $E^\text{\normalfont imp}$, see Lemma~\ref{lem:impulses}.
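The matrices of Definition~\ref{def:proj} are effectively computable, since the Wong sequences stabilize after at most $n$ steps. The following is a minimal numerical sketch (assuming numpy; the helper names are ours, and subspaces are represented by orthonormal basis matrices obtained via the SVD):

```python
import numpy as np

def _orth(M, tol=1e-10):
    """Orthonormal basis of im(M)."""
    if M.shape[1] == 0:
        return M
    U, s, _ = np.linalg.svd(M)
    return U[:, : int(np.sum(s > tol))]

def _ker(M, tol=1e-10):
    """Orthonormal basis of ker(M)."""
    _, s, Vh = np.linalg.svd(M)
    return Vh[int(np.sum(s > tol)):].T

def _preimage(A, S):
    """Basis of A^{-1}(im S) = ker(P A), with P projecting onto (im S)^perp."""
    Q = _orth(S)
    P = np.eye(A.shape[0]) - Q @ Q.T
    return _ker(P @ A)

def consistency_projector(E, A):
    """Consistency projector of a regular pair (E, A) via Wong sequences."""
    n = E.shape[0]
    V = np.eye(n)                    # V_0 = R^n,  V_{k+1} = A^{-1}(E V_k)
    W = np.zeros((n, 1))             # W_0 = {0},  W_{k+1} = E^{-1}(A W_k)
    for _ in range(n):
        V, W = _preimage(A, E @ V), _preimage(E, A @ W)
    T = np.hstack([V, W])            # invertible by regularity of (E, A)
    n1 = V.shape[1]
    D = np.zeros((n, n)); D[:n1, :n1] = np.eye(n1)
    return T @ D @ np.linalg.inv(T)

# Example: E = diag(1, 0), A = I (one ODE state, one algebraic constraint);
# the consistency projector is diag(1, 0).
Pi = consistency_projector(np.array([[1.0, 0.0], [0.0, 0.0]]), np.eye(2))
```

The returned matrix is idempotent and its image is the consistency space, in line with the discussion above.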
\begin{Lemma}[{\citep[Thm.~4.2.8]{Tren09d}}]\label{lem:consProj}
Consider the ITP \eqref{eq:ITP} with regular matrix pair $(E,A)$ and with arbitrary initial trajectory $x^0\in(\D_{\pw\cC^\infty})^n$. There exists a unique solution $x\in(\D_{\pw\cC^\infty})^n$ and
\[
x(0^+)=\Pi_{(E,A)} x(0^-)
\]
\end{Lemma}
\begin{Lemma}\label{lem:diffProj}
For any regular matrix pair $(E,A)$ and output matrix $C$, the following implication holds for all continuously differentiable $(x,y)$:
\[\left.\begin{aligned}
E\dot{x} &= Ax,\\
y &= Cx
\end{aligned}\right\}\ \Rightarrow\
\left\{\begin{aligned}
\dot{x} &= A^\text{\normalfont diff} x,\\
y &= C^\text{\normalfont diff} x
\end{aligned}\right.
\]
In particular, any classical solution $x$ of $E\dot{x}=Ax$ satisfies
\[
x(t) = \mathtt{e}^{A^\text{\normalfont diff} (t-s)} x(s),\quad s,t\in\mathbb{R}.
\]
\end{Lemma}
\begin{proof}
The proof is a simple consequence of \citep[Lem.~3]{TanwTren10}.
\end{proof}
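The flow formula of Lemma~\ref{lem:diffProj} can be checked numerically on a simple pair. A small sketch (assuming numpy and scipy are available; the pair below is our illustrative choice):

```python
import numpy as np
from scipy.linalg import expm

# The pair (E, A) = (diag(1, 0), I) is already in quasi-Weierstrass form
# with T = S = I, so Adiff = diag(1, 0) and the consistency space is span(e1).
Adiff = np.diag([1.0, 0.0])

x0 = np.array([2.0, 0.0])          # consistent initial value (x2 = 0)
t = 0.7
x_t = expm(Adiff * t) @ x0         # flow formula x(t) = e^{Adiff t} x(0)
# The classical solution of E x' = A x is x(t) = (x1(0) e^t, 0).
```

The second component stays at zero, reflecting the algebraic constraint, while the first evolves as the scalar ODE $\dot x_1 = x_1$.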
\begin{Remark}
One may wonder why we replace the output matrix $C$ by $C^\text{\normalfont diff}$. As a motivation, consider a simple DAE with $E = \begin{smallbmatrix} 1 & 0 \\ 0 & 0\end{smallbmatrix}$, $A = I_{2\times 2}$, and $C = [0\ 1]$. For this system, we have $A^\text{\normalfont diff} = \begin{smallbmatrix} 1 & 0\\0 & 0\end{smallbmatrix}$, and $C^\text{\normalfont diff} = [0 \ 0]$.
Hence, with $C^\text{\normalfont diff}$, it is obvious that we cannot deduce any information about the state from the continuous flow and its corresponding output $y$. However, when using the original output equation, the corresponding unobservable space of the ODE is not the whole space, i.e., some parts of the state seem recoverable from the output. Although it is true that we can recover $x_2$ from the output, this information is vacuous because in the DAE, $x_2$ is always zero due to the algebraic constraint. Indeed, the ability to recover $x_2$ does not depend on the actual output matrix $C$.
\end{Remark}
\begin{Lemma}[{\citep[Cor.~5]{TanwTren10}}]\label{lem:impulses}
Consider the ITP \eqref{eq:ITP} with regular matrix pair $(E,A)$ and the corresponding $E^\text{\normalfont imp}$ matrix from Definition~\ref{def:proj}.
For the unique solution $x\in(\D_{\pw\cC^\infty})^n$ of \eqref{eq:ITP}, it holds that
\begin{equation}\label{eq:impSol}
x[0] = -\sum_{i=0}^{n-2}(E^\text{\normalfont imp})^{i+1} x(0^-) \delta_0^{(i)},
\end{equation}
where $\delta_0^{(i)}$ denotes the $i$-th (distributional) derivative of the Dirac impulse $\delta_0$ at $t=0$.
\end{Lemma}
\section{Determinability Conditions}\label{sec:obsCond}
To address the problem of state estimation, we first need to study the appropriate notion of observability for switched DAEs.
Several different notions of observability can be defined in the context of switched systems as proposed in a survey chapter \citep{PetrTanw15}.
For the purpose of this paper, it suffices to recall that our approach towards studying observability has three distinctive features compared to the linear time-invariant ODE systems.
\emph{Final-state observability:}
For nonswitched dynamical systems described by linear ODEs, observability relates to recovering the unknown initial value of the state using the input and output measurements.
For switched systems, the presence of non-invertible state reset maps (which is the case for switched DAEs) means that the backward flow in time may not be uniquely defined. For this reason, recovering the current state does not imply recovering the initial state (the converse, however, always holds).
For asymptotic state estimation, it then only makes sense to talk about observability of the current-state, or final-state value at the end of an interval.
This concept is closely related to the notion of final-state observability defined in \citep[Chapter 6]{Sont98a}.
\emph{Large-time observability:}
Typically, for continuous LTI systems, the interval over which the inputs and outputs are measured to recover the state can be arbitrarily small. However, for switched systems, the observation over larger intervals and the change in dynamics due to switching allow us to extract additional information about the state.
This motivates us to study the problem of observability without requiring observability of the individual subsystems in the classical framework, and to devise a framework that combines information from multiple modes to derive relaxed conditions.
The concept of large-time observability has also been found useful in the study of nonlinear ODEs \citep{HespLibe05}.
\emph{Algebraic structure:}
The state of a system comprising switched DAEs is constrained by certain algebraic equations in the model description. This is taken into account by the notions of R-observability or behavioral observability for unswitched DAEs (c.f.\ the recent survey \cite{BergReis16a}).
For the model-based state estimation, which is being studied in this paper, computing consistency spaces for individual subsystems from the algebraic equations provides additional information about the state.
Moreover, one may observe the Dirac impulses in the output, which also result due to different algebraic equations allowed in the system class \eqref{eq:sysLin}, and these impulsive measurements prove useful in state estimation. This is related to the notion of impulse observability or observability at infinity (c.f.\ \cite{BergReis16a}), but here we only extract some partial information without assuming that the subsystems are impulse observable.
In the light of the above discussion, we now introduce the notion of {\em determinability}, that combines the properties of large-time and final-state observability. It is formally defined as follows:
\begin{Definition}[Determinability] \label{defn:detObs}
The switched DAE \eqref{eq:sysLin} is called $(s,t]$-\emph{determinable} for $t_0\leq s < t$ if for every pair of local solutions $(x,u,y), (\bar{x},\bar{u},\bar{y})$ of \eqref{eq:sysLin} on $(s,\infty)$ with $u\equiv \bar{u},$ the following implication holds
\begin{equation}\label{eq:det_def}
y_{(s,t]}=\bar{y}_{(s,t]}\ \Longrightarrow\ x_{(t,\infty)} = \bar{x}_{(t,\infty)}.
\end{equation}
\end{Definition}
Hence, for a fixed switching signal $\sigma$ and given input, the system \eqref{eq:sysLin} is considered determinable over an interval if the output measurements over that interval allow us to reconstruct the value of the state in the future.
Note that the left expression of the implication \eqref{eq:det_def} is equivalent to the conditions $y_{(s,t)}=\bar{y}_{(s,t)}$ and $y[t] = \bar{y}[t]$; in particular the knowledge of the distributional output $y[t]$ will play a major role in the forthcoming observer construction. Furthermore, the right expression of the implication \eqref{eq:det_def} is, due to the regularity assumption, equivalent to the condition $x(t^+) = \bar{x}(t^+)$.
\begin{Remark}
In Definition~\ref{defn:detObs}, we only consider solutions that satisfy \eqref{eq:sysLin} over the interval $(s,\infty)$ and such solutions are not required to satisfy the DAE prior to time $s$.
The reason is that, to characterize $(s,t]$-determinability, we are interested in computing the set of indistinguishable states at time $t^+$ using \emph{only} the information from the input and output measurements over the interval $(s,t]$.
To obtain such a characterization it is essential to allow $x(s^-)$ to be arbitrary;
otherwise, if $x(s^-)$ is constrained as a solution of a DAE, then that information could possibly reduce the set of states reachable at time $t^+$, see Remark~\ref{rem:local_sol}.
In other words, the constrained past would influence the computation of the set of indistinguishable states which is not desired for our purposes.
\end{Remark}
Based on \citep[Proposition~9]{TanwTren12}, we can restrict our attention to zero-determinability.
Although there is a slight difference in the definition used in this paper, and the one given in \citep{TanwTren12}, the proof of the following result goes through with the same arguments, and is thus not repeated here.
\begin{Proposition}[{\citep[Proposition~9]{TanwTren12}}]\label{prop:fwdObsZero}
The switched DAE \eqref{eq:sysLin} is $(s,t]$-determinable if, and only if, the following implication holds for any local solution $(x,u,y)$ of \eqref{eq:sysLin} on $(s,\infty)$ with $u\equiv 0$:
\[
y_{(s,t]} \equiv 0 \quad \Longrightarrow \quad x_{(t,\infty)} = 0.
\]
\end{Proposition}
Thus, in studying whether the state can be completely determined from the knowledge of external measurements, one can ignore the role of inputs and set them to zero.
In essence, for studying determinability conditions, we can reduce our attention to the following system class:
\begin{equation}\label{eq:sysHomLin}
\begin{aligned}
E_\sigma \dot e &= A_\sigma e \\
y^e & = C_\sigma e.
\end{aligned}
\end{equation}
The reason for changing the notation for the state variable from $x$ to $e$ in the homogeneous system \eqref{eq:sysHomLin} is that later, in the observer design, we will be applying our notion of determinability, and the arguments that follow, to a certain homogeneous system of error dynamics. Hence, it is helpful to work with this notation at this point.
The objective now is to compute the set of indistinguishable states for \eqref{eq:sysHomLin} in the sense of Definition~\ref{defn:detObs}.
To do so, we first study the single switch case where we show how the structural decomposition in quasi-Weierstra\ss\ form \eqref{eq:QWF} and the result of Lemmas~\ref{lem:consProj}, \ref{lem:diffProj} and \ref{lem:impulses} allow us to extract the information about the state from the output measured on one switching interval.
When studying the case of multiple switches in Section~\ref{sec:multObs}, we describe how this information is then accumulated at a single time instant to arrive at a characterization of determinability.
\subsection{Observable Information from Single Switch}
Consider the homogeneous switched DAE \eqref{eq:sysHomLin} on $(t_0,\infty)$ and let $t_1>t_0$ be the first switching instant after $t_0$. Let the active subsystem over the interval $(t_0,t_1)$ be denoted by $(E_0,A_0,C_0)$ and assume that the active mode after that is $(E_1,A_1,C_1)$.
We are interested in knowing what information can be deduced about $e(t_1^+)$ using the knowledge of the output $y^e$ of \eqref{eq:sysHomLin} measured on $(t_0,t_1]$ and the system matrices. Invoking Lemma~\ref{lem:consProj} we know that $e(t_1^+)=\Pi_1 e(t_1^-)$ and it suffices therefore to focus on $e(t_1^-)$ in the following.
\emph{Consistency space:} Independently of the observed output it holds that $e(t_1^-)\in\mathfrak{C}_0$, where $\mathfrak{C}_0:=\mathfrak{C}_{(E_0,A_0)}$ denotes the consistency space of the DAE corresponding to the matrix pair $(E_0,A_0)$.
\emph{Differential unobservable space:} If $y^e_{(t_0,t_1)}\equiv 0$ then $(y^e)^{(i)}(t_1^-)=0$ for all $i\in\mathbb{N}$, hence, invoking Lemma~\ref{lem:diffProj}, we have $e(t_1^-)\in\ker O_0^\text{\normalfont diff}$, where $\ker O_0^\text{\normalfont diff}$ denotes the unobservable space of the ODE $\dot{e}=A_0^\text{\normalfont diff} e$, $y^e=C_0^\text{\normalfont diff} e$, i.e.\footnote{By $[M_1/M_2/\dots/M_k]$, we denote the matrix which is obtained by stacking the matrices $M_1,M_2,\dots,M_k$ (with the same number of columns) over each other.}
\begin{equation}\label{eq:observabilitymatrix}
O_0^\text{\normalfont diff}:=[C_0^\text{\normalfont diff} / C_0^\text{\normalfont diff} A_0^\text{\normalfont diff} / \cdots / C_0^\text{\normalfont diff} (A_0^\text{\normalfont diff})^{n-1}].
\end{equation}
\emph{Impulse unobservable space:} Finally, due to Lemma~\ref{lem:impulses}, from the equality $0=y^e[t_1]=C_1 e[t_1]$ it follows that $e(t_1^-)\in\ker O_1^{\text{\normalfont imp}}$, where
\[
O_1^\text{\normalfont imp}:=[C_1 E_1^\text{\normalfont imp} / C_1 (E_1^\text{\normalfont imp})^2 / \cdots / C_1 (E_1^\text{\normalfont imp})^{n-1}].
\]
Altogether this provides the motivation to introduce the \emph{locally unobservable subspace}:
\begin{equation}\label{eq:W1}
\mathcal{W}_1 := \mathfrak{C}_0 \cap \ker O_0^\text{\normalfont diff} \cap \ker O_1^\text{\normalfont imp}.
\end{equation}
\begin{Proposition}\label{prop:fwdObsSing}
The following equality of sets holds:
\[
\mathcal{W}_1= \setdef{e(t_1^-)}{(e,y^e) \text{ solves \eqref{eq:sysHomLin} and }y^e_{(t_0,t_1]} = 0}.
\]
\end{Proposition}
The proof appears in \ref{app:proofs}.
As a consequence of this result, we have
\[
\Pi_1 \mathcal{W}_1 = \setdef{e(t_1^+)}{(e,y^e) \text{ solves \eqref{eq:sysHomLin} and }y^e_{(t_0,t_1]} = 0}
\]
and we call $\Pi_1 \mathcal{W}_1$ the locally undeterminable space.
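Since $\mathfrak{C}_0 = \im\Pi_0 = \ker(I-\Pi_0)$, the space $\mathcal{W}_1$ in \eqref{eq:W1} is the kernel of a single stacked matrix. A minimal numerical sketch (assuming numpy; the function names are ours, and the data reproduce $\mathcal{W}_1$ of the example in Section~\ref{sec:example}):

```python
import numpy as np

def obs_matrix(C, A):
    """Stacked observability matrix [C; CA; ...; C A^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

def imp_matrix(C, Eimp):
    """Stacked impulse matrix [C Eimp; C Eimp^2; ...; C Eimp^(n-1)]."""
    n = Eimp.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(Eimp, i) for i in range(1, n)])

def local_unobservable_space(Pi0, Adiff0, Cdiff0, C1, Eimp1, tol=1e-10):
    """Orthonormal basis of W_1 = im(Pi0) ∩ ker(O0_diff) ∩ ker(O1_imp).

    Uses im(Pi0) = ker(I - Pi0), valid since Pi0 is a projector.
    """
    n = Pi0.shape[0]
    M = np.vstack([np.eye(n) - Pi0,
                   obs_matrix(Cdiff0, Adiff0),
                   imp_matrix(C1, Eimp1)])
    _, s, Vh = np.linalg.svd(M)
    return Vh[int(np.sum(s > tol)):].T

# Modes p = 0, 1 of the illustrative example: mode 0 is an ODE (Pi0 = I,
# Adiff = A0, Cdiff = C0), mode 1 has C1 = 0 and E1_imp = 0.
A0 = np.array([[-1.0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0]])
C0 = np.array([[1.0, 0, 0, 0]])
W1 = local_unobservable_space(np.eye(4), A0, C0,
                              np.zeros((1, 4)), np.zeros((4, 4)))
```

For this data the computed basis spans $\{e_3,e_4\}$, matching the space $\mathcal{W}_1$ listed in the example.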
\subsection{Gathering Information over Multiple Switches}\label{sec:multObs}
We are now interested in combining information from different modes to characterize how much knowledge about the state can be recovered if we observe measurements over intervals involving multiple switches.
The discussion in the previous section, applied to the interval $(t_{p-1},t_p]$, $p \ge 1$, yields that the measurements over this interval allow us to deduce $e(t_p^-) \in \mathcal{W}_p$, where
\[
\mathcal{W}_p := \mathfrak{C}_{p-1}\cap\ker O_{p-1}^\text{\normalfont diff} \cap \ker O^{\text{\normalfont imp}}_{p}
\]
is the locally unobservable space at $t_p$. The objective is to gather the information from previous locally unobservable subspaces and deduce more knowledge about the value of the state at a given time. To do so, we introduce the following sequence of subspaces:
\begin{equation}\label{eq:DAESeqDet}
\begin{aligned}
\mathcal{Q}_q^{q+1} &:= \mathcal{W}_{q+1},\\
\mathcal{Q}_q^{p} & := \mathcal{W}_{p} \cap \mathtt{e}^{A^\text{\normalfont diff}_{p-1} \tau_{p-1}} \Pi_{p-1} \mathcal{Q}_q^{p-1}, \ p > q+1.\\
\end{aligned}
\end{equation}
The intuition behind this definition is as follows: Our aim is to let $\mathcal{Q}^p_q$ satisfy
\begin{equation}\label{eq:impFwdObsMult}
\boxed{\mathcal{Q}^p_q = \setdef{e(t_p^-)}{(e,y^e)=(e,0)\text{ solves \eqref{eq:sysHomLin} on }(t_q,t_p]},}
\end{equation}
in particular, $\Pi_p \mathcal{Q}_q^p$ is the \emph{$(t_q,t_p]$-undeterminability space} of \eqref{eq:sysHomLin} in the sense that it is the subspace of points $e(t_p^+)=\Pi_p e(t_p^-)$ which cannot be determined from the knowledge of $y^e$ on $(t_q,t_p]$.
Indeed we have the following result.
\begin{Theorem}[Determinability characterization]\label{thm:DAEDetMult}
Consider the homogeneous switched DAE \eqref{eq:sysHomLin} with corresponding consistency projectors $\Pi_p$ and subspaces $\mathcal{Q}_q^p$, $p>q$, recursively defined by \eqref{eq:DAESeqDet}. Then \eqref{eq:impFwdObsMult} holds for any $p>q\geq 0$. In particular, the switched DAE \eqref{eq:sysLin} is $(t_q,t_p]$-determinable, if and only if,
\begin{equation}\label{eq:CharacDetDAE}
\boxed{\Pi_p \mathcal{Q}_q^{p} = \{0 \}.}
\end{equation}
\end{Theorem}
The proof is given in \ref{app:proofs}. Note that the determinability characterization \eqref{eq:CharacDetDAE} is significantly weaker than the condition $\mathcal{Q}_q^p = \{0\}$ used in \citep{TanwTren13}. In fact, in the latter we aimed to recover $e(t_p^-)$, while for determinability it suffices to determine $e(t_p^+)=\Pi_p e(t_p^-)$ and any uncertainty within $\ker \Pi_p$ is irrelevant for determinability in the sense of Definition~\ref{defn:detObs}.
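Numerically, the recursion \eqref{eq:DAESeqDet} amounts to alternating a subspace intersection with a propagation by $\mathtt{e}^{A^\text{\normalfont diff}_{p-1}\tau_{p-1}}\Pi_{p-1}$. A minimal sketch (assuming numpy and scipy; subspaces are orthonormal basis matrices, and the data below reproduce $\mathcal{Q}_0^2$ of the example in Section~\ref{sec:example}):

```python
import numpy as np
from scipy.linalg import expm

def _ker(M, tol=1e-10):
    """Orthonormal basis of ker(M)."""
    _, s, Vh = np.linalg.svd(M)
    return Vh[int(np.sum(s > tol)):].T

def intersect(U, V):
    """Orthonormal basis of im(U) ∩ im(V), for orthonormal U, V."""
    n = U.shape[0]
    return _ker(np.vstack([np.eye(n) - U @ U.T, np.eye(n) - V @ V.T]))

def Q_space(Ws, flows, tol=1e-10):
    """Q_q^p from Ws = [W_{q+1}, ..., W_p] and the propagation maps
    flows[i] = expm(Adiff * tau) @ Pi of the intermediate modes."""
    Q = Ws[0]
    for W, F in zip(Ws[1:], flows):
        if Q.shape[1] == 0:
            return Q                       # already trivial
        U, s, _ = np.linalg.svd(F @ Q)     # re-orthonormalize the image
        Q = intersect(W, U[:, : int(np.sum(s > tol))])
    return Q

# Example data: W_1 = span(e3, e4), W_2 = span(e1, e2, e4);
# mode 1 is an ODE (Pi_1 = I, Adiff = A1) with dwell time tau_1 = pi/4.
A1 = np.array([[0.0, 0, 0, 0], [0, 0, 1, 0], [0, -1, 0, 0], [-1, -1, -1, -1]])
W1 = np.eye(4)[:, 2:]
W2 = np.eye(4)[:, [0, 1, 3]]
Q02 = Q_space([W1, W2], [expm(A1 * np.pi / 4)])   # expected: span(e4)
```

The computed $\mathcal{Q}_0^2$ is the span of $e_4$, matching the example below; since $\Pi_2 e_4 = 0$, condition \eqref{eq:CharacDetDAE} confirms determinability over $(t_0,t_2]$.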
\subsection{An illustrative example}\label{sec:example}
To illustrate the above theoretical results, consider an academic example given by \eqref{eq:sysLin} with a periodic switching signal $\sigma$. The first switching time is $t_0 = 0$ and, for $k\in\mathbb{N}$,
\[
\tau_{4k} = 1,\ \tau_{4k+1} = \pi/4,\ \tau_{4k+2} = 1,\ \tau_{4k+3} = 1.
\]
For $p=0,1,2,3$, the subsystem which is active on the interval $[t_{4k+p},t_{4k+p+1})$ is described by $(E_{4k+p},A_{4k+p},B_{4k+p},C_{4k+p})$, $k \in \mathbb{N}$, defined as,
\[\begin{aligned}
p=0&:\ \left(I,\begin{smallbmatrix} -1 & 1 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{smallbmatrix}, \begin{smallbmatrix} 0\\ 1\\ 0\\ 0 \end{smallbmatrix}, \begin{smallbmatrix} 1 & 0 & 0 & 0 \end{smallbmatrix} \right),\\
p=1&:\ \left(I,\begin{smallbmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & -1 & 0 & 0\\ -1 & -1 & -1 & -1 \end{smallbmatrix}, \begin{smallbmatrix} 0\\ 0\\ 1\\ 0 \end{smallbmatrix}, \begin{smallbmatrix} 0 & 0 & 0 & 0 \end{smallbmatrix} \right),\\
p=2&:\ \left(\begin{smallbmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \end{smallbmatrix},\begin{smallbmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{smallbmatrix}, \begin{smallbmatrix} 0\\ 0\\ 0\\ 1 \end{smallbmatrix}, \begin{smallbmatrix} 0 & 0 & 0 & 1 \end{smallbmatrix} \right),\\
p=3&:\ \left(I,\begin{smallbmatrix} 0 & 0 & 0 & 0\\ 1 & -1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{smallbmatrix}, \begin{smallbmatrix} 1\\ 0\\ 0\\ 0 \end{smallbmatrix}, \begin{smallbmatrix} 0 & 1 & 0 & 0 \end{smallbmatrix} \right).\\
\end{aligned}
\]
The subsystems indexed by $p=0,1,3$ are actually ODEs and the one given by $p=2$ is already in quasi-Weierstra\ss\ form \eqref{eq:QWF}. Hence, $T_p=S_p=I$, for each $p$. The matrices appearing in Definition~\ref{def:proj}, for $p=0,1,3$, are
\[
\Pi_p = I,\ A^\text{\normalfont diff}_p = A,\ C^\text{\normalfont diff}_p = C,\ E^\text{\normalfont imp}_p = 0,
\]
and, for $p=2$, are
\[
\Pi_2 = \begin{smallbmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{smallbmatrix},\ A^\text{\normalfont diff}_2 = 0,\ C^\text{\normalfont diff}_2 = 0,\ E^\text{\normalfont imp}_2 = E.
\]
The corresponding locally unobservable spaces are
\[\begin{aligned}
\mathcal{W}_1 &= \im \begin{smallbmatrix} 0 & 0\\ 0 & 0\\ 1 & 0\\ 0 & 1 \end{smallbmatrix},&
\mathcal{W}_2 &= \im \begin{smallbmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1 \end{smallbmatrix},\\
\mathcal{W}_3 &= \im \begin{smallbmatrix} 1 & 0\\ 0 & 1\\ 0 & 0\\ 0 & 0 \end{smallbmatrix},&
\mathcal{W}_4 &= \im \begin{smallbmatrix} 0 & 0\\ 0 & 0\\ 1 & 0\\ 0 & 1 \end{smallbmatrix},
\end{aligned}
\]
where $\im M$ denotes the image of the linear map induced by the matrix $M$ (i.e., its column space).
The subspaces $\mathcal{Q}_q^p$ from \eqref{eq:DAESeqDet} satisfying the determinability condition $\Pi_p \mathcal{Q}_q^p = \{0\}$ are as follows:
\[
\mathcal{Q}_0^2 = \im \begin{smallbmatrix} 0 \\ 0 \\ 0 \\ 1 \end{smallbmatrix}, \quad \mathcal{Q}_2^4 = \{0\}.
\]
In particular, the switched DAE is determinable on each of the intervals $(t_{4k},t_{4k+2}]$, $(t_{4k+2},t_{4k+4}]$, $k\in\mathbb{N}$.
\section{Reconstructing the State}\label{sec:outMaps}
Based on the determinability notions studied in the previous section, we are now interested in knowing how the state of the homogeneous error system \eqref{eq:sysHomLin} can actually be reconstructed from the information obtained by (nonzero) output error measurements.
We approach the problem of state reconstruction in the same manner as we derived the determinability conditions in the previous section.
This approach involves two major steps:
\begin{enumerate}
\item Reconstruct the observable components at each switching time from the local (in time) information.
\item Combine the information from each subsystem to find the whole state.
\end{enumerate}
Once we have derived a method to theoretically compute the state of the error dynamics exactly, we can then take additional practical elements, such as estimation errors, into account.
\subsection{Observable Component at a Switching Instant}\label{sec:mapOne}
The first step in reconstructing the state of the error dynamics is to be able to write down a systematic way of extracting the observable information around each switching time from locally active subsystems.
Based on the discussion in the previous section, we know that for any solution $(e,y^e)$ of \eqref{eq:sysHomLin} with $y^e_{(t_0,t_1)} = 0$ and $y^e[t_1] = 0$, the state $e(t_1^-)$ is contained in the locally unobservable space $\mathcal{W}_1$ given by \eqref{eq:W1}. For a general solution $(e,y^e)$ with \emph{nonzero} $y^e$ the corresponding state vector $e(t_1^-)$ is not contained in $\mathcal{W}_1$.
\emph{In the following, we will decompose $e(t_1^-)$ according to the direct sum $\mathbb{R}^n=\mathcal{W}_1\oplus\mathcal{W}_1^\bot$.} For that, let us introduce orthonormal matrices $W_1$ and $Z_1$ such that $\im W_1 = \mathcal{W}_1$ and $\im Z_1 = \mathcal{W}_1^\bot$. Let $d_{w_1}$ denote the dimension of $\mathcal{W}_1$, then there exist unique elements $w_1 \in \mathbb{R}^{d_{w_1}}$, and $z_1 \in \mathbb{R}^{n-d_{w_1}}$ such that
\begin{equation}\label{eq:xZW}
e(t_1^-) = W_1 w_1 + Z_1 z_1.
\end{equation}
Because of orthonormality of the matrix $Z_1$, multiplying the above equation from left by $Z_1^\top$, we get $z_1 = Z_1^\top e(t_1^-)$.
Similarly, multiplication from left by $W_1^\top$ gives $w_1=W_1^\top e(t_1^-)$.
Note that $w_1$ is the unobservable component of $e(t_1^-)$ and $z_1$ is the component of $e(t_1^-)$ that can be recovered from measuring $y^e_{(t_0,t_1)}$ and $y^e[t_1]$. In the sequel, we construct a mapping that relates the vector $z_1$ to the measurements of $y_{(t_0,t_1]}^e$.
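As a numerical illustration of this orthogonal decomposition (a sketch only, using the space $\mathcal{W}_1$ from the running example and an arbitrary state vector):

```python
import numpy as np

# Sketch of the decomposition e(t_1^-) = W_1 w_1 + Z_1 z_1, where
# W_1 = im[e_3, e_4] is the locally unobservable space of the example.
# Z_1 spans the orthogonal complement; here it is read off from the SVD
# of W_1 (the left singular vectors beyond the rank of W_1).
W1 = np.array([[0., 0.], [0., 0.], [1., 0.], [0., 1.]])
U, s, _ = np.linalg.svd(W1, full_matrices=True)
Z1 = U[:, W1.shape[1]:]            # orthonormal basis of W_1-perp

e = np.array([1., 2., 3., 4.])     # an arbitrary state at t_1^-
z1 = Z1.T @ e                      # recoverable (observable) component
w1 = W1.T @ e                      # unobservable component
e_reconstructed = W1 @ w1 + Z1 @ z1
```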
We first make the observation that
\begin{align*}
\mathcal{W}_1^\bot & = {(\mathfrak{C}_0 \cap \ker O_0^\text{\normalfont diff} \cap \ker O_1^{\text{\normalfont imp}})}^\bot \\
& = \mathfrak{C}_0^\bot + \im\left({O^\text{\normalfont diff}_0}^\top\right) + \im\left({O_{1}^{\text{\normalfont imp}}}^\top\right),
\end{align*}
that is, $\mathcal{W}_1^\bot$ is a sum of three subspaces. Since $z_1$ corresponds to the projection of $e(t_1^-)$ onto the subspace $\mathcal{W}_1^\bot$, we further decompose the vector $z_1$ along each of the three constituent subspaces of $\mathcal{W}_1^\bot$, which are subsequently denoted by $z_1^\text{\normalfont cons}$, $z_1^\text{\normalfont diff}$, and $z_1^\text{\normalfont imp}$.
More formally, we let $Z_0^\text{\normalfont cons}$, $Z_0^\text{\normalfont diff}$, and $Z_1^\text{\normalfont imp}$ be matrices with orthonormal columns such that
\[\begin{aligned}
& \im Z_0^\text{\normalfont cons} = \mathfrak{C}_0^\bot, & \quad & z_1^\text{\normalfont cons} := {Z_0^{\text{\normalfont cons}}}^\top e(t_1^-),\\
& \im Z_0^\text{\normalfont diff} = \im\left({O^\text{\normalfont diff}_0}^\top\right), & \quad & z_1^\text{\normalfont diff} := {Z_0^{\text{\normalfont diff}}}^\top e(t_1^-), \\
& \im Z_1^\text{\normalfont imp} = \im\left({O^\text{\normalfont imp}_1}^\top\right), & \quad & z_1^\text{\normalfont imp} := {Z_1^{\text{\normalfont imp}}}^\top e(t_1^-).
\end{aligned}
\]
Note that the images of the matrices $Z_0^\text{\normalfont cons}$, $Z_0^\text{\normalfont diff}$ and $Z_1^\text{\normalfont imp}$ might intersect non-trivially with each other. In this case, some part of the unknown error $e(t_1^-)$ can be determined from the consistency or classically differentiable part as well as from the impulsive information. From a mathematical point of view this redundancy can be eliminated by choosing a full column rank matrix $U_1$ such that
\begin{equation}\label{eq:zbardeff}
[ Z_0^\text{\normalfont cons} \, , \ Z_0^\text{\normalfont diff} \, , \ Z_1^\text{\normalfont imp} ]\, U_1 = Z_1,
\end{equation}
and for brevity we let $\overline{Z}_1:=[ Z_0^\text{\normalfont cons} \, , \ Z_0^\text{\normalfont diff} \, , \ Z_1^\text{\normalfont imp} ]$.
We thus obtain,
\begin{equation}\label{eq:z1Comp}
z_1 = U_1^\top \overline{Z}_1^\top e(t_1^-) = U_1^\top \begin{pmatrix} z_1^\text{\normalfont cons} / z_1^\text{\normalfont diff} / z_1^\text{\normalfont imp} \end{pmatrix}.
\end{equation}
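Numerically, a compression matrix $U_1$ satisfying \eqref{eq:zbardeff} can be obtained by least squares, since $\im Z_1 \subseteq \im \overline{Z}_1$ by construction. A small sketch with hypothetical blocks (not taken from the running example):

```python
import numpy as np

# Removing the redundancy between consistency, differentiable and
# impulsive parts via a compression matrix U_1 with
# [Z0_cons, Z0_diff, Z1_imp] @ U_1 = Z_1.  Illustrative data only.
Z0_cons = np.array([[0.], [0.], [0.], [1.]])
Z0_diff = np.array([[1.], [0.], [0.], [0.]])
Z1_imp  = np.array([[1.], [0.], [0.], [0.]])   # redundant with Z0_diff
Zbar_1 = np.hstack([Z0_cons, Z0_diff, Z1_imp])

Z_1 = np.array([[1., 0.], [0., 0.], [0., 0.], [0., 1.]])  # basis of W_1-perp

# Any least-squares solution of Zbar_1 @ U_1 = Z_1 is exact here,
# because im Z_1 is contained in im Zbar_1 by construction.
U_1, *_ = np.linalg.lstsq(Zbar_1, Z_1, rcond=None)
```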
We now describe how each component of the vector $z_1$ can be calculated.
\emph{The consistency information $z_1^\text{\normalfont cons}$:}
By definition, it holds that ${Z_0^\text{\normalfont cons}}^\top \mathfrak{C}_0 = \{0\}$ and hence $z_1^\text{\normalfont cons} = {Z_0^{\text{\normalfont cons}}}^\top e(t_1^-) = 0$, because any solution of the homogeneous DAE \eqref{eq:sysHomLin} evolves within the consistency space $\mathfrak{C}_0$ on the interval $(t_0,t_1)$.
\emph{Mapping for the differentiable part $z_1^\text{\normalfont diff}$:} In order to define $z_1^\text{\normalfont diff} \in \mathbb{R}^{r_0}$, where $r_0 = \rank O_0^\text{\normalfont diff}$, we first introduce the function $\mathbf{z}_0^\text{\normalfont diff}:(t_0,t_1) \rightarrow \mathbb{R}^{r_0}, t\mapsto {Z_0^\text{\normalfont diff}}^\top e(t)$, which represents the observable component of the subsystem $(E_0, A_0, C_0)$ that can be recovered from the smooth output measurements $y^e$ over the interval $(t_0,t_1)$. It follows from Lemma~\ref{lem:reduced_obs} in Appendix~\ref{app:lemmas} that the evolution of $\mathbf{z}_0^\text{\normalfont diff}$ is governed by an observable ODE
\begin{equation}\label{eq:zsys}
\begin{aligned}
\dot{\mathbf z}_0^\text{\normalfont diff} &= S_0^\text{\normalfont diff} \mathbf{z}_0^\text{\normalfont diff}, \\
y^e &= R_0^\text{\normalfont diff} {\mathbf z}_0^\text{\normalfont diff},
\end{aligned}
\end{equation}
where $S_0^\text{\normalfont diff}:={Z_0^\text{\normalfont diff}}^\top A_0^\text{\normalfont diff} Z_0^\text{\normalfont diff}$ and $R_0^\text{\normalfont diff}:=C_0^\text{\normalfont diff} Z_0^\text{\normalfont diff}$. Because of the observability of the pair $(S_0^\text{\normalfont diff}, R_0^\text{\normalfont diff})$ in \eqref{eq:zsys}, there exists a (linear) map $\mathcal{O}_{(t_0,t_1)}^\text{\normalfont diff}$ such that
\[
\mathbf z_0^\text{\normalfont diff} = \mathcal{O}_{(t_0,t_1)}^\text{\normalfont diff} (y^e_{(t_0,t_1)})
\]
and we set
\[
z_1^\text{\normalfont diff} = \mathbf z_0^\text{\normalfont diff}(t_1^-).
\]
From the vast literature that exists on observability of classical linear time-invariant systems, one can find various ways for representing and approximating the map $\mathcal{O}^\text{\normalfont diff}_{(t_0,t_1)}$. For our purposes, this choice is not essential.
We are just interested in knowing that there exist techniques which allow us to (approximately) recover $z_1^\text{\normalfont diff}$ and in the design of our estimators in Section~\ref{sec:obsDesign}, it will be specified what (approximation) properties are required.
\emph{Mapping for the impulsive part $z_1^\text{\normalfont imp}$:}
A particular characteristic of switched DAEs is that the differing algebraic constraints of the individual modes may lead to Dirac impulses in the solution trajectories at the switching times. If such impulses are observed in the output, then this information can be used to recover a certain part of the state $e(t_1^-)$. The impulsive part of the output at switching time $t_1$ can be represented as
\[
y^e[t_1] = \sum_{i=0}^{n-2} \eta_1^i \delta_{t_1}^{(i)},
\]
where due to Lemma~\ref{lem:impulses} the coefficients $\eta_1^i$ satisfy the relation $\boldsymbol{\eta}_1 = -O_{1}^{\text{\normalfont imp}} e(t_1^-)$, with $\boldsymbol{\eta}_1 := ({\eta_1^0}/\cdots/{\eta_1^{n-2}})\in\mathbb{R}^{(n-1) \mathtt{y}}$.
We now want to find a linear map $\mathcal{O}_{[t_1]}^\text{\normalfont imp}$ such that
$
z_1^\text{\normalfont imp} = \mathcal{O}_{[t_1]}^\text{\normalfont imp} (y^e[t_1]).
$
For that, we choose a matrix $U_1^\text{\normalfont imp}$ such that
\begin{equation}\label{eq:U1imp}
-{O_{1}^{\text{\normalfont imp}}}^\top U_1^\text{\normalfont imp} = Z_1^\text{\normalfont imp},
\end{equation}
then
\begin{equation}\label{eq:z1imp}
z_1^\text{\normalfont imp} = Z_1^{\text{\normalfont imp}^\top} e(t_1^-) = - U_1^{\text{\normalfont imp}^\top} O_{1}^{\text{\normalfont imp}} e(t_1^-) = {U_1^\text{\normalfont imp}}^\top \boldsymbol{\eta}_1,
\end{equation}
hence $\mathcal{O}_{[t_1]}^\text{\normalfont imp}$ is given by:
\[
y^e[t_1]= \sum_{i=0}^{n-2} \eta_1^i \delta_{t_1}^{(i)}\quad \mapsto\quad {U_1^\text{\normalfont imp}}^\top (\eta_1^0/\ldots/\eta_1^{n-2}).
\]
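The impulsive reconstruction can be sketched numerically as follows; the matrix $O_1^{\text{imp}}$ and the state below are hypothetical illustrative data, not taken from the running example:

```python
import numpy as np

# Sketch of the impulsive reconstruction map: solve
# -O1_imp.T @ U1_imp = Z1_imp for U1_imp (least squares), then recover
# z1_imp = U1_imp.T @ eta_1 from the measured impulse coefficients.
O1_imp = np.array([[-1., 0., 0., 0.],
                   [ 0., 0., 0., 0.]])       # hypothetical impulse matrix
Z1_imp = np.array([[1.], [0.], [0.], [0.]])  # basis of im(O1_imp.T)

U1_imp, *_ = np.linalg.lstsq(-O1_imp.T, Z1_imp, rcond=None)

e_minus = np.array([2., -1., 0.5, 3.])       # unknown state e(t_1^-)
eta_1 = -O1_imp @ e_minus                    # impulse coefficients in y^e[t_1]
z1_imp = U1_imp.T @ eta_1                    # reconstructed component
```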
\subsection{Construct Global Mapping from Local Mappings}\label{sec:mapComb}
In the previous subsection, we described how the output measured over an interval containing one switch can be combined with the data of the subsystems active on that interval to reconstruct the observable information at the switching time.
This can obviously be done at each switching time $t_k$, $k\in\mathbb{N}$, that is, by looking at the intervals $(t_{k-1},t_k]$ and measuring $y^e_{(t_{k-1},t_k)}$ and $y^e[t_k]$, one can repeat the procedure in Section~\ref{sec:mapOne} to compute $z_k$ where
\begin{equation}\label{eq:zpZp}
z_k = Z_k^\top e(t_k^-)
\end{equation}
and $Z_k$ is an orthonormal matrix with range space $\mathcal{W}_k^\bot$.
Because of our notion of determinability, the next step is to combine this local information obtained at each switching time to reconstruct the entire state of the system \eqref{eq:sysHomLin}.
More specifically, assuming that \eqref{eq:sysHomLin} is $(t_q,t_p]$-determinable for some $q,p\in \mathbb{N}$, we are interested in finding a linear map $\mathcal{O}_q^p$ such that
\[
e(t_p^+) = \Pi_p\mathcal{O}_q^p (\mathbf{z}_{q+1}^p),
\]
where $\mathbf{z}_{q+1}^p := (z_{q+1}/z_{q+2}/\ldots/z_p)$. We next construct this map $\mathcal{O}_q^p$.
A schematic representation of the development of this section is given in Figure~\ref{fig:combInfo}.
\begin{figure}[hbt]
\centering
\scalebox{0.9}{\begin{tikzpicture}
\draw (0,0) rectangle (7,5);
\draw[fill = red!20] (0,0) rectangle (2,5);
\draw (1,1.5) node [text width = 1.8cm, align = center, inner sep = 0] {\footnotesize Undet.\ states over $(t_q,t_{q+1}]$};
\draw (1,3) node [fill=green!20, rectangle, draw, align=center, text width = 2cm, minimum height = 0.5cm, inner sep = 0, text centered] (z1) {$z_{q+1}$};
\draw[fill = red!20] (2,0) rectangle (4,5);
\draw (3,2) node [text width = 1.8cm, align = center, inner sep = 0] {\footnotesize Undet.\ states over $(t_{q+1},t_{q+2}]$};
\draw (3,4) node [fill=green!20, rectangle, draw, align=center, text width = 2cm, minimum height = 0.5cm, inner sep = 0, text centered] (z2) {$z_{q+2}$};
\draw (4.5,2) node [rectangle, align=center, inner sep = 0, text width = 1cm, text centered] (dots) {$\dots$};
\draw[fill = red!20] (5,0) rectangle (7,5);
\draw (6,2) node [text width = 1.8cm, align = center, inner sep = 0] {\footnotesize Undet.\ states over $(t_{p-1},t_{p}]$};
\draw (6,0.5) node [fill=green!20, rectangle, draw, align=center, text width = 2cm, minimum height = 0.5cm, inner sep = 0, text centered] (zm) {$z_{p}$};
\draw (8.5,2.5) node [rotate=-90, fill= green!20, rectangle, draw, align=center, text width = 5cm, minimum height = 1cm, inner sep = 0, text centered] (xi1) {\small $e(t_p^+) = \Pi_p\mathcal{O}_q^p (z_{q+1}, z_{q+2}, \dots, z_p)$};
\draw[thick,->] (z1.east) -- +(6,0);
\draw[thick,->] (z2.east) -- +(4,0);
\draw[thick,->] (zm.east) -- +(1,0);
\draw[->] (0,-0.5) -- (9,-0.5) node [anchor=west] {$t$};
\draw (0,-0.4) -- (0,-0.6) node [anchor = north] {$t_q$};
\draw (2,-0.4) -- (2,-0.6) node [anchor = north] {$t_{q+1}$};
\draw (4,-0.4) -- (4,-0.6) node [anchor = north] {$t_{q+2}$};
\draw (5,-0.4) -- (5,-0.6) node [anchor = north] {$t_{p-1}$};
\draw (7,-0.4) -- (7,-0.6) node [anchor = north] {$t_{p}$};
\end{tikzpicture}}
\caption{Accumulating local information $z_k$, $k=q+1,q+2,\ldots,p$, at time instant $t_p$ to calculate $e(t_p^+)$.}
\label{fig:combInfo}
\end{figure}
For $k\in\mathbb{N}$ with $q < k \leq p $ let $P_{q}^{k}$ and $Q_{q}^{k}$ be matrices with orthonormal columns such that
\[
\im Q_q^k = \mathcal{Q}_{q}^{k}\quad\text{and}\quad \im P_q^k = (\mathcal{Q}_{q}^{k})^\bot,
\]
where $\mathcal{Q}_q^k$ is recursively defined by \eqref{eq:DAESeqDet} and is the subspace of all points $e(t_k^-)$ which cannot be determined from $y^e_{(t_q,t_k]}$. There exist unique vectors $\varphi_q^k$, and $\chi_q^k$ such that
\begin{equation}\label{eq:xPkQk}
e(t_{k}^-) = P_{q}^{k} \varphi_{q}^{k} + Q_{q}^{k} \chi_{q}^{k}.
\end{equation}
Because of orthonormality of the matrices $P_q^k$ and $Q_q^k$, we have that $\varphi_{q}^{k} = {P_{q}^{k}}^\top e(t_k^-)$ and $\chi_{q}^{k} = {Q_{q}^{k}}^\top e(t_{k}^-)$. In particular, if \eqref{eq:sysHomLin} is $(t_q,t_p]$-determinable, we have $\mathcal{Q}_q^p \subseteq \ker \Pi_p$ and therefore
\[
e(t_p^+) = \Pi_p P^p_q \varphi_q^p,
\]
i.e.\ the state $e(t_p^+)$ can be recovered if we are able to find an expression for $\varphi_q^k$, $k=q+1,q+2,\ldots,p$.
Note that $\varphi_q^{q+1} = z_{q+1}$ corresponds to the determinable information from the interval $(t_q,t_{q+1}]$, and we already discussed in Section~\ref{sec:mapOne} how it can be obtained. We now derive a recursive expression for $\varphi_q^k$, $k=q+2,q+3,\ldots,p$ in terms of $\varphi_q^{k-1}$.
For that we need to introduce a matrix $\Theta_q^{k-1}$ with orthonormal columns such that
\begin{equation}\label{eq:Thetaqk}
\im \Theta_q^{k-1} = \left(\mathtt{e}^{A_{k-1}^\text{\normalfont diff}\tau_{k-1}}\Pi_{k-1} \mathcal{Q}_{q}^{k-1}\right)^\bot.
\end{equation}
Note that then by definition
\[
\im P_q^k = {\mathcal{Q}_q^k}^\bot = \im [Z_k, \Theta_q^{k-1}],
\]
in particular there is a matrix $U_q^k$ such that
\[
P_q^k = [Z_k, \Theta_q^{k-1}] U_q^k.
\]
From
\begin{align}
e (t_k^-) & = \mathtt{e}^{A_{k-1}^\text{\normalfont diff} \tau_{k-1}} \Pi_{k-1} \ e(t_{k-1}^-)\notag \\
& = \mathtt{e}^{A_{k-1}^\text{\normalfont diff} \tau_{k-1}} \Pi_{k-1} \left( P_{q}^{k-1} \varphi_{q}^{k-1} + Q_{q}^{k-1} \chi_{q}^{k-1} \right), \label{eq:err2b}
\end{align}
together with
\[
{\Theta_{q}^{k-1}}^\top \mathtt{e}^{A_{k-1}^\text{\normalfont diff}\tau_{k-1}}\Pi_{k-1} Q_{q}^{k-1} = 0,
\]
we can conclude that
\begin{align}
\varphi_q^k &= {P_q^k}^\top e(t_k^-) = {U_q^k}^\top \begin{bmatrix} Z_k^\top \\ {\Theta_q^{k-1}}^\top \end{bmatrix} e(t_k^-)\notag\\
&= {U_q^k}^\top \begin{pmatrix} z_k \\ {\Theta_q^{k-1}}^\top \mathtt{e}^{A_{k-1}^\text{\normalfont diff} \tau_{k-1}} \Pi_{k-1} P_{q}^{k-1} \varphi_{q}^{k-1} \end{pmatrix},\label{eq:phiqk}
\end{align}
which is the desired recursion formula for $\varphi_q^k$ for $k=q+2,q+3,\ldots,p$ with ``initial value'' $\varphi_q^{q+1} = z_{q+1}$.
We have thus arrived at the following result:
\begin{Theorem}\label{thm:mapZs}
Consider the homogenous switched DAE \eqref{eq:sysHomLin} with corresponding $A^\text{\normalfont diff}_p,\Pi_p$ as in Definition~\ref{def:proj}, $\tau_p:=t_{p+1}-t_p$, $p\in\mathbb{N}$, and assume $(t_q,t_p]$-determinability for some $0\leq q < p$. Furthermore, consider for $k=q+2,q+3,\ldots,p$ the matrices $U_q^k$, $\Theta_q^{k-1}$, $P_q^{k-1}$ as above and let
\[\begin{aligned}
F_q^k &:= {U_q^k}^\top \begin{bmatrix} I \\ 0 \end{bmatrix},\quad\text{and}\quad F_q^{q+1}:=I,\\
G_q^k &:= {U_q^k}^\top \begin{bmatrix}0 \\ {\Theta_q^{k-1}}^\top \mathtt{e}^{A_{k-1}^\text{\normalfont diff} \tau_{k-1}} \Pi_{k-1} P_q^{k-1}\end{bmatrix}
\end{aligned}
\]
and with $\mathbf{z}_{q+1}^p=(z_{q+1}/z_{q+2}/\ldots/z_p)$ let\footnote{
For $q+1 \le k \le p-1$, the notation $\left(\prod_{j=k+1}^{p}G_q^{j} \right)$ denotes the product of matrices $G_q^p \, G_q^{p-1} \dots \, G_q^{k+1}$ (note the decreasing order).
By convention, when $p \le k$, the resulting product is set to identity.
}
\begin{equation}\label{eq:Oqp}
\mathcal{O}_q^p(\mathbf{z}_{q+1}^p) := P_q^p \sum_{k=q+1}^p \left(\prod_{j=k+1}^p G_q^j\right) F_q^k z_k.
\end{equation}
Then
\[
\boxed{e(t_{p}^+) = \Pi_p\mathcal{O}_q^{p}(\mathbf{z}_{q+1}^{p})},
\]
i.e., the linear map $\mathcal{O}_q^p$ describes how the state $e(t_p^+)$ can be recovered from the knowledge of locally observable parts $z_{q+1}$, $z_{q+2}$, \ldots, $z_p$, for which the construction was provided in Section~\ref{sec:mapOne}.
\end{Theorem}
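The map \eqref{eq:Oqp} is a finite recursion and straightforward to implement. The following sketch uses placeholder matrices whose shapes merely chain consistently (not data from the paper), with the empty product taken as the identity as in the footnote:

```python
import numpy as np

# Generic sketch of O_q^p(z) = P_q^p * sum_k (G_q^p ... G_q^{k+1}) F_q^k z_k.
# F holds [F_q^{q+1}, ..., F_q^p]; G holds [G_q^{q+2}, ..., G_q^p];
# zs holds [z_{q+1}, ..., z_p].  All matrices are hypothetical.
def O_q_p(P, F, G, zs):
    acc = np.zeros(P.shape[1])
    for k, z in enumerate(zs):
        term = F[k] @ z
        # apply G_q^{q+k+2}, ..., G_q^p (lowest index acts first,
        # matching the decreasing-order matrix product in the theorem)
        for i in range(k, len(G)):
            term = G[i] @ term
        acc = acc + term
    return P @ acc

# Tiny example: two switching times, identity F's, a single G = 2*I.
zs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
res = O_q_p(np.eye(2), [np.eye(2), np.eye(2)], [2.0 * np.eye(2)], zs)  # = [2, 1]
```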
\begin{Remark}[Dependence on switching signal]
While the reconstruction of the locally observable component $z_k$ at the switching time $t_k$ only depends on the two modes that are active prior and after the switch, i.e., $(E_{k-1},A_{k-1},C_{k-1})$ and $(E_k,A_k,C_k)$, the overall reconstruction of the state, as in Theorem \ref{thm:mapZs}, additionally depends on the duration $\tau_k=t_{k+1}-t_k$ of each mode, because the matrix $\Theta_q^{k-1}$ used in the construction depends on $\tau_{q+1}, \dots, \tau_{k-1}$. Hence the map $\mathcal{O}_q^p$ depends on $\tau_{q+1}, \tau_{q+2}, \ldots, \tau_p$ (but not on $\tau_q$).
\end{Remark}
\subsection{Example}
We revisit our example from Section~\ref{sec:example} and recall that it is $(t_0,t_2]$-determinable, so that the above derivation can be used to recover the state $e(t_2^+)$ of the homogeneous switched DAE \eqref{eq:sysHomLin} from the output $y^e$ on $(t_0,t_2]$. For the reconstruction via \eqref{eq:phiqk}, the following matrices are needed:
\[
P_0^1 = \begin{smallbmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{smallbmatrix},\
\Theta_0^1 = \begin{smallbmatrix} 0 & 1 \\ -\sqrt{2}/2 & 0 \\ \sqrt{2}/2 & 0 \\ 0 & 0\end{smallbmatrix},\
U_0^2 = \begin{smallbmatrix} 0 & 1 & 0 \\0 & -\sqrt{2} & 0 \\ 1 & 0 & 0 \end{smallbmatrix},
\]
\[
\mathtt{e}^{A^\text{\normalfont diff}_1\tau_1}\Pi_1 \approx \begin{smallbmatrix} 1 & 0 & 0 & 0 \\ 0 & \sqrt{2}/2 & \sqrt{2}/2 & 0 \\ 0 & -\sqrt{2}/2 & \sqrt{2}/2 & 0 \\ -0.544 & -0.251 & -\sqrt{2}/2 & 0.456 \end{smallbmatrix}.
\]
Furthermore, for estimating $z^\text{\normalfont diff}_1$ via \eqref{eq:zsys}, an observer for the following ODE has to be implemented:
\[
\dot{\mathbf{z}}_0^\text{\normalfont diff} = \begin{smallbmatrix} -1 & 0 \\ 0 & 0 \end{smallbmatrix} \mathbf{z}_0^\text{\normalfont diff}, \quad y^e = \begin{smallbmatrix} 1 & 0 \end{smallbmatrix} \mathbf{z}_0^\text{\normalfont diff}.
\]
Finally, for estimating $z^\text{\normalfont imp}_2$ via \eqref{eq:z1imp} from the impulses in the output, we use the compression matrix $U_2^\text{\normalfont imp} = [1\ 0\ 0\ 0]^\top$.
\section{Design of State Estimators}\label{sec:obsDesign}
In the previous section, we provided a method for reconstructing the state of the homogeneous error dynamics (without inputs) by constructing the mappings between the output error and the observable components of the individual subsystems.
In practice, these mappings are neither numerically robust nor always physically realizable, and we are thus interested in obtaining estimates of the state trajectories through numerically robust algorithms.
The purpose of this section is therefore to design an observer for the system class \eqref{eq:sysLin}, under the interval-wise determinability assumption introduced in Section~\ref{sec:obsCond}, which generates asymptotically convergent state estimates.
The underlying structure of these estimators is based on the result of Theorem~\ref{thm:mapZs} developed in Section~\ref{sec:outMaps}.
Our proposed observer for the system class~\eqref{eq:sysLin} is given by (see also Figure~\ref{fig:obsAll}):
\begin{subequations}\label{eq:obsSyn}
\begin{empheq}[box=\fbox]{align}
E_{\sigma} \dot{\widehat{x}}_p &= A_{\sigma} \widehat{x}_p + B_{\sigma} u, \text{ on } [t_p,t_{p+1}^+], \label{eq:obsSyna}\\
\widehat{x}_{p}(t_{p}^-) &= \widehat{x}_{p-1}(t_p^-) - \xi_p \label{eq:obsSynb}, \quad p\in\mathbb{N},
\end{empheq}
\end{subequations}
with arbitrary initial condition $\widehat{x}_0(t_0^-) \in \mathbb{R}^n$ and $[t_p,t_{p+1}^+]:=[t_p,t_{p+1}+\varepsilon)$ for some arbitrarily small $\varepsilon>0$. The desired estimate $\widehat x$ is defined as:
\[
\widehat{x}:=\sum_{p \in \mathbb{N}} (\widehat{x}_p)_{[t_{p},t_{p+1})}.
\]
It is seen that the observer consists of a system copy and, unlike classical methods where the continuous dynamics of the estimate are driven by an error injection term, {\em the observer \eqref{eq:obsSyn} updates the state estimate only at discrete switching instants by an error correction vector $\xi_p$}, which is determined by the difference between the system output $y=C_\sigma x$ and the system copy output $\widehat{y}=C_\sigma \widehat{x}$. To give an intuitive interpretation of how $\xi_p$ is calculated, note that, \emph{under the assumption} that for some $p \in \mathbb{N}$ the correction term $\xi_p$ satisfies
\[
\Pi_p \xi_p = \Pi_p(\widehat{x}_{p-1}(t_p^-) - x(t_p^-)),
\]
with $x$ being a solution of \eqref{eq:sysLin},
the equation~\eqref{eq:obsSynb} gives $\widehat{x}_p(t_p^+) = \Pi_p\widehat{x}_p(t_p^-) = \Pi_p x(t_p^-) = x (t_p^+)$, and from there onwards the system copy \eqref{eq:obsSyn} with $\xi_k=0$, $k>p$ will follow exactly the original system, at least in theory.
In reality, however, even after a perfect match at time $t_p$, the system copy will deviate again from the original system due to uncertainties, and we have to apply a correction $\xi_{\tilde{p}}$ at a later switching time $t_{\tilde{p}}$. It is not necessary (and may also not be possible) to apply this correction at every step. So our goal is to compute $\xi_p$ repeatedly, for ``sufficiently many'' $p \in \mathbb{N}$, such that $\Pi_p\xi_p$ approximates the state estimation error at time $t_p^+$ increasingly closely as $p$ gets large.
Since the growth of the error between the reset times can be upper bounded by the solution of a linear ODE, the resets (under the determinability assumption) allow us to make the estimation error at switching times arbitrarily small, which eventually results in convergence of $\widehat {x}(t)$ toward $x(t)$ as $t$ tends to infinity.
With this motivation, we introduce the state estimation error. Let $e_p := \widehat{x}_p - x$ denote the state estimation error on $[t_{p},t_{p+1})$ and $e:=\sum_p (e_p)_{[t_{p},t_{p+1})} = \widehat{x} - x$, then
\begin{subequations}\label{eq:errDyn}
\begin{align}
E_p \dot e_p &= A_p e_p,\quad\text{ on }[t_p,t_{p+1}), \label{eq:errDyna}\\
e_{p}(t_p^-) & = e_{p-1}(t_p^-) - \xi_{p}. \label{eq:errDynb}
\end{align}
\end{subequations}
Note that equations~\eqref{eq:obsSyna} and \eqref{eq:errDyna} are both to be interpreted in the sense of distributions; in particular, the impulsive parts $\widehat{x}_p[t_p]$ and $e_p[t_p]$ are uniquely determined by \eqref{eq:obsSyn} and \eqref{eq:errDyn}, respectively. However, the error dynamics \eqref{eq:errDyna} are homogeneous and there are no impulses between two switches. As a result, the solution of \eqref{eq:errDyna} for $t \in (t_{p},t_{p+1})$ is described as
\begin{equation}\label{eq:errFlowOneSwitch}
\begin{aligned}
e(t) = e_p(t) & = \mathtt{e}^{A_p^\text{\normalfont diff}(t-t_{p})} \Pi_p e_p(t_{p}^-) \\
& = \mathtt{e}^{A_p^\text{\normalfont diff}(t-t_{p})} \Pi_p \left(e(t_{p}^-) - \xi_p \right).
\end{aligned}
\end{equation}
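Between switches the error thus evolves by a plain matrix exponential acting on the projected, corrected state, which can be sketched as follows (with hypothetical mode data $A_p^{\text{diff}}$, $\Pi_p$, not from the running example):

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the intra-switch error flow:
# e(t) = exp(A_p^diff (t - t_p)) Pi_p (e(t_p^-) - xi_p).
A_diff = np.array([[-1., 0.], [0., -2.]])   # hypothetical mode data
Pi = np.array([[1., 0.], [0., 0.]])         # consistency projector of mode p
e_minus = np.array([3., 5.])                # error just before t_p
xi = np.array([1., 0.])                     # correction applied at t_p

def e_of_t(dt):
    """Error dt time units after the switch at t_p."""
    return expm(A_diff * dt) @ Pi @ (e_minus - xi)
```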
The output estimation error is
\[
y^e = C_p \widehat{x}_p - y
\]
on each open interval $(t_{p},t_{p+1})$. The impulsive error $y^e[t_p]$ at the switching times is obtained by the difference between $y[t_p]$ and the output impulse resulting from \eqref{eq:obsSyna} \emph{without} taking the correction $\xi_p$ into account, that is,
\begin{equation}\label{eq:impulse_error}
y^e[t_p] := C_p \widehat{x}_{p-1}[t_p] - y[t_p].
\end{equation}
Note that, in general,
$
y^e[t_p] \neq C_{p} \widehat{x}[t_p] - y[t_p],
$
because $\widehat{x}[t_p] = \widehat{x}_p[t_p] \neq \widehat{x}_{p-1}[t_p]$. In fact, the knowledge of $y^e[t_p]$ is based on the knowledge of $\widehat{x}_{p-1}$ which will be used to determine $\xi_p$, which in turn determines $C_p \widehat{x}_p[t_p]$.
Furthermore, note that $y[t_p]$ as well as $\widehat{x}_{p-1}[t_p]$ depends on $u[t_p]$ and the derivatives $u^{(i)}(t_p^+)$ immediately after time $t_p$ (see \citep[Thm.~6.5.1]{Tren12}). This will render the observer slightly acausal, as the information immediately after $t_p$ is used to set the value of $\widehat{x}_p(t_p^-)$.
For DAEs, this is not surprising because the transfer functions are not necessarily proper and hence involve differentiation.
However, this is not a serious problem from an implementation point of view because the system copy \eqref{eq:obsSyn} is not required to run synchronously with the original system.
Another way to overcome this issue is to assume that the input is smooth at the switching instants (i.e.\ jumps in the inhomogeneity are induced by a switching $B$-matrix), then $u^{(i)}(t_p^+)=u^{(i)}(t_p^-)$.
Now that we have described the homogeneous (input-free) dynamics for the state estimation error, and the corresponding output equation, we are interested in computing the vector $\xi_p$ such that $\Pi_p \xi_p$ estimates $e_{p-1}(t_p^+) = \Pi_p e_{p-1}(t_p^-)$.
We make use of the analysis carried out in Section~\ref{sec:outMaps} to implement the following basic idea in calculating $\xi_p$:
\emph{Step 1:} Identify the observable component $z_p$ of the individual subsystems for the error dynamics \eqref{eq:errDyn}. For subsystem $p\in \mathbb{N}$, we let $Z_p$ be an orthonormal matrix with range space $\mathcal{W}_p^\bot$, in particular, $z_p = Z_p^\top e(t_p^-)$.
\emph{Step 2:} Under the assumption that for $p \in \mathbb{N}$, there exists a nonnegative integer $q < p$ such that $(t_q, t_p]$-determinability holds, we derive a linear function $\Xi_{q}^{p-1}(\cdot)$ such that any solution $e$ of the error dynamics \eqref{eq:errDyn} satisfies
\begin{equation}\label{eq:preXi}
\Pi_p e (t_p^-) = \Pi_p \left(\mathcal{O}_{q}^p (\mathbf{z}_{q+1}^p) - \Xi_{q}^{p-1}(\boldsymbol{\xi}_{q+1}^{p-1})\right)
\end{equation}
where
\[
\boldsymbol{\xi}_{q+1}^{p-1} = (\xi_{q+1},\xi_{q+2},\ldots,\xi_{p-1})
\]
and $\mathcal{O}_q^p$ is given by \eqref{eq:Oqp}.
\emph{Step 3:} Finally, the estimates $\widehat z_k$ for the observable components $z_k$ are constructed at times $t_k^-$, $q+1 \leq k \le p$, and the error correction vector $\xi_p$ is defined as
\begin{equation}\label{eq:preErrC}
\xi_p := \mathcal{O}_{q}^p(\widehat{\mathbf{z}}_{q+1}^p) - \Xi_{q}^{p-1}(\boldsymbol{\xi}_{q+1}^{p-1}),
\end{equation}
where
$
\widehat{\mathbf{z}}_{q+1}^p = (\widehat{z}_{q+1},\widehat{z}_{q+2},\ldots,\widehat{z}_p).
$
The ``error correction'' block is basically a compact representation of the structure given in Figure~\ref{fig:combInfo}, where additionally the effect of the state resets $\xi_p$ are taken into account.
In Section~\ref{sec:outMaps}, we have shown how to compute the map $\mathcal{O}_{q}^p$ using the output measurements for a homogeneous system. For computing the error correction vector $\xi_p$, we use the same operator but applied to the estimates of the observable components.
The computation of the estimates $\widehat{z}_p$ of the observable components $z_p$, $p \in \mathbb{N}$, will be discussed in Section~\ref{sec:zEst}.
Before that, we close this section by deriving the second missing component for computing $\xi_p$ in \eqref{eq:preErrC}, namely the map $\Xi_{q}^{p-1}$, which appears due to the state resets at previous switching instants \eqref{eq:errDynb}\footnote{This analysis can be skipped if the state resets are only applied at the end of a determinable interval $(t_q,t_p]$. However, in general we allow a reset of the estimator state at any switching time.}.
From \eqref{eq:preXi}, it is clear that the computation of $\Xi_{q}^{p-1}$ requires us to write $\Pi_p e(t_p^-)$ in terms of $z_{q+1}, \dots,z_{p}$, and $\xi_{q+1}, \dots, \xi_{p-1}$.
Analogously to the derivation leading to Theorem~\ref{thm:mapZs}, there exist unique vectors $\psi_q^k$ and $\chi_q^k$ such that, cf.\ \eqref{eq:xPkQk},
\[
e(t_{k}^-) = P_{q}^{k} \psi_{q}^{k} + Q_{q}^{k} \chi_{q}^{k},\quad k=q+1,q+2,\ldots,p,
\]
where, as before, $P_q^k$ and $Q_q^k$ are orthonormal matrices whose columns span $(\mathcal{Q}_q^k)^\bot$ and $\mathcal{Q}_q^k$, respectively, and $\mathcal{Q}_q^k$ is given by \eqref{eq:DAESeqDet}. Invoking
\[
e(t_k^-) = \mathtt{e}^{A_{k-1}^\text{\normalfont diff}\tau_{k-1}} \Pi_{k-1} \left(e(t_{k-1}^-) - \xi_{k-1}\right),
\]
we arrive at the following recursion formula, cf.\ \eqref{eq:phiqk},
\[
\psi_q^k = {U_q^k}^\top \begin{pmatrix} z_k \\ {\Theta_q^{k-1}}^\top \mathtt{e}^{A_{k-1}^\text{\normalfont diff} \tau_{k-1}} \Pi_{k-1}\left(P_q^{k-1}\psi_q^{k-1} - \xi_{k-1}\right)\end{pmatrix}
\]
with ``initial condition'': $\psi_q^{q+1} = z_{q+1}$. We thus obtain the following generalization of Theorem~\ref{thm:mapZs}:
\begin{Proposition}\label{prop:mapCompRecur}
Consider the error system \eqref{eq:errDyn} and assume $(t_q,t_p]$-determinability of \eqref{eq:sysLin} for some $0\leq q < p$. Using the notation from Theorem~\ref{thm:mapZs} let
\[
H_q^k := {U_q^k}^\top \begin{bmatrix} 0 \\ {\Theta_q^{k-1}}^\top \mathtt{e}^{A_{k-1}^\text{\normalfont diff} \tau_{k-1}} \Pi_{k-1} \end{bmatrix}
\]
and
\begin{equation}\label{eq:Xiqp-1}
\Xi_q^{p-1}(\boldsymbol{\xi}_{q+1}^{p-1}) := P_q^p \sum_{k=q+1}^{p-1} \left(\prod_{j=k+2}^{p} G_q^{j}\right) H_q^{k+1}\, \xi_{k},
\end{equation}
then
\[
\Pi_p e(t_p^-) = \Pi_p \left(\mathcal{O}_q^p(\mathbf{z}_{q+1}^p) - \Xi_q^{p-1}(\boldsymbol{\xi}_{q+1}^{p-1})\right).
\]
\end{Proposition}
\subsection{Observable Components and their Estimates}\label{sec:zEst}
Before presenting the main convergence result, the last ingredient for computing state resets $\xi_p$ for the estimator are the estimates of the observable components identified at each switching time.
In Section~\ref{sec:mapOne}, it was shown that these observable components $z_k$, $k\in\mathbb{N}$, comprise three subcomponents: $z_k^\text{\normalfont cons}$, $z_k^\text{\normalfont diff}$, and $z_k^\text{\normalfont imp}$.
Because of the algebraic constraints, we set $z_k^\text{\normalfont cons} = 0$, but for $z_k^\text{\normalfont diff}$ and $z_k^\text{\normalfont imp}$, we compute some appropriate estimates, denoted $\widehat{z}_k^\text{\normalfont diff}$ and $\widehat{z}_k^\text{\normalfont imp}$, respectively.
Using these estimates of the individual components, we let $\widehat{z}_k$ denote the estimate of $z_k$, and define it as
\[
\widehat{z}_k:= U_k^\top(z_k^\text{\normalfont cons}/\widehat{z}_k^\text{\normalfont diff} / \widehat{z}_k^\text{\normalfont imp})
\]
with suitable compression matrix $U_k$ defined analogously as in \eqref{eq:zbardeff}. In the next two subsections, we explain how the estimates $\widehat{z}_k^\text{\normalfont diff}$ and $\widehat{z}_k^\text{\normalfont imp}$ must be calculated for the convergence result proved in Section~\ref{sec:obsConv}.
\subsubsection{Estimate the smooth part $z_k^\text{\normalfont diff}$}
Based on the discussion in Section~\ref{sec:mapOne}, it is possible to introduce a function $\mathbf{z}_{k-1}^\text{\normalfont diff}(\cdot) = {Z_{k-1}^\text{\normalfont diff}}^\top e(\cdot)$ on $(t_{k-1},t_k)$, $k \in \mathbb{N}$, and define an operator $\mathcal{O}_{(t_{k-1},t_{k})}^\text{\normalfont diff}$ such that $\mathbf{z}_{k-1}^\text{\normalfont diff} = \mathcal{O}_{(t_{k-1},t_{k})}^\text{\normalfont diff}(y^e_{(t_{k-1},t_k)})$ denotes the component of the state that can be recovered on $(t_{k-1},t_k)$ from the smooth part of the output.
We are interested in computing the estimate of the vector $z_{k}^\text{\normalfont diff} = \mathbf{z}_{k-1}^\text{\normalfont diff}(t_{k}^-)$.
The property that we require from the estimator is the following one:
\begin{eprop}
\item \label{prop:diff} For a given $\varepsilon_k>0$, $k\in\mathbb{N}$, there is an estimator $\widehat{\mathcal{O}}_k^\text{\normalfont diff}$ such that
\[
\widehat{z}_k^\text{\normalfont diff} = \widehat{\mathcal{O}}_k^\text{\normalfont diff} (y^e_{(t_{k-1},t_k)})
\]
has the property that
\[
\vert z_k^\text{\normalfont diff} - \widehat z_k^\text{\normalfont diff} \vert \le \varepsilon_k |\mathbf z_{k-1}^\text{\normalfont diff}(t_{k-1}^+)|,
\]
where $|\cdot|$ denotes the Euclidean norm.
\end{eprop}
In the literature, one can find many estimation techniques for linear systems of the form \eqref{eq:zsys}.
One example of an estimator which satisfies this property is the classical \emph{Luenberger observer}.
Indeed, for $\mathbf z_{k-1}^\text{\normalfont diff}$ satisfying the equation \eqref{eq:zsys} (defined on $(t_{k-1},t_k)$), such an estimator is of the following form:
\[
\dot {\widehat {\mathbf z}}_{k-1}^\text{\normalfont diff} = (S_{k-1}^\text{\normalfont diff} - L_{k-1} R_{k-1}^\text{\normalfont diff}) \widehat{\mathbf{z}}_{k-1}^\text{\normalfont diff} + L_{k-1} y^e\ \text{on } (t_{k-1}, t_{k}),
\]
and we choose $\widehat {\mathbf z}_{k-1}^\text{\normalfont diff} (t_{k-1}^+) = 0$.
Because $(S_{k-1}^\text{\normalfont diff}, R_{k-1}^\text{\normalfont diff})$ is an observable pair by construction, it follows from the {\em squashing lemma} \citep[Lemma~1]{PaitMors94} that for a given $\varepsilon_k > 0$, and $\tau_{k-1} > 0$, there exists a matrix $L_{k-1}$ such that
$
\|\mathtt{e}^{(S_{k-1}^\text{\normalfont diff}- L_{k-1} R_{k-1}^\text{\normalfont diff})\tau_{k-1}} \| \le \varepsilon_{k},
$
where $\|\cdot\|$ denotes the induced matrix norm with respect to the Euclidean norm $|\cdot|$.
By setting $\widehat{z}_{k}^\text{\normalfont diff} = \widehat{\mathbf{z}}_{k-1}^\text{\normalfont diff}(t_{k}^-)$, and looking at the dynamics for $\mathbf{z}_{k-1}^\text{\normalfont diff} - \widehat{\mathbf{z}}_{k-1}^\text{\normalfont diff}$, it follows that the desired estimate $\vert z_{k}^\text{\normalfont diff} - \widehat{z}_{k}^\text{\normalfont diff} \vert \le \varepsilon_{k} |\mathbf{z}_{k-1}^\text{\normalfont diff}(t_{k-1}^+)|$ holds.
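As a purely numerical illustration of \ref{prop:diff} (the system matrices, gain, horizon, and step size below are hypothetical choices, not taken from the paper), the following sketch simulates a Luenberger observer for a toy observable pair $(S,R)$ with the observer initialized at zero, and returns the terminal error ratio:

```python
def simulate(tau=3.0, dt=1e-3):
    """Euler simulation of a Luenberger observer for a toy pair (S, R).

    Returns |z(tau) - zhat(tau)| / |z(0)|; all numbers are illustrative."""
    S = [[0.0, 1.0], [0.0, 0.0]]    # double-integrator dynamics
    R = [1.0, 0.0]                  # only the first state is measured
    L = [4.0, 4.0]                  # places both closed-loop poles at -2
    z, zh = [1.0, 1.0], [0.0, 0.0]  # true state and observer state
    for _ in range(int(tau / dt)):
        innov = (R[0] * z[0] + R[1] * z[1]) - (R[0] * zh[0] + R[1] * zh[1])
        dz = [S[0][0] * z[0] + S[0][1] * z[1],
              S[1][0] * z[0] + S[1][1] * z[1]]
        # observer dynamics: zh' = (S - L R) zh + L y = S zh + L * innovation
        dzh = [S[0][0] * zh[0] + S[0][1] * zh[1] + L[0] * innov,
               S[1][0] * zh[0] + S[1][1] * zh[1] + L[1] * innov]
        z = [z[0] + dt * dz[0], z[1] + dt * dz[1]]
        zh = [zh[0] + dt * dzh[0], zh[1] + dt * dzh[1]]
    err = ((z[0] - zh[0]) ** 2 + (z[1] - zh[1]) ** 2) ** 0.5
    return err / (2.0 ** 0.5)       # |z(0)| = sqrt(2)
```

With these example values the ratio comes out well below $0.05$; a more aggressive gain shrinks it further over the same window, mirroring the role of the squashing lemma.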
Many other estimation techniques for linear systems exist in the literature; we deliberately do not fix one particular technique, so that other estimators with their own advantages can be used as well.
\subsubsection{Estimate the impulsive part $z_k^\text{\normalfont imp}$}
To estimate $z_k^\text{\normalfont imp}$ from the impulsive part of the output, we write
the impulsive output of the error system \eqref{eq:impulse_error} as:
\[
y^e[t_k] = \sum_{i=0}^{n-2} \eta_k^i \delta_{t_k}^{(i)}.
\]
Then, recalling \eqref{eq:z1imp},
\[
z_k^\text{\normalfont imp} = U_k^{\text{\normalfont imp}^\top} \boldsymbol{\eta}_k,
\]
where $U_k^\text{\normalfont imp}$ is a compression matrix defined analogously as in \eqref{eq:U1imp} and $\boldsymbol{\eta}_k:=(\eta_k^0/\eta_k^1/\ldots/\eta_k^{n-2})$.
Recall that
\[
y^e[t_k] = C_k \widehat{x}_{k-1}[t_k] - y[t_k] =: C_k \sum_{i=0}^{n-2} \zeta_k^i \delta_{t_k}^{(i)} - \sum_{i=0}^{n-2} \nu_k^i \delta_{t_k}^{(i)},
\]
i.e.,
\[
\eta_k^i = C_k \zeta_{k}^i - \nu_k^i,
\]
where $\nu_k^i$ must be obtained via measuring the impulsive part of the system's output and $\zeta_k^i$ results from running the system copy \eqref{eq:obsSyn}; in particular, $\zeta_k^i$ depends on the impulsive part of the input $u[t_k]$ as well as on the derivatives $u^{(i)}(t_k^+)$, $i=0,\ldots,n-2$, of the input immediately after the switch at $t_k$.
While the input may be known exactly (and hence $\zeta_k^i$ may be calculated analytically), obtaining $\nu_k^i$ from measurements may prove to be a very difficult task using physical sensors, because Dirac impulses do not occur in reality. One possibility to approximately determine $\nu_k^i$ is the following observation: Assume $\int$ denotes an ideal integrator, then
$
\nu_k^0 = \left(\int y\right)(t_k^+) - \left(\int y\right)(t_k^-),
$
and in general
\[
\nu_k^i = \bigg(\underbrace{\int\int\cdots\int}_{i+1\text{ times}} y\bigg)(t_k^+) - \left(\int\int\cdots\int y\right)(t_k^-).
\]
If in reality, a Dirac impulse in $y[t_k]$ is ``smeared out'' on the interval $[t_k,t_k+\varepsilon]$ then one gets an estimate of $\nu_k^0$ by
$
\nu_k^0 \approx \left(\int y\right)(t_k+\varepsilon) - \left(\int y\right)(t_k),
$
and analogously for $\nu_k^i$. The smaller $\varepsilon$ is and the better the integrator is implemented, the closer the approximation is to the exact value $\nu_k^i$ (and also $\eta_k^i$). Hence, we may approximate $y^e[t_k]$ as
$
\widehat y^e[t_k] \approx \sum_{i=0}^{n-2} \widehat{\eta}_k^i \delta_{t_k}^{(i)}
$
and consequently
\[
\widehat{z}^\text{\normalfont imp}_k := U_k^{\text{\normalfont imp}^\top} \widehat{\boldsymbol{\eta}}_k.
\]
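This smearing argument can be sanity-checked numerically; the sketch below (with a hypothetical pulse width $\varepsilon$ and step size) replaces a Dirac impulse of weight $w$ by a rectangular pulse of height $w/\varepsilon$ on $[t_k,t_k+\varepsilon]$ and recovers the weight from the jump of the integrated signal:

```python
def recover_impulse_weight(w=2.5, eps=1e-3, dt=1e-6):
    """Approximate a Dirac impulse w*delta by a rectangular pulse of
    height w/eps on [t_k, t_k + eps]; the jump of the running integral
    over that window recovers nu_k^0 ~= w."""
    n = int(eps / dt)
    jump = 0.0
    for _ in range(n):
        jump += (w / eps) * dt  # left-endpoint Riemann sum of y
    return jump                 # = (int y)(t_k + eps) - (int y)(t_k)
```

The smaller the pulse width and the finer the integration, the closer the recovered value is to the true impulse weight, in line with the discussion above.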
\begin{eprop}
\item \label{prop:imp}For a given $\varepsilon_k > 0$, $k\in\mathbb{N}$, one can obtain an approximation $\widehat{\boldsymbol{\eta}}_k$ of $\boldsymbol{\eta}_k$ such that
\[
|\boldsymbol{\eta}_k - \widehat{\boldsymbol{\eta}}_k| \le \varepsilon_k |\boldsymbol{\eta}_k|.
\]
\end{eprop}
\subsection{Convergence}\label{sec:obsConv}
We now show that the error correction vector $\xi_p$, $p \in \mathbb{N}$, computed from these estimates would make the state estimation error converge to zero, if the estimates of the observable components at each switching time are good enough, and a certain determinability assumption over intervals holds repeatedly.
To formalize this result, let us introduce the following assumption:
\begin{ass}
\item \label{ass:det}Assume that there exists a pair of non-decreasing, unbounded sequences in $\mathbb{N}$, denoted $\{(q_i,p_i)\}_{i = 1}^\infty$, with $q_i<p_i<p_{i+1}$ and such that \eqref{eq:sysLin} is $(t_{q_i},t_{p_i}]$-determinable, i.e.\
\begin{equation}\label{eq:assDet}
\mathcal{Q}_{q_i}^{p_i} \subseteq \ker \Pi_{p_i} , \quad i = 1,2,3, \cdots.
\end{equation}
\end{ass}
Assumption~\ref{ass:det} basically requires that system \eqref{eq:sysLin} is persistently determinable, i.e., after any time instant, a determinable interval appears again eventually. At the end of these determinability intervals our observer resets the state estimate. Depending on the estimation accuracies formulated in \ref{prop:diff} and \ref{prop:imp} for individual components, the state resets make the overall estimation error sufficiently small. Afterwards the system copy runs without any continuous feedback, but its deviation from the original state is bounded by the system dynamics. More formally, we can formulate the following qualitative convergence result:
\begin{Theorem}\label{thm:obsGenConv}
Consider the switched system \eqref{eq:sysLin} satisfying Assumption~\ref{ass:det}. For the impulsive observer \eqref{eq:obsSyn}, let
\[
\xi_p = \begin{cases} \mathcal{O}_{q_i}^{p_i}(\widehat{\mathbf{z}}_{q_i+1}^{p_i}) - \Xi_{q_i}^{p_i-1}(\boldsymbol{\xi}_{q_i+1}^{p_i-1}) & \text{ if } p = p_i,\ i\in\mathbb{N},\\
0, & \text{ otherwise},
\end{cases}
\]
where the map $\mathcal{O}_{q_i}^{p_i}$ is given by \eqref{eq:Oqp}, the estimates $\widehat{\mathbf{z}}_{q_i+1}^{p_i}=(\widehat{z}_{q_i+1}, \widehat{z}_{q_i+2},\ldots, \widehat{z}_{p_i})$ are computed as in Section~\ref{sec:zEst}, the map $\Xi_{q_i}^{p_i-1}$ is given by \eqref{eq:Xiqp-1} and $\boldsymbol{\xi}_{q_i+1}^{p_i-1}= (\xi_{q_i+1},\xi_{q_i+2},\ldots,\xi_{p_i-1})$.
For each $p \in \mathbb{N}$, there exists $\varepsilon_p > 0$ such that, if $\widehat z_p$ satisfies the estimation properties \ref{prop:diff} and \ref{prop:imp} for the given $\varepsilon_p$, then it holds that
\[
\boxed{ \lim_{t\rightarrow \infty} |\widehat{x}(t^+) - x(t^+)| = 0.}
\]
\end{Theorem}
The proof of Theorem~\ref{thm:obsGenConv} is constructive from a design viewpoint, and a quantitative bound on the $\varepsilon_p$ is computed along the way.
Before proving this result in its generality, we highlight two special cases, whose convergence proofs form the basis for the general proof of Theorem~\ref{thm:obsGenConv}.
\begin{Definition}[Interval-wise and sliding window observer]
Consider our general observer design as given in Theorem~\ref{thm:obsGenConv}. We call this observer \emph{interval-wise observer} if $q_{i+1}=p_i$ for all $i\in\mathbb{N}$, i.e.\ the determinability intervals cover the whole time axis without overlap. On the other hand, when there is the maximal possible overlap, i.e.\ $p_{i+1}=p_i+1$ for all $i\in\mathbb{N}$, then we call our observer \emph{sliding window observer}.
\end{Definition}
\begin{proof}[Proof of Theorem~\ref{thm:obsGenConv}]
We first observe in general that, due to Proposition~\ref{prop:mapCompRecur} and \eqref{eq:preErrC} for $p=p_i$,
\[\begin{aligned}
e(t_{p_i}^+) &= \Pi_{p_i} ( e(t_{p_i}^-)-\xi_{p_i})\\
&= \Pi_{p_i}\left(\mathcal{O}_{q_i}^{p_i}(\mathbf{z}_{q_i+1}^{p_i}) - \Xi_{q_i}^{p_i-1}(\boldsymbol{\xi}_{q_i+1}^{p_i-1})\right.\\
&\phantom{{}= \Pi_{p_i}~}\left.-\left(\mathcal{O}_{q_i}^{p_i}(\widehat{\mathbf{z}}_{q_i+1}^{p_i}) - \Xi_{q_i}^{p_i-1}(\boldsymbol{\xi}_{q_i+1}^{p_i-1})\right)\right)\\
&= \Pi_{p_i}\mathcal{O}_{q_i}^{p_i}(\mathbf{z}_{q_i+1}^{p_i}-\widehat{\mathbf{z}}_{q_i+1}^{p_i}).
\end{aligned}
\]
From Lemma~\ref{lem:zk-hatzk_bound} in \ref{app:lemmas}, for each $k \in \mathbb{N}$, there exists a constant $M^z_{k-1,k}>0$ depending on $\tau_{k-1}=t_k-t_{k-1}$, $(E_{k-1},A_{k-1},C_{k-1})$ and $(E_k,A_k,C_k)$ such that
\begin{equation}\label{eq:boundzk}
|z_k - \widehat{z}_k| \leq \varepsilon_k M^z_{k-1,k} |e(t_{k-1}^+)|.
\end{equation}
Using the notation of Theorem~\ref{thm:mapZs}, \eqref{eq:Oqp} and \eqref{eq:boundzk} yield
\begin{equation}\label{eq:recursive_error_bound}
|e(t_{p_i}^+)| \leq \sum_{k=q_i+1}^{p_i} \varepsilon_k M^{\mathcal{O}}_{k,p_i} M^z_{k-1,k} |e(t_{k-1}^+)|,
\end{equation}
where
\[
M^{\mathcal{O}}_{k,p_i} := \left\| \Pi_{p_i}P_{q_i}^{p_i} \left(\prod_{j=k+1}^{p_i} G_{q_i}^j\right) F_{q_i}^k\right\|.
\]
\emph{Interval-wise observer.} For this case, we first utilize the fact that $\xi_k=0$ for $k=q_i+1,q_i+2,\ldots,p_i-1$, and by invoking Lemma~\ref{lem:diffProj}, we have for these $k$:
\[
e(t_k^+) = \left(\prod_{j=q_i}^{k-1} \Pi_{j+1}\mathtt{e}^{A^\text{\normalfont diff}_j \tau_j} \right) e(t_{q_i}^+).
\]
Substitution in \eqref{eq:recursive_error_bound} gives
\[
|e(t_{p_i}^+)| \leq c_i |e(t_{q_i}^+)| = c_i |e(t_{p_{i-1}}^+)|.
\]
The constant $c_i$ is defined as:
\[
c_i := \sum_{k=q_i+1}^{p_i} \varepsilon_k M^{\mathcal{O}}_{k,p_i} M^z_{k-1,k} M^\text{\normalfont diff}_{q_i,k-2}
\]
where, for $k = q_i+2, \dots, p_i$, we let
\begin{equation}\label{eq:M^Adiff}
M^\text{\normalfont diff}_{q_i,k-2} := \left\| \prod_{j=q_i}^{k-2} \Pi_{j+1}\mathtt{e}^{A^\text{\normalfont diff}_j \tau_j}\right\|
\end{equation}
and by convention, $M^\text{\normalfont diff}_{q_i,q_i-1} = 1$.
On each determinability interval $(p_{i-1},p_i]$ we can therefore choose $\varepsilon_{p_{i-1}+1}, \varepsilon_{p_{i-1}+2}, \ldots, \varepsilon_{p_i}$ sufficiently small such that $c_i\in(0,1)$ and $c_i$ is uniformly bounded away from $1$. We can thus conclude that in this case
\begin{equation}\label{eq:etpi-convergence}
e(t_{p_i}^+) \underset{i\to\infty}{\longrightarrow} 0.
\end{equation}
Note that the ability to let $i$ tend to infinity (and also $t_{p_i}\to\infty$) follows from assumption \ref{ass:det}.
Next, we have for $t\in(t_k,t_{k+1})\subseteq (t_{p_{i}},t_{p_{i+1}})$:
\[
e(t^+) = \mathtt{e}^{A^\text{\normalfont diff}_k (t-t_k)} \left(\prod_{j=p_{i}}^{k-1} \Pi_{j+1}\mathtt{e}^{A^\text{\normalfont diff}_j \tau_j} \right) e(t_{p_{i}}^+),
\]
which gives
$
|e(t^+)| \leq \overline{M}^\text{\normalfont diff}_{p_{i},p_{i+1}} |e(t_{p_{i}}^+)|,
$
where
\[
\overline{M}^\text{\normalfont diff}_{p_{i},p_{i+1}} :=\sup_{t\in (t_{p_{i}},t_{p_{i+1}})} \left\|\mathtt{e}^{A^\text{\normalfont diff}_{k(t)} (t-t_{k(t)})} \left(\prod_{j=p_{i}}^{k(t)-1} \Pi_{j+1}\mathtt{e}^{A^\text{\normalfont diff}_j \tau_j} \right)\right\|,
\]
and $k(t)\in\{p_{i},p_{i}+1,\ldots, p_{i+1}-1\}$ is such that $t\in(t_{k(t)},t_{k(t)+1})$. We may now choose $c_{i}$ small enough (by choosing $\varepsilon_{p_{i-1}+1}$, \ldots, $\varepsilon_{p_{i}}$ to be sufficiently small) such that $c_{i}\overline{M}^\text{\normalfont diff}_{p_{i},p_{i+1}}$ is uniformly bounded, say by $\overline c >0$, then for all $t\in(t_{p_{i}},t_{p_{i+1}})$ we have
\[
|e(t^+)|\leq \overline{M}^\text{\normalfont diff}_{p_{i},p_{i+1}} c_{i} |e(t_{p_{i-1}}^+)| \leq \overline c \, |e(t_{p_{i-1}}^+)|,
\]
and the convergence of $|e(t^+)|$ towards zero as $t\to\infty$ now follows from \eqref{eq:etpi-convergence}.
\emph{Sliding window observer.} By Assumption~\ref{ass:det}, the sequence $(q_i)_{i\in\mathbb{N}}$ is nondecreasing and unbounded, i.e., for sufficiently large $i$ we have $q_i\geq p_1$. By using the relation $p_{i+1}=p_i+1$, we can rewrite \eqref{eq:recursive_error_bound} as
\[
|e(t_{p_i}^+)| \leq \sum_{k=q_i+1}^{p_i} \varepsilon_k M^{\mathcal{O}}_{k,p_i} M^z_{k-1,k} |e(t_{p_{i-p_i+k-1}}^+)|
\]
for sufficiently large $i$. Let
\[
c_i := (p_i-q_i) \max_{k=q_i+1,\ldots,p_i}\varepsilon_k M^\mathcal{O}_{k,p_i} M^z_{k-1,k}.
\]
Since $(q_i)_{i\in\mathbb{N}}$ is non-decreasing and unbounded, for any fixed $k$, there are only finitely many indices $i$ such that $c_i$ depends on $\varepsilon_k$. Hence for sufficiently small $\varepsilon_k$ we have $c_i\in(0,1)$ uniformly bounded away from $1$ and
\[
|e(t_{p_i}^+)| \leq \frac{c_i}{p_i-q_i} \sum_{j=1}^{p_i-q_i} |e(t_{p_{i-j}}^+)|.
\]
Applying Lemma~\ref{lem:convSeq} from the \ref{app:lemmas} results in
\begin{equation}\label{eq:slide_err_ti}
|e(t_{p_i}^+)|\underset{i\to\infty}{\longrightarrow} 0.
\end{equation}
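The mechanism behind Lemma~\ref{lem:convSeq} can be illustrated numerically: a sequence dominated by $c$ times the average of its previous $m$ terms, with $c<1$, decays to zero. The values below are hypothetical, chosen only for illustration:

```python
def averaged_recursion(c=0.9, m=3, steps=300):
    """a_i = (c/m) * (a_{i-1} + ... + a_{i-m}) with a_0 = ... = a_{m-1} = 1;
    for c < 1 the sequence decays to zero."""
    a = [1.0] * m
    for _ in range(steps):
        a.append((c / m) * sum(a[-m:]))  # averaged recursion step
    return a[-1]
```

Here the decay is geometric with a rate governed by the dominant root of the associated characteristic polynomial, which lies strictly inside the unit disc whenever $c<1$.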
Next, for any $t\in (t_{p_i},t_{p_{i}+1})$, we have $e(t^+) = \mathtt{e}^{A^\text{\normalfont diff}_{p_i}(t-t_{p_i})} e(t_{p_i}^+)$ and hence
\[
|e(t^+)| \leq M^\text{\normalfont diff}_{p_i} |e(t_{p_i}^+)| \leq M^\text{\normalfont diff}_{p_i} c_i \frac{1}{p_i-q_i}\sum_{j=1}^{p_i-q_i} |e(t_{p_{i-j}}^+)|,
\]
where
$
M^\text{\normalfont diff}_{p_i} = \sup_{s\in(0,\tau_{p_i})} \left\|\mathtt{e}^{A^\text{\normalfont diff}_{p_i}s}\right\|.
$
For sufficiently small $\varepsilon_k$, we can ensure that $M^\text{\normalfont diff}_{p_i} c_i$ is uniformly bounded, say by $\overline c >0$. Furthermore, for any $\varepsilon>0$ there exists an index $i_{\varepsilon}$ such that $|e(t_p^+)|\leq \varepsilon$ for all $p\geq q_{i_\varepsilon}$, hence for all $t\in(t_{p_i},t_{p_i+1})$ and all $i\geq i_\varepsilon$ we have
\[
|e(t^+)| \leq \overline{c} \, \frac{1}{p_i-q_i}\sum_{j=1}^{p_i-q_i} |e(t_{p_{i-j}}^+)| \leq \overline{c}\, \varepsilon.
\]
This shows convergence of $e(t^+)$ towards $0$ as $t\to\infty$.
\emph{The general case.} We now combine the proof ideas from the interval-wise and sliding window observer to also prove the general case.
To this end, for a fixed $i \in \mathbb{N}$, introduce the function $h(i,\cdot): \{q_i, \dots, p_i-1\} \rightarrow \mathbb{N}$ such that\footnote{
For the interval-wise observer, $h(i,k)=i-1$, i.e.\ $p_{h(i,k)}=p_{i-1} = q_i$. For the sliding-window observer, where $p_{i-1} = p_i - 1$, we had $h(i,k)= i-p_i+k$, i.e.\ $p_{h(i,k)} = p_{i-p_i+k} = k$.}
\[
h(i,k) = \max \{j \, \vert \, p_j \le k\}
\]
and let the set $J_i$ be the range of $h(i,\cdot)$, that is, $J_i := \{h(i,k), k=q_i, \dots, p_i-1\}$.
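For concreteness, $h(i,\cdot)$ and $J_i$ can be computed directly from the sequence $(p_j)$; the following sketch uses a small hypothetical switching-index sequence:

```python
def h(p, k):
    """h(i, k) = max{ j : p_j <= k }; p maps j >= 1 to the index p_j."""
    return max(j for j, pj in p.items() if pj <= k)

def J(p, q_i, p_i):
    """Range of h(i, .) over k = q_i, ..., p_i - 1."""
    return {h(p, k) for k in range(q_i, p_i)}
```

For example, with $p_1=2$, $p_2=4$, $p_3=7$ and $(q_3,p_3)=(3,7)$, one gets $J_3=\{1,2\}$.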
For each $k = q_i, \dots, p_i - 1$, it holds that, recalling \eqref{eq:M^Adiff},
\[
\vert e(t_{k}^+) \vert \le M_{p_{h(i,k)},k-1}^\text{\normalfont diff} |e(t_{p_{h(i,k)}}^+)|
\]
and \eqref{eq:recursive_error_bound} becomes
\[
\vert e(t_{p_i}^+) \vert \le \!\!\!\sum_{k=q_i+1}^{p_i} \!\!\!\varepsilon_{k} M_{k,p_i}^{\mathcal{O}} M_{k-1,k}^z M_{p_{h(i,k-1)},k-2}^\text{\normalfont diff} \, \vert e(t_{p_{h(i,k-1)}}^+) \vert .
\]
Let $\vert J_i \vert$ denote the cardinality of $J_i$, and let
\[
c_i := \left \vert J_i \right \vert \max_{k=q_i+1,\ldots,p_i}\varepsilon_k M^\mathcal{O}_{k,p_i} M^z_{k-1,k} M^\text{\normalfont diff}_{p_{h(i,k-1)},k-2},
\]
then by choosing $\varepsilon_k$ sufficiently small, we again have $c_i \in (0,1)$ uniformly bounded away from 1, and
\[
|e(t_{p_i}^+)| \leq \frac{c_i}{\vert J_i\vert} \sum_{j=1}^{\vert J_i \vert} |e(t_{p_{i-j}}^+)|.
\]
Once again, it follows from Lemma~\ref{lem:convSeq} in \ref{app:lemmas} that $|e(t_{p_i}^+)| \to 0$ as $i \rightarrow \infty$. To show that $|e(t^+)|$ converges to zero for $t \in (t_{p_i},t_{p_{i+1}})$, one can follow exactly the same arguments as in the case of the interval-wise observer to conclude that $e(t^+)\to 0$ as $t\to\infty$.
\end{proof}
\begin{Remark}[Convergence of impulsive part {$\widehat x[t_p] - x[t_p]$}]
It was already observed that
$
e[t_p] = -\sum_{i=0}^{n-2} (E_p^\text{\normalfont imp})^{i+1} (e(t_p^-) - \xi_p) \delta_{t_p}^{(i)}.
$
From Theorem~\ref{thm:obsGenConv}, we have $e(t_p^-)$ converging to zero for large $p \in \mathbb{N}$. Since $\xi_p$ is by construction an estimate of $e(t_p^-)$ which gets closer and closer to the real value of $e(t_p^-)$ as $p$ gets large, it follows that the coefficients multiplying Dirac impulses get smaller with time.
\end{Remark}
\section{Simulations}\label{sec:sim}
For the simulation of a sliding window observer, we refer to our conference paper \citep{TanwTren13}. The simulations for the interval-wise observer are now presented for the example considered in Section~\ref{sec:example}. As a known input to the system, we chose
$
u(t) = 1 + \sin(t).
$
The corresponding output of the switched system \eqref{eq:sysLin} with initial condition $x(t_0) = [1,\ 3/2,\ 2,\ 5/2]^\top$ is shown in Figure~\ref{fig:output}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.375\textwidth]{output.pdf}
\caption{Output $y=y^f_\mathbb{D} + y[\cdot]$ of \eqref{eq:sysLin} with regular part $y^f$ (blue) and indication of Dirac impulses in $y[\cdot]$ (red).}\label{fig:output}
\end{figure}
For our observer design, we estimate $z^{\text{\normalfont diff}}_p$ using a classical Luenberger observer with
$
L_0 = [1/4,3/8]^\top,\quad L_3 = [3/8, 1/4]^\top,
$
and we add ten percent measurement noise in the impulsive part of $y[\cdot]$. The system copy starts with zero initial value. The resulting estimation errors (without the Dirac impulses) are shown in Figure~\ref{fig:errors}.
\begin{figure}[hbt]
\includegraphics[width=0.475\textwidth]{error-blue.pdf}
\caption{Estimation errors (without Dirac impulses), $x_1-\widehat{x}_1$ upper left figure, $x_2-\widehat{x}_2$ upper right figure, $x_3-\widehat{x}_3$ lower left figure, $x_4-\widehat{x}_4$ lower right figure.}\label{fig:errors}
\end{figure}
Clearly, the estimation error converges (slowly) to zero. Note that the convergence can be accelerated significantly by using a more aggressive gain matrix in the Luenberger observers. However, there exists a lower bound on the gain matrices which is determined by the length of the determinability intervals.
\section{Conclusions}\label{sec:conc}
The paper considered the problem of state estimation in switched linear DAEs.
The notion of determinability studied in this paper relates to the reconstruction of the state value at some time by processing outputs and inputs over an interval.
This does not necessarily require observability of the initial state, or the individual subsystems.
The geometric characterization of determinability is then used for synthesis of a class of state estimators.
In contrast to classical estimation techniques which require continuous output injection, in our approach the estimator is reset at some discrete time instants after processing the external measurements over an interval.
For future work, we are interested in developing state estimators which only require the property of detectability from the system.
Our preliminary results on detectability of switched DAEs have appeared in \citep{TanwTren15}.
Similar concepts have been used for studying the notion of controllability in switched DAEs \citep{KustRupp15} and a duality result is also available \citep{KustTren15pp}. It would be interesting to investigate if such ideas can be used for designing stabilizing controllers.
\section{Introduction}
Recent strides in quantum computing have made it possible to execute quantum algorithms on real quantum hardware. In contrast to classical computing, efficient quantum circuits are necessary for successful execution because qubits decohere. If a quantum circuit takes too long to execute, it will not produce any usable results. Moreover, due to poor gate fidelities, each additional gate in the quantum circuit adds a small error to the computation. In the absence of fault-tolerant quantum computers, circuits with more gates produce less accurate results.
Therefore, we need to reduce the gate complexity of the executed quantum circuits. This requires resource-efficient algorithms and improved quantum compiling procedures.
When mapping a quantum circuit to the physical layer, one has to consider the numerous constraints imposed by the underlying hardware architecture. For example, in a superconducting quantum computer~\cite{stassi2020scalable}, connectivity of the physical qubits restricts multi-qubit operations to adjacent qubits. These restrictions are known as \textit{connectivity constraints} and can be represented by a \textit{connected graph} (also known as a \textit{topology}). Each vertex represents a distinct physical qubit. When two qubits are adjacent, there is an edge between the corresponding vertices.
Thus, we are interested in improving the routing of a quantum circuit onto the quantum computer. Current routing strategies are dominated by SWAP-based approaches~\cite{li2019tackling,qiskit,sivarajah2020tket}. These strategies move the logical qubits around on different quantum registers. The drawback is that every SWAP gate decomposes into 3 CNOT gates, so routing only adds gates to the original circuit. As a result, the circuit takes longer to execute and thus introduces more errors into the computation.
\begin{figure}[t!]
\renewcommand{\thefigure}{(a)}
\begin{minipage}[b]{0.9\linewidth}
\centering
\begin{tikzpicture}[scale=0.5]
\begin{scope}[every node/.style={circle,minimum size= .07 cm,draw}]
\node (1) at (-2,2) {$\mathbf{1}$};
\node (2) at (0,2) {$2$};
\node (3) at (2,2) {$3$};
\node (4) at (-2,0) {$4$};
\node (5) at (0,0) {$5$};
\node (6) at (2,0) {$6$};
\node (7) at (-2,-2) {$7$};
\node (8) at (0,-2) {$8$};
\node (9) at (2,-2) {$\mathbf{9}$};
\end{scope}
\draw[thick] (1)--(2);
\draw (2)--(3)--(6)--(9);
\draw[thick] (9)--(8);
\draw (8)--(7)--(4)--(1);
\draw[thick] (2)--(5);
\draw (5)--(6) (4)--(5);
\draw[thick] (5)--(8);
\end{tikzpicture}
\label{a}
\caption{9-qubit square grid}
\end{minipage}\hfill
\renewcommand{\thefigure}{(b)}
\begin{minipage}[b]{0.49\linewidth}
\[
\Qcircuit @C=.7em @R=.7em @!R {
\lstick{\ket{1}} & \qswap & \qw & \qw & \qw & \qw & \qw & \qswap & \qw & \rstick{\ket{1}}\\
\lstick{\ket{2}} & \qswap \qwx & \qw & \qswap & \qw & \qswap & \qw & \qswap \qwx & \qw & \rstick{\ket{2}}\\
\lstick{\ket{5}} & \qw & \qw & \qswap \qwx & \ctrl{1} & \qswap \qwx & \qw & \qw & \qw & \rstick{\ket{5}}\\
\lstick{\ket{8}} & \qswap & \qw & \qw & \targ & \qw & \qw & \qswap & \qw & \rstick{\ket{8}}\\
\lstick{\ket{9}} & \qswap \qwx & \qw & \qw & \qw & \qw & \qw & \qswap \qwx & \qw & \rstick{\ket{1 \oplus 9}}\\
}
\]
\label{b}
\caption{SWAP template with fixed allocation}
\end{minipage}\hfill
\renewcommand{\thefigure}{(c)}
\begin{minipage}[b]{0.5\linewidth}
\[
\Qcircuit @C=.7em @R=.7em @!R {
\lstick{\ket{1}} & \qswap & \qw & \qw & \qw & \qw & \qw & \rstick{\ket{2}}\\
\lstick{\ket{2}} & \qswap \qwx & \qw & \qswap & \qw & \qw & \qw & \rstick{\ket{5}}\\
\lstick{\ket{5}} & \qw & \qw & \qswap \qwx & \ctrl{1} & \qw & \qw & \rstick{\ket{1}}\\
\lstick{\ket{8}} & \qswap & \qw & \qw & \targ & \qw & \qw & \rstick{\ket{1 \oplus 9}}\\
\lstick{\ket{9}} & \qswap \qwx & \qw & \qw & \qw & \qw & \qw & \rstick{\ket{8}}\\
}
\]
\label{c}
\caption{SWAP template with dynamic allocation}
\end{minipage}\hfill
\renewcommand{\thefigure}{(d)}
\begin{minipage}[b]{0.9\linewidth}
\[
\Qcircuit @C=.7em @R=.7em @!R {
\lstick{\ket{1}} & \qw & \qw & \qw & \ctrl{1}& \qw & \qw & \qw & \qw & \qw &\ctrl{1}& \qw & \qw & \qw & \rstick{\ket{1}}\\
\lstick{\ket{2}} & \qw & \qw & \ctrl{1} & \targ & \ctrl{1} & \qw &\qw & \qw &\ctrl{1}&\targ &\ctrl{1}& \qw & \qw & \rstick{\ket{2}}\\
\lstick{\ket{5}} & \qw & \ctrl{1}&\targ & \qw & \targ &\ctrl{1}&\qw &\ctrl{1}& \targ& \qw &\targ &\ctrl{1}& \qw & \rstick{\ket{5}}\\
\lstick{\ket{8}} & \ctrl{1}&\targ & \qw & \qw & \qw & \targ &\ctrl{1}&\targ & \qw & \qw & \qw &\targ & \qw & \rstick{\ket{8}}\\
\lstick{\ket{9}} & \targ & \qw & \qw & \qw & \qw & \qw &\targ & \qw & \qw & \qw & \qw &\qw & \qw & \rstick{\ket{1 \oplus 9}}\\
}
\]
\label{d}
\caption{Routed $\CNOT$s synthesized with \textit{Steiner-Gauss}~\cite{kissinger2020cnot}}
\end{minipage}
\renewcommand{\thefigure}{1}\caption{Three different routing strategies for $\CNOT(1,9)$ on a given topology. Figure (a): the target 9-qubit square grid topology. Figure (b): a SWAP-based method where the logical qubits are moved to different registers such that the $\CNOT$ can be executed. After that the logical qubits are moved back to their original registers ($\CNOT$ count: 19). Figure (c): a SWAP-based method like in (b) but the logical qubits are not moved back to their original registers ($\CNOT$ count: 10). Figure (d): an equivalent CNOT circuit synthesized with the \textit{Steiner-Gauss} algorithm~\cite{kissinger2020cnot} ($\CNOT$ count: 12).}
\label{fig:swapTemplate}
\end{figure}
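The $\CNOT$ counts quoted in Figure~\ref{fig:swapTemplate} can be reproduced from the topology alone: for a shortest path of $d$ edges between control and target, the fixed-allocation template costs $6(d-1)+1$ CNOTs (SWAP there and back) while the dynamic-allocation template costs $3(d-1)+1$. The Python sketch below is only a sanity check of this arithmetic:

```python
from collections import deque

# 9-qubit square grid of Figure 1(a); vertices 1..9, edges as drawn.
EDGES = [(1, 2), (2, 3), (3, 6), (6, 9), (9, 8), (8, 7), (7, 4), (4, 1),
         (2, 5), (5, 6), (4, 5), (5, 8)]
ADJ = {v: set() for v in range(1, 10)}
for u, v in EDGES:
    ADJ[u].add(v)
    ADJ[v].add(u)

def distance(a, b):
    """BFS shortest-path distance (number of edges) between qubits a and b."""
    seen = {a}
    frontier = deque([(a, 0)])
    while frontier:
        v, d = frontier.popleft()
        if v == b:
            return d
        for w in ADJ[v]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))

def cnot_counts(a, b):
    """CNOT cost of routing CNOT(a, b) with SWAPs along a shortest path."""
    d = distance(a, b)
    fixed = 3 * (d - 1) + 1 + 3 * (d - 1)  # swap there, one CNOT, swap back
    dynamic = 3 * (d - 1) + 1              # keep the new qubit allocation
    return fixed, dynamic
```

For $\CNOT(1,9)$ the shortest path has $d=4$ edges, giving the counts 19 and 10 from Figures 1(b) and 1(c).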
Alternatively, a recent paradigm shift in routing strategies has introduced Steiner-tree based synthesis~\cite{amy2020staq, kissinger2020cnot, nash2020quantum,meijer-vandegriend2020architecture,vandaele2022phase,gheorghiu2020reducing} as a tool for changing the circuit to fit the connectivity constraints of the quantum hardware.
The intuition behind these techniques is that by lifting the rigid representation of the quantum circuit to a more flexible representation and then synthesizing a new circuit from the flexible representation, we can make global improvements to the circuit more easily. This will be explained in more detail in \cref{sec:paritymatrix}. By taking into account the connectivity constraints of the underlying hardware architectures during the synthesis procedure, we create a routing procedure.
The \textit{Steiner-Gauss} algorithm provides the first Steiner-tree based synthesis approach and it is used to synthesize $\CNOT$ circuits~\cite{kissinger2020cnot, nash2020quantum}.
Later, Steiner-tree based synthesis was also used for synthesizing CNOT and $R_z$ gates simultaneously~\cite{nash2020quantum, meijer-vandegriend2020architecture,vandaele2022phase}, and even for simultaneous synthesis of CNOT, $R_z$, and $NOT$ gates leaving only the $H$ gates to \textit{slice and build} the circuit from~\cite{gheorghiu2020reducing}.
One major drawback of the existing Steiner-tree based methods is that they are not flexible with respect to the allocation of the qubits.
The logical qubits of the synthesized circuit will always be stored in the same qubit registers where they were originally allocated.
However, this is not always optimal as illustrated in \cref{fig:swapTemplate}. The figure shows that we can find a smaller circuit by reallocating the logical qubits (\cref{fig:swapTemplate}.c) than with \textit{Steiner-Gauss} (\cref{fig:swapTemplate}.d). This is true even though the latter synthesized fewer CNOTs than the SWAP-based method using a fixed qubit allocation (\cref{fig:swapTemplate}.b). Thus, we could say that the synthesized circuit in \cref{fig:swapTemplate}.d contains implicit SWAP-gates.
Moreover, the outputs of a quantum circuit can trivially be remapped to the original ordering using classical methods.
In other words, preparing a qubit on a register and measuring it from a different register is a classical operation that is cheap and does not influence the computation of the quantum circuit.
Thus, we need a synthesis procedure that can dynamically change the qubit allocation.
The original CNOT synthesis procedures~\cite{nash2020quantum,kissinger2020cnot} are based on Gaussian elimination to make the parity matrix representing the circuit into the identity matrix (see \cref{sec:background} for more detail).
Dynamically changing the qubit locations is the same as eliminating the parity matrix into a permutation of the identity matrix (see \cref{sec:permutation} for a more detailed explanation).
To do this, we would need to determine a priori which permutation of the identity matrix to synthesize, which is not a trivial task.
However, by adjusting the \textit{RowCol} algorithm~\cite{wu2019optimization}, we can determine the new qubit allocation whilst synthesizing the CNOT circuit.
In this paper, we propose \textit{PermRowCol}: a new Steiner-tree based synthesis method for compiling CNOT circuits that can dynamically choose the output qubit allocations.
Although CNOTs do not form a universal gate set, the CNOT is the only gate that needs to be routed in the universal $\{\CNOT,R_z,R_x\}$ and Clifford+T gate sets. By slicing a circuit into subcircuits that contain only $\CNOT$s and routing those chunks, one still obtains a general procedure. We discuss possible extensions to arbitrary quantum circuits in \cref{sec:extension}.
The paper is structured as follows: First, we give an introduction to Steiner-tree based synthesis (\cref{sec:background}). Then, we describe our algorithm (\cref{sec:methods}) and show how well it performs against
\textit{Steiner-Gauss}~\cite{kissinger2020cnot} and \textit{RowCol}~\cite{wu2019optimization}
(\cref{sec:results}). Next, we discuss the implications of our work (\cref{sec:conclusion}).
Lastly, we will discuss possible improvements of our algorithm that we are still working on (\cref{sec:future}) with the goal to fairly compare \textit{PermRowCol} against common compilers such as SABRE~\cite{li2019tackling}, Qiskit~\cite{qiskit}, and TKET~\cite{sivarajah2020tket}.
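To build intuition for why a permutation of the identity is an acceptable synthesis target: the parity matrix of a SWAP (three CNOTs) is exactly a transposition matrix, so a residual permutation matrix amounts to a wire relabeling that can be done classically after measurement. A small illustrative check (the row-operation convention is the one defined in \cref{sec:paritymatrix}):

```python
def parity_matrix(n, gates):
    """Parity matrix of a CNOT list: appending CNOT(c, t) adds row t to
    row c (mod 2), starting from the identity."""
    M = [[int(i == j) for j in range(n)] for i in range(n)]
    for c, t in gates:
        M[c] = [a ^ b for a, b in zip(M[c], M[t])]
    return M

# SWAP(0, 1) written as three CNOTs; its parity matrix is a transposition.
SWAP01 = [(0, 1), (1, 0), (0, 1)]
```

Since such a permutation acts only as a relabeling, a synthesis procedure that stops at a permutation of the identity can drop those implicit SWAPs entirely.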
\section{Preliminaries}\label{sec:background}
Here we give a short introduction to the core concepts required to understand our proposed algorithm. In \cref{sec:paritymatrix}, we define the matrix representation of $\CNOT$ circuits. In \cref{sec:steinertree} we describe the concept of Steiner trees that we use for synthesizing CNOT circuits that adhere to a given topology (\cref{sec:synthesis}). Lastly, we explain the core idea behind our algorithm: the dynamic reallocation of qubits using permutation matrices.
\subsection{The parity matrix of a CNOT circuit}\label{sec:paritymatrix}
In this paper, we concern ourselves with circuits that contain only CNOT gates, called a \textit{CNOT circuit}.
$\CNOT$ is short for "controlled not" and acts on two qubits: a control and a target. We write $\CNOT(c,t)$ to denote a CNOT applied between control qubit $c$ and target qubit $t$. In circuit notation, we have:
\[
\Qcircuit @C=1em @R=.3em @!R {
\lstick{\ket{c}} & \ctrl{1} & \qw & \rstick{\ket{c}}\\
\lstick{\ket{t}} & \targ & \qw & \rstick{\ket{c \oplus t}}
}
\]
We can think of the control qubit $c$ as controlling whether a NOT gate is applied to the target qubit: when $\ket{c} = \ket{0}$, the $\CNOT$ acts trivially on $\ket{t}$, leaving it the same, whereas when $\ket{c} = \ket{1}$, it changes $\ket{t} = \ket{0}$ to $\ket{t} = \ket{1}$ and vice versa. Alternatively, we write that the $\CNOT$ changes $\ket{t}$ to $\ket{c \oplus t}$, where $\oplus$ means addition modulo 2. We can see this in the output of the example circuit notation in \cref{fig:parity_example}.
\begin{figure}[h!]
\begin{subfigure}{.5\textwidth}
\centering
\[
\Qcircuit @C=1em @R=.3em @!R {
\lstick{\ket{x_1}} & \qw & \ctrl{1} & \qw & \targ & \qw & \qw & \rstick{\ket{x_2}}\\
\lstick{\ket{x_2}} & \qw & \targ & \ctrl{1} & \ctrl{-1} & \qw & \qw & \rstick{\ket{x_1 \oplus x_2}}\\
\lstick{\ket{x_3}} & \ctrl{1} & \qw & \targ & \qw & \ctrl{1} & \qw & \rstick{\ket{x_1 \oplus x_2 \oplus x_3}}\\
\lstick{\ket{x_4}} & \targ & \qw & \qw & \qw & \targ & \qw & \rstick{\ket{x_1 \oplus x_2 \oplus x_4}}
}
\]
\vspace{.85 cm}
\caption{A $\CNOT$ circuit composed of $4$ qubits.}
\label{fig:4qcircuit}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\[
\vect{A} = \begin{blockarray}{ccccc}
& x'_1 & x'_2 & x'_3 & x'_4 \\
\begin{block}{c(cccc)}
x_1 & 0 & 1 & 1 & 1 \\
x_2 & 1 & 1 & 1 & 1 \\
x_3 & 0 & 0 & 1 & 0 \\
x_4 & 0 & 0 & 0 & 1 \\
\end{block}
\end{blockarray}
\]
\caption{The parity matrix of the circuit in \cref{fig:4qcircuit}.}
\label{fig:paritymatrix}
\end{subfigure}
\caption{The matrix representation of a $4$-qubit $\CNOT$ circuit.}
\label{fig:parity_example}
\end{figure}
If we were to apply a $\CNOT$ multiple times on multiple qubits, we can keep track of which qubits appear in the sum (modulo 2) at the output of the circuit. See for example in \cref{fig:4qcircuit}, where we apply 5 $\CNOT$s to 4 qubits. The entire behaviour of the CNOT circuit is described by the sums at the output of the circuit. We call such a sum a \textit{parity} because it keeps track of whether a logical qubit participates in the sum or not. As such, we can write a parity as a binary string of the length of the number of qubits in the circuit. In that string, a $0$ means that the corresponding logical qubit does not occur in the sum and a $1$ means that that qubit did occur in the sum.
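The output parities of \cref{fig:4qcircuit} can be computed mechanically by propagating one bitmask per wire through the gate list; the sketch below (with qubit $x_{i+1}$ encoded as bit $i$) reproduces the four parities listed in the figure:

```python
def output_parities(n, gates):
    """Propagate parities: wire i starts as x_{i+1} (bit i set), and
    CNOT(c, t) XORs the control's parity into the target's parity."""
    par = [1 << i for i in range(n)]
    for c, t in gates:
        par[t] ^= par[c]
    return par

# Gate list of Figure 2(a), qubits indexed 0..3 for x1..x4, left to right.
GATES = [(2, 3), (0, 1), (1, 2), (1, 0), (2, 3)]
```

Reading the bitmasks back as sums gives exactly $x_2$, $x_1\oplus x_2$, $x_1\oplus x_2\oplus x_3$, and $x_1\oplus x_2\oplus x_4$, the columns of the parity matrix in \cref{fig:paritymatrix}.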
We can use the output parities of a CNOT circuit to create a square matrix in which each column represents a parity and each row represents an input qubit. This matrix is called a \textit{parity matrix}, and it has some interesting properties. Most importantly, the parity matrix of an empty circuit is the identity matrix:
\begin{center}
\begin{minipage}{.2\linewidth}
\[
\Qcircuit @C=1em @R=1em @!R {
\lstick{\ket{c}} & \qw & \qw & \rstick{\ket{c}}\\
\lstick{\ket{t}} & \qw & \qw & \rstick{\ket{t}}
}
\]
\end{minipage}%
\begin{minipage}{.2\linewidth}
\vspace{.4 cm}
\[
\sim \begin{blockarray}{ccc}
\begin{block}{c(cc)}
c & 1 & 0 \\
t & 0 & 1 \\
\end{block}
\end{blockarray}
\]
\end{minipage}
\end{center}
Additionally, appending a $\CNOT$ corresponds to adding the control column to the target column of the parity matrix (on the identity matrix shown here, this has the same effect as adding the target row to the control row):
\begin{center}
\begin{minipage}{.2\linewidth}
\[
\Qcircuit @C=1em @R=1em @!R {
\lstick{\ket{c}} & \ctrl{1} & \qw & \rstick{\ket{c}}\\
\lstick{\ket{t}} & \targ & \qw & \rstick{\ket{c \oplus t}}
}
\]
\end{minipage}%
\begin{minipage}{.2\linewidth}
\vspace{.4 cm}
\[
\sim \begin{blockarray}{ccc}
\begin{block}{c(cc)}
c & 1 & 1 \\
t & 0 & 1 \\
\end{block}
\end{blockarray}
\]
\end{minipage}
\end{center}
This means that we can extract a CNOT circuit from a parity matrix by adding rows of the parity matrix to each other until we obtain the identity matrix.
A commonly used algorithm for this is \textit{Gaussian elimination}~\cite{2008_PMH}, which we assume our readers know from linear algebra.
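To make this concrete, here is a small Python sketch (our illustration, not code from the paper's implementation) that builds the parity matrix of the circuit in \cref{fig:4qcircuit} by replaying its $\CNOT$s on the identity matrix. Appending $\CNOT(c,t)$ makes the parity on the target wire pick up the control wire's parity, i.e.\ column $t$ is replaced by column $t$ plus column $c$ modulo 2:

```python
import numpy as np

def apply_cnot(M, control, target):
    # Appending CNOT(control, target): the parity on the target wire
    # picks up the control wire's parity, i.e. column target += column control (mod 2).
    M[:, target] ^= M[:, control]

# Parity matrix of the empty 4-qubit circuit: the identity.
M = np.eye(4, dtype=int)

# The five CNOTs of the example circuit, as 0-based (control, target) wires.
for c, t in [(2, 3), (0, 1), (1, 2), (1, 0), (2, 3)]:
    apply_cnot(M, c, t)

# M now equals the parity matrix of the figure:
# rows are inputs x1..x4, columns are output parities x'_1..x'_4.
```

The resulting matrix matches \cref{fig:paritymatrix}.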
The method of constraining the extracted CNOTs to a specific topology is discussed in \cref{sec:synthesis}; to follow it, we first need to understand what Steiner trees are.
\subsection{Steiner tree}
\label{sec:steinertree}
The method we use to synthesize a new CNOT circuit from a parity matrix relies on Steiner trees to enforce the connectivity constraints of a given topology. In this section, we will define what a Steiner tree is and how we approximate them in our algorithm. Note that Steiner trees are not strictly necessary to constrain the CNOT circuit synthesis procedure (see e.g.~\cite{2008_PMH} for an alternative) but it is the method we use.
We start with the basics: a \textit{graph} is an ordered pair $G=(V_G,E_G)$, where $V_G$ is a set of \emph{vertices} and $E_G$ is a set of \emph{edges}; each edge is a pair $e=(u,v)$ with $u,v\in V_G$. The \textit{degree} of a vertex is the number of edges incident to that vertex. A graph may also have a \textit{weight} assigned to each edge by a weight function $\omega_E: E_G \rightarrow \real$.
The connectivity graph of a quantum computer is generally considered a \textit{simple} graph,
meaning that it is an \emph{undirected} graph (i.e., $(u,v) \equiv (v,u)$) with all edge weights equal to $1$, at most one edge between any two distinct vertices, and no self-loops (i.e., $(u,u)\notin E_G$).
Some graphs are \textit{connected}, meaning that for every vertex there exists a sequence of edges (a \textit{path}) along which we can go from that vertex to any other vertex in the graph. A graph that is not connected is \textit{disconnected}. The topology of a quantum computer needs to be a connected graph if we want any two qubits to be able to interact with each other.
A \textit{cut vertex} is a vertex that when removed from the graph, will result in a disconnected graph. We use the term \textit{non-cutting vertex} to mean a vertex that is not a cut vertex.
A \textit{subgraph} $G'=(V_G^{'},E_G^{'})$ of $G$ is a graph that is wholly contained in $G$ i.e. $V_G^{'}\subseteq V_G$, $E_G^{'}\subseteq E_G$, and $\forall(u,v)\in E_G^{'}:u,v\in V_G^{'}$.
A \textit{tree} is an undirected connected graph that has no paths starting and ending at the same vertex; it is \textit{acyclic}. A \textit{minimum spanning tree} $T$ of a connected graph $G$ is a subgraph of $G$ with the same vertex set $V_G$ and a subset of the edges $E_G$ such that the sum of the edge weights is minimal and $T$ is still connected.
A Steiner tree is similar to a minimum spanning tree:
\begin{definition}[\textbf{Steiner tree}]
Given a graph $G=(V_G,E_G)$ with a weight function $\omega_E$ and a set of vertices $\tset\subseteq V_G$, a Steiner tree $T=(V_T,E_T)$ is a tree that is a subgraph of $G$ such that $\tset\subseteq V_T$ and the sum of the weights of the edges in $E_T$ is minimized. The vertices in $\tset$ are called \emph{terminals} while those in $V_T\setminus \tset$ are called \emph{Steiner nodes}.
\label{defn:steinerTree}
\end{definition}
Computing Steiner trees is NP-hard and the related decision problem is NP-complete~\cite{1972_K}. There are a number of heuristic algorithms that compute approximate Steiner trees~\cite{2005_RZ, 2013_BGRS, 1992_HR}. There is a trade-off between the size of the approximate Steiner tree and the runtime of the algorithm, so the choice of algorithm is determined by its application. Here, we use Prim's algorithm and the Floyd--Warshall algorithm~\cite{cormen2022introduction}: we build a minimal spanning tree over the terminals using all-pairs shortest-path distances as weights and then add the corresponding shortest paths to the tree.
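As an illustration, the following self-contained Python sketch approximates a Steiner tree in this way (Floyd--Warshall for the metric closure, then Prim's algorithm over the terminals). It is a simplified stand-in for our implementation, with all names ours:

```python
from itertools import product

def all_pairs_shortest_paths(n, edges):
    # Floyd-Warshall on an unweighted, undirected graph; `nxt` stores
    # the successor matrix for path reconstruction.
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    nxt = [[None] * n for _ in range(n)]
    for u, v in edges:
        dist[u][v] = dist[v][u] = 1
        nxt[u][v], nxt[v][u] = v, u
    for k, i, j in product(range(n), repeat=3):
        if dist[i][k] + dist[k][j] < dist[i][j]:
            dist[i][j] = dist[i][k] + dist[k][j]
            nxt[i][j] = nxt[i][k]
    return dist, nxt

def shortest_path(nxt, u, v):
    path = [u]
    while u != v:
        u = nxt[u][v]
        path.append(u)
    return path

def approx_steiner_tree(n, edges, terminals):
    """Grow a tree from the first terminal; repeatedly attach the closest
    remaining terminal via its shortest path (Prim's algorithm on the
    metric closure). Returns the edge set of the approximate Steiner tree."""
    dist, nxt = all_pairs_shortest_paths(n, edges)
    tree_nodes = {terminals[0]}
    tree_edges = set()
    remaining = set(terminals[1:])
    while remaining:
        u, v = min(((a, b) for a in tree_nodes for b in remaining),
                   key=lambda e: dist[e[0]][e[1]])
        path = shortest_path(nxt, u, v)
        tree_edges.update(zip(path, path[1:]))
        tree_nodes.update(path)
        remaining -= tree_nodes
    return tree_edges
```

Any non-terminal vertices pulled in by the shortest paths are exactly the Steiner nodes of \cref{defn:steinerTree}.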
\subsection{Synthesizing $\CNOT$ circuits for specific topologies}\label{sec:synthesis}
Given the parity matrix of a CNOT circuit, we want to synthesize an equivalent CNOT circuit such that all $\CNOT$s are allowed according to a given connectivity graph. From \cref{sec:paritymatrix}, we know that every $\CNOT$ corresponds to a row addition in the parity matrix and that we can use Gaussian elimination to turn the parity matrix into the identity matrix. If we keep track of which row additions were performed during the Gaussian elimination process, we obtain a semantically equivalent CNOT circuit. \textit{Semantically equivalent} means that the parity matrices of the original circuit and the extracted circuit are the same; hence, the circuits have the same input-output behaviour.
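For the unconstrained case, this bookkeeping fits in a few lines of Python. The sketch below (ours, with names chosen for illustration) records each row addition of Gauss--Jordan elimination over GF(2) as a $\CNOT$; replaying the recorded gates reproduces the parity matrix:

```python
import numpy as np

def extract_cnots(M):
    """Unconstrained Gauss-Jordan over GF(2): the row addition
    `row a += row b` is recorded as the gate CNOT(control=a, target=b).
    Performing the recorded gates in order yields a circuit whose
    parity matrix is M (assumed invertible over GF(2))."""
    M = np.array(M, dtype=int) % 2
    n = len(M)
    cnots = []

    def row_add(a, b):
        M[a] ^= M[b]
        cnots.append((a, b))

    for col in range(n):
        if M[col][col] == 0:                      # bring a pivot to the diagonal
            pivot = next(r for r in range(col + 1, n) if M[r][col] == 1)
            row_add(col, pivot)
        for r in range(n):                        # clear the rest of the column
            if r != col and M[r][col] == 1:
                row_add(r, col)
    return cnots

def replay(n, cnots):
    # Rebuild the parity matrix by appending the CNOTs one by one.
    P = np.eye(n, dtype=int)
    for c, t in cnots:
        P[:, t] ^= P[:, c]
    return P
```

The round trip `replay(n, extract_cnots(M)) == M` is what semantic equivalence means here.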
If we want the extracted $\CNOT$s to adhere to the given connectivity constraints, we need to adjust our method to only allow the additions of rows that correspond to vertices that are connected in the topology. There are several methods to do this~\cite{kissinger2020cnot,nash2020quantum,2008_PMH} but our algorithm is based on the \textit{RowCol} algorithm from \cite{wu2019optimization}, so we will explain that method.
The task of CNOT circuit synthesis is to turn a parity matrix into the identity matrix using the allowed row additions. This means that we need to add rows together in the matrix such that each row and column only contains a 1 on the diagonal and 0 everywhere else. We call this process of adding rows \textit{elimination}. \textit{RowCol} is based on the strategy of picking a qubit and eliminating its corresponding column and row
such that both are identity and the qubit is no longer needed.
Steiner trees are used to find the best path over which the row additions are performed.
Then, we can remove the row and column from the parity matrix as well as the vertex from the topology and restart the algorithm on the smaller problem.
To make sure that the topology stays connected, the qubit is chosen to be a non-cutting vertex. Due to the structure of the \textit{RowCol} algorithm, we can remove the qubits in arbitrary order as long as the order does not disconnect the connectivity graph.
A column is eliminated by identifying all 1s in the column and building a Steiner tree $T$ with the diagonal as root and the rows with a 1 as terminals. We want to add the rows with a 1 together such that we end up with an identity column. However, due to the connectivity constraints, we need to use the Steiner nodes to ``move'' the 1s to the terminals. We do this by traversing $T$ from the bottom up: when we reach a Steiner node, we add its child's row to it, turning its entry in the column we are eliminating into a 1. Then, we traverse $T$ again, adding each row to its children's rows as it is reached. As a result, only the root will have a 1 in the column we are eliminating and every other row will have a 0. Thus, the column is eliminated. This procedure is also described as pseudo-code in \cref{alg:eliminatecolumn} in \cref{app:subroutines}.
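The following Python sketch illustrates this column-elimination step. For brevity it uses a BFS tree pruned to the terminals instead of a proper approximate Steiner tree, so it is an illustration of the idea rather than our implementation; all names are ours:

```python
import numpy as np
from collections import deque

def eliminate_column(M, edges, root, col):
    """Make column `col` of the GF(2) parity matrix M equal to e_root,
    using only row additions between rows adjacent in the connectivity
    graph. Each `row a += row b` is recorded as CNOT(control=a, target=b)."""
    n = len(M)
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # BFS tree from the root, pruned to branches containing a terminal
    # (a row with a 1 in `col`); a stand-in for a Steiner tree.
    terminals = {r for r in range(n) if M[r][col] == 1} | {root}
    parent, order, queue = {root: None}, [root], deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                order.append(w)
                queue.append(w)
    keep = set()
    for t in terminals:
        while t is not None and t not in keep:
            keep.add(t)
            t = parent[t]
    order = [v for v in order if v in keep]
    children = {v: [w for w in order if parent[w] == v] for v in order}

    cnots = []
    def row_add(a, b):
        M[a] ^= M[b]
        cnots.append((a, b))

    # Phase 1, leaves to root: every Steiner node picks up a 1 from a child.
    for v in reversed(order):
        if M[v][col] == 0 and children[v]:
            row_add(v, children[v][0])
    # Phase 2, leaves to root: add every parent to its child,
    # cancelling all 1s except the root's.
    for v in reversed(order):
        if parent[v] is not None:
            row_add(v, parent[v])
    return cnots
```

Every recorded row addition runs along a tree edge, so each corresponding $\CNOT$ respects the connectivity graph.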
A row is eliminated in a similar manner. However, it is less straightforward to find which rows need to be added to eliminate the desired row. We can find these rows by solving the system of linear equations defined by the parity matrix. Then, we can once more build a Steiner tree $T'$ with the diagonal as root and the rows to add to the desired row as terminals. We traverse $T'$ top down from the root, adding every Steiner node to its parent, and then traverse $T'$ bottom up, adding every node to its parent. As a result, the rows belonging to the Steiner nodes are added twice to their parents, so they do not participate in the sum because rows are added modulo 2. Moreover, every terminal is added in and propagated to the root by the bottom-up traversal. This procedure is also described as pseudo-code in \cref{alg:eliminaterow} in \cref{app:subroutines}.
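The system-solving step is ordinary Gaussian elimination over GF(2); a minimal sketch (ours, assuming an invertible matrix):

```python
import numpy as np

def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination
    (A is assumed to be invertible over GF(2))."""
    A = np.array(A, dtype=int) % 2
    b = np.array(b, dtype=int) % 2
    n = len(b)
    aug = np.concatenate([A, b.reshape(n, 1)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r][col] == 1)
        aug[[col, pivot]] = aug[[pivot, col]]       # swap pivot into place
        for r in range(n):
            if r != col and aug[r][col] == 1:
                aug[r] ^= aug[col]
    return aug[:, n]
```

The 1s in the solution vector mark which rows must be added to the desired row.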
By iteratively picking a non-cutting vertex from the graph, eliminating its column and row, and removing these three from the problem, we have a functional CNOT circuit synthesis strategy that takes the given topology into account. Moreover, the algorithm is semantics preserving, so that the parity matrices for both the original and synthesized circuit are equal.
\subsection{Dynamic qubit allocation with permutation matrices}\label{sec:permutation}
Current CNOT synthesis methods synthesize the parity matrix exactly: the transformation described by the synthesized CNOT circuit is exactly the given parity matrix. Clearly, it makes sense to design an algorithm that preserves the semantics of the parity matrix. However, when CNOT synthesis methods are used to route the CNOTs in a quantum circuit, the parity matrix might be too rigid a representation.
Recall that in the parity matrix, the columns correspond to the parities that need to be created by the quantum circuit. The order of the columns corresponds to the different qubit registers on which those parities are created. That means that if we can arbitrarily change which parities end up in which qubit registers, we can equivalently synthesize a CNOT circuit whose parity matrix has its columns reordered with respect to the original parity matrix. In the case of routing CNOTs, this is exactly possible. Moreover, SWAP-based methods are based on the fact that logical qubits can be moved around to different qubit registers. Crucially, reading the circuit output from different registers is a classical operation and can therefore be considered free in the quantum circuit.
Thus, by synthesizing the exact parity matrix, we are implicitly adding the constraint that each logical qubit needs to end up in the same qubit register it started out in. Since this constraint is not necessary, we are implicitly swapping the logical qubits back to their original registers, adding unnecessary extra CNOT gates. This point is also illustrated in \cref{fig:swapTemplate}, where the synthesized circuit (\cref{fig:swapTemplate}d) might have had fewer CNOTs if we allowed dynamic allocation (\cref{fig:swapTemplate}c), even though it already uses fewer CNOTs than the SWAP strategy with a fixed allocation. Thus, it is better to have an algorithm that is flexible in its qubit allocation. Additionally, if the CNOT synthesis is done as part of a slice-and-build approach where the circuit is cut into pieces (e.g., in \cite{gheorghiu2020reducing}), the cost of keeping the logical qubits in the same qubit registers grows linearly with the number of slices.
Unfortunately, it is not trivial to determine an optimal reallocation a priori.
In fact, this is why the \textit{Steiner-Gauss} results from \cite{kissinger2020cnot} used a genetic algorithm to find a better qubit allocation.
Our novel insight is that a parity matrix that is the identity matrix is essentially an allocation map where logical qubit $i$ is stored on the corresponding qubit register $i$.
Then, reordering the columns of the identity matrix corresponds to reading the logical qubits from different quantum registers that are defined by the new column order. An identity matrix with reordered columns is also called a \textit{permutation matrix}.
Thus, a parity matrix $M$ that is a permutation matrix can be seen as an \textit{allocation map} where logical qubit $i$ is stored in qubit register $j$ iff $M_{i,j}=1$. Therefore, it is sufficient to eliminate a parity matrix to a permutation matrix rather than identity. In the next section, we will give our new algorithm to do this.
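Reading off the allocation map from a permutation parity matrix is straightforward; a small sketch (names ours):

```python
import numpy as np

def allocation_from_permutation(M):
    """Read a permutation parity matrix as an allocation map:
    logical qubit i is stored in register j iff M[i, j] == 1."""
    M = np.array(M, dtype=int)
    assert (M.sum(axis=0) == 1).all() and (M.sum(axis=1) == 1).all(), \
        "not a permutation matrix"
    return {i: int(np.flatnonzero(row)[0]) for i, row in enumerate(M)}
```

Since reading outputs from permuted registers is classical bookkeeping, any such matrix is an acceptable end point for elimination.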
\section{Algorithm}\label{sec:methods}
We propose \textit{PermRowCol}, a Steiner-tree-based algorithm that can dynamically reallocate the logical qubits while routing.
It does this by eliminating the parity matrix to a permutation matrix instead of the identity matrix.
To do this, we build on the \textit{RowCol}~\cite{wu2019optimization} algorithm that we explained in \cref{sec:synthesis}. That algorithm picks a logical qubit and eliminates its row and corresponding column such that they can be removed from the problem. Our adjustment is rather simple: we decouple the row and column to be removed so that they need not overlap at the diagonal.
Specifically, we pick the logical qubit corresponding to the row, and a column to be the new register for that logical qubit. Then, we can eliminate both the chosen column and the chosen row such that they can be removed from the problem.
We give the pseudo-code for our algorithm in \cref{alg:permrowcol}. The algorithm makes use of several subroutines that we give as pseudo-code in \cref{app:subroutines} but we will explain their behaviour here.
\SetKwComment{Comment}{/* }{ */}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\RestyleAlgo{ruled}
\begin{algorithm}
\caption{\textit{PermRowCol}}\label{alg:permrowcol}
\Input{Parity matrix $M$ and topology $G(V_G, E)$ with labels corresponding to the rows of $M$}
\Output{CNOT circuit $C$ and output qubit allocation $P$}
$P \gets [-1\dots -1]$ \Comment*[r]{$|V_G|$ times}
$C \gets $ New empty circuit\;
\While{$|V_G| > 1$}{
\Comment{Find the qubits $Vs$ that can (still) be removed without disconnecting $G$.}
$Vs \gets NonCuttingVertices(G)$\;
$r \gets \textbf{ChooseRow}(Vs, M)$\;
\Comment{Choose qubit register to allocate $r$ to.}
$c \gets \textbf{ChooseColumn}(M, r, [i:i\in [1\dots |P|]$ where $P[i]=-1])$\;
$Nodes \gets [i:i\in V_G$ where $M_{i,c}=1 ]$\;
$C.add(\textbf{EliminateColumn}(M, G, r, Nodes))$\;
$A \gets M$ without row $r$ and without column $c$\;
$B \gets M[r]$ without column $c$\;
$X \gets A^{-1}B$ \Comment*[r]{Find rows to add to eliminate row $r$}
$Nodes \gets [i : i \in V_G$ where $i=r$ or $X[Index(i)]=1]$\;
$C.add(\textbf{EliminateRow}(M, G, r, Nodes))$\;
\Comment{Update the output qubit allocation}
$P[c] \gets r$\;
$G \gets $ subgraph of $G$ with vertex $r$ and connecting edges removed\;
}
\Comment{Update the last output qubit allocation because the loop ends with 1 qubit in $G$}
$i \gets [i:i\in [1\dots |P|]$ where $P[i]=-1]$\Comment*[r]{Find index with -1}
$P[i] \gets V_G[0]$\;
\Return{$C, P$}
\end{algorithm}
We start with a parity matrix $M$ to synthesize over a topology graph $G$ where the labeling of $G$ corresponds to the numbering of rows in $M$.
First, we need to pick the logical qubit that we want to remove from our problem. The only constraint is that it must correspond to a non-cutting vertex. So, we calculate the set of non-cutting vertices of $G$; these vertices correspond to the rows we can choose to eliminate. Then, we choose one of those rows. For our results, we used a simple heuristic: pick the row with the fewest 1s in the parity matrix $M$ (see \cref{alg:chooserow}).
Next, we can choose any of the columns as the qubit register where the logical qubit will be stored after the computation, as long as no qubit has been assigned to it yet. The heuristic we used picks, among the columns with a 1 on the chosen row, the one with the fewest 1s (see \cref{alg:choosecolumn}).
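Both heuristics are simple enough to state directly; a Python sketch of them (mirroring \cref{alg:chooserow} and \cref{alg:choosecolumn}, with names ours):

```python
import numpy as np

def choose_row(candidate_rows, M):
    # Pick the non-cutting row with the fewest 1s in the parity matrix.
    return min(candidate_rows, key=lambda r: int(M[r].sum()))

def choose_column(M, row, free_columns):
    # Among the still-unassigned columns with a 1 on the chosen row,
    # pick the one with the fewest 1s.
    options = [c for c in free_columns if M[row][c] == 1]
    return min(options, key=lambda c: int(M[:, c].sum()))
```

Fewer 1s mean fewer terminals in the Steiner trees built next, which is why sparse rows and columns are preferred.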
Note that we can choose any arbitrary row and column, as long as the chosen row will not disconnect the graph $G$ when removed and the row and column have not been picked before. Thus, we can replace \cref{alg:chooserow} and \ref{alg:choosecolumn} by any other heuristic as long as the non-cutting constraint of the row is satisfied. We discuss the implications of our choice in \cref{sec:conclusion} and a possible improvement in \cref{sec:future}.
With the row and column chosen, we can start to add rows together such that the row and column only have a 1 at their intersection in the matrix $M$, i.e.\ eliminate them. We start with the column and gather the rows with a 1 in that column. Then, we build a Steiner tree with the chosen row as root and the gathered rows as terminals. For each Steiner node in the tree, we add its child's row to it, starting at the leaves. The leaves of the tree correspond to rows in $M$ with a 1 in the chosen column, while, by construction, the Steiner nodes have a 0 there. Thus, this step turns the entries of all Steiner nodes into 1s. Then, we traverse the tree again from the leaves to the root, adding every parent to its child, resulting in a matrix with all 0s in the chosen column except at the chosen row (see \cref{alg:eliminatecolumn}).
Next, we do the same for the chosen row. First, we find which rows need to be added together such that the entire chosen row is filled with 0s except at the chosen column; we do this by solving the system of linear equations defined by $M$. Then, we build a new Steiner tree with the chosen row as root and the found rows as terminals. Again, we traverse the tree twice: once top-down, adding only Steiner nodes to their parents, and once bottom-up, adding all nodes to their parents. Again, this results in adding all gathered rows to the chosen row and thus eliminating the chosen row (see \cref{alg:eliminaterow}).
Lastly, we update the output qubit allocation to keep track of our choices of row and column.
Since the CNOT generation is based on elementary row operations on $M$, we construct the Steiner trees based on the rows of $M$. Once a row in $M$ is an identity row (i.e. only contains a single 1), the row is done and we do not need it anymore. Thus, we can remove the corresponding vertex from the topology and restart the algorithm on the smaller problem, ensuring termination of the algorithm.
\section{Results}\label{sec:results}
We test our algorithm against \textit{Steiner-Gauss} (implementation from \cite{kissinger2020cnot}) and \textit{RowCol} to show the effect of dynamically reallocating qubits during synthesis.
To do this, we used the same test CNOT circuits that were used for \cite{kissinger2020cnot}. These are circuits with $q$ qubits and $d$ CNOTs sampled uniformly at random. The dataset provided by \cite{kissinger2020cnot} contained only 20 circuits per $(q,d)$ pair; we used the same script to add 80 more random circuits, resulting in 100 circuits per $(q,d)$ pair.
Our implementation of these 3 algorithms, the CNOT circuit generation script, unit tests, as well as the dataset of random CNOT circuits that we used can be found on GitHub\footnote{\url{https://github.com/Aerylia/pyzx/tree/rowcol}}.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/9q-square.png}
\caption{Algorithm performance for the $9$-qubit square grid.}
\label{fig:9qsquare}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/16q-square.png}
\caption{Algorithm performance for the $16$-qubit square grid}
\label{fig:16qsquare}
\end{subfigure}
\caption{These figures show the number of CNOTs generated by \textit{Steiner-Gauss}~\cite{kissinger2020cnot} (orange), \textit{RowCol}~\cite{wu2019optimization} (green), and \textit{PermRowCol} (proposed, red) for different fictitious square grid topologies: 9 qubits (\ref{fig:9qsquare}) and 16 qubits (\ref{fig:16qsquare}). The blue $x=y$-line can be used to infer the CNOT overhead.}
\label{fig:faketopogies}
\end{figure}
We plot the number of CNOTs in the routed circuit against the number of CNOTs in the original random circuit.
The blue $x=y$ line helps to show the routing overhead of the algorithms. If a point is above the blue line, then the routed circuit required more CNOTs than the original circuit. This is expected, but we want as few extra CNOTs as possible. Conversely, when the original CNOT circuit had many CNOTs, it is possible for the algorithms to extract fewer CNOTs than in the original circuit. This means that the original circuit contains redundant CNOTs which are optimized away by the algorithms. It happens because after a certain number of CNOTs, the parity matrix representing the circuit becomes a random invertible matrix, and synthesizing a random parity matrix requires a number of CNOTs that depends only on the number of qubits, as discussed in \cite{2008_PMH}.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ibm_qx5.png}
\caption{Algorithm performance for the $16$-qubit IBM QX5.}
\label{fig:qx5}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rigetti_16q_aspen.png}
\caption{Algorithm performance for Rigetti $16$Q Aspen.}
\label{fig:aspen}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ibm_q20_tokyo.png}
\caption{Algorithm performance for the $20$-qubit IBM Tokyo.}
\label{fig:tokyo}
\end{subfigure}
\caption{These figures show the number of CNOTs generated by \textit{Steiner-Gauss}~\cite{kissinger2020cnot} (orange), \textit{RowCol}~\cite{wu2019optimization} (green), and \textit{PermRowCol} (proposed, red) for different real topologies: IBM QX5 (\ref{fig:qx5}), Rigetti Aspen (\ref{fig:aspen}), and IBM Tokyo (\ref{fig:tokyo}). The blue $x=y$-line can be used to infer the CNOT overhead.}
\label{fig:realtopogies}
\end{figure}
Figure \ref{fig:faketopogies} shows the performance of the three algorithms on two fictitious square grid topologies whereas \cref{fig:realtopogies} shows the performance on three real device topologies (the connectivity graphs are shown in \cref{app:topologies} \cref{fig:graphs}). We can clearly see that the proposed \textit{PermRowCol} algorithm outperforms the other algorithms for small numbers of CNOTs. For large number of CNOTs, it depends on the topology whether the proposed algorithm still outperforms the baselines. We will discuss the reason for this in \cref{sec:conclusion}.
For practical purposes, we are mainly interested in the performance for small numbers of CNOTs since realistic quantum circuits do not contain long sequences of CNOTs. In that case, the algorithm shows promise, but it is still incomplete. We are still working on improving it, which is discussed in \cref{sec:future}.
Our latest results will be available on the GitHub repository in the Jupyter Notebook that we used for creating these results\footnote{\url{https://github.com/Aerylia/pyzx/blob/rowcol/demos/PermRowCol\%20results.ipynb}}.
\section{Discussion and conclusion}\label{sec:conclusion}
In this paper, we proposed a new algorithm for synthesising CNOT circuits from a parity matrix which dynamically reallocates the qubits to different qubit registers while respecting the connectivity constraints of a quantum computer.
We have shown that in some but not all cases, the reallocation results in a smaller $\CNOT$ overhead. Hence there are still improvements to make. Firstly, we need to identify why in some cases, \textit{PermRowCol} performs worse than the original \textit{RowCol}.
\textit{RowCol} differs from \textit{PermRowCol} because we pick a different row and column to eliminate, resulting in the reallocation of qubits. If \textit{PermRowCol} produces more $\CNOT$ overhead than that of \textit{RowCol}, it means that we picked a wrong reallocation.
Thus, the simple heuristic that we have used in our algorithm does not perform as expected when synthesizing a random parity matrix.
This makes sense because our heuristic relies on the number of $1$s in each row and column, and in a random parity matrix, the number of $1$s in each row and column is approximately the same. Therefore, with our choice of heuristic, the \textit{PermRowCol} algorithm may eliminate a row and column that simplify the problem to a parity matrix that will later require more $\CNOT$s.
This suspicion is strengthened when comparing the algorithms' performance on synthesizing the same circuits over different topologies. For example, the 16-qubit square grid machine, the IBM QX5 machine, and the Rigetti Aspen machine all consist of 16 qubits, but have distinct connectivity constraints. As shown in \cref{fig:faketopogies,fig:realtopogies}, for large circuits ($\geq 200$ \CNOT s), \textit{PermRowCol} generates the most CNOTs with respect to the $16$-qubit square grid (\cref{app:topologies}(c)), but the least for Rigetti Aspen and IBM QX5 (\cref{app:topologies}). This might be because the topologies of the real devices are sparser than their fictitious counterparts (see \cref{table:analysis}). This means that when removing random non-cutting vertices from the connectivity graph, we will not have many options to choose from for the real devices. Conversely, removing random non-cutting vertices from a square grid restricts the options less, because the vertices are connected through more paths, which makes it less likely to pick the wrong option. However, when synthesizing circuits without any connectivity constraints, \cref{app:unconstrained} and \cref{fig:unrestricted} show otherwise. We suspect that this is because picking the wrong qubit allocation is less detrimental when CNOTs are allowed between arbitrary qubits.
\begin{table}[h!]
\begin{tabular}{l|ll|l|l}
\hline
Qubit Count & \multicolumn{2}{c|}{Topology} & Average Graph Distance & Average Degree \\ \hline
\multirow{4}{*}{16} & \multicolumn{1}{l|}{\multirow{2}{*}{Fictitious devices}} & 16Q-Square & $2.5$ & $4$ \\
& \multicolumn{1}{l|}{} & 16Q-Fully Connected & $1$ & $15$ \\ \cline{2-5}
& \multicolumn{1}{l|}{\multirow{2}{*}{Real devices}} & Rigetti 16Q-Aspen & $3.25$ & $2.25$ \\
& \multicolumn{1}{l|}{} & IBM QX5 & $3.125$ & $2.75$ \\ \hline
\end{tabular}
\caption{The average graph distance and average vertex degree of topologies in \cref{fig:graphs}. It shows that the topologies of real devices (i.e., Rigetti 16Q-Aspen and IBM QX5) have greater average graph distance and smaller average vertex degree than those of fictitious devices (i.e., 16Q-Square and 16Q-Fully Connected). Hence, the topologies of real devices are sparser than their fictitious counterparts.}
\label{table:analysis}
\end{table}
In conclusion, we need to find a better method for deciding the dynamic allocation than the heuristic we have used for this paper. This is also why we have not yet compared the performance of \textit{PermRowCol} against established quantum compilers such as Qiskit, TKET, and SABRE.
We will discuss possible directions for improvements in the next section.
\section{Future work}\label{sec:future}
The \textit{PermRowCol} algorithm shows promise, but it does not yet reliably improve upon existing methods. To further optimize our synthesis method, we could either directly improve \textit{PermRowCol} (\cref{sec:improvements}) or use \textit{PermRowCol} within other algorithms such that it can be used to synthesize arbitrary quantum circuits (\cref{sec:extension}).
\subsection{Possible improvements of the PermRowCol algorithm}\label{sec:improvements}
First of all, as discussed in \cref{sec:conclusion}, we need to find a better heuristic for choosing the row and column to eliminate.
Unfortunately, it is quite difficult to come up with such a heuristic.
Luckily, the structure of \textit{PermRowCol} allows us to implement a heuristic based on the A* algorithm~\cite{russell2002artificial}.
It will increase the computation time of the algorithm, but if that means we can reduce the number of CNOTs to a point where a quantum circuit can be executed with less error or before the state decoheres, it is worth it.
Secondly, we can try to find a better initial qubit allocation for our algorithm. We can do this by adding the Reverse Traversal Strategy (RTS) from SABRE to our algorithm.
This method was previously unnecessary for Steiner-tree-based approaches because those methods keep the qubits in the same registers, so iteratively improving the qubit placement would not do anything. However, with \textit{PermRowCol}, we can dynamically reallocate our qubits, so we can use the RTS. The idea behind RTS is quite simple: quantum circuits are reversible, so if we have a circuit with an arbitrary initial qubit allocation and we route it such that the output qubit allocation differs from the initial one, we can run our routing algorithm on the reversed circuit with the output qubit allocation as its initial allocation and possibly find a new, better initial qubit allocation for the original circuit. We can iteratively go back and forth to find the best initial allocation, routed circuit, and output allocation. Our plan is to add this strategy to \textit{PermRowCol} such that the initial qubit allocation is also optimized, hopefully resulting in fewer CNOTs.
Something that we are not currently working on, but which can still be an interesting research direction, is to take the different gate error rates into account. When building our Steiner trees, we can easily weight the edges by the quality of the gates that they represent. This might result in some interesting error-mitigation techniques. Moreover, due to the dynamic allocation in \textit{PermRowCol}, we might also be able to redesign the algorithm such that it takes into account the quality of the single-qubit gates that are executed after the CNOT circuit. This might result in certain qubits not being allocated to certain registers. Such selection rules can easily be added to \textit{PermRowCol} by changing the heuristic for choosing which column to eliminate.
Additionally, \textit{PermRowCol} can still be improved by adding a blockwise elimination method~\cite{2008_PMH}. In fact, the paper that introduced \textit{RowCol} also introduced \textit{size-block elimination} (SBE), which uses a similar strategy but with Steiner-trees and Gray codes. Eliminating blocks of rows and columns might not be very efficient on sparse graphs, but it might be interesting as an alternative for unconstrained Gaussian elimination tasks where a permutation matrix is also a valid solution (e.g.\ ZX-diagram extraction~\cite{backens2021there}).
\subsection{Extensions to arbitrary quantum circuits}\label{sec:extension}
It is clear that CNOTs are not the only gates in arbitrary quantum circuits, so a good research direction is to extend the \textit{PermRowCol} algorithm such that we can route and allocate any quantum circuit. There are two approaches we could take: (1) synthesize the full circuit from a flexible representation, or (2) cut the circuit into pieces that we can synthesize separately and glue those pieces back together. We will discuss these strategies by building up from our \textit{PermRowCol} algorithm to more general quantum circuits until we end up with the class of all quantum circuits.
Our proposed algorithm can only synthesize circuits consisting purely of CNOTs. We can extend this to the class of circuits built from CNOTs and $R_z$ rotations. This class is also called \textit{phase polynomials}; such circuits are described by the set of parities at which each $R_z$ occurs, together with a parity matrix describing the output parities of the quantum circuit. This is also called the \textit{sum-over-paths} notation~\cite{amy2018controlled}. The key here is that various Steiner-tree-based methods have been proposed for synthesizing the parities for each $R_z$ gate~\cite{nash2020quantum,meijer-vandegriend2020architecture,vandaele2022phase}. The remaining parity matrix can then be synthesized by \textit{PermRowCol}. However, phase polynomials can be simulated efficiently on classical computers~\cite{amy2018controlled}. To extend phase polynomials to arbitrary quantum circuits, we need to add the $H$ gate, which these methods cannot synthesize. Nevertheless, we can cut our circuit into pieces at the $H$-gates, synthesize the phase polynomials between them, and glue all the pieces back together. Here is where \textit{PermRowCol} can make a difference, because the dynamic reallocation of the qubits allows this algorithm to move the $H$-gates to different quantum registers.
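The cut-and-glue procedure can be sketched as follows; here a circuit is simply a list of \texttt{(name, qubits)} tuples, and \texttt{synthesize\_phase\_poly} is a hypothetical stand-in for any of the cited phase-polynomial synthesis routines:

```python
def cut_at_hadamards(circuit):
    """Split a gate list into phase-polynomial pieces separated by H gates."""
    pieces, current = [], []
    for gate in circuit:
        if gate[0] == "H":
            pieces.append(current)  # CNOT/Rz piece to resynthesize
            pieces.append([gate])   # the H gate itself, kept as-is
            current = []
        else:
            current.append(gate)
    pieces.append(current)
    return pieces

def resynthesize(circuit, synthesize_phase_poly):
    """Resynthesize each CNOT/Rz piece and glue everything back together."""
    out = []
    for piece in cut_at_hadamards(circuit):
        if piece and piece[0][0] == "H":
            out.extend(piece)
        else:
            out.extend(synthesize_phase_poly(piece))
    return out
```

With a dynamically reallocating synthesizer such as \textit{PermRowCol}, the glue step would additionally have to track the qubit permutation of each piece so that the $H$-gates land on the correct registers; that bookkeeping is omitted in this sketch.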
Similarly, we can replace the \textit{Steiner-Gauss} algorithm in Ref.~\cite{gheorghiu2020reducing} for synthesizing CNOT, $R_z$, and NOT gates. Including the NOT gate in the synthesis procedure allows the phase polynomials before and after the NOT gate to be synthesized in unison, rather than synthesizing the first phase polynomial, placing the $NOT=H R_z(\pi) H$, and then synthesizing the latter. Ref.~\cite{gheorghiu2020reducing} also proposes a method of generalizing to arbitrary quantum circuits called \textit{slice-and-build}, which works similarly to the cutting procedure described above for extending phase polynomials.
Next, we can extend the concept of phase polynomials to \textit{Pauli exponentials}~\cite{cowtan2020generic}, which can be considered a generalized version of the sum-over-paths notation. One could think of it as follows: instead of keeping track of a binary parity at which an $R_z$ gate occurs, it keeps track of \textit{Pauli-strings} that describe the parity at which a gate happens in terms of the Paulis $I, X, Y, Z$ rather than in binary. Note that this is an oversimplification. However, if a Pauli exponential only contains Pauli-strings with $I$ and $Z$, we have a phase polynomial.
Quantum circuits can be extracted from Pauli exponentials with an algorithm called \textit{PauliSynth}~\cite{cowtan2020generic}.
An architecture-aware version of \textit{PauliSynth} does not yet exist. However, the Pauli-exponential procedure uses the synthesis of phase polynomials as a subroutine, so it is worthwhile to research a method for making that algorithm architecture-aware, possibly with Steiner-trees. Moreover, since \textit{PauliSynth} uses Gaussian elimination, it can be worthwhile to replace the latter with \textit{PermRowCol} even for fully connected topologies (see \cref{app:unconstrained}, \cref{fig:unrestricted}). Although \textit{PauliSynth} was created for synthesizing particular quantum chemistry circuits, it can be argued that Pauli exponentials should be an important primitive for quantum computation~\cite{li2022paulihedral}. Either way, the restriction of \textit{PauliSynth} is that the Pauli-strings in the Pauli exponential need to commute, which is not always the case for arbitrary quantum circuits. Nevertheless, we can simply cut our circuit into sequences of commuting Pauli-strings and Clifford gates and run \textit{PauliSynth} multiple times.
Lastly, we can use \textit{PermRowCol} in the extraction of ZX-diagrams~\cite{backens2021there}. The ZX-calculus is complete for quantum computation, so we can write any quantum circuit as a ZX-diagram. Then, we can bring the diagram into a normal form from which to extract an optimized circuit. The extraction procedure uses Gaussian elimination, which can be replaced by \textit{PermRowCol}. Even though the extraction procedure does not take any topology into account, it can still be worthwhile to use \textit{PermRowCol}.
In the ZX-calculus, only connectivity matters, so reordering the outputs of the extracted circuit is equivalent to bending wires and therefore free. Thus, it is better to end up with crossing wires than to extract CNOT gates. In \cref{app:unconstrained}, we show that \textit{PermRowCol} can result in fewer CNOTs than Gaussian elimination and \textit{RowCol}.
Therefore, \textit{PermRowCol} is an algorithm with the potential to improve many quantum circuits.
\subsection*{Acknowledgements}
The authors would like to thank Jukka K. Nurminen, Douwe van Gijn, and Dustin Meijer for proof-reading.
\bibliographystyle{plain}
\section{Introduction}
A significant part of materials science is devoted to the problem of finding the electronic structure of a given material. As a result, numerous computational techniques have been developed to study this problem. These techniques can roughly be classified into two kinds: \emph{First-principles} methods solve the problem using the fundamental physical principles and properties of atoms comprising the material. For weakly-interacting systems, density functional theory (DFT)~\cite{hohenberg_kohn_dft} is the dominant (mean field) technique for solving the electronic structure problem from first principles.
In contrast, \emph{empirical} methods aim to capture the relevant physical properties using a simplified model. Such models are usually matched to known properties of the material, which can be obtained from either experiments or first-principles calculations. An example of such an empirical method is given by the tight-binding approximation, which describes a material as a set of localized orbitals and predefined electron hopping terms between them. While the first-principles methods typically have superior accuracy, empirical methods are often used due to their lower computational cost. In particular, calculations of complex device geometries are often inaccessible to a direct first-principles study. As such, the construction of reliable empirical models is of significant importance. The technique of creating Wannier tight-binding models~\cite{marzariMaximallyLocalizedGeneralized1997,souzaMaximallyLocalizedWannier2001} from first-principles calculations is arguably one of the most popular tools in present-day computational materials science. The use of Wannier tight-binding models allows one to combine the simplicity of empirical methods with the correct wave-function properties obtained from first principles.
In recent years, \emph{high-throughput} techniques have made a profound impact in various fields of materials science~\cite{franceschettiInverseBandstructureProblem1999,johannessonCombinedElectronicStructure2002,curtaroloPredictingCrystalStructures2003,curtaroloHighthroughputHighwayComputational2013}. While the domain eludes a strict definition, a common feature of such techniques is that computational tools are applied to a wide range of candidate materials, or variations of a given material, in search of some beneficial property. Existing codes and techniques are combined and applied on a scale that was not previously possible. A range of automated frameworks~\cite{aiida, jainFireWorksDynamicWorkflow2015} support this by facilitating the combination of separate calculations into logical workflows. The challenge in designing such a high-throughput workflow is to make it resilient to varying input parameters. Since the number of calculations performed is too large to be human-controlled, many decisions -- for example which calculation to perform based on the output of a previous calculation -- need to be encoded into the automated workflow.
In this paper, we introduce steps for addressing two well-known problems in using Wannier90~\cite{wannier90, wannier90_updated} in combination with \textit{ab initio} software to construct tight-binding models: the absence, in the obtained tight-binding model, of symmetries present in the original compound, and the necessity of searching for optimal inner and outer energy windows for the projection of the first-principles energy bands.
We do not, however, treat the issue of selecting the initial projections used by Wannier90.
As such, we create automated workflows which are applicable to large classes of materials with a similar orbital character of the bands of interest. However, these workflows are not yet applicable to high-throughput scenarios, in the sense that they cannot trivially be applied to arbitrary compounds.
Nevertheless, the presented workflows are written in a way that they could be combined with efforts to address the problem of selecting initial projections~\cite{mustafaAutomatedConstructionMaximally2015}.
In Sec.~\ref{sec:wannier_tb}, we review the general process of calculating the Wannier tight-binding models by means of Wannier90 and explain the proposed and implemented symmetrization and automatic energy window choice procedures. Sec.~\ref{sec:implementation} describes how these procedures are used for the development of an automated workflow using the {AiiDA}~\cite{aiida} framework. While this workflow automates the tight-binding calculation itself, there are still some tunable parameters which might be eliminated by a more sophisticated system. By using a modular design approach, we provide an extensible framework for implementing such improvements. In the final section, we illustrate the application of this workflow to calculate tight-binding models for strained III-V semiconductor materials. These are useful in the pursuit of Majorana devices~\cite{kitaevUnpairedMajoranaFermions2001,lutchynMajoranaFermionsTopological2010,oregHelicalLiquidsMajorana2010}, enabling the study of transport properties for different topological devices with III-V quantum wells, where strains play an important role in the topological transition.
\section{Construction of Wannier-like tight-binding models}
\label{sec:wannier_tb}
In this section, we describe the process of generating \textit{symmetrized} Wannier-like tight-binding (SWTB) models. First, we give a short description of the method for creating Wannier tight-binding models (WTB) as introduced in the works of Refs.~\cite{marzariMaximallyLocalizedGeneralized1997,souzaMaximallyLocalizedWannier2001} and implemented in the Wannier90~\cite{wannier90,wannier90_updated} software package. Next, we describe a method for symmetrizing these WTBs in a post-processing step. Finally, we describe a scheme to enhance the band-structure accuracy by \textit{optimizing the energy windows} used by Wannier90.
\subsection{Wannier tight-binding construction}
Tight-binding models represent a common way to describe crystalline systems in a computationally cheap way. The material is described as a system of localized orbitals with positions ${\vec{t}}_i$ in the unit cell, and hopping terms $H^{ij}[{\vec{R}}]$ between the $j$-th orbital in the unit cell at location ${\vec{R}}$ and the $i$-th orbital in the home unit cell ${\vec{R}}={\bf 0}$. From these parameters, the matrix Hamiltonian can be written as~\footnote{In this work, we use the tight-binding convention I of Ref.~\cite{pythtb_formalism}.}
\begin{equation}\label{eqn:tight_binding}
\mathcal{H}^{ij}({\vec{k}}) = \sum_{\vec{R}} H^{ij}[{\vec{R}}]e^{i {\vec{k}}.({\vec{R}} + {\vec{t}}_j - {\vec{t}}_i)}.
\end{equation}
For the case of spinful systems, we choose the indices $i, j$ to include the spin index for simplicity.
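To make the convention explicit, the following sketch assembles $\mathcal{H}(\vec{k})$ of \cref{eqn:tight_binding} from hopping matrices stored per lattice vector; it assumes that $\vec{k}$, $\vec{R}$, and the orbital positions are all given in reduced coordinates, so dot products carry a factor of $2\pi$:

```python
import numpy as np

def bloch_hamiltonian(hoppings, positions, k):
    """H(k) in tight-binding convention I.

    hoppings:  dict mapping lattice vectors R (tuples of ints)
               to (n, n) hopping matrices H[R]
    positions: (n, dim) array of orbital positions t_i (reduced coordinates)
    k:         k-point in reduced coordinates
    """
    k = np.asarray(k, dtype=float)
    pos = np.asarray(positions, dtype=float)
    n = pos.shape[0]
    H = np.zeros((n, n), dtype=complex)
    for R, hop in hoppings.items():
        # phase exp(i k.R), shared by all matrix elements of H[R]
        H += np.asarray(hop, dtype=complex) * np.exp(
            2j * np.pi * k @ np.asarray(R, dtype=float)
        )
    # element-wise phase exp(i k.(t_j - t_i)) from the orbital positions
    orb_phase = np.exp(2j * np.pi * pos @ k)
    return H * orb_phase[np.newaxis, :] * orb_phase.conj()[:, np.newaxis]
```

For Hermitian results, the stored hoppings must satisfy $H[-\vec{R}] = H[\vec{R}]^\dagger$, as in any tight-binding model.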
The Wannier tight-binding (WTB) method utilizes localized Wannier functions as basis orbitals to capture the compound's physics. These basis Wannier functions are obtained from first-principles simulations. This procedure is based on the work of Refs.~\cite{marzariMaximallyLocalizedGeneralized1997,souzaMaximallyLocalizedWannier2001} and implemented in the Wannier90~\cite{wannier90,wannier90_updated} code. After obtaining the necessary Wannier90 input files from a first-principles calculation, two steps are performed to construct these Wannier functions:
In a first step, the Bloch wave-functions $\ket{\psi_{n, {\vec{k}}}}$ calculated by the first-principles code are \emph{disentangled} to obtain $M$ wave-functions, where $M$ is the target number of basis Wannier functions in WTB. For selecting the Bloch wave-functions which are involved in this procedure, one needs to choose an \emph{outer} energy window. Optionally, an \emph{inner} energy window can be chosen. States inside this inner window will be preserved by the disentanglement. An optimization routine is performed to select the $M$ states such that the ``change of character'' $\Omega_\text{I}$ (defined in Ref.~\cite{souzaMaximallyLocalizedWannier2001}) is minimized. As an initial guess for this optimization procedure, $M$ localized trial orbitals $\ket{g_m}$ are used. Because the disentanglement procedure needs to discard some states, it usually changes both the symmetry and the energy bands of the model in comparison with first-principles results. Consequently, choosing good values for both the energy windows and the trial orbitals has a strong effect on the quality of the resulting model.
As a second (optional) step, another optimization is performed to find a unitary transformation such that the resulting Wannier functions are maximally localized~\cite{marzariMaximallyLocalizedGeneralized1997}. Again, the trial orbitals $\ket{g_m}$ are used to create an initial guess for this optimization. Typically, these orbitals are chosen to be those chemical atomic orbitals that contribute most to the bands of interest. A method for constructing Wannier orbitals without the need for such a guess is described in Ref.~\cite{mustafaAutomatedConstructionMaximally2015}.
\subsection{Symmetrization}
An important feature of tight-binding models, especially for studying topological effects, is that they preserve certain crystal symmetries. For a given symmetry group $G$, the symmetry constraint on the Hamiltonian matrix is given by~\cite{dresselhausGroupTheoryApplication2007}
\begin{equation}\label{eqn:symmetry_constraint}
\forall g \in G: \mathcal{H}(\vec{k}) = D^{\vec{k}}(g) \mathcal{H}(g^{-1}\vec{k}) D^{\vec{k}}(g^{-1}),
\end{equation}
where $D^{\vec{k}}(g)$ is the ${\vec{k}}$-dependent representation of the symmetry $g$ from the group $G$. We define the ${\vec{k}}$ - \emph{independent} part $D(g)$ of the representation as
\begin{equation}
D^{\vec{k}}(g) = e^{i \boldsymbol{\alpha}_g.{\vec{k}}} D(g),
\end{equation}
where $\boldsymbol{\alpha}_g$ is the translation vector of the symmetry.
For a Hamiltonian which does not fulfill these symmetry constraints, we define the \emph{symmetrized} Hamiltonian as the group average
\begin{equation}\label{eqn:symmetrized_hamiltonian}
\tilde{\mathcal{H}}(\vec{k}) = \frac{1}{|G|} \sum_{g \in G} D^{\vec{k}}(g) \mathcal{H}(g^{-1}\vec{k}) D^{\vec{k}}(g^{-1}).
\end{equation}
This procedure projects the Hamiltonian onto the symmetric subspace, meaning that the modified Hamiltonian respects \cref{eqn:symmetry_constraint}, as shown in \cref{appendix:symmetrized_tb}. Furthermore, if the original Hamiltonian is already symmetric, the original and symmetrized Hamiltonians are identical. Since this construction does not explicitly construct the corresponding Wannier functions, we term these models symmetrized \emph{Wannier-like} tight-binding models (SWTB).
It is important to note that the eigenstates and eigenvalues of the symmetrized Hamiltonian may differ significantly from those of the non-symmetrized Hamiltonian. In fact, for an anti-symmetric initial Hamiltonian, meaning that
\begin{equation}
D^{\vec{k}}(g)\mathcal{H}(g^{-1}{\vec{k}})D^{\vec{k}}(g^{-1}) = - \mathcal{H}({\vec{k}})
\end{equation}
for some symmetry $g$, the symmetrized result vanishes completely. However, given a Hamiltonian which \emph{almost} respects the symmetry, this technique can effectively eliminate small symmetry-breaking terms.
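For unitary symmetries, the group average of \cref{eqn:symmetrized_hamiltonian} at a single $\vec{k}$-point amounts to the following sketch; here \texttt{hamiltonian} is a callable $\vec{k} \mapsto \mathcal{H}(\vec{k})$, and each symmetry is given by its representation matrix $D^{\vec{k}}(g)$ together with a function implementing $\vec{k} \mapsto g^{-1}\vec{k}$ (a deliberate simplification of the real-space implementation used below):

```python
import numpy as np

def symmetrize_hamiltonian(hamiltonian, symmetries, k):
    """Group average of H(k) over a list of unitary symmetries.

    hamiltonian: callable mapping k to an (n, n) array
    symmetries:  list of (D, k_map) pairs, where D is the representation
                 matrix D^k(g) and k_map implements k -> g^{-1} k
    """
    terms = [
        D @ hamiltonian(k_map(k)) @ np.linalg.inv(D)
        for D, k_map in symmetries
    ]
    return sum(terms) / len(symmetries)
```

Applying the average twice gives the same result, reflecting that the construction is a projection onto the symmetric subspace.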
In the context of tight-binding models, this symmetrization technique can only straightforwardly be applied when the underlying basis set is symmetric. If the tight-binding basis contains an orbital $\ket{\alpha}$ centered around the position $\vec{r}$, it must also contain $g\ket{\alpha}$ centered around $g \vec{r}$ for all symmetries $g \in G$. For example, if the model for a material which has $C_4^x$ symmetry contains a $p_x$ orbital at the origin, it must also contain a $p_y$ orbital at the origin.
For Wannier tight-binding models, this means that the technique can generally only be applied when the step of maximally localizing the Wannier functions is omitted, and pre-defined atomic orbitals are used.
When this condition is met however, the method can be applied for both unitary and anti-unitary symmetries, as well as non-symmorphic symmetry groups.
To apply the group average to tight-binding models, it is convenient to rewrite Eq.~\ref{eqn:symmetrized_hamiltonian} directly in terms of the hopping matrices $H[\vec{R}]$ (see App.~\ref{app:symmetrized_H_in_real_space} for derivation):
\begin{equation}
\tilde{H}^{ij}[{\vec{R}}] = \frac{1}{|G|} \sum_{\substack{g \in G \\{l,m}}} D_{il}(g) H^{lm}[S_g^{-1}({\vec{R}} - \vec{T}_{ij}^{ml})] D_{mj}(g^{-1}),
\end{equation}
where $S_g$ is the real-space rotation matrix of the symmetry $g$, $\vec{T}_{ij}^{ml} = S_g({\vec{t}}_m - {\vec{t}}_l) - {\vec{t}}_j - {\vec{t}}_i$, and the indices $m, l$ only go over values for which $\vec{T}_{ij}^{ml}$ is a lattice vector. Note that we use the ${\vec{k}}$ - independent part $D(g)$ of the representation here.
Fig.~\ref{fig:symmetrization} shows the results of this symmetrization procedure on a tight-binding model for bulk silicon in the diamond cubic crystal structure, with atom-centered $sp^3$ orbitals. The initial model already approximately fulfills the symmetry condition, which is reflected in the fact that the bandstructure does not change on the electronvolt scale. However, at the sub-millielectronvolt scale the band degeneracies are lifted in the original model, but restored after the symmetrization procedure. Since the symmetry group of the diamond cubic structure $\mathrm{Fd}\bar{3}\mathrm{m}$ (no. $227$) is non-symmorphic, this example demonstrates that the symmetrization technique is also capable of enforcing such symmetries. In panel b of \cref{fig:symmetrization}, we compare the symmetrization using the full symmetry group to a partial symmetrization enforcing only the symmorphic subgroup. Adding non-symmorphic symmetries enforces the four-fold degeneracy at the $\mathrm{X}$ point and the two-fold degeneracy on the $\mathrm{X} - \mathrm{U}$ line, whereas symmorphic symmetries only enforce a two-fold degeneracy on the $\Gamma - \mathrm{X}$ line.
\begin{figure*}\centering
\includegraphics[width=\textwidth]{symmetrize_nonsymmorphic}
\caption{(Color online) Comparison of the initial (blue) and symmetrized (orange) bandstructure for a tight-binding model of silicon with atom-centered $sp^3$ orbitals. (a) In the $\mathrm{eV}$ scale, there are no visible differences between the two models. (b) A zoom in around the $X$ point on the $\mathrm{meV}$ scale reveals a slight lifting of the band degeneracies in the initial model. This incorrectness is resolved in the symmetrized model. For comparison, a symmetrized bandstructure taking into account only symmorphic symmetries (green) is also shown.}
\label{fig:symmetrization}
\end{figure*}
To determine the matrix representations $D(g)$, we use the fact that Wannier90 allows one to manually choose the trial orbitals $\ket{g_m}$. As a result, the basis after the disentanglement procedure corresponds to the chosen orbitals, up to some numerical error. Since the behavior of the basis orbitals under symmetries is known, $D(g)$ can be determined in this way. For the treatment of spin, we use the rotation matrices as given in Ref.~\cite{haberThreeDimensionalProperImproper2011}. The action of time-reversal on the spin basis $\{\ket{\uparrow}, \ket{\downarrow} \}$ is given by $\sigma_y \hat{K}$, where $\hat{K}$ represents complex conjugation.
An automated method for generating the representation matrices for given atomic orbitals is available in the \texttt{symmetry-representation} package.
Importantly, we use Wannier90 \textit{without} performing the maximal localization step, as is the case in the illustrated application of Sec.~\ref{sec:strain_models}; this allows us to preserve the orbital basis. Alternatively, one could use the basis transformation matrices $U^{({\vec{k}})}$ provided by Wannier90~\cite{wannier90} to transform $D(g)$ into the maximally-localized basis. While this approach produces computationally cheaper localized models, the drawback is that the basis is different for each produced tight-binding model. As a result, comparing models is more difficult. Also, linear interpolation between models, as described in \cref{sec:strain_interpolation}, would require a change of basis.
Another approach to obtaining symmetric tight-binding models is to use the site-symmetry mode implemented in Wannier90~\cite{sakumaSymmetryadaptedWannierFunctions2013}. However, this method is limited to symmetries which leave a given real-space coordinate invariant (site symmetries), and does not include time-reversal. The method presented here has no such limitation, but is instead limited to models which have a symmetric set of basis functions as described above. The site-symmetry mode also relies on obtaining the symmetry information from the first-principles code, which is currently implemented only for Quantum Espresso~\cite{espresso,espresso_updated}. The workflow described in Sec.~\ref{sec:aiida_workflows} could be adapted to allow using this approach with only minimal changes.
\subsection{Optimization for bandstructure fit}
As described above, an important parameter in running Wannier90 is the choice of the so-called energy windows~\cite{wannier90}. There are two such windows: The \emph{outer} window determines which states are taken into account for the disentanglement procedure. At every ${\vec{k}}$-point, it must contain at least $M$ bands, where $M$ is the desired number of bands in the tight-binding model. The \emph{inner} (or frozen) window on the other hand determines which states should not be modified during disentanglement. It can contain at most $M$ bands at any given ${\vec{k}}$.
Since the quality of the resulting tight-binding model depends sensitively on the choice of energy windows, a strategy for reliably choosing good windows is required.
A straightforward way of achieving this is to iteratively optimize the window values. Having constructed and symmetrized a tight-binding model, its quality can be determined by comparing its bandstructure to a reference computed directly from first principles\footnote{Because the first-principles calculation usually contains more than $M$ bands, we need to choose which bands should be represented by the tight-binding model.}. As a measure of their mismatch, we choose the average difference between the energy eigenvalues
\begin{equation}
\Delta = \frac{1}{M}\frac{1}{N_{\vec{k}}} \sum_{i=1}^{M}\sum_{\vec{k}} \left| \varepsilon_{i, {\vec{k}}}^\text{DFT} - \varepsilon_{i, {\vec{k}}}^\text{TB} \right|.
\label{eqn:average_band_difference}
\end{equation}
Some values of the energy windows cannot produce a tight-binding model, for example if the outer window contains less than $M$ bands. As a result, finding appropriate energy windows is a constrained, four-dimensional optimization problem. The Nelder-Mead (downhill simplex) algorithm~\cite{nelderSimplexMethodFunction1965} can be used to solve this problem~\footnote{The constraint is implemented by assigning an infinite value of $\Delta$ to invalid energy windows.}.
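Schematically, the window search can be set up as follows; \texttt{run\_wannier90} is a hypothetical helper that constructs a tight-binding model for given window values and returns its band energies, with invalid windows signaled by an exception and mapped to an infinite $\Delta$:

```python
import numpy as np
from scipy.optimize import minimize

def band_difference(eps_dft, eps_tb):
    """Average band mismatch (the quantity Delta defined above)."""
    return np.mean(np.abs(np.asarray(eps_dft) - np.asarray(eps_tb)))

def optimize_windows(initial_windows, eps_dft, run_wannier90):
    """Nelder-Mead search over the energy window values.

    initial_windows: starting guess, e.g. (dis_min, dis_max, froz_min, froz_max)
    run_wannier90:   callable returning band energies; raises ValueError
                     for window values that cannot produce a model
    """
    def cost(windows):
        try:
            eps_tb = run_wannier90(windows)
        except ValueError:
            return np.inf  # constraint: invalid windows get infinite Delta
        return band_difference(eps_dft, eps_tb)

    result = minimize(cost, x0=initial_windows, method="Nelder-Mead")
    return result.x, result.fun
```

The names \texttt{run\_wannier90} and the exact window ordering are illustrative assumptions; in the actual workflow each cost evaluation corresponds to a full Wannier90 run followed by the model evaluation described in \cref{sec:implementation}.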
Fig.~\ref{fig:optimize_energy_window} shows the result of such an optimization procedure for unstrained InSb, as described in \cref{sec:strain_models}. A clear improvement is visible between the tight-binding model obtained with the initial windows chosen by hand (panel a), and the optimized window values (panel b). In particular, the conduction bands at the $X$ and $Z$ points are represented more accurately in the optimized model. Since the given bands for InSb are not entangled, it is also possible to skip the disentanglement step completely by using the \texttt{exclude\_bands} parameter of Wannier90 to ignore all other energy bands. The resulting bandstructure is shown in \cref{fig:optimize_energy_window}(c). Nevertheless, we find that the bandstructure using optimized disentanglement is slightly better ($\Delta=0.0327$) than the one without disentanglement ($\Delta=0.0375$), especially for the four lowest conduction bands on the $\mathrm{Z}$ - $\Gamma$ - $\mathrm{X}$ line.
\begin{figure*}\centering
\includegraphics[width=\textwidth]{optimize.pdf}
\caption{(Color online) Comparison between the reference first-principles bandstructure (blue) and bandstructures calculated from tight-binding models (orange) for InSb. The tight-binding model in (a) was calculated with the initial energy window, whereas (b) shows the model using the optimized energy window as detailed in \cref{tab:energy_window_opt}. The model in (c) was calculated without the disentanglement procedure, using the \texttt{exclude\_bands} parameter.}
\label{fig:optimize_energy_window}
\end{figure*}
Hence, it can be useful to apply the disentanglement procedure and energy window optimization even in cases where the bands are not inherently entangled, especially when the time required to run the tight-binding calculation is short compared to the initial first-principles calculation.
\section{Implementation in A\lowercase{ii}DA workflows}\label{sec:aiida_workflows}
\label{sec:implementation}
The AiiDA \cite{aiida} platform is a Python framework for performing high-throughput calculations, focused on the field of materials physics. It enables reproducible research by keeping track of inputs, outputs and settings for each calculation. On top of this provenance layer, it provides a toolset for automatically chaining calculations into user-defined workflows.
In this section, we describe the implementation of the Wannier tight-binding extraction scheme as an AiiDA workflow. This automation enables the application to the study of strain effects (described in \cref{sec:strain_models}). Special care has been taken to design the workflow in a modular way, which enables re-using parts of the workflow for purposes other than tight-binding extraction. We first discuss these design principles, before showing how they are applied in the tight-binding workflows.
The code for the AiiDA workflows is available in the open-source \texttt{aiida-tbextraction} package, and provided as supplementary material.
\subsection{Modular workflow design}
The basic principle of modular workflow design is to split up a single monolithic workflow into minimal sub-workflows or calculations that perform exactly one task. For example, the tight-binding model created by Wannier90 is post-processed by parsing it to an HDF5 format, followed by optionally changing the order of the basis and symmetrizing the model. While this could easily be implemented in a single script, splitting these three steps into separate calculations allows each step to be re-used separately.
More complex workflows are created by combining multiple sub-workflows into a logical unit at a higher abstraction level. Inputs to the sub-workflow are either forwarded directly from the input to the parent workflow or created within the parent workflow. Similarly, outputs from the sub-workflow can either be forwarded to be an output of the parent workflow or consumed directly to guide the further execution of the parent workflow.
Since a complex workflow can consist of multiple layers of wrapped sub-workflows, this modular approach is maintainable only if the overhead of forwarding input and output is minimal. Following the single responsibility principle, a parent workflow should not have to change if an input or output parameter of a sub-workflow changes, unless it directly interacts with this parameter. To achieve this, a syntax is needed to specify that a parent workflow will \emph{inherit} inputs or outputs of a sub-workflow, without explicitly listing each parameter.
In AiiDA, such a feature is available in the newly-introduced \emph{expose} functionality, as described in \cref{app:expose}.
The modular architecture improves not only the re-usability, but also the flexibility of workflows. Often, a given part of a workflow could be performed in different ways. For example, many different codes can perform the first-principles calculations in the tight-binding extraction workflows. Additionally, one might want to add steps such as relaxation or cut-off energy convergence.
To allow for this, the parent workflow can allow for dynamically selecting a workflow for performing a given task by passing it as an input~\footnote{For storing the workflow in the AiiDA database, it needs to be converted into an AiiDA data type. We chose to convert it into a string containing the fully qualified class name, from which we import the workflow when needed.}. An abstract workflow class defines the interface that a workflow must fulfill so that it can be used to perform the task. If needed, the parent workflow can allow for dynamic inputs, which are just forwarded to the specific workflow implementing the interface. In this way, the parent workflow can act as a template that defines an abstract series of steps, without knowledge of the detailed input flags available on each step.
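The dynamic selection mechanism mentioned in the footnote can be sketched as follows (a simplified stand-in for the actual AiiDA machinery; the stored string is the fully qualified class name):

```python
import importlib

def load_workflow_class(fully_qualified_name):
    """Import a workflow class from its stored fully qualified name,
    e.g. the hypothetical "mypackage.workflows.FirstPrinciplesRun"."""
    module_name, _, class_name = fully_qualified_name.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```

The parent workflow can then instantiate the loaded class and forward any dynamic inputs to it without knowing its concrete type.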
\subsection{Tight-binding extraction workflow}\label{sec:tb_workflow}
Having discussed the design principles for modular workflows, we now show how these are applied to create a workflow for the construction of tight-binding models.
This workflow is implemented in the \texttt{Optimize\-First\-Principles\-Tight\-Binding} class as sketched in \cref{fig:workflow_diagram}. At the uppermost level, the workflow has two parts: \texttt{First\-Principles\-Run\-Base}, which executes the first-principles calculations, and \texttt{Window\-Search} which calculates the tight-binding model with energy window optimization.
\begin{figure}\centering
\includegraphics[width=0.8844\columnwidth]{workflow_diagram-crop}
\caption{Schematic of the AiiDA workflow for creating tight-binding models with energy window optimization. Workflows are shown in blue, and calculations in purple. Orange arrows show calls from parent- to child-workflows (or calculations). Dashed green arrows show the implicit data dependency between workflows of the same level. In calculation names, the suffix \texttt{Calculation} is omitted for brevity.}
\label{fig:workflow_diagram}
\end{figure}
Since different first-principles codes can produce the input files required by Wannier90, \texttt{First\-Principles\-Run\-Base} defines only the minimum interface needed to perform this task. As described in the previous section, a workflow that implements this interface for a specific first-principles code can then be chosen dynamically. As a result, the subsequent parts of the workflow are independent of which first-principles code is used.
The \texttt{Window\-Search} workflow performs the Nelder-Mead algorithm for finding the optimal energy window. Because optimization schemes are useful outside of this specific application, we implemented the Nelder-Mead method in a general way. The \texttt{Optimization\-Work\-Chain}, defined in the \texttt{aiida-optimize} module, can be used to solve generic optimization problems in the context of AiiDA workflows. It requires two inputs: a workflow that defines the function to be optimized, and an engine that implements the optimization method. Consequently, changing the whole workflow to use a different optimization method would be a simple matter of using a different engine.
Because AiiDA workflows need to be able to stop and re-start after any given step, the engine is written in an object-oriented instead of a procedural way. While this complicates implementing the Nelder-Mead method, it allows for serializing and storing the state of the engine.
The function which is optimized by the \texttt{Optimization\-Work\-Chain} is implemented in the \texttt{Run\-Window} workflow. It again consists of two parts: \texttt{Tight\-Binding\-Calculation} creates the tight-binding model itself, and \texttt{Model\-Evaluation\-Base} evaluates the quality of the model. The first step in the \texttt{Tight\-Binding\-Calculation} workflow is to run Wannier90 on the given input parameters. In a second step, the Wannier90 output is parsed and converted into the TBmodels~\cite{greschTBmodelsDocumentation} HDF5 format. A third, optional, ``slicing'' step is used to either permute the basis orbitals or discard some orbitals. Finally, the (also optional) symmetrization procedure is performed. Both the \texttt{Slice} and the \texttt{Symmetrize} calculation have a TBmodels HDF5 file as both input and output, meaning that they could be chained arbitrarily with other such post-processing steps.
For the evaluation of the tight-binding model, we again use an abstract interface class, \texttt{Model\-Evaluation\-Base}. While for the purposes of this paper we used the average difference of band energies (\cref{eqn:average_band_difference}) as a measure of model quality, other quantities might be more appropriate for different applications.
\section{Strain-dependent tight-binding models for Majorana devices}\label{sec:strain_models}
The quest for Majorana zero modes (MZMs) in condensed matter systems has recently attracted a lot of interest~\cite{kitaevUnpairedMajoranaFermions2001,lutchynMajoranaFermionsTopological2010, oregHelicalLiquidsMajorana2010,aliceaMajoranaFermionsTunable2010,pikulinZerovoltageConductancePeak2012a,aliceaNewDirectionsPursuit2012,miProposalDetectionBraiding2013,beenakkerSearchMajoranaFermions2013}. The non-abelian exchange statistics of Majorana fermions makes these zero modes promising candidates for the realization of topological quantum computation devices~\cite{kitaevUnpairedMajoranaFermions2001,kitaevFaulttolerantQuantumComputation2003}. Experimental investigations of possible MZMs focus on the proposal by Lutchyn et al.\ and Oreg et al.~\cite{lutchynMajoranaFermionsTopological2010,oregHelicalLiquidsMajorana2010} in which MZMs appear on the boundaries of proximitized spin-orbit coupled quantum wires. Current experimental setups include semiconducting InAs nanowires with epitaxial superconducting Al~\cite{dengMajoranaBoundState2016}, and InAs/GaSb heterostructures in which the quantum spin Hall effect~\cite{kaneQuantumSpinHall2005,liuQuantumSpinHall2008} can be realized, providing the possibility to proximity couple the helical edge state~\cite{pikulinZerovoltageConductancePeak2012a,miProposalDetectionBraiding2013}. While there is a good deal of evidence suggesting that MZMs exist in the wire-based setups~\cite{mourikSignaturesMajoranaFermions2012,churchillSuperconductornanowireDevicesTunneling2013}, a conclusive proof requires directly showing the braiding statistics of MZMs.
An important step in realizing braiding with the systems based on the helical edge state is the search for optimized device and material properties. For optimizing the topological gap, a better theoretical understanding of the electronic structure in such devices is required. In this section, we show how the workflows can be used to generate tight-binding models which form the basis for accurate device simulations. While these device simulations themselves are outside the scope of this work, this shows the potential use of the method for a topic of active research in current condensed matter physics.
Highly accurate first-principles methods, using hybrid functionals~\cite{hybrids}, or the \textit{GW} approximation~\cite{gw}, are computationally too demanding for the simulation of realistic device geometries and heterostructures. State of the art simulations of such structures use the $\mathbf{k.p}$ method~\cite{kaneChapterMethod1966}, or empirical tight-binding (ETB) methods~\cite{slaterSimplifiedLCAOMethod1954}. In both of these methods the Hamiltonian is parametrized by a small number of parameters which are obtained empirically, for example via fitting to the first-principles band structure. For both of these methods the choice of parameters is ambiguous and one can obtain a good fit of the bandstructure while at the same time the electronic wavefunction might be wrongly represented. This might lead to unphysical solutions in confined geometries~\cite{tanEmpiricalTightBinding2013,tanTightbindingAnalysisSi2015}, and low transferability of the bulk models to the heterostructure in general. Recently, it was shown that better matching the ETB with the first-principles calculations can improve their transferability~\cite{tanTightbindingAnalysisSi2015,tanTransferableTightbindingModel2016}.
Realistic simulations of heterostructures require a correct treatment of strains at interfaces. In the $\mathbf{k.p}$ and the ETB method this is usually done by strain-dependent parameter sets. However, often the symmetries are not broken correctly. In this context, the Wannier or Wannier-like tight-binding models can offer a significant improvement by accurately representing the first-principles wavefunction and correctly capturing the effect of strain. As a demonstration of the AiiDA workflows, we construct SWTB models for the III-V semiconductors InSb, InAs and GaSb.
Including spin-orbit coupling (SOC), we require only 14 basis functions, namely $s$ and $p$ orbitals centered on the In/Ga atom, and $p$ orbitals centered on the As/Sb atom. The popular $sp^3d^5s^*$ ETB models, on the other hand, require 40 basis functions~\cite{jancuEmpiricalMathrmspdsTightbinding1998}. The reason for this is that WTB models generally include longer-range neighbor interactions, whereas ETB is typically limited to nearest-neighbor (or next-nearest-neighbor in some cases~\cite{boykinImprovedFitsEffective1997}) interactions to keep the number of parameters manageable. As illustrated in \cref{fig:hopping_weights}, the produced tight-binding models include long-range hopping parameters, with amplitudes quickly decaying with distance.
\begin{figure*}\centering
\includegraphics[width=\textwidth]{hopping_weights}
\caption{Average (blue, left axis) and total (orange, right axis) weights of the hopping parameters for the unstrained InSb tight-binding model, as a function of distance.}
\label{fig:hopping_weights}
\end{figure*}
To account for strain, we construct tight-binding models with biaxial (001), (110) and (111) strains, and the uniaxial [110] strain, as described in \cref{app:strain_details}. For each material and strain direction, we calculated $16$ models in the range of $\pm 4 \%$ strain. Including the unstrained models, we constructed a total of $195$ tight-binding models, showing the applicability of the AiiDA workflow to a large number of chemically and structurally similar compounds.
\subsection{Strained tight-binding workflow}
To automatically extract tight-binding models for different strain directions and strengths, we define an additional workflow, \texttt{Optimize\-Strained\-First\-Principles\-Tight\-Binding}, as shown in \cref{fig:strain_workflow_diagram}. The first step in this workflow, \texttt{Apply\-Strains\-With\-Symmetry}, creates the strained structures from the initial structure and strain parameters. Since strain can break crystal symmetries, the symmetries of the unstrained system are tested against the strained structure. With the strained structures and the remaining symmetries, we then use the \texttt{Optimize\-First\-Principles\-Tight\-Binding} workflow to create a tight-binding model for each strain value.
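As a rough sketch of what \texttt{Apply\-Strains\-With\-Symmetry} does to the structure (ignoring the symmetry analysis, and using illustrative numbers), applying a strain tensor to the lattice vectors amounts to:

```python
import numpy as np

def apply_strain(lattice, strain):
    """Strain the lattice: each lattice vector (a row of `lattice`)
    is mapped by the deformation (1 + strain)."""
    return lattice @ (np.eye(3) + strain).T

# 1% biaxial (001) strain stretches the two in-plane axes equally;
# Poisson-type relaxation of the third axis is omitted in this sketch.
a_insb = 6.479  # Angstrom, experimental InSb lattice constant
lattice = a_insb * np.eye(3)
strained = apply_strain(lattice, np.diag([0.01, 0.01, 0.0]))
```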
\begin{figure}\centering
\vspace{1em}
\includegraphics[width=\columnwidth]{strain_workflow_diagram-crop}
\caption{Sketch of the workflow for constructing strained tight-binding models. The color scheme is the same as in \cref{fig:workflow_diagram}.}
\label{fig:strain_workflow_diagram}
\end{figure}
\subsection{First-principles calculations}
In the first step of generating the SWTB we need to carry out a first-principles calculation of the bulk semiconductor structure.
We performed all first-principles calculations using the Vienna Ab-initio Simulation Package (VASP) utilizing projector augmented-wave (PAW) basis sets~\cite{vasp}. To obtain an accurate prediction of the band gap we employed hybrid functionals~\cite{hse}. The HSE03/HSE06 hybrid functionals proved to be successful in computing band structures of III-V semiconductors~\cite{heydEfficientHybridDensity2004}. These hybrid functionals are constructed by replacing a quarter of the density functional short-range exchange (which is the Perdew-Burke-Ernzerhof functional in our case~\cite{pbe}) with its Hartree-Fock counterpart. The screening parameter $\mu$ defines the separation into long- and short-range parts. In the popular HSE06 scheme, it is set to $\mu = 0.2\,\mathrm{\AA}^{-1}$. We treated $\mu$ as an empirical parameter such that the calculated band gap is fitted to the experimental value. In this work, we used $\mu_\mathrm{InAs} = 0.20\,\mathrm{\AA}^{-1}$, $\mu_\mathrm{GaSb} = 0.15\,\mathrm{\AA}^{-1}$ and $\mu_\mathrm{InSb} = 0.23\,\mathrm{\AA}^{-1}$, following the prescriptions of Ref.~\cite{kimEfficientBandStructure2010}. Since the SOC of III-V semiconductors is significant, we accounted for it by using scalar-relativistic PAW potentials.
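For reference, this mixing corresponds to the standard HSE exchange-correlation form (in our notation, SR and LR denote the short- and long-range parts of the exchange defined by the screening parameter $\mu$):
\begin{equation*}
E_{xc}^{\mathrm{HSE}} = \tfrac{1}{4} E_{x}^{\mathrm{HF,SR}}(\mu) + \tfrac{3}{4} E_{x}^{\mathrm{PBE,SR}}(\mu) + E_{x}^{\mathrm{PBE,LR}}(\mu) + E_{c}^{\mathrm{PBE}}.
\end{equation*}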
InAs, GaSb and InSb crystallize in the zincblende structure with space group $T_d^2$ (no. 216). For the unstrained structures we performed the first-principles calculation with the experimental lattice constant $a$ at 300~K, that is $a_\mathrm{InAs} = 6.058\, \mathrm{\AA}$, $a_\mathrm{GaSb} = 6.096\, \mathrm{\AA}$, $a_\mathrm{InSb} = 6.479\, \mathrm{\AA}$, from Ref.~\cite{madelungCondensedMatterGroup2002}.
A plane-wave energy cutoff of $380~\mathrm{eV}$ was used for all calculations. The Brillouin-zone integrations were sampled by a $6\times6\times6$ $\Gamma$-centered $k$-points mesh.
To get optimal results from the Wannier90 code in conjunction with VASP~\cite{vasp}, we found that it is necessary to turn symmetries off in VASP, that is, setting the \texttt{ISYM}-tag to 0. Since the states are obtained by a numerical diagonalization routine, they obtain a random phase at each $\mathbf{k}$-point. When symmetries are enabled however, the phases are the same for all vectors forming the star of $\mathbf{k}$. Since the convergence of Wannier90 is better if the numerical phases are random, turning symmetries off generally results in more localized Wannier functions after the projection step.
The interface for running first-principles calculations in the tight-binding extraction workflow is defined in the \texttt{First\-Principles\-Run\-Base} class (see \cref{sec:tb_workflow}). Here, we describe the specific sub-class used to implement these calculations with VASP~\cite{vasp}, \texttt{Vasp\-First\-Principles\-Run} (see \cref{fig:vasp_workflow}). In a first step, this workflow performs a self-consistent calculation. The resulting wave-function is then passed to calculations for the reference band-structure and the input files for Wannier90. Two workflows \texttt{Vasp\-Reference\-Bands} and \texttt{Vasp\-Wannier\-Input} are used to perform these calculations. The workflows are thin wrappers around the corresponding calculations from the \texttt{aiida-vasp} plugin~\cite{hauselmannAiiDAVASPDocumentation}, providing additional input and output validation. For the band-structure calculation, the workflow also adds the ${\vec{k}}$-point grid needed for hybrid functional calculations.
\begin{figure}\centering
\includegraphics[width=0.64321608\columnwidth]{vasp_workflow_diagram-crop}
\caption{Sketch of the \texttt{First\-Principles\-Run\-Base} subclass used for calculating the Wannier90 input and reference bands with VASP and hybrid functionals.}
\label{fig:vasp_workflow}
\end{figure}
\subsection{Strain interpolation}\label{sec:strain_interpolation}
Using the AiiDA workflow, we obtained tight-binding models for strains in the range of $\pm 4 \%$, in steps of $0.5 \%$. However, it is sometimes useful to have a finer control over the strain value without having to run additional first-principles calculations. A common way of obtaining this is by linear interpolation of the hopping parameters. Given two strain values $s_1$ and $s_2$, for which the hopping parameter $H^{s_i}[{\vec{R}}]$ are known, the hopping parameters for an unknown $s^*$ can be calculated as
\begin{equation}\label{eqn:linear_interpolation}
H^{s^*}[{\vec{R}}] = \alpha H^{s_1}[{\vec{R}}] + (1 - \alpha) H^{s_2}[{\vec{R}}],
\end{equation}
where
\begin{equation}
\alpha = \frac{s^* - s_2}{s_1 - s_2}.
\end{equation}
Since this method assumes that the hopping parameters are a linear function of strain value, it becomes unreliable when $s^*$ is too far away from $s_1$ and $s_2$. For this reason, we compared a tight-binding model for InSb with $2 \%$ biaxial (001) strain obtained from linear interpolation of $1 \%$ and $3 \%$ strain models with one calculated directly from first-principles. \Cref{fig:strain_interpolation} shows a comparison of the two band-structures, which we find to be almost identical.
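A sketch of this interpolation in Python, treating a tight-binding model simply as a mapping from lattice vectors ${\vec{R}}$ to hopping parameters (the function name is illustrative; the actual models are handled through TBmodels):

```python
def interpolate_hoppings(h1, h2, s1, s2, s_target):
    """Linear interpolation of hopping parameters between two strain
    values, with weight alpha = (s_target - s2) / (s1 - s2)."""
    alpha = (s_target - s2) / (s1 - s2)
    return {R: alpha * h1[R] + (1 - alpha) * h2[R] for R in h1}

# Toy example with a single scalar "hopping": interpolating a 2% model
# from the 1% and 3% models (strains given in percent).
h1 = {(0, 0, 0): 1.0}   # hopping at 1% strain
h2 = {(0, 0, 0): 3.0}   # hopping at 3% strain
h_mid = interpolate_hoppings(h1, h2, s1=1, s2=3, s_target=2)
assert h_mid[(0, 0, 0)] == 2.0
```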
It is important to note that while linear interpolation works well for strains of the same kind, this is not necessarily the case when combining two models with different strain directions. The reason for this is that the symmetries of a particular structure depend on the direction of the applied strain, but (unless it is zero) not on its strength. As a result, a tight-binding model resulting from linear interpolation between two models with different strain directions would not have the correct symmetries.
\begin{figure*}\centering
\includegraphics[width=\textwidth]{strain_interpolation}
\caption{ Comparison between the InSb bandstructure obtained directly from the tight-binding model with $2 \%$ biaxial (001) strain (blue), and from the linear interpolation (orange) between models with $1 \%$ and $3 \%$ strain. The energy scale is fixed by setting the top of the valence bands at $\Gamma$ to zero. (a) At the electron-volt scale, the only visible difference is in the upper bands along the $\Gamma$ - K line. (b) Close-up of the bands around $\Gamma$. The bands for $1 \%$ (purple) and $3 \%$ strain (green) are also shown.}
\label{fig:strain_interpolation}
\end{figure*}
\subsection{Results}
To validate the tight-binding models obtained using the \texttt{aiida-tbextraction} workflows, several material parameters were calculated. \Cref{tab:eff_mass} shows effective masses and g-factors for the unstrained models, in comparison to first-principles~\cite{kimEfficientBandStructure2010} and experimental~\cite{kimEfficientBandStructure2010,vurgaftmanBandParametersIII2001} values.
Effective masses for the tight-binding models were calculated using a second-order polynomial fit with range $0.001~\textup{\AA}^{-1}$. The g-factor calculations were performed using both perturbation theory and a Landau level calculation~\cite{grafElectromagneticFieldsDielectric1995}, with good agreement ($< 0.5 \%$ difference) between the two methods.
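The effective-mass extraction can be sketched as follows: a parabolic band $E(k) = \hbar^2 k^2 / 2m^*$ is fitted by a second-order polynomial near $\Gamma$, and $m^*$ is recovered from the quadratic coefficient using $\hbar^2/2m_e \approx 3.81\,\mathrm{eV\,\AA^2}$ (illustrative code, not the production implementation):

```python
import numpy as np

HBAR2_OVER_2ME = 3.80998  # hbar^2 / (2 m_e) in eV * Angstrom^2

def effective_mass(k, energies):
    """Effective mass in units of the free-electron mass, from a
    second-order polynomial fit E(k) ~ a*k^2 + b*k + c."""
    a = np.polyfit(k, energies, 2)[0]
    return HBAR2_OVER_2ME / a

# Synthetic parabolic band with m* = 0.5 m_e, sampled near k = 0:
k = np.linspace(-0.001, 0.001, 21)  # Angstrom^-1
energies = (HBAR2_OVER_2ME / 0.5) * k**2
assert abs(effective_mass(k, energies) - 0.5) < 1e-6
```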
\begin{table}
\begin{tabular}{@{}lllllll@{}}
\toprule
Material & Method & $|m^*_{\text{SO}}|$ & $|m^*_{\text{LH}}|$ & $|m^*_{\text{HH}}|$ & $|m^*_{\text{e}}|$ & g-factor \\
\midrule
& $\text{HSE}_\text{bgfit}$ & 0.129~ & 0.018~ & 0.245~ & 0.017~ & \\
\textbf{InSb} & SWTB & 0.118 & 0.016 & 0.219 & 0.015 & -49.8 \\
& Expt. & 0.110 & 0.015 & 0.263 & 0.014 & -50.6 \\
\midrule
& $\text{HSE}_\text{bgfit}$ & 0.112 & 0.033 & 0.343 & 0.027 & \\
\textbf{InAs} & SWTB & 0.118 & 0.036 & 0.340 & 0.029 & -15.3 \\
& Expt. & 0.140 & 0.027 & 0.333 & 0.026 & -15 \\
\midrule
& $\text{HSE}_\text{bgfit}$ & 0.143 & 0.047 & 0.235 & 0.042 & \\
\textbf{GaSb} & SWTB & 0.124 & 0.039 & 0.20 & 0.036 & -15.1 \\
& Expt. & 0.120 & 0.044 & 0.250 & 0.039 & -7.8 \\
\bottomrule
\end{tabular}
\caption{Effective masses of light hole (LH), heavy hole (HH), split-off hole and electron at $\Gamma$ point along [100] direction in the unstrained case. Values for symmetrized Wannier-like tight-binding models (SWTB) are compared to first-principles (HSE$_\text{bgfit}$)~\cite{kimEfficientBandStructure2010} and experimental results~\cite{kimEfficientBandStructure2010,vurgaftmanBandParametersIII2001}.}
\label{tab:eff_mass}
\end{table}
The effect of the energy window optimization is shown in \cref{tab:energy_window_opt}, which lists the initial and optimized windows, as well as the corresponding band-structure mismatch. As previously shown in \cref{fig:optimize_energy_window}, it can be seen that the mismatch is substantially reduced after optimization.
\begin{table}
\begin{tabular}{@{}lllllll@{}}
\toprule
\multicolumn{2}{@{}l}{\textbf{Material}} & \multicolumn{4}{l}{\textbf{Energy Windows (eV)}} & \textbf{$\Delta$}\\
\midrule
\multirow{2}{*}{InSb~} & initial & $(-4.5,$ & $[-4,$ & $6.5],$ & $16)$ & $0.107$ \\
& optimized~ & $(-4.44,$ & $[-3.24,$ & $8.67],$ & $14.01)$~ & $0.033$ \\
\midrule
\multirow{2}{*}{InAs~} & initial & $(-4.5,$ & $[-4,$ & $6.5],$ & $16)$ & $0.113$ \\
& optimized & $(-4.44,$ & $[-3.59,$ & $7.34],$ & $15.04)$ & $0.046$ \\
\midrule
\multirow{2}{*}{GaSb~} & initial & $(-4.5,$ & $[-4.5,$ & $7],$ & $16)$ & $0.082$ \\
& optimized & $(-5.35,$ & $[-3.34,$ & $7.90],$ & $14.27)$ & $0.043$ \\
\bottomrule
\end{tabular}
\caption{Initial and optimized energy windows used for calculating unstrained tight-binding models, and the corresponding band-structure mismatch as defined in \cref{eqn:average_band_difference}.}
\label{tab:energy_window_opt}
\end{table}
Finally, the effect of strain on the energy levels at high-symmetry points is shown in \cref{fig:strain_band_shift}. The numerical data is listed in the supplementary files~\footnote{See Supplemental Material at [URL] for tables containing the band energies at high-symmetry points for different values of strain.}.
\begin{figure*}\centering
\includegraphics[width=\textwidth]{strain_band_shift}
\caption{Strain dependence of band energies. The two highest valence bands and the lowest conduction band are shown at the $\Gamma$ (blue), $X$ (orange) and $L$ (green) points, where each band is doubly degenerate. Energy values are shifted such that the valence band maximum at $\Gamma$ is zero. The line represents values calculated from the tight-binding models with linear interpolation (\cref{eqn:linear_interpolation}) in steps of $0.1 \%$. For comparison, the points show values calculated from first-principles. We find a good agreement between the tight-binding and first-principles values, except for the conduction band value at the $L$ - point at $-4\%$ biaxial $(111)$ strain.}
\label{fig:strain_band_shift}
\end{figure*}
In the supplementary materials of this paper, an export of the AiiDA database is given~\footnote{See Supplemental Material at [URL] for a full export of the AiiDA database containing the calculations of strained InSb, InAs, and GaSb tight-binding models.}. This database contains the full provenance of each calculation performed to create the tight-binding models. For ease of accessibility, a separate data set containing only the $195$ strained tight-binding models is also given~\footnote{See Supplemental Material at [URL] for an archive containing the $195$ strained tight-binding models of InSb, InAs, and GaSb.}.
\section{Conclusion and Outlook}
\label{sec4}
We have implemented a workflow for an \textit{automatic} construction of Wannier tight-binding models from first-principles calculations. Building on the known procedure for calculating these models, we introduced a post-processing step to symmetrize the models, and an optimization of the energy windows used for disentanglement. These workflows are implemented in the \texttt{aiida-tbextraction} package, which is a free and open-source plugin for the AiiDA framework. As a test case, tight-binding models for strained III-V semiconductor materials were calculated. These results should enable device simulations for Majorana designs and other quantum devices.
The workflows have been implemented in a modular and extensible way. As a result, they can be used as building blocks for further improvements in automating the process of generating Wannier tight-binding models. Possible directions include extending the number of first-principles codes which are compatible with the plugin, adding different fitness criteria for the energy window optimization, and further minimizing the number of tunable parameters. For example, the need for choosing initial trial orbitals could be eliminated either by using another optimization step, or by utilizing the method of Ref.~\cite{mustafaAutomatedConstructionMaximally2015}.
\section*{Acknowledgments}
We would like to thank S. Huber, M. Uhrin, G. Pizzi, A. Marrazzo, M. K\"onz and D. Rodic for helpful discussions. The authors were supported by Microsoft Research, and the Swiss National Science Foundation through the National Competence Centers in Research MARVEL and QSIT. AAS also acknowledges the support of the Swiss National Science Foundation Professorship. Calculations were performed on the M\"onch cluster of ETH Zurich.
\section{Introduction}
A case-based reasoning (CBR) system solves new queries by retrieving a similar case from the case base (e.g., \cite{aamodt-plaza94,kolodner93,leake96-cbr-overview,mantaras-et-al05,riesbeck-schank89,richter-weber13}). If the solution of the retrieved case does not apply to the query, then an adaptation process modifies the solution to respond to situation differences. After the query is successfully solved, it is stored as a new case in the case base for future use. While a case base with good coverage and a good retrieval model allows the CBR system to retrieve similar cases, the case adaptation model determines the flexibility of the system to adjust the retrieved solutions for novel queries.
From the early days of CBR, adaptation has often been done using expert knowledge encoded in hand-crafted adaptation rules (e.g., \cite{hammond89}).
To alleviate the burden of knowledge engineering, the case difference heuristic (CDH) approach extracts case adaptation knowledge from cases in the case base \citep{hanney-keane96}.
The CDH approach collects pairs of cases from the case base, and generates rules that attribute the difference in the problem descriptions (the problem difference) of a pair to the difference in their solution descriptions (the solution difference). The problem difference determines the antecedent of the new adaptation rule, and the solution difference determines its consequent. The resulting rules have the following form: If the problem difference between a query Q and the problem of a retrieved case C matches the problem difference of a rule R, then the solution difference of R can be applied to the solution of the retrieved case C. For regression tasks, the consequent might be a numeric change to be applied to the predicted value of a retrieved case by addition, multiplication, or more complicated means. Furthermore, multiple rules can be generalized into one if they share similar preconditions and effects \citep{hanney-keane96}.
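For a one-dimensional regression task, this rule-application scheme can be sketched as follows (the rule representation and matching criterion here are deliberately simplified for illustration):

```python
def cdh_adapt(query, retrieved_problem, retrieved_solution, rules):
    """Apply the rule whose stored problem difference best matches the
    difference between the query and the retrieved case, adding the
    rule's solution difference to the retrieved solution."""
    problem_diff = query - retrieved_problem
    best_rule = min(rules, key=lambda rule: abs(rule[0] - problem_diff))
    return retrieved_solution + best_rule[1]

# Each rule pairs a problem difference with a solution difference.
rules = [(1.0, 5.0), (2.0, 10.0)]
adapted = cdh_adapt(query=3.0, retrieved_problem=1.0,
                    retrieved_solution=20.0, rules=rules)
assert adapted == 30.0
```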
Deep learning (DL), using deep neural networks and often learning from massive amounts of data, has shown the ability to extract useful features from nonsymbolic data. It has been highly successful in many task domains, especially those resistant to other AI methods, such as computer vision tasks \citep{sinha2018deep}.
The CBR community has brought neural networks to CDH, using neural networks to learn adaptation knowledge from pairs of cases \citep{Policastro2003,POLICASTRO200626,liao-liu-chao18,craw-wiratunga-rowe06,leakeYe2021}. Up to this point, such approaches have used network learning to learn adaptations based on differences in sets of predefined features. They have not attempted to exploit one of the noteworthy strengths of deep learning: the ability to learn useful features from data.
This study extends previous work on a neural network based CDH method, NN-CDH \citep{leakeYe2021} by using a deep neural network to extract features, which are used both for retrieval and as inputs to adaptation knowledge learning using NN-CDH. From the perspective of CBR, the novel contribution of this paper is to use DL-generated features for adaptation learning. From the perspective of DL, the novel contribution is to use CBR-style adaptation to handle novel queries.
The proposed method, deep neural network based CDH (DL-CDH), is used in a CBR system for a computer vision task of predicting the age of a person from their facial image. System performance is compared to that of a standard DL regression model and a retrieval-only system.
By using DL-CDH, our CBR system improves the solution provided by its retrieval stage using adaptation knowledge that does not require engineered features. It achieves slightly lower accuracy than a counterpart DL regressor, while carrying benefits of CBR such as explanation by cases and lazy incremental learning. Moreover, the CBR system can outperform the DL regressor for out-of-distribution queries by using adaptation knowledge learned by DL-CDH.
\section{Background}
\subsection{Image Processing using CBR}\label{sec:ImageInCBR}
CBR researchers have been tackling computer vision problems for some time. \cite{perner01} describes image interpretation as a multi-level process, from pre-processing pixels at low levels, to segmenting and extracting features at intermediate levels, to classification and understanding at high levels. She proposes using CBR on different levels, mostly focusing on the case representation of images and the similarity measure between cases.
A series of works follows this line of research \citep{perner-holt-richter05,wilson-osullivan08,perner2017}.
In a recent survey \cite{perner2017} reviews applications of CBR in parameter selection, image interpretation, incremental prototype-based classification, novelty detection, and 1-D signal representation.
A more recent survey of CBR work \citep{Rahul2020} reports on multiple works on CBR image processing systems following a framework in which features extracted from images and their class labels are combined into cases upon which CBR systems operate.
This framework can be considered as a simpler version of that in \cite{perner01}. The research projects surveyed in this study focus on feature extraction and case retrieval for a wide range of domains.
Object detection and classification is a more direct problem in image interpretation. \cite{Turner2018} use a Convolutional Neural Network (CNN) to extract features and identify parts. Features and parts are used as a case's problem description and the object label is the solution description. In contrast to the work in \cite{Rahul2020}, they perform novel object discovery using CBR, by retaining a case if the system deems the case novel, and retrieving and reusing this new case later. \cite{Turner2019} further extend this approach by integrating it with a CNN, so that familiar queries are handled by a CNN while novel queries, when the CNN displays high uncertainty in its output, are handled by a CBR system.
\begin{comment}
In summary, existing CBR research in image processing leverages the retrieval, reuse, and even retain stages of CBR.
\end{comment}
The approach in this study is consistent with the framework by \cite{Rahul2020} and also uses a CNN for feature extraction as \cite{Turner2018}, adding learned case adaptation.
\subsection{Siamese Networks as a Similarity Measure}
A Siamese network takes two input vectors, extracts features from each input using the same feature extraction network, passes the extracted features into a distance layer, and calculates the distance between them \citep{bromley93,Bromley:1993:SVU:2987189.2987282}. Similar to \citet{Martin2017ACS} and \citet{mathisen-et-al19}, this study uses a Siamese network as a similarity measure for case retrieval in CBR.
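Conceptually, such a similarity measure applies one shared feature extractor to both inputs and compares the resulting embeddings; a minimal sketch with an arbitrary toy encoder standing in for the learned network:

```python
import numpy as np

def siamese_distance(x1, x2, encode):
    """Distance between two inputs in a shared embedding space:
    the same encoder (shared weights) is applied to both inputs."""
    return float(np.linalg.norm(encode(x1) - encode(x2)))

encode = lambda x: np.tanh(np.asarray(x, dtype=float))  # toy feature extractor
d_same = siamese_distance([1.0, 2.0], [1.0, 2.0], encode)
d_diff = siamese_distance([1.0, 2.0], [0.0, 0.5], encode)
assert d_same == 0.0 and d_diff > 0.0
```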
\cite{wetzel2020twin} use a Siamese network to predict the target value differences given two data points. This approach can predict a target value of a query by pairing it with a data point, predicting the target value difference, and adding it back to the target value of the data point. Their method predicts a target value by projecting from an ensemble of training data points. This is very similar in spirit to the case difference heuristic methods described in the following section.
\subsection{Adaptation Based on Features Extracted by Neural Networks}
For the purpose of generating semi-factual and counterfactual case explanations, \cite{kenny2020} use a convolutional neural network to extract features, identify and modify exceptional features, and visualize the modified feature as an image using a generative adversarial network (GAN). Their work adapts the features generated by DL, while our work is adapting case solutions.
\subsection{Neural Network Based Applications of the Case Difference Heuristic}
Much research has investigated symbolic AI methods for reasoning about case differences to generate case adaptation rules ({\it e.g.}, \cite{hanney-keane96,jalali-Leake13-2,McDonnell-Cunningham06,McSherry98,wilke-etal97,aquin-et-al07,craw-wiratunga-rowe06}).
Of particular interest here is how network methods have been used for case difference heuristic processes.
\cite{Policastro2003,POLICASTRO200626} and \cite{liao-liu-chao18} use neural networks to learn relationships between problem differences and solution differences, and use these networks to predict a solution difference from a problem difference.
Inspired by previous work in network-based CDH,
\cite{leakeYe2021} develops a neural network based CDH approach, NN-CDH, in which a neural network learns to predict solution differences for regression problems based on the context of adaptation and the problem difference. NN-CDH has shown good results in improving retrieval results. In addition, in initial tests for scenarios in which domain knowledge is hard to learn, such as novel queries in high dimensionality, the CBR system using NN-CDH for adaptation could outperform a baseline neural network regressor. Extending the work in \cite{leakeYe2021}, this study:
\begin{enumerate}
\item Uses features extracted by a deep neural network as problem descriptions of cases and learns adaptation knowledge from extracted features;
\item Evaluates the effects of CDH adaptations on a CBR system with two different retrieval methods;
\item Compares the performance of the CBR system with its counterpart DL system (instead of the neural network system compared with NN-CDH) in an image domain task (instead of tabular data domains).
\end{enumerate}
\subsection{Age Prediction from Facial Images using Deep Learning}
This study examines the age prediction task involving out-of-distribution (OOD) samples. Age prediction is a well-studied topic in computer vision; interestingly, it is tackled both as a classification problem \citep{LH:CVPRw15:age,DUAN2018448} and as a regression problem \citep{7406403}. An ensemble attentional convolutional network is proposed in \cite{abdolrashidi2020age} to accomplish both age and gender prediction tasks. \cite{9144212} survey the detection of OOD data in DL, but handling OOD data is one step beyond detection. To handle biases in age, ethnicity, and gender prediction, \cite{cao2021outofdistribution} propose distribution-aware techniques in data augmentation and data curation to ensure fairness.
This study does not try to surpass state-of-the-art age prediction studies (see \cite{cao2021outofdistribution}). Instead, we chose this task to test the effectiveness of the DL-CDH approach and the benefit of CDH adaptation in DL tasks involving OOD samples.
\section{A Deep Learning Based Case Difference Heuristic Approach}
This study proposes a deep learning based case difference heuristic approach, which we call DL-CDH, as a direct extension of NN-CDH.
Similarly to its predecessor NN-CDH, DL-CDH learns adaptation knowledge by training a neural network over pairs of cases to predict solution difference based on problem difference (and adaptation context). In our testbed system, DL-CDH is implemented as a feedforward neural network with dense layers and dropout layers. However, the adaptation network of DL-CDH does not learn directly from the problem descriptions of the cases. Instead, the adaptation network learns using features extracted from a deep neural network for problem representations. Given a pair of cases, the network takes the pair of extracted features as input and outputs the predicted age difference between the two cases.
A CBR system solving DL regression tasks can use DL-CDH as its adaptation process. Given a query, the CBR system first retrieves a case. Then a deep neural network extracts the features of the query and the retrieved case. DL-CDH uses their features to predict the solution difference between the query and the retrieved case. Last, this solution difference is applied to the solution of the retrieved case to yield the final solution.
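As an illustration of this solve cycle, the following minimal Python sketch stubs out both the deep feature extractor and the trained adaptation network with hypothetical stand-ins; only the control flow (retrieve, predict solution difference, apply) mirrors the description above, and all names and values are illustrative rather than part of the actual system.

```python
def l1_distance(u, v):
    # sum of feature-wise L1 distances, as used for retrieval
    return sum(abs(a - b) for a, b in zip(u, v))

def retrieve(query_feats, case_base):
    # 1-nearest-neighbor retrieval over extracted features
    return min(case_base, key=lambda c: l1_distance(query_feats, c["feats"]))

def solve(query_feats, case_base, predict_solution_diff):
    # predict_solution_diff stands in for the trained DL-CDH network:
    # (query features, case features) -> predicted age difference
    case = retrieve(query_feats, case_base)
    return case["age"] + predict_solution_diff(query_feats, case["feats"])

# toy case base with 3-dimensional "extracted features" (illustrative only)
cases = [{"feats": [1, 2, 3], "age": 30},
         {"feats": [9, 8, 7], "age": 60}]

# hypothetical linear stand-in for the adaptation network
diff_net = lambda q, c: 5 * (sum(q) - sum(c))

print(solve([2, 2, 3], cases, diff_net))  # 35 = 30 (retrieved) + 5 (adaptation)
```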
\section{Evaluation}
\subsection{Dataset and Preprocessing}
We evaluate the performance of DL-CDH on the task of predicting the age of a person given an image of their face.
We used Wikipedia images from the IMDB-WIKI dataset \citep{Rothe-IJCV-2018}. We filtered out all images with no face or with more than one face, yielding a dataset of 22,578 face images, each a $224\times224$ RGB image labeled with an age ranging from 1 to 100. The age distribution is shown in Figure \ref{fig:agedistribution}.
\begin{figure}[]
\centering
\includegraphics[width=0.45\textwidth]{wikidataset.png}
\caption{Age distribution of the Wikipedia dataset}
\label{fig:agedistribution}
\end{figure}
We used a pretrained CNN with the vgg-vd-16 \citep{simonyan2015deep} architecture to extract features from face images. The CNN is pretrained on the vgg-face dataset \citep{Parkhi15} for a face recognition task.
We pass all images through the CNN, converting each image $x$ into a case with a feature vector of $2622$ dimensions associated with a solution label, a numeric value $sol(x)$ representing the age of the image subject. We refer to the feature extractor function (the VGG-Face CNN) as $f$, and the extracted features of a case $x$ are $f(x) = (x_1, x_2, \ldots, x_{2622})$.
The data set is prepared for experiments in two settings:
\begin{enumerate}
\item Normal Setting: The cases are used in a 10-fold cross-validation, where 80\% of the cases are used for training, 10\% for validation and 10\% for testing.
\item Novel Query Setting: The cases of age 20-50 are used in a 10-fold cross-validation with 90\% of the cases for training and 10\% for validation, while cases of age 0-20, 50-70, and $>$70 are respectively used as testing queries in separate experiments.
\end{enumerate}
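The novel query setting above can be sketched in a few lines of Python (the case structure is an assumption, and the exact handling of the boundary ages 20 and 50 is our choice for illustration): cases of age 20-50 form the training pool, while younger and older cases are held out as out-of-distribution test queries.

```python
def novel_query_split(cases):
    # train/validate only on ages 20-50; hold out the rest as novel queries
    train_pool = [c for c in cases if 20 <= c["age"] <= 50]
    queries = {"<20":   [c for c in cases if c["age"] < 20],
               "50-70": [c for c in cases if 50 < c["age"] <= 70],
               ">70":   [c for c in cases if c["age"] > 70]}
    return train_pool, queries

cases = [{"age": a} for a in (5, 25, 40, 55, 80)]
pool, queries = novel_query_split(cases)
print(len(pool), {k: len(v) for k, v in queries.items()})
# 2 {'<20': 1, '50-70': 1, '>70': 1}
```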
\subsection{Experimental Settings}
The testbed system involves only a retrieval stage and an adaptation stage. Because the goal is to study the effectiveness of adaptation, retention effects are not considered.
\begin{figure*}[tb]
\centering
\includegraphics[width=\textwidth]{DL-CDH_model.png}
\caption{Network Architectures of components of the Testbed System. Left: Base Regression Network Used in Regressor and DL-CDH. Center Left: Baseline Regressor. Center Right: Siamese Network Used to Retrieve Cases of Similar Ages. Right: DL-CDH Used to Adapt Initial Age Prediction from Retrieval}
\label{fig:network}
\end{figure*}
\subsubsection{Retrieval Method}
We implemented two similarity measures for retrieval. The first is a 1-nearest-neighbor search over the extracted features $f(x) = (x_1, x_2, \ldots)$, where all features are weighted equally; the distance between two cases is calculated as the sum of feature-wise L1 distances.
The second similarity measure is a Siamese network trained on triplets of cases from the case base.
The network's architecture is depicted in center right of Figure \ref{fig:network}. The network is trained by triplets of $(a, p, n)$ generated on the fly: given an anchor case $a$, a positive case $p$ is another case chosen from cases of the same age as anchor $a$, and a negative case $n$ is another case chosen from cases whose age is at least 10 years different from that of anchor $a$.
The network is trained using the triplet margin loss:
\begin{equation*}
L(a,p,n) = \max\left(d(a_i, p_i) - d(a_i, n_i) + \text{margin},\, 0\right),
\end{equation*}
where $d(x_i,y_i) = \lVert x_i - y_i\rVert_p$. In our implementation, we set $\text{margin}=1$ and $p=1$.
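As a concrete check of this loss, the following Python sketch implements it with $p=1$ and margin $1$ as stated above; the feature vectors are illustrative toy values, not actual Siamese embeddings.

```python
def dist(x, y, p=1):
    # component-wise Minkowski distance; p=1 gives the L1 distance
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

def triplet_margin_loss(a, pos, neg, margin=1.0, p=1):
    # zero once the negative is at least `margin` farther away than the positive
    return max(dist(a, pos, p) - dist(a, neg, p) + margin, 0.0)

anchor   = [0, 0]
positive = [1, 1]   # a case of the same age as the anchor
negative = [5, 5]   # a case at least 10 years apart in age

print(triplet_margin_loss(anchor, positive, negative))  # 0.0: already separated
print(triplet_margin_loss(anchor, negative, positive))  # 9.0: penalized
```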
\subsubsection{Adaptation Method}
The network architecture of the testbed DL-CDH $R2$ is depicted on the right of Figure \ref{fig:network}. We trained DL-CDH with pairs of cases generated on the fly: for every training case $x$, another random training case $y$ is paired with it.
We concatenate $f(x)$ and $f(y)$ into a single vector $concat(f(x),f(y))$ as input to represent their difference, and the age difference $sol(x)-sol(y)$ becomes the expected output. For validation, every case in the validation set is paired with its nearest neighbor (under L1 distance) in the training set, and all such pairs form the validation pair set for the training of DL-CDH.
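The on-the-fly pair generation can be sketched as follows; this is a minimal Python illustration in which the feature vectors, ages, and case structure are assumptions rather than the actual data.

```python
import random

def make_training_pairs(cases, rng):
    # each training case x is paired with a random training case y;
    # input = concat(f(x), f(y)), target = sol(x) - sol(y)
    pairs = []
    for x in cases:
        y = rng.choice(cases)
        pairs.append((x["feats"] + y["feats"], x["age"] - y["age"]))
    return pairs

cases = [{"feats": [0.1, 0.2], "age": 25},
         {"feats": [0.8, 0.9], "age": 70}]
for inp, target in make_training_pairs(cases, random.Random(0)):
    print(len(inp), target)   # input dimension doubles; target is an age difference
```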
\subsubsection{Baseline Models}
For comparison, we implemented a neural network regressor $R1$ that directly predicts solution $sol(x)$ based on case features $f(x)$. The architecture is depicted in the center left of Figure \ref{fig:network}. To ensure fairness of the comparison, the baseline regressor $R1$ uses the same inner architecture as $R2$ in DL-CDH (as shown in the left of Figure \ref{fig:network}). In other words, $R1$ and $R2$ share the same layers, number of neurons, and activation functions, but they are trained for different purposes: $R1$ learns to predict age $sol(x)$ from feature set $f(x)$, while $R2$ learns to predict age difference $sol(x)-sol(y)$ from feature difference $concat(f(x),f(y))$. The regressor is trained until its error on the validation case set converges.
Last, a constant baseline is implemented by making predictions using the average label of all training samples.
\vspace{\baselineskip}
In summary, four systems are compared in the experiments: a constant baseline, the baseline regressor, a CBR system with L1 distance as the similarity measure and DL-CDH as adaptation (referred to as ``L1 + adaptation''), and a CBR system with the Siamese network as the similarity measure and DL-CDH as adaptation (referred to as ``Siamese + adaptation''). The experimental source code is available online\footnote{https://github.com/ziweizhao1993/DL-CBH}.
All systems are trained for 50 epochs using the Adam optimizer \citep{kingma2017adam} and a learning rate of $10^{-4}$. After the training of each system, the model with the highest validation accuracy is selected for testing.
\subsection{Experimental Results}
The testing errors of the systems under different settings are shown in Table \ref{tab:results}. For the CBR systems, both the error of the initial solution from retrieval and the error of the final solution after adaptation are shown.
\begin{table*}[t]
\centering
\begin{tabular}{lcccccccc}
\multirow{2}{*}{} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Training\\ Age Range\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Query\\ Age Range\end{tabular}} & \multicolumn{2}{c}{Baseline} & \multicolumn{2}{c}{L1 + Adaptation} & \multicolumn{2}{c}{Siamese + Adaptation} \\ \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9}
& & & Constant & Regressor & Retrieve & Adapt & Retrieve & Adapt \\ \midrule
\multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Normal\\ Setting\end{tabular}} & All & All & 12.94 & \textbf{5.9668} & 9.5805 & 8.1397 & 7.4944 & 7.8558 \\ \midrule
\multicolumn{1}{l}{\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Novel \\ Query \\ Setting\end{tabular}}} & \multirow{3}{*}{20$\sim$50} & \textless{}20 & 14.7184 & 8.948 & 10.1784 & 9.1764 & 8.6156 & \textbf{8.0906} \\
\multicolumn{1}{l}{} & & 50$\sim$70 & 25.8215 & 15.5101 & 17.3237 & 14.7365 & 14.8493 & \textbf{12.0175} \\
\multicolumn{1}{l}{} & & \textgreater{}70 & 45.8084 & 32.7671 & 35.8406 & 32.1052 & 33.049 & \textbf{28.8334} \\
\bottomrule
\end{tabular}
\caption{Average Error of Systems under Different Settings}
\label{tab:results}
\end{table*}
\subsubsection{Effects of the Adaptation by DL-CDH over Retrieval}
Comparing the errors before and after adaptation in the CBR systems, we observe that adaptation generally improves the initial solution from retrieval, with one exception: in ``Siamese + adaptation'' under the normal setting, on average the adaptation actually produces worse solutions than the initial retrieved solutions.
This phenomenon is also observed and explained in \cite{LeakeYe2021underR}. Because retrieval and adaptation are trained independently, the two stages may be out of synchronization. In the case of ``Siamese + adaptation'' under the normal setting, the retrieval stage is well trained to retrieve cases that are close to queries, while the adaptation stage is trained on random pairs and is geared toward adapting relatively large differences between case pairs. The adaptation stage is not trained to handle the pairs of queries and retrieved cases and is therefore not guaranteed to improve the initial retrieval solution. Adaptation-guided retrieval might be a way to alleviate this problem; \cite{LeakeYe2021underR} studies this phenomenon further and proposes an alternating optimization process in response.
In ``L1 + adaptation'', or in ``Siamese + adaptation'' under the novel query setting, the retrieved cases are not as close to the queries and the adaptation stage is better suited to handling such pairs. Therefore, the initial solutions are consistently improved by adaptation.
\subsubsection{Comparison between Baseline Regressor and CBR System}
In the normal setting, the baseline regressor is the best performing system. The CBR systems perform worse than the regressor but better than the constant baseline. With abundant data, the regressor consistently outperforms its counterpart CBR systems because the regressor network is capable of learning the domain knowledge well and handling non-novel queries. This is not surprising given the quality of performance achieved by deep learning provided with sufficient data.
In the novel query setting, ``L1 + adaptation'' performs similarly to or even better than the baseline regressor. Additionally, ``Siamese + adaptation" further improves upon ``L1 + adaptation'' and is the best performing of all systems. As novel queries become harder to solve (further from the training case distribution), the benefit of ``Siamese + adaptation" increases. This shows that adaptation knowledge learned by DL-CDH indeed adapts the initial solution by retrieval to better solve the queries, and if the initial solution is closer to the real solution, the adaptation also tends to be more accurate.
The comparison between the baseline regressor and the CBR system is consistent with the results and discussion in \cite{leakeYe2021}, where NN-CDH, the predecessor of DL-CDH, performs worse than its counterpart regressor in an easy task domain but better than the regressor in a domain with novel queries. It also matches the motivation of \cite{Turner2019}, where familiar queries are handled by a CNN, but novel queries are handled by a CBR system.
\section{Future Work}
\subsection{Extending to Classification Task Domains}
NN-CDH has been extended to learn adaptation knowledge in classification task domains \citep{ye-et-al21}.
As a descendant of NN-CDH, DL-CDH can also be extended for classification. As discussed in Section \ref{sec:ImageInCBR}, CBR has been applied to many image classification tasks using DL techniques, but the adaptation stage has not been a focus of existing research. We envision that DL-CDH, once extended for classification, can be a natural extension of works such as \cite{Turner2018,Turner2019}.
\subsection{Image Adaptation Using DL-CDH and GAN}
In work highly relevant to this study, \cite{kenny2020} propose modifying extracted DL features to generate counterfactual and semi-factual cases. They take the additional step of recovering an image from the modified features using a generative adversarial network. Inspired by their work, we consider the features modified by DL-CDH as candidates to project back to image space using a GAN.
In contrast to current models in adversarial-based adaptation \citep{DBLP:journals/corr/abs-1812-04948,DBLP:journals/corr/abs-1711-10678}, whose adaptations are trained and used in a network doing end-to-end processing,
adaptations in DL-CDH are more localized; consequently, they can be more transparent and subject to manual control.
\section{Conclusion}
This study extends the neural network based case difference heuristic approach by combining it with feature extraction from a deep neural network. Its performance was evaluated on a facial image data set for the task of age prediction.
Except for one situation in which retrieval and adaptation are out of synchronization, adaptation by DL-CDH consistently improves the initial solutions provided by retrieval. The testbed DL-CDH system consistently outperforms its counterpart neural network regressor when solving novel queries, but still falls short of its counterpart for queries within the distribution of extensive training data. However, we note that the CBR system offers benefits beyond accuracy alone, such as lazy learning and explanation by cases, enabling online learning without costly retraining.
The work suggests future directions in extending DL-CDH for classification and combining it with GANs to adapt images.
\section*{Acknowledgement}
We acknowledge support from the Department of the Navy, Office of Naval Research (Award N00014-19-1-2655), and the
US Department of Defense (Contract W52P1J2093009).
\bibliographystyle{named}
\section{Introduction}
Private read-update-write (PRUW) \cite{pruw_jpurnal,ourICC,pruw,sparse,rd,dropout,sparseFL1} is the concept of reading data from and writing updates back to specific sections in a data storage system without revealing the section indices or the values of updates to the data storage. Most applications of PRUW are in distributed learning, specifically in federated learning (FL) \cite{FL1,FL2}, where millions of users train various machine learning (ML) models using the private data stored in their local devices. Since each individual user only has access to a limited amount of local data, it is possible that the updates generated by the user for most parameters in the training process are insignificant. Top $r$ sparsification \cite{rtopk,sparse1} is introduced in FL to upload only the most significant $r$ fraction of updates, increasing the efficiency of the FL process by eliminating communications with insignificant impact.
However, in top $r$ sparsification, the users send the sparse updates along with their indices, which leak information about each user's private data \cite{comprehensive, featureLeakage, SecretSharer, InvertingGradients,DeepLeakage}. Note that the values as well as the indices of the sparse updates leak information about the user's personal data since the databases are able to find the specific parameters in the model on which the user's data has the most and least impact. In this work, we propose schemes to perform user-database communications in an FL setting with top $r$ sparsification using PRUW to guarantee privacy of the values and the indices of the sparse updates.
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{./figs/model_coded.eps}
\caption{System model.}
\label{model}
\vspace*{-0.5cm}
\end{figure}
The system model considered in this work consists of $N$ non-colluding databases storing the MDS coded FL model (Fig.~\ref{model}), which requires each user to download and update the most significant $r'$ and $r$ fractions of subpackets, respectively, without revealing their values or indices to the databases. In the proposed schemes, we guarantee information theoretic privacy of the values of updates by adding random noise, based on Shannon's one-time pad theorem.
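As a minimal illustration of the one-time pad principle invoked here (the field size and symbols are illustrative; the actual scheme embeds the noise in the coded storage polynomials), an update $u \in \mathbb{F}_q$ masked with uniform noise $z$ is itself uniformly distributed, yet exactly recoverable by anyone who knows $z$:

```python
import secrets

q = 2**61 - 1                 # illustrative prime field size

def mask(u, z):
    return (u + z) % q        # what a database observes: uniform in F_q

def unmask(c, z):
    return (c - z) % q        # removing the noise recovers the update exactly

u = 123456789                 # a (quantized) update symbol
z = secrets.randbelow(q)      # fresh uniform one-time pad
c = mask(u, z)
print(unmask(c, z) == u)      # True
```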
Further, in order to guarantee information theoretic privacy of the indices of the sparse updates, we use a permutation technique where the coordinator in Fig.~\ref{model} initially assigns a random permutation of sets of parameters of the model and makes it available to all users. At the same time, the coordinator places the corresponding \emph{noise added permutation reversing matrices} at the databases. Note that the databases are unaware of the underlying permutation despite having access to the permutation reversing matrices which are noisy, again due to Shannon's one time pad theorem.
\begin{table*}[ht]
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
case & reading cost & writing cost & storage complexity & information leakage\\
\hline
Case 1 & $\frac{3r'(1+\frac{\log_q P}{N})}{1-\frac{1}{N}}$ & $\frac{3r(1+\log_q P)}{1-\frac{1}{N}}$ & $O(\frac{L^2}{BN^2})$ & $H(\hat{X}_1,\dotsc,\hat{X}_B)$\\
\hline
Case 2 & $\frac{5r'(1+\frac{\log_q P}{N})}{1-\frac{1}{N}}$ & $\frac{5r(1+\log_q P)}{1-\frac{1}{N}}$ & $\max\{O(\frac{L^2}{BN^2}),O(B^2)\}$ & $H(\Tilde{X}_1,\dotsc,\Tilde{X}_B)$\\
\hline
\end{tabular}
\end{center}
\caption{Achievable sets of communication costs, storage costs and amounts of information leakage.}
\label{main_res}
\vspace*{-0.5cm}
\end{table*}
Once the FL process begins, the coordinator leaves the system, and all communications between the users and databases take place in terms of the permuted indices, which guarantees the privacy of the indices of sparse updates. The permuted updates sent by the users are placed at the correct (non-permuted) positions privately, with the aid of the \emph{noise added permutation reversing matrices} stored at the databases.
This process incurs a large storage cost due to the large \emph{noise added permutation reversing matrices} as shown in \cite{pruw_jpurnal,sparse}. To alleviate this, we introduce a segmentation mechanism that divides the FL model into $B$ segments, and carries out permutations separately in each segment to hide the indices of the sparse updates. This reduces the storage cost significantly at the expense of a certain amount of information leakage. The amount of information leaked on the indices of the sparse updates can be maintained under a desired privacy leakage budget by varying the number of segments $B$.
This work differs from \cite{sparseFL1} by using coded storage to achieve lower storage costs at the expense of increased read-write costs. We propose two schemes in this paper to perform private FL with top $r$ sparsification. The first scheme achieves lower read-write costs at the expense of a larger storage cost or information leakage. The second scheme uses an additional round of permutations to reduce the information leakage, at the expense of increased read-write costs. Based on the specifications and limitations of a given FL task, one can choose the most suitable scheme with the optimum number of segments $B$, to perform private FL with top $r$ sparsification.
\section{Problem Formulation}
We consider $N$ non-colluding databases, each storing an FL model consisting of $L$ parameters, which are divided into $P$ subpackets, each containing $\ell=\frac{L}{P}$ parameters. All parameters take values from a large enough finite field $\mathbb{F}_q$. The parameters of each subpacket are combined to obtain a single symbol using an $(\ell,N)$ MDS code, to reduce the storage cost.
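As a minimal sketch of such noise-padded coded storage (a single-parameter toy instance over a small prime field; the concrete polynomial structure, subpacketization and parameters of the paper's scheme differ), each database $n$ stores an evaluation that individually looks uniform, while enough evaluations recover the parameter by Lagrange interpolation:

```python
import random

q = 2**31 - 1                      # illustrative prime field
rng = random.Random(1)

def store(W, alphas, x=1):
    # S_n = W / alpha_n + sum_{i=0}^{x} alpha_n^i Z_i  (subpacket size 1)
    Z = [rng.randrange(q) for _ in range(x + 1)]
    return [(W * pow(a, -1, q)
             + sum(Z[i] * pow(a, i, q) for i in range(x + 1))) % q
            for a in alphas]

def recover(shares, alphas):
    # alpha_n * S_n = W + Z0*alpha_n + Z1*alpha_n^2: a degree-2 polynomial
    # whose constant term is W; Lagrange-interpolate it at 0.
    ys = [(a * s) % q for a, s in zip(alphas, shares)]
    W = 0
    for j, (aj, yj) in enumerate(zip(alphas, ys)):
        num, den = 1, 1
        for m, am in enumerate(alphas):
            if m != j:
                num = num * (-am) % q
                den = den * (aj - am) % q
        W = (W + yj * num * pow(den, -1, q)) % q
    return W

alphas = [1, 2, 3]                 # N = 3 databases suffice for degree 2
shares = store(123456, alphas)
print(recover(shares, alphas))     # 123456
```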
Top $r$ sparsification is considered in both uplink and downlink. The process is divided into two phases, namely, the reading phase and the writing phase. In the reading phase at time $t$, each user reads (downloads) a set of $Pr'$ subpackets from all databases, where $r'$, $0\leq r'\leq1$, is the downlink sparsification rate. These $Pr'$ subpackets are determined by the databases, based on the information received from the users in the writing phase at time $t-1$. In the writing phase at time $t$, each user chooses the $Pr$ subpackets with the most significant updates, where $r$, $0\leq r\leq1$, is the uplink sparsification rate, and sends the updates corresponding to those $Pr$ subpackets along with their indices to all databases (direct values and indices are not revealed). Note that privacy leakage can occur only in the writing phase, since the user does not send any information to the databases in the reading phase. The following privacy, security and correctness conditions are considered in this work.
\emph{Privacy of the values of updates:} No information on the values of updates is allowed to leak to the databases, i.e.,
\begin{align}
I(\Delta_i^{[t]};G_n^{[t]})=0, \quad \forall n, \quad \forall i,
\end{align}
where $\Delta_{i}^{[t]}$ is the $i$th sparse update and $G_{n}^{[t]}$ is all the information sent by the user to database $n$, both at time $t$.
\emph{Privacy of the indices of sparse updates:} The amount of information leaked on the indices of the sparse updates needs to be maintained under a given privacy leakage budget $\epsilon$, i.e.,
\begin{align}
I(X^{[t]};G_n^{[t]})\leq\epsilon, \quad \forall n,
\end{align}
where $X^{[t]}$ is the set of indices of the sparse subpackets updated by a given user at time $t$. The system model with the privacy constraints is shown in Fig.~\ref{model}. A coordinator is used to initialize the scheme.
\emph{Security of the model:} No information about the model parameters is allowed to leak to the databases, i.e.,
\begin{align}
I(W^{[t]};S_n^{[t]})=0, \quad \forall n,
\end{align}
where $W^{[t]}$ is the FL model and $S_n^{[t]}$ is the data content in database $n$ at time $t$.
\emph{Correctness in the reading phase:} The user should be able to correctly decode the sparse set of subpackets $J$ of the model, from the downloads in the reading phase, i.e.,
\begin{align}
H(W_{J}^{[t-1]}|A_{1:N}^{[t]})=0,
\end{align}
where $W_{J}^{[t-1]}$ is the set of subpackets in $J$ (before updating) and $A_n^{[t]}$ is the information downloaded from database $n$ at time $t$.
\emph{Correctness in the writing phase:} Let $J'$ be the set of most significant $Pr$ subpackets of the model, updated by a user at time $t$. Then, the model should be correctly updated as,
\begin{align}
W_{s}^{[t]}=
\begin{cases}
W_{s}^{[t-1]}+\Delta_{s}^{[t]}, & \text{if $s\in J'$}\\
W_{s}^{[t-1]}, & \text{if $s\notin J'$}
\end{cases},
\end{align}
where $W_{s}^{[t-1]}$ is subpacket $s$ of the FL model at time $t-1$, and $\Delta_{s}^{[t]}$ is the corresponding update of subpacket $s$ at time $t$.
\emph{Reading and writing costs:} The reading and writing costs are defined as $C_R=\frac{\mathcal{D}}{L}$ and $C_W=\frac{\mathcal{U}}{L}$, respectively, where $\mathcal{D}$ is the total number of symbols downloaded in the reading phase, $\mathcal{U}$ is the total number of symbols uploaded in the writing phase, and $L$ is the size of the model.
\emph{Storage complexity:} The storage complexity is quantified by the order of the number of symbols stored in each database.
In this work, we propose schemes to perform user-database communications in FL with top $r$ sparsification on MDS coded data to reduce the storage cost, and quantify the minimum achievable communication costs while satisfying all privacy, security and correctness conditions described above.
\section{Main Result}
\begin{theorem}
Consider an FL setting with top $r$ sparsification, where a model with $L$ parameters, grouped into $P$ subpackets of $\ell$ parameters each, is stored in $N$ non-colluding databases using an $(\ell,N)$ MDS code. The $P$ subpackets are further divided into $B$ segments, each consisting of $\frac{P}{B}$ consecutive, non-overlapping subpackets. Let $\hat{X}_i$, $i\in\{1,\dotsc,B\}$, be the random variable representing the number of sparse subpackets updated by a given user from segment $i$, and let $(\Tilde{X}_1,\dotsc,\Tilde{X}_B)$ be the vector representing all distinct combinations of $(\hat{X}_1,\dotsc,\hat{X}_B)$ irrespective of the segment index. Then, the communication costs, storage costs and amounts of information leakage in Table~\ref{main_res} are achievable.
\end{theorem}
\begin{remark}
The two cases in Table~\ref{main_res} correspond to the results of two schemes presented in Section~\ref{scheme}. Scheme 1 (case 1) results in a lower communication cost compared to scheme 2, at the expense of a larger information leakage. The information leakage of scheme 2 is smaller than that of scheme 1, i.e., $H(\Tilde{X}_1,\dotsc,\Tilde{X}_B)\leq H(\hat{X}_1,\dotsc,\hat{X}_B)$, since $(\Tilde{X}_1,\dotsc,\Tilde{X}_B)$ combines all different permutations of $(\hat{X}_1,\dotsc,\hat{X}_B)$.
\end{remark}
\begin{remark}
The information leakage in Table~\ref{main_res} corresponds to the amount of information leaked on the indices of the sparse updates. The number of segments $B$ can be chosen based on the allowed privacy leakage budget $\epsilon$, by solving $H(\hat{X}_1,\dotsc,\hat{X}_B)\leq\epsilon$ or $H(\Tilde{X}_1,\dotsc,\Tilde{X}_B)\leq\epsilon$, based on the chosen case. Information theoretic privacy, i.e., $\epsilon=0$ can be achieved when $B=1$ since $H(\hat{X}_1)=H(\Tilde{X}_1)=H(Pr)=0$.
\end{remark}
\begin{remark}
Consider an example setting with $P=18$ subpackets divided into $B=1, 2, 3, 6, 9$ segments. Assume that each subpacket is equally likely to be selected for the set of the most significant $Pr=3$ subpackets. The behavior of the information leakage for each value of $B$ is shown in Fig.~\ref{leak}. In general, the higher the storage complexity, the lower the information leakage is, and vice versa.
\end{remark}
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{./figs/leak2.eps}
\caption{Information leakage of an example setting with $P=18$, $Pr=3$ and different values of $B$.}
\label{leak}
\end{figure}
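The leakage values in this example can be reproduced by direct enumeration. The following Python sketch computes $H(\hat{X}_1,\dotsc,\hat{X}_B)$ in bits, assuming every size-$Pr$ subset of subpackets is equally likely; entropy measured in $q$-ary units, as elsewhere in the paper, differs only by a constant factor.

```python
from itertools import combinations
from math import log2

def leakage(P, k, B):
    # entropy of the per-segment sparse-update counts (X1_hat, ..., XB_hat)
    seg = P // B
    counts = {}
    for subset in combinations(range(P), k):
        key = tuple(sum(1 for s in subset if s // seg == b) for b in range(B))
        counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

for B in (1, 2, 3, 6, 9):
    print(B, round(leakage(18, 3, B), 3))   # B=1 leaks nothing; leakage grows with B
```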
\section{Proposed Schemes}\label{scheme}
\subsection{Case 1: Within-Segment Permutations}
The proposed scheme for case 1 utilizes random noise addition and within-segment permutations to guarantee the privacy of the read-write process. Due to space limitations, the scheme is presented in terms of an example. The example setting is shown in Fig.~\ref{init_c1_eg}, where the FL model consists of $P=15$ subpackets grouped into $B=3$ segments, each containing five subpackets.
\begin{figure}
\centering
\includegraphics[scale=0.55]{figs/c1.eps}
\caption{Initialization of the scheme for case 1.}
\label{init_c1_eg}
\vspace*{-0.4cm}
\end{figure}
\subsubsection{Initialization}
The storage of a single subpacket $s$ in database $n$, $n\in\{1,\dotsc,N\}$, is given by,
\begin{align}\label{singlesp}
S_n^{[s]}=\sum_{i=1}^\ell \frac{1}{\alpha_n^i}W_i^{[s]}+\sum_{i=0}^x \alpha_n^i Z_{s,i},
\end{align}
with $x=\ell$, where $W_i^{[s]}$ is the $i$th parameter of subpacket $s$, $Z_{s,i}$ is a random noise symbol, $\ell$ is the size of a subpacket (subpacketization) and $\alpha_n$s are globally known distinct constants from $\mathbb{F}_q$. Therefore, the storage of segment $j$, $j\in\{1,2,3\}$, each consisting of five subpackets is given by,
\begin{align}\label{storage_c2_eg}
S_{n,j}=\begin{bmatrix}
\sum_{i=1}^\ell \frac{1}{\alpha_n^i}W_i^{[1,j]}+\sum_{i=0}^x \alpha_n^i Z_{1,i}\\
\vdots\\
\sum_{i=1}^\ell \frac{1}{\alpha_n^i}W_i^{[\frac{P}{B},j]}+\sum_{i=0}^x \alpha_n^i Z_{\frac{P}{B},i}
\end{bmatrix},
\end{align}
with $x=\ell$ and $\frac{P}{B}=5$, where $W_i^{[k,j]}$ is the $i$th symbol of subpacket $k$ of segment $j$. Before the FL process begins, the coordinator randomly chooses a permutation of the five subpackets in each of the $B=3$ segments, from the $5!$ options. Let the permutations assigned to the three segments be $\Tilde{P}_1=\{2,1,4,5,3\}$, $\Tilde{P}_2=\{3,5,2,4,1\}$ and $\Tilde{P}_3=\{5,2,3,1,4\}$, respectively. These permutations are known to all participating users, but not to the databases. The coordinator also places the corresponding three noise added permutation reversing matrices at each of the databases, as shown in Fig.~\ref{init_c1_eg}. For example, the noise added permutation reversing matrix corresponding to the first segment, stored at database $n$, $n\in\{1,\dotsc,N\}$, is given by,
\begin{align}\label{R1_c2}
R_n^{[1]}=\begin{bmatrix}
0 & 1 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 0\\
\end{bmatrix}+\alpha_n^\ell\bar{Z}_1,
\end{align}
where $\bar{Z}_1$ is a matrix of size $5\times5$, consisting of random elements from $\mathbb{F}_q.$ Note that the binary matrix in \eqref{R1_c2} reverses the permutation $\Tilde{P}_1$, while the noise component $\alpha_n^\ell\bar{Z}_1$ ensures that the databases learn nothing about the underlying permutation from the noise added permutation reversing matrices.
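The construction of the binary part of \eqref{R1_c2} can be sketched numerically. The fragment below (illustrative Python, with real arithmetic standing in for $\mathbb{F}_q$ and the noise term omitted) builds the reversing matrix for $\Tilde{P}_1=\{2,1,4,5,3\}$ and checks that it moves an entry at a permuted position back to its real position:

```python
import numpy as np

def reversing_matrix(perm):
    """Binary matrix R with R[perm[i]-1, i] = 1; it maps a vector indexed
    by permuted positions back to real (unpermuted) positions."""
    m = len(perm)
    R = np.zeros((m, m), dtype=int)
    for i, p in enumerate(perm):
        R[p - 1, i] = 1
    return R

P1 = [2, 1, 4, 5, 3]          # within-segment permutation of the example
R = reversing_matrix(P1)

# An entry at permuted position 1 belongs to real subpacket P1[1] = 2;
# R must move it there.
y = np.zeros(5)
y[0] = 42.0
u = R @ y                     # u = [0, 42, 0, 0, 0]
```

The matrix produced agrees with the binary part of \eqref{R1_c2}; the noise term $\alpha_n^\ell\bar{Z}_1$ would simply be added on top.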
\subsubsection{Reading Phase}
In the reading phase, the databases decide on a set of $Pr'$ sparse subpackets to be sent to the users at time $t$, based on the sparse updates received at time $t-1$ (for example, the most commonly updated $Pr'$ subpackets). Note that all communications between users and databases take place in terms of the permuted indices of subpackets. Therefore, the $Pr'$ sparse subpackets selected to be sent to the users are also indicated by their permuted indices. Let $\tilde{V}_j$ be the set of permuted indices of the sparse subpackets chosen from segment $j$ to be sent to the users for $j\in\{1,2,3\}$. For example, let $\Tilde{V}_1=\{1,3\}$ be the permuted set of sparse subpackets of segment 1 that needs to be sent to the users at time $t$. One designated database sends the permuted subpacket indices of each segment (segment 1: $\Tilde{V}_1=\{1,3\}$) to the users, from which the users identify the corresponding real sparse subpacket indices using the known permutations. For example, the real indices $V_1$ corresponding to $\Tilde{V}_1=\{1,3\}$ are given by $V_1=\{\Tilde{P}_1(1),\Tilde{P}_1(3)\}=\{2,4\}$.
Once the permuted indices of the sparse subpackets are sent to the users, each database generates a query to send each sparse subpacket. The query corresponding to the $i$th permuted sparse subpacket of segment $j$, i.e., $\Tilde{V}_j(i)$, is given by $Q_n^{[\Tilde{V}_j(i)]}=R_n^{[j]}(:,\Tilde{V}_j(i))$ for database $n$, $n\in\{1,\dotsc,N\}$. For example, the query corresponding to the first permuted subpacket of segment 1, i.e., $\Tilde{V}_1(1)=1$, is
\begin{align}\label{q_1}
Q_n^{[\Tilde{V}_1(1)]}&=Q_n^{[1]}=R_n^{[1]}(:,1)=
[0,1,0,0,0]^T+\alpha_n^\ell\hat{Z}_1,
\end{align}
where $\hat{Z}_1$ is the first column of $\bar{Z}_1$ in \eqref{R1_c2}. Then, database $n$, $n\in\{1,\dotsc,N\}$, sends the answer to the query in \eqref{q_1} as
\begin{align} A_n^{[\tilde{V}_1(1)]}&=S_{n,1}Q_n^{[\tilde{V}_1(1)]}
=\sum_{i=1}^\ell\frac{1}{\alpha_n^i}W_i^{[2,1]}+P_{\alpha_n}(2\ell),\label{ans2}
\end{align}
where $P_{\alpha_n}(2\ell)$ is a polynomial in $\alpha_n$ of degree $2\ell$. The users then obtain the parameters of the real subpacket 2, i.e., $V_1(1)=\Tilde{P}_1(\Tilde{V}_1(1))=2$, by solving
\begin{align}\label{mat_c2}
\begin{bmatrix} \!A_1^{[\Tilde{V}_1(1)]}\!\\\!\vdots\!\\\!A_N^{[\Tilde{V}_1(1)]}\!
\end{bmatrix}
\!=\!
& \begin{bmatrix}
\frac{1}{\alpha_1^\ell}\!\! & \!\!\dotsc \!\!&\!\! \frac{1}{\alpha_1} \!\!&\!\! 1 \!\!&\!\! \alpha_1 \!&\! \dotsc \!\!&\!\! \alpha_1^{2\ell}\\
\vdots \!\!&\!\! \vdots \!\!&\!\! \vdots \!\!&\!\! \vdots \!\!&\!\! \vdots \!\!&\!\! \vdots \!\!&\!\! \vdots\\
\frac{1}{\alpha_N^\ell} \!\!&\!\! \dotsc \!\!&\!\! \frac{1}{\alpha_N} \!\!&\!\! 1 \!\!&\!\! \alpha_N \!\!\!&\!\!\! \dotsc \!\!&\!\! \alpha_N^{2\ell}\\
\end{bmatrix}\!\!
\begin{bmatrix}
W_{\ell}^{[2,1]}\\\vdots\\W_{1}^{[2,1]}\\ R_{0:2\ell}
\end{bmatrix}
\end{align}
where $R_i$ are the coefficients of $P_{\alpha_n}(2\ell)$. Note that \eqref{mat_c2} is solvable given that $N=3\ell+1$, which determines the subpacketization as $\ell=\frac{N-1}{3}$. The same procedure described above is carried out for all sparse subpackets in each of the $B=3$ segments. The resulting reading cost is given by,
\begin{align}
C_R&=\frac{Pr'\log_q P+Pr'N}{L}=\frac{3r'(1+\frac{\log_q P}{N})}{1-\frac{1}{N}}.
\end{align}
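The decoding step in \eqref{mat_c2} is a linear solve against the powers $\alpha_n^{-\ell},\dotsc,\alpha_n^{2\ell}$. The sketch below (illustrative Python, with real evaluation points standing in for the finite-field constants) verifies that $N=3\ell+1$ answers suffice to separate the subpacket symbols from the interference polynomial:

```python
import numpy as np

rng = np.random.default_rng(0)
ell = 2                               # subpacketization
N = 3 * ell + 1                       # number of databases required
alphas = np.arange(1.0, N + 1)        # distinct evaluation points

W = rng.standard_normal(ell)          # true symbols W_1..W_ell
R = rng.standard_normal(2 * ell + 1)  # interference coefficients R_0..R_{2ell}

# Answers of the N databases, cf. (ans2): sum_i W_i/a^i + sum_j R_j a^j
A = np.array([sum(W[i] / a**(i + 1) for i in range(ell))
              + sum(R[j] * a**j for j in range(2 * ell + 1)) for a in alphas])

# Coefficient matrix of (mat_c2): columns are a^{-ell}, ..., a^{2ell}
M = np.array([[a**e for e in range(-ell, 2 * ell + 1)] for a in alphas])
x = np.linalg.solve(M, A)

# The first ell entries are [W_ell, ..., W_1]; reverse to recover W.
W_rec = x[:ell][::-1]
```

Reversing the first $\ell$ entries recovers $W_1,\dotsc,W_\ell$ exactly (up to floating-point error in this real-valued stand-in).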
\subsubsection{Writing Phase}
After carrying out the training process locally, the user chooses the $Pr$ subpackets with the most significant set of updates, and sends combined updates corresponding to each of the selected $Pr$ subpackets, along with their permuted indices. The combined update of the $i$th subpacket of segment $j$ is defined, assuming this subpacket is among the sparse set, as
\begin{align}\label{comb}
U_n^{[i,j]}=\sum_{k=1}^\ell \frac{1}{\alpha_n^k}\Delta_{k}^{[i,j]}+Z^{[i,j]},
\end{align}
where $\Delta_{k}^{[i,j]}$ is the update of the $k$th symbol of the $i$th subpacket of segment $j$ and $Z^{[i,j]}$ is a random noise symbol.
For example, assume that the user chooses to send the updates of real subpackets 2 and 4 from segment 1, subpacket 2 from segment 2 and subpacket 5 from segment 3. Note that the permuted subpacket index corresponding to the real subpacket 2 of segment 1 is 1, based on $\Tilde{P}_1=\{2,1,4,5,3\}$. Therefore, the permuted\footnote{For case 1, we only consider permutations within segments, and not among segments. Therefore, the real segment index is revealed to the databases.} (update, subpacket, segment) tuple corresponding to the first sparse update (real subpacket 2 of segment 1), which is sent by the user to database $n$, $n\in\{1,\dotsc,N\}$, is $(U_n^{[2,1]},1,1)$. Similarly, the rest of the permuted (update, subpacket, segment) tuples for this example are given by $(U_n^{[4,1]},3,1)$, $(U_n^{[2,2]},3,2)$ and $(U_n^{[5,3]},1,3)$, based on the rest of the initial permutations, $\Tilde{P}_2=\{3,5,2,4,1\}$ and $\Tilde{P}_3=\{5,2,3,1,4\}$. Each of these permuted tuples is sent to database $n$, $n\in\{1,\dotsc,N\}$, by the user. Once the databases receive the $Pr$ permuted (update, subpacket, segment) tuples, they create the permuted update vectors $Y_n^{[j]}$ for each segment $j$, $j\in\{1,2,3\}$. For the example considered, the permuted update vectors of the three segments are given by,
\begin{align}
Y_n^{[1]}&=[U_n^{[2,1]},0,U_n^{[4,1]},0,0]^T\\
Y_n^{[2]}&=[0,0,U_n^{[2,2]},0,0]^T\\
Y_n^{[3]}&=[U_n^{[5,3]},0,0,0,0]^T,
\end{align}
for database $n$. Using these permuted update vectors and the noise added permutation reversing matrices stored, database $n$, $n\in\{1,\dotsc,N\}$, privately rearranges the updates in the correct order as $\bar{U}_n^{[j]}=R_n^{[j]}Y_n^{[j]}$, $j\in\{1,2,3\}$. For example, the privately rearranged update vector in the correct order for segment 1 in database $n$ is given by,
\begin{align}
\bar{U}_n^{[1]}&\!=\!R_n^{[1]}Y_n^{[1]}\!=\!\!\left(\!\begin{bmatrix}
0 & 1 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 0\\ \end{bmatrix}\!+\!\alpha_n^\ell\bar{Z}_1\!\!\right)
\!\!\!\begin{bmatrix}U_n^{[2,1]}\\0\\U_n^{[4,1]}\\0\\0
\end{bmatrix}\\
&=\!\!\begin{bmatrix}
0\\U_n^{[2,1]}\\0\\U_n^{[4,1]}\\0
\end{bmatrix}\!+\!P_{\alpha_n}(\ell)\!=\!\begin{bmatrix}
0\\\sum_{i=1}^\ell \frac{1}{\alpha_n^i}\Delta_{i}^{[2,1]}\\0\\\sum_{i=1}^\ell \frac{1}{\alpha_n^i}\Delta_{i}^{[4,1]}\\0
\end{bmatrix}\!+\!P_{\alpha_n}(\ell), \label{incr_c2}
\end{align}
where $P_{\alpha_n}(\ell)$ here is a vector of size $5\times1$, consisting of polynomials in $\alpha_n$ of degree $\ell$. Note that the updates of real subpackets 2 and 4 in segment 1 are now placed correctly in \eqref{incr_c2} at the $2$nd and $4$th positions, without the knowledge of the databases. Since the incremental update of each segment \eqref{incr_c2} is in the same form as the storage in \eqref{storage_c2_eg}, the incremental update is directly added to the existing storage to obtain the updated version, i.e., $S_{n,j}(t)=S_{n,j}(t-1)+\bar{U}_n^{[j]}$, $j\in\{1,2,3\}$ in each database. The writing cost for case 1 is given by,
\begin{align}
\!C_W=\frac{PrN(1+\log_q B+\log_q \frac{P}{B})}{L}=\frac{3r(1+\log_qP)}{1-\frac{1}{N}}.
\end{align}
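The key property of the writing phase, namely that the noisy reversal $\bar{U}_n^{[j]}=R_n^{[j]}Y_n^{[j]}$ places each update at its real position while the noise contributes only nonnegative powers of $\alpha_n$, can be checked numerically. The sketch below (illustrative Python, real arithmetic in place of $\mathbb{F}_q$) reproduces the example of \eqref{incr_c2}:

```python
import numpy as np

rng = np.random.default_rng(1)
ell, alpha = 3, 1.7            # toy subpacketization and evaluation point

def reversing_matrix(perm):
    R = np.zeros((len(perm), len(perm)))
    for i, p in enumerate(perm):
        R[p - 1, i] = 1.0
    return R

P1 = [2, 1, 4, 5, 3]
Z = rng.standard_normal((5, 5))              # noise matrix \bar{Z}_1
Rn = reversing_matrix(P1) + alpha**ell * Z   # noisy reversing matrix

def combined_update(deltas):
    """Combined update (comb): sum_k Delta_k / alpha^k + noise symbol."""
    return sum(d / alpha**(k + 1) for k, d in enumerate(deltas)) \
        + rng.standard_normal()

# Updates for real subpackets 2 and 4 of segment 1 sit at permuted
# positions 1 and 3, as in the example.
d2, d4 = rng.standard_normal(ell), rng.standard_normal(ell)
Y = np.array([combined_update(d2), 0.0, combined_update(d4), 0.0, 0.0])

U_bar = Rn @ Y
# Subtracting the known noise contribution alpha^ell * Z @ Y leaves the
# correctly reordered updates at positions 2 and 4.
data_part = U_bar - alpha**ell * (Z @ Y)
```

The residual noise term is a polynomial in $\alpha_n$ of degree at most $\ell$, so it is absorbed into the noise part of the storage when $\bar{U}_n^{[j]}$ is added to $S_{n,j}$.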
The total storage complexity is given by
\begin{align}
O(P)+O\left(\frac{P^2}{B^2}\times B\right)=O\left(\frac{P^2}{B}\right)=O\left(\frac{L^2}{BN^2}\right).
\end{align}
\subsection{Case 2: Within-Segment and Inter-Segment Permutations}
In addition to noise addition and within-segment permutations considered in case 1 to guarantee the required privacy constraints, we consider inter-segment permutations as well in case 2 to achieve higher privacy guarantees. The scheme is presented in terms of an example, which is shown in Fig.~\ref{init_c3_eg}, where there are $12$ subpackets, divided into three segments.
\begin{figure}
\centering
\includegraphics[scale=0.5]{figs/c3.eps}
\caption{Initialization of the scheme for case 2.}
\label{init_c3_eg}
\end{figure}
\subsubsection{Initialization}
The storage of a single subpacket $s$ is the same as \eqref{singlesp} with $x=2\ell$, and the storage of a given segment $j$, $j\in\{1,2,3\}$, is the same as \eqref{storage_c2_eg} with $x=2\ell$ and $\frac{P}{B}=4$. As described in case 1, the coordinator randomly chooses the three within-segment permutations $\Tilde{P}_1$, $\Tilde{P}_2$, $\Tilde{P}_3$ and the inter-segment permutation $\hat{P}$, and sends them to the users as shown in Fig.~\ref{init_c3_eg}. The coordinator also places the corresponding four noise added permutation reversing matrices given by $R_n^{[1]}$, $R_n^{[2]}$, $R_n^{[3]}$ and $\hat{R}_n$ at database $n$, $n\in\{1,\dots,N\}$. For instance, the noise added permutation reversing matrix corresponding to the first within-segment permutation $\tilde{P}_1=\{2,4,3,1\}$ in the example considered in Fig.~\ref{init_c3_eg} is given by,
\begin{align}
R_n^{[1]}=\begin{bmatrix}
0 & 0 & 0 & 1\\
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0
\end{bmatrix}+\alpha_n^{\ell}\bar{Z}_1,
\end{align}
where $\Bar{Z}_1$ is a random noise matrix of size $4\times4$. The noise added permutation reversing matrix corresponding to the inter-segment permutation $\hat{P}$ is given by,
\begin{align}
\hat{R}_n=\begin{bmatrix}
0 & 0 & 1\\
1 & 0 & 0\\
0 & 1 & 0
\end{bmatrix}+\alpha_n^{\ell} Z,
\end{align}
where $Z$ is a random noise matrix of size $3\times3$. To aid the calculations of this scheme, we combine the two types of noise added permutation reversing matrices to obtain a combined noisy permutation reversing matrix (this is not stored at databases). For the example considered, the combined noisy permutation reversing matrix of database $n$ is given by,
\begin{align}
R_n&=\begin{bmatrix}
R_n^{[1]} & 0_{4\times4} & 0_{4\times4}\\
0_{4\times4} & R_n^{[2]} & 0_{4\times4}\\
0_{4\times4} & 0_{4\times4} & R_n^{[3]}
\end{bmatrix}\times (\hat{R}_n\otimes I_4)\\
&=\begin{bmatrix}
0_{4\times4} & 0_{4\times4} & \begin{bmatrix}
0 & 0 & 0 & 1\\
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0
\end{bmatrix}\\
\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1
\end{bmatrix} & 0_{4\times4} & 0_{4\times4}\\
0_{4\times4} & \begin{bmatrix}
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0
\end{bmatrix} & 0_{4\times4}
\end{bmatrix}\nonumber\\
&\quad +\alpha_n^{\ell}P_{\alpha_n}(\ell),
\end{align}
where $I_4$ is the identity matrix of size $4\times4$ and $P_{\alpha_n}(\ell)$ here is a matrix of size $12\times12$ with entries consisting of polynomials of $\alpha_n$ of up to degree $\ell$.
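The binary part of the combined matrix $R_n$ can be assembled exactly as written, as a block-diagonal product with the Kronecker-lifted inter-segment matrix. The sketch below (illustrative Python, binary parts only, noise omitted) reproduces the block structure and the query column used in \eqref{queg1}:

```python
import numpy as np

def reversing_matrix(perm):
    """Binary permutation reversing matrix with R[perm[i]-1, i] = 1."""
    R = np.zeros((len(perm), len(perm)), dtype=int)
    for i, p in enumerate(perm):
        R[p - 1, i] = 1
    return R

P1, P2, P3 = [2, 4, 3, 1], [1, 3, 2, 4], [3, 1, 4, 2]
Phat = [2, 3, 1]                       # inter-segment permutation

# Block-diagonal matrix of the three within-segment reversing matrices
block_diag = np.zeros((12, 12), dtype=int)
for j, perm in enumerate((P1, P2, P3)):
    block_diag[4*j:4*j+4, 4*j:4*j+4] = reversing_matrix(perm)

# Combined matrix: blkdiag(R1, R2, R3) @ (Rhat kron I_4)
R = block_diag @ np.kron(reversing_matrix(Phat), np.eye(4, dtype=int))

# Column 9 (index 8) should single out real subpacket 2 of segment 1,
# matching the query in (queg1): [0, 1, 0, 0 | 0 ... 0]^T.
col9 = R[:, 8]
```

Since both factors are permutation matrices, the product is again a permutation matrix, with the three within-segment blocks landing in the block positions dictated by $\hat{P}$.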
\subsubsection{Reading Phase}
In the reading phase, the databases determine the set of $Pr'$ subpackets to be sent to the users at time $t$, based on the permuted information received from all users in the writing phase of time $t-1$, as explained in case 1. Since both subpacket and segment indices are communicated in terms of their permuted versions, the $Pr'$ subpackets chosen by the databases in the reading phase are also indicated by their permuted indices. Let the permuted (subpacket, segment) tuples of the $Pr'$ subpackets to be sent to the users be denoted by $(\eta_p,\phi_p)$. This information is sent to all users by one designated database. For example, assume that the designated database sends the permuted (subpacket, segment) tuples given by $(\eta_p,\phi_p)=\{(1,3), (1,1), (1,2)\}$. These permuted tuples can be converted to their real indices using the permutations known by the users as follows. Consider the first permuted pair $(1,3)$. Since the permuted segment index is $\phi_p=3$, the corresponding real segment index is $\phi_r=\hat{P}(3)=1$. Then, the user can decode the subpacket index within the first segment as $\eta_r=\Tilde{P}_1(1)=2$. Therefore, the real (subpacket, segment) pair corresponding to the permuted (subpacket, segment) pair $(\eta_p,\phi_p)=(1,3)$ is given by $(\eta_r,\phi_r)=(2,1)$. Similarly, the real set of sparse subpacket indices corresponding to the three permuted pairs is given by $(\eta_r,\phi_r)=\{(2,1),(1,2),(3,3)\}$.
In order to send the subpacket corresponding to $(\eta_p,\phi_p)=(i,j)$, database $n$, $n\in\{1,\dots,N\}$, creates a query given by,
\begin{align}
Q_n^{[i,j]}&=R_n(:,(j-1)\frac{P}{B}+i).
\end{align}
For $(\eta_p,\phi_p)=(1,3)$, the corresponding query is given by,
\begin{align}\label{queg1}
Q_n^{[1,3]}&=R_n(:,9)=[[0,1,0,0],0_{1\times8}]^T
+\alpha_n^{\ell}P_{\alpha_n}(\ell),
\end{align}
where $P_{\alpha_n}(\ell)$ is a vector of size $12\times1$ with entries consisting of polynomials of $\alpha_n$ of degrees up to $\ell$. The corresponding answer to the query in \eqref{queg1} is given by,
\begin{align}
A_n^{[1,3]}&=S_n^TQ_n^{[1,3]}=\sum_{i=1}^\ell \frac{1}{\alpha_n^i}W_i^{[2,1]}+P_{\alpha_n}(4\ell).
\end{align}
The users can obtain the parameters of the second subpacket of segment 1 (since the real indices corresponding to permuted $(1,3)$ are $(2,1)$) using the $N$ answers received if $N=5\ell+1$ is satisfied. This defines the subpacketization for case 2 as $\ell=\frac{N-1}{5}$. The resulting reading cost is given by,
\begin{align}
C_R&=\frac{Pr'(N+\log_q B+\log_q\frac{P}{B})}{L}=\frac{5r'(1+\frac{\log_q P}{N})}{1-\frac{1}{N}}.
\end{align}
\subsubsection{Writing Phase}
In the writing phase, each user selects the $Pr$ subpackets with the most significant updates, and sends the corresponding combined updates along with their permuted subpacket and segment indices to all databases. Let $(\eta_r^{[m]},\phi_r^{[m]})$, $m\in\{1,\dotsc,Pr\}$, be the real (subpacket, segment) information of the $m$th sparse subpacket. The combined update of the $m$th sparse subpacket is given by \eqref{comb} with $i=\eta_r^{[m]}$ and $j=\phi_r^{[m]}$. For the example considered in Fig.~\ref{init_c3_eg}, assume that a user wants to update real (subpacket, segment) pairs given by $(\eta_r,\phi_r)=\{(2,1),(2,2),(3,3)\}$. Based on the within- and inter-segment permutations given by $\Tilde{P}_1=(2,4,3,1)$, $\Tilde{P}_2=(1,3,2,4)$, $\Tilde{P}_3=(3,1,4,2)$ and $\hat{P}=(2,3,1)$, the user sends the permuted (update, subpacket, segment) tuples corresponding to each of the $Pr$ subpackets to all databases. Consider the first sparse subpacket denoted by $(\eta_r,\phi_r)=(2,1)$. The permuted subpacket index corresponding to $\eta_r=2$ when $\phi_r=1$ is given by $\eta_p=\Tilde{P}_{\phi_r}^{-1}(2)=1$. The permuted segment index corresponding to $\phi_r=1$ is given by $\phi_p=\hat{P}^{-1}(1)=3$. Therefore, the permuted (update, subpacket, segment) tuple corresponding to the first sparse subpacket, sent to database $n$ is given by $(U_n^{[2,1]},1,3)$. Similarly, the tuples corresponding to the other sparse subpackets are given by $(U_n^{[2,2]},3,1)$ and $(U_n^{[3,3]},1,2)$. Similar to case 1, the databases create the permuted update vector based on the permuted tuples received by the user. For the example considered, the permuted update vector is given by,
\begin{align}
Y_n=[0,0,U_n^{[2,2]},0,U_n^{[3,3]},0,0,0,U_n^{[2,1]},0,0,0]^T.
\end{align}
Then, each database calculates the permutation-reversed incremental update as,
\begin{align}
\bar{U}_n&=R_nY_n=\left[0,\sum_{i=1}^\ell \frac{1}{\alpha_n^i}\Delta_{i}^{[2,1]},0,0,0,\sum_{i=1}^\ell \frac{1}{\alpha_n^i}\Delta_{i}^{[2,2]},0,0,\right.\nonumber\\
&\qquad\qquad\qquad\left. 0,0,\sum_{i=1}^\ell \frac{1}{\alpha_n^i}\Delta_{i}^{[3,3]},0\right]^T+P_{\alpha_n}(2\ell), \label{last_c4}
\end{align}
where $P_{\alpha_n}(2\ell)$ is a vector of size $12\times1$, consisting of polynomials of $\alpha_n$ of degrees up to $2\ell$. Note that the (real) subpacket 2 of segment 1, subpacket 2 of segment 2 and subpacket 3 of segment 3 ($(\eta_r,\phi_r)=\{(2,1),(2,2),(3,3)\}$) are correctly updated in \eqref{last_c4}, without revealing the real indices to the databases. Since the incremental update in \eqref{last_c4} is in the same form as \eqref{storage_c2_eg} with $x=2\ell$ and $\frac{P}{B}=4$, it is directly added to the existing storage to obtain the updated storage. The writing cost is given by,
\begin{align}
\!C_W=\frac{PrN(1+\log_q B+\log_q \frac{P}{B})}{L}=\frac{5r(1+\log_q P)}{1-\frac{1}{N}}.
\end{align}
The storage complexities of the data, the noise added within-segment permutation reversing matrices, and the inter-segment permutation reversing matrix are given by $O(P)=O(\frac{L}{N})$, $O(\frac{P^2}{B})=O(\frac{L^2}{N^2B})$ and $O(B^2)$, respectively. Therefore, the total storage complexity is $\max\{O(\frac{L^2}{N^2B}),O(B^2)\}$.
The proofs of the expressions for the amounts of information leaked about the sparse update indices for arbitrary $B$ (stated in Table~\ref{main_res}) for cases 1 and 2 are omitted due to limited space.
\bibliographystyle{unsrt}
\section{Introduction}
Since the work of Thouless and co-workers \cite{TKNN}, physicists have
recognized that the exotic physics encountered in quantum Hall systems
\cite{MStone}, and more recently topological insulator materials
\cite{Moore, RMPHasan, RMPQi}, is intimately tied to the topological
properties of their bandstructures. Topological band theory has
since been extended in several interesting directions beyond its
original context. For example, several groups have shown that when
cold-atom or condensed-matter lattices are subjected to a
time-periodic drive, the resulting Bloch-Floquet states can form
topologically non-trivial bands
\cite{Oka,Inoue,Demler0,Demler,Lindner,Gu}. These ``Floquet
topological insulators'' \cite{Lindner,Cayssol} exhibit many of the
properties expected of topological materials, such as edge states
which are immune to disorder-induced backscattering, but they also
have some unique and peculiar characteristics of their own; for
example, topologically-protected edge states can exist even when all
the bands have zero Chern number and would thus normally be
considered ``topologically trivial'' \cite{Demler,Levin}. Topological
bandstructures have also been identified in photonic systems,
including magneto-optic photonic crystals
\cite{Raghu1,Raghu2,Wang1,Wang2}, cavity QED circuits\cite{LeHur,LeHur1}, metamaterial photonic crystals
\cite{Khanikaev}, and ring resonator lattices
\cite{hafezi,hafezi2,Liang}. Interest in these systems is driven, in
part, by the possible device applications of topologically-protected
photonic modes (e.g.~the stabilization of slow-light transmission),
and in part by the fundamental interest of combining topological band
physics with optical phenomena (e.g.~gain and nonlinearity). The
literature on topological photonics has intersected in interesting
ways with the Floquet topological insulator concept: notably, Fang
\textit{et al.}~have studied the Floquet bandstructures formed by
lattices of photonic resonators which are driven periodically
(e.g.,~by electro-optic modulators) \cite{Fan}, while Rechtsman
\textit{et al.}~have experimentally demonstrated a coupled-waveguide
array which acts like a Floquet topological insulator, with adiabatic
wavepacket evolution along a spatially-modulated axis simulating a
time-periodic drive \cite{Szameit}. We will focus on ring
resonator lattices of the sort studied in
Refs.~\onlinecite{hafezi,hafezi2,Liang}. Such photonic topological insulators
have the technologically desirable properties of being
on-chip, realizable at optical frequencies, and not requiring an
external drive or magnetic field. As originally proposed by
Hafezi \textit{et al.} \cite{hafezi}, ring resonators are arranged in
a two-dimensional (2D) lattice, and coupled weakly by
specially-engineered waveguides which produce phase shifts
incommensurate with the lattice, analogous to the Landau gauge in the
quantum Hall effect. Subsequently, it was shown that a topological
bandstructure could be obtained in a lattice with commensurate
couplings \cite{Liang}, analogous to the zero-field quantum
Hall effect \cite{haldane88}. The transition into the topologically
non-trivial phase occurs by tuning the inter-ring couplings to large
values, such that the system must be treated with transfer matrix
rather than tight-binding methods.
In this paper, we point out that these resonator-and-waveguide
photonic topological insulators \cite{hafezi,hafezi2,Liang} can be
modeled as networks of the sort developed by Chalker and Coddington in
the 1980s to study the Anderson transition in quantum Hall systems
\cite{ChalkerCo,Kivelson,Lee,Kramer}. Network models are described by
discrete-time evolution operators in place of Hamiltonians
\cite{klessemet1,HoChalker}, and we show that this allows the Bloch
modes of \textit{periodic} networks to be mapped onto the
Bloch-Floquet states of driven
lattices\cite{klessemet2,janssen,janssenbook}---which, as mentioned
above, have attracted a great deal of recent
attention \cite{Oka,Inoue,Demler0,Demler,Lindner,Gu,Cayssol,Levin}. To date,
however, ideas from the network model literature have not been
widely employed in the growing Floquet
topological insulator literature.
Furthermore, the network picture allows a topological
invariant to be formulated based on adiabatic pumping \cite{Laughlin,Brouwer}, relating
the number of topologically-protected edge states in the projected
bandstructure to the winding number of a coefficient of reflection
from one edge of the network.
In its original context, a Chalker-Coddington (CC) network model
\cite{ChalkerCo} describes a 2D electron gas subject to a strong
magnetic field and a disorder potential, $V(\vec{r})$, whose
correlation length greatly exceeds the magnetic length. In this
regime, the electron wavefunctions are localized along equipotential
contours of $V(\vec{r})$. The equipotentials form the directed
links of a network, and each link is associated with an
Aharonov-Bohm phase acquired by the electron amplitude.
Saddle points of the potential, where quantum tunneling between adjacent contours (links) can occur, make up the nodes of the network,
which is taken to form a square lattice.
The tunneling between the incoming and outgoing links at each node is
described by a unitary scattering matrix, parameterized by a coupling
strength $\theta$. One can associate to each network a unitary matrix
relating the inputs and outputs of the entire ensemble of nodes, which
is analogous to a ``discrete-time'' evolution operator
\cite{klessemet1,HoChalker}. Although the model was originally
formulated for studying the effects of disorder, Ho and Chalker
\cite{HoChalker} subsequently applied the evolution operator analysis
to a periodic square lattice network, and showed that an effective 2D
Dirac Hamiltonian emerges at the critical value $\theta = \pi/4$,
with chiral edge states appearing when $\theta > \pi/4$. This
result was later rederived, in the context of photonic topological
insulators, in Ref.~\onlinecite{Liang}, together with the bulk and
projected bandstructures. One of the aims of the present paper is to
clarify the band topology and the nature of the bulk-edge
correspondence in these bandstructures. We will see that the
bandstructures derived in Ref.~\onlinecite{Liang} are characteristic
of ``anomalous Floquet insulators'' (AFI)\cite{Demler,Levin}: all bands have zero Chern
number despite the existence of topologically protected edge states.
We shall also see that network models based on the honeycomb lattice
have richer phase diagrams, containing both ``Chern insulator'' (CI)
phases\cite{haldane88} (where the bands have non-zero Chern number)
and AFI phases. Similar behavior has previously been found in a 2D
hexagonal tight-binding model with periodically-varying hopping
amplitudes\cite{Demler}.
It is interesting to note that in their original context, network
models were intended to be effective descriptions of a system with a
definite underlying Hamiltonian---a non-interacting electron gas in a
magnetic field and disorder potential. However, the situation is
reversed for photonic resonator lattices: here, the wave amplitude
description of coupled ring resonators \cite{yariv02, yariv_wg} is
valid for arbitrary coupling parameters, and an effective Hamiltonian
(tight-binding) description emerges for weak coupling \cite{hafezi}.
\section{Photonic networks and Floquet maps}
\label{sect:Floquet maps}
We begin by examining how a photonic lattice maps onto a network, and
how the network may be described by a unitary evolution matrix. As
described in Refs.~\onlinecite{hafezi,hafezi2,Liang}, and depicted in
Fig.~\ref{fig:couplings}(a), a photonic topological insulator can be
constructed by a lattice of ring resonators. Each resonator acts as
an optical waveguide, constraining light to propagating along the
ring. Each quarter-ring serves as a ``link'' in a photonic network,
which is associated with a phase delay whose value depends on the
operating frequency. The direction of propagation in each ring acts
as a two-fold degenerate degree of freedom, which can be thought of as
an analog of the electron spin in a quantum spin Hall
insulator\cite{RMPHasan}. The primary ring in each unit cell is
coupled to its neighbors via waveguide loops \cite{hafezi}, shown in
Fig.~\ref{fig:couplings}(a) as a set of smaller rings. If the
couplings have negligible internal backscattering, the inter-ring
coupling is ``spin'' conserving. The clockwise and counter-clockwise
modes then form separate directed networks; the
network for clockwise modes is shown in Fig.~\ref{fig:couplings}(b).
The inter-link couplings, corresponding to the nodes of the network,
are described by unitary scattering matrices.
\begin{figure}
\centering\includegraphics[width=0.45\textwidth]{fig_couplings.pdf}
\caption{(color online) (a) Schematic of a unit cell in a
two-dimensional lattice of photonic ring resonators. (b) The
equivalent periodic network. Within the unit cell, we define a
surface (blue rectangle) which is penetrated by input amplitudes
$\ket{a}$ and output amplitudes $\ket{b}$, related by $\ket{b} =
e^{i\phi}\ket{a}$. These amplitudes also scatter with those of
neighboring cells, with coupling matrices $S_x$ and $S_y$. (c) A
supercell consisting of $N_y$ unit cells joined along the $y$
direction, with twisted boundary conditions along the $x$
direction with twist angle $k_x$ and variable phase delays $w_\pm$
along the upper and lower boundaries. }
\label{fig:couplings}
\end{figure}
Propagation in such a network can be described by an evolution
operator \cite{klessemet1,HoChalker}. Consider a unit cell of a
periodic network, such as the one shown in
Fig.~\ref{fig:couplings}(b). For each cell, at lattice index $n$, we
can define a surface which is penetrated by $q$ input amplitudes
$\ket{a_n} \equiv [a_{1n}, \cdots, a_{qn}]$, and the same number of
output amplitudes $\ket{b_n} \equiv [b_{1n}, \cdots, b_{qn}]$. The
input and output amplitudes are related by $S_{\textrm{int}} \ket{a_n}
= \ket{b_n}$, where $S_{\textrm{int}}$ is a unitary matrix describing
scattering from the interior of the designated surface. As the
network is periodic, $S_{\textrm{int}}$ is independent of $n$. We
will focus on the special case where the interior consists of
equal-length delay lines with phase delay $\phi$, as shown in
Fig.~\ref{fig:couplings}(b). Then, with appropriate definitions of
$\ket{a}$ and $\ket{b}$,
\begin{equation}
\ket{a_n} = e^{-i\phi} \, \ket{b_n}.
\label{internal scattering}
\end{equation}
Furthermore, due to the connections between neighboring unit cells,
the amplitudes $\ket{b_n}$ leaving the surface of cell $n$ scatter
with those from other cells. For Bloch modes, $\ket{a_{n}} =
\ket{a_k} e^{ik\cdot r_n}$ and $\ket{b_{n}} = \ket{b_k} e^{ik\cdot
r_n}$, the inter-cell scattering can be described by
\begin{equation}
S(k) \, \ket{b_k} = \ket{a_k},
\label{external scattering}
\end{equation}
where $S(k)$ is unitary and is periodic in $k$ with the periodicity of
the Brillouin zone.
The combination of Eqs.~(\ref{internal scattering})-(\ref{external
scattering}) gives
\begin{equation}
S(k) \, \ket{b_k} = e^{-i\phi} \,\ket{b_k}.
\label{Sext equation}
\end{equation}
The eigenvectors of $S(k)$ are Bloch wave amplitudes, and the
arguments of the eigenvalues form a bandstructure $\phi(k)$. The phase
delay $\phi$ is analogous to the band energy of a Bloch electron, or
the band frequency in a photonic crystal, apart from the fact that it
is an angle variable ($\phi \equiv \phi + 2\pi$). Hereafter, we will
refer to $\phi$ as the ``quasi-energy''.
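As a concrete illustration of Eq.~(\ref{Sext equation}), the quasi-energy bands can be read off numerically as the negated eigenvalue phases of $S(k)$. The fragment below (illustrative Python) does this for a toy $2\times2$ unitary, which is not the scattering matrix of any particular resonator lattice:

```python
import numpy as np

def S(k, theta=0.3 * np.pi):
    """Toy 2x2 unitary inter-cell scattering matrix (illustrative only)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * np.exp(1j * k), 1j * s],
                     [1j * s, c * np.exp(-1j * k)]])

# S(k)|b> = exp(-i phi)|b>, so the quasi-energies are
# phi(k) = -arg of the eigenvalues of S(k), defined modulo 2*pi.
ks = np.linspace(-np.pi, np.pi, 101)
bands = np.array([np.sort(np.mod(-np.angle(np.linalg.eigvals(S(k))), 2 * np.pi))
                  for k in ks])
```

Because $S(k)$ is unitary, its eigenvalues are unimodular, so $\phi(k)$ is real and periodic, exactly as for a Floquet quasi-energy.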
From the above description of a periodic network, we can see that the
modes of such a network are equivalent to the Floquet modes of a
periodically-driven lattice. Suppose we have a lattice (having the same spatial dimension
as our network) whose Hamiltonian is periodic in time, with period
$T$. Then Eq.~(\ref{Sext equation}) is the equation for a Floquet
state with state vector $\ket{b_k}$ and quasi-energy $\phi(k)/T$,
provided $S(k)$ is the time evolution operator over one period.
Explicitly,
\begin{equation}
S(k) = \mathcal{T} \exp\left[-i\int_0^T dt \; H_k(t)\right],
\end{equation}
where $H_k(t)$ is some time-periodic reduced Hamiltonian and
$\mathcal{T}$ is the time-ordering operator. (Except in special
cases, an explicit expression for $S(k)$ cannot be obtained from $H_k(t)$
or vice versa, but it can be computed numerically.) The link between
network models and Floquet lattices has previously been pointed out
\cite{klessemet2,janssen,janssenbook}, but to our knowledge the consequences
for the band topology of network models have not been systematically
explored.
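For a piecewise-constant drive the time-ordered exponential reduces to an ordered product of matrix exponentials, which makes the correspondence between networks and Floquet lattices easy to check numerically. The sketch below (illustrative Python; the two reduced Hamiltonians are arbitrary choices, not derived from any specific network) builds $S(k)$ for a two-step drive and extracts the quasi-energies:

```python
import numpy as np

def u_step(H, dt):
    """exp(-i H dt) for Hermitian H, via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * w * dt)) @ v.conj().T

sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

def floquet_op(k, T=1.0):
    """One-period evolution for H_k(t) alternating between H1 and H2."""
    H1 = np.cos(k) * sz          # illustrative reduced Hamiltonians
    H2 = 1.5 * sx
    return u_step(H2, T / 2) @ u_step(H1, T / 2)   # time-ordered product

k = 0.7
S = floquet_op(k)
phi = -np.angle(np.linalg.eigvals(S))   # quasi-energies phi(k), mod 2*pi
```

The resulting $S(k)$ is unitary by construction, so it can play exactly the role of the network evolution operator in Eq.~(\ref{Sext equation}).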
\section{Floquet band topology of network models}
Let us consider how the topology of a periodic network's bandstructure
might be characterized. Following the usual topological
classification of band insulators \cite{Schnyder08,Schnyder09,Kitaev}, one
might take the matrix logarithm of Eq.~(\ref{Sext equation}) to obtain
an effective time-independent Hamiltonian, then look for topologically
non-trivial bands by computing topological band invariants (e.g.~the
Chern number for a 2D lattice without time-reversal
symmetry\cite{TKNN}). However, doing so for the square lattice
network in the large-$\theta$ phase reveals that the
Chern number is zero despite the presence of topologically protected
``one-way'' edge states. As discussed in Ref.~\onlinecite{Levin},
such ``anomalous Floquet insulator'' (AFI) behavior can arise in
Floquet bandstructures because the quasi-energy $\phi$ is an angle
variable. At the topological transition, each band has simultaneous
Dirac band-crossing points with the band ``above'' and the band
``below'', modulo $2\pi$; these band-crossing points are respectively
associated with +1 and -1 Berry flux, so that the band has zero Chern
number on both sides of the transition. In a static gapped
Hamiltonian system, the number of chiral edge states in a bulk gap can
be related to the sum of Chern numbers for all bands below the gap,
but this does not apply to Floquet systems since the quasi-energy
$\phi$ of a Floquet evolution operator is periodic and not bounded
below.
\begin{figure}
\centering\includegraphics[width=0.45\textwidth]{honeyphasediag.pdf}
\caption{(color online) Phase diagram of a honeycomb network.
The network is described in Appendix
\ref{honeycomb appendix}; here we take coupling matrix parameters $\varphi=\chi=0$. The phase boundaries are found by
searching numerically for band crossings (dots). The ``Chern
insulator'' (CI) and ``anomalous Floquet insulator'' (AFI) phases
are topologically non-trivial phases where the bands have
non-zero and zero Chern number, respectively. The unlabeled
phases are conventional insulators. The points labeled (a) and
(b) indicate the parameters used for the projected band diagrams
in Fig.~\ref{fig:hexbands}(a) and (b), respectively.}
\label{fig:honphasediag}
\end{figure}
\begin{figure}
\centering\includegraphics[width=0.35\textwidth]{striphightheta.pdf}
\caption{(color online) Projected quasi-energy bandstructures for
the honeycomb network, in a strip geometry with width $N=20$ unit
cells and zigzag edges. The bands are computed from
Eq.~(\ref{hexbands}); see Fig.~\ref{fig:hexnet} for a schematic of
the network. The coupling parameters are $\xi=\pi/2$,
$\varphi=\chi=0$, and (a) $\theta=0.15 \pi$ (CI phase; upper figure),
(b) $\theta=0.45 \pi$ (AFI phase; lower figure). The Chern number
$C$ for each band is indicated. These Chern numbers were computed
from the momentum-space line integral of the Berry connection
$\vect{\mathcal{A}}^{nn} (k) = -i\bra{nk}\vect{\nabla}_k\ket{nk}$,
where $\ket{nk}$ is the $n$th Bloch eigenstate \cite{TKNN}.}
\label{fig:hexbands}
\end{figure}
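For readers who wish to reproduce such a Chern-number calculation, the Berry-connection line integral mentioned in the caption of Fig.~\ref{fig:hexbands} can be replaced by the equivalent, numerically robust link-variable discretization of Fukui, Hatsugai, and Suzuki. The sketch below is illustrative only: `qwz` is a generic two-band stand-in Hamiltonian (the Qi-Wu-Zhang model), not the network's Bloch operator, and all names are ours; the routine itself applies to any gapped two-band `bloch_h`.

```python
import numpy as np

def chern_number(bloch_h, nk=40, band=0):
    """Chern number of one band via the Fukui-Hatsugai-Suzuki
    link-variable discretization of the Berry curvature."""
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    # Eigenvectors of the chosen band on an nk x nk momentum grid.
    vecs = np.empty((nk, nk, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(bloch_h(kx, ky))
            vecs[i, j] = v[:, band]
    total = 0.0
    for i in range(nk):
        for j in range(nk):
            u1 = vecs[i, j]; u2 = vecs[(i + 1) % nk, j]
            u3 = vecs[(i + 1) % nk, (j + 1) % nk]; u4 = vecs[i, (j + 1) % nk]
            # Plaquette field strength = phase of the product of link overlaps.
            prod = (np.vdot(u1, u2) * np.vdot(u2, u3)
                    * np.vdot(u3, u4) * np.vdot(u4, u1))
            total += np.angle(prod)
    return round(total / (2 * np.pi))

# Stand-in two-band model (Qi-Wu-Zhang); not the network's Bloch operator.
def qwz(kx, ky, u=1.0):
    sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]])
    return np.sin(kx) * sx + np.sin(ky) * sy + (u + np.cos(kx) + np.cos(ky)) * sz

print(chern_number(lambda kx, ky: qwz(kx, ky, 1.0)))   # topological phase: |C| = 1
print(chern_number(lambda kx, ky: qwz(kx, ky, 3.0)))   # trivial phase: C = 0
```

The discretized field strength sums to an exact multiple of $2\pi$ for any gapped band, so the `round` only absorbs floating-point error.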
The square-lattice network has a rather simple phase diagram: it is an
AFI for values of the inter-ring coupling strength $\theta > \pi/4$,
and a conventional insulator otherwise, regardless of all other model
parameters. However, more complicated behaviors can be observed in
other network models, such as networks based on a honeycomb lattice.
To our knowledge, such networks have not been studied previously,
partly because the network model literature was focused on the
Anderson transition, and the lattice geometry was not thought
to have a significant influence on properties such as the critical
exponent of the localization length\cite{ChalkerCo}. The honeycomb
network, which is described in Appendix \ref{honeycomb appendix}, has
phases that depend on the inter-ring coupling $\theta$ as well as on
the parameters $\xi$ and $\varphi$, which describe the phase shifts
induced at the nodes [cf.~Eq.~(\ref{eq:Scat})-(\ref{eq:bandstruct})]. The phase
diagram for $\varphi = \chi = 0$ is shown in Fig.~\ref{fig:honphasediag}.
Unlike in the square lattice, topologically non-trivial phases exist
even for low values of $\theta$. In these low-$\theta$ ``Chern
Insulator'' (CI) phases, the bands have non-zero Chern number, similar
to 2D systems with broken time-reversal symmetry \cite{haldane88}, and
the projected bandstructure exhibits topological edge states as shown
in Fig.~\ref{fig:hexbands}(a). At larger values of $\theta$, the
system undergoes a transition from a CI phase to an AFI phase, where
all bands have zero Chern number and \emph{all} bandgaps are traversed
by topologically protected edge states\cite{Demler,Levin}, as shown in
Fig~\ref{fig:hexbands}(b).
As pointed out by Kitagawa \textit{et al.}, Floquet bandstructures can
be characterized by homotopy class-based topological invariants
\cite{Demler}, such as the ``$\nu_1$ invariants''
\begin{equation}
\nu_1^{(\mu)} = \frac{1}{2\pi}\int_{-\pi}^{\pi} dk_\mu \,
\textrm{Tr}\left[S(k)^{-1} \; i\partial_{k_\mu} S(k) \right]
\label{Demler invariant}
\end{equation}
for $\mu = x,y$ in 2D.
In simple terms, these are the winding numbers for the quasi-energy
bands over their $[0,2\pi]$ domain, as $k_\mu$ is advanced through
$[0,2\pi]$. They are non-zero in the AFI phase, where every bandgap
is topologically non-trivial and occupied by edge states; however, the
winding numbers are zero in CI phases where at least one of the
bandgaps is topologically trivial \cite{Demler}. Subsequently, Rudner
\textit{et al.} have shown that the nontrivial topology of both the
AFI and CI phases can be
characterized by a bulk $\nu_3$ invariant
\cite{Levin}. This invariant involves integrals over $k_x$ and $k_y$,
and over the time variable $t$. In the context of network models,
there is no meaningful definition of the ``evolution operator'' for
intermediate $t$. In practice, one can define any $S(k,t)$ such that
$S(k,T)$ is the evolution operator for the network; the choice is
non-unique but will not affect the value of $\nu_3$ thus obtained.
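For a one-parameter family of unitaries, the trace-log integrand defining $\nu_1$ can be evaluated directly by finite differences. The sketch below uses an assumed toy family $S(k)$ in which a single band winds once around the quasi-energy circle; the $S(k)$ of an actual network would come from its evolution operator.

```python
import numpy as np

def nu1(S_of_k, nk=400):
    """Approximate (1/2pi) * integral of Tr[S^{-1} i dS/dk] over one
    Brillouin zone, for a 2pi-periodic family of unitaries S(k)."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    dk = 2 * np.pi / nk
    total = 0.0
    for k in ks:
        S = S_of_k(k)
        # Central finite difference for dS/dk.
        dS = (S_of_k(k + dk) - S_of_k(k - dk)) / (2 * dk)
        total += np.real(np.trace(np.linalg.inv(S) @ (1j * dS))) * dk
    return total / (2 * np.pi)

# Assumed toy family: one quasi-energy band winding once around [0, 2pi].
S = lambda k: np.diag([np.exp(-1j * k), 1.0])
print(round(nu1(S)))  # winding number 1
```

For this diagonal family the integrand reduces to the winding of the single nontrivial eigenphase, matching the interpretation of $\nu_1$ as a band winding number.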
In the following section, we will investigate an alternative
topological characterization based on adiabatic pumping. As we shall
see, the adiabatic pumping procedure is also capable of distinguishing
the AFI and CI phases, and it has the additional advantage of having a
natural physical interpretation for network models, which could be
useful for understanding the general class of Floquet bandstructures.
\section{Adiabatic pumping method and edge state invariants}
\begin{figure*}
\centering\includegraphics[width=0.96\textwidth]{fig_projected.pdf}
\caption{(color online) Projected bandstructures for the periodic
square-lattice network of Fig.~\ref{fig:couplings}(c), with $N_y =
6$ periods in the $y$ direction. (a)-(d) show topologically
trivial bandstructures ($\theta = 0.1\pi$, where $\theta$ is the
inter-ring coupling strength \cite{Liang}), and
(e)-(h) show topologically non-trivial bandstructures ($\theta =
0.4\pi$). Varying $w_+$, the angle variable controlling the upper
edge, affects the edge states on the upper edge (highlighted in
red). The lower edge angle is fixed at $w_- = 0$, and the other
coupling matrix parameters \cite{Liang} are $\varphi = \chi = 0$,
$\xi = \pi/2$. }
\label{fig:projected}
\end{figure*}
The adiabatic pumping method of characterizing topological systems was
originally introduced by Laughlin \cite{Laughlin}, and we will adapt
an elegant re-formulation of the Laughlin argument which was recently
given by Meidan \textit{et al.}~\cite{Brouwer}. Working in the
context of static Hamiltonian systems, these authors imagined rolling
a 2D lattice into a cylinder by applying twisted boundary conditions
along one direction, attaching scattering leads to one cylinder edge,
and then calculating the eigenvalues of the scattering (reflection)
matrix. As the twist angle is swept through $[0,2\pi]$, phase shifts
in the scattering eigenvalues can be related, via standard scattering
theory, to the number of resonances crossing the specified energy.
For mid-gap energies, scattering resonances correspond to edge states
of the isolated cylinder, which can thus be counted by the winding
numbers of the scattering matrix's eigenvalue spectrum \cite{Brouwer}.
A similar procedure can be carried out in a network model. Let us
consider a two dimensional network, which is infinite in (say) the $x$
direction, and finite in the $y$ direction with $N_y$ periods. For
convenience, we normalize the lattice spacings so the quasimomentum
$k_x$ becomes an angle variable. The system can be regarded as a
supercell of $N_y$ unit cells, featuring twisted boundary conditions
along the $x$ boundaries with twist angle $k_x$. Following the
discussion in Section \ref{sect:Floquet maps}, we can designate a
scattering surface for this supercell, consisting of the union of the
scattering surfaces for the individual unit cells. This is shown in
Fig.~\ref{fig:couplings}(c) for the simple square-lattice network.
The inputs entering this supercell surface are $\ket{a} = [\ket{a_1},
\cdots, \ket{a_{N_y}}]$, and the output amplitudes are $\ket{b} =
[\ket{b_1}, \cdots, \ket{b_{N_y}}]$.  The scattering from the interior
of the surface gives $\ket{a} = e^{-i\phi}\, \ket{b}$. As for the
scattering from the exterior of the surface back into the interior,
that depends on the inter-cell connections (which are assumed
constant), and on $k_x$ (due to scattering across the $x$ boundaries).
There is one more set of constraints which must also be specified: the
relations between the input and output amplitudes penetrating the
scattering surface along the $y$ boundaries of the supercell. As
depicted in Fig.~\ref{fig:couplings}(c), we denote these ``edge
amplitudes'' by $\ket{a_\pm}$ and $\ket{b_\pm}$, with the $\pm$
subscripts indicating the upper and lower edges. Let the number of
edge amplitudes on each edge be $n_{\perp}$. In general, we have
\begin{equation}
S_\perp \begin{bmatrix}\,\ket{b_+}\, \\ \,\ket{b_-}\,\end{bmatrix}
= \begin{bmatrix}\,\ket{a_+}\, \\ \,\ket{a_-}\,\end{bmatrix}
\end{equation}
for some $2n_\perp\times2n_\perp$ unitary matrix $S_\perp$.  From
this, we can construct an exterior scattering matrix for the
super-cell, $S_\textrm{sc}$, such that
\begin{equation}
S_\textrm{sc}(k_x,S_\perp) \, \ket{b}= e^{-i\phi} \,\ket{b}.
\label{supercell}
\end{equation}
We are free to specify $S_\perp$, and it is useful to consider a case
where the upper and lower boundaries are ``disconnected''.
Specifically,
\begin{equation}
S_\perp(w_+,w_-) = \begin{bmatrix}e^{iw_+} I & 0 \\ 0 & e^{iw_-} I
\end{bmatrix}.
\label{sperp}
\end{equation}
The values of $\phi(k_x)$ obtained from
Eqs.~(\ref{supercell})-(\ref{sperp}) form a projected quasi-energy
bandstructure for the semi-infinite lattice of width $N_y$, with the
set of $2n_\perp$ edge angles, $\{w_\pm\}$, acting as tunable edge
conditions.
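In practice, the projected bandstructure is read off by diagonalizing the unitary $S_\textrm{sc}$ at each $k_x$: Eq.~(\ref{supercell}) says the quasi-energies are minus the eigenvalue phases. The sketch below shows only this final step, with a toy $2\times2$ unitary standing in for the actual $S_\textrm{sc}(k_x, S_\perp)$, whose construction depends on the network couplings and is given in the appendices.

```python
import numpy as np

def quasi_energies(S_sc):
    """Solve S_sc |b> = exp(-i phi) |b>: the quasi-energies phi are
    minus the phases of the eigenvalues of the unitary S_sc."""
    lam = np.linalg.eigvals(S_sc)
    return np.sort(np.mod(-np.angle(lam), 2 * np.pi))

# Toy 2x2 unitary standing in for S_sc(k_x, S_perp) at one value of k_x.
kx = 0.3
rotation = np.array([[np.cos(kx), -np.sin(kx)],
                     [np.sin(kx),  np.cos(kx)]])
S_toy = rotation @ np.diag([1.0, np.exp(1j * np.pi / 2)])

phis = quasi_energies(S_toy)
print(phis)  # the two quasi-energy bands at this k_x
```

Sweeping `kx` over $[0, 2\pi)$ and collecting the resulting `phis` traces out a projected band diagram like those in Fig.~\ref{fig:projected}.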
The edge angles $w_\pm$ can be used to define topological invariants.
Suppose we keep $w_-$ fixed and consider only variations in $w_+$.
For any $\phi, k_x \in [0,2\pi]$, there must be exactly $n_\perp$
values of $w_+ \in [0,2\pi]$ consistent with
Eqs.~(\ref{supercell})-(\ref{sperp}); in physical terms, by specifying
$\phi$ and $k_x$ (as well as fixing $w_-$ and other network parameters
entering into $S_\textrm{sc}$), we have defined an $n_\perp$-channel
scattering problem, and the input amplitudes $\ket{a_+}$ and output
amplitudes $\ket{b_+}$ for the scatterer must be related by some
unitary reflection matrix whose eigenvalues are $e^{iw_+}$. Let us
fix a value for the quasi-energy $\phi$ which lies in a bulk bandgap,
and consider the $n_\perp$-valued function $w_+(k_x)$, which must come
back to itself (modulo $2\pi$) as $k_x$ is advanced over $[0,2\pi]$.
Each value of $w_+$ corresponds to a separate projected bandstructure,
but within each gap only the dispersion curves for edge states
localized to the upper edge can vary, since $w_+$ cannot affect the
lower edge. As a result, the winding number of $w_+(k_x)$ counts the
net (forward minus backward) number of upper edge states in the
specified bandgap.
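Numerically, the winding number of an angle-valued function such as $w_+(k_x)$ is obtained by unwrapping the sampled phases and closing the loop modulo $2\pi$. A sketch for the single-valued case, using toy angle data rather than an actual network computation:

```python
import numpy as np

def winding_number(w_samples):
    """Net winding of an angle-valued sequence w(k) as k advances once
    around [0, 2pi); the samples must be dense enough that successive
    values differ by less than pi."""
    w = np.unwrap(np.asarray(w_samples))
    # Principal-value increment closing the loop from the last sample
    # back to the first (mod 2pi).
    close = np.angle(np.exp(1j * (w_samples[0] - w_samples[-1])))
    return round((w[-1] - w[0] + close) / (2 * np.pi))

k = np.linspace(0, 2 * np.pi, 200, endpoint=False)
print(winding_number(np.mod(k, 2 * np.pi)))   # w(k) = k: winding +1
print(winding_number(0.3 * np.sin(k)))        # oscillation only: winding 0
```

Applied to $w_+(k_x)$ at a mid-gap $\phi$, this returns the net number of chiral edge states in the gap, as in Fig.~\ref{fig:winding}.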
To illustrate the above discussion, consider the previously-discussed
square-lattice network, for which $n_\perp = 1$ (i.e., $w_+(k_x)$ is
single-valued). Projected bandstructures for this network are shown
in Fig.~\ref{fig:projected}; for details of the calculation, see
Appendix \ref{square appendix}. In the conventional insulator phase,
corresponding to Figs.~\ref{fig:projected}(a)-(d), $w_+(k_x)$ has zero
winding number in each gap, as shown in Fig.~\ref{fig:winding}(a).
Note, however, that Fig.~\ref{fig:winding}(a) also shows that there
are certain values of $w_+$ for which upper edge states \textit{do}
exist. In the projected bandstructure, these take the form of
isolated bands of \textit{two-way} edge states which are ``pumped''
downwards across each gap during each cycle of $w_+$.
In the AFI phase, $w_+(k_x)$ has winding number +1 in each gap, as
shown in Fig.~\ref{fig:winding}(b). The projected bandstructures,
shown in Figs.~\ref{fig:projected}(e)-(h), exhibit one-way edge states
spanning each gap. Each band of edge states ``winds'' across the
Brillouin zone during one cycle of $w_+$, with the overall effect of
pumping one band down across each gap during one cycle of $w_+$, like
in the conventional insulator phase. Each gap also has a band of edge
states that is invariant in $w_+$, corresponding to states localized
on the lower edge.
We expect this to be the generic effect of adiabatic pumping on
quasi-energy bandstructures. Because $w_+$ is a well-defined function of $k_x$,
winding $w_+$ by $2\pi$ has the effect of transporting a band of
edge states across each gap. This transport occurs even for conventional
(topologically trivial) bandgaps, in the form of a band of two-way edge states.
The bandstructure as a whole returns to itself over one such cycle, which
is possible since the quasi-energy is an angle variable.
In the honeycomb network, the conventional insulator and AFI phases
behave in the same way as for the square-lattice network. In the CI phase, each
cycle of $w_+$ transports a band of two-way edge states down across
the topologically trivial gap (where $w_+(k_x)$ has zero winding
number), while simultaneously winding the one-way edge states in the
topological gap (where $w_+(k_x)$ has winding number +1).
\begin{figure}
\centering\includegraphics[width=0.44\textwidth]{fig_winding.pdf}
\caption{(color online) Plots of the edge angle $w_+$
versus $k_x$, in the square-lattice network with width $N_y = 6$.
(a) In the conventional insulator phase ($\theta = 0.1\pi$), the
winding numbers are zero; (b) in the AFI phase ($\theta =
0.4\pi$), the winding numbers are +1. In both cases, plots are
given for $\phi = \pi/4$ and $\phi = -\pi/4$, which lie in two
different band gaps (see Fig.~\ref{fig:projected}). In all cases,
$w_- = 0$ and all the other parameters are the same as in
Fig.~\ref{fig:couplings}. }
\label{fig:winding}
\end{figure}
\begin{figure}
\centering\includegraphics[width=0.41\textwidth]{fig_n.pdf}
\caption{(color online) Plots of $w_+$ versus $k_x$ for small values
of $N_y$, showing the emergence of a non-zero winding number. For
all three plots, we use $\phi = 0.25\pi$ and $\theta = 0.4\pi$,
corresponding to a mid-gap quasi-energy in the AFI phase. All other parameters are as in
Fig.~\ref{fig:projected}. For $N_y > 1$, an anti-crossing
develops near $k_x \sim \pi/2$, coinciding with the dispersion
curve for the lower edge states in the projected band diagram.
The width of this anti-crossing goes rapidly to zero as $N_y$
increases, and the rest of the curve acquires a non-zero winding
number. }
\label{fig:n}
\end{figure}
The relation of the winding number of $w_+(k_x)$ to the edge states
relies on the assumption that the upper edge angles have no effect on
the lower edge states.  Hence, $\phi$ must be chosen within a
bandgap, and the width $N_y$ must be sufficiently large (compared to
the edge state penetration depth). This is demonstrated in
Fig.~\ref{fig:n}, where we plot $w_+(k_x)$ using $N_y = 1, 2, 3$, for
the square-lattice network in the AFI phase. For $N_y = 1$, we
observe that $w_+(k_x)$ has zero winding number. As $N_y$ is
increased, the curve develops an anti-crossing, occurring at a value
of $k_x$ coinciding with the quasimomentum of an edge state localized
to the lower edge (for the specified value of $\phi$). For
sufficiently large $N_y$, the lower edge state is independent of
$w_+$, so the anti-crossing narrows into a numerically-undetectable
vertical line. Because the anti-crossing is associated with a $-1$
winding number, the remainder of the $w_+(k_x)$ curve acquires $+1$
winding.
\section{Discussion}
In this paper, we have discussed the relationships between photonic
resonator lattices, Chalker-Coddington network models, and Floquet
topological insulators. Within the emerging field of topological
photonics, these analogies may provide insights for realizing new
topological phases. For example, some years ago Chalker and Dohmen
\cite{ChalkerDoh} studied a hypothetical three-dimensional network
consisting of weakly-coupled 2D stacked layers of CC networks (a
configuration reminiscent of a 3D weak topological
insulator\cite{RMPHasan}). Photonic lattice analogs of such 3D networks may
be realizable, possibly at microwave frequencies
for ease of fabrication. Furthermore, as discussed in the
introduction, a photonic Floquet topological insulator has recently
been realized \cite{Szameit}, in which the 2D bands were shown to possess non-zero
Chern numbers. It would be interesting to analyze this or a similar
system using the scattering formalism of a network model, with the aim
of realizing an AFI phase where topologically-protected edge states
are present despite all bands having zero Chern number. (A photonic
AFI-like phase has previously been realized in 1D
\cite{KitagawaNatCom}.)
We have restricted our attention to \emph{directed} network models.
In the photonic context, this means considering the flow of light in a
single direction within the waveguides, and assuming no backscattering
into time-reversed modes. Apart from this restriction, there are no
further symmetry requirements on the coupling matrices. The two
possible directions of propagation through the network are analogous
to two decoupled spin sectors in a 2D quantum spin Hall insulator.
However, in the electronic case a topological phase can exist even in
the presence of spin-mixing: the ${\mathbb Z} _2$ topological
insulator. This relies on the fact that edge states cannot be
backscattered by time-reversal symmetric perturbations due to the
particular nature of \textit{fermionic} time-reversal symmetric $S$
matrices \cite{RMPHasan}. Indeed, the CC network model concept has
been generalized to study quantum spin Hall insulators by imposing
fermionic time-reversal symmetries on the links and nodes
\cite{obuse1,obuse2,ryu}. However, \textit{bosonic} edge states are not
protected from backscattering by time-reversal symmetric
perturbations, so topologically non-trivial behavior can only occur if
mixing into time-reversed modes is negligible. This is an important limitation of
photonic topological insulators, but not necessarily a fatal one,
since such mixing processes can often be engineered away.
We have also, in this paper, considered translationally periodic
systems. It would be interesting to return to the original motivation
for introducing network models, which was to study disorder-induced
Anderson transitions in a
2D electron gas \cite{ChalkerCo}. In the photonic context, Anderson
\textit{localization} of light has been observed in 1D and 2D
\cite{Segev,Segevreview}. However, there is no Anderson
\textit{transition} in such systems, since they map onto time-reversal
symmetric electron gases for which localization is marginal in 2D \cite{localizationRMP}. By
contrast, an Anderson transition does exist in 2D disordered quantum
Hall systems, tied to the phenomenon of classical percolation
\cite{ChalkerCo}. Random photonic networks might thus manifest a
photonic localization-delocalization transition, which has not yet
been observed.
\begin{acknowledgments}
We thank M.~Rechtsman, A.~Szameit, M.~Hafezi, and G.~Q.~Liang for
helpful comments. This research was supported by the Singapore
National Research Foundation under grant No.~NRFF2012-02.
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
Non-smooth nonlinear optimization problems of the form
\begin{equation*}
\minst[x]{f(x)}
g(x) = 0,\quad h(x) \ge 0, \tag{NLP} \label{eq:nlp}
\end{equation*}
where $\Domx\subseteq \mathbb R^n$ is open, the objective $f\in\Cspace^d(\Domx,\mathbb R)$ is a smooth function
and the equality and inequality constraints $g\in\Cd_{\text{abs}}(\Domx,\mathbb R^{m_1})$ and
$h\in\Cd_{\text{abs}}(\Domx,\mathbb R^{m_2})$ are level-1 non-smooth functions
that can be written in \emph{abs-normal form} \cite{Griewank_2013}
have been considered by the authors in \cite{Hegerhorst_Steinbach:2019}.
In this problem class, the non-smoothness is caused by finitely many occurrences
of the absolute value function, the branches of which we represent by
signature matrices $\Sigma = \diag(\sigma)$ with $\sigma \in \set{-1,0,1}^s$.
We find functions
$c_{\setE}\in\Cspace^d(\Domxz,\mathbb R^{m_1})$,
$c_{\setI}\in\Cspace^d(\Domxz,\mathbb R^{m_2})$ and
$c_{\setZ}\in\Cspace^d(\Domxz,\mathbb R^s)$ with $\Domxz=\Domx\x\Domz$,
$\Domz\subseteq\mathbb R^s$ open and \emph{symmetric}
(i.e., $z \in \Domz$ implies $\Sigma z \in \Domz$
for every signature matrix $\Sigma$) such that
\begin{align*}
g(x)&=c_{\setE}(x,\abs{z}),\\
h(x)&=c_{\setI}(x,\abs{z}),\tag{ANF}\label{eq:anf}\\
z&=c_{\setZ}(x,\abs{z}) \qtext{with $\partial_2 c_{\setZ}(x,\abs{z})$ strictly lower triangular}.
\end{align*}
Here we use a single joint \emph{switching constraint} $c_{\setZ}$
for both $g$ and $h$, and reuse \emph{switching variables}~$z_i$
if the same argument repeats as an absolute value argument in $g$ or $h$.
Due to the strictly lower triangular form of $\partial_2 c_{\setZ}(x,\abs{z})$,
component $z_j$ of $z$ can be computed from $x$ and the components $z_i$, $i<j$.
Hence, the variable $z$ is implicitly defined by $z=c_{\setZ}(x,\abs{z})$,
and to denote this dependence explicitly, we write $z(x)$ in the following.
Whenever we address questions of solvability of this system, we make use of the reformulation $\abs{z_i} = \sign(z_i) z_i$.
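Because $\partial_2 c_{\setZ}$ is strictly lower triangular, evaluating $z(x)$ requires only a single forward sweep over the components. A minimal sketch with an assumed two-switch system (the example is ours, not from the cited references):

```python
import numpy as np

def eval_z(c_list, x):
    """Evaluate z = c_Z(x, |z|) by forward substitution: strict lower
    triangularity means c_i may use only |z_1|, ..., |z_{i-1}|."""
    z = []
    for c in c_list:
        z.append(c(x, np.abs(np.array(z))))
    return np.array(z)

# Assumed example with nested absolute values:
#   z1 = x,  z2 = x - |z1|.
c_Z = [lambda x, az: x,
       lambda x, az: x - az[0]]
print(eval_z(c_Z, 2.0))   # [2., 0.]  (z2 is active at x = 2)
print(eval_z(c_Z, -3.0))  # [-3., -6.]
```

Each entry of `c_list` receives only the absolute values computed so far, mirroring the triangular dependence structure.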
\begin{definition}[Signature of $z$]
\label{def:signature}
Let $x \in \Domx$. We define the \emph{signature} $\sigma(x)$
and the associated \emph{signature matrix} $\Sigma(x)$ as
\begin{equation*}
\sigma(x) \define \sign(z(x)) \in \set{-1,0,1}^s \qtextq{and}
\Sigma(x) \define \diag(\sigma(x)).
\end{equation*}
A signature vector $\sigma(x) \in \set{-1,1}^s$ is called \emph{definite},
otherwise \emph{indefinite}.
\end{definition}
For signatures $\sigma,\^\sigma \in \set{-1, 0, 1}^s$,
it is convenient to use the partial order
\begin{equation*}
\^\sigma \succeq \sigma \iff
\^\sigma_i \sigma_i \ge \sigma_i^2 \qtextq{for} i=1,\dots,s,
\end{equation*}
i.e., $\^\sigma_i$ is arbitrary if $\sigma_i = 0$ and $\^\sigma_i = \sigma_i$ otherwise.
Thus, we may write $\abs{z(x)}=\Sigma z(x)$ for every $\sigma \succeq \sigma(x)$.
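The componentwise condition defining $\succeq$ translates directly into code; a sketch (names are ours):

```python
def refines(sig_hat, sig):
    """Check sig_hat >= sig in the partial order: sig_hat_i is free
    where sig_i = 0 and must equal sig_i elsewhere."""
    return all(sh * s >= s * s for sh, s in zip(sig_hat, sig))

print(refines([1, -1, 1], [1, -1, 0]))   # True: third entry is free
print(refines([1, 1, 1], [1, -1, 0]))    # False: disagrees where sig = -1
```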
Further, we may consider the system $z=c_{\setZ}(x,\Sigma z)$ \textbf{for fixed signature} $\Sigma = \Sigma(\^x)$ around a point of interest $\^x$.
By the implicit function theorem, the system has a locally unique solution $z(x)$ for fixed signature $\Sigma$, and the associated Jacobian at $\^x$ reads
\begin{equation*}
\partial_x z(\^x) = [I - \partial_2 c_{\setZ}(\^x,\abs{z(\^x)}) \Sigma]^{-1}
\partial_1 c_{\setZ}(\^x,\abs{z(\^x)})\in\mathbb R^{s \x n}.
\end{equation*}
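As a sketch, the Jacobian formula can be checked against finite differences on an assumed small switching system with definite signature (the example is ours, not taken from the cited works):

```python
import numpy as np

# Assumed switching system with s = 2, n = 1:
#   z1 = x,  z2 = x + 0.5 |z1|.
def z_of_x(x):
    z1 = x
    z2 = x + 0.5 * abs(z1)
    return np.array([z1, z2])

x_hat = 1.0
z_hat = z_of_x(x_hat)
Sigma = np.diag(np.sign(z_hat))                # definite signature here
d2cZ = np.array([[0.0, 0.0], [0.5, 0.0]])      # strictly lower triangular
d1cZ = np.array([[1.0], [1.0]])

# dz/dx = (I - d2cZ @ Sigma)^{-1} d1cZ from the implicit function theorem.
J = np.linalg.solve(np.eye(2) - d2cZ @ Sigma, d1cZ)

# Finite-difference check; valid since the signature is locally constant.
eps = 1e-6
J_fd = (z_of_x(x_hat + eps) - z_of_x(x_hat - eps)) / (2 * eps)
print(J.ravel(), J_fd)   # both approximately [1, 1.5]
```

The agreement holds only as long as the perturbation does not change the signature, which is exactly the "fixed signature" caveat in the text.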
\begin{definition}[Active Switching Set]
We call the switching variable $z_i$ \emph{active} if $z_i(x) = 0$.
The \emph{active switching set} $\alpha$
consists of all indices of active switching variables,
\begin{equation*}
\alpha(x) \define \defset{1 \leq i \leq s}{z_i(x) = 0}.
\end{equation*}
The numbers of active and inactive switching variables are $\calp[(x)]$
and $\csig[(x)] \define s - \calp[(x)]$.
\end{definition}
\subsection*{Literature}
Griewank and Walther have introduced the class of
unconstrained abs-normal problems in \cite{Griewank_2013,Griewank_Walther_2016}.
These problems offer particularly attractive theoretical features
when generalizing KKT theory and stationarity concepts to non-smooth problems.
Under certain regularity conditions, they are computationally tractable by active-set
type algorithms with guaranteed convergence based on piecewise linearizations and using
algorithmic differentiation techniques \cite{Griewank_Walther_2017,Griewank_Walther_2019}.
Another important class of non-smooth optimization problems are Mathematical
Programs with Complementarity (or Equilibrium) Constraints (MPCCs, MPECs);
an overview can be found in the book \cite{Luo_et_al:1996}.
Since standard theory for smooth optimization problems cannot be applied, new constraint qualifications and corresponding optimality conditions were introduced.
By now, there is a large body of literature on MPCCs, and we refer to
\cite{Ye:2005} for an overview of the basic concepts and theory.
In this paper, constraint qualifications for MPCCs in the sense of Abadie and Guignard and corresponding stationarity concepts (in particular M-stationarity and MPCC-linearized B-stationarity) are considered.
Details can be found in \cite{Scheel_Scholtes_2000}, \cite{Luo_et_al:1996} and \cite{FlegelDiss}.
In \cite{Hegerhorst_et_al:2019:MPEC1} we have shown that
unconstrained abs-normal problems constitute a subclass of MPCCs.
In addition, we have studied regularity concepts of linear independence and of
Mangasarian-Fromovitz type.
As a direct generalization of unconstrained abs-normal problems
we have considered NLPs with abs-normal constraints,
which turned out to be equivalent to the class of MPCCs.
In \cite{Hegerhorst_Steinbach:2019} we have extended optimality conditions of unconstrained abs-normal problems
to general abs-normal NLPs under the linear independence kink qualification
using a reformulation of inequalities with absolute value slacks.
We have compared these optimality conditions to concepts of MPCCs
in \cite{Hegerhorst_et_al:2019:MPEC2}.
We have also shown that the above slack reformulation preserves kink qualifications of linear independence type but not of Mangasarian-Fromovitz type.
More details and additional information about these results as well as about the results in this paper can be found in \cite{Hegerhorst-Schultchen2020}.
\subsection*{Contributions} In the present article we extend our detailed comparative study of general abs-normal NLPs and MPECs,
considering constraint qualifications of Abadie and Guignard type both for the standard formulation
and for the reformulation with absolute value slacks.
In particular, we show that constraint qualifications of Abadie type are equivalent for
abs-normal NLPs and MPCCs and that they are preserved under the slack reformulation.
For constraint qualifications of Guignard type we cannot prove equivalence
but only certain implications.
However, when considering branch problems of abs-normal NLPs and MPCCs,
we again obtain equivalence of constraint qualifications of Abadie
and Guignard type, even under the slack reformulation.
Finally we introduce Mordukhovich and Bouligand stationarity concepts
for abs-normal NLPs and prove first order optimality conditions
using the corresponding concepts for MPCCs.
\subsection*{Structure} The remainder of this article is structured as follows.
In \cref{sec:anf-formulations}
we state the general abs-normal NLP and its reformulation with absolute value slacks, which
eliminates the inequality constraints. We also present the branch structure of both formulations
and introduce appropriate definitions of the \tcone and the linearized cone. Using these tools,
we introduce kink qualifications in the sense of Abadie and Guignard.
In terms of these two kink qualifications, we then compare the regularity of the equality-constrained form of an
abs-normal NLP to the inequality-constrained one.
In \cref{sec:counterpart} we introduce counterpart MPCCs
for the two formulations of abs-normal NLPs
and discuss the associated MPCC-constraint qualifications, namely
MPCC-ACQ and MPCC-GCQ.
In \cref{sec:qualifications} we investigate the interrelation of the regularity concepts
for abs-normal NLPs and MPECs and find the situation to be more intricate than
under LICQ and MFCQ discussed in \cite{Hegerhorst_Steinbach:2019}.
Finally, in \cref{sec:stationarity} we introduce abs-normal variants of M-stationarity and B-stationarity
as first order necessary optimality conditions for abs-normal NLPs
and prove equivalence relations for the respective MPCC stationarity conditions.
We conclude with \cref{sec:conclusions}.
\section{Inequality and Equality Constrained Formulations}
\label{sec:anf-formulations}
In this section we consider two different treatments of inequality constraints
for non-smooth NLPs in abs-normal form.
\subsection{General Abs-Normal NLPs}
\label{sec:anf-inequalities}
Substituting the representation \eqref{eq:anf} of constraints in abs-normal form
into the general problem \eqref{eq:nlp}, we obtain a general abs-normal NLP.
Here, we use the variables $(t,\zt)$ instead of $(x,z)$ and analogously $\sigt(t)$ and $\alpt(t)$ instead of $\sigma(x)$ and $\alpha(x)$.
\begin{definition}[Abs-Normal NLP]
Let $\Domt$ be an open subset of $\mathbb R^{n_t}$.
A non-smooth NLP is called an \emph{abs-normal} NLP
if functions $f \in \Cspace^d(\Domt,\mathbb R)$, $c_{\setE} \in \Cspace^d(\Domtzt,\mathbb R^{m_1})$,
$c_{\setI} \in \Cspace^d(\Domtzt,\mathbb R^{m_2})$, and $c_{\setZ} \in \Cspace^d(\Domtzt,\mathbb R^{s_t})$ with $d \ge 1$ exist
such that it reads
\begin{equation*}
\minst[t, \zt]{f(t)}
c_{\setE}(t, \abs{\zt}) = 0,\quad
c_{\setI}(t, \abs{\zt}) \ge 0, \tag{I-NLP} \label{eq:i-anf}\quad
c_{\setZ}(t, \abs{\zt}) - \zt = 0,
\end{equation*}
where $\Dom^{\abs{\zt}}$ is open and symmetric and
$\partial_2 c_{\setZ}(t,\abs{\zt})$ is strictly lower triangular.
The feasible set of \eqref{eq:i-anf} is
$\Fabs \define \defset{(t, \zt)}{
c_{\setE}(t, \abs\zt) = 0,
c_{\setI}(t, \abs\zt) \ge 0,
c_{\setZ}(t, \abs\zt) = \zt
}$.
\end{definition}
\begin{definition}[Active Inequality Set]
Let $(t,\zt(t))\in \Fabs$. We call the inequality constraint
$i\in\setI$ \emph{active} if $c_i(t,\abs{\zt(t)})=0$.
The \emph{active set} $\setA(t)$ consists of all indices of active inequality constraints,
$ \setA(t)=\defset{i\in\setI}{c_i(t,\abs{\zt(t)})=0}.$
We set $\cA \define [c_i]_{i \in \setA(t)}$
and denote the number of active inequality constraints by $\card{\setA(t)}$.
\end{definition}
With the goal of considering kink qualifications in the spirit of Abadie and Guignard,
we define the \tcone and the abs-normal-linearized cone.
\begin{definition}[\TCone and Abs-Normal-Linearized Cone for \eqref{eq:i-anf}]\label{def:cones-i-anf}
Consider a feasible point $(t,\zt)$ of \eqref{eq:i-anf}. The \emph{\tcone} to $\Fabs$ at $(t,\zt)$ is
\begin{equation*}
\Tabs(t, \zt) \define
\begin{defarray}{(\delta t, \delta\zt)}
\exists \tau_k \searrow 0,\ \Fabs \ni (t_k,\zt_k) \to (t, \zt){:} \\
\tau_k^{-1} (t_k - t, \zt_k - \zt) \to (\delta t, \delta\zt)
\end{defarray}
.
\end{equation*}
With $\delta\zeta_i:=\abs{\delta\zt_i}$ if $i\in\alpt(t)$
and $\delta\zeta_i:=\sigt_i(t) \delta\zt_i$ if $i\notin\alpt(t)$,
the \emph{abs-normal-linearized cone} is
\begin{equation*}
\Tlinabs(t, \zt) \define
\begin{defarray}[r@{\medspace}l]{(\delta t, \delta\zt)}
\partial_1 c_{\setE}(t, \abs\zt) \delta t +
\partial_2 c_{\setE}(t, \abs\zt) \delta\zeta &= 0, \\
\partial_1 \cA(t, \abs\zt) \delta t +
\partial_2 \cA(t, \abs\zt) \delta\zeta &\ge 0, \\
\partial_1 c_{\setZ}(t, \abs\zt) \delta t +
\partial_2 c_{\setZ}(t, \abs\zt) \delta\zeta &= \delta\zt
\end{defarray}
.
\end{equation*}
\end{definition}
To prove that the \tcone is a subset of the abs-normal-linearized cone,
we follow an idea from \cite{FlegelDiss}, where an analogous result for MPCCs was obtained.
First, we need the definition of the smooth branch NLPs for \eqref{eq:i-anf}
with their standard \tcones and linearized cones.
\begin{definition}[Branch NLPs for \eqref{eq:i-anf}]\label{def:branch-anf}
Consider a feasible point $(\^t,\hzt)$ of \eqref{eq:i-anf}.
Choose $\sigt \in\{-1,1\}^{s_t}$ with $\sigt \succeq \sigt(\^t)$ and set $\Sigt=\diag(\sigt)$.
The branch problem NLP($\Sigt$) is defined as
\begin{align*}
\sminst[t,\zt]{f(t)}
& c_{\setE}(t,\Sigt \zt) = 0, \quad
c_{\setI}(t,\Sigt \zt) \ge 0, \\
& c_{\setZ}(t,\Sigt \zt) - \zt = 0, \quad
\Sigt \zt\ge 0.
\tag{NLP($\Sigt$)}\label{eq:branch-anf}
\end{align*}
The feasible set of \eqref{eq:branch-anf},
which always contains $(\^t,\hzt)$, is denoted by $\Fsig$.
\end{definition}
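The branch problems are indexed by the definite signatures $\sigt \succeq \sigt(\^t)$: one assigns $\pm 1$ freely on the active indices and keeps the fixed entries. A sketch of this enumeration (names are ours):

```python
from itertools import product

def branch_signatures(sigma_hat):
    """All definite signatures sigma >= sigma(t_hat): free choices of
    +-1 on active indices (entries 0), fixed entries elsewhere."""
    active = [i for i, s in enumerate(sigma_hat) if s == 0]
    for choice in product([-1, 1], repeat=len(active)):
        sig = list(sigma_hat)
        for i, c in zip(active, choice):
            sig[i] = c
        yield tuple(sig)

sigs = list(branch_signatures((1, 0, -1, 0)))
print(len(sigs))   # 2^|alpha| = 4 branch problems
print(sigs[0])
```

The number of branch problems thus grows as $2^{\card{\alpt(\^t)}}$, which is why active-set type methods only visit a small selection of branches.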
\begin{definition}[\TCone and Linearized Cone for \eqref{eq:branch-anf}]\label{def:cones-branch-anf}
Given \eqref{eq:branch-anf}, consider a feasible point $(t,\zt)$.
The \emph{\tcone} to $\Fsig$ at $(t,\zt)$ is
\begin{equation*}
\Tsig(t,\zt) \define
\begin{defarray}{(\delta t,\delta\zt)}
\exists \tau_k \searrow 0,\ \Fsig \ni (t_k, \zt_k) \to (t, \zt){:} \\
\tau_k^{-1} (t_k - t, \zt_k - \zt) \to (\delta t, \delta\zt)
\end{defarray}
.
\end{equation*}
The \emph{linearized cone} is
\begin{equation*}
\Tlinsig(t, \zt) \define
\begin{defarray}[r@{\medspace}l]{(\delta t,\delta\zt)}
\partial_1 c_{\setE}(t, \Sigt\zt) \delta t +
\partial_2 c_{\setE}(t, \Sigt\zt) \Sigt\delta\zt &= 0, \\
\partial_1 \cA(t, \Sigt\zt) \delta t +
\partial_2 \cA(t, \Sigt\zt) \Sigt\delta\zt &\ge 0, \\
\partial_1 c_{\setZ}(t, \Sigt\zt) \delta t +
\partial_2 c_{\setZ}(t, \Sigt\zt) \Sigt\delta\zt &= \delta\zt, \\
\sigt_i \delta\zt_i & \ge 0,\ i \in \alpt(t)
\end{defarray}
.
\end{equation*}
\end{definition}
\begin{remark}
Observe that $\abs{\zt} = \Sigt \zt$ in \cref{def:branch-anf} and \cref{def:cones-branch-anf},
and for every $\Sigt$ we have $\Fsig \subseteq \Fabs$,
$\Tsig(t,\zt) \subseteq \Tabs(t,\zt)$, and
$\Tlinsig(t,\zt) \subseteq \Tlinabs(t,\zt)$.
\end{remark}
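To make the combinatorial structure of the branch problems concrete, the following Python sketch (an informal illustration only; all names are ours) enumerates the admissible branch signatures $\sigt \succeq \sigt(\^t)$: entries with nonzero sign are fixed, while the kink entries are completed in all possible ways, yielding $2^{\abs{\alpt(\^t)}}$ branch problems.

```python
from itertools import product

def branch_signatures(sig_hat):
    """All sigma in {-1,1}^s with sigma >= sig_hat, i.e. sigma_i = sig_hat_i
    wherever sig_hat_i != 0; kink entries (sig_hat_i == 0) are free."""
    kinks = [i for i, s in enumerate(sig_hat) if s == 0]  # alpha(t-hat)
    sigs = []
    for choice in product((-1, 1), repeat=len(kinks)):
        sig = list(sig_hat)
        for i, c in zip(kinks, choice):
            sig[i] = c
        sigs.append(tuple(sig))
    return sigs

# z-hat = (0.5, 0, -2, 0) has two active kinks, hence four branch problems
print(branch_signatures([1, 0, -1, 0]))
```

Each returned tuple corresponds to one diagonal matrix $\Sigt=\diag(\sigt)$ and hence one branch problem NLP($\Sigt$).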
\begin{lemma}\label{le:decomp-cones-anf}
Consider a feasible point $(\^t,\hzt)$ of \eqref{eq:i-anf} with associated branch problems \eqref{eq:branch-anf}.
Then, the following decompositions of the \tcone and of the abs-normal-linearized cone of \eqref{eq:i-anf} hold:
\begin{equation*}
\Tabs(\^t,\hzt)=\bigcup_{\Sigt} \Tsig(\^t,\hzt)
\qtextq{and}
\Tlinabs(\^t,\hzt)=\bigcup_{\Sigt} \Tlinsig(\^t,\hzt).
\end{equation*}
\end{lemma}
\begin{proof}
We first consider the \tcones and show that a neighborhood $\setN$ of $(\^t,\hzt)$ exists such that
\begin{equation*}
\Fabs\cap \setN = \bigcup_{\Sigt} (\Fsig\cap \setN).
\end{equation*}
The inclusion $\supseteq$ holds for every neighborhood $\setN$
since $\Fsig\subseteq\Fabs$ for all $\Sigt$.
To show the inclusion $\subseteq$ we consider an index $i\notin\alpt(\^t)$.
Then, by continuity, $\epsilon_i>0$ exists with
$\sigt_i(t) = \sigt_i(\^t) \in \set{-1,+1}$ for all $t\in B_{\epsilon_i}(\^t)$.
Now set $\epsilon\define\min_{i\notin\alpt(\^t)}\epsilon_i$,
$\setN\define B_{\epsilon}(\^t) \x \mathbb R^{s_t}$,
and consider $(t, \zt) \in \setN \cap \Fabs$.
With the choice $\sigt_i=\sigt_i(t)$ for $i\notin\alpt(t)$
and $\sigt_i=1$ for $i\in\alpt(t)$
we find $\Sigt=\diag(\sigt)$ such that
$(t, \zt) \in \setN\cap \Fsig$ since $\alpt(t) \subseteq \alpt(\^t)$. Thus,
\begin{equation*}
\Fabs\cap \setN = \bigcup_{\Sigt} (\Fsig\cap \setN).
\end{equation*}
Now, let $\setT(\^t,\hzt; \setF)$ generically denote
the \tcone to $\setF$ at $(\^t, \hzt)$.
Then,
\begin{align*}
\Tabs(\^t,\hzt)
=
\setT(\^t,\hzt; \Fabs)
&=
\setT(\^t,\hzt; \Fabs\cap \setN)
=
\setT(\^t,\hzt; {\textstyle\bigcup_{\Sigt}} (\Fsig \cap \setN)) \\
&=
\bigcup_{\Sigt} \setT(\^t,\hzt; \Fsig \cap \setN)
=
\bigcup_{\Sigt} \setT(\^t,\hzt; \Fsig)
=
\bigcup_{\Sigt} \Tsig(\^t,\hzt).
\end{align*}
Here the fourth equality holds since the number of branch problems is finite.
The decomposition of $\Tlinabs$ follows directly
by comparing definitions of $\Tlinabs$ and $\Tlinsig$.
\end{proof}
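As a minimal numerical illustration of the decomposition (a toy example of our own, not part of the formal development), consider $s_t=1$ with switching equation $\zt=t$, so that $\Fabs=\{(t,\abs{t})\}$. At the kink $(\^t,\hzt)=(0,0)$ the tangent cone $\{\delta\zt=\abs{\delta t}\}$ is exactly the union of the two branch cones:

```python
def in_branch_plus(dt, dz):   # T^{Sigma=+1}: dz = dt, dt >= 0
    return dz == dt and dt >= 0

def in_branch_minus(dt, dz):  # T^{Sigma=-1}: dz = -dt, dt <= 0
    return dz == -dt and dt <= 0

def in_tangent_abs(dt, dz):   # tangent cone to {(t,|t|)} at (0,0): dz = |dt|
    return dz == abs(dt)

# check the decomposition on a grid of candidate directions
samples = [(dt, dz) for dt in range(-2, 3) for dz in range(-2, 3)]
for dt, dz in samples:
    assert in_tangent_abs(dt, dz) == (in_branch_plus(dt, dz) or in_branch_minus(dt, dz))
print("decomposition verified on", len(samples), "directions")
```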
\begin{lemma}\label{le:i-cones}
Let $(t,\zt)$ be feasible for \eqref{eq:i-anf}. Then,
\begin{equation*}
\Tabs(t,\zt)\subseteq \Tlinabs(t,\zt) \qtextq{and} \Tabs(t,\zt)^* \supseteq \Tlinabs(t,\zt)^*.
\end{equation*}
\end{lemma}
\begin{proof}
The branch NLPs are smooth, hence the inclusion $\Tsig(t,\zt) \subseteq \Tlinsig(t,\zt)$ holds by standard NLP theory.
Then, the first inclusion follows directly from \cref{le:decomp-cones-anf}
and the second inclusion follows by dualization of the cones.
\end{proof}
In general, the reverse inclusions do not hold. This leads to the following definitions.
\ifCmp
\begin{definition}
[Abadie's and Guignard's Kink Qualifications for \eqref{eq:i-anf}]
\label{def:akq}
\label{def:gkq}
Consider a feasible point $(t,\zt(t))$ of \eqref{eq:i-anf}.
We say that \emph{Abadie's Kink Qualification (AKQ)} holds at $t$
if \ifnum\FmtChoice=2 we have \fi $\Tabs(t,\zt(t)) = \Tlinabs(t,\zt(t))$,
and that \emph{Guignard's Kink Qualification (GKQ)} holds at $t$
if $\Tabs(t,\zt(t))^* = \Tlinabs(t,\zt(t))^*$.
\end{definition}
\else
\begin{definition}[Abadie's Kink Qualification for \eqref{eq:i-anf}]\label{def:akq}
Consider a feasible point $(t,\zt(t))$ of \eqref{eq:i-anf}.
We say that \emph{Abadie's Kink Qualification (AKQ)} holds at $t$ if $\Tabs(t,\zt(t))=\Tlinabs(t,\zt(t))$.
\end{definition}
\begin{definition}[Guignard's Kink Qualification for \eqref{eq:i-anf}]\label{def:gkq}
Consider a feasible point $(t,\zt(t))$ of \eqref{eq:i-anf}.
We say that \emph{Guignard's Kink Qualification (GKQ)} holds at $t$ if $\Tabs(t,\zt(t))^*=\Tlinabs(t,\zt(t))^*$.
\end{definition}
\fi
The decomposition of cones in \cref{le:decomp-cones-anf}
and its dualization immediately lead to the next
\ifCmp theorem\else results\fi.
\ifCmp
\begin{theorem}[ACQ/GCQ for all \eqref{eq:branch-anf}
implies AKQ/GKQ for \eqref{eq:i-anf}]
\label{th:branch-acq_akq}
\label{th:branch-gcq_gkq}
Consider a feasible point $(t,\zt(t))$ of \eqref{eq:i-anf}
with associated branch problems \eqref{eq:branch-anf}.
Then, AKQ respectively GKQ holds for \eqref{eq:i-anf} at $t$
if ACQ respectively GCQ holds for all \eqref{eq:branch-anf} at $(t,\zt(t))$.
\end{theorem}
\else
\begin{theorem}[ACQ for all \eqref{eq:branch-anf} implies AKQ for \eqref{eq:i-anf}]\label{th:branch-acq_akq}
Consider a feasible point $(t,\zt(t))$ of \eqref{eq:i-anf} with associated branch problems \eqref{eq:branch-anf}.
Then, AKQ holds for \eqref{eq:i-anf} at $t$ if ACQ holds for all \eqref{eq:branch-anf} at $(t,\zt(t))$.
\end{theorem}
\begin{proof}
This follows directly from \cref{le:decomp-cones-anf}.
\end{proof}
\begin{theorem}[GCQ for all \eqref{eq:branch-anf} implies GKQ for \eqref{eq:i-anf}]\label{th:branch-gcq_gkq}
Consider a feasible point $(t,\zt(t))$ of \eqref{eq:i-anf} with associated branch problems \eqref{eq:branch-anf}.
Then, GKQ holds for \eqref{eq:i-anf} at $t$ if GCQ holds for all \eqref{eq:branch-anf} at $(t,\zt(t))$.
\end{theorem}
\begin{proof}
This follows directly from \cref{le:decomp-cones-anf} by dualization.
\end{proof}
\fi
\subsection{Abs-Normal NLPs with Inequality Slacks}
\label{sec:anf-equalities}
Here, we use absolute values of slack variables to get rid of the inequality constraints.
This idea is due to Griewank; it was introduced in \cite{Hegerhorst_Steinbach:2019}
and further investigated in \cite{Hegerhorst_et_al:2019:MPEC2}.
With slack variables $w\in \mathbb R^{m_2}$, we reformulate \eqref{eq:nlp} as follows:
\begin{equation*}
\minst[t,w]{f(t)}
g(t) = 0,\quad
h(t)-\abs{w} = 0.
\end{equation*}
Then, we express $g$ and $h$ in abs-normal form as in \eqref{eq:anf}
and introduce additional switching variables $\zw$ to handle $\abs{w}$.
We obtain a class of purely equality-constrained abs-normal NLPs.
\begin{definition}[Abs-Normal NLP with Inequality Slacks]
An abs-normal NLP posed in the following form is called an \emph{abs-normal NLP with inequality slacks}:
\begin{align*}
\sminst[t,w,\zt,\zw]{f(t)}
& c_{\setE}(t,\abs{\zt}) = 0, \quad
c_{\setI}(t,\abs{\zt})-\abs{\zw} = 0,\\ \tag{E-NLP}\label{eq:e-anf}
& c_{\setZ}(t,\abs{\zt})=\zt, \quad
w=\zw,
\end{align*}
where $\Dom^{\abs{\zt}}$ is open and symmetric and
$\partial_2 c_{\setZ}(t,\abs{\zt})$ is strictly lower triangular.
The feasible set of \eqref{eq:e-anf} is denoted by $\Feabs$ and is a lifting of $\Fabs$.
\end{definition}
\begin{remark}
Introducing $\abs{w}$ converts inequalities
to pure equalities without a nonnegativity condition
for the slack variables $w$.
In \cite{Hegerhorst_Steinbach:2019} we have used this formulation to simplify
the presentation of first and second order conditions for the general
abs-normal NLP
under the linear independence kink qualification (LIKQ).
Later we will see that constraint qualifications of Abadie type are preserved under reformulation.
Nevertheless, this representation causes some problems.
In \cite{Hegerhorst_et_al:2019:MPEC2} we have shown that,
in contrast to LIKQ,
constraint qualifications of Mangasarian-Fromovitz type are not preserved.
Moreover, we cannot prove compatibility of constraint qualifications of Guignard type.
Also, note that the equation $w - \zw = 0$ (and hence $w$) cannot be eliminated as
this would destroy the abs-normal form.
Finally, the signs of nonzero components $w_i$ can be chosen arbitrarily and thus the slack $w$ is not uniquely determined.
This needs to be taken into account when formulating
kink qualifications (KQ) for \eqref{eq:e-anf}.
\end{remark}
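The non-uniqueness of the slack $w$ noted above is easy to illustrate. The following Python sketch (illustrative only; the function name is ours) enumerates all $w$ with $\abs{w}=c$ for a given value $c=c_{\setI}(t,\abs{\zt(t)})\ge 0$; each nonzero component contributes a free sign.

```python
from itertools import product

def slack_values(c):
    """All w with |w| = c componentwise, where c >= 0: the signs of
    nonzero components are arbitrary, zero components are fixed."""
    options = [(ci,) if ci == 0 else (ci, -ci) for ci in c]
    return list(product(*options))

# c = (4, 0, 1): two nonzero components, hence 2^2 = 4 possible slacks
print(slack_values((4.0, 0.0, 1.0)))
```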
We are now interested in \ifCmp deriving \else defining \fi
Abadie's and Guignard's KQ for \eqref{eq:e-anf}.
To this end, we observe that the
formulation \eqref{eq:e-anf} can be seen as a special case of \eqref{eq:i-anf}:
Let $x=(t,w)$, $z=(\zt,\zw)$, $\_f(x)=f(t)$,
$\bcE(x,\abs{z})=(c_{\setE}(t,\abs{\zt}), c_{\setI}(t,\abs{\zt})-\abs{\zw})$, and
$\bcZ(x,\abs{z})=(c_{\setZ}(t,\abs{\zt}),w)$.
Then, we can rewrite \eqref{eq:e-anf} as
\begin{equation*}
\minst[x,z]{f(x)}
\bcE(x,\abs{z}) = 0,\quad
\bcZ(x,\abs{z})-z = 0.
\end{equation*}
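This identification can be spelled out mechanically. The sketch below (a hypothetical helper of our own, not claimed as part of the formalism) assembles $\bcE$ and $\bcZ$ from given callables $c_{\setE}$, $c_{\setI}$, $c_{\setZ}$ and checks feasibility of a lifted point on toy data.

```python
def lift_to_equality_form(cE, cI, cZ):
    """Build the E-NLP data from I-NLP data: with x = (t, w) and z = (zt, zw),
    bar_cE(x,|z|) = (cE(t,|zt|), cI(t,|zt|) - |zw|) and
    bar_cZ(x,|z|) = (cZ(t,|zt|), w)."""
    def bar_cE(t, w, abs_zt, abs_zw):
        return cE(t, abs_zt) + tuple(c - a for c, a in zip(cI(t, abs_zt), abs_zw))
    def bar_cZ(t, w, abs_zt, abs_zw):
        return cZ(t, abs_zt) + tuple(w)
    return bar_cE, bar_cZ

# toy data: s_t = 1, switching equation z_t = t, one inequality 4 - |z_t| >= 0
cE = lambda t, a: ()
cI = lambda t, a: (4.0 - a[0],)
cZ = lambda t, a: (t,)
bcE, bcZ = lift_to_equality_form(cE, cI, cZ)
# lifted feasible point: t = 1, zt = 1, |zw| = cI = 3, w = zw = -3 (sign is free)
assert bcE(1.0, (-3.0,), (1.0,), (3.0,)) == (0.0,)
assert bcZ(1.0, (-3.0,), (1.0,), (3.0,)) == (1.0, -3.0)
```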
\ifCmp
Hence, the following material is readily obtained by specializing
the definitions and results in the previous section.
\else
Hence, the following lemmas follow directly from results in the previous section.
\begin{lemma}[\TCone and Abs-Normal-Linearized Cone for \eqref{eq:e-anf}]\label{le:cones-e-anf}
\fi
With $\delta=(\delta t, \delta w, \delta\zt,\delta \zw)$,
\ifCmp \cref{def:cones-i-anf} and $w = \zw$ give \fi
the \tcone to $\Feabs$ at $(t, w,\zt, \zw)$ \ifCmp as \else reads \fi
\begin{equation*}
\Teabs(t, w, \zt, \zw) =
\begin{defarray}{\delta}
\exists \tau_k \searrow 0, \
\Feabs \ni (t_k, w_k, \zt_k, \zw_k) \to (t, w, \zt, \zw){:} \\
\tau_k^{-1} (t_k - t, w_k - w, \zt_k - \zt)
\to (\delta t, \delta w, \delta\zt), \
\delta \zw = \delta w
\end{defarray}
,
\end{equation*}
and the abs-normal-linearized cone reads
\begin{equation*}
\Tlineabs(t, w, \zt, \zw) =
\ifcase0
\begin{defarray}{\delta}
\partial_1 c_{\setI}(t, \abs\zt) \delta t +
\partial_2 c_{\setI}(t, \abs\zt) \delta\zeta = \delta\omega, \\
(\delta t, \delta\zt) \in \Tlinabs(t,\zt), \
\delta\zw = \delta w
\end{defarray},
\or
\begin{defarray}[r@{\medspace}l]{\delta}
(\delta t, \delta\zt)&\in\Tlinabs(t,\zt), \\
\partial_1 c_{\setI}(t, \abs\zt) \delta t +
\partial_2 c_{\setI}(t, \abs\zt) \delta\zeta &= \delta\omega, \\
\delta w&=\delta\zw
\end{defarray},
\fi
\end{equation*}
where $\alpha=(\alpt,\alpw)$ and
\begin{equation*}
\delta\zeta_i
=
\begin{ccases}
\sigt_i(t) \delta\zt_i, & i\notin\alpt(t) \\
\abs{\delta\zt_i}, & i\in\alpt(t)
\end{ccases}
,
\quad
\delta\omega_i
=
\begin{ccases}
\sigw_i(w) \delta\zw_i, & i\notin\alpw(w) \\
\abs{\delta\zw_i}, & i\in\alpw(w)
\end{ccases}
.
\end{equation*}
\ifCmp
In \cref{def:branch-anf}, consider
\else
\end{lemma}
\begin{proof}
This follows from \cref{def:cones-i-anf}, the definition of $\Tlinabs(t,\zt)$, and $w=\zw$.
\end{proof}
\begin{lemma}[Branch NLPs for \eqref{eq:e-anf}]\label{def:branch-e-anf}
Consider
\fi
a feasible point $(\^t,\^w,\hzt,\hzw)$ of \eqref{eq:e-anf}.
Choose $\sigt\in\{-1,1\}^{s_t}$ with $\sigt \succeq \sigt(\^t)$ and
$\sigw\in\{-1,1\}^{m_2}$ with $\sigw \succeq \sigw(\^w)$.
Set $\Sigt=\diag(\sigt)$ and $\Sigw=\diag(\sigw)$.
Then, the branch problem NLP($\Sigtw$) for $\Sigtw \define \diag(\Sigt, \Sigw)$ reads
\begin{align*}
\sminst[t,w,\zt,\zw]{f(t)}
& c_{\setE}(t,\Sigt \zt) = 0, \quad
c_{\setI}(t,\Sigt \zt) - \Sigw \zw = 0, \\
& c_{\setZ}(t,\Sigt \zt) - \zt = 0, \quad
w - \zw = 0,\tag{NLP($\Sigtw$)}\label{eq:branch-e-anf}\\
& \Sigt \zt \ge 0, \quad
\Sigw \zw \ge 0.
\end{align*}
The feasible set of \eqref{eq:branch-e-anf},
which always contains $(\^t,\^w,\hzt,\hzw)$, is denoted by $\Fesig$ and is a lifting of $\Fsig$.
\ifCmp
By \cref{def:cones-branch-anf}, the
\else
\end{lemma}
\begin{proof}
This follows from \cref{def:branch-anf}.
\end{proof}
\begin{lemma}[\TCone and Linearized Cone for \eqref{eq:branch-e-anf}]\label{def:cones-branch-e-anf}
Given \eqref{eq:branch-e-anf}, consider a feasible point $(t,w,\zt,\zw)$.
The
\fi
\tcone to $\Fesig$ at $(t,w,\zt,\zw)$ reads
\begin{equation*}
\Tesig(t,w,\zt,\zw) =
\begin{defarray}{\delta}
\exists \tau_k \searrow 0,\ \Fesig \ni (t_k,w_k,\zt_k,\zw_k) \to (t,w,\zt,\zw){:} \\
\tau_k^{-1} (t_k - t, w_k-w, \zt_k - \zt) \to (\delta t, \delta w, \delta\zt), \
\delta \zw = \delta w
\end{defarray}
\end{equation*}
with $\delta=(\delta t, \delta w, \delta\zt, \delta \zw)$.
The linearized cone reads
\begin{equation*}
\Tlinesig(t,w,\zt,\zw) =
\ifcase0
\begin{defarray}{\delta}
\partial_1 c_{\setI} \delta t +
\partial_2 c_{\setI} \Sigt \delta\zt - \Sigw \delta\zw = 0, \
\delta\zw = \delta w, \\
(\delta t,\delta\zt) \in \Tlinsig(t,\zt), \
\sigw_i \delta\zw_i \ge 0, \ i \in \alpw(w)
\end{defarray}
\or
\begin{defarray}[r@{\medspace}l]{\delta}
\partial_1 c_{\setI} \delta t +
\partial_2 c_{\setI} \Sigt \delta\zt - \Sigw \delta\zw &= 0, \\
(\delta t,\delta\zt) \in \Tlinsig(t,\zt), \
\delta\zw &= \delta w, \\
\sigw_i \delta\zw_i \ge 0,\ i & \in \alpw(w)
\end{defarray}
\or
\begin{defarray}[r@{\medspace}l]{\delta}
(\delta t,\delta\zt) &\in \Tlinsig(t,\zt), \\
\partial_1 c_{\setI} \delta t +
\partial_2 c_{\setI} \Sigt \delta\zt - \Sigw \delta\zw &= 0, \\
\delta w &= \delta \zw, \\
\sigw_i \delta\zw_i \ge 0,\ i & \in \alpw(w)
\end{defarray}
\fi
.
\end{equation*}
Here, all partial derivatives are evaluated at $(t, \Sigt\zt)$.
\ifCmp\else
\end{lemma}
\begin{proof}
This follows from \cref{def:cones-branch-anf}.
\end{proof}
\fi
Moreover, we obtain the following decompositions by applying
\cref{le:decomp-cones-anf} to \eqref{eq:e-anf}
at $y=(t,w,\zt,\zw)$ with associated branch problems $\eqref{eq:branch-e-anf}$:
\begin{equation*}
\Teabs(y) = \bigcup_{\Sigtw} \Tesig(y) \qtextq{and} \Tlineabs(y) = \bigcup_{\Sigtw} \Tlinesig(y).
\end{equation*}
As before, the \tcone is a subset of the linearized cone
and the reverse inclusion holds for the dual cones:
\begin{equation*}
\Teabs(y)
\subseteq
\Tlineabs(y)
\qtextq{and}
\Teabs(y)^*
\supseteq
\Tlineabs(y)^*.
\end{equation*}
This follows directly by applying \cref{le:i-cones} to \eqref{eq:e-anf}.
Again, equality does not hold in general,
and we consider Abadie's Kink Qualification (AKQ)
and Guignard's Kink Qualification (GKQ) for \eqref{eq:e-anf}.
\ifCmp
Given a feasible point $y = (t,w,\zt(t),\zw(w))$ of \eqref{eq:e-anf},
\cref{def:akq} gives AKQ and GKQ at $(t,w)$, respectively, as
\begin{equation*}
\Teabs(y) = \Tlineabs(y)
\qtextq{and}
\Teabs(y)^* = \Tlineabs(y)^*.
\end{equation*}
\else
\begin{lemma}[AKQ for \eqref{eq:e-anf}]
Consider a feasible point $(t,w,\zt(t),\zw(w))$ of \eqref{eq:e-anf}.
Then, AKQ for \eqref{eq:e-anf} at $(t,w)$ reads
\begin{equation*}
\Teabs(t,w,\zt(t),\zw(w))=\Tlineabs(t,w,\zt(t),\zw(w)).
\end{equation*}
\end{lemma}
\begin{proof}
This follows from \cref{def:akq}.
\end{proof}
\begin{lemma}[GKQ for \eqref{eq:e-anf}]
Consider a feasible point $(t,w,\zt(t),\zw(w))$ of \eqref{eq:e-anf}.
Then, GKQ for \eqref{eq:e-anf} at $(t,w)$ reads
$\Teabs(t,w,\zt(t),\zw(w))^*=\Tlineabs(t,w,\zt(t),\zw(w))^*$.
\end{lemma}
\begin{proof}
This follows from \cref{def:gkq}.
\end{proof}
\fi
\begin{remark}\label{rem:independece}
The possible slack values
$w \in W(t) \define \defset{w}{\abs{w} = c_{\setI}(t,\abs{\zt(t)})}$
just differ by the signs of the nonzero components $w_i$, $i \notin \setA(t)$.
Thus, neither AKQ nor GKQ depends on the particular choice of $w$,
and both conditions are well-defined for $\eqref{eq:e-anf}$.
\end{remark}
\ifCmp
Now \cref{th:branch-acq_akq} takes the following form.
\begin{theorem}
[ACQ/GCQ for all \eqref{eq:branch-e-anf} implies AKQ/GKQ for \eqref{eq:e-anf}]
\label{th:e-branch-acq_akq}
\label{th:e-branch-gcq_gkq}
Consider a feasible point $y = (t,w,\zt(t),\zw(w))$ of \eqref{eq:e-anf}
with associated branch problems \eqref{eq:branch-e-anf}.
Then, AKQ respectively GKQ for \eqref{eq:e-anf} holds at $(t,w)$ if
ACQ respectively GCQ holds for all \eqref{eq:branch-e-anf} at $y$.
\end{theorem}
\else
As before, AKQ or GKQ are implied if ACQ or GCQ hold for all branch problems.
\begin{theorem}[ACQ for all \eqref{eq:branch-e-anf} implies AKQ for \eqref{eq:e-anf}]\label{th:e-branch-acq_akq}
Consider a feasible point $(t,w,\zt(t),\zw(w))$ of \eqref{eq:e-anf} with associated branch problems \eqref{eq:branch-e-anf}.
Then, AKQ for \eqref{eq:e-anf} holds at $(t,w)$ if ACQ holds for all \eqref{eq:branch-e-anf} at $(t,w,\zt(t),\zw(w))$.
\end{theorem}
\begin{proof}
This follows from \cref{th:branch-acq_akq}.
\end{proof}
\begin{theorem}[GCQ for all \eqref{eq:branch-e-anf} implies GKQ for \eqref{eq:e-anf}]\label{th:e-branch-gcq_gkq}
Consider a feasible point $(t,w,\zt(t),\zw(w))$ of \eqref{eq:e-anf} with associated branch problems \eqref{eq:branch-e-anf}.
Then, GKQ for \eqref{eq:e-anf} holds at $(t,w)$ if GCQ holds for all \eqref{eq:branch-e-anf} at $(t,w,\zt(t),\zw(w))$.
\end{theorem}
\begin{proof}
This follows from \cref{th:branch-gcq_gkq}.
\end{proof}
\fi
\subsection{Relations of Kink Qualifications for Abs-Normal NLPs}
In this subsection we discuss the relations of kink qualifications for the two formulations introduced above.
Here, equality of the cones and of the dual cones just needs to be considered for one element of the set $W(t) = \defset{w}{\abs{w}=c_{\setI}(t,\abs{\zt(t)})}$.
Then, it holds directly for all other elements by \cref{rem:independece}.
\begin{theorem}\label{th:akq}
AKQ for \eqref{eq:i-anf} holds at $(t,\zt(t))\in\Fabs$ if and only if
AKQ for \eqref{eq:e-anf} holds at $(t,w,\zt(t),\zw(w))\in\Feabs$
for any (and hence all) $w\in W(t)$.
\end{theorem}
\begin{proof}
As $\Tabs(t,\zt)\subseteq \Tlinabs(t,\zt)$ and
$\Teabs(t,\zt)\subseteq \Tlineabs(t,\zt)$
always hold, we just need to prove
\begin{equation*}
\Tabs(t, \zt) \supseteq \Tlinabs(t, \zt)
\iff
\Teabs(t, w, \zt, \zw) \supseteq \Tlineabs(t, w, \zt, \zw).
\end{equation*}
We start with the implication ``$\Rightarrow$''. Let $\delta=(\delta t,\delta w,\delta \zt, \delta\zw)\in \Tlineabs(t,w, \zt, \zw)$.
Then, we have $\tilde\delta=(\delta t,\delta \zt)\in \Tlinabs(t,\zt)=\Tabs(t,\zt)$.
Hence, there exist sequences $(t_k,\zt_k)\in\Fabs$ and $\tau_k\searrow 0$ with $(t_k,\zt_k)\to(t,\zt)$ and $\tau_k^{-1}(t_k-t, \zt_k-\zt)\to (\delta t,\delta\zt)$.
Now, define
\begin{equation*}
\Sigw=\diag(\sigma) \qtextq{with} \sigma_i=\begin{cases} \sigw_i(w), & i\notin\alpw(w), \\ \sign(\delta\zw_i), & i\in\alpw(w), \end{cases}
\end{equation*}
and set $\zw_k\define w_k\define\Sigw c_{\setI}(t_k,\abs{\zt_k})$.
Then, we have $\zw=w= \Sigw c_{\setI}(t,\abs{\zt})$ and obtain
\begin{align*}
\zw_k-\zw
&=
\Sigw [c_{\setI}(t_k,\abs{\zt_k})-c_{\setI}(t,\abs{\zt})] \\
&=
\Sigw [
\partial_1 c_{\setI}(t,\abs{\zt})(t_k-t) +
\partial_2 c_{\setI}(t,\abs{\zt})(\abs{\zt_k}-\abs{\zt}) +
o(\norm{(t_k-t, \abs{\zt_k}-\abs{\zt})})
].
\end{align*}
Further, for $k$ large enough we have
$\abs{\zt_k}-\abs{\zt}=\Sigt_k\zt_k-\Sigt\zt$ using
$\Sigt_k=\diag(\sigt_k)$ with $\sigt_k=\sigt(t_k)$ and $\Sigt=\diag(\sigt)$ with $\sigt=\sigt(t)$.
Then, we obtain for $\zt_i\neq 0$
\begin{equation*}
\tau_k^{-1}(\abs{(\zt_k)_i}-\abs{\zt_i}) = \tau_k^{-1}\sigt_i ((\zt_k)_i-\zt_i)\to \sigt_i\delta\zt_i.
\end{equation*}
For $\zt_i= 0$ we have $\tau_k^{-1}(\zt_k)_i \to \delta\zt_i$ and hence
\begin{equation*}
\tau_k^{-1}(\abs{(\zt_k)_i}-\abs{\zt_i}) =
\tau_k^{-1}\abs{(\zt_k)_i} \to \abs{\delta\zt_i}.
\end{equation*}
Thus,
$ \tau_k^{-1}(\abs{(\zt_k)}-\abs{\zt}) \to \delta\zeta$
holds, and in total
\begin{equation*}
\tau_k^{-1}(\zw_k-\zw)\to \Sigw [\partial_1 c_{\setI}(t,\abs{\zt})\delta t + \partial_2 c_{\setI}(t,\abs{\zt})\delta \zeta]=\Sigw\delta\omega=\delta\zw.
\end{equation*}
Additionally, we obtain $\tau_k^{-1}(w_k-w)\to \delta w$ and finally $\delta\in \Teabs(t,w, \zt, \zw)$.
To prove the implication ``$\Leftarrow$'',
consider $\delta=(\delta t,\delta \zt)\in \Tlinabs(t,\zt)$. We define
\begin{equation*}
\Sigw=\diag(\sigma) \qtextq{with} \sigma_i=\begin{cases} \pm 1, & i\in\setA(t), \\ \sign([\partial_1c_{\setI}(t,\abs{\zt})\delta t+\partial_2c_{\setI}(t,\abs{\zt})\delta\zeta]_i), & i\notin\setA(t), \end{cases}
\end{equation*}
and set $\delta w=\delta\zw=\Sigw [\partial_1c_{\setI}(t,\abs{\zt})\delta t+\partial_2c_{\setI}(t,\abs{\zt})\delta\zeta]$.
Then\ifnum\FmtChoice=0,\else\ we have \fi $\tilde\delta=(\delta t,\delta w,\delta\zt,\delta\zw)\in\Tlineabs(t,w,\zt,\zw)$ for $w=\zw=\Sigw c_{\setI}(t,\abs{\zt})$.
By assumption, $\tilde\delta\in\Teabs(t,w,\zt,\zw)$ holds,
and this directly implies $\delta=(\delta t,\delta\zt)\in\Tabs(t,\zt)$.
\end{proof}
\begin{theorem}\label{th:gkq}
GKQ for \eqref{eq:i-anf} holds at \ifnum\FmtChoice=1 the point \fi $(t,\zt(t))\in\Fabs$ if GKQ for \eqref{eq:e-anf} holds at $(t,w,\zt(t),\zw(w))\in\Feabs$ for any (and hence all) $w\in W(t)$.
\end{theorem}
\begin{proof}
The inclusion $\Tabs(t, \zt)^* \supseteq \Tlinabs(t, \zt)^*$ is always satisfied.
Thus, we just have to show
\begin{equation*}
\Tabs(t, \zt)^* \subseteq \Tlinabs(t, \zt)^*.
\end{equation*}
Let $\omega=(\omega t,\omega \zt)\in \Tabs(t, \zt)^*$,
i.e. $\omega^T\delta\ge 0$ for all $\delta=(\delta t,\delta \zt)\in \Tabs(t,\zt)$.
Then, set $\tilde{\omega}=(\omega t,0,\omega \zt,0)$ and obtain $\tilde{\omega}^T\tilde \delta=\omega^T\delta\ge 0$ for all $\tilde \delta=(\delta t,\delta w,\delta\zt,\delta\zw)\in \Teabs(t,w,\zt,\zw)$ where $w\in W(t)$ is arbitrary, since the projection $\delta=(\delta t,\delta\zt)$ of any such $\tilde\delta$ lies in $\Tabs(t,\zt)$.
By assumption, then $\tilde{\omega}^T\tilde \delta\ge 0$ for all $\tilde \delta\in \Tlineabs(t,w,\zt,\zw)$ holds.
This implies $\omega^T\delta=\tilde{\omega}^T\tilde \delta\ge 0$ for all $\delta\in \Tlinabs(t,\zt)$.
\end{proof}
The converse implication presumably does not hold in general, but we are not aware of a counterexample.
Next, we consider the branch problems and relations of ACQ and GCQ for all branch problems.
Here, we can exploit sign information to show equivalence of GCQ for the
branch problems of \eqref{eq:i-anf} and \eqref{eq:e-anf}.
\begin{theorem}\label{th:branch-acq}
ACQ for \eqref{eq:branch-anf} holds at $(t,\zt(t))\in\Fsig$ if and only if
ACQ for \eqref{eq:branch-e-anf} holds at $(t,w,\zt(t),\zw(w))\in\Fesig$ for any (and hence all) $w\in W(t)$.
\end{theorem}
\begin{proof}
The proof proceeds as in \cref{th:akq}.
\end{proof}
\begin{theorem}\label{th:branch-gcq}
GCQ for \eqref{eq:branch-anf} holds at $(t,\zt(t))\in\Fsig$ if and only if
GCQ for \eqref{eq:branch-e-anf} holds at $(t,w,\zt(t),\zw(w))\in\Fesig$ for any (and hence all) $w\in W(t)$.
\end{theorem}
\begin{proof}
The inclusions $\Tsig(t,\zt)^*\supseteq \Tlinsig(t,\zt)^*$ and
$\Tesig(t,\zt)^*\supseteq \Tlinesig(t,\zt)^*$
are always satisfied. Thus, we just need to prove
\begin{equation*}
\Tsig(t, \zt)^* \subseteq \Tlinsig(t, \zt)^*
\iff
\Tesig(t, w, \zt, \zw)^* \subseteq \Tlinesig(t, w, \zt, \zw)^*.
\end{equation*}
We start with the implication ``$\Rightarrow$''.
Let $\omega=(\omega t,\omega w,\omega \zt,\omega\zw)\in\Tesig(t,w,\zt,\zw)^*$, i.e. $\omega^T\delta\ge 0$ for all $\delta=(\delta t,\delta w,\delta \zt,\delta \zw)\in \Tesig(t,w,\zt,\zw)$.
\ifcase0
Set
\begin{equation*}
\tilde\omega
=
(\tilde\omega t,\tilde\omega \zt)
=
(\omega t,\omega \zt) +
(\omega w + \omega \zw) \Sigw
(\partial_1c_{\setI}(t,\Sigt \zt),
\partial_2c_{\setI}(t,\Sigt\zt)\Sigt).
\end{equation*}
\or
Set $\tilde\omega =(\tilde\omega t,\tilde\omega \zt)$ with
\begin{align*}
\tilde\omega t
&\define
\omega t + (\omega w + \omega \zw)\Sigw \partial_1c_{\setI}(t,\Sigt \zt),
\\
\tilde\omega\zt
&\define
\omega\zt +
(\omega w + \omega \zw)\Sigw \partial_2c_{\setI}(t,\Sigt\zt)\Sigt.
\end{align*}
\fi
Then, we have $\tilde\omega^T \tilde\delta = \omega^T \delta \ge 0$ for all $\tilde\delta=(\delta t,\delta \zt)\in \Tsig(t,\zt)$, where $\delta=(\delta t,\delta w,\delta \zt,\delta \zw)\in\Tesig(t,w,\zt,\zw)$ denotes the lift of $\tilde\delta$ with $\delta w=\delta\zw=\Sigw[\partial_1c_{\setI}(t,\Sigt \zt)\delta t+\partial_2c_{\setI}(t,\Sigt\zt)\Sigt\delta\zt]$, constructed as in the proof of \cref{th:akq}. Thus, $\tilde\omega\in\Tsig(t,\zt)^*$ and, by assumption, $\tilde\omega\in\Tlinsig(t,\zt)^*$.
Then, $\omega^T\delta\ge 0$ for all $\delta=(\delta t,\delta w,\delta \zt,\delta \zw)\in \Tlinesig(t,w,\zt,\zw)$ as $\omega^T\delta=\tilde\omega^T\tilde\delta$ holds for $\tilde\delta=(\delta t,\delta\zt)\in\Tlinsig(t,\zt)$, i.e. $\omega\in\Tlinesig(t,w,\zt,\zw)^*$.
The reverse implication is proved as in \cref{th:gkq}.
\end{proof}
\section{Counterpart MPCCs}
\label{sec:counterpart}
In this section we restate the MPCC counterpart problems for the two formulations
\eqref{eq:i-anf} and \eqref{eq:e-anf} and we present the relations between them.
\subsection{Counterpart MPCC for the General Abs-Normal NLP}
To reformulate \eqref{eq:i-anf} as an MPCC, we split $\zt$ into its nonnegative part and the modulus of its
nonpositive part, $\ut \define [\zt]^+\define\max(\zt,0)$ and $\vt \define[\zt]^-\define \max(-\zt,0)$.
Then, we add complementarity of these two variables
to replace $\abs{\zt}$ by $\ut+\vt$ and $\zt$ itself by $\ut-\vt$.
\begin{definition}[Counterpart MPCC of \eqref{eq:i-anf}]
\label{def:i-mpec}
The \emph{counterpart MPCC} of the non-smooth NLP \eqref{eq:i-anf} reads
\begin{align*}
\sminst[t,\ut,\vt]{f(t)}
& c_{\setE}(t,\ut+\vt)=0, \quad
c_{\setI}(t,\ut+\vt)\ge 0,\\
& c_{\setZ}(t,\ut+\vt)-(\ut-\vt)=0,\tag{I-MPCC} \label{eq:i-mpec}\\
& 0 \le \ut \ensuremath\perp \vt \ge 0,
\end{align*}
where $\ut, \vt\in\mathbb R^{s_t}$.
The feasible set of \eqref{eq:i-mpec} is denoted by $\Fmpec$.
\end{definition}
Given an abs-normal NLP \eqref{eq:i-anf} and its counterpart MPCC \eqref{eq:i-mpec},
the mapping $\phi\: \Fmpec \to \Fabs$ defined by
\begin{equation*}
\phi(t, \ut, \vt) = (t, \ut - \vt) \qtextq{and} \Inv\phi(t, \zt) = (t, [\zt]^+, [\zt]^-)
\end{equation*}
is a homeomorphism. This result was obtained in \cite[Lemma 31]{Hegerhorst_et_al:2019:MPEC2}.
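A direct computation confirms the round-trip property of $\phi$. The following sketch (illustrative only, with our own naming) splits $\zt$ into $[\zt]^+$ and $[\zt]^-$ and verifies complementarity and $\ut+\vt=\abs{\zt}$ on sample data.

```python
def phi(t, u, v):
    """phi: F^MPCC -> F^abs, (t, u, v) |-> (t, u - v)."""
    return t, tuple(ui - vi for ui, vi in zip(u, v))

def phi_inv(t, z):
    """phi^{-1}: (t, z) |-> (t, [z]^+, [z]^-) with [z]^+ = max(z, 0), [z]^- = max(-z, 0)."""
    u = tuple(max(zi, 0.0) for zi in z)
    v = tuple(max(-zi, 0.0) for zi in z)
    return t, u, v

t, z = 0.3, (1.5, 0.0, -2.0)
t2, u, v = phi_inv(t, z)
assert all(ui >= 0.0 and vi >= 0.0 and ui * vi == 0.0 for ui, vi in zip(u, v))  # 0 <= u perp v >= 0
assert phi(t2, u, v) == (t, z)                                    # round trip
assert tuple(ui + vi for ui, vi in zip(u, v)) == (1.5, 0.0, 2.0)  # u + v = |z|
```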
Corresponding to the active switching set in the previous section, we introduce index sets for MPCCs.
\begin{definition}[Index Sets]
We denote by $\setUt_0\define\defset{i\in \set{1, \dots, s_t}}{\ut_i=0}$
the set of indices of active inequalities $\ut_i\geq 0$,
and by $\setUt_+\define\defset{i\in\set{1, \dots, s_t}}{\ut_i>0}$
the set of indices of inactive inequalities $\ut_i\geq 0$.
Analogous definitions hold for $\setVt_0$ and $\setVt_+$.
By $\setDt\define\setUt_0\cap\setVt_0$ we denote
the set of indices of non-strict (degenerate) complementarity pairs.
Thus we have the partitioning
$\set{1, \dots, s_t} = \setUt_+ \cup \setVt_+ \cup \setDt$.
\end{definition}
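Computing these index sets from a complementary pair is straightforward; a small Python sketch (names ours):

```python
def index_sets(u, v):
    """Partition {0,...,s-1} of a complementary pair 0 <= u perp v >= 0 into
    U_+ = {i: u_i > 0}, V_+ = {i: v_i > 0}, and the degenerate set D = {i: u_i = v_i = 0}."""
    U_plus = {i for i, ui in enumerate(u) if ui > 0}
    V_plus = {i for i, vi in enumerate(v) if vi > 0}
    D = set(range(len(u))) - U_plus - V_plus
    return U_plus, V_plus, D

print(index_sets((1.5, 0.0, 0.0), (0.0, 2.0, 0.0)))
```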
In order to define MPCC-CQs in the spirit of Abadie and Guignard,
we introduce the \tcone, the complementarity cone,
and the MPCC-linearized cone.
\begin{definition}[\TCone and MPCC-Linearized Cone for \eqref{eq:i-mpec}, see \cite{FlegelDiss}]\label{def:cones-i-mpec}
Consider a feasible point $(t,\ut,\vt)$ of \eqref{eq:i-mpec} with associated index sets $\setUt_+$, $\setVt_+$ and $\setDt$.
The \emph{\tcone} to $\Fmpec$ at $(t,\ut,\vt)$ is
\begin{equation*}
\Tmpec(t, \ut, \vt) \define
\begin{defarray}{(\delta t, \delta\ut, \delta\vt)}
\exists \tau_k \searrow 0,\
\Fmpec \ni (t_k, \ut_k, \vt_k) \to (t, \ut, \vt){:} \\
\tau_k^{-1} (t_k - t, \ut_k - \ut, \vt_k - \vt)
\to (\delta t, \delta \ut, \delta \vt)
\end{defarray}
.
\end{equation*}
The \emph{MPCC-linearized cone} at $(t,\ut,\vt)$ is
\begin{equation*}
\Tlinmpec(t, \ut, \vt) \define
\ifcase1
\begin{defarray}[r@{\medspace}l]
{\begin{pmatrix}\delta t \\ \delta\ut \\ \delta\vt \end{pmatrix}}
\partial_1 c_{\setE} \delta t + \partial_2 c_{\setE} (\delta\ut + \delta\vt) &= 0,\\
\partial_1 \cA \delta t + \partial_2 \cA (\delta\ut + \delta\vt) &\ge 0,\\
\partial_1 c_{\setZ} \delta t + \partial_2 c_{\setZ} (\delta\ut + \delta\vt) &=
\delta \ut - \delta \vt, \\
\delta \ut_i &= 0,\ i \in \setVt_+, \\
\delta \vt_i &= 0,\ i \in \setUt_+, \\
0 \le \delta \ut_i \ensuremath\perp \delta \vt_i &\ge 0,\ i \in \setDt
\end{defarray}
\or
\begin{defarray}[r@{\medspace}l]
{\begin{pmatrix}\delta t \\ \delta\ut \\ \delta\vt \end{pmatrix}}
\partial_1 c_{\setE} \delta t + \partial_2 c_{\setE} (\delta\ut + \delta\vt) &= 0,\\
\partial_1 \cA \delta t + \partial_2 \cA (\delta\ut + \delta\vt) &\ge 0,\\
\partial_1 c_{\setZ} \delta t + \partial_2 c_{\setZ} (\delta\ut + \delta\vt) &=
\delta \ut - \delta \vt, \\
(\delta \ut, \delta \vt) &\in \Tcompl(\ut, \vt)
\end{defarray}
\fi
\end{equation*}
with \emph{complementarity cone}
\begin{equation*}
\Tcompl(\ut, \vt)
\define
\ifcase0
\begin{defarray}{(\delta \ut, \delta \vt)}
\delta \ut_i = 0,\ i \in \setVt_+,\
\delta \vt_i = 0,\ i \in \setUt_+, \\
0 \le \delta \ut_i \ensuremath\perp \delta \vt_i \ge 0,\ i \in \setDt
\end{defarray}
\else
\begin{defarray}[r@{\medspace}l]{(\delta \ut, \delta \vt)}
\delta \ut_i &= 0,\ i \in \setVt_+, \\
\delta \vt_i &= 0,\ i \in \setUt_+, \\
0 \le \delta \ut_i \ensuremath\perp \delta \vt_i &\ge 0,\ i \in \setDt
\end{defarray}
\fi
.
\end{equation*}
Here, all partial derivatives are evaluated at $(t, \ut + \vt)$.
\end{definition}
Note that the MPCC-linearized cone was originally stated in \cite{Pang_Fukushima_1999} and \cite{Scheel_Scholtes_2000}, but was not further investigated there.
Moreover, we modified the definition in \cite{FlegelDiss} by introducing the complementarity cone, which is studied in the next lemma.
\begin{lemma}
The complementarity cone $\Tcompl(\hut, \hvt)$ is the \tcone
and also the linearized cone to the complementarity set
$\defset{(\ut, \vt)}{0 \le \ut \ensuremath\perp \vt \ge 0}$ at $(\hut, \hvt)$.
\end{lemma}
\begin{proof}
Given a tangent vector
$(\delta \ut, \delta \vt) = \lim \Inv\tau_k (\ut_k - \hut, \vt_k - \hvt)$
where $0 \le \ut_k \ensuremath\perp \vt_k \ge 0$ and $\tau_k \searrow 0$,
we have for $k$ large enough:
\begin{align*}
\ut_{ki} > 0,\ \vt_{ki} &= 0, & i \in \setUt_+\ (\hut_i &> 0,\ \hvt_i = 0), \\
\ut_{ki} = 0,\ \vt_{ki} &> 0, & i \in \setVt_+\ (\hut_i &= 0,\ \hvt_i > 0), \\
0 \le \ut_{ki} \ensuremath\perp \vt_{ki} &\ge 0, & i \in \setDt\ (\hut_i &= 0,\ \hvt_i = 0).
\end{align*}
This implies $(\delta \ut, \delta \vt) \in \Tcompl(\hut, \hvt)$.
Conversely, every $(\delta \ut, \delta \vt) \in \Tcompl(\hut, \hvt)$
is a tangent vector generated by the sequence
$(\ut_k, \vt_k) = (\hut, \hvt) + \tau_k (\delta \ut, \delta \vt)$
with $\tau_k = 1/k$, $k \in \mathbb N_{>0}$.
The linearized cone clearly coincides with the \tcone.
\end{proof}
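Both directions of this proof can be mimicked numerically. The sketch below (an informal check on toy data; all names are ours) tests membership in the complementarity cone and verifies that the generating sequence $(\hut,\hvt)+\tau_k(\delta\ut,\delta\vt)$ stays complementary.

```python
def in_compl_cone(du, dv, U_plus, V_plus, D):
    """Membership in T^compl(u,v): du_i = 0 on V_+, dv_i = 0 on U_+,
    and 0 <= du_i perp dv_i >= 0 on the degenerate set D."""
    return (all(du[i] == 0 for i in V_plus)
            and all(dv[i] == 0 for i in U_plus)
            and all(du[i] >= 0 and dv[i] >= 0 and du[i] * dv[i] == 0 for i in D))

# at (u,v) = ((1,0,0), (0,1,0)): U_+ = {0}, V_+ = {1}, D = {2}
U_plus, V_plus, D = {0}, {1}, {2}
u_hat, v_hat = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
du, dv = (0.5, 0.0, 2.0), (0.0, -0.3, 0.0)
assert in_compl_cone(du, dv, U_plus, V_plus, D)
# the generating sequence (u,v) + tau_k (du,dv) remains complementary
for tau in (1.0, 0.5, 0.25, 0.125):
    u_k = tuple(a + tau * b for a, b in zip(u_hat, du))
    v_k = tuple(a + tau * b for a, b in zip(v_hat, dv))
    assert all(a >= 0 and b >= 0 and a * b == 0 for a, b in zip(u_k, v_k))
```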
\begin{lemma} \label{le:hom-T-i}
Given \eqref{eq:i-anf} with counterpart MPCC \eqref{eq:i-mpec},
consider $(t, \zt) \in \Fabs$ with $\sigt=\sigt(t)$ and $(t, \ut, \vt) = \phi^{-1}(t,\zt)\in\Fmpec$ with associated index sets $\setUt_+$, $\setVt_+$ and $\setDt$.
Define $\psi\: \Tmpec(t, \ut, \vt) \to \Tabs(t, \zt)$ and $\psi\: \Tlinmpec(t, \ut, \vt) \to \Tlinabs(t, \zt)$ as
\begin{equation*}
\psi(\delta t, \delta \ut, \delta \vt) = (\delta t, \delta \ut - \delta \vt) \qtextq{and} \Inv\psi(\delta t, \delta \zt) = (\delta t, \delpos\zt, \delneg\zt).
\end{equation*}
Here, $\delpos\zt, \delneg\zt$ map $\delta\zt$ into the complementarity cone via
\bgroup
\def\arraystretch{1.15}
\begin{equation*}
\delpos{\zt_i} =
\begin{ccases}
+\delta \zt_i, & i \in \setUt_+\ (\sigt_i > 0) \\ 0, & i \in \setVt_+\ (\sigt_i < 0) \\ {}[\delta \zt_i]^+, & i \in \setDt\ (\sigt_i = 0)
\end{ccases}
,\quad
\delneg{\zt_i} =
\begin{ccases}
0, & i \in \setUt_+\ (\sigt_i > 0) \\ -\delta \zt_i, & i \in \setVt_+\ (\sigt_i < 0) \\ {}[\delta \zt_i]^-, & i \in \setDt\ (\sigt_i = 0)
\end{ccases}.
\end{equation*}
\egroup
Then, both functions $\psi$ are homeomorphisms.
\end{lemma}
\begin{proof}
First, consider $\psi\: \Tmpec(t, \ut, \vt) \to \Tabs(t, \zt)$:
Given a \ifnum\FmtChoice=1 tangent \fi vector
\begin{math}
(\delta t, \delta \ut, \delta \vt) = \lim \Inv\tau_k (t_k - t, \ut_k - \ut, \vt_k - \vt) \in \Tmpec(t, \ut, \vt),
\end{math}
set
\begin{math}
(t_k, \zt_k) = \phi(t_k, \ut_k, \vt_k) = (t_k, \ut_k - \vt_k) \in \Fabs
\end{math}
to obtain
\begin{equation*}
\lim \frac{\zt_k - \zt}{\tau_k} = \lim \frac{(\ut_k - \ut) - (\vt_k - \vt)}{\tau_k} = \delta \ut - \delta \vt
\implies (\delta t, \delta \ut - \delta \vt) \in \Tabs(t, \zt).
\end{equation*}
Conversely, given a vector $(\delta t, \delta \zt) = \lim \Inv\tau_k (t_k - t, \zt_k - \zt) \in \Tabs(t, \zt)$,
define $(t_k, \ut_k, \vt_k) = \Inv\phi(t_k, \zt_k) = (t_k, [\zt_k]^+, [\zt_k]^-) \in \Fmpec$.
Then, $\tau_k^{-1}((\ut_k-\ut)-(\vt_k-\vt))\to \delpos\zt - \delneg\zt$ holds.
Thus, it remains to show $\tau_k^{-1}(\ut_k-\ut,\vt_k-\vt)\to (\delpos\zt,\delneg\zt)$ which is done componentwise:
\begin{itemize}
\item $i\in \setUt_+$: $\vt_i=0$ holds by feasibility and $\delneg\zt_i=0$ by definition.
Thus, $(\ut_k)_i>0$ holds for $k$ large enough and by complementarity $(\vt_k)_i=0$ holds.
Then, $\tau_k^{-1}((\ut_k)_i-\ut_i)\to \delpos\zt_i$ follows.
\item $i\in\setVt_+$: $\tau_k^{-1}((\vt_k)_i-\vt_i)\to \delneg\zt_i$ follows as in the previous case.
\item $i\in\setDt$ and $\delpos\zt_i>0$: $\delneg\zt_i=0$ holds by complementarity and so $\tau_k^{-1}((\ut_k)_i-(\vt_k)_i)\to \delpos\zt_i$.
Then, $\tau_k^{-1}(\ut_k)_i\to \delpos\zt_i$ and $\tau_k^{-1}(\vt_k)_i\to 0$ because of sign constraints.
\item $i\in\setDt$ and $\delneg\zt_i>0$:
$\tau_k^{-1}(\ut_k)_i\to 0$ and $\tau_k^{-1}(\vt_k)_i\to \delneg\zt_i$ follow as in the previous case.
\item $i\in\setDt$ and $\delpos\zt_i=\delneg\zt_i=0$: Then, $\tau_k^{-1}((\ut_k)_i-(\vt_k)_i)\to0$ holds. Because of sign constraints and complementarity, this can only hold if
$\tau_k^{-1}(\ut_k)_i\to 0$ and $\tau_k^{-1}(\vt_k)_i\to0$.
\end{itemize}
Altogether, this implies
\begin{equation*}
\lim \frac{(t_k - t, \ut_k - \ut, \vt_k - \vt)}{\tau_k} =
(\delta t, \delpos\zt, \delneg\zt) \in \Tmpec(t, \ut, \vt).
\end{equation*}
By construction, $\psi$ and $\psi^{-1}$ are both continuous and inverse to each other.\\
Second, consider $\psi\: \Tlinmpec(t, \ut, \vt) \to \Tlinabs(t, \zt)$:
Given $(\delta t, \delta \ut, \delta \vt) \in \Tlinmpec(t, \ut, \vt)$,
the vectors $\delta \zt = \delta \ut - \delta \vt$ and $\delta \zeta = \delta \ut + \delta \vt$ satisfy
\begin{equation*}
\delta\zt_i =
\begin{ccases}
\delta\ut_i, & i \in \setUt_+ \\ -\delta\vt_i, & i \in \setVt_+ \\ \delta\ut_i - \delta\vt_i, & i \in \setDt
\end{ccases}, \quad
\delta\zeta_i =
\begin{ccases}
\delta\ut_i=\sigt_i\delta \zt_i, & i \in \setUt_+ \\ \delta\vt_i=\sigt_i\delta \zt_i, & i \in \setVt_+ \\ \delta\ut_i + \delta\vt_i=\abs{\delta \zt_i}, & i \in \setDt
\end{ccases}.
\end{equation*}
Thus, $(\delta t, \delta \zt) = \psi(\delta t, \delta \ut, \delta \vt) \in \Tlinabs(t, \zt)$.
Conversely, the same case distinction yields
\begin{math}
(\delta t, \delta \ut, \delta \vt) =
\Inv\psi(\delta t, \delta \zt)
\in \Tlinmpec(t, \ut, \vt)
\end{math}
for every $(\delta t, \delta \zt) \in \Tlinabs(t, \zt)$.
Again, $\psi$ and $\psi^{-1}$ are both continuous and inverse to each other by construction.
\end{proof}
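\begin{remark}
To illustrate \cref{le:hom-T-i}, let $s_t = 3$ and $\zt = (2, -3, 0)$, so that $\setUt_+ = \{1\}$, $\setVt_+ = \{2\}$, $\setDt = \{3\}$ and $(t, \ut, \vt) = \Inv\phi(t, \zt)$ with $\ut = (2, 0, 0)$ and $\vt = (0, 3, 0)$.
For the direction $\delta \zt = (1, 1, -2)$ one obtains $\Inv\psi(\delta t, \delta \zt) = (\delta t, \delta \ut, \delta \vt)$ with $\delta \ut = (1, 0, 0)$ and $\delta \vt = (0, -1, 2)$.
Indeed, $\delta \ut - \delta \vt = \delta \zt$ holds and in the degenerate component $3$ we have $0 \le \delta \ut_3 \ensuremath\perp \delta \vt_3 \ge 0$.
\end{remark}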
\begin{definition}[Branch NLPs for \eqref{eq:i-mpec}, see \cite{Pang_Fukushima_1999}]\label{def:branch-mpec}
Consider a feasible point $(\^t,\hut,\hvt)$ of \eqref{eq:i-mpec}
with associated index sets $\setUt_+$, $\setVt_+$, and $\setDt$
and choose $\setPt \subseteq \setDt$
with complement $\bar\setPt = \setDt \setminus \setPt$.
The branch problem NLP($\setPt$) is defined as
\begin{align*}
\sminst[t,\ut,\vt]{f(t)}
& c_{\setE}(t,\ut+\vt) = 0,\\
& c_{\setI}(t,\ut+\vt) \ge 0,\\
& c_{\setZ}(t,\ut+\vt) - (\ut - \vt) = 0,\tag{NLP($\setPt$)}\label{eq:branch-mpec}\\
& 0 = \ut_i, \ 0 \le \vt_i, \ i\in\setVt_+\cup\setPt,\\
& 0 \le \ut_i, \ 0 = \vt_i, \ i\in\setUt_+\cup\bar{\setPt}.
\end{align*}
The feasible set of \eqref{eq:branch-mpec},
which always contains $(\^t,\hut,\hvt)$, is denoted by $\Fp$.
\end{definition}
Clearly, the homeomorphism $\phi$ can be restricted to the branch problems \eqref{eq:branch-anf} and \eqref{eq:branch-mpec} where $\setPt=\{i\in\alpt(\^t)\colon \sigt_i=-1 \}$.
Thus, the mapping $\phi_{\setPt}\: \Fp \to \Fsig$ defined by
\begin{equation*}
\phi_{\setPt}\define \phi\vert_{\Fp} \qtextq{and} \phi^{-1}_{\setPt}\define \phi^{-1}\vert_{\Fsig}
\end{equation*}
is a homeomorphism.
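\begin{remark}
For instance, if $\setDt = \{3\}$, there are exactly two branch problems: NLP($\emptyset$) fixes $\vt_3 = 0$, while NLP($\{3\}$) fixes $\ut_3 = 0$.
In general, a feasible point with $\abs{\setDt} = k$ gives rise to $2^k$ branch problems, and the branch matched with \eqref{eq:branch-anf} is selected by the signs $\sigt_i$ on $\alpt(\^t)$.
\end{remark}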
\ifCmp\else
\begin{definition}[\TCone and Linearized Cone for \eqref{eq:branch-mpec}]\label{def:cones-branch-mpec}
Given \eqref{eq:branch-mpec}, consider a feasible point $(t,\ut,\vt)$.
\fi
The \tcone to $\Fp$ at $(t,\ut,\vt)$ is
\begin{equation*}
\Tp(t,\ut,\vt) \define
\begin{defarray}{(\delta t,\delta\ut,\delta\vt)}
\exists \tau_k \searrow 0,\ \Fp \ni (t_k, \ut_k,\vt_k) \to (t,\ut,\vt){:} \\
\tau_k^{-1} (t_k - t, \ut_k - \ut, \vt_k - \vt) \to (\delta t, \delta\ut, \delta\vt)
\end{defarray}.
\end{equation*}
The linearized cone is
\begin{equation*}
\Tlinp(t, \ut,\vt) \define
\begin{defarray}{\begin{pmatrix} \delta t \\ \delta\ut \\ \delta\vt \end{pmatrix}}
\partial_1 c_{\setE} \delta t +
\partial_2 c_{\setE} (\delta\ut + \delta\vt) = 0, \\
\partial_1 \cA \delta t +
\partial_2 \cA (\delta\ut + \delta\vt) \ge 0, \\
\partial_1 c_{\setZ} \delta t +
\partial_2 c_{\setZ} (\delta\ut + \delta\vt) = \delta\ut - \delta \vt, \\
0 = \delta\ut_i \ \text{for} \ i\in\setVt_+\cup\setPt,\ 0 = \delta\vt_i \ \text{for} \ i\in\setUt_+\cup\bar\setPt,\\
0 \le \delta \ut_i \ \text{for} \ i\in \bar{\setPt},\ 0 \le \delta \vt_i \ \text{for} \ i\in\setPt
\end{defarray}.
\end{equation*}
Here, all partial derivatives are evaluated at $(t, \ut + \vt)$.
\ifCmp\else
\end{definition}
\fi
\begin{lemma} \label{le:hom-T-branch-i}
Given \eqref{eq:branch-anf} and \eqref{eq:branch-mpec} with $\setPt=\defset{ i\in\alpt(\^t)}{\sigt_i=-1}$.
Consider $(t, \zt) \in \Fsig$ and $(t,\ut,\vt)= \phi^{-1}_{\setPt}(t, \zt)$.
Define $\psi_{\setPt}\define \psi\vert_{\Tp}$, $\psi^{-1}_{\setPt}\define \psi^{-1}\vert_{\Tsig}$ and $\psi_{\setPt}\define \psi\vert_{\Tlinp}$, $\psi^{-1}_{\setPt}\define \psi^{-1}\vert_{\Tlinsig}$.
Then,
\begin{equation*}
\psi_{\setPt}\: \Tp(t, \ut, \vt) \to \Tsig(t, \zt) \qtextq{and} \psi_{\setPt}\: \Tlinp(t, \ut, \vt) \to \Tlinsig(t, \zt)
\end{equation*}
are homeomorphisms.
\end{lemma}
\begin{proof}
By construction and since $\alpt(\^t)=\setDt$, the following equalities of sets hold:
\begin{align*}
\setPt&=\{i\in\alpt(\^t): \sigt_i=-1\}, && \setVt_+=\{i\notin\alpt(\^t): \sigt_i=-1\},\\
\bar\setPt&=\{i\in\alpt(\^t): \sigt_i=+1\}, && \setUt_+=\{i\notin\alpt(\^t): \sigt_i=+1\}.
\end{align*}
Thus, the claim follows directly from \cref{le:hom-T-i}.
\end{proof}
\ifCmp\else
\begin{lemma}\label{le:decomp-cones-mpec}
\fi
Consider a feasible point $(t,\ut,\vt)$ of \eqref{eq:i-mpec} with associated branch problems \eqref{eq:branch-mpec}.
Then, the following decompositions of the \tcone
and of the abs-normal-linearized cone of \eqref{eq:i-mpec} hold
\ifCmp (for a proof see \cite{FlegelDiss})\fi:
\begin{equation}
\label{eq:decomp-cones-mpec}
\Tmpec(t,\ut,\vt)=\bigcup_{\setPt} \Tp(t,\ut,\vt)
\qtextq{and}
\Tlinmpec(t,\ut,\vt)=\bigcup_{\setPt} \Tlinp(t,\ut,\vt).
\end{equation}
\ifCmp
The following inclusions are also proved in \cite{FlegelDiss}:
\else
\end{lemma}
\begin{proof}
A proof may be found in \cite{FlegelDiss}.
\end{proof}
\begin{lemma}\label{le:i-cones-mpec}
Let $(t,\ut,\vt)$ be feasible for \eqref{eq:i-mpec}. Then,
\fi
\begin{equation*}
\Tmpec(t,\ut,\vt)\subseteq \Tlinmpec(t,\ut,\vt) \qtextq{and} \Tmpec(t,\ut,\vt)^* \supseteq \Tlinmpec(t,\ut,\vt)^*.
\end{equation*}
\ifCmp\else
\end{lemma}
\begin{proof}
A proof may be found in \cite{FlegelDiss}.
\end{proof}
\fi
In general, the converses do not hold. This motivates the definition of MPCC-ACQ and MPCC-GCQ.
\ifCmp
\begin{definition}
[Abadie's and Guignard's Constraint Qualifications for \eqref{eq:i-mpec}, see \cite{FlegelDiss}]
\label{def:mpec-acq}
\label{def:mpec-gcq}
Consider a feasible point $(t,\ut,\vt)$ of \eqref{eq:i-mpec}.
We say that \emph{Abadie's Constraint Qualification for MPCC (MPCC-ACQ)}
holds at $(t,\ut,\vt)$ if $\Tmpec(t,\ut,\vt) = \Tlinmpec(t,\ut,\vt)$,
and that \emph{Guignard's Constraint Qualification for MPCC (MPCC-GCQ)}
holds at $(t,\ut,\vt)$ if $\Tmpec(t,\ut,\vt)^* = \Tlinmpec(t,\ut,\vt)^*$.
\end{definition}
The decomposition \eqref{eq:decomp-cones-mpec} and its dualization imply
that both MPCC-CQs hold if the corresponding CQ holds for all branch problems.
\begin{theorem}[ACQ/GCQ for all \eqref{eq:branch-mpec}
implies MPCC-ACQ/MPCC-GCQ for \eqref{eq:i-mpec}]
\label{th:branch-acq_mpec-acq}
\label{th:branch-gcq_mpec-gcq}
Consider a feasible point $(t,\ut,\vt)$ of \eqref{eq:i-mpec}.
Then, MPCC-ACQ respectively MPCC-GCQ holds at $(t,\ut,\vt)$ if
ACQ respectively GCQ holds for all \eqref{eq:branch-mpec} at $(t,\ut,\vt)$.
\end{theorem}
\else
\begin{definition}[Abadie's Constraint Qualification for \eqref{eq:i-mpec}]\label{def:mpec-acq}
Consider a feasible point $(t,\ut,\vt)$ of \eqref{eq:i-mpec}.
We say that \emph{Abadie's Constraint Qualification for MPCC (MPCC-ACQ)} holds for \eqref{eq:i-mpec} at
$(t,\ut,\vt)$ if $\Tmpec(t,\ut,\vt)=\Tlinmpec(t,\ut,\vt)$.
\end{definition}
\begin{definition}[Guignard's Constraint Qualification for \eqref{eq:i-mpec}]\label{def:mpec-gcq}
Consider a feasible point $(t,\ut,\vt)$ of \eqref{eq:i-mpec}.
We say that \emph{Guignard's Constraint Qualification for MPCC (MPCC-GCQ)} holds for \eqref{eq:i-mpec} at
$(t,\ut,\vt)$ if $\Tmpec(t,\ut,\vt)^*=\Tlinmpec(t,\ut,\vt)^*$.
\end{definition}
Both MPCC-CQs hold if the corresponding CQ holds for all branch problems.
\begin{theorem}[ACQ for all \eqref{eq:branch-mpec} implies MPCC-ACQ for \eqref{eq:i-mpec}]\label{th:branch-acq_mpec-acq}
Consider a feasible point $(t,\ut,\vt)$ of \eqref{eq:i-mpec} with associated branch problems \eqref{eq:branch-mpec}.
Then, MPCC-ACQ holds for \eqref{eq:i-mpec} at $(t,\ut,\vt)$ if ACQ holds for all \eqref{eq:branch-mpec} at $(t,\ut,\vt)$.
\end{theorem}
\begin{proof}
This follows directly from \cref{le:decomp-cones-mpec}.
\end{proof}
\begin{theorem}[GCQ for all \eqref{eq:branch-mpec} implies MPCC-GCQ for \eqref{eq:i-mpec}]\label{th:branch-gcq_mpec-gcq}
Consider a feasible point $(t,\ut,\vt)$ of \eqref{eq:i-mpec} with associated branch problems \eqref{eq:branch-mpec}.
Then, MPCC-GCQ holds for \eqref{eq:i-mpec} at $(t,\ut,\vt)$ if GCQ holds for all \eqref{eq:branch-mpec} at $(t,\ut,\vt)$.
\end{theorem}
\begin{proof}
This follows directly from \cref{le:decomp-cones-mpec} by dualization.
\end{proof}
\fi
\subsection{Counterpart MPCC for the Abs-Normal NLP with Inequality Slacks}\label{subsec:e-mpec}
\ifCmp
By \cref{def:i-mpec},
the \emph{counterpart MPCC} of the non-smooth NLP \eqref{eq:e-anf} reads:
\else
Using the same approach as in the preceding paragraph, we restate the counterpart MPCC of \eqref{eq:e-anf}.
\begin{definition}[Counterpart MPCC of \eqref{eq:e-anf}]
The \emph{counterpart MPCC} of the non-smooth NLP \eqref{eq:e-anf} reads:
\fi
\begin{align*}
\sminst[t,w,\ut,\vt,\uw,\vw]{f(t)}
& c_{\setE}(t,\ut+\vt)=0,\\
& c_{\setI}(t,\ut+\vt) - (\uw+\vw) = 0, \\
& c_{\setZ}(t,\ut+\vt) - (\ut-\vt) = 0,\tag{E-MPCC} \label{eq:e-mpec}\\
& w - (\uw-\vw) = 0, \\
& 0 \le \ut \ensuremath\perp \vt \ge 0, \quad 0 \le \uw \ensuremath\perp \vw \ge 0,
\end{align*}
where $\ut, \vt \in \mathbb R^{s_t}$ and $\uw, \vw \in \mathbb R^{m_2}$.
The feasible set is denoted by $\Fempec$ and is a lifting of $\Fmpec$.
\ifCmp\else
\end{definition}
\fi
Clearly, the homeomorphism between $\Fmpec$ and $\Fabs$
extends to $\Fempec$ and $\Feabs$. It is given by
\begin{align*}
\_\phi(t, w, \ut, \vt, \uw, \vw) &= (t, w, \ut - \vt, \uw - \vw),\\
\Inv{\_\phi}(t, w, \zt, \zw) &= (t, w, [\zt]^+, [\zt]^-, [\zw]^+, [\zw]^-).
\end{align*}
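\begin{remark}
To illustrate the slack lifting, consider a single inequality component with $w = -1$: then $\uw = [w]^+ = 0$ and $\vw = [w]^- = 1$, so that $\uw - \vw = w$ and $\uw + \vw = \abs{w} = 1$, and the constraint $c_{\setI}(t,\ut+\vt) - (\uw+\vw) = 0$ of \eqref{eq:e-mpec} states $c_{\setI}(t,\ut+\vt) = \abs{w}$.
\end{remark}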
Just like in the abs-normal case, problem \eqref{eq:e-mpec} is a special case of \eqref{eq:i-mpec}.
\ifCmp
Hence, we obtain the following material by specializing the definitions
and results for \eqref{eq:i-mpec}.
By \cref{def:cones-i-mpec}, the \tcone to $\Fempec$ at a feasible point $y = (t, w, \ut, \vt, \uw, \vw)$ reads
\else
Hence, we obtain the next lemmas from the corresponding definitions and lemmas for \eqref{eq:i-mpec}.
\begin{lemma}[\TCone and MPCC-Linearized Cone for \eqref{eq:e-mpec}]\label{le:cones-e-mpec}
Consider a feasible point $y = (t,w, \ut,\vt,\uw,\vw)$ of \eqref{eq:e-mpec}.
The \tcone to $\Fempec$ at $y$ reads
\fi
\begin{equation*}
\Tempec(y) =
\begin{defarray}{\delta}
\exists \tau_k \searrow 0,\
\Fempec \ni y_k = (t_k, w_k, \ut_k, \vt_k, \uw_k, \vw_k) \to y{:} \\
\tau_k^{-1} (y_k - y) \to
\delta = (\delta t, \delta w, \delta\ut, \delta\vt, \delta\uw, \delta\vw)
\end{defarray}
.
\end{equation*}
The MPCC-linearized cone reads
\begin{equation*}
\Tlinempec(y) =
\ifcase0
\begin{defarray} \delta
\partial_1c_{\setI} \delta t + \partial_2c_{\setI} (\delta\ut + \delta\vt)
= \delta\uw + \delta\vw, \
\delta w = \delta\uw - \delta\vw, \\
(\delta t, \delta \ut, \delta \vt) \in \Tlinmpec(t,\ut,\vt), \
(\delta \uw, \delta \vw) \in \Tcompl(\uw, \vw)
\end{defarray}
\or
\begin{defarray}[r@{\medspace}l]{\delta}
(\delta t, \delta \ut, \delta \vt) &\in \Tlinmpec(t,\ut,\vt), \\
\partial_1c_{\setI} \delta t + \partial_2c_{\setI} (\delta\ut + \delta\vt)
&= \delta\uw + \delta\vw, \\
\delta w &= \delta\uw - \delta\vw, \\
(\delta \uw, \delta \vw) &\in \Tcompl(\uw, \vw)
\end{defarray}
\fi
.
\end{equation*}
Here, all partial derivatives are evaluated at $(t,\ut+\vt)$.
\ifCmp
The associated homeomorphisms of \cref{le:hom-T-i},
\else
\end{lemma}
\begin{proof}
This follows from \cref{def:cones-i-mpec}.
\end{proof}
\begin{lemma}\label{le:hom-T-e}
Given \eqref{eq:e-anf} with counterpart \eqref{eq:e-mpec}.
Consider $(t, w, \zt, \zw) \in \Feabs$ and $(t, w, \ut, \vt, \uw, \vw) = \Inv{\_\phi}(t, w, \zt, \zw) \in \Fempec$.
Define
\fi
\begin{gather*}
\_\psi\: \Tempec(t, w, \ut, \vt, \uw, \vw) \to \Teabs(t, w, \zt, \zw), \\
\_\psi\: \Tlinempec(t, w, \ut, \vt, \uw, \vw) \to \Tlineabs(t, w, \zt, \zw),
\end{gather*}
\ifCmp now take the form\else as\fi
\begin{align*}
\_\psi(\delta t, \delta w, \delta \ut, \delta \vt, \delta \uw, \delta \vw) &= (\delta t, \delta w, \delta \ut - \delta \vt, \delta \uw - \delta \vw),\\
\Inv{\_\psi}(\delta t, \delta w, \delta \zt, \delta \zw) &= (\delta t, \delta w, \delpos\zt, \delneg\zt, \delpos\zw, \delneg\zw).
\end{align*}
\ifCmp\else
Then, both functions $\_\psi$ are homeomorphisms.
\end{lemma}
\begin{proof}
This follows directly from \cref{le:hom-T-i}.
\end{proof}
\begin{lemma}[Branch NLPs for \eqref{eq:e-mpec}]\label{def:branch-e-mpec}
\fi
Given $\^y = (\^t,\^w,\hut,\hvt,\huw,\hvw)$, a feasible point of \eqref{eq:e-mpec} with associated index sets $\setUt_+$, $\setVt_+$, $\setDt$, $\setUw_+$, $\setVw_+$, and $\setDw$,
choose $\setPt\subseteq\setDt$ as well as $\setPw\subseteq\setDw$ and set $\setPtw=\setPt\cup\setPw$.
The branch problem NLP($\setPtw$)
\ifCmp of \cref{def:branch-mpec} then \fi reads
\begin{align*}
\sminst[t,w,\ut,\vt,\uw, \vw]{f(t)}
& c_{\setE}(t,\ut+\vt) = 0, \quad
c_{\setI}(t,\ut+\vt) - (\uw+\vw) = 0, \\
& c_{\setZ}(t,\ut+\vt) - (\ut-\vt) = 0, \quad
w - (\uw-\vw) = 0, \\
& 0 = \ut_i,\ 0 \le \vt_i, \ i\in \setVt_+\cup\setPt, \\
& 0 \le \ut_i,\ 0 = \vt_i, \ i\in \setUt_+\cup\bar\setPt,
\tag{NLP($\setPtw$)}\label{eq:branch-e-mpec}\\
& 0 = \uw_i,\ 0 \le \vw_i, \ i\in \setVw_+\cup\setPw,\\
& 0 \le \uw_i,\ 0 = \vw_i, \ i\in \setUw_+\cup\bar\setPw.
\end{align*}
The feasible set of \eqref{eq:branch-e-mpec},
which always contains $\^y$, is denoted by $\Fep$ and is a lifting of $\Fp$.
\ifCmp\else
\end{lemma}
\begin{proof}
This follows from \cref{def:branch-mpec}.
\end{proof}
\fi
Again, the homeomorphism between feasible sets can be restricted to the \ifnum\FmtChoice=2 respective \fi branch problems \eqref{eq:branch-e-anf} and \eqref{eq:branch-e-mpec} where $\setPt=\{i\in\alpt(\^t)\colon \sigt_i=-1 \}$ and $\setPw=\{i\in\alpw(\^w)\colon \sigw_i=-1 \}$.
Thus, the mapping $\_\phi_{\setPtw}\: \Fep \to \Fesig$ given as
\begin{equation*}
\_\phi_{\setPtw}\define \_\phi\vert_{\Fep} \qtextq{and} \_\phi^{-1}_{\setPtw}\define \_\phi^{-1}\vert_{\Fesig}
\end{equation*}
is a homeomorphism.
\ifCmp\else
\begin{lemma}[\TCone and Linearized Cone for \eqref{eq:branch-e-mpec}]\label{def:cones-branch-e-mpec}
Consider a feasible point $y=(t,w,\ut,\vt,\uw,\vw)$ of \eqref{eq:branch-e-mpec}.
\fi
The \tcone to $\Fep$ at $y$ reads
\begin{equation*}
\Tep(y) =
\begin{defarray}{\delta}
\exists \tau_k \searrow 0,\ \Fep \ni (t_k,w_k,\ut_k,\vt_k,\uw_k,\vw_k)
\to (t,w,\ut,\vt,\uw,\vw){:} \\
\tau_k^{-1} (t_k - t, w_k-w, \ut_k - \ut, \vt_k - \vt, \uw_k - \uw, \vw_k - \vw) \to \delta
\end{defarray}
\end{equation*}
where
$\delta = (\delta t, \delta w, \delta\ut, \delta\vt, \delta\uw, \delta\vw)$.
The linearized cone reads
\begin{equation*}
\Tlinep(y) =
\begin{defarray}{\delta}
(\delta t,\delta \ut, \delta \vt)\in\Tlinp(t,\ut,\vt), \\
\partial_1 c_{\setI} \delta t +
\partial_2 c_{\setI} (\delta\ut + \delta\vt) = \delta\uw + \delta\vw, \
\delta w = \delta \uw - \delta \vw, \\
0 = \delta\uw_i \ \text{for} \ i\in\setVw_+\cup\setPw,\ 0 = \delta\vw_i \ \text{for} \ i\in\setUw_+\cup\bar\setPw,\\
0 \le \delta \uw_i \ \text{for} \ i\in \bar{\setPw},\ 0 \le \delta \vw_i \ \text{for} \ i\in\setPw
\end{defarray}
.
\end{equation*}
Here, all partial derivatives are evaluated at $(t, \ut + \vt)$.
\ifCmp
The associated cone homeomorphisms of \cref{le:hom-T-branch-i}
are now obtained as follows.
\else
\end{lemma}
\begin{proof}
This follows from \cref{def:cones-branch-mpec}.
\end{proof}
\begin{lemma}\label{le:hom-T-branch-e}
\fi
Given \eqref{eq:branch-e-anf} and \eqref{eq:branch-e-mpec} with $\setPt=\defset{ i\in\alpt(\^t)}{\sigt_i=-1}$ and $\setPw=\defset{ i\in\alpw(\^w)}{\sigw_i=-1}$,
consider $(t, w, \zt, \zw) \in \Fesig$ and $(t, w, \ut, \vt, \uw, \vw)=\_\phi^{-1}_{\setPtw}(t, w, \zt, \zw)$.
Define $\bar\psi_{\setPtw}\define \bar\psi\vert_{\Tep}$, $\bar\psi^{-1}_{\setPtw}\define \bar\psi^{-1}\vert_{\Tesig}$
and $\bar\psi_{\setPtw}\define \bar\psi\vert_{\Tlinep}$, $\bar\psi^{-1}_{\setPtw}\define \bar\psi^{-1}\vert_{\Tlinesig}$.
Then, we have \ifCmp\else homeomorphisms\fi
\begin{align*}
\bar\psi_{\setPtw}\: \Tep(t, w, \ut, \vt, \uw, \vw) \to \Tesig(t, w, \zt, \zw),\\
\bar\psi_{\setPtw}\: \Tlinep(t, w, \ut, \vt, \uw, \vw) \to \Tlinesig(t, w, \zt, \zw).
\end{align*}
\ifCmp\else
\end{lemma}
\begin{proof}
This follows directly from \cref{le:hom-T-branch-i}.
\end{proof}
\fi
By applying \ifCmp
\eqref{eq:decomp-cones-mpec} \else \cref{le:decomp-cones-mpec} \fi
to \eqref{eq:e-mpec} with associated branch problems \eqref{eq:branch-e-mpec},
we obtain the following decomposition of cones at $y = (t,w,\ut,\vt,\uw,\vw)$:
\begin{equation*}
\Tempec(y)=\bigcup_{\setPtw} \Tep(y) \qtextq{and}
\Tlinempec(y)=\bigcup_{\setPtw} \Tlinep(y).
\end{equation*}
Moreover, the \tcone is contained in the linearized cone
and the converse holds for the dual cones:
\begin{equation*}
\Tempec(y) \subseteq \Tlinempec(y) \qtextq{and}
\Tempec(y)^* \supseteq \Tlinempec(y)^*.
\end{equation*}
Once again, the converses do not hold in general and we consider
Abadie's and Guignard's Constraint Qualifications for \eqref{eq:e-mpec}
at $y = (t,w, \ut,\vt, \uw,\vw)$.
\ifCmp
Recalling \cref{def:mpec-acq}, MPCC-ACQ and MPCC-GCQ simply read
\begin{equation*}
\Tempec(y) = \Tlinempec(y)
\qtextq{and}
\Tempec(y)^* = \Tlinempec(y)^*
.
\end{equation*}
\else
\begin{lemma}[MPCC-ACQ for \eqref{eq:e-mpec}]
Given a feasible point $y$ of \eqref{eq:e-mpec},
MPCC-ACQ at $y$ reads $\Tempec(y)=\Tlinempec(y)$.
\end{lemma}
\begin{proof}
This follows from \cref{def:mpec-acq}.
\end{proof}
\begin{lemma}[MPCC-GCQ for \eqref{eq:e-mpec}]
Given a feasible point $y$ of \eqref{eq:e-mpec},
MPCC-GCQ at $y$ reads $\Tempec(y)^*=\Tlinempec(y)^*$.
\end{lemma}
\begin{proof}
This follows from \cref{def:mpec-gcq}.
\end{proof}
\fi
\begin{remark}
Let
\begin{equation*}
W(t,\ut,\vt)=\defset{(w,\uw,\vw)}{\abs{w}=c_{\setI}(t,\ut+\vt),\uw=[w]^+,\vw=[w]^-}.
\end{equation*}
Due to symmetry, the above equality of cones (respectively dual cones)
\ifnum\FmtChoice=2 clearly \fi holds for all elements $(w,\uw,\vw)\in W(t,\ut,\vt)$
if it holds for any element.
\end{remark}
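\begin{remark}
As a concrete instance, let $m_2 = 1$ and $c_{\setI}(t, \ut + \vt) = 5$ at the given point.
Then $W(t, \ut, \vt) = \{(5, 5, 0),\ (-5, 0, 5)\}$, and the two elements are related by the sign flip $w \mapsto -w$, which exchanges the roles of $\uw$ and $\vw$.
\end{remark}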
\ifCmp
Now \cref{th:branch-acq_mpec-acq} reads as follows.
\begin{theorem}[ACQ/GCQ for all \eqref{eq:branch-e-mpec}
implies MPCC-ACQ/MPCC-GCQ for \eqref{eq:e-mpec}]
\label{th:e-branch-acq_mpec-acq}
\label{th:e-branch-gcq_mpec-gcq}
Consider a feasible point $y = (t,w, \ut,\vt, \uw,\vw)$ of \eqref{eq:e-mpec}
with \ifnum\FmtChoice<2 associated \fi
branch problems \eqref{eq:branch-e-mpec}.
Then, MPCC-ACQ respectively MPCC-GCQ holds for \eqref{eq:e-mpec} at $y$
if ACQ respectively GCQ holds for all \eqref{eq:branch-e-mpec} at $y$.
\end{theorem}
\else
As with \eqref{eq:i-mpec}, ACQ or GCQ for all branch problems \eqref{eq:branch-e-mpec}
implies MPCC-ACQ or MPCC-GCQ for \eqref{eq:e-mpec}.
\begin{theorem}[ACQ for all \eqref{eq:branch-e-mpec} implies MPCC-ACQ for \eqref{eq:e-mpec}]\label{th:e-branch-acq_mpec-acq}
Consider a feasible point $y = (t,w, \ut,\vt, \uw,\vw)$ of \eqref{eq:e-mpec} with associated branch problems \eqref{eq:branch-e-mpec}.
Then, MPCC-ACQ holds for \eqref{eq:e-mpec} at $y$ if ACQ holds for all \eqref{eq:branch-e-mpec} at $y$.
\end{theorem}
\begin{proof}
This follows from \cref{th:branch-acq_mpec-acq}.
\end{proof}
\begin{theorem}[GCQ for all \eqref{eq:branch-e-mpec} implies MPCC-GCQ for \eqref{eq:e-mpec}]\label{th:e-branch-gcq_mpec-gcq}
Consider a feasible point $y = (t,w, \ut,\vt, \uw,\vw)$ of \eqref{eq:e-mpec} with associated branch problems \eqref{eq:branch-e-mpec}.
Then, MPCC-GCQ holds for \eqref{eq:e-mpec} at $y$ if GCQ holds for all \eqref{eq:branch-e-mpec} at $y$.
\end{theorem}
\begin{proof}
This follows from \cref{th:branch-gcq_mpec-gcq}.
\end{proof}
\fi
\subsection{Relations of MPCC-CQs for Different Formulations}
In this paragraph we prove relations between constraint qualifications for the two different formulations \eqref{eq:i-mpec} and \eqref{eq:e-mpec}.
Some relations follow from the results in the previous section and in the two following sections.
\begin{theorem}\label{th:acq}
MPCC-ACQ for \eqref{eq:i-mpec} holds at $(t,\ut,\vt)\in\Fmpec$ if and only if
MPCC-ACQ for \eqref{eq:e-mpec} holds at $(t,w,\ut,\vt,\uw,\vw)\in\Fempec$
for any (and hence all) $(w, \uw, \vw) \in W(t, \ut, \vt)$.
\end{theorem}
\begin{proof}
This follows immediately from \cref{th:akq}, \cref{th:akq-acq-i} and \cref{th:acq-akq-e}.
\end{proof}
\begin{theorem}\label{th:gcq}
MPCC-GCQ for \eqref{eq:i-mpec} holds at $(t,\ut,\vt)\in\Fmpec$ if
MPCC-GCQ for \eqref{eq:e-mpec} holds at $(t,w,\ut,\vt,\uw,\vw)\in\Fempec$
for any (and hence all) $(w, \uw, \vw) \in W(t, \ut, \vt)$.
\end{theorem}
\begin{proof}
The inclusion $\Tmpec(t, \ut, \vt)^* \supseteq \Tlinmpec(t, \ut, \vt)^*$ always holds.
Thus, it remains to show that
\begin{equation*}
\Tmpec(t,\ut, \vt)^* \subseteq \Tlinmpec(t, \ut, \vt)^*.
\end{equation*}
Let $\omega=(\omega t,\omega \ut, \omega \vt)\in \Tmpec(t, \ut, \vt)^*$,
i.e. $\omega^T\delta\ge 0$ for all $\delta=(\delta t,\delta \ut,\delta \vt)\in \Tmpec(t, \ut, \vt)$.
Then, \ifnum\FmtChoice=0 define \else let \fi $\tilde{\omega}=(\omega t,0,\omega \ut, \omega \vt,0,0)$ to obtain $\tilde{\omega}^T\tilde \delta=\omega^T\delta\ge 0$ for all $\tilde \delta\in \Tempec(t,w, \ut, \vt,\uw,\vw)$ where $(w,\uw,\vw)\in W(t,\ut,\vt)$ is arbitrary.
By assumption, we have $\tilde{\omega}^T\tilde \delta\ge 0$ for all $\tilde \delta\in \Tlinempec(t,w, \ut, \vt,\uw,\vw)$
which implies $\omega^T\delta=\tilde{\omega}^T\tilde \delta\ge 0$ for all $\delta\in \Tlinmpec(t, \ut, \vt)$.
\end{proof}
The converse of the previous theorem is unlikely to hold, but we are not aware of a counterexample.
However, equivalence of ACQ or GCQ for corresponding branch problems holds.
\begin{theorem}\label{th:mpec-branch-acq}
ACQ for \eqref{eq:branch-mpec} holds at $(t,\ut,\vt)\in\Fp$ if and only if
ACQ for \eqref{eq:branch-e-mpec} holds at $(t,w,\ut,\vt,\uw,\vw)\in\Fep$
for any (and hence all) $(w, \uw, \vw) \in W(t, \ut, \vt)$.
\end{theorem}
\begin{proof}
This follows immediately from \cref{th:branch-acq}, \cref{th:branch-acq-i} and \cref{th:branch-acq-e}.
\end{proof}
\begin{theorem}\label{th:mpec-branch-gcq}
GCQ for \eqref{eq:branch-mpec} holds at $(t,\ut,\vt)\in\Fp$ if and only if
GCQ for \eqref{eq:branch-e-mpec} holds at $(t,w,\ut,\vt,\uw,\vw)\in\Fep$
for any (and hence all) $(w, \uw, \vw) \in W(t, \ut, \vt)$.
\end{theorem}
\begin{proof}
This follows immediately from \cref{th:branch-gcq}, \cref{th:branch-gcq-i} and \cref{th:branch-gcq-e}.
\end{proof}
\section{Kink Qualifications and MPCC Constraint Qualifications}
\label{sec:qualifications}
In this section we show relations between abs-normal NLPs and counterpart MPCCs.
Here, we consider both treatments of inequality constraints.
\subsection{Relations of General Abs-Normal NLP and MPCC}
In the following the variables $x$ and $z$ instead of $t$ and $\zt$ are used.
Thus, the abs-normal NLP \eqref{eq:i-anf} reads:
\begin{equation*}
\minst[x,z]{f(x)}
c_{\setE}(x,\abs{z})=0,\quad
c_{\setI}(x,\abs{z}) \ge 0,\quad
c_{\setZ}(x,\abs{z})-z=0.
\end{equation*}
The counterpart MPCC \eqref{eq:i-mpec} becomes:
\begin{align*}
\sminst[x,u,v]{f(x)}
& c_{\setE}(x,u+v)=0, \quad
c_{\setI}(x,u+v) \ge 0,\\
& c_{\setZ}(x,u+v)-(u-v)=0, \quad
0 \le u \ensuremath\perp v \ge 0.
\end{align*}
Then, the subsequent relations of kink qualifications and MPCC constraint qualifications can be shown.
\begin{theorem}[AKQ for \eqref{eq:i-anf} $\iff$ MPCC-ACQ for \eqref{eq:i-mpec}]\label{th:akq-acq-i}
AKQ for \eqref{eq:i-anf} holds at $(x,z(x))\in\Fabs$ if and only if MPCC-ACQ for \eqref{eq:i-mpec} holds at
$(x,u,v)=(x,[z(x)]^+,[z(x)]^-)\in\Fmpec$.
\end{theorem}
\begin{proof}
\ifcase0
We need to show
\begin{equation*}
\Tabs(x, z) = \Tlinabs(x, z) \iff \Tmpec(x, u, v) = \Tlinmpec(x, u, v).
\end{equation*}
This is obvious from the homeomorphisms $\psi$ in \cref{le:hom-T-i}.
\or
By \cref{le:i-cones} and \cref{le:i-cones-mpec} we always have
\begin{equation*}
\Tabs(x,z)\subseteq\Tlinabs(x,z) \qtextq{and} \Tmpec(x,u,v)\subseteq\Tlinmpec(x,u,v).
\end{equation*}
Thus, we just need to prove
\begin{align*}
\Tabs(x,z)=\Tlinabs(x,z) & \implies \Tmpec(x,u,v)\supseteq\Tlinmpec(x,u,v),\\
\Tmpec(x,u,v)=\Tlinmpec(x,u,v) & \implies \Tabs(x,z)\supseteq\Tlinabs(x,z).
\end{align*}
We start with the implication ``$\Rightarrow$'' and consider $\delta=(\delta x, \delta u,\delta v)\in\Tlinmpec(x,u,v)$.
Then, we set
\begin{equation*}
\delta z_i \define \delta u_i - \delta v_i =
\begin{ccases}
+\delta u_i, & i \in \setU_+ \\
-\delta v_i, & i \in \setV_+ \\
\delta u_i - \delta v_i, & i \in \setD
\end{ccases}
\qtextq{with}
0 \le \delta u_i \ensuremath\perp \delta v_i \ge 0 \text{ for } i \in \setD,
\end{equation*}
and obtain $\tilde\delta=(\delta x,\delta z)\in\Tlinabs(x,z)$.
This is because
\begin{equation*}
\delta u_i + \delta v_i =
\begin{ccases}
\delta u_i, & i \in \setU_+ \\
\delta v_i, & i \in \setV_+ \\
\delta u_i + \delta v_i, & i \in \setD
\end{ccases}
=
\begin{ccases}
\sigma_i\delta z_i, & \sigma_i = +1 \\
\sigma_i\delta z_i, & \sigma_i = -1 \\
\abs{\delta z_i}, & \sigma_i = 0
\end{ccases}
= \delta\zeta \qtextq{for} \sigma = \sigma(x).
\end{equation*}
By assumption, $\tilde\delta\in\Tabs(x,z)$ and there exist sequences $\Fabs\ni(x_k,z_k)\to (x,z)$
and $\tau_k\searrow 0$ such that $\tau_k^{-1}(x_k-x,z_k-z)\to (\delta x,\delta z)$.
We set $u_k=[z_k]^+$ and $v_k=[z_k]^-$ and have $\tau_k^{-1}((u_k-u)-(v_k-v))\to \delta u-\delta v$.
Thus, it remains to show $\tau_k^{-1}(u_k-u,v_k-v)\to(\delta u,\delta v)$.
We show this componentwise:
\begin{itemize}
\item $i\in \setU_+$:
we have $v_i=0$ by feasibility and $\delta v_i=0$ by definition of $\Tlinmpec$.
Thus, $(u_k)_i>0$ holds for $k$ large enough and by complementarity $(v_k)_i=0$ holds.
Then, $\tau_k^{-1}((u_k)_i-u_i)\to {\delta u}_i$ follows.
\item $i\in\setV_+$: $\tau_k^{-1}((v_k)_i-v_i)\to {\delta v}_i$ follows as in the previous case.
\item $i\in\setD$ and $\delta u_i>0$:
we have $\delta v_i=0$ by complementarity and so $\tau_k^{-1}((u_k)_i-(v_k)_i)\to \delta u_i$.
Then, $\tau_k^{-1}(u_k)_i\to \delta u_i$ and $\tau_k^{-1}(v_k)_i\to 0$ because of sign constraints.
\item $i\in\setD$ and $\delta v_i>0$:
$\tau_k^{-1}(u_k)_i\to 0$ and $\tau_k^{-1}(v_k)_i\to \delta v_i$ follow as in the previous case.
\item $i\in\setD$ and ${\delta u}_i={\delta v}_i=0$: we have
$\tau_k^{-1}((u_k)_i-(v_k)_i)\to0$. Because of sign constraints and complementarity, this can only hold if
$\tau_k^{-1}(u_k)_i\to 0$ and $\tau_k^{-1}(v_k)_i\to0$.
\end{itemize}
Altogether, $\tau_k^{-1}(u_k-u,v_k-v)\to(\delta u,\delta v)$ holds, i.e. $\delta\in\Tmpec(x,u,v)$.\\
Now, we prove the implication ``$\Leftarrow$''. To this end, consider $\delta=(\delta x,\delta z)\in \Tlinabs(x,z)$.
Then, set
\begin{equation*}
\delta u_i \define
\begin{ccases}
\delta z_i, & \sigma_i = +1 \\
0, & \sigma_i = -1 \\
{}[\delta z_i]^+, & \sigma_i = 0
\end{ccases}
\qtextq{and}
\delta v_i \define
\begin{ccases}
0, & \sigma_i = +1 \\
-\delta z_i, & \sigma_i = -1 \\
{}[\delta z_i]^-, & \sigma_i = 0
\end{ccases}
\end{equation*}
to obtain $\delta z = \delta u - \delta v$ and $\delta \zeta= \delta u + \delta v$.
Thus, $\tilde\delta=(\delta x,\delta u,\delta v)\in\Tlinmpec(x,u,v)$ and by assumption $\tilde\delta\in\Tmpec(x,u,v)$.
Thus, there exist $\Fmpec\ni(x_k,u_k,v_k)\to(x,u,v)$ and $\tau_k\searrow 0$ with
$\tau_k^{-1}(x_k-x,u_k-u,v_k-v)\to(\delta x,\delta u, \delta v)$.
With $z_k=u_k-v_k$ we have $z_k\to z=u-v$ and $\tau_k^{-1}(z_k-z)\to \delta u-\delta v =\delta z$,
i.e. $\delta\in\Tabs(x,z)$.
\fi
\end{proof}
\begin{theorem}[MPCC-GCQ for \eqref{eq:i-mpec} implies GKQ for \eqref{eq:i-anf}]\label{th:gkq-gcq-i}
GKQ for \eqref{eq:i-anf} holds at $(x,z(x))\in\Fabs$ if MPCC-GCQ for \eqref{eq:i-mpec} holds at
$(x,u,v)=(x,[z(x)]^+,[z(x)]^-)\in\Fmpec$.
\end{theorem}
\begin{proof}
The inclusion $\Tlinabs(x,z)^*\subseteq\Tabs(x,z)^*$ always holds by \cref{le:i-cones}.
Thus, we just have to show
\begin{equation*}
\Tabs(x, z)^* \subseteq \Tlinabs(x, z)^*.
\end{equation*}
Consider $\omega=(\omega x,\omega z)\in\Tabs(x,z)^*$, i.e. $\omega^T\delta \ge 0$ for all $\delta=(\delta x,\delta z)\in\Tabs(x,z)$.
Set $\~\omega = (\omega x, \omega z, -\omega z)$.
For every $\delta \in \Tabs(x, z)$ we then have
\begin{equation*}
\~\omega^T \Inv\psi(\delta) =
\omega x^T \delta x + \omega z^T \delpos z - \omega z^T \delneg z =
\omega x^T \delta x + \omega z^T \delta z =
\omega^T \delta \ge 0.
\end{equation*}
This means $\~\omega \in \Tmpec(x, u, v)^*$ and hence,
by assumption, $\~\omega \in \Tlinmpec(x, u, v)^*$.
We thus have $\omega^T \delta = \~\omega^T \Inv\psi(\delta) \ge 0$
for every $\delta \in \Tlinabs(x, z)$,
which means $\omega\in\Tlinabs(x,z)^*$.
\end{proof}
The converse is unlikely to hold, although we are not, at this time, aware of a counterexample.
Once again, moving to the branch problems allows us to exploit additional sign information.
\begin{theorem}[ACQ for \eqref{eq:branch-anf} $\iff$ ACQ for \eqref{eq:branch-mpec}]\label{th:branch-acq-i}
ACQ for \eqref{eq:branch-anf} holds at $(x,z(x))\in\Fsig$ if and only if ACQ for the corresponding \eqref{eq:branch-mpec} holds at
$(x,u,v)=(x,[z(x)]^+,[z(x)]^-)\in\Fp$.
\end{theorem}
\begin{proof}
We need to show
\begin{equation*}
\Tsig(x, z) = \Tlinsig(x, z) \iff \Tp(x, u, v) = \Tlinp(x, u, v).
\end{equation*}
This is obvious from the homeomorphisms $\psi_{\setP}$ in \cref{le:hom-T-branch-i}.
\end{proof}
\begin{theorem}[GCQ for \eqref{eq:branch-anf} $\iff$ GCQ for \eqref{eq:branch-mpec}]\label{th:branch-gcq-i}
GCQ for \eqref{eq:branch-anf} holds at $(x,z(x))\in\Fsig$ if and only if GCQ for the corresponding \eqref{eq:branch-mpec} holds at
$(x,u,v)=(x,[z(x)]^+,[z(x)]^-)\in\Fp$.
\end{theorem}
\begin{proof}
The inclusions $\Tlinp(x,u,v)^*\subseteq\Tp(x,u,v)^*$ and $\Tlinsig(x,z)^*\subseteq\Tsig(x,z)^*$ always hold.
Thus, we just have to show
\begin{equation*}
\Tsig(x, z)^* \subseteq \Tlinsig(x, z)^* \iff \Tp(x,u,v)^* \subseteq \Tlinp(x,u,v)^*.
\end{equation*}
First, we show the implication ``$\Rightarrow$''.
Consider $\omega=(\omega x,\omega u,\omega v)\in\Tp(x,u,v)^*$, i.e. $\omega^T\delta \ge 0$ for all $\delta=(\delta x,\delta u,\delta v)\in\Tp(x,u,v)$.
Set $\~\omega = (\omega x, \omega z)$ with
\begin{equation*}
\omega z_i=
\begin{cases}
+\omega u_i, & i\in\setU_+\cup \bar\setP,\\
-\omega v_i, & i\in\setV_+\cup \setP.
\end{cases}
\end{equation*}
This leads to
\begin{equation*}
\~\omega^T \psi_{\setP}(\delta) =
\omega x^T \delta x + \omega z^T (\delta u - \delta v) =
\omega x^T \delta x + \omega u^T \delta u + \omega v^T \delta v =
\omega^T \delta \ge 0
\end{equation*}
for every $\delta \in \Tp(x, u, v)$, i.e. $\~\omega \in \Tsig(x, z)^*$.
Then, the assumption yields $\~\omega \in \Tlinsig(x, z)^*$. As we have $\omega^T \delta = \~\omega^T \psi_{\setP}(\delta) \ge 0$
for every $\delta \in \Tlinp(x, u,v)$, we obtain $\omega\in\Tlinp(x,u,v)^*$.
The reverse implication follows as in \cref{th:gkq-gcq-i}.
\end{proof}
\subsection{Relations of Abs-Normal NLP and MPCC with Inequality Slacks}
Now, the relations for the slack reformulations are stated.
These are special cases of the general problem formulations,
hence we \ifCmp obtain the following four theorems that correspond
to \crefrange{th:akq-acq-i}{th:branch-gcq-i}.\else
simply cite the previous proofs.\fi
\begin{theorem}[AKQ for \eqref{eq:e-anf} $\iff$ MPCC-ACQ for \eqref{eq:e-mpec}] \label{th:acq-akq-e}
AKQ for \eqref{eq:e-anf} holds at $(x,z(x)) \in \Feabs$
if and only if MPCC-ACQ for \eqref{eq:e-mpec} holds at
$(x, u, v)=(x,[z(x)]^+,[z(x)]^-) \in \Fempec$.
\end{theorem}
\ifCmp\else
\begin{proof}
This follows as in the proof of \cref{th:akq-acq-i}.
\end{proof}
\fi
\begin{theorem}[MPCC-GCQ for \eqref{eq:e-mpec} implies GKQ for \eqref{eq:e-anf}]
\label{th:gcq-gkq-e}
GKQ for \eqref{eq:e-anf} holds at $(x,z(x)) \in \Feabs$
if MPCC-GCQ for \eqref{eq:e-mpec} holds at
$(x, u, v)=(x,[z(x)]^+, [z(x)]^-)\in \Fempec$.
\end{theorem}
\ifCmp\else
\begin{proof}
This follows as in the proof of \cref{th:gkq-gcq-i}.
\end{proof}
\fi
The converse is unlikely to hold, but to date we are not aware of a counterexample.
\begin{theorem}[ACQ for \eqref{eq:branch-e-anf} $\iff$ ACQ for \eqref{eq:branch-e-mpec}]\label{th:branch-acq-e}
ACQ for \eqref{eq:branch-e-anf} at $(x,z(x))\in\Fesig$ is equivalent to ACQ for the corresponding \eqref{eq:branch-e-mpec} at
$(x,u,v)=(x,[z(x)]^+,[z(x)]^-)\in\Fep$.
\end{theorem}
\ifCmp\else
\begin{proof}
This follows as in the proof of \cref{th:branch-acq-i}.
\end{proof}
\fi
\begin{theorem}[GCQ for \eqref{eq:branch-e-anf} $\iff$ GCQ for \eqref{eq:branch-e-mpec}]\label{th:branch-gcq-e}
GCQ for \eqref{eq:branch-e-anf} at $(x,z(x))\in\Fesig$ is equivalent to GCQ for
\ifnum\FmtChoice=1\else the corresponding \fi \eqref{eq:branch-e-mpec} at
$(x,u,v)=(x,[z(x)]^+,[z(x)]^-)\in\Fep$.
\end{theorem}
\ifCmp\else
\begin{proof}
This follows as in the proof of \cref{th:branch-gcq-i}.
\end{proof}
\fi
All the relations discussed in \crefrange{sec:anf-formulations}{sec:qualifications}
are illustrated in \cref{fig:relations}.
In the inner square (containing \eqref{eq:i-anf} and \eqref{eq:e-anf} as well as the counterpart MPCCs \eqref{eq:i-mpec} and \eqref{eq:e-mpec}) there are four single-headed arrows,
which indicate that only one direction has been proved
and we do not know if the converses hold as well.
Therefore we have considered the branch problems shown on the outer left and right of the figure.
Since ACQ respectively GCQ for all branch problems imply the corresponding
kink qualification or MPCC-constraint qualification,
there are further single-headed arrows pointing to the inner square.
Results that follow directly from other equivalences have arrows with the label (implied).
\ifnum\FmtChoice=1
\amssidewaysfigure
\else
\sidewaysfigure
\fi
\begin{center}{
\begin{tikzpicture}[ampersand replacement=\&]
\matrix (m) [matrix of math nodes,nodes={draw,anchor=center,text centered,align=center,text width=3.3cm,rounded corners,minimum width=2.0cm, minimum height=1.5cm},column sep=6.5pc,row sep=10em]
{
|(A)| {\begin{array}{c} \textup{Branch-Problems} \\ \textup{Reformulation} \\ \textup{\eqref{eq:branch-e-anf}} \end{array}} \&
|(B)| {\begin{array}{c} \textup{Reformulation} \\ \textup{\eqref{eq:e-anf}} \end{array}} \&
|(C)| {\begin{array}{c} \textup{Reformulation} \\ \textup{\eqref{eq:e-mpec}} \end{array}} \&
|(D)| {\begin{array}{c} \textup{Branch-Problems} \\ \textup{Reformulation} \\ \textup{\eqref{eq:branch-e-mpec}} \end{array}} \\
|(E)| {\begin{array}{c} \textup{Branch-Problems} \\ \textup{\eqref{eq:branch-anf}} \end{array}} \&
|(F)| {\begin{array}{c} \textup{Abs-normal NLP} \\ \textup{\eqref{eq:i-anf}} \end{array}} \&
|(G)| {\begin{array}{c} \textup{Counterpart} \\ \textup{MPCC} \\ \textup{\eqref{eq:i-mpec}} \end{array}} \&
|(H)| {\begin{array}{c} \textup{Branch-Problems} \\ \textup{\eqref{eq:branch-mpec}} \end{array}} \\
};
\begin{scope}[>={LaTeX[width=2mm,length=2mm]},->,line width=.5pt]
\tikzstyle{every node}=[font=\footnotesize]
\path[dashed] ([xshift=2em]A.north) edge[bend left=20,<->] node[above] {\cref{th:branch-gcq-e}} ([xshift=-2em]D.north);
\path[] ([xshift=-2em]A.north) edge[bend left=25,<->] node[above] {\cref{th:branch-acq-e}} ([xshift=2em]D.north);
\path[] ([yshift=1em]A.east) edge[->] node[below]{\cref{th:e-branch-acq_akq}}([yshift=1em]B.west);
\path[dashed] ([yshift=-1em]A.east) edge[->] node[below] {\cref{th:e-branch-gcq_gkq}} ([yshift=-1em]B.west);
\path[] ([yshift=1em]B.east) edge[<->] node[below]{\cref{th:acq-akq-e}}([yshift=1em]C.west);
\path[dashed] ([yshift=-1em]B.east) edge[<-] node[below] {\cref{th:gcq-gkq-e}} ([yshift=-1em]C.west);
\path[] ([yshift=1em]C.east) edge[<-] node[below]{\cref{th:e-branch-acq_mpec-acq}}([yshift=1em]D.west);
\path[dashed] ([yshift=-1em]C.east) edge[<-] node[below] {\cref{th:e-branch-gcq_mpec-gcq}} ([yshift=-1em]D.west);
\path[] ([xshift=-2em]A.south) edge[<->] node[above,rotate=-90]{\cref{th:branch-acq}}([xshift=-2em]E.north);
\path[dashed] ([xshift=2em]A.south) edge[<->] node[above,rotate=-90]{\cref{th:branch-gcq}}([xshift=2em]E.north);
\path[] ([xshift=-2em]B.south) edge[<->] node[above,rotate=-90]{\cref{th:akq}}([xshift=-2em]F.north);
\path[dashed] ([xshift=2em]B.south) edge[->] node[above,rotate=-90]{\cref{th:gkq}}([xshift=2em]F.north);
\path[dashed] ([xshift=-2em]C.south) edge[->] node[above,rotate=90]{\cref{th:gcq}}([xshift=-2em]G.north);
\path[] ([xshift=2em]C.south) edge[<->] node[above,rotate=90]{(implied)}([xshift=2em]G.north);
\path[dashed] ([xshift=-2em]D.south) edge[<->] node[above,rotate=90]{(implied)}([xshift=-2em]H.north);
\path[] ([xshift=2em]D.south) edge[<->] node[above,rotate=90]{(implied)}([xshift=2em]H.north);
\path[dashed] ([yshift=1em]E.east) edge[->] node[below]{\cref{th:branch-gcq_gkq}}([yshift=1em]F.west);
\path[] ([yshift=-1em]E.east) edge[->] node[below] {\cref{th:branch-acq_akq}} ([yshift=-1em]F.west);
\path[dashed] ([yshift=1em]F.east) edge[<-] node[above] {\cref{th:gkq-gcq-i}} ([yshift=1em]G.west);
\path[] ([yshift=-1em]F.east) edge[<->] node[above] {\cref{th:akq-acq-i}} ([yshift=-1em]G.west);
\path[dashed] ([yshift=1em]G.east) edge[<-] node[below]{\cref{th:branch-gcq_mpec-gcq}}([yshift=1em]H.west);
\path[] ([yshift=-1em]G.east) edge[<-] node[below] {\cref{th:branch-acq_mpec-acq}} ([yshift=-1em]H.west);
\path[dashed] ([xshift=2em]E.south) edge[bend right=20,<->] node[above] {\cref{th:branch-gcq-i}} ([xshift=-2em]H.south);
\path[] ([xshift=-2em]E.south) edge[bend right=25,<->] node[above] {\cref{th:branch-acq-i}} ([xshift=2em]H.south);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Solid arrows: relations between AKQ and MPCC-ACQ;
dashed arrows: relations between GKQ and MPCC-GCQ. Note that in \cref{th:gkq}, \cref{th:gcq}, \cref{th:gkq-gcq-i} and \cref{th:gcq-gkq-e} we have only proved one-sided implications and it is open whether the reverse implications hold.}
\label{fig:relations}
\ifnum\FmtChoice=1
\endamssidewaysfigure
\else
\endsidewaysfigure
\fi
\section{First Order Stationarity Concepts}
\label{sec:stationarity}
In this section, we introduce definitions of Mordukhovich stationarity
and Bouligand stationarity for abs-normal NLPs
and compare these definitions to M-stationarity and B-stationarity for MPCCs.
We give proofs based on the general formulation.
\subsection{Mordukhovich Stationarity}
In this paragraph we take a closer look at M-stationarity \cite{Outrata1999},
which is a necessary optimality condition for MPCCs under MPCC-ACQ \cite{FlegelKanzow2006}.
\begin{definition}[M-Stationarity for \eqref{eq:i-mpec}, see \cite{Outrata1999}]\label{def:m-stat}
Consider a feasible point $(x^*,u^*,v^*)$ of \eqref{eq:i-mpec}
with associated index sets $\setU_+$, $\setV_+$ and $\setD$.
It is an \emph{M-stationary} point
if there exist multipliers $\lambda = (\lame,\lami,\lamz)$ and
$\mu=(\muu,\muv)$ such that the following conditions are satisfied:
\begin{subequations}\label{eq:m-stat}
\begin{align}
\partial_{x,u,v} \Lc(x^*,u^*,v^*,\lambda,\mu) &= 0,\label{eq:m-stat-a}\\
((\muu)_i > 0,\ (\muv)_i > 0) \ \vee \ (\muu)_i(\muv)_i &= 0, \ i\in \setD,\label{eq:m-stat-b}\\
(\muu)_i &= 0, \ i\in \setU_+,\label{eq:m-stat-c}\\
(\muv)_i &= 0, \ i\in \setV_+,\label{eq:m-stat-d}\\
\lami &\ge 0,\label{eq:m-stat-e}\\
\lami^T c_{\setI}(x^*,u^*,v^*) &= 0\label{eq:m-stat-f}.
\end{align}
\end{subequations}
Herein, $\Lc$ is the MPCC-Lagrangian function
\begin{align*}
\Lc(x,u,v,\lambda,\mu)\define f(x)
&+\lame^Tc_{\setE}(x,u+v) - \lami^Tc_{\setI}(x,u+v)\\
&+\lamz^T[c_{\setZ}(x,u+v) - (u-v)] - \muu^T \ut - \muv^T \vt.
\end{align*}
\end{definition}
\ifCmp
Local minimizers of \eqref{eq:i-mpec}
are M-stationary points under MPCC-ACQ, as shown in \cite{FlegelKanzow2006,FlegelDiss}.
\begin{minipage}{0.74\textwidth}
The name M-stationarity was introduced by Scholtes in \cite{Scholtes_2001} and was motivated by the fact that
the sign restrictions on the multipliers in \eqref{eq:m-stat} in fact model the Mordukhovich normal cone.
The inset on the right illustrates the feasible set of multiplier values for a pair $((\mu_{{u}})_i,(\mu_{{v}})_i)$, $i\in\mathcal D$.
M-stationarity is a weaker stationarity concept than strong stationarity, but is at the same time the strongest
necessary optimality condition known to hold in the absence of a strong constraint qualification like MPCC-MFCQ.
\end{minipage}
\hspace*{.5cm}\begin{minipage}{0.25\textwidth}
\begin{tikzpicture}
\draw[fill=black] (0,0)--(1,0)--(1,1)--(0,1)--(0,0);
\draw[<->] (1.5,0) node[right]{$(\mu_{{u}})_i$} --(0,0)--(0,1.5) node[above]{$(\mu_{{v}})_i$};
\draw[very thick] (-1,0)--(0,0)--(0,-1);
\end{tikzpicture}
\end{minipage}
\else
\begin{theorem}\label{th:mpec-local-min-m-stat}
Under MPCC-ACQ all local minimizers of \eqref{eq:i-mpec} are M-stationary points.
\end{theorem}
\begin{proof}
A short and direct proof is found in \cite{FlegelKanzow2006}.
\end{proof}
\fi
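For a single degenerate index $i\in\setD$, the disjunctive sign restriction \eqref{eq:m-stat-b} is easy to check mechanically. The following minimal sketch (the function name is ours, purely illustrative) encodes it and probes a few multiplier pairs; the feasible set is the open first quadrant together with both full coordinate axes, which is exactly the Mordukhovich normal cone picture.

```python
# Disjunctive multiplier condition from (eq:m-stat-b), checked for one
# degenerate index i: either both multipliers are strictly positive,
# or their product vanishes.  Illustrative helper, not from the paper.
def m_stat_sign_ok(mu_u: float, mu_v: float) -> bool:
    return (mu_u > 0 and mu_v > 0) or mu_u * mu_v == 0

assert m_stat_sign_ok(1.0, 2.0)       # both strictly positive
assert m_stat_sign_ok(0.0, -3.0)      # product is zero (a coordinate axis)
assert m_stat_sign_ok(-2.0, 0.0)      # product is zero
assert not m_stat_sign_ok(1.0, -1.0)  # mixed signs, product nonzero
```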
\begin{definition}[M-Stationarity for \eqref{eq:i-anf}]\label{def:m-stat-anf}
Consider a feasible point $(x^*,z^*)$ of \eqref{eq:i-anf}.
It is an \emph{M-stationary} point
if there exist multipliers $\lambda = (\lame,\lami,\lamz)$
such that the following conditions are satisfied:
\begin{subequations}\label{eq:m-stat-anf}
\begin{align}
f'(x^*) +
\lame^T \partial_1 c_{\setE} -
\lami^T \partial_1 c_{\setI} +
\lamz^T \partial_1 c_{\setZ} &= 0,\label{m-stat-anf-a}\\
[\lame^T \partial_2 c_{\setE} -
\lami^T \partial_2 c_{\setI} +
\lamz^T \partial_2 c_{\setZ}]_i &= (\lamz)_i\sigma^*_i, \ i\notin\alpha(x^*),\label{m-stat-anf-b}\\
(\mu_i^-)(\mu_i^+)=0 \ \vee \ [\lame^T \partial_2 c_{\setE} -
\lami^T \partial_2 c_{\setI} +
\lamz^T \partial_2 c_{\setZ}]_i &> \abs{(\lamz)_i}, \ i\in\alpha(x^*),\label{m-stat-anf-c}\\
\lami &\ge 0, \label{m-stat-anf-d}\\
\lami^T c_{\setI} &= 0\label{m-stat-anf-e}.
\end{align}
\end{subequations}
Here we use the notation
\begin{align*}
\mu^+_i&\define\left[\lame^T \partial_2 c_{\setE} - \lami^T \partial_2 c_{\setI} + \lamz^T [\partial_2 c_{\setZ}-I]\right]_i,\\
\mu^-_i&\define\left[\lame^T \partial_2 c_{\setE} - \lami^T \partial_2 c_{\setI} + \lamz^T [\partial_2 c_{\setZ}+I]\right]_i,
\end{align*}
and the constraints and the partial derivatives are evaluated
at $(x^*,\abs{z^*})$.
\end{definition}
\begin{theorem}[M-Stationarity for \eqref{eq:i-mpec} is M-Stationarity for \eqref{eq:i-anf}]\label{thm:m-stat-is-m-stat}
A feasible point $(x^*,z^*)$ of \eqref{eq:i-anf} is M-stationary if and only if
$(x^*,u^*,v^*)=(x^*,[z^*]^+,[z^*]^-)$ of \eqref{eq:i-mpec} is M-stationary.
\end{theorem}
\begin{proof}
For indices that satisfy the first condition in \eqref{eq:m-stat-b},
the equivalence with the second condition in \eqref{m-stat-anf-c}
was shown in \cite[Theorem 33]{Hegerhorst_et_al:2019:MPEC2}.
Thus, we just need to consider the alternative conditions.
For \eqref{eq:i-mpec} we have the relations
\begin{align*}
\left[\lame^T \partial_2 c_{\setE} - \lami^T \partial_2 c_{\setI} + \lamz^T [\partial_2 c_{\setZ} -I]\right]_i &= (\muu)_i,
\ i\in \setD, \\
\left[\lame^T \partial_2 c_{\setE} - \lami^T \partial_2 c_{\setI} + \lamz^T [\partial_2 c_{\setZ} +I]\right]_i &= (\muv)_i,
\ i\in \setD,
\end{align*}
which was also shown in \cite[Theorem 33]{Hegerhorst_et_al:2019:MPEC2}.
These are exactly the definitions of $\mu_i^+$ and $\mu_i^-$ in the definition of
M-Stationarity for \eqref{eq:i-anf}.
\end{proof}
Consequently, we may now rephrase the result by \cite{FlegelKanzow2006,FlegelDiss} in the language of abs-normal forms.
\begin{theorem}[Minimizers and M-Stationarity for \eqref{eq:i-anf}]\label{th:local-min-m-stat}
Assume that $(x^*,z^*)$ is a local minimizer of \eqref{eq:i-anf} and that AKQ holds at $x^*$.
Then, $(x^*,z^*)$ is M-stationary for \eqref{eq:i-anf}.
\end{theorem}
\begin{proof}
First, note that $(x^*,z^*)$ is a local minimizer of \eqref{eq:i-anf} if and only if
$(x^*,u^*,v^*)=(x^*,[z^*]^+,[z^*]^-)$ is a local minimizer of \eqref{eq:i-mpec}.
Then, the point $(x^*,u^*,v^*)$ is a local minimizer of the counterpart MPCC,
and MPCC-ACQ holds by \cref{th:akq-acq-i}.
\ifCmp Thus, \else Now, \cref{th:mpec-local-min-m-stat} implies that \fi
$(x^*,u^*,v^*)$ is M-stationary for \eqref{eq:i-mpec}
and \cref{thm:m-stat-is-m-stat} implies
that $(x^*,z^*)$ is M-stationary for \eqref{eq:i-anf}.
\end{proof}
\subsection{MPCC-linearized Bouligand Stationarity}
Finally, we introduce MPCC-linearized Bouligand stationarity, which is defined via smooth subproblems.
\begin{definition}[MPCC-linearized B-Stationarity for \eqref{eq:i-mpec}, see \cite{Scheel_Scholtes_2000}]\label{def:b-stat-mpec}
Consider a feasible point $(x^*,u^*,v^*)$ of \eqref{eq:i-mpec} with associated index sets $\setU_+$, $\setV_+$ and $\setD$.
It is a \emph{B-stationary} point if it is a stationary point of all
branch problems \eqref{eq:branch-mpec} for $\setPt=\setP \subseteq \setD$.
Here, $\bar\setP$ denotes the complement of $\setP$ in $\setD(x^*)$.
\end{definition}
Note that different names exist for the variant of B-stationarity just introduced. It is simply called \emph{B-stationarity} in \cite{Scheel_Scholtes_2000},
but we prefer here the name \emph{MPCC-linearized B-stationarity} suggested in \cite{FlegelDiss} to prevent confusion with the definition of B-stationarity in the smooth case.
B-stationarity is the most intuitive among the stationarity concepts in that it simply requires that, no matter how degenerate pairs of complementarities are resolved, no first order descent direction is revealed.
Moreover, it can be brought into agreement with the concept of local minimizers already under very weak constraint qualifications.
The downside, however, is that verifying B-stationarity inherently requires exponential runtime effort, as the number of branch problems is exponential in the number of degenerate pairs in $\setD$.
\begin{theorem}\label{th:local-min-b-stat_mpec}
If GCQ holds for all \eqref{eq:branch-mpec}, then all local minimizers of \eqref{eq:i-mpec} are MPCC-linearized B-stationary points.
\end{theorem}
\begin{proof}
This follows directly by KKT theory for smooth optimization problems.
\end{proof}
\begin{definition}[Abs-Normal-Linearized B-Stationarity for \eqref{eq:i-anf}]\label{def:b-stat-anf}
Consider a feasible point $(x^*,z^*)$ of \eqref{eq:i-anf}.
It is an \emph{abs-normal-linearized B-stationary} point if it is a stationary point of the
branch problems \eqref{eq:branch-anf} for $\Sigt=\diag(\sigma)$ with $\sigma \succeq \sigma(x^*)$.
\end{definition}
\begin{theorem}[MPCC-linearized B-stationarity for \eqref{eq:i-mpec} is abs-normal-linearized B-stationarity for \eqref{eq:i-anf}]\label{th:b-stat-is-b-stat}
A feasible point $(x^*,z^*)$ of \eqref{eq:i-anf} is abs-normal-linearized B-stationary if and only if
$(x^*,u^*,v^*)=(x^*,[z^*]^+,[z^*]^-)$ of \eqref{eq:i-mpec} is MPCC-linearized B-stationary.
\end{theorem}
\begin{proof}
Every branch problem \eqref{eq:branch-anf} is smooth and thus stationarity is equivalent to the condition $f'(x^*)^Td\ge 0$ for all $d\in\Tlinsig(x^*,z^*)$.
Analogously, stationarity for every branch problem \eqref{eq:branch-mpec} is equivalent to the condition $f'(x^*)^Td\ge 0$ for all $d\in\Tlinp(x^*,[z^*]^+,[z^*]^-)$.
Then, the equivalence follows as both branch problems are homeomorphic
and both linearization cones are homeomorphic by \cref{le:hom-T-branch-i}.
\end{proof}
\begin{theorem}[Minimizers and abs-normal-linearized B-Stationarity for \eqref{eq:i-anf}]\label{th:local-min-b-stat}
Assume that $(x^*,z^*)$ is a local minimizer of \eqref{eq:i-anf} and that GCQ holds at $(x^*,z^*)$ for all \eqref{eq:branch-anf}.
Then, $(x^*,z^*)$ is abs-normal-linearized B-stationary for \eqref{eq:i-anf}.
\end{theorem}
\begin{proof}
The point $(x^*,z^*)$ is a local minimizer of \eqref{eq:i-anf} if and only if
$(x^*,u^*,v^*)=(x^*,[z^*]^+,[z^*]^-)$ is a local minimizer of \eqref{eq:i-mpec}.
Moreover, GCQ for all \eqref{eq:branch-anf} and GCQ for all \eqref{eq:branch-mpec} are equivalent by \cref{th:branch-gcq-i}.
Thus, $(x^*,u^*,v^*)$ is a local minimizer of the counterpart MPCC and GCQ holds for all \eqref{eq:branch-mpec}.
Then, it is MPCC-linearized B-stationary by \cref{th:local-min-b-stat_mpec} and finally
$(x^*,z^*)$ is abs-normal-linearized B-stationary by \cref{th:b-stat-is-b-stat}.
\end{proof}
\begin{remark}
In \cite{Griewank_Walther_2019}, Griewank and Walther have presented
a stationarity concept that holds without any kink qualification
for minimizers of the \emph{unconstrained} abs-normal NLP
\begin{equation}
\label{eq:anf-0}
\min_{x} f(x), \quad f\in \Cd_{\text{abs}}(\Domx,\mathbb R).
\end{equation}
Indeed, this concept is precisely abs-normal-linearized Bouligand stationarity:
it requires the conditions of \cref{def:b-stat-anf}
specialized to \eqref{eq:anf-0}.
Now, the question arises why no regularity assumption is needed.
The answer is that the abs-normal form provides
a certain degree of built-in regularity:
we have shown in \cite{Hegerhorst_et_al:2019:MPEC1}
that MPCC-ACQ is always satisfied for the counterpart MPCC of \eqref{eq:anf-0} (and thus every local minimizer is an M-stationary point).
Analogously one can show that ACQ for all branch problems \eqref{eq:branch-mpec} is always satisfied for \eqref{eq:anf-0}.
Now, ACQ for all branch problems \eqref{eq:branch-mpec} is equivalent to ACQ for all branch problems \eqref{eq:branch-anf}
by \cref{th:branch-acq-i}, which in turn implies GCQ for all branch problems \eqref{eq:branch-anf}.
Thus, GCQ for all branch problems \eqref{eq:branch-anf} is always satisfied for \eqref{eq:anf-0} and \cref{th:local-min-b-stat} holds.
\end{remark}
\section{Conclusions}
\label{sec:conclusions}
We have shown that general abs-normal NLPs
are essentially the same problem class as MPCCs.
The two problem classes permit the definition of corresponding constraint qualifications,
and optimality conditions of first order under weak constraint qualifications.
We have also shown that the slack reformulation
from \cite{Hegerhorst_Steinbach:2019}
preserves constraint qualifications of Abadie type, whereas for Guignard type we could only prove some implications.
Here, one subtle drawback is the non-uniqueness of slack variables.
Thus, we have introduced branch formulations of general abs-normal NLPs and counterpart MPCCs,
for which constraint qualifications of both Abadie and Guignard type are preserved.
\ifcase\FmtChoice
\bibliographystyle{tfs}
\or
\bibliographystyle{tfs}
\or
\bibliographystyle{jnsao}
\fi
\section{Introduction}
Theoretical insight into resonant response from optical systems,
including photonic-crystalline resonators \cite{Joannopoulos11}
and resonant metasurfaces \cite{Yang15}, is of great importance in
photonics \cite{Miroshnichenko10, Lalanne18}. Unfortunately,
only a few systems allow for a tractable analytic
solution providing an intuitively clear and mathematically exact
picture, such as, e.g., the celebrated Mie-Lorenz theory
\cite{Stratton41}. Thus, in the field of optics resonant
scattering can quite often only be understood in terms of the
temporal coupled mode theory (TCMT) \cite{Haus, Fan03, Suh04}. The
TCMT is a phenomenological approach that maps the scattering
problem onto a system of field driven lossy oscillators.
Mathematically, the problem is cast in the form of a system of
linear differential equations. The coefficients of the system
account for both "internal" modes of the resonant structure as
well as for the coupling of the "internal" modes to incoming and
outgoing waves. The interaction with the impinging light is
understood in terms of "coupled modes" which are populated when
the system is illuminated from the far-zone. The elegance of the
TCMT is in its simplicity and the relative ease in establishing
important relationships between the phenomenological coefficients
solely from the system's symmetries and conservation laws
\cite{Fan03, Suh04, Ruan09, Ruan12}. However, despite its numerous
and successful applications, the TCMT generally
relies on a set of fitting parameters.
Moreover, the mathematical foundations of the TCMT remain vague
since the theory gives neither an exact definition of the "coupled mode" nor a clear recipe for
computing such a "coupled mode" numerically.
Historically, the problem of coupling between the system's eigenmodes to the scattering channels
with the continuous spectrum has attracted a great deal of attention in the field of quantum mechanics
\cite{physrep, Dittes, Ingrid}. One of the central ideas was
the use of the Feshbach projection method \cite{Dittes,Chruscinski13} for mapping the problem onto the Hilbert
space spanned by the eigenstates of the scattering domain isolated from the environment. Such approaches have met with limited
success in application to various wave-related set-ups, including quantum billiards \cite{Stockmann,Pichugin, Stockmann1}, tight-binding models \cite{SR},
potential scattering \cite{Savin}, acoustic resonators \cite{Maksimov15},
nanowire heterostructures
\cite{Racec09}
and, quite recently, dielectric resonators \cite{Gongora17}. Besides its mathematical complexity, there are
two major problems with the Feshbach projection method: first, the eigenmodes of the isolated systems are in general
not known analytically, so a numerical solver most often has to be applied; furthermore, the computation of such
eigenmodes requires some sort of artificial boundary condition
on the interface between the scattering domain and the outer space. Quite remarkably, the convergence
of the method is shown to be strongly affected by the choice of the boundary condition on the interface \cite{Pichugin, Lee, Schanz}.
In recent decades we have witnessed the rise of efficient
numerical solvers utilizing perfectly matched layer (PML)
absorbing boundary conditions \cite{Berenger94, Chew94}. The
application of perfectly matched layers has rendered numerical modelling of wave
propagation in open optical, quantum, and acoustic systems
noticeably less difficult allowing for direct full-wave
simulations even in three spatial dimensions. On the other hand,
the application of PML also made it possible to compute the
eigenmodes and eigenfrequencies of wave equations with
reflectionless boundary conditions. Such eigenmodes come under many
names including quasinormal modes \cite{Lalanne18}, Gamow states
\cite{Civitarese04}, decaying states \cite{More71}, leaky modes
\cite{snyder2012optical}, and resonant states \cite{Muljarov10}.
The availability of the resonant states has naturally invited
applications to solving Maxwell's equations via series expansions
giving rise to a variety of resonant state expansion (RSE) methods
\cite{Lalanne18}. One problem with the resonant states is that
they are not orthogonal in the usual sense of convolution between
two mode shapes with integration over the whole space
\cite{Ingrid, Kristensen13}. This can be seen as a consequence of
exponential divergence with the distance from the scattering
center \cite{Lalanne18}. Fortunately, both of the normalization
and orthogonality issues have recently been by large resolved with
different approaches, most notably through the PML
\cite{Sauvan13}, and the flux-volume (FV) \cite{More71,
Muljarov10} normalization conditions.
In this paper we propose an RSE approach to the problem of light scattering by
an anisotropic defect layer (ADL) embedded into anisotropic photonic crystal (PhC) in the spectral vicinity of an optical BIC.
Although BICs are ubiquitous in various optical systems \cite{Hsu16, Koshelev19}, the
system under scrutiny is the only one allowing for an exact full-wave analytic solution for an optical BIC \cite{Timofeev18}.
By matching the general solution of Maxwell's equation within the ADL to both evanescent and propagating solutions in the PhC
\cite{Rytov1956,
YarivYeh1984bk, ShiTsai1984_PBG, CamleyMills1984_PBG} we find the eigenfield and eigenfrequency of the resonant mode family limiting to the BIC under variation
of a control parameter.
Next, for finding the scattering spectra we apply the spectral representation of Green's function in terms of the FV-normalized resonant states \cite{Muljarov10}.
This is a well developed approach which has already been applied
to both two\cite{Doost13}- and three\cite{Doost14}-dimensional optical systems. The approach has also been recently extended to magnetic, chiral,
and bi-anisotropic optical materials \cite{Muljarov18} as well as potential scattering in quantum mechanics \cite{Tanimu18}. Remarkably, so far RSE methods
have been mostly seen as a numerical tool. Here we show how
a perturbative analytic solution can be constructed in a closed form within the RSE framework. Such a perturbative solution uses the BIC as the zeroth order approximation and,
very importantly, is capable of describing the collapsing Fano resonance
\cite{Kim, Shipman, SBR, Blanchard16, Bulgakov18a, Bogdanov19} in the spectral vicinity of the BIC.
We mention in passing that due to the fine control of Fano line-shapes
optical BICs have recently been viewed as an efficient
instrument in the design of narrowband optical filters \cite{Foley14,Cui16,Doskolovich19,Nguyen19}. We shall see that the
analytic solution matches the exact numerical result to a good accuracy.
\section{The System}
The system under scrutiny is composed of an ADL with two anisotropic PhC arms
attached to its sides as shown in Fig. \ref{fig1} (a). Each PhC
arm is a one-dimensional PhC with alternating layers of isotropic
and anisotropic dielectric materials. The layers are stacked along
the $z$-axis with period $\Lambda$. The isotropic layers are made
of a dielectric material with permittivity $\epsilon_o$ and
thickness $\Lambda-d$. The thickness of each anisotropic layer is
$d$. The anisotropic layers have their principal dielectric axes
aligned with the $x$- and $y$-axes and the corresponding
principal dielectric constants $\epsilon_{e}$ and
$\epsilon_{o}$, but the principal axes of the ADL are tilted with
respect to the principal axes of the PhC arms as shown in Fig.
\ref{fig1}(a). Propagation of the monochromatic electromagnetic waves is
controlled by Maxwell's equations of the following form \cite{Timofeev18}
\begin{figure*}
\begin{minipage}[b]{0.49\textwidth}
\center{\input{Fig_1a.pdf_tex}}
\end{minipage}
\begin{minipage}[b]{0.49\textwidth}
\centering\includegraphics[scale=1]{Fig_1b.png}
\end{minipage}
\caption
{(a) One-dimensional PhC structure composed of alternating layers
of an isotropic dielectric material with permittivity $\epsilon_o$
(gray) and an anisotropic material with the permittivity
components $\epsilon_o$ and $\epsilon_e$ (pink). An anisotropic
defect layer with a tuneable permittivity tensor is inserted in
the center of the structure. The analytic solution for the
quasi-BIC mode profile
is plotted on top and right sides of the stack: the $x$-wave component
Re$(E_x)$ -- blue, the $y$-wave component Re$(E_y)$ -- black. (b)
The transmittance spectra $|t^{\prime}|^2$ of $x$- (blue) and
$y$-waves (black) for the PhC structure from Fig. \ref{fig1}(a)
calculated with Berreman's method. The parameters are $\epsilon_e
= 4$, $\epsilon_o = 1$, $d = 0.125 \ \mu m$, $(\Lambda - d) =
0.250 \ \mu m$, tilt angle $\phi = \pi/9$ (a), $\phi = \pi/18$ (b).
}
\label{fig1}
\end{figure*}
\begin{equation}
\left\{\begin{array}{cc}
0 & \nabla \times \\
-\nabla \times & 0
\end{array}\right\}
\left\{\begin{array}{c}
{\bm{E}} \\
{\bm{H}}
\end{array} \right\}=
-ik_0
\left\{\begin{array}{c}
\hat{\epsilon} {\bm{E}} \\
{\bm{H}} \end{array}\right\},
\label{Maxwell}
\end{equation}
where $\bm{E}$ is the electric vector, $\bm{H}$ is the magnetic vector, $k_0 = \omega/c$ is the wave number in vacuum
with $c$ as the speed of light, and, finally, $\hat{\epsilon}$ is the dielectric tensor.
The orientation of the ADL optical axis is
determined by the unit vector
\begin{equation}
\bm{a} = \{\cos{(\phi)}, \sin{(\phi)}, 0\}^{\dagger},
\label{a}
\end{equation}
as shown in Fig. \ref{fig1}(a).
Since the reference frame is aligned with the
optical axes in the PhC, the dielectric tensor is diagonal everywhere outside the ADL.
Given that $\bm{a}$ is specified by the tilt angle $\phi$, in the ADL it takes the following
form
\begin{equation}
\hat{\epsilon} = \left\{\begin{array} {cc}
\epsilon_e \cos^2 (\phi) + \epsilon_o \sin^2 (\phi) & \sin{(2\phi)} \; (\epsilon_e-\epsilon_o)/2 \\
\sin{(2\phi)} \; (\epsilon_e-\epsilon_o)/2 & \epsilon_e \sin^2 (\phi) + \epsilon_o \cos^2 (\phi)
\end{array} \right\}.
\label{eq:epsilon}
\end{equation}
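As a sanity check, equation (\ref{eq:epsilon}) is nothing but the principal-frame tensor $\mathrm{diag}(\epsilon_e,\epsilon_o)$ rotated by the tilt angle $\phi$, i.e. $\hat\epsilon = R(\phi)\,\mathrm{diag}(\epsilon_e,\epsilon_o)\,R(\phi)^T$ with $R(\phi)$ the in-plane rotation taking the $x$-axis onto $\bm{a}$. A short numerical verification of this identity:

```python
import numpy as np

eps_e, eps_o, phi = 4.0, 1.0, np.pi / 9  # parameters from Fig. 1(a)
c, s = np.cos(phi), np.sin(phi)
R = np.array([[c, -s], [s, c]])          # rotation taking the x-axis onto a

# Tensor in the principal frame of the ADL, rotated into the lab frame.
eps_lab = R @ np.diag([eps_e, eps_o]) @ R.T

# Closed form quoted in Eq. (eq:epsilon).
off = 0.5 * np.sin(2 * phi) * (eps_e - eps_o)
eps_ref = np.array([[eps_e * c**2 + eps_o * s**2, off],
                    [off, eps_e * s**2 + eps_o * c**2]])
assert np.allclose(eps_lab, eps_ref)
```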
In this paper we restrict ourselves to normal incidence, i.e. the Bloch wave vector is
aligned with the $z$-axis in Fig. \ref{fig1}(a). The dispersion of waves in the PhC arms depends on polarization.
For the $x$-polarized
waves ($x$-waves) the dispersion relationship is that of a one-dimensional PhC \cite{Rytov1956,
YarivYeh1984bk, ShiTsai1984_PBG, CamleyMills1984_PBG}
\begin{equation}
\cos{(K \Lambda)} = \cos{(k_{e} d)} \cos{[k_{o} (\Lambda - d)]} - \frac{1 + r_{o e}^2}{1 - r_{o e}^2} \sin{(k_{e} d)} \sin{[k_{o}
(\Lambda - d)]},
\label{Rytov}
\end{equation}
where $K$ is the Bloch wave number,
\begin{equation}
k_{e} = k_0 \sqrt{\epsilon_e} = k_0 n_e, \ k_{o} = k_0 \sqrt{\epsilon_o} = k_0 n_o,
\label{o_and_e}
\end{equation}
and the Fresnel coefficient $r_{o e}$ is given by
\begin{equation}
r_{o e} = \frac{k_{o} - k_{e}} {k_{o} + k_{e}}.
\label{Fresnel_ab_perp}
\end{equation}
Equation (\ref{Rytov}) defines the band structure for the
$x$-waves with the condition $|\cos{(K \Lambda)}| = 1$
corresponding to the edges of the photonic band gap in which
the wave propagation is forbidden. In Fig. \ref{fig1}(b) we
demonstrate the transmittance spectrum of the system with $20$
bi-layers in each PhC arm; the overall system is submersed in
air. One can see that for the $x$-waves the transmittance is zero
within the band gap. On the other hand the PhC arms are always
transparent to the $y$-polarized waves ($y$-waves) with dispersion
$k_o=\sqrt{\epsilon_o}\,k_0$, cf. equation (\ref{o_and_e}). Notice, though, that the $y$-wave
transmittance exhibits a sharp dip at the center of the band gap. This
dip is due to a high quality resonant mode predicted in
\cite{Timofeev18}. Although the line shape is symmetric, the dip,
nonetheless, can be attributed to a Fano resonance as the
transmittance reaches zero at the center of the band gap indicating a
full destructive interference between two transmission paths.
In this paper we set the goal of finding an
analytic solution describing the Fano anomaly in the band gap.
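The band gap predicted by the dispersion relation (\ref{Rytov}) can be verified numerically. The sketch below uses the parameters of Fig. \ref{fig1}; at the quarter-wave vacuum wavelength $\lambda_0 = 1\ \mu m$ both layer phases equal $\pi/2$, so $\cos(K\Lambda) = -(1+r_{oe}^2)/(1-r_{oe}^2) = -1.25$ and $x$-wave propagation is forbidden there.

```python
import numpy as np

# Parameters of Fig. 1: eps_e = 4, eps_o = 1, d = 0.125 um, Lambda - d = 0.250 um.
n_e, n_o = 2.0, 1.0          # refractive indices sqrt(eps_e), sqrt(eps_o)
d, Lam = 0.125, 0.375        # anisotropic-layer thickness and period, micrometers

def cos_KL(k0):
    """Right-hand side of the x-wave dispersion relation, Eq. (Rytov)."""
    ke, ko = n_e * k0, n_o * k0
    r = (ko - ke) / (ko + ke)
    return (np.cos(ke * d) * np.cos(ko * (Lam - d))
            - (1 + r**2) / (1 - r**2) * np.sin(ke * d) * np.sin(ko * (Lam - d)))

# Gap center: vacuum wavelength 1 um (quarter-wave layers), k0 = 2*pi / 1 um.
print(cos_KL(2 * np.pi))  # approximately -1.25; |cos(K Lambda)| > 1: gap
```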
\section{Resonant eigenmode and bound state in the continuum}
The resonant states are the eigenmodes of Maxwell's equations (\ref{Maxwell}) with reflectionless boundary conditions
in the PhC arms. The equation for resonant eigenfrequencies can be obtained by matching the general solution in the ADL
to the outgoing, both propagating and evanescent, waves in the PhC arms.
Let us start from the general solution in the ADL.
\subsection{General solution in the ADL}
First, the unit vector along the propagation direction is defined as
\begin{equation}
\bm{\kappa}^{\scriptscriptstyle(\pm)} = [0, 0, \pm 1],
\label{kappa_eo}
\end{equation}
where the symbol $\pm$ is used to discriminate between forward and backward propagating waves with respect to
the orientation of the $z$-axis. The ADL supports two types of electromagnetic
waves of different polarization. The $e$-waves with wavenumber $k_e=\sqrt{\epsilon_e}\,k_0$ are polarized along the director $\bm{a}$,
equation (\ref{a}), while the
$o$-waves with wavenumber $k_o=\sqrt{\epsilon_o}\,k_0$ have their electric vector perpendicular to both $\bm{a}$ and $\bm{\kappa}$,
as seen from Fig. \ref{fig1}.
The electric and magnetic vectors of the $e$-wave can be written as
\begin{equation}
\bm{E}_{e}^{\scriptscriptstyle(\pm)} = E_{e}^{\scriptscriptstyle(\pm)} \bm{a}, \ \ \bm{H}_{e}^{\scriptscriptstyle(\pm)}
= \frac{k_e}{k_0}\left[\bm{\kappa}^{\scriptscriptstyle(\pm)} \times \bm{E}_{e}^{\scriptscriptstyle(\pm)} \right],
\label{e_EH}
\end{equation}
where $E_{e}^{\scriptscriptstyle (\pm)}$ are unknown amplitudes.
At the same time for $o$-waves we have
\begin{equation}
\bm{E}_{o}^{\scriptscriptstyle(\pm)} = E_{o}^{\scriptscriptstyle (\pm)} \left[ \bm{a} \times \bm{\kappa}^{\scriptscriptstyle(\pm)} \right], \ \
\bm{H}_{o}^{\scriptscriptstyle(\pm)} = \frac{k_o}{k_0}\left[\bm{\kappa}^{\scriptscriptstyle(\pm)} \times \bm{E}_{o}^{\scriptscriptstyle(\pm)}
\right],
\label{o_EH}
\end{equation}
where $E_{o}^{\scriptscriptstyle (\pm)}$ are again unknown amplitudes.
The general solution of equations (\ref{Maxwell}) in the ADL, $\ z \in [-d,\ d]$, is written as a sum of
the forward and backward propagating waves
\begin{equation}
\bm{E} = \sum_{j=o,e} \left(\bm{E}_{j}^{\scriptscriptstyle(+)} e^{i k_{j} z} + \bm{E}_{j}^{\scriptscriptstyle(-)} e^{-i k_{j} z} \right),\ \
\bm{H} = \sum_{j=o,e} \left(\bm{H}_{j}^{\scriptscriptstyle(+)} e^{i k_{j} z} + \bm{H}_{j}^{\scriptscriptstyle(-)} e^{-i k_{j} z} \right).
\label{sum_EH}
\end{equation}
\subsection{General solution in the PhC arms}
The general solution of Maxwell's equations (\ref{Maxwell}) in the PhC arms is also written as a sum of forward
and backward propagating waves. For
the $x$-waves the field components $E_x$ and $H_y$ in the isotropic layer
with the cell number $m$, $\ z \in [d + m \Lambda,\ (m + 1) \Lambda]$, are written as
\begin{equation}
\begin{aligned}
& E_{x}^{(m)} = e^{iK \Lambda m}\left[A^{\scriptscriptstyle(+)}_x e^{i k_{o} (z - d - m \Lambda)}
+ A^{\scriptscriptstyle (-)}_x e^{-i k_{o} (z - d - m \Lambda)}\right], \\
& H_{y}^{(m)} = \frac{k_{o}}{k_0} e^{iK \Lambda m}\left[A^{\scriptscriptstyle (+)}_x e^{i k_{o}
(z - d - m \Lambda)} - A^{\scriptscriptstyle (-)}_x e^{-i k_{o} (z - d - m \Lambda)}\right].
\end{aligned}
\label{xEH1}
\end{equation}
In the anisotropic layer with the cell number $m$, $\ z \in [(m+1) \Lambda,\ d + (m + 1) \Lambda] $, we have
\begin{equation}
\begin{aligned}
& E_{x}^{(m)} = e^{iK \Lambda m}\left[B^{\scriptscriptstyle (+)}_x e^{i k_{e} (z - m\Lambda-\Lambda)} + B^{\scriptscriptstyle (-)}_x e^{-i k_{e} (z - m\Lambda-\Lambda)}\right], \\
& H_{y}^{(m)} = \frac{k_{e}}{k_0} e^{iK \Lambda m}\left[B_x^{\scriptscriptstyle (+)} e^{i k_{e} (z - m\Lambda-\Lambda)} - B_x^{\scriptscriptstyle (-)} e^{-i k_{e} (z - m\Lambda-\Lambda)}\right].
\end{aligned}
\label{xEH2}
\end{equation}
By applying the continuity condition for the tangential field components,
the solutions (\ref{xEH1}) and (\ref{xEH2}) are matched
on the boundary between the anisotropic layer in the $(m-1)_{\rm th}$ cell and the isotropic
layer in the $m_{\rm th}$ cell, $\ z = d + m \Lambda$, as well as on the boundary between the layers
in the $m_{\rm th}$ cell, $\ z = (m + 1)\Lambda$. This gives us a system of four equations for four
unknowns, $A^{\scriptscriptstyle (+)}, A^{\scriptscriptstyle (-)}, B^{\scriptscriptstyle (+)}, B^{\scriptscriptstyle (-)}$. After solving for $B^{\scriptscriptstyle (+)}$ and $B^{\scriptscriptstyle (-)}$, this system can
be reduced to the following two equations
\begin{equation}
\left\{\begin{aligned}
A^{\scriptscriptstyle (+)} \left[e^{i k_{o} (\Lambda - d)} - e^{iK \Lambda} e^{-i k_{e} d}\right] -
A^{\scriptscriptstyle (-)} r_{o e}\left[e^{-i k_{o} (\Lambda - d)} - e^{iK \Lambda} e^{-i k_{e} d}\right] = 0, \\
-A^{\scriptscriptstyle (+)} r_{o e} \left[e^{i k_{o} (\Lambda - d)} - e^{iK \Lambda} e^{i k_{e} d}\right] +
A^{\scriptscriptstyle (-)} \left[e^{-i k_{o} (\Lambda - d)} - e^{iK \Lambda} e^{i k_{e} d}\right] = 0,
\end{aligned}\right.
\label{PC_equations}
\end{equation}
where $r_{oe}$ is given by equation (\ref{Fresnel_ab_perp}). One can easily check that Eq. (\ref{PC_equations}) is only solvable
when $K$ satisfies the dispersion relationship (\ref{Rytov}).
In contrast to the $x$-waves, for the outgoing $y$-waves in the right PhC arm the solution is simple
\begin{equation}
\begin{aligned}
& E_{y} =
-C^{\scriptscriptstyle (+)} e^{i k_{o} (z - d)}, \\
& H_{x} = \frac{k_{o}}{k_0} C^{\scriptscriptstyle (+)} e^{i k_{o} (z - d)}.
\end{aligned}
\label{yEH}
\end{equation}
Notice that so far we have not written down the solution in the left PhC arm. The direct application of that solution can be avoided
by using the mirror symmetry of the system. Here, in accordance with Ref. \cite{Timofeev18}, we restrict ourselves to the antisymmetric case
\begin{equation}
\bm{E}(z) = - \bm{E}(-z).
\label{atysymmetry}
\end{equation}
The generalization to the symmetric case is straightforward.
\subsection{Wave matching}
Now we have all ingredients for finding the field profile of the antisymmetric resonant eigenmodes. By matching equation
(\ref{sum_EH}) to both equation (\ref{xEH1}) and equation (\ref{yEH}) on the interface between the ADL and the
right PhC arm and using equations (\ref{PC_equations}, \ref{atysymmetry}) one obtains eight equations for
eight unknown variables $E_{e}^{\scriptscriptstyle(+)}, E_{e}^{\scriptscriptstyle(-)}, E_{o}^{\scriptscriptstyle (+)}, E_{o}^{\scriptscriptstyle (-)}, A^{\scriptscriptstyle (+)}, A^{\scriptscriptstyle (-)}, C^{\scriptscriptstyle (+)}, K$.
After some lengthy and tedious calculations one finds that the system is solvable under the following condition
\begin{equation}
\frac{\xi e^{i k_{o} (\Lambda - d)} - r_{o e} e^{-i k_{o} (\Lambda - d)}}{\xi e^{-i k_{e} d} - r_{o e} e^{-i k_{e} d}} -
\frac{e^{-i k_{o} (\Lambda - d)} - \xi r_{o e} e^{i k_{o} (\Lambda - d)}}{e^{i k_{e} d} - \xi r_{o e} e^{i k_{e} d}} = 0,
\label{disp}
\end{equation}
where
\begin{equation}
\xi = -e^{2i k_{o} d}\sin^2{(\phi)} + \frac{r_{oe} - e^{2i k_{e} d}}{1 - r_{oe} e^{2i k_{e} d}} \cos^2{(\phi)}.
\label{x}
\end{equation}
Taking into account equation (\ref{o_and_e}) we see that the above formulae represent a transcendental equation for the
complex eigenvalues $k_0$ of Maxwell's equations (\ref{Maxwell}).
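Equation (\ref{x}) admits a quick numerical sanity check. The Python sketch below is illustrative only: it writes $k_{o,e}=n_{o,e}k_0$ for the two wavenumbers and *assumes* $r_{oe}=(k_o-k_e)/(k_o+k_e)$, the standard Fresnel coefficient, which is consistent with $t_{oe}=2k_o/(k_o+k_e)$ given in Eq. (\ref{fields}); all numerical values are arbitrary. At $\phi=0$ the expression reduces to a Blaschke-type factor, so $|\xi|=1$ for real $k_0$.

```python
import cmath

def xi(k0, n_o, n_e, d, phi):
    """Evaluate Eq. (x). r_oe is assumed to be the Fresnel coefficient
    (k_o - k_e)/(k_o + k_e), consistent with t_oe in Eq. (fields)."""
    k_o, k_e = n_o * k0, n_e * k0
    r_oe = (k_o - k_e) / (k_o + k_e)
    # Blaschke-type factor multiplying cos^2(phi) in Eq. (x):
    blaschke = (r_oe - cmath.exp(2j * k_e * d)) / (1 - r_oe * cmath.exp(2j * k_e * d))
    return -cmath.exp(2j * k_o * d) * cmath.sin(phi)**2 + blaschke * cmath.cos(phi)**2

# At phi = 0, xi is unimodular for real k0 (|r - z| = |1 - r z| when |z| = 1, r real):
print(abs(abs(xi(1.0, 1.5, 2.0, 0.7, 0.0)) - 1.0) < 1e-12)  # True
```

For intermediate tilt angles $\xi$ is a weighted sum of two unimodular numbers and is no longer unimodular in general.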
The analytic solution for the electromagnetic field within the ADL can be evaluated through the following formulae
\begin{equation}
\begin{aligned}
& E_{o}^{\scriptscriptstyle (+)} = E_{o}^{\scriptscriptstyle (-)}, \ E_{e}^{\scriptscriptstyle (-)} = \zeta E_{o}^{\scriptscriptstyle(-)}, \ E_{e}^{\scriptscriptstyle (+)} = -\zeta E_{o}^{\scriptscriptstyle (-)}, \ E_{o}^{\scriptscriptstyle (-)} = \frac{i A} {2 n_e \zeta},\\
& \zeta = - \frac{e^{-i k_{o} d} t_{oe} \cos{(\phi)}}{(e^{-i k_{e} d} - r_{oe} e^{i k_{e} d}) \sin{(\phi)}}, \ t_{oe} = \frac{2k_{o}} {k_{o} + k_{e}}.
\end{aligned}
\label{fields}
\end{equation}
Substituting (\ref{fields}) into equation (\ref{sum_EH}) one finds the profile of the resonant eigenmode within the ADL
\begin{equation}
\begin{aligned}
& E_x = \frac{A}{n_e} \sin{(k_e z)} \cos{(\phi)} - \frac{A}{n_e \zeta} \sin{(k_o z)} \sin{(\phi)}, \\
& E_y = \frac{A}{n_e} \sin{(k_e z)} \sin{(\phi)} + \frac{A}{n_e \zeta} \sin{(k_o z)} \cos{(\phi)}.
\label{EADL}
\end{aligned}
\end{equation}
The amplitude $A$ has to be defined from a proper normalization
condition. We mention that in the limiting case $\phi \rightarrow 0$
one has $\zeta \rightarrow \infty$, and the fields $E_x = (A/n_e) \sin{(k_e
z)}$ and $E_y = 0$ coincide with the exact solution for the BIC, Eqs. (8) and (21)
of our previous work \cite{Timofeev18}. The obtained eigenfields
are plotted in Fig. \ref{fig1}(a). One can see that though the
$y$-component is localized due to the band gap, the $x$-component
grows with the distance from the ADL, as is typical for resonant
eigenstates \cite{Lalanne18}.
\subsection{Perturbative solution}
Equations (\ref{disp}, \ref{x}) are generally not solvable analytically. There is, however, a single tractable perturbative solution in
the case of quarter-wave optical thicknesses of the layers
\begin{equation}
k_{o} (\Lambda - d) = k_{e} d = \frac{k_0 \lambda_{\scriptscriptstyle PBG}}{4} = \frac{\pi \omega}{2 \omega_{\scriptscriptstyle PBG}},
\label{QW}
\end{equation}
where $\omega_{\scriptscriptstyle PBG}$ is the center frequency of the photonic band gap, and $\lambda_{\scriptscriptstyle PBG}$ is the corresponding wavelength.
In our previous work \cite{Timofeev18} we found an exact solution for $\phi=0$.
Here, by applying a Taylor expansion of equations (\ref{disp}, \ref{x}) in powers
of the tilt angle $\phi$, we find approximate solutions for both the resonant eigenfrequency and the resonant eigenmode.
By writing the resonant eigenfrequency as
\begin{equation}
\omega_r=\tilde{\omega}-i \gamma,
\end{equation}
where both $\tilde{\omega}$ and $\gamma$ are real and positive,
and substituting into equations (\ref{disp}, \ref{x}) one finds
\begin{equation}
\begin{aligned}
& \tilde{\omega} = \omega_{\scriptscriptstyle PBG} + \frac{\omega_{\scriptscriptstyle PBG}}{\pi} q (1 - q) \sin{(\pi q)} \cdot \phi^{2} + {\cal O}(\phi^4)\\
& \gamma = \frac{2 \omega_{\scriptscriptstyle PBG}}{\pi} q (1 - q) \cos^2{(\pi q/2)} \cdot \phi^{2}+ {\cal O}(\phi^4).
\label{omega}
\end{aligned}
\end{equation}
where $q = n_{o}/n_{e}$. Notice that the imaginary part of $\omega_r$ vanishes if $\phi=0$.
Thus, the system supports an antisymmetric BIC with the frequency $\omega_{\scriptscriptstyle BIC}=\omega_{\scriptscriptstyle PBG}$. That BIC
was first reported in our previous work \cite{Timofeev18}. We address the reader to the above reference for
detailed analysis of the BIC and the plots visualizing its eigenfields.
For the further convenience we also introduce the resonant vacuum wave number as
\begin{equation}
k_r = \omega_r/c = (\tilde{\omega} - i \gamma)/c = k_{\scriptscriptstyle BIC}\left[1 + \alpha \cdot \phi^2+ {\cal O}(\phi^4)\right],
\label{k_BIC}
\end{equation}
where the complex valued $\alpha$ is implicitly defined by equation (\ref{omega}) and $k_{\scriptscriptstyle BIC}=\omega_{\scriptscriptstyle BIC}/c$.
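The scalings in Eq. (\ref{omega}) are easy to confirm numerically. The sketch below (Python; the refractive indices $n_o=1.5$, $n_e=2.0$ are illustrative assumptions, not values from the text) checks that the linewidth $\gamma$ vanishes at $\phi=0$, recovering the BIC, and grows quadratically with the tilt angle.

```python
import math

def resonance(omega_pbg, n_o, n_e, phi):
    """Leading-order resonant frequency and linewidth, Eq. (omega)."""
    q = n_o / n_e
    omega_t = omega_pbg + (omega_pbg / math.pi) * q * (1 - q) * math.sin(math.pi * q) * phi**2
    gamma = (2 * omega_pbg / math.pi) * q * (1 - q) * math.cos(math.pi * q / 2)**2 * phi**2
    return omega_t, gamma

w1, g1 = resonance(1.0, 1.5, 2.0, 0.05)
w2, g2 = resonance(1.0, 1.5, 2.0, 0.10)
# gamma vanishes as phi -> 0 (the BIC) and scales as phi^2:
print(round(g2 / g1, 6))  # 4.0
```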
Finally, expanding (\ref{EADL}) into the Taylor series in $\phi$ we find the following expression for the resonant eigenmode profile within the ADL
\begin{equation}
\bm{E}_r(z) = \frac{A}{n_e}
\left\{\begin{array} {c} \sin{\left(\frac{\pi z}{2 d}\right)}
+ {\cal O}(\phi^2)
\\ \left[ \sin{\left(\frac{\pi z}{2 d}\right)} + ie^{\frac{i\pi q}{2}} \sin{\left(\frac{\pi q z}{2 d}\right)} \right]\cdot \phi + {\cal O}(\phi^3) \end{array}\right\}.
\label{E_ADL}
\end{equation}
Notice that ${\bm E}_r$ can be handled as a $2\times 1$ vector since
$E_z=0$.
\subsection{Normalization condition}
There are several equivalent formulations of the FV normalization condition \cite{Muljarov10, Doost13, Muljarov18}.
Here we follow \cite{Doost13}, writing down the FV normalization condition through analytic continuation
${\bm E}(z,k)$ of the resonant eigenmode ${\bm E}_r(z)$ around the point $k=k_r$
\begin{equation}
\int\limits_{-d}^{d}{\bm E}_r\cdot\hat{\epsilon}{\bm E}_{r}dz
- \lim_{k\rightarrow k_r}\left\{ \frac{2\left[{\bm E}_r(d)\cdot\partial_z{\bm E}(d,k)-
{\bm E}(d,k)\cdot\partial_z{\bm E}_{r}(d) \right]}{k_{r}^2-k^2}\right\}=1.
\label{normalization}
\end{equation}
At first glance the "flux" term in equation (\ref{normalization})
differs from that in \cite{Doost13} by a factor of $2$; this is because the "flux" term is doubled to account for
both interfaces $z=\pm d$. For the resonant eigenmode $\bm{E}_r(z)$ found in the previous subsection, the amplitude $A$ corresponding to
equation (\ref{normalization}) can be found analytically
with the use of equation (\ref{E_ADL}). This would, however, result in a very complicated expression. Fortunately, we shall see later on that
in our case we do not need the general expression for $A$ in the second order perturbative
solution consistent with equation (\ref{omega}).
By a close examination of equation (\ref{E_ADL}) one can see that the Taylor expansion of $A$ can only
contain even powers of $\phi$.
Thus, for the further convenience we can write
\begin{equation}
A = \frac{1}{\sqrt{F+B\cdot\phi^2}}
\label{AB}
\end{equation}
assuming that $F$ and $B$ are such that the normalization condition (\ref{normalization}) is satisfied up to ${\cal O}(\phi^4)$.
\section{Scattering problem}
Let us assume that a monochromatic $y$-wave is injected into the system through the left PhC arm. The scattering problem can now be solved through the following
decomposition of the electric field within the ADL
\begin{equation}\label{decomposition}
{\bm E}={\bm E}_{\scriptscriptstyle dir}+{\bm E}_{\scriptscriptstyle res},
\end{equation}
where the direct contribution is simply the electric field of the incident wave
\begin{equation}\label{direct}
{\bm E}_{\scriptscriptstyle dir}=\sqrt{I_0}e^{ik_oz}
\left\{0, \ 1\right\}^{\dagger}
\end{equation}
with intensity $I_0$, while ${\bm E}_{\scriptscriptstyle res}$ can be viewed as the contribution due to the resonant pathway
via excitation of the resonant eigenmode ${\bm E}_r$. Substituting Eq. (\ref{decomposition}) into Eq. (\ref{Maxwell})
one obtains the inhomogeneous equation
\begin{equation}\label{inhomo}
\partial_z^2{\bm E}_{\scriptscriptstyle res} +k^2_0\hat{\epsilon} {\bm E}_{\scriptscriptstyle res}={\bm J}
\end{equation}
with
\begin{equation}\label{J}
{\bm J}=-\partial_z^2{\bm E}_{\scriptscriptstyle dir} -k^2_0\hat{\epsilon} {\bm E}_{\scriptscriptstyle dir}.
\end{equation}
Equation (\ref{inhomo}) can be solved with the use of Green's function
\begin{equation}\label{solved}
{\bm E}_{\scriptscriptstyle res}(z)=\int\limits_{-d}^{d}\widehat{G}(z,z'){\bm J}(z')dz'
\end{equation}
that is defined as the solution of Maxwell's equations with a delta source
\begin{equation}\label{Green}
\partial_z^2\widehat{G}(z,z') +k^2_0\hat{\epsilon} \widehat{G}(z,z')=\delta(z-z')\widehat{\mathbb I}_{2},
\end{equation}
where $\widehat{\mathbb I}_{2}$ is the $2\times2$ identity matrix. According to
\cite{Doost13} Green's function can be expanded into the orthonormal resonant eigenmodes as
\begin{equation}\label{Green_expantion}
\widehat{G}(z,z')=\sum_n\frac{{\bm E}_n(z) \otimes{\bm E}_n(z')}{2k_0(k_0-k_n)}.
\end{equation}
Of course we do not know the full spectrum $k_n$, since equations
(\ref{disp}, \ref{x}) are not solved analytically. We, however,
assume that the contribution of all eigenfields except ${\bm
E}_{\scriptscriptstyle res}$ is accumulated in the direct field. Thus, in the
spectral vicinity of the quasi-BIC we apply the {\it resonant
approximation} taking into account only the eigenmode associated
with the BIC
\begin{equation}\label{Green_resonant}
\widehat{G}_{\scriptscriptstyle res}(z,z')=\frac{{\bm E}_{r}(z) \otimes{\bm E}_{r}(z')}{2k_0(k_0-k_{r})}.
\end{equation}
The resonant field can now be calculated from equation (\ref{solved}) with
the resonant Green's function equation (\ref{Green_resonant}) once the FV normalization condition (\ref{AB})
is applied to the quasi-BIC eigenmode. The analytic expression for the resonant field reads
\begin{equation}
\bm{E}_{\scriptscriptstyle res}(z) = \frac{1}{k_0(k_0-k_r) (F+B\cdot\phi^2) \epsilon_e}
\left[
\frac{i k_o \sqrt{I_0} k_0^2 (\epsilon_e-\epsilon_o) \cos{(k_o d)}}{k_{o}^2-\pi^2/4d^2} \cdot \phi
+ {\cal O}(\phi^3) \right]
\left\{\begin{array} {c} \sin{\left(\frac{\pi z}{2 d}\right)}
\\ \left[\sin{\left(\frac{\pi z}{2 d}\right)} + ie^{\frac{i\pi q}{2}} \sin{\left(\frac{\pi q z}{2 d}\right)} \right]\cdot \phi
\end{array}\right\}.
\label{Solution}
\end{equation}
\begin{figure*}[h!]
\centering\includegraphics[scale=1]{Fig_2_ampl.png} \caption{ {(a, c)
The reflectance $|r^{\prime}|^2$ (a) and transmittance
$|t^{\prime}|^2$ (c) spectra vs tilt angle $\phi$ of $y$-waves for
PhC structure from Fig. \ref{fig1}(a) calculated with Berreman's
method. The dashed red line shows the analytic resonant frequency
$\omega_r/(2\pi)$ \eqref{omega}. The solid magenta lines show the
analytic results for half-minima in transmittance $(\omega_r \pm
\gamma)/(2\pi)$. (b, d) The difference between reflectance (b)
and transmittance (d) spectra calculated with Berreman's method
and analytically \eqref{r}, \eqref{t}. The parameters are the same
as in Fig. \ref{fig1}.}} \label{fig2}
\end{figure*}
The above equation constitutes the perturbative solution of the
scattering problem with accuracy up to ${\cal O} (\phi^3)$.
Notice that the terms dependent on $B$ also vanish as ${\cal O}
(\phi^3)$. Thus, in evaluating the FV normalization condition
(\ref{normalization}) we can restrict ourselves to $\phi=0$, in
which case the eigenmode is a BIC. Further on we safely set $B=0$
in all calculations. The BIC is a localized function decaying as
$z\rightarrow \pm \infty$. Since the division into the scattering
domain and the waveguides is arbitrary and the flux term
vanishes as $z\rightarrow \pm \infty$, one can rewrite the
normalization condition for the BIC as follows
\begin{equation}
\int\limits_{-\infty}^{\infty}{\bm E}_n\cdot\hat{\epsilon}{\bm E}_{n}dz=1.
\label{normalization_BIC}
\end{equation}
We mention in passing that the equivalence between equations (\ref{normalization}) and (\ref{normalization_BIC})
can also be proven by subsequently applying the Newton-Leibniz formula and Maxwell's equations (\ref{Maxwell}) to the flux
term in equation (\ref{normalization}). The integral in equation (\ref{normalization_BIC}) is nothing but
the energy stored in the eigenmode up to a constant prefactor. This integral for the system in Fig. \ref{fig1}(a)
has been evaluated analytically in our previous work \cite{Timofeev18}. As a result the normalization condition (\ref{normalization_BIC})
yields
\begin{equation}
F=\frac{d}{1 - q}.
\label{A}
\end{equation}
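Equation (\ref{A}) can be cross-checked with elementary arithmetic: for the $\phi=0$ BIC the in-ADL part of the norm integral is $\int_{-d}^{d}\epsilon_e(A/n_e)^2\sin^2(\pi z/2d)\,dz=A^2d=1-q$, so the decaying tails in the PhC arms must supply the remaining $q$. A minimal numerical sketch (Python; the indices and $d$ are illustrative assumptions):

```python
import math

n_o, n_e, d = 1.5, 2.0, 1.0   # illustrative values, not fixed by the text
q = n_o / n_e
F = d / (1 - q)               # Eq. (A), with B = 0
A = 1 / math.sqrt(F)

# Midpoint-rule integral of eps_e * (A/n_e)^2 * sin^2(pi z / 2d) over [-d, d]:
N = 400
dz = 2 * d / N
inside = sum(n_e**2 * (A / n_e * math.sin(math.pi * (-d + (i + 0.5) * dz) / (2 * d)))**2
             for i in range(N)) * dz
# The ADL contributes 1 - q to the unit norm; the PhC tails contribute q.
print(abs(inside - (1 - q)) < 1e-6)  # True
```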
Finally, the reflection and transmission coefficients can be found through the following
equations, correspondingly
\begin{equation}\label{r}
r=e^{-ik_od}{\bm e}_y^{\dagger} \cdot\left[{\bm E}(-d)-\sqrt{I_0}e^{-ik_od}{\bm e}_y\right],
\end{equation}
and
\begin{equation}\label{t}
t=e^{-ik_od}{\bm e}_y^{\dagger} \cdot {\bm E}(d),
\end{equation}
where ${\bm e}_y= \left\{0, 1 \right\}^{\dagger}$ and ${\bm E}$ is
computed through equations (\ref{decomposition}, \ref{direct},
\ref{Solution}). In Fig. \ref{fig2} we compare the perturbative
analytic solution against direct numerical simulations with the
Berreman transfer-matrix method \cite{Berreman72}. One can see
that the deviation is no more than 2\% even for the relatively large
angle $\phi = 10$ deg.
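The amplitudes can be assembled directly from Eqs. (\ref{decomposition}), (\ref{direct}), (\ref{Solution}), (\ref{r}), (\ref{t}). The Python sketch below is a hedged illustration, not the code used for Fig. \ref{fig2}: it works in $c=1$ units with assumed indices $n_o=1.5$, $n_e=2.0$, sets $B=0$ and $F=d/(1-q)$, and checks the limiting case $\phi=0$, where within the resonant approximation the resonant pathway switches off, giving $r=0$ and $|t|^2=1$.

```python
import cmath, math

# Illustrative parameters (assumptions, not fixed by the paper); c = 1 units.
n_o, n_e = 1.5, 2.0
q = n_o / n_e
w_pbg = 1.0
d = math.pi / (2 * n_e * w_pbg)   # quarter-wave condition, Eq. (QW)
F = d / (1 - q)                   # Eq. (A); B = 0 as argued in the text

def amplitudes(w, phi, I0=1.0):
    """r and t from Eqs. (r), (t) in the resonant approximation."""
    k0 = w
    k_o = n_o * k0
    eps_o, eps_e = n_o**2, n_e**2
    # perturbative pole, Eqs. (omega), (k_BIC):
    w_t = w_pbg + (w_pbg / math.pi) * q * (1 - q) * math.sin(math.pi * q) * phi**2
    gam = (2 * w_pbg / math.pi) * q * (1 - q) * math.cos(math.pi * q / 2)**2 * phi**2
    k_r = w_t - 1j * gam
    # scalar prefactor of Eq. (Solution):
    S = (1j * k_o * math.sqrt(I0) * k0**2 * (eps_e - eps_o) * math.cos(k_o * d) * phi
         / ((k_o**2 - (math.pi / (2 * d))**2) * k0 * (k0 - k_r) * F * eps_e))
    def Ey_res(z):  # y-component of Eq. (Solution)
        return S * phi * (cmath.sin(math.pi * z / (2 * d))
                          + 1j * cmath.exp(1j * math.pi * q / 2)
                          * cmath.sin(math.pi * q * z / (2 * d)))
    t = math.sqrt(I0) + cmath.exp(-1j * k_o * d) * Ey_res(d)
    r = cmath.exp(-1j * k_o * d) * Ey_res(-d)
    return r, t

# At phi = 0 (evaluated slightly off w_pbg to avoid the 0/0 at the pole):
r0, t0 = amplitudes(1.01 * w_pbg, 0.0)
print(abs(r0) < 1e-14, abs(abs(t0)**2 - 1) < 1e-14)  # True True
```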
\section{Conclusion}
In this paper we considered light scattering by
an anisotropic defect layer embedded into an anisotropic photonic crystal in the spectral vicinity of an optical BIC. Using the resonant state expansion method we derived an analytic solution for the reflection and transmission amplitudes.
The analytic solution is constructed via a perturbative approach with the BIC as the zeroth order approximation.
The solution is found to accurately describe the collapsing Fano feature in the spectral vicinity of the BIC.
So far the theoretical attempts to describe the Fano feature induced by the BIC relied on phenomenological approaches such
as the $S$-matrix approach \cite{Blanchard16}, or the coupled mode theory \cite{Bulgakov18a}.
To the best
of our knowledge this is the first full-wave analytic solution involving an optical BIC reported in the literature.
We believe that the results presented offer a new angle onto the resonant state expansion method paving a way
to analytic treatment of resonant scattering due to optical BICs. In particular we expect that the resonant approximation
can be invoked to build a rigorous theory of nonlinear response \cite{Bulgakov19b}.
The BICs in photonic systems have already found important applications in
enhanced optical absorption \cite{Zhang15}, surface enhanced Raman spectroscopy \cite{Romano18}, lasing \cite{Kodigala17},
and sensors \cite{Romano18a}. We speculate that analytic results are of importance for a further insight into
localization of light as well as the concurrent phenomenon of collapsing Fano resonance.
\section{Introduction}
The spatial topology of the Universe is
one of the unresolved problems in cosmology.
From the recent cosmic microwave background radiation data,
the density fraction of the curvature is estimated as
$\Omega_k = 0.000 \pm 0.005$ ($95\%$, {\it Planck} TT+lowP+lensing+BAO)
\cite{Ade:2015xua}.
Because of the observational error,
it is not possible to determine the spatial topology
from the data at the current stage.
Some other efforts have been made in the inflation models in the closed/open universe
\cite{Ellis:2002we,Ellis:2003qz,Labrana:2013oca,Bucher:1994gb,White:2014aua}.
The investigation of primordial density perturbation shows that
the peculiar predictions of those models are beyond the resolution of
the current observational data.
Therefore, one needs to consider other ways
to probe the background spatial topology,
for example, the investigation of gravitating localized objects
in different topologies.
The pure closed/open ($S_3/H_3$) spatial topology is achieved by a constant matter field
with the equation of state,
\begin{align}\label{eom0}
p=-\frac{1}{3}\rho = {\rm constant},
\end{align}
where $\rho>0$ for $S_3$ and $\rho<0$ for $H_3$.
The resulting metric is well known as
\begin{align}\label{metric0}
ds^2 = \mp dt^2 +\frac{dr^2}{1-kr^2/R_0^2} +r^2d\Omega_2^2,
\end{align}
where $k=+1/-1$ represents $S_3/H_3$,
and $\rho =\pm 3/(8\pi R_0^2)$.
For $S_3$, the ranges of the radial coordinate,
$0 \leq r \leq R_0$ and $r \geq R_0$, are considered separately.
(We shall call the former $S_3$-I and the latter $S_3$-II.)
For $S_3$-II, we take $g_{00}=+1$ to consider only one time coordinate.
The metric \eqref{metric0} is the only solution to Einstein's equation
with the matter of Eq.~\eqref{eom0}.
There is no additional mass term unlike in vacuum
which admits the flat Minkowski space
as the massless limit of the Schwarzschild spacetime.
In order to achieve a nontrivial structure such as a black hole in $S_3/H_3$,
other type of matter than Eq.~\eqref{eom0} needs to be introduced.
Then, the $S_3/H_3$ nature will be exposed only at some place of space
while a nontrivial geometry is formed elsewhere.
For the nontrivial geometrical structure
that admits the inherent $S_3$/$H_3$ topology,
the static fluid configuration with the equation of state $p(r) = -\rho(r)/3$
was recently studied in Ref.~\cite{Cho:2016kpf}.
It was found that
there are a black-hole solution ($S_3$-I, $S_3$-II, $H_3$),
a nonstatic cosmological solution ($S_3$-II, $H_3$),
and a singular static solution ($H_3$).
The nontrivial geometries of these three types of solutions
are sourced by fluid.
At some region of space,
the signature of the $S_3/H_3$ topology appears
(near the equator for $S_3$-I, near the center for $S_3$-II,
and at the asymptotic region for $H_3$).
In this sense, we interpret the nontrivial geometrical configuration
as a gravitating object formed in the $S_3/H_3$ background spatial topology.
This object can be considered as a large fluid object
which is produced in a global universe,
or a local compact object which is produced in a local $S_3$/$H_3$ space.
In this paper, we consider the same static fluid in Ref.~\cite{Cho:2016kpf}
with the electric field in spherical symmetry.
If there is only the electric field,
the spacetime is described by the Reisner-Norstr\"om solution.
If we add the constant matter of Eq.~\eqref{eom0} to the electric field,
there is no consistent static solution to the Einstein's equation.
Therefore, as in Ref.~\cite{Cho:2016kpf} we consider the fluid of $p(r) = -\rho(r)/3$.
The mixture of electric field and fluid form the geometry, and we expect
that the $S_3/H_3$ topology due to fluid unveils at some region of space.
When the electric field is turned off,
the system reduces to the fluid-only case investigated in Ref.~\cite{Cho:2016kpf}.
There are some other works on the gravitating solutions for static fluids
(see e.g., Refs.~\cite{Bekenstein:1971ej,Sorkin:1981wd,Pesci:2006sb,Semiz:2008ny,Lake:2002bq,Bronnikov:2008ia,Cho:2017nhx}).
This paper is organized as follows.
In Sec. II, we introduce the model and field equations.
In Sec. III, we classify the solutions and discuss the spacetime structure.
In Sec. IV, we discuss the geodesic motions.
In Sec. V, we study the stability of the solutions.
In Sec. VI, we conclude.
\section{Model and field equations}
We consider the electric field and the perfect fluid in static state.
The static metric ansatz for spherical symmetry is given by
\begin{align}\label{metricfgr}
ds^2 = -f(r)dt^2 +g(r)dr^2 +r^2d\Omega_2^2.
\end{align}
The energy-momentum tensor for the fluid is given by
\begin{align}\label{emF}
T^\mu_\nu = {\rm diag}[-\rho(r), p(r),p(r),p(r)],
\end{align}
and we consider the equation of state which meets
the $S_3/H_3$ boundary condition,
\begin{align}\label{eos}
p(r)=-\frac{1}{3}\rho(r) .
\end{align}
The field-strength tensor for the electric field is given by
\begin{align}\label{Fmunu}
{\cal F}_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu .
\end{align}
We consider the static electric field only,
then the vector potential is given by
\begin{align}
A_\mu = [A_0(r),0,0,0].
\end{align}
Then the nonvanishing components of ${\cal F}_{\mu\nu}$ in Eq.~\eqref{Fmunu} are
\begin{align}\label{Ftr}
{\cal F}_{01} = -{\cal F}_{10} = E(r) = [f(r)A_0(r)]',
\end{align}
where $E(r)$ is the electric field,
and the prime denotes the derivative with respect to $r$.
The energy-momentum tensor for the electric field is given by
\begin{align}\label{emE}
{\cal T}^\mu_\nu
= {\cal F}^{\mu\alpha}{\cal F}_{\nu\alpha}
-\frac{1}{4}\delta^\mu_\nu {\cal F}_{\alpha\beta}{\cal F}^{\alpha\beta}
= \frac{E^2(r)}{2f(r)g(r)} {\rm diag}(-1,-1,1,1).
\end{align}
With the metric \eqref{metricfgr} and the energy-momentum tensors
\eqref{emF} and \eqref{emE},
the nonvanishing components of Einstein's equations,
$G^\mu_\nu = 8\pi(T^\mu_\nu +{\cal T}^\mu_\nu)$,
are
\begin{align}
G^0_0 &= -\frac1{r^2} + \frac{1}{r^2 g} - \frac{g'}{r g^2}
= - 8\pi \left[ \rho(r) +\frac{E^2(r)}{2f(r)g(r)} \right] ,\label{G00} \\
G^1_1 &= -\frac1{r^2} + \frac{1}{r^2 g}+ \frac{f'}{rfg}
= 8\pi \left[ p(r) -\frac{E^2(r)}{2f(r)g(r)} \right] , \label{G11} \\
G^2_2 &= G^3_3 = \frac{f'}{2rfg} - \frac{f'^2}{4f^2 g} - \frac{g'}{2rg^2} -
\frac{f' g'}{4f g^2} + \frac{f''}{2fg}
= 8\pi \left[ p(r) +\frac{E^2(r)}{2f(r)g(r)} \right] . \label{G22}
\end{align}
Since the fluid and the electric field are minimally coupled
only through gravity,
the conservation of the energy-momentum tensor is satisfied individually,
$\nabla_\mu T^{\mu\nu} = 0$ and $\nabla_\mu {\cal T}^{\mu\nu} = 0$,
which provide the field equations,
\begin{align}\label{TEeqn}
\rho' -\frac{f'}{f}\rho =0,
\qquad
\frac{3E^2}{fg} \left( \frac{E'}{E} -\frac{f'}{2f} -\frac{g'}{2g} +\frac{2}{r} \right) =0.
\end{align}
These field equations give solutions for fluid and electric field
in terms of the gravitational field,
\begin{align}\label{solrhoE}
\rho(r) = {\rm constant} \times f(r),
\qquad
E(r) = {\rm constant} \times \frac{\sqrt{f(r)g(r)}}{r^2}.
\end{align}
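The first integrals \eqref{solrhoE} hold for any smooth metric functions, which makes them easy to verify by finite differences. The sketch below (Python) uses an arbitrary, made-up pair $f$, $g$ and arbitrary constants, and checks the logarithmic-derivative forms $\rho'/\rho = f'/f$ and $E'/E = f'/2f + g'/2g - 2/r$ implied by the conservation laws.

```python
import math

# Arbitrary smooth test metric (not a solution of anything in particular):
f = lambda r: 1.0 + r**2
g = lambda r: 1.0 / (1.0 + 0.5 * r**2)
rho = lambda r: 2.3 * f(r)                          # rho = C1 * f, C1 arbitrary
E = lambda r: 0.9 * math.sqrt(f(r) * g(r)) / r**2   # E = C2 * sqrt(f g)/r^2

def d_log(h, r, eps=1e-6):
    """Central-difference logarithmic derivative h'/h."""
    return (h(r + eps) - h(r - eps)) / (2 * eps * h(r))

r = 1.7
res1 = d_log(rho, r) - d_log(f, r)                                   # fluid equation
res2 = d_log(E, r) - 0.5 * d_log(f, r) - 0.5 * d_log(g, r) + 2 / r   # Maxwell equation
print(abs(res1) < 1e-8, abs(res2) < 1e-8)  # True True
```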
\section{Classification of solutions}
With the solutions in Eq.~\eqref{solrhoE} and the equation of state \eqref{eos},
the Einstein equations \eqref{G00}-\eqref{G22} are solved,
\begin{align}
\rho(r) &= -\frac{3}{8\pi\alpha} \left\{ 1\mp \frac{2\alpha |\beta|}{r}
\left[ \beta(r^2+\alpha) \right]^{1/2}
+\frac{Q^2}{3}\left( \frac{1}{\alpha} +\frac{1}{2r^2} \right) \right\}, \label{rho}\\
f(r) &= \frac{\rho(r)}{\rho_c}, \qquad
g^{-1}(r) = -\frac{8\pi}{3} (r^2+\alpha) \rho(r), \label{g} \\
E(r) &= \frac{Q}{3r^2\left[ \beta(r^2+\alpha) \right]^{1/2}}, \label{E}
\end{align}
where $Q$ is the electric charge, $\alpha$ and $\beta$ are integration constants,
and $\rho_c = -9\beta/(8\pi)$.
The above solutions reduce to those of the fluid-only solutions in Ref.~\cite{Cho:2016kpf}
when $Q=0$, and to the Reissner-Nordstr\"om (RN) solution
when $\alpha \to\infty$ and $\beta \to 0$ with $\alpha\beta = {\rm finite} = M^{2/3}$.
In order to exhibit the spatial topology,
we transform the radial coordinate $r$ to $\chi$,
and use the metric
\begin{align}\label{metric2}
ds^2 = -f(\chi)dt^2 +g(\chi)d\chi^2 + R_0^2b^2(\chi)d\Omega_2^2,
\end{align}
where $b(\chi)$ is introduced in the subsections below.
We introduce a new parameter $R_0\equiv \sqrt{|\alpha|}$,
which is related to the curvature.
In addition,
we introduce another parameter $K \equiv 2R_0^2|\beta|^{3/2}$,
interpreted as a mass parameter
analogous to that of the fluid-only black hole investigated in Ref.~\cite{Cho:2016kpf}.
Depending on the signatures of $\alpha$ and $\beta$,
the solutions are classified into three categories.
Two of them meet the $S_3$ boundary condition,
and the other does the $H_3$ condition.
The classes are summarized in Table I.
When both the parameter $K$ and the charge $Q$ are turned off,
the metric reduces to that of the pure $S_3$/$H_3$ in Eq.~\eqref{metric0}.
\begin{table}
\begin{tabular}{|l||c|c|c|}
\hline
\quad Class & $\rho(\chi)$ & $f(\chi)$ & $g(\chi)$ \\ \hline\hline
\quad $S_3$-I $\quad$
& $\qquad \frac{3}{8\pi R_0^2} \left[ 1- K \cot\chi -\frac{Q^2}{6R_0^2} (1-\cot^2\chi) \right] \qquad$
& $\qquad \frac{\rho(\chi)}{\rho_c}, \quad (\rho_c>0) \qquad$
& $\qquad \frac{3}{8\pi \rho(\chi)} \qquad$ \\ \hline
\quad $S_3$-II
& $\frac{3}{8\pi R_0^2} \left[ 1 \mp K \tanh\chi -\frac{Q^2}{6R_0^2} (1+\tanh^2\chi) \right]$
& $\frac{\rho(\chi)}{\rho_c}, \quad (\rho_c<0)$
& $-\frac{3}{8\pi \rho(\chi)}$ \\ \hline
\quad $H_3$
& $-\frac{3}{8\pi R_0^2} \left[ 1 \mp K \coth\chi +\frac{Q^2}{6R_0^2} (1+\coth^2\chi) \right]$
& $\frac{\rho(\chi)}{\rho_c}, \quad (\rho_c<0)$
& $-\frac{3}{8\pi \rho(\chi)}$ \\
\hline
\end{tabular}
\caption{Classification of solutions.
The signature of $\rho_c$ is chosen so that $f(\chi)g(\chi)>0$.
}
\end{table}
\subsection{$S_3$-I}
This is the case of $\alpha<0$ and $\beta<0$.
The transformation is performed by
\begin{equation}\label{rS3I}
r = R_0b(\chi) = R_0\sin\chi
\quad (0\leq \chi \leq \pi,\; 0\leq r \leq R_0).
\end{equation}
Note that for a given value of $r$, $\chi$ is double valued.
The metric becomes
\begin{equation}\label{metricS3I}
ds^2 = -\frac{3}{8\pi R_0^2\rho_c} \left[ 1- K \cot\chi
-\frac{Q^2}{6R_0^2} (1-\cot^2\chi) \right] dt^2
+\frac{R_0^2}{1- K \cot\chi
-(Q^2/6R_0^2) (1-\cot^2\chi)} d\chi^2
+R_0^2\sin^2\chi d\Omega_2^2.
\end{equation}
Here, $\rho_c>0$.
This solution states that the fluid with the electric field strength in Eq.~\eqref{Ftr}
closes the space in a finite region $ 0\leq r \leq R_0$.
We believe that the fluid is responsible for this closure
since the same phenomenon occurs even in the fluid-only case in Ref.~\cite{Cho:2016kpf}.
Both of the Ricci scalar and the Kretschmann scalar diverge
at $\chi=0$ and $\pi$, i.e.,
there exist curvature singularities at both poles.
For the pure fluid case ($Q=0$) investigated in Ref.~\cite{Cho:2016kpf},
the background $S_3$ topology is exposed at the boundary around the equator
($\chi \approx\pi/2$, i.e., $r\approx R_0$),
\begin{align}
ds_3^2 \approx R_0^2d\chi^2 +R_0^2\sin^2\chi d\Omega_2^2.
\end{align}
With the electric field, however, there is a charge correction,
\begin{align}
ds_3^2 \approx \frac{R_0^2}{1-Q^2/(6R_0^2)}d\chi^2 +R_0^2\sin^2\chi d\Omega_2^2.
\end{align}
The location of the horizon is found from $g_{\chi\chi}^{-1}=0$,
\begin{equation}
\chi_h = \chi_\pm \equiv \cot^{-1} \left( \frac{3KR_0^2 \mp \sqrt{J_1}}{Q^2} \right),
\quad\mbox{where}\quad
J_1=9K^2R_0^4 -6Q^2R_0^2+Q^4.
\end{equation}
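The horizon condition is a quadratic equation in $\cot\chi$, so the roots above can be checked directly. A quick numerical sketch (Python, using the Fig. 1(i) parameters $K=0.9$, $Q=1$, $R_0=1$) confirms that the bracket in $\rho(\chi)$ of Table I vanishes at $\chi_\pm$:

```python
import math

# Parameters of the RN black-hole type example, Fig. 1(i) (S3-I class):
K, Q, R0 = 0.9, 1.0, 1.0

J1 = 9 * K**2 * R0**4 - 6 * Q**2 * R0**2 + Q**4
assert J1 > 0   # two distinct horizons exist

def bracket(chi):
    """Bracket of rho(chi) for S3-I (Table I); g^{-1} = 0 where it vanishes."""
    c = 1 / math.tan(chi)
    return 1 - K * c - Q**2 / (6 * R0**2) * (1 - c**2)

# cot^{-1}(y) = atan2(1, y), valued in (0, pi); chi_- is the inner horizon:
chi_m = math.atan2(1.0, (3 * K * R0**2 + math.sqrt(J1)) / Q**2)
chi_p = math.atan2(1.0, (3 * K * R0**2 - math.sqrt(J1)) / Q**2)

print(chi_m < chi_p, abs(bracket(chi_m)) < 1e-12, abs(bracket(chi_p)) < 1e-12)
# True True True
```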
Depending on the existence of the horizon, there are two types of solutions.
(See Fig. 1 for the graphical view of the metric function.)
(i) {\bf RN black-hole type solution}:
If $J_1>0$, there exist two horizons at $\chi_h=\chi_\pm$,
which coalesce when $J_1=0$.
This solution mimics the Reissner-Nordstr\"om geometry of the charged black hole.
The spacetime is regular at $\chi<\chi_-$ and $\chi>\chi_+$.
The singularity at the north pole ($\chi=0$) is inside the inner horizon,
and is not accessible by timelike observers, as in the RN black hole.
The singularity at the south pole ($\chi=\pi$) is naked,
but is not accessible by timelike observers either, as in the fluid black hole
investigated in Ref.~\cite{Cho:2016kpf}.
The geodesics are studied in the next section.
(ii) {\bf Naked singular solution}:
If $J_1<0$, there is no horizon.
Both singularities are naked, but neither of them are accessible.
\begin{figure*}[btph]
\begin{center}
\includegraphics[width=0.3\textwidth]{f-S3I.eps}
\end{center}
\caption{Plot of metric function $8\pi R_0^2 \rho(\chi)/3$ for $S_3$-I.
(i) RN black-hole type solution: $K=0.9$, $Q=1$, $R_0=1$.
There are two horizons between which the spacetime is nonstatic.
There exist two curvature singularities.
Neither of them is accessible except by radial null rays.
(ii) Naked singular solution: $K=5/9$, $Q=1$, $R_0=1$.
}
\end{figure*}
\subsection{$S_3$-II}
This is the case of $\alpha<0$, $\beta>0$.
The transformation is performed by
\begin{equation}\label{rS3II}
r=R_0b(\chi) =R_0\cosh\chi
\quad ( -\infty < \chi < \infty,\; r\geq R_0).
\end{equation}
Again, $\chi$ is double-valued for a given value of $r$.
The metric becomes
\begin{equation}\label{metricS3II}
ds^2 = -\frac{3}{8\pi R_0^2\rho_c} \left[ 1 \ominus\oplus K \tanh\chi
-\frac{Q^2}{6R_0^2} (1+\tanh^2\chi) \right] dt^2
+\frac{R_0^2}{-\left[ 1 \ominus\oplus K \tanh\chi
-(Q^2/6R_0^2) (1+\tanh^2\chi) \right]} d\chi^2
+R_0^2\cosh^2\chi d\Omega_2^2.
\end{equation}
Here, $\rho_c<0$.
The fluid curves the space in the opposite way to the $S_3$-I case;
the space is confined in the open region $r\geq R_0$.
The curvature is finite everywhere.
The location of the horizon is
\begin{equation}
\chi_h = \chi_\pm \equiv \tanh^{-1} \left( \frac{\ominus\oplus 3KR_0^2 \pm \sqrt{J_2}}{Q^2} \right),
\quad\mbox{where}\quad
J_2=9K^2R_0^4 +6Q^2R_0^2 -Q^4.
\end{equation}
(The $\pm$ roots are valid for both $\ominus$ and $\oplus$.)
There are four types of solutions. (See Fig. 2.)
Two of them are black-hole type solutions
(Schwarzschild and Reissner-Nordstr\"om types)
without a singularity,
and the others are regular and nonstatic solutions.
\vspace{12pt}
Let us consider the $\ominus$ solution.
\vspace{12pt}
If $J_2>0$, there are three types of solutions.
(i) {\bf RN black-hole type solution}:
For $Q^2 \geq 3(1+K)R_0^2$,
there are two horizons at $\chi_\pm$
and this is the RN black-hole type.
(ii) {\bf Schwarzschild black-hole type solution}:
For $3(1-K)R_0^2 < Q^2 < 3(1+K)R_0^2$,
there exists only one horizon.
Inside the horizon (the trapped region), $f(\chi), g(\chi) <0$ and $\rho>0$.
The spacetime is nonstatic in the trapped region, and static outside.
The structure is similar to that of the Schwarzschild black hole.
(iii) {\bf Nonstatic solution}:
For $Q^2 \leq 3(1-K)R_0^2$, there is no horizon and the spacetime is nonstatic everywhere.
This type of solution is special for $S_3$-II.
This is analogous to the solution in Eq.~\eqref{metric0}
describing the region $r\geq R_0$
in which the roles of the temporal and the radial coordinates are exchanged.
\vspace{12pt}
If $J_2<0$, there is one type of solution.
(iv) {\bf Regular solution}:
The spacetime is regular everywhere while $\rho<0$.
\vspace{12pt}
For the $\oplus$ solution,
the situation is the same as for the $\ominus$ solution
with $\chi \to -\chi$.
Therefore, out of four types (i)-(iv),
the only change is in (ii).
Now, the region of $\chi < \chi_h$ is static,
and the region of $\chi > \chi_h$ is nonstatic.
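The horizon count in these four cases can also be checked numerically without the closed-form inequalities: writing $x=\tanh\chi$, the $\ominus$ horizon condition $g_{\chi\chi}^{-1}=0$ becomes the quadratic $Q^2x^2+6KR_0^2x+Q^2-6R_0^2=0$, whose physical roots must lie in $(-1,1)$. A hypothetical root-counting sketch (parameter sets taken from Fig. 2):

```python
import math

def horizons_S3II(K, Q, R0):
    """Horizons of the S3-II ⊖ solution: roots x = tanh(chi) in (-1, 1) of
    Q^2 x^2 + 6 K R0^2 x + Q^2 - 6 R0^2 = 0 (the condition g_chichi^{-1} = 0)."""
    a, b, c = Q**2, 6.0 * K * R0**2, Q**2 - 6.0 * R0**2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []
    xs = [(-b + sgn * math.sqrt(disc)) / (2.0 * a) for sgn in (+1.0, -1.0)]
    return [math.atanh(x) for x in xs if -1.0 < x < 1.0]

# Parameter sets of Fig. 2
n_i = len(horizons_S3II(1.0, 1.0, 0.4))    # (i)  RN type: two horizons
n_ii = len(horizons_S3II(0.3, 1.0, 0.6))   # (ii) Schwarzschild type: one horizon
n_iii = len(horizons_S3II(0.1, 1.0, 0.4))  # (iii) nonstatic: no horizon
n_iv = len(horizons_S3II(0.3, 1.0, 1.0))   # (iv) regular: no horizon
```

For case (iv) the quadratic has real roots, but both fall outside $(-1,1)$, so no horizon forms even though $J_2>0$; the discriminant condition alone does not decide the horizon count.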
\begin{figure*}[btph]
\begin{center}
\includegraphics[width=0.3\textwidth]{f-S3II.eps}
\end{center}
\caption{Plot of metric function $-8\pi R_0^2 \rho(\chi)/3$ for $S_3$-II.
(i) RN black-hole type solution: $K=1$, $Q=1$, $R_0=0.4$.
(ii) Schwarzschild black-hole type solution: $K=0.3$, $Q=1$, $R_0=0.6$.
(iii) Nonstatic solution: $K=0.1$, $Q=1$, $R_0=0.4$.
(iv) Regular solution: $K=0.3$, $Q=1$, $R_0=1$.
}
\end{figure*}
\subsection{$H_3$}
This is the case of $\alpha>0$, $\beta>0$.
The transformation is performed by
\begin{equation}\label{rH3}
r= R_0b(\chi) =R_0\sinh\chi \quad (\chi \geq 0,\; r \geq 0),
\end{equation}
and the metric becomes
\begin{equation}\label{metricH3}
ds^2 = -\frac{3}{8\pi R_0^2(-\rho_c)} \left[ 1 \ominus\oplus K \coth\chi
+\frac{Q^2}{6R_0^2} (1+\coth^2\chi) \right] dt^2
+\frac{R_0^2}{1 \ominus\oplus K \coth\chi +(Q^2/6R_0^2) (1+\coth^2\chi)} d\chi^2
+R_0^2\sinh^2\chi d\Omega_2^2.
\end{equation}
Here, $\rho_c<0$.
The curvature diverges at $\chi=0$.
The location of the horizon is
\begin{equation}
\chi_h = \chi_\pm \equiv \coth^{-1} \left( \frac{\oplus\ominus 3KR_0^2 \mp \sqrt{J_3}}{Q^2} \right),
\quad\mbox{where}\quad
J_3=9K^2R_0^4 -6Q^2R_0^2 -Q^4.
\end{equation}
The solutions are classified as below. (See Fig. 3.)
\vspace{12pt}
For the $\ominus$ solution in Eq.~\eqref{metricH3},
there are three types of solutions for $J_3>0$.
\vspace{12pt}
(i) {\bf RN black-hole type solution}:
For $3(K-1)R_0^2 < Q^2 < 3KR_0^2$, there are two horizons at $\chi_\pm$
and this is the RN black-hole type.
(ii) {\bf dS-type solution}:
For $Q^2 \leq 3(K-1)R_0^2 $, there is only one horizon at $\chi_+$.
The spacetime is static inside the horizon, and nonstatic outside.
This is a de Sitter-like solution.
This solution is achieved when the electric charge $Q$ is small.
When $Q=0$, this corresponds to the cosmological solution
of the fluid-only case in Ref.~\cite{Cho:2016kpf}
for which the spacetime is nonstatic everywhere.
It was interpreted as a universe expanding from an initial singularity.
For the present case, however,
the horizon is formed due to the electric field
inside which the spacetime is static.
(iii) {\bf Naked singular solution}:
For $Q^2 \geq 3KR_0^2$, the solution is static everywhere,
but with a singularity at the center.
\vspace{12pt}
For the $\oplus$ solution in Eq.~\eqref{metricH3}, or for $J_3<0$,
there is no horizon, and the solution is singular static like (iii).
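The same root-counting check works here: writing $y=\coth\chi>1$, the $\ominus$ horizon condition becomes $Q^2y^2-6KR_0^2y+6R_0^2+Q^2=0$ with roots $y=(3KR_0^2\pm\sqrt{J_3})/Q^2$, and only roots with $y>1$ correspond to a real $\chi_h>0$. A short illustrative sketch with the parameters of Fig. 3:

```python
import math

def horizons_H3(K, Q, R0):
    """Horizon count of the ⊖ H3 solution: roots y = coth(chi) > 1 of
    Q^2 y^2 - 6 K R0^2 y + 6 R0^2 + Q^2 = 0, i.e. y = (3K R0^2 ± sqrt(J3))/Q^2."""
    J3 = 9.0 * K**2 * R0**4 - 6.0 * Q**2 * R0**2 - Q**4
    if J3 < 0:
        return 0
    ys = [(3.0 * K * R0**2 + sgn * math.sqrt(J3)) / Q**2 for sgn in (+1.0, -1.0)]
    return sum(1 for y in ys if y > 1.0)

n_rn_h3 = horizons_H3(1.0, 1.0, 1.0)    # (i)  RN type: two horizons
n_ds_h3 = horizons_H3(1.5, 1.0, 1.0)    # (ii) dS-type: one horizon
n_sing_h3 = horizons_H3(0.5, 1.0, 1.0)  # (iii) naked singular: none
```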
\begin{figure*}[btph]
\begin{center}
\includegraphics[width=0.3\textwidth]{f-H3.eps}
\end{center}
\caption{Plot of metric function $-8\pi R_0^2 \rho(\chi)/3$ for $H_3$.
(i) RN black-hole type solution (blue): $K=1$, $Q=1$, $R_0=1$.
(ii) dS-type solution (red): $K=1.5$, $Q=1$, $R_0=1$.
(iii) Naked singular solution: $K=0.5$, $Q=1$, $R_0=1$.
}
\end{figure*}
\subsection{Gauss' Law}
Let us discuss Gauss' law in the $\chi$ coordinate.
The field-strength tensor ${\cal F}_{\mu\nu}$ in Eq.~\eqref{Fmunu}
in the $r$ coordinate with the components \eqref{Ftr}
is transformed to ${\cal F}'_{\mu\nu}$ in the $\chi$ coordinate
with the nonzero components,
\begin{align}\label{Ftchi}
{\cal F}'_{t\chi} = -{\cal F}'_{\chi t} = E(\chi)
= \frac{Q}{3|\beta|^{1/2}R_0^2b^2(\chi)} .
\end{align}
The electric flux is then
\begin{align}
\Phi_E= \oint E\sqrt{g^{(2)}}d^2x
= \iint \frac{Q}{3|\beta|^{1/2}R_0^2b^2(\chi)}
\times R_0^2b^2(\chi) \sin\theta d\theta d\phi
= \frac{4\pi Q}{3|\beta|^{1/2}}
= \frac{4\pi Q}{\sqrt{8\pi |\rho_c|}},
\end{align}
where we used the relation $\rho_c = -9\beta/(8\pi)$.
Compared with Gauss' law in flat space,
the flux is modified by the fluid through the factor $1/\sqrt{8\pi |\rho_c|}$.
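The flux integral can be reproduced by brute-force numerical integration over the two-sphere; note that the $R_0^2b^2(\chi)$ factors cancel between $E$ and $\sqrt{g^{(2)}}$. The values of $Q$ and $|\rho_c|$ below are arbitrary and purely illustrative:

```python
import math

def electric_flux(Q, rho_c_abs, n=4000):
    """Numerically integrate Phi_E = ∮ E sqrt(g2) d^2x.  Since
    E = Q/(3 |beta|^{1/2} R0^2 b^2) and sqrt(g2) = R0^2 b^2 sin(theta),
    the factors R0^2 b^2 cancel; |beta| = 8 pi |rho_c| / 9."""
    beta_abs = 8.0 * math.pi * rho_c_abs / 9.0
    integrand = Q / (3.0 * math.sqrt(beta_abs))
    # midpoint rule in theta; the phi integral gives an exact factor 2*pi
    dtheta = math.pi / n
    theta_sum = sum(math.sin((i + 0.5) * dtheta) for i in range(n)) * dtheta
    return 2.0 * math.pi * integrand * theta_sum

rho_c_abs = 18.0 / (8.0 * math.pi)      # arbitrary illustrative value
phi_numeric = electric_flux(1.0, rho_c_abs)
phi_closed = 4.0 * math.pi * 1.0 / math.sqrt(8.0 * math.pi * rho_c_abs)
```

The numerical flux agrees with the closed form $4\pi Q/\sqrt{8\pi|\rho_c|}$ to the accuracy of the quadrature.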
\subsection{Mass}
In this section, let us discuss the mass of the black-hole solutions.
For the fluid-only case in Ref.~\cite{Cho:2016kpf},
it was shown that the horizon structure of the fluid black hole
is similar to that of the Schwarzschild black hole.
The parameters are related with the Schwarzschild mass $M$ as
\begin{align}\label{KM}
K = \left( \frac{R_0^2}{4M^2} -1 \right)^{-1/2},
\quad
\left( -\frac{R_0^2}{4M^2} +1 \right)^{-1/2},
\quad
\left( \frac{R_0^2}{4M^2} +1 \right)^{-1/2},
\end{align}
for the type $S_3$-I, $S_3$-II, and $H_3$, respectively.
For the $S_3$-I type,
there is an upper limit on the mass, $M \to R_0/2$ as $ K \to \infty$.
In this limit, the horizon approaches the equator of $S_3$, $\chi_h = \cot^{-1}(1/K) \to \pi/2$.
Other than the Schwarzschild mass, it is interesting to consider the Misner-Sharp mass ${\cal M}$
which can be used for black-hole thermodynamics \cite{Misner:1964je}.
We evaluate ${\cal M}$ in this work.
When the metric is given by
\begin{align}\label{metricMS}
ds^2 = h_{ab} dx^adx^b +r^2(x)d\Omega_2^2,
\end{align}
where $a,b=0,1$,
the Misner-Sharp mass is defined as
\begin{align}\label{MS}
{\cal M} = \frac{r}{2} (1-h^{ab} \partial_a r \partial_b r).
\end{align}
In the $\chi$ coordinate, we have $r=R_0b(\chi)$
and Eq.~\eqref{MS} becomes
\begin{align}
{\cal M}(\chi) = -\frac{4\pi R_0^3}{3s} \rho(\chi) b(\chi) [b'(\chi)]^2 +\frac{R_0}{2}b(\chi),
\end{align}
where $s$ is the sign of $\rho_c$
($s=+1$ for $S_3$-I, and $s=-1$ for the others).
The mass depends on the radial coordinate $\chi$.
For the fluid-only case ($Q=0$), the mass is
still $\chi$ dependent,
while one has ${\cal M}_{\rm Sch} = M$ for the ordinary Schwarzschild black hole.
For the fluid black-hole solutions, one can show with the aid of Eq.~\eqref{KM} that
the Misner-Sharp mass evaluated on the horizon
coincides with the Schwarzschild mass,
${\cal M}(\chi_h) = M$.
This indicates that the horizon structure of the fluid black hole
is the same as that of the Schwarzschild black hole.
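This coincidence is easy to verify numerically for the fluid-only $S_3$-I case; the sketch below picks an arbitrary Schwarzschild mass $M$, builds $K$ from Eq.~\eqref{KM}, and evaluates the Misner-Sharp mass on the horizon:

```python
import math

def misner_sharp_S3I(chi, K, R0, Q=0.0):
    """M(chi) = -(4 pi R0^3 / 3) rho b b'^2 + (R0/2) b for S3-I (s = +1),
    with b = sin(chi) and rho = (3/(8 pi R0^2)) F(chi)."""
    c = 1.0 / math.tan(chi)
    F = 1.0 - K * c - Q**2 / (6.0 * R0**2) * (1.0 - c**2)
    b, bp = math.sin(chi), math.cos(chi)
    rho = 3.0 * F / (8.0 * math.pi * R0**2)
    return -(4.0 * math.pi * R0**3 / 3.0) * rho * b * bp**2 + (R0 / 2.0) * b

R0, M = 1.0, 0.3                           # arbitrary mass below the limit R0/2
K = (R0**2 / (4.0 * M**2) - 1.0) ** -0.5   # Eq. (KM), S3-I branch
chi_h = math.atan2(1.0, 1.0 / K)           # fluid-only horizon: cot(chi_h) = 1/K
M_horizon = misner_sharp_S3I(chi_h, K, R0)
```

On the horizon $F$ vanishes and ${\cal M}(\chi_h)=(R_0/2)\sin\chi_h$, which reproduces $M$ exactly.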
For the ordinary RN black hole, the Misner-Sharp mass is
given by ${\cal M}_{\rm RN} =M-Q^2/(2r) = M-Q^2/[2R_0b(\chi)]$.
For the RN black-hole type solutions obtained in this work ($Q\neq 0$),
keeping the mass relation of $K$ in Eq.~\eqref{KM},
the Misner-Sharp mass evaluated on the horizons does not
coincide with that of the ordinary RN black hole,
${\cal M}(\chi_\pm) \neq {\cal M}_{\rm RN}(\chi_\pm)$.
Although the horizon structure of the fluid black hole ($Q=0$)
is the same as that of the ordinary one,
the thermodynamics must be very different because the off-horizon structure
is very different.
We shall study the thermodynamics using the Misner-Sharp mass
in a separate work including the charged case.
\section{Geodesics}
In this section, we discuss the geodesics of the solutions.
We focus mainly on the black-hole solutions.
For simplicity, we define a function,
\begin{align}\label{Fchi}
F(\chi) \equiv \frac{8\pi R_0^2}{3s}\rho(\chi).
\end{align}
The geodesic equations become
\begin{align}
&\mbox{$t$-eq. : }\quad
\frac{1}{F(\chi)} \frac{d}{d\lambda} \left[ F(\chi) \frac{dt}{d\lambda}\right]
=0 ,\label{S3teq}\\
&\mbox{$\phi$-eq. : }\quad
\frac{1}{b^2(\chi)} \frac{d}{d\lambda} \left[ b^2(\chi) \frac{d\phi}{d\lambda}\right]
=0.\label{S3phieq}
\end{align}
From Eqs.~\eqref{S3teq} and \eqref{S3phieq},
we denote the conserved quantities
$E$ (energy) and $L$ (angular momentum) as
\begin{align}
E \equiv F(\chi) \frac{dt}{d\lambda} = {\rm constant},
\qquad
L \equiv b^2(\chi) \frac{d\phi}{d\lambda} = {\rm constant}.
\end{align}
The $\chi$-equation can be derived from the metric as
\begin{align}\label{chieq}
g_{\mu\nu} \frac{dx^\mu}{d\lambda} \frac{dx^\nu}{d\lambda} = - \varepsilon,
\end{align}
where $\varepsilon = 0,1$ for null and timelike geodesics, respectively.
On the $\theta=\pi/2$ plane, Eq.~\eqref{chieq} becomes
\begin{align}
\frac{1}{2} \left( \frac{d\chi}{d\lambda} \right)^2 + V(\chi) = \frac{3E^2}{16\pi R_0^4|\rho_c|} \equiv \tilde{E}^2,
\end{align}
where the effective potential is given by
\begin{align}
V(\chi) = \frac{1}{2} F(\chi) \left[ \frac{L^2}{b^2(\chi)} +\frac{\varepsilon}{R_0^2} \right].
\end{align}
We summarize $V(\chi)$ in Table II.
The effective potential $V(\chi)$ of the black-hole type solutions
is plotted in Figs. 4-6.
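Since $F(\chi_h)=0$, the effective potential vanishes on both horizons for any $L$ and $\varepsilon$, and it is negative in the nonstatic region between them. A quick numerical check (illustrative only) for the $S_3$-I RN-type parameters of Fig. 4:

```python
import math

def V_S3I(chi, K, Q, R0, L=1.0, eps=1.0):
    """Effective potential V = (1/2) F(chi) [L^2/sin^2(chi) + eps/R0^2] for S3-I."""
    c = 1.0 / math.tan(chi)
    F = 1.0 - K * c - Q**2 / (6.0 * R0**2) * (1.0 - c**2)
    return 0.5 * F * (L**2 / math.sin(chi)**2 + eps / R0**2)

K, Q, R0 = 0.9, 1.0, 1.0
J1 = 9.0 * K**2 * R0**4 - 6.0 * Q**2 * R0**2 + Q**4
chi_m = math.atan2(1.0, (3.0 * K * R0**2 + math.sqrt(J1)) / Q**2)  # inner horizon
chi_p = math.atan2(1.0, (3.0 * K * R0**2 - math.sqrt(J1)) / Q**2)  # outer horizon
V_inner = V_S3I(chi_m, K, Q, R0)
V_outer = V_S3I(chi_p, K, Q, R0)
V_between = V_S3I(0.5 * (chi_m + chi_p), K, Q, R0)
```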
For the RN black-hole type solution of $S_3$-I,
the singularities at both poles are not accessible
except by the radial null geodesic.
For the fluid-only case in Ref.~\cite{Cho:2016kpf},
the one at the north pole inside the horizon was accessible
since the inner geometry was similar to that of the Schwarzschild black hole.
For the present case, however, it is not, because
the inner geometry is similar to that of the charged black hole.
The nonaccessibility to the naked singularity at the south pole
is similar to the fluid-only case.
The geodesic observer starting from the outer static region
falls into the inner static region passing the intermediate nonstatic region.
Afterwards, the observer bounces back to the nonstatic region and then
enters the outer static region.
This subsequent motion after the bounce proceeds
in another copy of the spacetime,
as in the usual RN geometry.
The geodesic as a whole is an oscillatory orbit
in the infinite tower of the RN spacetime.
For the RN black-hole type solution of $S_3$-II,
when the energy level ($\tilde E$) is low,
the oscillatory orbit is similar to
that of $S_3$-I.
When the energy level is increased,
the geodesic observer can reach the inner static region behind the inner horizon.
When the energy level is high enough,
the geodesic observer can escape to the asymptotic infinity in the static region.
The Schwarzschild black-hole type solution has
a geodesic structure similar to that of the usual Schwarzschild black hole.
When the energy level is low,
all the geodesic motions fall into the black hole.
However, $V(\chi)$ approaches a constant value as $\chi \to -\infty$.
For $H_3$, the singularity at the center is not accessible
except by the radial null geodesic,
which is different from the fluid-only case.
As in the $S_3$-I case,
this is due to the electric charge.
When the energy level is low,
the geodesic motion is oscillatory
as in $S_3$-I.
When the energy level is high,
the geodesic observer can reach the asymptotic infinity.
Another interesting solution is dS-type.
For this solution, the geodesics escape from the static region
crossing the de Sitter-like horizon and reach asymptotic infinity.
This is different from the pure de Sitter space
in which there can be a stable geodesic motion inside the horizon.
\begin{table}
\begin{tabular}{|l||c|c|}
\hline
\quad Class & $F(\chi)$ & $V(\chi)$ \\ \hline\hline
\quad $S_3$-I $\quad$
& $\qquad 1- K \cot\chi -(Q^2/6R_0^2) (1-\cot^2\chi) \qquad$
& $\qquad \frac{1}{2} [1- K \cot\chi -(Q^2/6R_0^2) (1-\cot^2\chi)] \left( \frac{L^2}{\sin^2\chi} +\frac{\varepsilon}{R_0^2} \right) \qquad$ \\ \hline
\quad $S_3$-II
& $-1 \pm K \tanh\chi +(Q^2/6R_0^2) (1+\tanh^2\chi) $
& $\frac{1}{2} [-1 \pm K \tanh\chi +(Q^2/6R_0^2) (1+\tanh^2\chi)] \left( \frac{L^2}{\cosh^2\chi} +\frac{\varepsilon}{R_0^2} \right)$ \\ \hline
\quad $H_3$
& $1 \mp K \coth\chi +(Q^2/6R_0^2) (1+\coth^2\chi)$
& $\frac{1}{2} [1 \mp K \coth\chi +(Q^2/6R_0^2) (1+\coth^2\chi)] \left( \frac{L^2}{\sinh^2\chi} +\frac{\varepsilon}{R_0^2} \right)$ \\
\hline
\end{tabular}
\caption{Effective potential $V(\chi)$}
\end{table}
\begin{figure*}[btph]
\begin{center}
\includegraphics[width=0.3\textwidth]{V_eff-S3I-i-RN.eps}
\end{center}
\caption{Plot of effective potential $V(\chi)$
for (i) RN black-hole type solution ($K=0.9$, $Q=1$, $R_0=1$)
for $S_3$-I.
[$L=0$ (blue), $L=1$ (red) for timelike and $L=1$ for null.]
The shape of the potential shows that the two singularities
are not accessible except by the radial ($L=0$) null geodesic.
The geodesic observers can get into the inner region of the black hole.
Then they bounce to the outer region in another copy of the spacetime,
as usual in the Reissner-Nordstr\"om geometry,
in which there exists an infinite tower of spacetimes.
}
\end{figure*}
\begin{figure*}[btph]
\begin{center}
\includegraphics[width=0.3\textwidth]{V_eff-S3II-RN.eps}
\includegraphics[width=0.3\textwidth]{V_eff-S3II-Sch.eps}\\
(i) \hspace{2in} (ii)\\
\end{center}
\caption{Plot of effective potential $V(\chi)$ for $S_3$-II.
[$L=0$ (blue), $L=4$ (red) for timelike.]
(i) RN black-hole type solution: $K=1$, $Q=1$, $R_0=0.4$, $L_{\rm null}=4$.
For the low energy level ($\tilde E$),
the geodesic motion is similar to that of $S_3$-I (i),
which oscillates in the infinite spacetime tower.
For the intermediate energy level,
the geodesic motion can reach the inner static region behind the inner horizon.
For the high energy level,
the geodesic motion can reach the asymptotic infinity at the outer static region.
(ii) Schwarzschild black-hole type solution: $K=0.3$, $Q=1$, $R_0=0.6$, $L_{\rm null}=4$.
The potential is similar to that of the usual Schwarzschild black hole.
For the low energy level, the geodesic motion
falls into the black hole.
}
\end{figure*}
\begin{figure*}[btph]
\begin{center}
\includegraphics[width=0.3\textwidth]{V_eff-H3-i-RN.eps}
\includegraphics[width=0.3\textwidth]{V_eff-H3-ii-dS.eps}\\
(i) \hspace{2in} (ii)
\end{center}
\caption{Plot of effective potential $V(\chi)$ for $H_3$.
[$L=0$ (blue), $L=1$ (red) for timelike.]
(i) RN black-hole type solution: $K=1$, $Q=1$, $R_0=1$, $L_{\rm null}=1$.
The central singularity is not accessible.
For the high energy level,
the geodesic motion can reach the asymptotic infinity.
(ii) dS-type solution: $K=1.5$, $Q=1$, $R_0=1$, $L_{\rm null}=0.2$.
There is no stable geodesic motion inside the de Sitter-like horizon.
All the geodesics escape from the static region crossing the horizon.
}
\end{figure*}
\section{Stability}
In this section, we study the stability of the solutions.
We introduce linear spherical scalar perturbations
with the metric ansatz,
\begin{align}
ds^2 = -f(t,\chi)dt^2 + g(t,\chi) d\chi^2 + R_0^2 b^2(\chi) d\Omega_2^2.
\end{align}
The metric perturbations are introduced as
\begin{align}
f(t,\chi) &= f_0(\chi) + \epsilon f_1(t,\chi), \label{p1}\\
g(t,\chi) &= R_0^2 \big[ g_0(\chi) + \epsilon g_1(t,\chi) \big], \label{p2}
\end{align}
where $\epsilon$ is a small parameter,
and the subscript $0$ stands for the background solutions
obtained in Sec. III.
Using the function $F(\chi) = 8\pi R_0^2\rho_0(\chi)/(3s)$
defined in Eq.~\eqref{Fchi},
where $\rho_0(\chi)$ is the background solution in Table I,
we have
\begin{align}
f_0(\chi) &= \frac{\rho_0(\chi)}{\rho_c} = \frac{3s}{8\pi R_0^2\rho_c}F(\chi), \\
g_0(\chi) &= \frac{1}{F(\chi)}.
\end{align}
The contravariant form of the energy-momentum tensor for fluid is written as
\begin{align}
T^{\mu\nu} = (\rho +p)u^\mu u^\nu + pg^{\mu\nu},
\end{align}
with the velocity four-vector
\begin{align}
u^\mu = \big[ u^0(t,\chi),u^1(t,\chi),0,0 \big].
\end{align}
For the fluid at hand, $p=-\rho/3$,
the perturbations for the energy density and the four-velocity are introduced by
\begin{align}
\rho(t,\chi) &= \rho_0(\chi) +\epsilon \rho_1(t,\chi), \label{p3}\\
u^0(t,\chi) &= u_0^0(\chi) +\epsilon u_1^0(t,\chi), \label{p4}\\
u^1(t,\chi) &= u_0^1(\chi) +\epsilon u_1^1(t,\chi). \label{p5}
\end{align}
We have $u_0^1(\chi)=0$ for the comoving background fluid.
From the normalization $u^\mu u_\mu=-1$, we have
$u_0^0(\chi)=1/\sqrt{f_0(\chi)}$ and
$u_1^0(t,\chi)=-f_1u_0^0/(2f_0) = -f_1/(2f_0^{3/2})$.
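The first-order expression $u_1^0=-f_1/(2f_0^{3/2})$ simply enforces the normalization $u^\mu u_\mu=-1$ at $O(\epsilon)$, since $u^1$ contributes to the norm only at $O(\epsilon^2)$. A numerical check with arbitrary illustrative values (not taken from the paper):

```python
import math

# Arbitrary illustrative background and perturbation values
f0, f1, eps = 1.7, 0.4, 1e-6

u0_0 = 1.0 / math.sqrt(f0)        # background: u_0^0 = 1/sqrt(f0)
u1_0 = -f1 / (2.0 * f0**1.5)      # first-order piece of u^0

f = f0 + eps * f1
u0 = u0_0 + eps * u1_0
norm = -f * u0**2                 # u^1 enters the norm only at O(eps^2)
residual = norm + 1.0             # should be O(eps^2), not O(eps)
```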
For the electric field, we introduce the simplest perturbation
along the radial direction only,
by which there is no magnetic field induced by the perturbation,
\begin{align}\label{p6}
{\cal F}'_{t\chi} = -{\cal F}'_{\chi t} = E(t,\chi) = E_0(\chi) + \epsilon E_1(t,\chi),
\end{align}
where $E_0(\chi)$ is given in Eq.~\eqref{Ftchi}.
Now we apply the perturbations \eqref{p1}, \eqref{p2}, and \eqref{p3}-\eqref{p6},
and expand the field equations in the first order of $\epsilon$.
From the $(0,1)$ component of the Einstein's equation, we get
\begin{align}
u_1^1(t,\chi) = -\sqrt{\frac{2\pi R_0^2\rho_c}{3}} \frac{\dot{g_1}b'F}{s^2b\sqrt{F}}.
\end{align}
Therefore, the perturbations of the four-vector, $u_1^0$ and $u_1^1$ in Eqs.~\eqref{p4} and \eqref{p5},
are expressed by the background functions and the metric perturbations.
There are seven equations in total for four perturbations,
$f_1$, $g_1$, $\rho_1$ and $E_1$;
three from Einstein's equation,
two from $\nabla_\mu T^{\mu\nu} =0$,
and two from $\nabla_\mu {\cal T}^{\mu\nu} =0$.
Four of them are independent equations.
After manipulating equations with
\begin{align}
f_1(t,\chi) &= e^{i\omega t} \psi(\chi),\\
g_1(t,\chi) &= e^{i\omega t} \varphi(\chi),
\end{align}
the equation for $\varphi(\chi)$ is decoupled as
\begin{align}\label{PE1}
-F^2\varphi''
-\left[ 3FF' + F^2 \left( 3\frac{b''}{b'} +s\frac{b}{b'} \right) \right] \varphi'
+\left[ \frac{\omega^2}{\sigma} -2FF''
-FF'\left( 4\frac{b''}{b'}-\frac{b'}{b} -s\frac{b}{b'} \right)
-2F^2 \left( \frac{b'''}{b'} -\frac{b'^2}{b^2} +s \frac{bb''}{b'^2} -s \right)
\right] \varphi =0,
\end{align}
where $\sigma \equiv 1/(8\pi R_0^4\rho_c s) = 1/(8\pi R_0^4|\rho_c|) >0$ for all classes.
The coefficients of the above equation depend only on the background functions $F(\chi)$ and $b(\chi)$.
By transforming the radial coordinate and the amplitude function as
\begin{align}
z = \int^\chi_0 \frac{d\chi}{\sqrt{2}F(\chi)}, \qquad
\Phi(z) = N \frac{F(\chi)b'(\chi)}{z} \varphi(\chi),
\end{align}
where $N$ is a normalization constant,
we get the perturbation equation in the nonrelativistic Schr\"odinger-type,
\begin{align}\label{PE2}
\left[ -\frac{1}{2}\frac{d^2}{dz^2} - \frac{1}{z}\frac{d}{dz}
+U(z) \right] \Phi(z) = -\frac{\omega^2}{\sigma} \Phi(z)
=-8\pi R_0^4|\rho_c| \omega^2 \Phi(z) \equiv \Omega \Phi(z).
\end{align}
The potential is given by
\begin{align}
U[z(\chi)] = F^2 \left[ -\frac{F''}{F} +\left( \frac{F'}{F} \right)^2
+\frac{F'}{F} \left( \frac{b''}{b'} +2\frac{b'}{b} +4s \right)
+2 \left( \frac{b''}{b'} \right)^2 +s
\right],
\end{align}
where we used $sb/b' = -b''/b'$, $b'''/b'=-s$, and $b''/b=-s$.
Since there always exists a positive eigenvalue $\Omega$ for any type of potential $U$,
i.e., $\omega^2 <0$,
this system is {\it unconditionally unstable}.
The stability story is very similar to the fluid-only case.
When perturbations are introduced to the static fluid,
the fluid becomes time dependent,
which drives the Universe to undergo the Friedmann expansion.
This type of instability does not necessarily mean
that the black-hole structure is destroyed.
Instead, the instability indicates that the background universe
undergoes expansion while the black-hole structure sustains.
When the perturbation of the electric field is considered,
the instability can be related to the destruction of the black-hole structure.
It is known that the Cauchy (inner) horizon of a charged black hole
is unstable and develops a singularity \cite{Gursel:1979zza}.
The perturbation introduced in this work may develop such an instability
in the RN black-hole type solution.
\section{Conclusions}
We investigated the gravitational field of static fluid plus electric field.
Both the fluid and the electric field
are sources of the gravitational field,
but they curve the spacetime in different ways.
By adopting the equation of state $p(r) = -\rho(r)/3$,
the fluid is responsible for the topology of the background space.
The spatial topology can be either closed ($S_3$) or open ($H_3$).
This background spatial topology is not manifest everywhere;
instead, its signature appears only in certain regions of the spacetime.
Based on the background topology,
there exist various types of solutions in three classes,
which we named $S_3$-I, $S_3$-II, and $H_3$.
The most interesting classes are $S_3$-I and $H_3$,
although the class $S_3$-II has the greatest variety of solutions.
The most interesting solutions are the black-hole solutions.
Due to the presence of the electric field,
the black-hole geometry mimics that of the Reissner-Nordstr\"om spacetime.
This type of black hole exists in both $S_3$ and $H_3$ spaces.
(There exists also a Schwarzschild-type black hole in $S_3$-II.)
The central singularity inside the black hole of this type of solution
is due to the electric source as well as the fluid source.
There is a naked singularity in $S_3$-I at the antipodal point
which is not accessible except by the radial null rays.
The formation of this singularity is caused by the fluid.
The geodesics of the Reissner-Nordstr\"om black-hole type solution
exhibit oscillatory orbits in the infinite tower of spacetimes
encountered in the usual Reissner-Nordstr\"om geometry.
All the solutions obtained in this paper are unconditionally unstable.
This is not surprising because the stability story is similar to
the fluid-only case in Ref.~\cite{Cho:2016kpf}.
The reason of the instability is that the static fluid becomes
unstable (time dependent) with small perturbations
and drives the background geometry to the Friedmann expansion.
In addition, there is an electric field, for which it is well known
that the pure charged black-hole solution (Reissner-Nordstr\"om geometry)
is unstable under perturbations.
The solutions investigated in this paper are
useful in studying the magnetic monopole in the closed/open space,
which is under investigation currently.
Usually, the exterior geometry of the magnetic monopole is the same as that of
the charged black hole (Reissner-Nordstr\"om geometry)
\cite{Gibbons:1990um,Cho:1975uz,Bais:1975gu,Yasskin:1975ag,Cordero:1976jc}.
Since we obtained the charged black-hole solution in $S_3$/$H_3$ with the aid of fluid,
it is very interesting to investigate the magnetic monopole in the presence of fluid.
It may give rise to insight about the monopole in the closed/open space.
The asymptotic geometry of this type of gauge monopole is worthwhile to investigate
and will be interesting to compare with the usual monopole geometry.
In addition, the removal of the singularity is also a very interesting issue.
For the usual case, the monopole field removes the singularity of the charged solution.
For this case, however, the formation of the singularity is caused
not only by the electric charge, but also by the fluid.
It is interesting to see if the monopole field can regularize the singular behavior of the fluid.
\acknowledgements
The author is grateful to Hyeong-Chan Kim and Gungwon Kang for useful discussions.
This work was supported by the grant from the National Research Foundation
funded by the Korean government, No. NRF-2017R1A2B4010738.
\section{Introduction}
The study of localized structures in $(2+1)D$ systems is relevant to the description of superconductivity phenomena, since they essentially take place in planes \cite{1, Laughlin}. The investigation of the Chern-Simons term in these systems has become more common due to the theoretical description of vortex structures \cite{Gosh}. Originally written in $(2+1)D$, the Chern-Simons term describes the electromagnetic field dynamics together with the usual Maxwell term \cite{Sales}. However, it is worth noticing that the Chern-Simons term, when compared to the Maxwell term, is dominant in regions distant from electromagnetic field sources \cite{Gosh, Sales}.
Theories with the Chern-Simons term have been vastly studied in the last decades \cite{horvathy,Cunha}. One remarkable feature of these theories is the fact that they have solutions with rotational symmetry. Such solutions represent localized and charged tubes of magnetic flux, and in this way they correspond to the description of ideal anyons, which are objects obeying fractional statistics \cite{1, Laughlin}. The Chern-Simons term is then believed to be linked to the fractional statistics of these particles, suggesting the existence of a connection between the Chern-Simons term and the superconductivity phenomenon at high temperatures \cite{Haldane, Zhang}.
An essential aspect of the study of two-dimensional models in $(2+1)D$ field theory is the existence of solitary-wave solutions, that is, solitonic solutions \cite{Gosh, Casana1}. Solitons in two or more dimensions are extensively studied and have received increasing attention, not only for fundamental reasons but also for their applicability \cite{Leblond, Christ, Eisenberg}. For instance, we notice that, in $(1+1)D$, soliton theory relies on the theory of completely integrable equations via the inverse scattering transform. The most common and relevant of these equations is the nonlinear Schrödinger equation. In higher dimensions, for example, $(2+1)D$, completely integrable equations are quite scarce \cite{Leblond, Morandotti, Fleischer}.
The interest in the study of $O(3)$-sigma models is due to its contribution to the description of the Heisenberg ferromagnetism \cite{Belavin}. The sigma model $O(3)$ in $(2+1)D$ is exactly integrable in the Bogomol'nyi limit \cite{Gosh} and the stability of these solitonic solutions is guaranteed by topological arguments \cite{Gosh, Cunha}. However, the solitons in this model can be expressed in terms of functions that are not scale invariant. As a consequence, the size of these solitons can change arbitrarily while they evolve in time without modifying their energy \cite{Gosh, Samoilenka, CLee}.
Essentially, there are various ways of breaking the scale invariance in this model \cite{Leese, KLee}; the construction of {\it Q-lumps} is an example where the scale invariance is broken by including a specific potential term in the sigma model \cite{KLee}. In that construction, the collapse of the soliton size is prevented by a rotation in the internal space of the field variables. The above-mentioned solitons have finite energy and are consequently time dependent, with a constant angular velocity \cite{Gosh}.
Recently, the study of so-called compacton-like solutions has attracted notable interest \cite{Casana, Bazeia3, Bazeia4}. Compactons are finite-wavelength solitons \cite{Rosenau}, and they have been the subject of numerous studies since models containing topological defects can be employed to describe particles and cosmological objects, such as cosmic strings \cite{Nielsen}. Another motivation for the expanding interest in investigating compact structures is that compact vortices and skyrmions are intrinsically connected to spintronics \cite{Jubert, Romming}. Additionally, applications of the study of compact structures appear in braneworld scenarios, and their properties are well discussed by Veras {\it et al.} in Ref. \cite{Veras}.
Theoretical models subject to hyperbolic potentials emerged some years ago in several areas of physics, such as the study of position-dependent-mass systems in quantum mechanics as an attempt to describe the dynamics of solid-state systems like abrupt heterojunctions and quantum dots \cite{CA1,CA2, Bastard}. Not very far from the quantum theories, hyperbolic potentials have also been used in the investigation of vortex solutions of the scalar field \cite{Bazeia1}. Furthermore, scalar fields subject to hyperbolic interactions can be applied to scalar and solvable black holes, for example, to find a regular configuration of non-charged black holes and the cosmological scalar field \cite{Bazeia2}. As discussed by Bazeia {\it et al.} in Refs. \cite{Bazeia1, Bazeia2}, this field falsifies the Wheeler conjecture \cite{Ruffini} and avoids the scalar ``no-hair" theorem \cite{Bizon}. As a consequence, classes of exact solutions appear, including new scalar black holes with hyperbolic potentials \cite{Bazeia2}.
In this work, we discuss the existence of solitons in the $O(3)$-sigma model minimally coupled to a gauge field governed by a Chern-Simons term. In Sec. II, we discuss the $O(3)$-sigma model with the Chern-Simons term subject to the hyperbolic self-dual potential. In Sec. III, we present how the dielectric constant modifies the vortex solutions of the model. Afterwards, we investigate the static vortex solutions of the model and present the respective numerical solutions. Finally, we summarize our results and discussions in Sec. IV.
\section{The minimal gauged $O(3)$ model}
To begin, we consider the Lagrangian
\begin{align}\label{lagrangian}
\mathcal{L}=\frac{1}{2}D_{\mu}\Phi\cdot D^{\mu}\Phi+\frac{\kappa}{4}\varepsilon^{\mu\nu\alpha}A_{\mu}F_{\nu\alpha}-\mathcal{U}(\Phi_3),
\end{align}
where we defined
\begin{align}
D_{\mu}\Phi=\partial_{\mu}\Phi+ A_{\mu}\hat{n}_{3}\times\Phi.
\end{align}
In the $O(3)$-sigma model, the scalar field $\Phi$ maps the $(2+1)D$ Minkowski space into the unit two-sphere $S^{2}$. In other words, $\Phi$ is a three-component vector that satisfies the constraint $\Phi\cdot\Phi=\phi_{1}^{2}+\phi_{2}^{2}+\phi_{3}^{2}=1$. Depending on the specific choice of the potential, the Lagrangian may be invariant under an iso-rotation around a preferred axis, in this case, $\hat{n}_{3}=(0,0,1)$. The $U(1)$ nature of the model can be easily seen in the following identity:
\begin{align}
D_{\mu}\Phi\cdot D^{\mu}\Phi=\vert(\partial_{\mu}+iA_{\mu})(\phi_{1}+i\phi_{2})\vert^{2}+\partial_{\mu}\phi_{3}\partial^{\mu}\phi_{3}.
\end{align}
To ensure a self-dual system, we consider a potential of the form
\begin{align}\label{potential}
\mathcal{U}(\Phi)=\frac{1}{2\kappa^{2}}\{\tanh^{2}[\xi(\Phi-\Phi_{0})\cdot \hat{n}_{3}]+\tanh^{2}[\xi(\Phi+\Phi_{0})\cdot \hat{n}_{3}]\},
\end{align}
where $\Phi_{0}$ is a constant vector and $\xi$ is a deformation parameter. We can notice that, in this case, the system has two vacuum states, given by $\phi_{0}=\pm\Phi_{0}\cdot\hat{n}_{3}$. See Fig. (\ref{fig1}).
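To make the double-minimum structure concrete, the short numerical sketch below (Python; the values of $\xi$, $\phi_{0}$ and $\kappa$ are illustrative choices, not those used in Fig. (\ref{fig1})) locates the minimum of the potential restricted to $\phi_{3}$. For large $\xi$ it sits exponentially close to $\phi_{3}=+\phi_{0}$, with a mirror minimum at $-\phi_{0}$ guaranteed by the parity of the potential.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative parameters (not taken from the figures)
xi, phi0, kappa = 10.0, 0.5, 1.0

def U(p3):
    """The hyperbolic potential restricted to the third field component."""
    return (np.tanh(xi*(p3 - phi0))**2 + np.tanh(xi*(p3 + phi0))**2)/(2.0*kappa**2)

# Search for the minimum on the positive side of the double well
res = minimize_scalar(U, bounds=(0.05, 0.95), method='bounded')
```

For $\xi=10$ the minimizer returns $\phi_{3}\simeq\phi_{0}$ with $U\simeq \tanh^{2}(2\xi\phi_{0})/2\kappa^{2}$, consistent with the two quasi-degenerate vacua sketched in Fig. (\ref{fig1}).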
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{potential.pdf}
\caption{Graphical behavior of the potential for several values of the deformation parameter.} \label{fig1}
\end{figure}
As a result, the $SO(2)$ ($U(1)$) symmetry is not spontaneously broken. In this case, it is also interesting to notice that the gauge field dynamics is governed solely by the Chern-Simons term.
At this point, we note that our metric is $\eta_{\mu\nu}=$diag$(+,-,-)$ with $\varepsilon^{012}=1$; $\mu$, $\nu=0,1,2$ and $i$, $j=1,2$. Therefore, the equations of motion for the Lagrangian (\ref{lagrangian}) are given by
\begin{align}\label{gauge}
\textbf{J}^{\mu}=-\Phi\times D^{\mu}\Phi,
\end{align}
with $\textbf{J}^{\mu}=-j^{\mu}\cdot\hat{n}_{3}$ and
\begin{align}\label{current_chern_simons}
j^{\mu}=\frac{\kappa}{2}\varepsilon^{\mu\nu\alpha}F_{\nu\alpha}.
\end{align}
For the scalar field, we have
\begin{align}\label{scalar}
D_{\mu}D^{\mu}\Phi=-\frac{\partial\mathcal{U}}{\partial\Phi}.
\end{align}
By replacing (\ref{gauge}) in (\ref{scalar}), we obtain
\begin{align}\label{euler}
D_{\mu}\textbf{J}^{\mu}=\Phi\times\frac{\partial\mathcal{U}}{\partial\Phi}.
\end{align}
The zeroth component of (\ref{current_chern_simons}) is the well-known Gauss law, which implies that a field configuration with magnetic flux $\Phi_{flux}$ carries a non-zero charge given by $Q=-\kappa\Phi_{flux}$.
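Explicitly, with $\varepsilon^{012}=1$, the time component of the current (\ref{current_chern_simons}) is proportional to the magnetic field $B=F_{12}$, and using the identification $\textbf{J}^{\mu}=-j^{\mu}\hat{n}_{3}$ of (\ref{gauge}) one recovers the charge-flux relation just quoted:

```latex
j^{0}=\frac{\kappa}{2}\varepsilon^{0ij}F_{ij}=\kappa F_{12}=\kappa B\,,
\qquad
Q=-\int d^{2}x\, j^{0}=-\kappa\int d^{2}x\, B=-\kappa\Phi_{flux}\,.
```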
As usual, the energy functional is obtained by integrating the component $T_{00}$ of the energy-momentum tensor over all space. In this way, we obtain the functional
\begin{align}
E=\frac{1}{2}\int d^{2}x \, \bigg[(D_{1}\Phi)^{2}+(D_{2}\Phi)^{2}+\frac{\kappa^{2}F_{12}^{2}}{\phi_{1}^{2}+\phi_{2}^{2}}+2\mathcal{U}\bigg].
\end{align}
Rearranging the energy functional, we obtain
\begin{align}\label{energy}
E=&\frac{1}{2}\int d^{2}x\, \bigg\{ (D_{i}\Phi\pm\varepsilon_{ij}\Phi\times D_{j}\Phi)^{2}+\frac{\kappa^{2}}{\phi_{1}^{2}+\phi_{2}^{2}}\bigg[ F_{12}\mp \sqrt{\frac{2\mathcal{U}(\phi_{1}^{2}+\phi_{2}^{2})}{\kappa^{2}}}\bigg]^{2}\bigg\}
\pm 4\pi\int d^{2}x \, \mathcal{Q}_{0}.
\end{align}
Now, we define the topological charge of the model as
\begin{align}
\mathcal{Q}_{\mu}=\frac{1}{8\pi}\varepsilon_{\mu\nu\lambda}\bigg[\Phi\cdot D^{\nu}\Phi\times D^{\lambda}\Phi+F^{\nu\lambda}\sqrt{\frac{2\kappa^{2}\mathcal{U}}{(\phi_{1}^{2}+\phi_{2}^{2})}}\bigg].
\end{align}
Following standard topological arguments, the energy (\ref{energy}) has a lower bound \cite{Bogomol'nyi, Atmaja}, i.e., we have
\begin{align}
E\geq 4\pi\int d^{2}x \, \mathcal{Q}_{0}.
\end{align}
At the saturation limit of the energy bound, the equations of motion of the model reduce to
\begin{align}
D_{i}\Phi\pm\varepsilon_{ij}\Phi\times D_{j}\Phi=0;
\end{align}
\begin{align}
F_{12}\mp \sqrt{\frac{2\mathcal{U}(\phi_{1}^{2}+\phi_{2}^{2})}{\kappa^{2}}}=0.
\end{align}
\subsection{Static vortex solutions}
First, in order to investigate the Bogomol'nyi equations numerically, we choose a rotationally symmetric ansatz for the variable field \cite{Sales}, that is,
\begin{align}\label{ansatz} \nonumber
& \phi_{1}(\rho, \theta)=\sin f(r)\cos N\theta; \\
& \phi_{2}(\rho, \theta)=\sin f(r)\sin N\theta; \\ \nonumber
& \phi_{3}(\rho, \theta)=\cos f(r)
\end{align}
and
\begin{align}
\textbf{A}(\rho, \theta)=-\frac{Na(r)}{\kappa r}\hat{e}_{\theta}.
\end{align}
Considering the self-dual hyperbolic potential, we rewrite the Bogomol'nyi equations as
\begin{align}\label{bogomol'nyi}
f'(r)=\pm N\frac{a+1}{r}\sin f(r);
\end{align}
\begin{align}\label{bogomol'nyi1}
a'(r)=\pm \frac{r}{N}\sqrt{(1-\cos^{2} f(r))\{\tanh^{2}[\xi(\cos f(r)+\phi_{0})]+\tanh^{2}[\xi(\cos f(r)-\phi_{0})]\}}.
\end{align}
Here $\phi_{0}$ is the vacuum expectation value, $f(r)$ is an arbitrary profile function, and $\rho=r\kappa$ is a dimensionless length. $N$ is the parameter responsible for defining the vorticity of the solutions.
By decoupling the previous equations, we find the equation to the variable field
\begin{align}\label{field}
f''(r)+\frac{f'(r)}{r}-\frac{f'(r)^{2}}{\tan f(r)}-
\sin^{2}f(r)\sqrt{\{\tanh^{2}[\xi(\cos f(r)-\phi_{0})]+\tanh^{2}[\xi(\cos f(r)+\phi_{0})]\}}=0.
\end{align}
It can be easily verified that the Bogomol'nyi equations (\ref{bogomol'nyi}-\ref{bogomol'nyi1}) satisfy the equations of motion (\ref{euler}). This is well discussed in the specialized literature; the interested reader may consult Refs. \cite{Gosh, Cunha, Sales}.
Now, we want to obtain the solutions of equations (\ref{bogomol'nyi}) and (\ref{bogomol'nyi1}). To guarantee that the field is not singular at the origin, we require the variable field in the vicinity of the origin to take the form:
\begin{align}
f(0)=n\pi, \, \, \, \, \, n \, \, \in \, \, \mathbb{N},
\end{align}
and then, regularity of the solutions at the origin requires the initial behaviour of the gauge field to be $a(0)=0$. Furthermore, the solutions are invariant under shifts of $f$ by $2\pi$. Therefore, it suffices to consider $f(0)=0$ and $f(0)=\pi$ for the topological and non-topological solutions, respectively. If we consider the latter condition, it is helpful to use the change of variable $f(r)=\pi+h(r)$. We also consider the lower sign of the Bogomol'nyi equations and take $N$ to be positive. For $h(r)\ll 1$, we assume that the model has solutions of the type
\begin{align}\label{b1}
h(r)=B_{0}r^{N}
\end{align}
and consequently, we obtain
\begin{align}\label{b11}
a(r)=-\frac{B_{0}}{N(N+2)}\sqrt{\tanh[\xi(1+\phi_{0})]^{2}+\tanh[\xi(1-\phi_{0})]^{2}}r^{N+2}+
\mathcal{O}(r^{3N+2}).
\end{align}
On the other hand, if we consider $f(0)=0$ and negative $N$, we have, in the vicinity of the origin,
\begin{align}\label{b2}
f(r)=\bar{B}_{0}r^{-N}
\end{align}
and thus, in this case, the solution for the gauge field is
\begin{align}\label{b22}
a(r)=-\frac{\bar{B}_{0}}{N(2-N)}\sqrt{\tanh[\xi(1+\phi_{0})]^{2}+\tanh[\xi(1-\phi_{0})]^{2}}r^{2-N}+\mathcal{O}(r^{2-3N}).
\end{align}
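As a consistency check, the near-origin series (\ref{b1})-(\ref{b11}) can be substituted back into the lower-sign Bogomol'nyi equations (\ref{bogomol'nyi})-(\ref{bogomol'nyi1}) and the residuals evaluated numerically. The sketch below (Python; the values of $N$, $\xi$, $\phi_{0}$ and $B_{0}$ are illustrative) shows that the relative residuals are of order $r^{2N}$ and vanish as $r\rightarrow 0$:

```python
import numpy as np

# Illustrative parameters (not taken from the figures)
N, xi, phi0, B0 = 1.0, 2.0, 0.5, 0.3
C = np.sqrt(np.tanh(xi*(1.0 + phi0))**2 + np.tanh(xi*(1.0 - phi0))**2)

def f_ser(r):                       # series (b1): f = pi + B0 r^N
    return np.pi + B0*r**N

def fp_ser(r):                      # analytic derivative of (b1)
    return N*B0*r**(N - 1.0)

def a_ser(r):                       # leading term of (b11)
    return -B0*C/(N*(N + 2.0))*r**(N + 2.0)

def ap_ser(r):                      # analytic derivative of (b11)
    return -B0*C/N*r**(N + 1.0)

def residuals(r):
    """Relative residuals of the lower-sign Bogomol'nyi equations."""
    f, a = f_ser(r), a_ser(r)
    r1 = fp_ser(r) + N*(a + 1.0)/r*np.sin(f)
    inside = (1.0 - np.cos(f)**2)*(np.tanh(xi*(np.cos(f) + phi0))**2
                                   + np.tanh(xi*(np.cos(f) - phi0))**2)
    r2 = ap_ser(r) + r/N*np.sqrt(inside)
    return abs(r1)/abs(fp_ser(r)), abs(r2)/abs(ap_ser(r))

rel = max(max(residuals(r)) for r in (1e-3, 1e-2))
```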
At infinity, there are two different asymptotic behaviours of the Bogomol'nyi equations. When $f(\infty)=\pi$ and $N$ is positive, $f(r)$ can again be conveniently written as $f(r)=\pi+h(r)$; then, to obtain localized-energy solutions, we should have $a(\infty)=-\eta_{1}$. In this way, we assume that
\begin{align}\label{b3}
h(r)=C_{\infty} r^{N(1-\eta_{1})}.
\end{align}
As a result, we have
\begin{align}\label{b33}
a(r)\simeq -\frac{C_{\infty}}{N[N(1-\eta_{1})+2]}\sqrt{\tanh[\xi(1+\phi_{0})]^{2}+\tanh[\xi(1-\phi_{0})]^{2}}r^{2N(1-\eta_{1})+2}-\eta_{1}.
\end{align}
To finish, we analyse the boundary conditions $f(\infty)=0$ and $a(\infty)=\eta_{2}$ with negative $N$, and we obtain
\begin{align}\label{b4}
f(r)=\bar{C}_{\infty}r^{N(1+\eta_{2})}.
\end{align}
In this way, we obtain the result
\begin{align}\label{b44}
a(r)\simeq -\frac{\bar{C}_{\infty}r^{N(1+\eta_{2})+2}}{N[N(1+\eta_{2})+2]}\sqrt{\frac{1}{2}\{\tanh[\xi(1+\phi_{0})]^{2}+\tanh[\xi(1-\phi_{0})]^{2}\}}+\eta_{2}.
\end{align}
Note that the parameter $\eta_{1}$ ($\eta_{2}$) indicates non-topological (topological) solutions.
\subsection{Numerical results}
From now on, we turn our attention to the numerical analysis of the Bogomol'nyi equations (\ref{bogomol'nyi}-\ref{bogomol'nyi1}). Initially, we investigate numerically the topological solutions with boundary conditions $f(0)=0$ and $f(r\rightarrow\infty)=\pi$ with $N=1$. To obtain the numerical solutions of the coupled Bogomol'nyi equations, we consider the asymptotic behaviours expressed by (\ref{b1}), (\ref{b11}), (\ref{b3}) and (\ref{b33}). In this way, we obtain the numerical results for $f(r)$, $a(r)$ and $B(r)$ shown, respectively, in Figs. (\ref{fig2}), (\ref{fig3}) and (\ref{fig4}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{topological_solution.pdf}
\vspace{-1cm}
\caption{Numerical solutions of topological vortices for several asymptotic values of the gauge field.}
\label{fig2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{gauge_topological.pdf}
\vspace{-1cm}
\caption{Behavior of the topological gauge field for different values of the parameter $\eta_{1}$.}
\label{fig3}
\end{figure}
We observe that the variable field and the gauge field behave similarly to the results presented in a related model with a non-minimal coupling \cite{Sales}. In this case, since the gauge field is driven exclusively by the Chern-Simons term and the model is subject to the hyperbolic potential, the topological solutions are obtained for $N=1$. The behaviour of the magnetic field for $N=1$ is shown in Fig. (\ref{fig4}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{B_topological.pdf}
\vspace{-1cm}
\caption{The magnitude of the magnetic fields $B$ as a function of $r$ for $N=1$.}
\label{fig4}
\end{figure}
For the non-topological solutions, we investigate numerically the existence of solutions with boundary conditions $f(0)=\pi$, $f(r\rightarrow\infty)=\pi$ and $N=-1$. With this in mind, in order to find the numerical solutions of the coupled Bogomol'nyi equations, we make use of the asymptotic behaviours expressed by (\ref{b2}), (\ref{b22}), (\ref{b4}) and (\ref{b44}), and thus obtain the numerical results displayed in Figs. (\ref{fig5}), (\ref{fig6}) and (\ref{fig7}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{nontopological_solution.pdf}
\vspace{-0.75cm}
\caption{Numerical solutions of nontopological vortices for several asymptotic values of the gauge field.}
\label{fig5}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{gauge_nontopological.pdf}
\vspace{-0.75cm}
\caption{Behaviour of the nontopological gauge field for different values of the parameter $\eta_{2}$.}
\label{fig6}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{B_nontopological.pdf}
\vspace{-1cm}
\caption{The magnitude of the magnetic fields $B$ as function of $r$ for $N=-1$.}
\label{fig7}
\end{figure}
We notice that, in the non-topological case, the behaviour of the magnetic field resembles that of the variable field for $N=-1$, with the respective values of the parameter $\eta_{2}$.
\section{The dielectric constant and the compacton-like vortex}
In this section, we focus our attention on how the dielectric constant modifies the vortex solutions of the variable field. To achieve this aim, we consider the Lagrangian
\begin{align}
\mathcal{L}=\frac{1}{2}D_{\mu}\Phi\cdot D^{\mu}\Phi+\frac{\kappa}{4}\omega(\Phi)\varepsilon^{\mu\nu\lambda}A_{\mu}F_{\nu\lambda}-\mathcal{U}(\Phi),
\end{align}
where $\omega(\Phi)$ is the function responsible for the dielectric response of the model.
Following arguments similar to those used in the previous section, we once more have the local current given by
\begin{align}
j^{\mu}=\frac{\kappa}{2}\omega(\Phi)\varepsilon^{\mu\nu\lambda}F_{\nu\lambda},
\end{align}
where $\textbf{J}^{\mu}=-j^{\mu}\cdot\hat{n}_{3}$.
The equation of motion of the model keeps the same form,
\begin{align}
D_{\mu}\textbf{J}^{\mu}= \Phi\times\frac{\partial\mathcal{U}}{\partial\Phi}.
\end{align}
We construct the energy functional and thus, we arrive again at the expression
\begin{align}
E=\frac{1}{2}\int d^{2}x \, \bigg\{(D_{1}\Phi)^{2}+(D_{2}\Phi)^{2}+\frac{\kappa^{2}\omega^{2}F_{12}^{2}}{\phi_{1}^{2}+\phi_{2}^{2}}+2\mathcal{U}\bigg\}.
\end{align}
Rearranging the functional, we obtain the expression
\begin{align}\label{E}
E=\frac{1}{2}\int d^{2}x \, \bigg\{(D_{i}\Phi\pm\varepsilon_{ij}\Phi\times D_{j}\Phi)^{2}+\frac{\kappa^{2}\omega^{2}}{\phi_{1}^{2}+\phi_{2}^{2}}\bigg(F_{12}\mp\sqrt{\frac{2\mathcal{U}(\phi_{1}^{2}+\phi_{2}^{2})}{\kappa^{2}\omega^{2}}}\bigg)^{2}\bigg\}+4\pi\int d^{2}x\, \mathcal{Q}_{0}.
\end{align}
We notice that the topological charge ``acquires" a factor of $\omega$ due to the addition of the dielectric function. Therefore, we have
\begin{align}
\mathcal{Q}_{\mu}=\frac{1}{8\pi}\varepsilon_{\mu\nu\lambda}\bigg[\Phi\cdot D^{\nu}\Phi\times D^{\lambda}\Phi+F^{\nu\lambda}\sqrt{\frac{2\kappa^{2}\omega^{2}\mathcal{U}}{\phi_{1}^{2}+\phi_{2}^{2}}}\bigg].
\end{align}
Once again we use the argumentation of the previous section and write the Bogomol'nyi equations as
\begin{align}
D_{i}\Phi\pm\varepsilon_{ij}\Phi\times D_{j}\Phi=0;
\end{align}
\begin{align}
F_{12}\mp\sqrt{\frac{2\mathcal{U}(\phi_{1}^{2}+\phi_{2}^{2})}{\kappa^{2}\omega^{2}}}=0.
\end{align}
Considering the ansatz (\ref{ansatz}), we obtain the coupled expression
\begin{align}
f'(r)=\pm N\frac{a+1}{r}\sin f(r)
\end{align}
and
\begin{align}
a'(r)=\pm \frac{r}{\omega N}\sqrt{(1-\cos^{2}f(r))\{\tanh^{2}[\xi(\cos f(r)-\phi_{0})]+\tanh^{2}[\xi(\cos f(r)+\phi_{0})]\}}.
\end{align}
Decoupling the previous equations, we obtain
\begin{align}\label{field1}
f''(r)+\frac{f'(r)}{r}-\frac{f'(r)^{2}}{\tan f(r)}-\frac{1}{\omega}\sin ^{2}f(r)\sqrt{\{\tanh^{2}[\xi(\cos f(r)-\phi_{0})]+\tanh^{2}[\xi(\cos f(r)+\phi_{0})]\}}=0.
\end{align}
Equation (\ref{field1}) reduces to equation (\ref{field}) when the parameter $\omega=1$. From now on, we focus on understanding how the dielectric constant modifies the solutions of the variable field. To do this, we initiate the numerical study of equation (\ref{field1}) using the boundary conditions mentioned above.
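The decoupling leading to (\ref{field1}) can also be verified symbolically: differentiating the first Bogomol'nyi equation and eliminating $a'(r)$ with the second reproduces (\ref{field1}), including the $1/\omega$ factor. A short check (Python/SymPy; the lower sign is taken and $\sin f>0$ is assumed) reads:

```python
import sympy as sp

r, N, w, xi, p0 = sp.symbols('r N omega xi phi0', positive=True)
f = sp.Function('f')(r)
a = sp.Function('a')(r)

T = sp.sqrt(sp.tanh(xi*(sp.cos(f) - p0))**2 + sp.tanh(xi*(sp.cos(f) + p0))**2)

fp = -N*(a + 1)/r*sp.sin(f)          # lower-sign first Bogomol'nyi equation
ap = -r/(w*N)*sp.sin(f)*T            # lower-sign second equation, sin f > 0

# f'' with a'(r) and f'(r) eliminated via the first-order equations
fpp = sp.diff(fp, r).subs({a.diff(r): ap, f.diff(r): fp})

# Residual of the decoupled equation (1/tan f written as cos f/sin f)
lhs = sp.simplify(fpp + fp/r - fp**2*sp.cos(f)/sp.sin(f) - sp.sin(f)**2*T/w)
```

The residual `lhs` vanishes identically, confirming the decoupled second-order equation.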
\subsection{Numerical study of dielectric constant}
First, in order to investigate the influence of the dielectric constant on the solutions of the variable field $f(r)$, we use a numerical approach. We consider the numerical solution presented in Fig. (\ref{fig2}) with $a(\infty)=-0.35$ and vary the parameter $\omega$. As a result, we observe that the kink-like solutions acquire the character of compacton-like solutions as the parameter $\omega$ (the dielectric constant) of the model is varied. In other words, as the gauge field or Chern-Simons contribution decreases, the kink-like solutions become compacton-like solutions, as shown in Fig. (\ref{fig8}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{compactonlike_solution.pdf}
\vspace{-1cm}
\caption{From the kink-like topological solutions to compacton-like solutions.}
\label{fig8}
\end{figure}
Interesting results also emerge when we numerically vary $\omega$ in the non-topological solutions presented in Fig. (\ref{fig5}). For this result, we consider $\eta_2=0.25$ and vary the parameter $\omega$. The outcome of these numerical computations is shown in Fig. (\ref{fig9}). In this case, we notice that when we increase the gauge field contribution, the solitonic solutions approach kink-like solutions for the variable field.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{kink.pdf}
\vspace{-1cm}
\caption{Nontopological kink-like solutions for the variable field.}
\label{fig9}
\end{figure}
\section{Concluding remarks}
In this work, we investigated the vortex solutions of the $O(3)$-sigma model with the gauge field coupled minimally through the Chern-Simons term. As a result, we note that, due to the ansatz and the constraint of the $O(3)$ model, for the boundary conditions $f(0)=0$ and $f(\infty)=\pi$ we have the so-called non-topological solutions, since the topological charge is given by a non-integer parameter. On the other hand, for the boundary conditions $f(0)=\pi$ and $f(\infty)=\pi$ we have topological solutions, whose topological charge is described by an integer. As a consequence, the topological vortices of the model have a quantized energy given by $\mathcal{E}= 4\pi\vert N\vert$, a charge $Q=-\kappa\Phi_{flux}$ and a magnetic flux $\Phi_{flux}=2\pi N\eta_1$. In particular, we note that although the energy is quantized, the flux of the model is not. We also observe that for a fixed $N$ there exists a family of solutions for different values of $\eta_{1}$, which implies that there are infinitely many degenerate solutions in the model. The so-called non-topological solutions are characterized by an energy $\mathcal{E}=4\pi N\eta_2$, a flux $\Phi_{flux}=2\pi N\eta_2$ and a charge $Q=-\kappa\Phi_{flux}$. Finally, after modifying the model by the introduction of a dielectric constant, we note that kink-like (topological) solutions can be carried into compacton-like solutions by a numerical variation of the dielectric constant of the model. In other words, as the contribution of the gauge field decreases, the contribution of the Chern-Simons term also decreases, so that kink-like solutions become compacton-like solutions, as shown in Fig. (\ref{fig8}).
In the same way, we observe that when we increase the contribution of the gauge field in the non-topological solutions, the solitonic solutions approach kink-like solutions for the variable field and, therefore, the non-topological solutions tend to topological solutions with $\vert Q_{top}\vert=N$.
\section*{Acknowledgments}
\hspace{0.5cm}The authors would like to thank the Funda\c{c}\~{a}o Cearense de Apoio ao Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (FUNCAP), the Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior (CAPES), and the Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq) for financial support.
\section{Introduction}
The latest cosmological observations suggest that the primordial curvature perturbation $\mathcal{R}$ is nearly Gaussian \cite{Planck:2018jri, Planck:2019kim}, which is in agreement with the predictions of the single-field slow-roll inflation \cite{Maldacena:2002vr}. For future cosmological surveys, any detection of deviations from the Gaussian statistics will reveal important information about the primordial universe. Therefore, primordial non-Gaussianity is one of the major targets in upcoming experiments \cite{Meerburg:2019qqi, Achucarro:2022qrl}. So far theoretical studies on primordial non-Gaussianity have been mostly performed in the framework of cosmological perturbation theory, where cosmological correlators, such as the bispectrum and trispectrum, are examined by using perturbative approaches (see \cite{Chen:2010xka} for a review). These correlation functions provide a good description of the non-Gaussian statistics when the perturbation is small.
Beyond the $n$-point correlators, the probability distribution of curvature perturbations may in principle carry significant phenomenological information that is not captured by perturbative approaches (see \cite{Chen:2018uul, Chen:2018brw, Panagopoulos:2019ail, Panagopoulos:2020sxp, Celoria:2021vjw, Cohen:2021jbo, Ezquiaga:2019ftu, Figueroa:2020jkf, Pattison:2021oen, Achucarro:2021pdh, Ahmadi:2022lsm} for recent discussions). In particular, perturbation theory breaks down at the tail of the distribution, where fluctuations are large and rare. Therefore, if deviations from Gaussianity appear in the tail, cosmological correlators are no longer appropriate for describing the physics of large and rare fluctuations. The nontrivial behaviour at the tail of the distribution, dubbed {\it the non-Gaussian tail}, has been actively discussed in connection with primordial black holes (PBHs). It is speculated that black holes may have formed in the early Universe due to the presence of large curvature fluctuations \cite{Carr:1974nx}, and these objects could serve as an inspiring tool to probe the unknown physics of the very early universe \cite{Khlopov:2008qy, Sasaki:2018dmp, Carr:2016drx}.
Since the large fluctuations are highly sensitive to the tail of the probability distribution function, non-Gaussianities in the initial condition from inflation may play a crucial role in determining the abundance of PBHs \cite{Franciolini:2018vbk, Biagetti:2018pjj, Atal:2018neu, Passaglia:2018ixg, Atal:2019cdz, Atal:2019erb, Meng:2022ixx,Taoso:2021uvl, Biagetti:2021eep, Davies:2021loj, Hooshangi:2022lao, DeLuca:2022rfz}.
In most previous studies, both the perturbative and non-perturbative regimes are governed by the same physics, and thus the non-Gaussian tails always take an exponential form. For instance, ultra-slow-roll (USR) inflation is one of the well-studied models in which the inflaton undergoes a period when the slow-roll (SR) conditions are violated and an $\mathcal{O}(1)$ local non-Gaussianity is generated \cite{Namjoo:2012aa, Chen:2013aj, Chen:2013eea, Cai:2017bxr}.
In the simplest model of USR inflation, it was shown that the resulting non-Gaussian tail is determined by the amplitude of the bispectrum, represented by the parameter $f_{\rm NL}$ defined in the perturbative analysis.
In principle, however, one should go beyond perturbation theory to properly understand the behaviour of the tail of the distribution.
For example, if there are non-perturbative effects that affect only large fluctuations, it is very likely that one gets a large non-Gaussian tail even if the perturbative non-Gaussianity parameters like $f_{\rm NL}$, $g_{\rm NL}$, $\tau_{\rm NL}$, etc., are small. In other words, the behaviour of the distribution function at $|\mathcal{R}|\ll 1$ and that at $|\mathcal{R}|\sim 1$ can be uncorrelated. Interestingly, this opens up the hypothetical possibility that the distribution is perfectly Gaussian when $|\mathcal{R}|$ is small, while having a highly non-Gaussian bump at the tail, as sketched in Figure \ref{fig:pdf1}:
\begin{figure}[htb]
\centering
\includegraphics[scale = 0.6]{Imagine_PDF.pdf}
\caption{A speculative probability distribution of the curvature perturbation $\mathcal{R}$, where the dotted orange curve is the Gaussian distribution which fits well at small $\mathcal{R}$ but not at the tail.}
\label{fig:pdf1}
\end{figure}
Recently we proposed a specific realization of the above phenomenon \cite{Cai:2021zsp}, where an upward step in the inflaton potential generates a highly non-Gaussian tail while the perturbative non-Gaussianity remains small.
In this paper, we extend the study of the aforementioned interesting phenomenon to more general examples by performing comprehensive and detailed analyses. In particular, we consider canonical single-field inflation with a tiny upward step in the potential and investigate both the USR-SR and SR-SR transitions. In both cases, by deriving the exact background solutions, we identify the crucial role of {\it off-attractor phase trajectories} around the upward-step transition. Afterwards, we perform the analysis of perturbations via the $\delta N$ formalism, which clearly reveals, with a quantitative description, the essential role of the off-attractor trajectories around the step. To be concrete, they affect small and large fluctuations differently. The small fluctuations are still in the perturbative regime, which can be analyzed using the conventional approach for the power spectrum, the bispectrum, etc. However, the large fluctuations are {\it non-perturbatively} affected by the step. That is to say, the inflaton may not be able to move forward if the field fluctuation is so large that the forward momentum becomes too small to climb the step. This effect gives rise to a non-perturbative expression for the curvature perturbation, which consequently leads to a nontrivial shape of the tail of the probability distribution. As an application, we also estimate how this novel non-Gaussian tail affects the abundance of PBHs.
The paper is organized as follows. In Section 2, we discuss the background dynamics of two models of upward-step transition in canonical single-field inflation, with a focus on the off-attractor trajectories.
Then we present detailed analyses of perturbations using the $\delta N$ formalism. Section 3 comprises the major part of this paper, in which we discuss the nontrivial tail behaviour of the probability distribution of the curvature perturbation. Applying the results of Section 2 to a concrete example, we demonstrate how a non-perturbative effect during inflation (here, a tiny upward step) can lead to a significant modification of the non-Gaussian tail. In Section 4, we consider the corresponding implications for the formation of PBHs. We analyse the curvature perturbation in a more realistic model with an inflection-point potential, and compute the mass fraction of PBHs in the presence of the non-perturbative non-Gaussian tail. We also briefly discuss another mechanism for PBH formation, namely the formation of PBHs due to the trapping of the inflaton field at the local minimum of the potential at the step. We conclude in Section 5 with an outlook on future directions.
Throughout the whole paper, we use the natural units $c = \hbar = 1$, and the reduced Planck mass $M_{\rm pl}^2 = 1/8\pi G$.
\section{Upward-step transitions}
\label{sec:ustep}
In this section we investigate the background evolution of canonical single-field inflation with an upward step in the potential, and then derive the $\delta N$ formula based on the background solutions.
First, we perform a detailed analysis of a simple model of USR-SR upward-step transition. We adopt non-attractor initial conditions and evolve them through the step to the slow-roll phase after the transition.
With this concrete example, we illustrate the nontrivial effect of the off-attractor phase-space trajectories around the step.
After that, we extend the analysis to an SR-SR upward-step transition model, where non-attractor trajectories also play a crucial role because of quantum fluctuations, even though the fiducial, classical trajectory is a slow-roll one before the step.
\subsection{USR-SR transition}\label{sec:USR_SR}
\subsubsection{A recap of USR inflation}
Let us begin with a brief review of the simplest USR inflation, where the inflaton rolls on a constant potential $V(\phi)=V_0$,
\begin{equation}
\ddot\phi+3H\dot\phi=0\,.
\end{equation}
Unlike the conventional slow-roll inflation in which the inflaton is in an attractor phase with $\dot\phi$ being determined by the value of $\phi$, there is no attractor phase in this case and hence $\dot\phi$ remains to be a dynamical degree of freedom, independent of $\phi$.
Using the number of e-folds $n$ as the time variable via $dn = Hdt$, the background equations can be approximately written as
\begin{equation}
\frac{d^2\phi}{d n^2} + 3\frac{d\phi}{d n} = 0 ~, ~~~~~~ 3H^2\simeq V_0 ~.
\end{equation}
We denote the initial values at $n=n_i$ by $\phi(n_i)$ and $\pi(n_i)=d\phi/dn(n_i)$.
Then, we have
\begin{align}
\phi(n) &= \phi(n_i) + \frac{\pi(n_i)}{3}(1-e^{-3(n-n_i)}) ~, \label{phi solution 1 in variable N}\\
\pi(n) &=\dv{\phi}{n} = \pi(n_i) e^{-3(n-n_i)} ~. \label{pi solution 1 in variable N}
\end{align}
Note that we have the following relation,
\begin{equation} \label{pic}
\pi(n)+3\phi(n)= \pi(n_i) + 3\phi(n_i).
\end{equation}
Fixing the final values as $\phi(n_c)=\phi_c$ and $\pi(n_c)=\pi_c$, the number of e-folds $N_{USR}\equiv n_c-n_i$
may be expressed as a function of the initial values at $n=n_i$, which we denote by $\phi_i$ and $\pi_i$,
\begin{equation} \label{USR-N}
N_{USR}(\phi_i, \pi_i; \phi_c,\pi_c) =\frac{1}{3} \log\[\frac{\pi_i}{\pi_c}\]= \frac{1}{3} \log\[ \frac{\pi_i}{\pi_i+3(\phi_i-\phi_c)}\].
\end{equation}
In the simplest case, the USR stage is assumed to end at $\phi=\phi_c$ independently of $\pi_c$, which is expected to be extremely small since $\pi$ has decayed exponentially. Therefore, when we take the variation of $N_{USR}$ to evaluate the curvature perturbation, we fix only $\phi_c$.
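The formula (\ref{USR-N}) can be checked numerically by integrating the USR background equation and recording the moment at which $\phi$ crosses $\phi_c$. In the sketch below (Python; the initial data are illustrative), the event time agrees with (\ref{USR-N}) to the integrator tolerance:

```python
import numpy as np
from scipy.integrate import solve_ivp

phi_i, pi_i, phi_c = 0.0, -1.0, -0.2   # illustrative USR initial data

def rhs(n, y):                         # d^2 phi/dn^2 + 3 dphi/dn = 0
    phi, pi = y
    return [pi, -3.0*pi]

def hit_step(n, y):                    # event: phi reaches phi_c
    return y[0] - phi_c
hit_step.terminal = True

sol = solve_ivp(rhs, [0.0, 10.0], [phi_i, pi_i],
                events=hit_step, rtol=1e-10, atol=1e-12)
N_numeric = sol.t_events[0][0]
N_formula = np.log(pi_i/(pi_i + 3.0*(phi_i - phi_c)))/3.0
```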
As a result, the expansion history of the Universe depends on the initial conditions of both $\phi_i$ and $\pi_i$, which differs from the single-clock behaviour of slow-roll inflation where $\phi$ is the only independent degree of freedom. Because of this, the USR inflation generates $\mathcal{O}(1)$ local non-Gaussianity with $f_{\rm NL} =5/2$, and this simple model has been known as an example of canonical single-field models that violates Maldacena's consistency relation \cite{Namjoo:2012aa}\footnote{The same dominant mode of primordial perturbations was obtained as well in the single-field matter bounce cosmology \cite{Cai:2009fn, Quintin:2015rta, Li:2016xjb}.}.
Meanwhile, it has been well recognized that this USR evolution is allowed only for a limited period of time, and for a complete description of inflation one still needs a subsequent slow-roll (SR) phase \cite{Cai:2016ngx, Cai:2017bxr}. Therefore, it is important to take the {\it USR-SR transition} into account in realistic model building. The consequences of both smooth and sharp transitions were investigated in detail in \cite{Cai:2017bxr}, which established that the final size of the local non-Gaussianity is very sensitive to the transition process. When the transition is smooth, the USR result $f_{\rm NL} =5/2$ is completely erased in the subsequent evolution, while it remains non-vanishing for sharp transitions, with the maximum value of the local $f_{\rm NL}$ being $5/2$. These results were further confirmed by the derivation of a generalized consistency relation using the background field method \cite{Suyama:2021adn}.
\begin{figure}[htb]
\centering
\includegraphics[scale = 0.3]{two_stage_non_attractor_with_step.pdf}
\caption{A sketch plot of the potential for an upward step transition. }
\label{fig:step}
\end{figure}
\subsubsection{USR-SR transition with an upward step}
\label{sec:usr-sr}
Now we consider a USR-SR transition with an upward step at $\phi=\phi_c$, whose potential is illustrated in Figure \ref{fig:step}. Initially, the inflaton is in the USR phase at $\phi>\phi_c$, moving in the negative $\phi$ direction. The USR phase ends at $\phi=\phi_c$, where the inflaton climbs the upward step $\Delta V$ at the cost of part of its kinetic energy, assuming that the velocity $\pi_c$ is large enough to allow the upward jump. After the step, there is a short non-attractor relaxation stage, after which the inflaton eventually reaches the slow-roll attractor. For simplicity we assume that inflation ends at $\phi=\phi_f$.
\paragraph{Background dynamics}
With the above picture in mind, let us derive the background solution of $\phi(t)$ from the USR stage to the final SR stage.
At $\phi_c$, the inflaton velocity drops instantaneously as the field climbs the step. Energy conservation determines the velocity right after the step to be
\begin{equation} \label{picpid}
\pi_d = - \sqrt{\pi_c^2-6\frac{\Delta V}{V}} ~,
\end{equation}
which serves as the initial condition for the subsequent stage. The minus sign comes from the assumption that the inflaton evolves toward the negative $\phi$ direction. For later convenience, we define the ratio of these two velocities as follows,
\begin{equation} \label{g}
g\equiv \frac{\pi_d}{\pi_c}\quad (0< g <1) ~.
\end{equation}
This will be the key parameter in our analysis below. When $\Delta V \rightarrow 0$, we have $g \rightarrow 1$, and the system goes back to the cases discussed in Ref.~\cite{Cai:2017bxr}. For $g\ll 1$, the effects of the step at the transition become significant.
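Quantitatively, approximating $3H^{2}\simeq V_{0}$ in (\ref{picpid}) gives $\pi_{d}^{2}=\pi_{c}^{2}-6\Delta V/V_{0}$, so there is a critical velocity $|\pi_{c}|_{\rm crit}=\sqrt{6\Delta V/V_{0}}$ below which the inflaton cannot climb the step. A minimal numerical sketch (Python; the values of $V_{0}$ and $\Delta V$ are illustrative):

```python
import numpy as np

V0, dV = 1.0, 1.0e-3                 # illustrative V_0 and step height
pi_crit = np.sqrt(6.0*dV/V0)         # smallest |pi_c| that clears the step

def g_of(pi_c):
    """Velocity ratio g = pi_d/pi_c right after the step."""
    if abs(pi_c) <= pi_crit:
        raise ValueError("inflaton cannot climb the step")
    return np.sqrt(pi_c**2 - 6.0*dV/V0)/abs(pi_c)
```

For $|\pi_c|$ only slightly above the critical value, $g\ll 1$ and the step effects are large; for $|\pi_c|\gg|\pi_c|_{\rm crit}$, $g\rightarrow 1$ and the step becomes irrelevant.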
Afterwards, the inflaton spends a few e-folds in a non-attractor relaxation phase before it enters the slow-roll attractor stage. To describe this transition process, we focus on small inflaton displacements and parameterise the potential as
\begin{align}
V(\phi) = (V_0 + \Delta V)\[1+ \sqrt{2\epsilon_V}(\phi -\phi_c)
+ \frac{1}{2}\eta_V(\phi -\phi_c)^2 \] ; \quad \phi<\phi_c \,,
\label{eq:potential}
\end{align}
where $\epsilon_V$ and $\eta_V$ are the slow-roll parameters defined at $\phi_c$. For simplicity, we assume the above form is valid until the inflaton reaches $\phi_f$. Then, the equation of motion for $\phi$ is given by
\begin{align}
\frac{d^{2} \phi}{d n^{2}}+3 \frac{d \phi}{dn}+3
\sqrt{2 \epsilon_{V}}+3 \eta_{V}\left(\phi-\phi_{c}\right) = 0 ~,
\label{background equation in slow-roll}
\end{align}
where we have approximated the Hubble parameter to be a constant given by $3H^2=V_0+\Delta V$.
Imposing the initial conditions $\phi=\phi_c$ and $\pi=\pi_d$ at $n=n_c$, we obtain the analytical solution
\begin{align}
\phi (n) &=\frac{s-3-h}{s(s-3)} \pi_{d} e^{\frac{1}{2}(s-3) (n-n_c)}-\frac{s+3+h}{s(s+3)} \pi_{d}
e^{-\frac{1}{2}(s+3) (n-n_c)}+\frac{2 \pi_{d} h}{s^{2}-9}+\phi_{c} ~,\label{phi solution of slow-roll}\\
\pi (n) &=\frac{d\phi(n)}{dn}=\frac{s-3-h}{2s} \pi_{d} e^{\frac{1}{2}(s-3) (n-n_c)}+\frac{s+3+h}{2s} \pi_{d}
e^{-\frac{1}{2}(s+3) (n-n_c)} ~,\label{pi solution of slow-roll}
\end{align}
where we have defined
\begin{equation} \label{hs}
h\equiv 6\sqrt{2\epsilon_V}/\pi_d , ~~~~~~~~ s \equiv 3\sqrt{1-4\eta_V/3} \simeq 3-2\eta_V .
\end{equation}
Note that $h$ is negative and can be written in terms of the ratio of the two field velocities as $h=-6\pi_f/\pi_d$, where $\pi_f\simeq-\sqrt{2\epsilon_V}$ is
the velocity at $\phi=\phi_f$. The $h$ parameter can be any negative real number. For $h=-6$, we have $\pi_f=\pi_d$ and thus the inflaton evolution joins the slow-roll attractor immediately after the step. For other values of $h$, a relaxation phase is expected before the inflaton reaches the slow-roll attractor. From the solution given by \eqref{phi solution of slow-roll} and \eqref{pi solution of slow-roll}, we observe that there exists an equality,
\begin{equation}
\pi(n) + \frac{s+3}{2} \phi(n)
= \pi_d \left[\(1-\frac{h}{s-3}\) e^{\frac{1}{2}(s-3) (n-n_c)} +\frac{h}{s-3}\right]+\frac{s+3}{2} \phi_c\,,
\end{equation}
which is an analog of the relation \eqref{pic} for the USR inflation.
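The closed-form solution \eqref{phi solution of slow-roll}--\eqref{pi solution of slow-roll} can be checked numerically against the equation of motion \eqref{background equation in slow-roll}; the sketch below uses illustrative parameter values:

```python
import math

# hypothetical parameters for a numerical consistency check
eps_V, eta_V = 1e-4, 0.05
pi_d, phi_c, n_c = -0.05, 0.0, 0.0
h = 6.0 * math.sqrt(2.0 * eps_V) / pi_d
s = 3.0 * math.sqrt(1.0 - 4.0 * eta_V / 3.0)

def phi(n):
    a = (s - 3 - h) / (s * (s - 3)) * pi_d * math.exp(0.5 * (s - 3) * (n - n_c))
    b = (s + 3 + h) / (s * (s + 3)) * pi_d * math.exp(-0.5 * (s + 3) * (n - n_c))
    return a - b + 2 * pi_d * h / (s**2 - 9) + phi_c

def pi(n):
    a = (s - 3 - h) / (2 * s) * pi_d * math.exp(0.5 * (s - 3) * (n - n_c))
    b = (s + 3 + h) / (2 * s) * pi_d * math.exp(-0.5 * (s + 3) * (n - n_c))
    return a + b

# initial conditions at the step: phi(n_c) = phi_c and pi(n_c) = pi_d
assert abs(phi(n_c) - phi_c) < 1e-10 and abs(pi(n_c) - pi_d) < 1e-12

# equation of motion: d(pi)/dn + 3 pi + 3 sqrt(2 eps_V) + 3 eta_V (phi - phi_c) = 0
dn = 1e-6
for n in (0.5, 1.0, 3.0):
    dpi = (pi(n + dn) - pi(n - dn)) / (2 * dn)
    residual = dpi + 3 * pi(n) + 3 * math.sqrt(2 * eps_V) + 3 * eta_V * (phi(n) - phi_c)
    assert abs(residual) < 1e-6
```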
\paragraph{Number of e-folds}
With the exact background solution, it is possible to obtain an expression for the number of e-folds, which plays the central role in the perturbation analysis in Subsection \ref{sec:deltaN}. Let $n_f$ be the number of e-folds at $\phi_f$, $\phi(n_f)=\phi_f$. We consider $n_f-n_c\gg1$ so that the second term in \eqref{phi solution of slow-roll} can be neglected.
Accordingly, we have
\begin{align}
\phi_f \simeq \frac{s-3-h}{s(s-3)} \pi_{d} e^{\frac{1}{2}(s-3) (n_f-n_c)} +
\frac{2 \pi_{d} h}{s^{2}-9}+\phi_{c} ~.
\label{phi_e approximate solution}
\end{align}
This gives the number of e-folds after the step $N_{SR}\equiv n_f-n_c$ as a function of $\pi_d$,
\begin{align}
N_{SR}(\pi_d; \phi_f) &= \frac{2}{s-3} \log \left\{ \frac{s(s-3)}{s-3-h}\left[ \frac{(\phi_f - \phi_c)}{\pi_d} - \frac{2h}{s^2 - 9}\right] \right\} \nn\\
&\simeq \frac{1}{\eta_V} \log\( -2\eta_V \pi_d-6\sqrt{2\epsilon_V} \) + {\rm const.} \,,
\label{SR-N}
\end{align}
where we have separated the piece that does not depend on $\pi_d$ as a constant, as it does not contribute to the $\delta N$ computation. Note that although $N_{SR}$ is also a function of $\phi_c$, this dependence is fixed since $\phi_c$ is a parameter of the model. In other words, instead of $\phi$ as the independent variable to be varied in the computation of $\delta N$, the independent variable is $\pi_d$ in the present case.
Adding $N_{USR}$ in \eqref{USR-N} and $N_{SR}$ in \eqref{SR-N} together, we obtain the total number of e-folds from the USR phase to the end of inflation specified by $\phi=\phi_f$.
Removing the index $i$ from the initial values $\phi_i$ and $\pi_i$, we obtain
\begin{equation}
\begin{aligned}
N(\phi,\pi;\phi_f) &= N_{USR}+N_{SR}
=\frac{1}{\eta_V} \log\( -2\eta_V \pi_d-6\sqrt{2\epsilon_V} \)
+ \frac{1}{3}\log \left( \frac{\pi}{\pi_c} \right) +\text{const.} ~,
\label{total N of USR to SR}
\end{aligned}
\end{equation}
where $\pi_d$ is a function of $\pi_c$ given by \eqref{picpid}, and $\pi_c$ is a function
of the initial $\phi$ and $\pi$ as $\pi_c=\pi+3(\phi-\phi_c)$ through \eqref{pic}.
The constant part refers to the terms without $\phi$ and $\pi$ dependence.
To illustrate the effects of the step, it is helpful to make a comparison with the case without a step. When $\Delta V = 0$, or equivalently $g=\pi_d/\pi_c=1$, we have $\pi_c=\pi_d$ and the above analysis simply goes back to the smooth transition (for $h=0$) or the sharp transition (for $h<0$) discussed in Ref. \cite{Cai:2017bxr}.
The effect of a step is to render $\pi_d$ a nonlinear function of $\pi_c$ given by \eqref{picpid}. As we shall see below, this gives rise to a distinctive non-perturbative feature in the distribution function of the curvature perturbation.
Finally, let us consider the number of e-folds when the initial value of $\phi$ is after the step, $\phi<\phi_c$. In this case we have the standard slow-roll result,
\begin{equation}
\begin{aligned}
N(\phi;\phi_f) &= \frac{1}{\eta_V} \log \[ 1+\frac{\eta_V}{\sqrt{2\epsilon_V}}(\phi_f-\phi) \] +\text{const.} ~,
\label{relaxationN}
\end{aligned}
\end{equation}
where the validity of the expansion in potential \eqref{eq:potential} implies
the condition $\eta_V(\phi_f-\phi)/\sqrt{2\epsilon_V}\ll1$.
\subsubsection{Off-attractor trajectories}
\label{sec:off}
\begin{figure}[htb]
\centering
\includegraphics[scale =0.25]{phase_diagram_SR_USR_step_SR.pdf}
\caption{Off-attractor trajectories in the phase diagram $(\phi,\abs{\pi})$ of the SR-USR-SR transition. Slow-roll attractors are depicted by the red lines. The thick blue line describes the base trajectory, while the thin blue lines represent off-attractor trajectories.}
\label{fig:phase_diagram USR}
\end{figure}
In a more realistic model, an initial SR phase is expected before the USR stage. This corresponds to attaching an SR potential at $\phi>\phi_s$ to the right of the USR potential shown in Figure \ref{fig:step} (see also the potential in Figure \ref{fig:potential draft}). The model is similar to inflection-point inflation, which has been extensively discussed in the literature as one of the possible mechanisms for producing PBHs.
Now, let us consider the role of off-attractor trajectories at an initial SR stage prior to the USR stage. At first glance, it seems that the background evolution is fully determined by the value of $\phi$ because the universe must have already arrived at the attractor stage during the initial slow-roll stage. Then the momentum $\pi$ at the beginning of the USR stage, which is the one at the end of the first SR phase, is fixed by the attractor solution $\pi(n_s)=\pi_s$. As a result, although subsequent stages may have non-attractor stages, $\pi$ can still be expressed as a function of the inflaton $\phi$ like in the conventional slow-roll inflation.
Thus, one might conclude that there exists a unique phase-space trajectory, as shown by the thick blue line in Figure \ref{fig:phase_diagram USR}, just like in the conventional slow-roll inflation, and that the non-attractor nature of the trajectory at the intermediate stage would not affect the properties of the perturbations. Let us name this trajectory \textit{the base trajectory}.
A crucial point that has been missed in the above argument is the effect of \textit{off-attractor trajectories} near the end of the first SR stage.
These trajectories are always present when we consider possible deviations of $\phi$ and $\pi$. At the SR stage, they can normally be neglected, as any deviation from the attractor trajectory vanishes within a couple of e-folds and the SR evolution is quickly recovered.
But the situation may become different when there is a transition. In the SR-USR transition around $\phi_s$, quantum fluctuations may kick the trajectory onto an off-attractor one, and some of these off-attractor trajectories may not have enough time to converge to the slow-roll attractor before the USR phase starts. These trajectories, shown by the light blue curves in Figure \ref{fig:phase_diagram USR}, deviate from the base trajectory. Hence, we may have the field velocity $\pi(n_s) \neq \pi_s $ at $\phi_s$, which depends on the off-attractor field velocity $\pi$ at the initial SR stage.
Since $\pi(n_s)$ provides the initial condition for the subsequent stages, this $\pi$-dependence becomes a crucial factor for the whole evolution of the system. In Appendix \ref{App: pi-dependence before the step}, we provide detailed computations and clarify the $\pi$-dependence of the background solution by solving the full dynamics of the initial SR phase.
Evidently, the off-attractor behaviour around the transition plays an important role in the $\delta N$ analysis, as the background evolution cannot be fully determined by the base trajectory and the initial $\pi$-dependence should also be taken into account. Namely, when computing $\delta N$, it is crucial to include the $\pi$-dependence as well as the $\phi$-dependence.
Interestingly, we find that the above analysis of the off-attractor behaviour may also apply to transitions without an intermediate USR stage. We shall consider such a case, that is, an SR-SR transition with an upward step in Section \ref{sec:SR to SR}.
\subsubsection{Local non-Gaussianity from the $\delta N$ formalism}\label{sec:deltaN}
As has been mentioned previously, the original USR model with a constant potential is able to generate $\mathcal{O}(1)$ local non-Gaussianity of primordial curvature perturbations within the framework of single-field inflation.
There have been various methods to derive this result in the literature, such as the in-in formalism \cite{Namjoo:2012aa, Chen:2013aj, Chen:2013eea, Cai:2017bxr}, the background wave method \cite{Bravo:2017wyw}, and the operator product expansion \cite{Finelli:2017fml}.
Among them, the $\delta N$ formalism provides a simple and intuitive way.
This method is based on the separate universe assumption and successfully captures the non-linear evolution of perturbations on super-Hubble scales \cite{Salopek:1990jq, Sasaki:1995aw, Starobinsky:1986fxa, Sasaki:1998ug, Lyth:2004gb, Lee:2005bb, Lyth:2005fi}.
Previously this approach has also been used in the analysis of both smooth and sharp USR-SR transitions \cite{Cai:2017bxr}.
In this section, we apply the $\delta N$ computation to our two models to compute the local non-Gaussianity generated from the transition with an upward step.
First, we consider the USR-SR transition. At the USR stage, the inflaton fluctuation $\delta \phi$ behaves like a massless scalar with no interactions in pure de Sitter space, and its probability distribution is Gaussian.
For the perturbation mode which exits the Hubble radius during USR at $n=n_i$,
we can simply use the number of e-folds derived in \eqref{total N of USR to SR}, and get the following $\delta N$ formula from the definition:
\begin{align}
\delta N = N(\phi_i+\delta\phi, \pi_i+\delta\pi) - N(\phi_i, \pi_i)\simeq \frac{1}{\eta_V}\log \left[1 + \frac{2\eta_V\delta\pi_d}{6\sqrt{2\epsilon_V} + 2\eta_V \pi_d}\right]
- \frac{1}{3}\log \left[ 1 + \frac{3\delta \phi}{\pi_c} \right] ~, \label{exact_calR}
\end{align}
where we have neglected $\delta\pi$ due to its exponential decay on super-Hubble scales, and $\delta \pi_d$ is given by
\begin{align}
\delta \pi_d
=\pi_d\left[\sqrt{1+\frac{6}{g}\frac{\delta\phi}{\pi_d} + 9\left(\frac{\delta\phi}{\pi_d}\right)^2} - 1\right] ~.
\label{delta pi_d as a function of delta phi}
\end{align}
With $\mathcal{R}=\delta N$, the formula in \eqref{exact_calR} yields a non-perturbative mapping between the curvature perturbation $\mathcal{R}$ and the Gaussian field fluctuation $\delta\phi$.
This relation not only captures the case when $|\mathcal{R}|$ is small, but also remains valid for the tail of the probability distribution with rare but large $|\mathcal{R}|$.
We shall elaborate on the implications of the non-Gaussian tail in the next section. Before that, let us first perform the analysis in the perturbative regime $|\mathcal{R}|\ll 1$.
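Before expanding, one can verify \eqref{exact_calR} numerically by comparing it with a direct finite difference of \eqref{total N of USR to SR}; the parameter values below are illustrative, chosen so that all logarithms are well defined:

```python
import math

# hypothetical parameter choices for a consistency check of the delta-N formula
eps_V, eta_V = 2e-7, 0.05
phi_c, dV_over_V = 0.0, 0.00125
phi_i, pi_i, dphi = 0.01, -0.13, 1e-3

def N(phi, pi):
    """e-folds from (phi, pi) in the USR phase to the end of inflation,
    following the total-N expression, up to an irrelevant constant."""
    pi_c = pi + 3.0 * (phi - phi_c)
    pi_d = -math.sqrt(pi_c**2 - 6.0 * dV_over_V)
    return (math.log(-2.0 * eta_V * pi_d - 6.0 * math.sqrt(2.0 * eps_V)) / eta_V
            + math.log(pi / pi_c) / 3.0)

# direct delta N from varying phi at fixed pi
dN_direct = N(phi_i + dphi, pi_i) - N(phi_i, pi_i)

# closed-form delta N, using delta pi_d as a function of delta phi
pi_c = pi_i + 3.0 * (phi_i - phi_c)
pi_d = -math.sqrt(pi_c**2 - 6.0 * dV_over_V)
g = pi_d / pi_c
dpi_d = pi_d * (math.sqrt(1.0 + 6.0 / g * dphi / pi_d + 9.0 * (dphi / pi_d)**2) - 1.0)
dN_formula = (math.log(1.0 + 2.0 * eta_V * dpi_d
                       / (6.0 * math.sqrt(2.0 * eps_V) + 2.0 * eta_V * pi_d)) / eta_V
              - math.log(1.0 + 3.0 * dphi / pi_c) / 3.0)

assert abs(dN_direct - dN_formula) < 1e-10
```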
When perturbation theory is valid, we can expand the formula in $\delta \phi$. Up to second order we get
\begin{align} \label{deltaNpert}
\mathcal{R} \simeq \[ \frac{6}{g^2 (h+2\eta_V)} - 1 \] \frac{\delta\phi}{\pi_c} + \[ 9\frac{ \left(g^2-1\right) h+2 \eta_V\left(g^2-2\right)}{g^4 (h+2 \eta_V)^2}+\frac{3}{2}\] \(\frac{\delta\phi}{\pi_c}\)^2 .
\end{align}
The linear term corresponds to the Gaussian part
\begin{align}
\mathcal{R}_{\text{G}} \equiv \[ \frac{6}{g^2 (h+2\eta_V)} - 1 \] \frac{\delta\phi}{\pi_c} \simeq \(\frac{6}{g^2 h} - 1 \) \frac{\delta\phi}{\pi_c}\,.
\label{Rgauss}
\end{align}
This part of the contribution determines the amplitude of the power spectrum, the evaluation of which is deferred to Section \ref{matching}.
Notice that the second term in the bracket is the size of $\mathcal{R}$ at the end of the USR phase, while the first term is related to the step transition.
If $g^2|h|\gg 6$, the upward step does not change the USR result; but for $g^2|h|\ll 6$ the effect of the step becomes dominant. This can happen when the step size is big enough to significantly reduce the kinetic energy of the inflaton, {\it i.e.} $g^2\ll 1$. In that limit we have $\mathcal{R}_{\text{G}} \simeq \delta\phi/(g\sqrt{2\epsilon_V})$.
Following the standard treatment, the leading order local non-Gaussianity generated in this model is given by
\begin{align}
f_{\text{NL}}&= \frac{5}{6} \frac{\partial^2 N}{\partial \phi^2} \left/ \(\frac{\partial N}{\partial \phi}\)^2 \right.
\simeq\frac{5\left(g^4h^2 + 6g^2h-6h \right)}
{2(6-g^2h)^2}
\label{fNL of step slow-roll} ~,
\end{align}
where we have ignored the correction of $\mathcal{O}(\eta_V)$ to simplify the result.
Again the final result depends on two independent parameters $g$ and $h$. In general $h= -6\pi_f/\pi_d$ can take any negative value, while $0 < g \leq 1$ reflects how significant the effect of the upward step is. The size of $f_{\rm NL}$ is shown by the contour plot in the $(g, h)$ parameter space in Figure \ref{fig:fnl}.
In the following, we study two particular cases.
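The closed form \eqref{fNL of step slow-roll} can be cross-checked against the expansion coefficients of \eqref{deltaNpert}, together with its limiting behaviours, as in the following sketch (parameter values illustrative):

```python
import math

def fnl(g, h, eta=0.0):
    """f_NL from the delta-N expansion coefficients:
    R = c1 x + c2 x^2 with x = dphi/pi_c, and f_NL = (5/6) * 2 c2 / c1^2."""
    c1 = 6.0 / (g**2 * (h + 2 * eta)) - 1.0
    c2 = 9.0 * ((g**2 - 1) * h + 2 * eta * (g**2 - 2)) / (g**4 * (h + 2 * eta)**2) + 1.5
    return (5.0 / 6.0) * 2.0 * c2 / c1**2

def fnl_closed(g, h):
    """Closed form 5 (g^4 h^2 + 6 g^2 h - 6 h) / (2 (6 - g^2 h)^2), eta_V -> 0."""
    return 5.0 * (g**4 * h**2 + 6 * g**2 * h - 6 * h) / (2.0 * (6 - g**2 * h)**2)

# the two expressions agree in the eta_V -> 0 limit
for g, h in [(1.0, -2.0), (0.5, -5.0), (0.1, -1.0)]:
    assert abs(fnl(g, h) - fnl_closed(g, h)) < 1e-12

# no-step limit g = 1: f_NL = 5 h^2 / (2 (6-h)^2), bounded by 5/2
assert abs(fnl_closed(1.0, -4.0) - 5.0 * 16 / (2 * 100)) < 1e-12
# strong-step limit g^2 |h| << 6: f_NL -> 5 |h| / 12, which can exceed 5/2
assert abs(fnl_closed(1e-3, -10.0) - 50.0 / 12.0) < 1e-2
```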
\begin{figure}[htb]
\centering
\includegraphics[scale =0.8]{fNL_USR_to_SR.pdf}
\caption{The size of $f_{\rm NL}$ in the $(g, h)$ parameter space. (The white region corresponds to $f_{\rm NL}> 15$.)}
\label{fig:fnl}
\end{figure}
For $g=1$, which means no step, we reproduce the results of the smooth and sharp transitions discussed in Ref. \cite{Cai:2017bxr}.
One finds $f_{\rm NL}=5h^2/[2(6-h)^2] $, and its maximum value is $f_{\rm NL}=5/2$ in the limit of $h\rightarrow -\infty$, which corresponds to an infinitely sharp transition. On the other hand, for $g\ll 1$ where the step becomes important, it is possible to have a large local non-Gaussianity with $f_{\rm NL}\gg 5/2$, as seen in Figure \ref{fig:fnl}.
In the limit $g^2|h|\ll 6$, where the contribution from the step dominates the curvature perturbation as seen from Eq.~(\ref{Rgauss}), $f_{\rm NL}$ in Eq.~\eqref{fNL of step slow-roll} can be approximated as
\begin{equation} \label{eq:fnl}
f_{\rm NL} \simeq \frac{5}{12}|h| ~.
\end{equation}
Thus, as long as $g^2|h|\ll1$, $f_{\rm NL}$ is determined by the value of $h$.
This result shows that even in the framework of canonical single-field inflation, we can achieve large local non-Gaussianity with $f_{\rm NL} \gg \mathcal{O}(1)$, which easily exceeds the upper bound of $5/2$ found for sharp and smooth transitions. This serves as an intriguing counterexample that violates Maldacena's consistency relation in single-field inflation~\cite{Maldacena:2002vr}.
At first sight, it also seems possible to have infinitely large local $f_{\rm NL}$ in this model by finely tuning the model parameters. However, we should note that the perturbative treatment breaks down when $|f_{\rm NL} \mathcal{R}| \gtrsim 1$.
In particular this corresponds to the situation where Taylor expansion of the $\delta N$ formula in \eqref{deltaNpert} becomes invalid, and thus we need to reconsider the full expression in \eqref{exact_calR}.
An intriguing fact is that, as the validity of the perturbative expansion is controlled by the amplitude of $|f_{\rm NL} \mathcal{R}|\ll1$, we may encounter the non-perturbative regime when $|f_{\rm NL} \mathcal{R}|={\cal O}(1)$, that is, when we look at the rare and large perturbations at the tail of the probability distribution even if $|f_{\rm NL}|\ll1$.
A detailed analysis of the non-Gaussian tail is discussed in the next section.
Finally let us consider the perturbation modes which exit the Hubble radius after the step transition.
With the number of e-folds in \eqref{relaxationN}, we derive
\begin{equation}
\delta N = \frac{\delta\phi}{\sqrt{2\epsilon_V}} - \frac{\eta_V}{2} \(\frac{\delta\phi}{\sqrt{2\epsilon_V}}\)^2 + ... ~.
\end{equation}
This gives us the standard result of slow-roll inflation for the amplitude of the curvature perturbation, with a slow-roll-suppressed $f_{\rm NL}=-5\eta_V/6$, as follows from the quadratic term of the expansion above.
It may be noted that even though there is a non-attractor stage where slow-roll conditions are violated, the curvature perturbation is still given by
the slow-roll attractor formula as if there were no non-attractor stage~\cite{Leach:2001zf}.
As a result, the local non-Gaussianity remains the same as that for the slow-roll case.\footnote{For these small-scale modes, there may be large intrinsic non-Gaussianities in the inflaton fluctuations (hence they would not be of the local type) due to the discontinuity in the potential. This part of the analysis is beyond the scope of the present paper.}
\subsection{SR-SR transition with an upward step}\label{sec:SR to SR}
So far, we have focused on the upward-step transition from an USR stage.
In this subsection we consider an upward step in the SR-SR transition.
The model is the one proposed in \cite{Cai:2021zsp}. Here we analyse it in more detail.
\begin{figure}[htb]
\centering
\includegraphics[scale = 0.4]{SR_to_SR_v2.pdf}
\caption{A sketch of the inflaton potential for the SR-SR transition, where two stages of slow-roll inflation are connected by an upward step.}
\label{fig:SR to SR}
\end{figure}
We consider a two-stage slow-roll inflation model with two distinct slow-roll potentials joined at $\phi=\phi_c$ by an upward step. The inflaton initially rolls down the first slow-roll potential from $\phi> \phi_c$ (Stage-I), climbs up the upward step at $\phi_c$, as shown in Figure \ref{fig:SR to SR}, and rolls down the second slow-roll potential after a short relaxation time (Stage-II).
To solve the background dynamics of this model, we first parametrize the two slow-roll potentials as
\begin{align}
V(\phi) =
\begin{dcases}
V_0 \left[ 1+ \sqrt{2\epsilon_{\rom{1}}}\left(\phi -\phi_c \right) + \frac{1}{2}\eta_{\rom{1}}\left(\phi -\phi_c \right)^2\right] ~,\quad \phi \geq \phi_c\\
\left(V_0 + \Delta V \right)\left[ 1 + \sqrt{2\epsilon_{\rom{2}}}\left(\phi -\phi_c \right) + \frac{1}{2}\eta_{\rom{2}}\left(\phi -\phi_c \right)^2\right] ~,\quad \phi < \phi_c
\end{dcases} ~,
\end{align}
where $\Delta V $ is the size of the step, and $\epsilon_{\rom{1},\rom{2}}$ and $\eta_{\rom{1},\rom{2}}$ are the slow-roll parameters of Stage-I and Stage-II defined at $\phi_c$.
Following Sec. \ref{sec:usr-sr}, we denote the field velocities before and after the step by $\pi_c$ and $\pi_d$, respectively.
For the evolution in Stage-II, the background solutions are the same as those found in Sec. \ref{sec:usr-sr}. We may simply use the results there, replacing $\epsilon_V\rightarrow\epsilon_{\rom{2}}$ and $\eta_V\rightarrow\eta_{\rom{2}}$.
It is a bit nontrivial to derive the full evolution in Stage-I with arbitrary initial conditions.
This background analysis is essential for the studies of long wavelength perturbations which exit the horizon during Stage-I.
We leave the detailed computation in Appendix \ref{App: pi-dependence before the step}.
Here we present a simple analysis with the main results.
For the Stage-I evolution with non-slow-roll initial conditions,
there can be two different situations:
\begin{itemize}
\item[(a)] For the initial condition $(\phi_i,\pi_i)$ sufficiently far from the step, the trajectory quickly approaches the slow-roll trajectory, and starts following the attractor evolution. Thus the conventional
slow-roll approximations can still apply, and we find
\begin{align}\label{slow-roll eq in Stage-I}
\dv{\phi}{n} + \sqrt{2 \epsilon_{\rom{1}}} + \eta_{\rom{1}}\left(\phi-\phi_{c}\right) = 0 ~,\quad \phi_c < \phi~.
\end{align}
At the end of Stage-I, the field momentum is given by $\pi_c = -\sqrt{2 \epsilon_{\rom{1}}}$, which is fully fixed by the shape of the slow-roll potential at Stage-I. This corresponds to the base trajectory whose dynamics is determined regardless of initial conditions.
Then the number of e-folds in this stage $N_{\rom{1}}$ can be directly solved from \eqref{slow-roll eq in Stage-I}:
\begin{align}
N_{\rom{1}} = n_c - n_i = \frac{1}{\eta_{\rom{1}}}\log\left[ 1 + \frac{\eta_{\rom{1}}}{\sqrt{2\epsilon_{\rom{1}}}}\left(\phi_i - \phi_c\right) \right] ~.\label{N of the first stage }
\end{align}
After the step, $N_{\rom{2}} \equiv n_f - n_c $ is essentially the same as \eqref{SR-N}. So in this case the total number of e-folds is given by
\begin{equation}\label{total N}
\begin{aligned}
N_{\text{total}} &= N_{\rom{1}} + N_{\rom{2}}\\
&\simeq
\frac{1}{\sqrt{2\epsilon_{\rom{1}}}}\left( \phi_i - \phi_c \right)
+
\frac{1}{\eta_{\rom{2}}}\log\left[
-2\eta_{\rom{2}}\pi_d - 6\sqrt{2\epsilon_{\rom{2}}} \right] + \text{constant} ~.
\end{aligned}
\end{equation}
Here we notice that there is a major difference from the USR-SR transition discussed in Sec. \ref{sec:usr-sr}.
For the total number of e-folds $N$ in \eqref{total N of USR to SR}, $\pi_d$ is a function of the initial conditions $(\phi_i,\pi_i)$ in the USR stage. However, here the field momentum $\pi_d$
is fully fixed by $\pi_c = -\sqrt{2\epsilon_{\rom{1}}}$ and the size of the step. Therefore in this case, $N_{\rom{2}}$ can also be seen as a constant, which is independent of initial conditions.
For perturbation modes which exit the horizon at $\phi = \phi_i$,
the $\delta N$ formula simply gives us the standard slow-roll result,
\begin{equation}\label{slow-roll result}
\begin{aligned}
\mathcal{R} =\delta N=
-\frac{\delta \phi}{\sqrt{2\epsilon_{\rom{1}}}} ~.
\end{aligned}
\end{equation}
\item[(b)] For the initial condition $(\phi_i,\pi_i)$ close to the step,
there may not be enough time for the trajectory to converge to the slow-roll trajectory before it encounters the step.
This corresponds to the off-attractor trajectories we discussed in Sec. \ref{sec:off}. Thus the dependence on $\pi_i$ in the initial condition becomes crucially important.
As shown in Appendix \ref{App: pi-dependence before the step}, the off-attractor trajectories demonstrate the non-attractor behaviour just like the USR case,
\begin{align} \label{non-attractor dynamics}
3\phi(n) + \pi(n) = 3\phi_i + \pi_i~.
\end{align}
Then the number of e-folds $N_{\rom{1}}$ until the step is given by
\begin{align}
N_{\rom{1}} =\frac{1}{3} \log\[\frac{\pi_i}{\pi_c}\]= \frac{1}{3} \log\[ \frac{\pi_i}{\pi_i+3(\phi_i-\phi_c)}\].
\end{align}
Adding the number of e-folds of Stage-II to the above, we find
\begin{equation}\label{total N offattractor}
\begin{aligned}
N_{\text{total}} &= N_{\rom{1}} + N_{\rom{2}}\\
&\simeq
\frac{1}{3} \log\[ \frac{\pi_i}{\pi_i+3(\phi_i-\phi_c)}\]
+
\frac{1}{\eta_{\rom{2}}}\log\left[
-2\eta_{\rom{2}}\pi_d - 6\sqrt{2\epsilon_{\rom{2}}} \right] + \text{constant} ~.
\end{aligned}
\end{equation}
Now, contrary to the case (a), $\pi_d$ in this case depends on the initial condition through \eqref{picpid} and $\pi_c =3(\phi_i-\phi_c) + \pi_i $. For the perturbation modes which exit the horizon close to the end of Stage-I, we find the $\delta N$ result identical to the one for the USR-SR transition,
\begin{equation} \label{near step modes}
\begin{aligned}
\mathcal{R} &= N(\phi_i+\delta\phi, \pi_i+\delta\pi) - N(\phi_i, \pi_i)\\
&\simeq
- \frac{1}{3}\log \left[ 1 + \frac{3\delta \phi}{\pi_c} \right] +
\frac{1}{\eta_{\rom{2}}}\log \qty[1 + \frac{2\eta_{\rom{2}}\delta\pi_d}{6\sqrt{2\epsilon_{\rom{2}}} + 2\eta_{\rom{2}} \pi_d} ]\,,
\end{aligned}
\end{equation}
where $\delta \pi_d$ is given by \eqref{delta pi_d as a function of delta phi}.
\end{itemize}
With the above analysis, we now have a good description of the non-Gaussianity generated in the SR-SR transition with an upward step.
For a broad range of perturbation modes which exit the horizon during Stage-I, the standard slow-roll result in \eqref{slow-roll result} still applies, and thus the local $f_{\rm NL}$ is slow-roll suppressed.
However, for a narrow range of modes which exit the horizon right before the step, the off-attractor trajectories become important, and thus we should resort to \eqref{near step modes} to study the nonlinear perturbations. Here we focus on the perturbative regime $\abs{\mathcal{R}} \ll 1$, and leave the discussion of non-Gaussian tails with $\abs{\mathcal{R}} \sim 1$ to the next section.
The $\delta N$ expansion for the case (b) is the same as that for the USR-SR transition, Eq.~(\ref{deltaNpert}),
\begin{align}
\mathcal{R} \simeq \[ \frac{6}{g^2 (h+2\eta_{\rom{2}})} - 1 \] \frac{\delta\phi}{\pi_c} + \[ 9\frac{ \left(g^2-1\right) h+2 \eta_{\rom{2}}\left(g^2-2\right)}{g^4 (h+2 \eta_{\rom{2}})^2}+\frac{3}{2}\] \(\frac{\delta\phi}{\pi_c}\)^2\,,
\end{align}
with the linear term being the Gaussian part,
\begin{align}
\mathcal{R}_{\text{G}} \equiv \[ \frac{6}{g^2 (h+2\eta_{\rom{2}})} - 1 \] \frac{\delta\phi}{\pi_c} \simeq \(\frac{6}{g^2 h} - 1 \) \frac{\delta\phi}{\pi_c}\,.
\end{align}
Again, as in the case of the USR-SR transition, this part of the contribution determines the amplitude of the power spectrum, which will be evaluated in Sec.~\ref{matching}.
The local non-Gaussianity generated in this model is given by
\begin{align}
f_{\text{NL}}&= \frac{5}{6} \frac{\partial^2 N}{\partial \phi^2} \left/ \(\frac{\partial N}{\partial \phi}\)^2 \right.
\simeq\frac{5\left(g^4h^2 + 6g^2h-6h \right)}
{2(6-g^2h)^2}\,,
\end{align}
which exactly agrees with the one in the USR-SR case given in Eq.~(\ref{fNL of step slow-roll}). Again, in the limit when the effect of the step is significant, $g^2 \abs{h} \ll 6$, the above reduces to $f_{\rm NL} \simeq 5\abs{h}/12$, as given by Eq.~\eqref{eq:fnl} for the USR-SR transition.
\section{A tale of the non-Gaussian tail}\label{NG tail}
Now we study the probability distribution of the curvature perturbation beyond the perturbative regime. We focus on the behaviour of the tail due to the upward step transition.
Here we are interested in the local non-Gaussianity, which in general arises when the curvature perturbation is a function of a Gaussian random field $\mathcal{R}_{\text{G}}$ as
\begin{equation} \label{fullcalR}
\mathcal{R}({\bf x}) = f\(\mathcal{R}_{\text{G}}({\bf x}) \) .
\end{equation}
This function is arbitrary in principle apart from the condition that $f(0)=0$. Further, if we assume that $\mathcal{R}$ is linear in $\mathcal{R}_{\text{G}}$ in the limit $\mathcal{R}_{\text{G}}\to0$, we can also rescale $\mathcal{R}_{\text{G}}$ such that $f'(0)=1$.
Then, in perturbation theory, we have the Taylor expansion for $|\mathcal{R}_{\text{G}}|\ll 1$,
\begin{equation} \label{taylorR}
\mathcal{R}({\bf x}) = \mathcal{R}_{\text{G}}({\bf x}) + \frac{1}{2}f''\(0\) \mathcal{R}_{\text{G}}({\bf x})^2 + ... ~,
\end{equation}
which gives us $f_{\rm NL}= 5f''(0)/6$.
Note that the nonlinear relation \eqref{fullcalR} and its perturbative expansion \eqref{taylorR} are exactly what we obtain from the $\delta N$ formalism.
When we look at the probability distribution of the curvature perturbation, normally a positive $f_{\rm NL}$ means that the probability is higher than the Gaussian distribution for positive values of $\mathcal{R}$.
However this naive expectation is only valid in the perturbative regime. As we have argued previously, when we consider large values of $|\mathcal{R}|$, the perturbative expansion \eqref{taylorR} may break down, and we may need a full nonlinear expression of $f\(\mathcal{R}_{\text{G}}({\bf x}) \)$ to capture non-perturbative effects.
One of the well-studied examples in the literature is based on the original USR scenario. From the analytical expression of $N$ in \eqref{USR-N}, one easily finds that the nonlinear mapping from $\delta\phi$ to $\mathcal{R}$ is given by $\mathcal{R} = \delta N= -\frac{1}{3}\log \( 1+3 \delta\phi /\pi_c \)$. In the perturbative regime, this yields $f_{\rm NL}=5/2$, while the non-Gaussian probability distribution is enhanced at large $\mathcal{R}$.
Thus in this example, one finds that a positive $f_{\rm NL}$ leads to more distribution of large perturbations on the tail, which agrees with the naive expectation from the perturbative analysis.
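This classic USR result is easy to verify numerically: extracting the quadratic coefficient of the nonlinear mapping by finite differences reproduces $f_{\rm NL}=5/2$ (the value of $\pi_c$ below is illustrative):

```python
import math

pi_c = -0.1   # illustrative USR velocity (inflaton moving in the negative direction)

def R_usr(dphi):
    # nonlinear delta-N mapping of the original USR scenario
    return -math.log(1.0 + 3.0 * dphi / pi_c) / 3.0

# extract c1, c2 of R = c1 dphi + c2 dphi^2 by finite differences
eps = 1e-6
c1 = (R_usr(eps) - R_usr(-eps)) / (2 * eps)
c2 = (R_usr(eps) - 2 * R_usr(0.0) + R_usr(-eps)) / eps**2 / 2.0
f_nl = (5.0 / 6.0) * 2.0 * c2 / c1**2
assert abs(f_nl - 2.5) < 1e-3
```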
Now let us go back to the USR-SR transition with an upward step. The non-perturbative relation between $\mathcal{R}$ and $\delta \phi$ is given in \eqref{exact_calR}. We may further simplify the expression by
taking the $\eta_V\rightarrow 0$ limit,
\begin{equation}
\mathcal{R} \simeq
-\frac{1}{3}\log\left( 1 + 3 \frac{\delta \phi}{\pi_c} \right)+\frac{2}{h}\left[ \sqrt{1+\frac{6}{g}\frac{\delta\phi}{\pi_d} + 9\left(\frac{\delta\phi}{\pi_d}\right)^2} - 1\right] ~. \label{calR_phi}
\end{equation}
Here the first term with the logarithmic function is the contribution from the USR stage, whose effects on the probability distribution have been discussed in the literature \cite{Franciolini:2018vbk, Biagetti:2018pjj, Atal:2018neu, Passaglia:2018ixg, Atal:2019cdz, Atal:2019erb, Taoso:2021uvl, Biagetti:2021eep, Davies:2021loj}.
The second term with the square root is the new contribution caused by the upward step.
Here let us focus on the case when $g^2|h|\ll 1$ where the curvature perturbation is dominated by the effect of the step.
In this parameter regime, one can neglect the logarithmic term to obtain
\begin{align}
\mathcal{R} &\simeq -\frac{2}{\abs{h}}\left[\sqrt{1-|h|\mathcal{R}_{\text{G}} } - 1\right] ~,\label{approximate solutioin of calR}
\end{align}
where $\mathcal{R}_{\text{G}}=6\,\delta\phi/(gh\pi_d) $ is the Gaussian part of the curvature perturbation when $|\mathcal{R}| \ll 1$.
We immediately notice the presence of a cutoff at $\mathcal{R}=2/|h|$. This is a genuine non-perturbative effect. Physically, it corresponds to the situation where the inflaton fluctuation reduces the field momentum so much that the inflaton can no longer climb the upward step. The formula~\eqref{approximate solutioin of calR} is one of the major results of this paper.
To see the non-Gaussian feature of the tail more clearly, let us compute the probability distribution function (PDF) of $\cal R$.
For the Gaussian perturbation $\mathcal{R}_{\text{G}}$, the PDF is given by
\begin{equation} \label{Gaussian}
P[\mathcal{R}_{\text{G}}] = \frac{1}{\sqrt{2\pi} \sigma_{\mathcal{R}}} e^{-\mathcal{R}_{\text{G}}^2/(2\sigma_{\mathcal{R}}^2)} ,
\end{equation}
where $\sigma_{\mathcal{R}}^2$ is the variance of the Gaussian perturbation $\mathcal{R}_{\text{G}}$,
\begin{equation}
\sigma_{\mathcal{R}}^2 = \int d\log k\, \mathcal{P}_{\mathcal{R}_{\text{G}}}(k)\,,
\end{equation}
where we have implicitly assumed that the power spectrum of $\mathcal{R}_\text{G}$ is peaked at a certain scale, say at $k=k_*$, and ignored the contribution from the spectrum far away from the peak.
From Eq.~\eqref{approximate solutioin of calR}, the PDF of the curvature perturbation is derived as
\begin{equation} \label{pdfnon-G}
P[\mathcal{R}]=P[\mathcal{R}_{\text{G}}] \left| \frac{d \mathcal{R}_{\text{G}}}{d \mathcal{R}} \right|
= \frac{2-\abs{h}\mathcal{R}}{\Omega}\exp\left[ -\frac{\mathcal{R}^2(4-\abs{h}\mathcal{R})^2}{32\sigma^{2}_{\mathcal{R}}} \right]\,;\qquad\mathcal{R}\leq\frac{2}{\abs{h}}\,~,
\end{equation}
where $\Omega$ is the normalization constant:
\begin{align}
\Omega \equiv \sqrt{2\pi\sigma^{2}_{\mathcal{R}}}\left[1+ \mathrm{Erf}\left(\frac{1}{\abs{h}\sqrt{2\sigma^2_{\mathcal{R}}}}\right) \right]~.
\end{align}
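As a sanity check, the PDF \eqref{pdfnon-G} with the normalization $\Omega$ above integrates to unity over $\mathcal{R}\le 2/|h|$; a numerical sketch with illustrative values $|h|=1$ and $\sigma_{\mathcal{R}}^2=0.02$:

```python
import math

# illustrative parameters matching the regime discussed in the text
h_abs, sigma2 = 1.0, 0.02
sigma = math.sqrt(sigma2)
Omega = math.sqrt(2 * math.pi * sigma2) * (1 + math.erf(1 / (h_abs * math.sqrt(2 * sigma2))))

def pdf(R):
    """PDF of the curvature perturbation, defined for R <= 2/|h|."""
    return (2 - h_abs * R) / Omega * math.exp(-R**2 * (4 - h_abs * R)**2 / (32 * sigma2))

# midpoint-rule integration up to the cutoff R = 2/|h|
a, b, n = -20 * sigma, 2 / h_abs, 200000
step = (b - a) / n
total = sum(pdf(a + (i + 0.5) * step) for i in range(n)) * step
assert abs(total - 1.0) < 1e-6   # properly normalized, with a sharp cutoff at 2/|h|
```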
The comparison between this PDF and the Gaussian one is shown in Figure \ref{fig:pdf}, where we take $\sigma_\mathcal{R}^2=0.02$ for demonstration. It may be noted that the expectation value $\langle\mathcal{R}\rangle$ is non-zero; nevertheless, it is too small to be seen in the figure for the current choice of $\sigma_\mathcal{R}$. As we can see, in the perturbative regime the probability distribution behaves like a Gaussian. But as $|\mathcal{R}|$ becomes large, the deviation from the Gaussian distribution becomes more and more significant, and there appears a sudden cutoff at $\mathcal{R} = 2/|h|$.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.7]{pdf20220630.pdf}
\caption{The PDF in \eqref{pdfnon-G} compared with a Gaussian distribution.}
\label{fig:pdf}
\end{figure}
As we see from Eq.~(\ref{approximate solutioin of calR}),
the parameter $h$ determines the non-Gaussian nature of the PDF in the limit $g^2|h|\ll1$.
Recalling that $h$ is inversely proportional to the field velocity after the step, and that on the slow-roll attractor $h=-6\pi_e/ \pi_d$, it can take any negative value.
Depending on the values of $h$, there are at least two interesting cases:
\begin{itemize}
\item For $|h|\gg1$, Eq.~(\ref{approximate solutioin of calR}) implies that there are large deviations from the Gaussian PDF even at $|\mathcal{R}|\ll1$, and the upper limit of $\mathcal{R}$ becomes small.
Namely, the probability of large $\mathcal{R}$ vanishes completely.
Thus we find a highly suppressed non-Gaussian tail at $\mathcal{R}\lesssim 2/|h|$, as shown by the orange curve in Figure \ref{fig:pdf}.
\item For $|h| \lesssim \order{1}$, the deviation from the Gaussian distribution at $|\mathcal{R}|\ll1$ is small. In terms of $f_{\rm NL}$, we have $f_{\rm NL}=5|h|/12\lesssim1$. On the other hand, the tail of the distribution at large $\mathcal{R}$ is significantly enhanced, up to its upper limit $\mathcal{R}=2/|h|\gtrsim1$, where there is a sharp cutoff. An example of this case is shown by the dark blue curve in Figure \ref{fig:pdf}. This rather counter-intuitive result provides a clear demonstration that the sizes of the perturbative non-Gaussianity and the non-Gaussianity at the tail are not necessarily related to each other.
\end{itemize}
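The contrast between the two regimes can be made quantitative by evaluating the ratio $P(\mathcal{R})/P(\mathcal{R}_\text{G})$ at a fixed point on the tail. A short Python sketch of Eq.~\eqref{pdfnon-G}; all parameter values here are illustrative.

```python
import math

def p_gauss(R, sigma2):
    return math.exp(-R * R / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)

def p_nongauss(R, h, sigma2):
    # Eq. (pdfnon-G); vanishes beyond the cutoff R = 2/|h|
    h = abs(h)
    if R > 2.0 / h:
        return 0.0
    omega = math.sqrt(2.0 * math.pi * sigma2) * (
        1.0 + math.erf(1.0 / (h * math.sqrt(2.0 * sigma2))))
    return (2.0 - h * R) / omega * math.exp(-(R * (4.0 - h * R)) ** 2 / (32.0 * sigma2))

sigma2, R_tail = 0.02, 1.0   # a point far out on the tail (~7 sigma)
ratio_small_h = p_nongauss(R_tail, 0.5, sigma2) / p_gauss(R_tail, sigma2)
p_large_h = p_nongauss(R_tail, 20.0, sigma2)   # cutoff at 2/20 = 0.1 < R_tail

print(ratio_small_h)   # >> 1 : tail strongly enhanced for |h| <~ 1
print(p_large_h)       # 0 : no support beyond the cutoff for |h| >> 1
```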
\section{Power spectrum and implications to the PBH formation}\label{Inflection point}
In this section, we first compute the curvature perturbation power spectrum, and
study how the non-Gaussian tail may affect the formation of PBHs.
A concrete, more realistic inflation model with an upward step transition can be constructed within the context of inflation with an inflection-point potential,
which has been extensively studied in the literature \cite{Garcia-Bellido:2017mdw, Bhaumik:2019tvl, Anguelova:2020nzl, Karam:2022nym, Germani:2017bcs}.
In this scenario, the inflaton field undergoes SR-USR-SR transitions, and certain modes of the curvature perturbation are enhanced at the USR phase. This leads to an amplified power spectrum on small scales which can efficiently generate PBHs.
As an analytic approximation for an inflection-point potential with an upward step, we consider the form,
\begin{align}
\begin{split}
V(\phi) = &V_0 \left[ 1 + \sqrt{2\epsilon_S}\left(\phi - \phi_s\right)
\Theta\left( \phi-\phi_s\right) \right] \Theta\left( \phi-\phi_c \right) \\
&+ (V_0+\Delta V)\left[ 1 + \sqrt{2\epsilon_V}\left(\phi - \phi_c\right)
+\frac{1}{2}\eta_V \left(\phi - \phi_c\right)^2 \right]
\Theta\left( \phi_c-\phi \right) ~.
\end{split}\label{parameterized potential}
\end{align}
Here $\phi_s$ is the field value at the beginning of the USR stage, $\phi_c$ is that at the step, which also marks the end of the USR stage, $\epsilon_S$ is the potential slow-roll parameter of the first SR stage, while $(\epsilon_V,\eta_V)$ are the potential slow-roll parameters of the last SR stage. A sketch of this potential is given in Fig.~\ref{fig:potential draft}.
Following the notation used in the previous sections, we denote the comoving wavenumber which crosses the Hubble radius at $\phi=\phi_s$ by $k_s$ and that at $\phi_c$ by $k_c$.
\begin{figure}[htb]
\centering
\includegraphics[scale = 0.37]{parameterized_potential.pdf}
\caption{A sketch plot of the inflection-point potential with an upward step. The inflaton starts rolling on a linear potential from $\phi > \phi_s$, then goes through a flat plateau which gives the USR phase, and re-enters the slow-roll phase after climbing over the upward step.
}
\label{fig:potential draft}
\end{figure}
\subsection{The power spectrum from matching}\label{matching}
We study the inflaton field fluctuations and derive the curvature perturbation power spectrum for the potential provided in Eq.~\eqref{parameterized potential}.
We also consider a special limiting case where there is no USR stage, that is, $\phi_c = \phi_s$. We first present an analytical computation of the power spectrum on large scales $k \lesssim k_s$, and then of that on small scales $k \sim k_c$.
\subsubsection*{SR-USR-SR Model}
We begin with computing the power spectrum of the extremely long wavelength modes $k \ll k_s$. In this limit, the standard slow-roll result applies, and the Gaussian approximation to the curvature perturbation is valid, with the Gaussian part given by
\begin{equation}
\mathcal{R}_{\text{G}}= \frac{\delta\phi}{\sqrt{2\epsilon_S}} ~, \quad k \ll k_s ~,
\end{equation}
and the power spectrum by
\begin{equation}
\mathcal{P}_{\mathcal{R}_{\text{G}}}(k \ll k_s) = \frac{1}{2\epsilon_S}\(\frac{H}{2\pi}\)^2 ~.
\end{equation}
This large scale part may be made to fit the current CMB data with an appropriate choice of $\epsilon_S$.
For the long wavelength modes $k\lesssim k_s$, the power spectrum shows a power-law behaviour with the spectral index given by
\begin{equation}\label{Spectrum index}
n_s -1 = \frac{d \log \mathcal{P}_{\mathcal{R}}(k)}{d\log k} \simeq 4\,.
\end{equation}
The details are discussed in Appendix \ref{Inflection point and scaling}.
This power-law behaviour agrees with the typical growth rate of a power spectrum in single-field inflation when there is a stage where the friction-dominated evolution is violated due to a sudden change in the derivative of the potential~\cite{Byrnes:2018txb}.
Next, we consider the spectrum around the step at $\phi_c$. During the USR stage, the mode function for the field fluctuation $\delta\phi$ is given by the standard adiabatic vacuum solution,
\begin{equation}\label{eq:BD-vacuum mode function}
\delta\phi_k(\tau) = \frac{H}{\sqrt{2 k^3}}(1+ik\tau)e^{-ik\tau}\,; \quad \tau < \tau_c\,,
\end{equation}
where $\tau_c$ is the conformal time at the time of the upward step transition.
For modes $k \lesssim k_c$, assuming that the perturbative non-Gaussianity is not extremely large (i.e., $|h|$ is not extremely large), we can use the linear $\delta N$ formula to compute the power spectrum. From \eqref{total N of USR to SR}, the Gaussian part of the curvature perturbation is
\begin{equation}
\mathcal{R}_{\text{G}} = \frac{\partial N}{\partial\phi} \delta\phi \simeq \(\frac{1}{g}-\frac{gh}{6}\)\frac{\delta\phi}{\sqrt{2\epsilon_V}} ~; \quad k \lesssim k_c ~.
\end{equation}
This yields the power spectrum,
\begin{equation}\label{Long mode power spectrum 1}
\mathcal{P}_{\mathcal{R}_{\text{G}}}(k \lesssim k_c) = \frac{1}{2\epsilon_V}\(\frac{1}{g}-\frac{gh}{6}\)^2
\mathcal{P}_{\delta\phi}(k \lesssim k_c) = \frac{1}{2\epsilon_V g^2}\(1-\frac{g^2h}{6}\)^2 \(\frac{H}{2\pi}\)^2\,.
\end{equation}
Here we mention that the above scale-invariant spectrum is obtained under the assumption that the initial stage is USR. If the initial stage is SR, followed by a quickly flattening potential as in the inflection-point inflation, the perturbation modes which exit the Hubble horizon while the inflaton is decelerated by the flattening potential become scale-dependent due to the mixing of positive and negative frequencies. In this case, the spectral index is given by $n_s-1=4$ \cite{Byrnes:2018txb}. The detailed discussion is deferred to Appendix \ref{Inflection point and scaling}.
For modes $k \gtrsim k_c$, Eq.~\eqref{relaxationN} yields
\begin{equation}
\delta N_{\text{G}} = \frac{\delta\phi}{\sqrt{2\epsilon_V}} ~, \quad k \gtrsim k_c ~.
\end{equation}
Here, however, the spectrum of $\delta\phi$ would not be given by $(H/(2\pi))^2$. Similar to the effect of the SR-USR transition, the upward transition induces the mixing of positive and negative frequencies due to a large change in the derivative of the potential.
The resultant power spectrum takes the form,
\begin{equation}
\mathcal{P}_{\mathcal{R}_{\text{G}}}(k \gtrsim k_c) = \frac{1}{2\epsilon_V} \mathcal{P}_{\delta\phi}(k \gtrsim k_c) = \frac{1}{2\epsilon_V} \big| \alpha_{k}+\beta_{k} \big|^2 \(\frac{H}{2\pi}\)^2 ~,
\end{equation}
where $\alpha_{k}$ and $\beta_{k}$ are the Bogoliubov coefficients due to the upward step transition. A detailed derivation of the coefficients is given in Appendix \ref{deltaphi gauge}. Here we provide an approximate expression of the power spectrum at the short wavelengths,
\begin{equation}\label{Short wavelength power spectrum}
\mathcal{P}_{\mathcal{R}_{\text{G}}}(k \gtrsim k_c) \simeq \frac{1}{2 \epsilon_V}\frac{g^4+1+\left(1-g^4\right) \cos (2 k \tau_c)}{2 g^2} \(\frac{H}{2\pi}\)^2 \leq \frac{1}{2 \epsilon_V}\frac{1}{g^2} \(\frac{H}{2\pi}\)^2~.
\end{equation}
Thus the power spectrum scales with the parameter $g$ as
\begin{equation}\label{Power spectrum amplitude scaling}
\mathcal{P}_{\mathcal{R}}(k) \propto g^{-2}\,.
\end{equation}
Recall that $g$ defined in \eqref{g} parametrizes the size of the upward step; $g$ becomes smaller as the step becomes higher. Thus a higher step amplifies the power spectrum and boosts the production of PBHs.
Another feature in the above spectrum is an oscillating feature with period $2\tau_c$. Notice that the amplitude of these oscillations is almost constant. This result is in agreement with the comoving slicing computation, where $\mathcal{R}$ is quantized, in Appendix \ref{R gauge}. The constancy of the amplitude is an artifact of our sudden step approximation. Since the step is infinitely sharp, it affects all the comoving wavenumbers up to infinitely large $k$. In reality, the spectrum settles down to the standard slow-roll expression, as we shall shortly see below.
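As a sanity check of the envelope in Eq.~\eqref{Short wavelength power spectrum}, the oscillating factor $\left[g^4+1+(1-g^4)\cos(2k\tau_c)\right]/(2g^2)$ should range between $g^2$ and $1/g^2$. A quick numerical sketch, using $g=0.11$ as in Figure~\ref{fig:powerspectrum}:

```python
import math

def osc_factor(x, g):
    """The oscillating factor of Eq. (Short wavelength power spectrum), x = 2*k*tau_c."""
    return (g**4 + 1.0 + (1.0 - g**4) * math.cos(x)) / (2.0 * g**2)

g = 0.11   # value used for the blue curves in the figure
vals = [osc_factor(2.0 * math.pi * i / 1000.0, g) for i in range(1001)]
print(max(vals), 1.0 / g**2)   # envelope maximum equals 1/g^2
print(min(vals), g**2)         # envelope minimum equals g^2
```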
To confirm our analytical estimates, we numerically computed the power spectrum by smoothing the step with the function,
\begin{align}\label{Numerical step function}
\Theta(\phi)
\equiv \frac{1}{2}\left( \tanh\left[\lambda (\phi-\phi_c)\right] + 1 \right) ~,
\end{align}
where $\lambda$ controls the steepness of the step.
The left panel in Fig.~\ref{fig:powerspectrum} shows the result, where we chose a steep step transition $\lambda = 5 \times 10^{4} M_{pl}^{-1}$, with the energy scale $V_0 = 7 \times 10^{-10} M_{pl}^{4}$ and the velocity at the step
$\pi_c=-0.00232803 M_{pl}$.
This gives $\lambda|\pi_c|\approx 116$, which means the step is quite sharp.
Here we mention a difference from the analytical result. The analytical power spectrum oscillates with a constant amplitude as shown in \eqref{Short wavelength power spectrum}, while the numerical result shows damped oscillations, which is due to the finiteness of the step.
In contrast to the infinitely sharp step assumed in the analytical computation, the finite width of the step determines the maximum wavenumber affected by the step, $k_\text{max}\sim \lambda|\pi_c| k_c$. The modes $k\gg k_\text{max}$ are not affected by the transition.
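For reference, the smoothed potential used in the numerics, Eqs.~\eqref{parameterized potential} and \eqref{Numerical step function}, can be coded directly. In the sketch below, $V_0$ and $\lambda$ are the values quoted above, while $\Delta V$, $\phi_s$, $\phi_c$ and the slow-roll parameters are illustrative placeholders rather than the values used for Figure~\ref{fig:powerspectrum}.

```python
import math

def theta(x, lam):
    """Smoothed step, Eq. (Numerical step function)."""
    return 0.5 * (math.tanh(lam * x) + 1.0)

def V(phi, V0, dV, phi_s, phi_c, eps_S, eps_V, eta_V, lam):
    """Inflection-point potential with an upward step, Eq. (parameterized potential)."""
    sr1 = V0 * (1.0 + math.sqrt(2.0 * eps_S) * (phi - phi_s) * theta(phi - phi_s, lam)) \
          * theta(phi - phi_c, lam)
    sr2 = (V0 + dV) * (1.0 + math.sqrt(2.0 * eps_V) * (phi - phi_c)
                       + 0.5 * eta_V * (phi - phi_c) ** 2) * theta(phi_c - phi, lam)
    return sr1 + sr2

# quoted values: V0 = 7e-10 Mpl^4, lambda = 5e4 / Mpl; the rest are placeholders
pars = dict(V0=7e-10, dV=1e-12, phi_s=0.05, phi_c=0.0,
            eps_S=1e-9, eps_V=1e-9, eta_V=1e-4, lam=5e4)
step = V(-1e-3, **pars) - V(1e-3, **pars)
print(step)   # ~ dV: the height of the upward step
```

For a sharp step ($\lambda|\pi_c|\gg1$) the jump across $\phi_c$ reproduces the step height $\Delta V$ up to slow-roll corrections.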
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\textwidth]{powerspectrum_SR_USR_SR.pdf}
\includegraphics[width=0.45\textwidth]{powerspectrum_SR_SR.pdf}
\caption{The enhancement of the power spectrum from both the USR phase and the upward step. The left panel shows the power spectrum of the inflection-point model with a USR phase which starts at $\phi = \phi_s$ in Figure \ref{fig:potential draft}. The right panel shows the power spectrum without the USR stage, with the remaining potential parameters kept the same. The orange and blue curves correspond to $g=1$ and $g=0.11$, respectively. All the scaling properties match the behaviours displayed in Eqs.~\eqref{Spectrum index} and \eqref{Power spectrum amplitude scaling} very well. }
\label{fig:powerspectrum}
\end{figure}
To summarize, the analytical formula for the power spectrum for the potential~\eqref{parameterized potential} takes the form,
\begin{equation}
\mathcal{P}_{\mathcal{R}}(k) \simeq \left\{
\begin{aligned}
& \frac{1}{2\epsilon_S} \(\frac{H}{2\pi}\)^2\,; & k \ll k_s ~, \\
& \frac{1}{2\epsilon_V g^2} \(1- \frac{g^2 h}{6}\)^2 \(\frac{H}{2\pi}\)^2\,;
& k \lesssim k_c ~, \\
& \frac{1}{2\epsilon_V} \frac{g^4 + 1 + (1-g^4)\cos(2k\tau_c)}{2g^2} \(\frac{H}{2\pi}\)^2\,; &k \gtrsim k_c ~,
\end{aligned}
\right.
\end{equation}
with the spectrum at $k\lesssim k_s$ having the power-law index $ n_s-1 \simeq 4$.
\subsubsection*{SR-SR Model}
The calculations for the SR-SR case are similar to those for the SR-USR-SR case discussed above.
Before the step, the behaviour of the curvature perturbation is well approximated by the adiabatic mode function \eqref{eq:BD-vacuum mode function}.
When the inflaton goes through the step, the negative frequency modes are excited as in the previous case.
Following the same technical details for the SR-USR-SR case, as given in App.\ref{Mode function matching}, the power spectrum for the parameter range of our interest, $g^2\ll |\eta_c| \ll 1$ and $|h|\sim \mathcal{O}(1)$, is given by
\begin{equation}
\begin{aligned}
\mathcal{P}_\mathcal{R}(k) &= \frac{H^2}{8\pi^2 \epsilon_k} |\alpha_k+\beta_k|^2\\ &\simeq
\frac{H^2}{32\pi^2\epsilon_k g^2 }\frac{ \left(\eta_c+2 k^2 \tau_c^2\right)^2+\eta_c^2 k^2 \tau_c^2}{k^6 \tau_c^6} \big[\sin (k \tau_c)-k \tau_c \cos (k\tau_c)\big]^2 ~.
\end{aligned}
\end{equation}
Here $\epsilon_k$ is the potential slow-roll parameter $\epsilon_V$ evaluated when the mode $k$ crosses the Hubble horizon. The $k$-dependence of the power spectrum can be analyzed in three different regimes. For the long wavelength modes with $k^2\tau_c^2\ll |\eta_c|\ll 1$, we have
\begin{eqnarray}
\mathcal{P}_\mathcal{R}(k) \simeq
\frac{H^2}{8\pi^2\epsilon_k } ~\frac{\eta_c^2}{36g^2} ,
\end{eqnarray}
while for the short wavelength modes $-k\tau_c\gg1$, we find
\begin{equation}
\mathcal{P}_\mathcal{R}(k) \simeq
\frac{H^2}{8\pi^2\epsilon_k g^2 } \qty[\cos (k\tau_c) -\frac{1}{k\tau_c}\sin(k\tau_c)]^2 \simeq \frac{H^2}{8\pi^2\epsilon_k g^2 } \cos^2(k\tau_c) .
\end{equation}
For the intermediate frequencies with $|\eta_c|< k^2\tau_c^2< 1$, we obtain the $k^4$ growth behavior~\citep{Byrnes:2018txb},
\begin{equation}
\mathcal{P}_\mathcal{R}(k) \simeq \frac{H^2}{8\pi^2\epsilon_k } ~\frac{k^4\tau_c^4}{9g^2}\,.
\end{equation}
At $k \sim k_c$, the amplitude of the power spectrum is also proportional to $g^{-2}$.
To summarize, an approximate analytical formula for the power spectrum for the SR-SR transition model takes the form,
\begin{equation}
\mathcal{P}_{\mathcal{R}}(k) \simeq \left\{
\begin{aligned}
& \frac{H^2}{8\pi^2 \epsilon_k} \frac{\eta_c^2}{36g^2}\,; &k^2 \tau_c^2 \ll |\eta_c| \ll 1 ~,\\
& \frac{H^2}{8\pi^2 \epsilon_k} \frac{k^4 \tau_c^4}{9g^2}\,; &|\eta_c| < k^2\tau_c^2 < 1 ~, \\
& \frac{H^2}{8\pi^2 \epsilon_k g^2}\cos^2\(k\tau_c\)\,;&k^2 \tau_c^2 \gg 1 ~, \\
\end{aligned}
\right.
\end{equation}
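The three asymptotic regimes can be checked numerically against the full Bogoliubov expression. The sketch below evaluates the spectrum in units of $H^2/(8\pi^2\epsilon_k)$ as a function of $x\equiv |k\tau_c|$, with illustrative parameters satisfying $g^2\ll|\eta_c|\ll1$.

```python
import math

def P_full(x, g, eta):
    """Full SR-SR spectrum in units of H^2/(8 pi^2 eps_k)."""
    num = (eta + 2.0 * x * x) ** 2 + (eta * x) ** 2
    osc = (math.sin(x) - x * math.cos(x)) ** 2
    return num * osc / (4.0 * g**2 * x**6)

g, eta = 1e-2, -1e-3   # illustrative: g^2 = 1e-4 << |eta_c| = 1e-3 << 1

# long wavelengths, x^2 << |eta_c|: plateau eta_c^2/(36 g^2)
lo = P_full(1e-3, g, eta) / (eta**2 / (36.0 * g**2))
# intermediate, |eta_c| < x^2 < 1: k^4 growth, x^4/(9 g^2)
mid = P_full(0.3, g, eta) / (0.3**4 / (9.0 * g**2))
# short wavelengths, x >> 1: cos^2(x)/g^2
x = 50.0
hi = P_full(x, g, eta) / (math.cos(x)**2 / g**2)

print(lo, mid, hi)   # each ratio is close to 1
```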
The right panel in Fig.~\ref{fig:powerspectrum} shows the corresponding numerical result. In this case, we have $\pi_c= -0.071M_{pl}$, hence $\lambda|\pi_c|=710$.
Thus the step is much steeper than the SR-USR-SR case. This is the reason why
we do not see the decrease in the oscillation amplitude.
\subsection{Non-Gaussian imprints on PBH mass fraction}\label{PBH_mass_fraction}
With the above results at hand, we consider the implications of the non-Gaussian tail to the generation of PBHs.
It is well-known that PBHs are formed from rare and large perturbations on the tail of the probability distribution.
To estimate the PBH abundance, it is customary to compute the fraction of the perturbation that turns into PBHs by
integrating the PDF of the density perturbation $P(\delta)$ above a certain critical value $\delta_c$ of order unity,
\begin{align}
\beta_{\text{PBH}} = \int^{+\infty}_{\delta_c} \dd \delta\, P(\delta)\,.
\end{align}
We note that, for a Gaussian distribution with a small variance, $\left\langle{\delta^2}\right\rangle\ll1$, $\beta_\text{PBH}$ is extremely sensitive to the behaviour of the PDF at the tail.
Although it is non-trivial in general to translate the critical value of the density perturbation to that of the curvature perturbation as the relation involves the Laplacian operator, for a peaked power spectrum, one may approximately evaluate $\beta_\text{PBH}$ by introducing a critical value $\mathcal{R}_c$ corresponding to $\delta_c$ which is
also $\mathcal{O}(1)$ \cite{Musco:2020jjb}.
Thus we have
\begin{align}
\beta_{\text{PBH}} = \int^{+\infty}_{\mathcal{R}_c} \dd \mathcal{R} \,P(\mathcal{R}) \,,
\end{align}
where $\mathcal{R}_c=C\delta_c+\langle{\mathcal{R}}\rangle$, with $C$ a constant of order unity.
As we are mainly interested in the primordial non-Gaussian tail, this simple approximation is good enough to show its effects on the PBH formation.
\begin{figure}[!htb]
\centering
\includegraphics[scale = 0.7]{beta-ratio.pdf}
\caption{The ratio of PBH mass fractions from non-Gaussian and Gaussian tails.}
\label{fig:beta}
\end{figure}
Using the PDF for $\mathcal{R}$ in \eqref{pdfnon-G}, we obtain the mass fraction of PBHs at the time of formation,
\begin{align}
\beta^{\text{NG}}_{\text{PBH}} &= \int_{\mathcal{R}_c}^{2/\abs{h}}P(\mathcal{R})\dd \mathcal{R}
\nonumber\\
&= \frac{\sqrt{2\pi\sigma^{2}_{\mathcal{R}}}}{\Omega}\left[ \text{Erf}\left(\frac{1}{\abs{h}\sqrt{2\sigma^{2}_{\mathcal{R}}}}\right) - \text{Erf}\left(\frac{\mathcal{R}_c\left(4-\abs{h}\mathcal{R}_c\right)}{4\sqrt{2\sigma^{2}_{\mathcal{R}}}}\right) \right]
\Theta\left( \frac{2}{\abs{h}}-\mathcal{R}_c\right) .\label{NG mass fraction in calR}
\end{align}
An intriguing fact is that there will be exactly no PBH if $2/|h|<\mathcal{R}_c$.
For comparison, the mass fraction for the Gaussian PDF \eqref{Gaussian} is given by
\begin{align}
\beta^{\text{G}}_{\text{PBH}} &= \int_{\mathcal{R}_c}P(\mathcal{R}_{\text{G}})\dd \mathcal{R}_{\text{G}} = \frac{1}{2} \left[1-\text{Erf}\left(\frac{\mathcal{R}_c}{\sqrt{2\sigma^{2}_{\mathcal{R}}}}\right)\right] . \label{G mass fraction in calR}
\end{align}
As a demonstration of the enhancement of the PBH mass fraction, we plot the ratio of $\beta_\text{PBH}$ for the non-Gaussian case to that for the Gaussian case as a function of the critical value $\mathcal{R}_c$ in Fig.~\ref{fig:beta}, for $\sigma^{2}_{\mathcal{R}}=0.02$.
As the purpose of this paper is not to give a detailed analysis of the PBH formation, we ignore the ambiguities in the PBH formation criteria discussed in the literature.
As we can see, for $|h| \gtrsim1$, the non-Gaussian tail can enhance the PBH mass fraction
by several orders of magnitude.
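The closed-form mass fraction \eqref{NG mass fraction in calR} can be cross-checked by directly integrating the PDF \eqref{pdfnon-G}, and compared with the Gaussian result \eqref{G mass fraction in calR}. A Python sketch with illustrative values $\mathcal{R}_c=0.8$, $h=-1$ and $\sigma^2_\mathcal{R}=0.02$:

```python
import math

def beta_ng(Rc, h, sigma2):
    """Eq. (NG mass fraction in calR)."""
    h = abs(h)
    if Rc > 2.0 / h:
        return 0.0
    omega = math.sqrt(2.0 * math.pi * sigma2) * (
        1.0 + math.erf(1.0 / (h * math.sqrt(2.0 * sigma2))))
    s = math.sqrt(2.0 * sigma2)
    return math.sqrt(2.0 * math.pi * sigma2) / omega * (
        math.erf(1.0 / (h * s)) - math.erf(Rc * (4.0 - h * Rc) / (4.0 * s)))

def beta_ng_numeric(Rc, h, sigma2, n=20000):
    """Direct trapezoidal integration of the PDF (pdfnon-G) from Rc to 2/|h|."""
    h = abs(h)
    omega = math.sqrt(2.0 * math.pi * sigma2) * (
        1.0 + math.erf(1.0 / (h * math.sqrt(2.0 * sigma2))))
    def pdf(R):
        return (2.0 - h * R) / omega * math.exp(-(R * (4.0 - h * R)) ** 2 / (32.0 * sigma2))
    a, b = Rc, 2.0 / h
    dx = (b - a) / n
    total = 0.5 * (pdf(a) + pdf(b))
    for i in range(1, n):
        total += pdf(a + i * dx)
    return total * dx

def beta_g(Rc, sigma2):
    """Eq. (G mass fraction in calR)."""
    return 0.5 * (1.0 - math.erf(Rc / math.sqrt(2.0 * sigma2)))

Rc, h, sigma2 = 0.8, -1.0, 0.02   # illustrative values
print(beta_ng(Rc, h, sigma2), beta_ng_numeric(Rc, h, sigma2))
print(beta_ng(Rc, h, sigma2) / beta_g(Rc, sigma2))   # enhancement, >> 1
```

For these values the non-Gaussian tail enhances the mass fraction by more than two orders of magnitude, consistent with Fig.~\ref{fig:beta}.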
\subsection{Inflaton trapping as another seed for PBHs}\label{Subsec: trapping PBHs}
Interestingly, in addition to the enhancement from the non-Gaussian tails, there is another effect of upward-step models that can lead to the production of PBHs, namely, the trapping of the inflaton at the potential step \cite{Inomata:2021tpx}. When the inflaton arrives at $\phi_c$, the velocity fluctuations there may leave some Hubble-size regions of the universe trapped at $\phi_c$ due to insufficient momentum to climb the step.
Then, such a region of the universe surrounded by the region where the inflaton has successfully climbed up the step behaves like a true vacuum bubble in the sea of false vacuum.
Thus initially the bubble will expand exponentially. But as the inflaton in the exterior of the bubble rolls down the potential hill, the potential energy of the outer universe becomes smaller than that of the trapped region. Then the roles are reversed. That is, the trapped universe now behaves like false vacuum surrounded by true vacuum.
Then, the bubble wall will be pushed toward the false vacuum side, and the
trapped region will become a black hole.
In passing, it is interesting to note that the interior of the bubble will still be expanding exponentially~\cite{Deng:2017uwc, Garriga:2015fdk}. Thus it gives rise to a wormhole-like spacetime geometry with two causally disconnected universes \cite{Sato:1981bf, Sato:1981gv}.
\begin{figure}[!htb]
\centering
\includegraphics[scale = 0.3]{trapped_PBH.pdf}
\caption{A sketch plot of inflaton trapping as the seed of PBHs. When the inflaton in region A is trapped at the bottom of the step, a bubble universe forms and begins to expand. Only when the potential energy outside drops below $V(\phi_c^{+})$ does the bubble universe start to collapse into a black hole.}
\label{fig:trapped_PBH}
\end{figure}
We can estimate the mass of such PBHs as
\begin{equation}
M_{\rm PBH} = \rho \frac{4}{3}\pi R^3 \simeq \frac{4\pi M_{\rm pl}^2}{H} e^{3\Delta N} ~,
\end{equation}
where $\Delta N$ is the number of $e$-folds for the potential energy of the inflaton in the exterior of the bubble to become equal to that inside the bubble.
We can also estimate the probability of forming such PBHs. From Eq. \eqref{approximate solutioin of calR}, we can determine the trapping condition to be
\begin{equation}
\mathcal{R}_{\rm G} \geq \frac{1}{|h|} ~.
\end{equation}
Thus, the inflaton trapping probability, and hence the associated PBH formation probability
is estimated as
\begin{equation}
\beta_{\rm PBH}^{\rm trap} = \frac{1}{2}\[1 - {\rm Erf} \(\frac{1}{\sqrt{2 \sigma_{\mathcal{R}}^2 } |h| }\)\] ~.
\end{equation}
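Since $\mathcal{R}_{\rm G}$ is Gaussian with variance $\sigma^2_{\mathcal{R}}$, the trapping probability is simply the upper tail mass of the Gaussian above the threshold $\mathcal{R}_{\rm G}=1/|h|$. A quick numerical cross-check, with illustrative values of $h$ and $\sigma^2_{\mathcal{R}}$:

```python
import math

def tail_mass(a, sigma2, b=10.0, n=20000):
    """P(R_G >= a) for a centered Gaussian, by trapezoidal integration up to b sigma."""
    s = math.sqrt(sigma2)
    def pdf(x):
        return math.exp(-x * x / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)
    hi = b * s
    dx = (hi - a) / n
    total = 0.5 * (pdf(a) + pdf(hi))
    for i in range(1, n):
        total += pdf(a + i * dx)
    return total * dx

h, sigma2 = -2.0, 0.02
a = 1.0 / abs(h)
# complementary-error-function form of the Gaussian tail mass
closed = 0.5 * math.erfc(a / math.sqrt(2.0 * sigma2))
print(tail_mass(a, sigma2), closed)   # both ~ the trapping probability, < 1/2
```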
Finally, let us mention the possibility that, due to quantum tunneling effects, the region trapped in the false vacuum may eventually tunnel to the true vacuum and thus destroy the PBH. According to \cite{Coleman:1977py}, the tunneling rate is exponentially suppressed by the Euclidean action, $\Gamma\propto e^{-S_E}$. Typically, we would expect that, compared with the expansion rate of the universe, the tunneling probability is negligible, i.e. $\Gamma/H^4 \ll 1$. The detailed computation is model-dependent, and we refer the reader to a specific example discussed in \cite{Inomata:2021tpx}.
\section{Conclusions and outlook}
In the study of the primordial curvature perturbation, the tail of the probability distribution is a wonderland where the conventional perturbative approaches break down, and large but rare fluctuations can lead to plentiful phenomenological consequences such as the abundant PBH formation. Interestingly, deviations from the Gaussian distribution of the tail may not be necessarily related to the lower-order moments computed in the perturbative regime. Therefore, non-Gaussian tails may provide us with a novel and crucial window for probing non-perturbative effects during inflation.
In this paper, we argued that the tail of the probability distribution of the curvature perturbation can be highly non-Gaussian even if the perturbative non-Gaussianity remains small. In particular, we constructed a specific model of single-field inflation with an upward step in the potential that supports our argument. We performed a detailed analysis of the background evolution with an upward-step transition, with or without an USR stage, and identified the important role played by off-attractor trajectories in the phase space. Then through a perturbative computation, we find that the local non-Gaussianity in these models depends on the details of the upward step transition, which can be either large or small.
In particular, the local $f_{\rm NL}$ parameter can become much bigger than the one for the original USR inflation $f_{\rm NL}=5/2$.
Then focusing on the non-Gaussian tail from the step transition, which is the key part of this work, we derived a non-perturbative $\delta N$ formula, and identified a nonlinear mapping from the Gaussian inflaton fluctuation to the curvature perturbation. This relation leads to highly nontrivial tail behaviour of the non-Gaussian PDF of the curvature perturbation. Intriguingly, we found that the tail of the distribution is significantly suppressed in the case of large $f_{\rm NL}$, while it is exponentially enhanced in the case of small $f_{\rm NL}$. This result,
in apparent contradiction with the naive speculation from the perturbative analysis,
is due to the non-perturbative effects caused by the upward step in both the
USR-SR and SR-SR transitions.
Lastly, we studied the implications of our results to the formation of PBHs. For this purpose, we explicitly computed the curvature perturbation spectrum for an inflection-point potential model in which inflation begins with a SR stage, goes through an USR stage, and makes a transition to another SR stage with an upward step in the potential.
Our analysis shows that due to the highly non-perturbative non-Gaussian tail, the mass fraction of PBHs can be boosted by several orders of magnitude. In addition, in such single-field inflation models with an upward step, we showed that there exists another possibility of producing PBHs by trapping the inflaton at the bottom of the step. Namely, some regions of space where the inflaton failed to climb up the upward step would eventually collapse to black holes.
The above results can also inspire new lines of research for novel phenomenologies of non-Gaussian tails.
Here we close by listing several possible directions for future investigations.
First of all, it would be interesting to further explore various non-Gaussian tails in other scenarios with non-perturbative effects.
As demonstrated in our study, the fully nonlinear mapping between $\mathcal{R}$ and $\delta \phi$ plays a key role for the tail of the PDF. Meanwhile, the stochastic effects during inflation may also lead to nontrivial non-Gaussian tails
\cite{Ezquiaga:2019ftu, Figueroa:2020jkf, Pattison:2021oen, Achucarro:2021pdh, Ahmadi:2022lsm}.
Thus it is encouraging to extend our current analysis to incorporate more general considerations.
Secondly, for the studies on PBHs, there are various mechanisms in the literature for their formation, such as resonance effects during inflation \cite{Cai:2018tuh, Cai:2019jah, Chen:2019zza, Chen:2020uhe, Zhou:2020kkf, Cai:2021wzd, Cai:2021yvq, Peng:2021zon} and multifield dynamics \cite{Pi:2021dft, Hooshangi:2022lao}. Richer behaviours of non-Gaussian tails may be expected in these scenarios, which deserve a closer look.
At last, in addition to PBHs, it would also be crucial to study other phenomenological implications of the non-Gaussian tail. One possibility is the scalar-induced Gravitational Waves during inflation.
Normally their amplitudes are also amplified when there are enhanced curvature perturbations for generating PBHs. Then the presence of a highly non-Gaussian tail could lead to distinct signals in these induced tensor modes. These open questions deserve a closer look in future research.
\paragraph*{Acknowledgements}
We are grateful to Chao Chen, Xingang Chen, Keisuke Inomata, Bichu Li, Chunshan Lin, Mohammad Hossein Namjoo, Bo Wang and Yi Wang for discussions.
DGW thanks the University of Science and Technology of China for hospitality where this work was initiated.
YFC and XHM are supported in part by the National Key R\&D Program of China (2021YFC2203100), the NSFC (11961131007), by the Fundamental Research Funds for Central Universities, by the CSC Innovation Talent Funds, by the CAS project for young scientists in basic research (YSBR-006), by the USTC Fellowship for International Cooperation, and by the USTC Research Funds of the Double First-Class Initiative.
MS is supported in part by JSPS KAKENHI grants (19H01895, 20H04727, 20H05853).
DGW is supported by the Netherlands Organisation for Scientific Research (NWO) through a Vidi grant with Project No. 680-47-535, and a Rubicon Postdoctoral Fellowship.
ZZ is supported in part by the scholarship at Princeton.
We acknowledge the use of computing facilities of Kavli IPMU, as well as the clusters {\it LINDA} and {\it JUDY} of the particle cosmology group at USTC.
\section{Introduction}
Properties of non-Hermitian random operators or matrices have attracted
considerable attention recently. Non-Hermitian random Hamiltonians can
appear as a result of mapping of a model for flux lines in a $\left(
d+1\right)$-dimensional superconductor with line defects in a tilted
magnetic field on a $d$-dimensional model for bosons in a random potential%
\cite{hatano}. Non-Hermitian operators enter Fokker-Planck equations that
describe diffusion and advection of classical particles in a spatially
random but time-independent velocity field \cite
{fisher,aron,kravtsov,bouchaud,chalker} and also determine equations used
for study of problems of turbulence \cite{burgers,fogedby,gurarie}.
Ensembles of random complex non-Hermitian and real asymmetric matrices find
their application for a description of dissipative quantum maps\cite{haake}
and of neural network dynamics\cite{sompol}. Recently it was suggested that they
could be relevant for QCD where they correspond to a random Dirac operator
with a non-zero chemical potential\cite{stephanov}. Starting from the first
works\cite{ginibre,girko}, properties of the ensembles of the non-Hermitian
matrices were intensively studied in a considerable number of publications
\cite{grobe,sommers,janik,zee,fyodorov,fyodorov1}.
Unusual properties of the ensembles of the non-Hermitian operators or
matrices are related to the fact that eigenvalues of the operators and
matrices can be complex. Completely different methods have been used for
study of distributions of the eigenvalues on the complex plane. For example,
the authors of Refs.\cite{chalker,zee} applied Green functions methods while
in Refs.\cite{fyodorov1}, a method of orthogonal polynomials was used. In Ref.%
\cite{fyodorov}, a new regime of ``weak non-Hermiticity'' was found, and the
authors have calculated a joint probability of complex eigenvalues for
complex weakly non-Hermitian matrices. For calculations they used the
supersymmetry technique \cite{book} and derived a zero-dimensional
non-linear $\sigma $-model. An important information about the distribution
function of complex eigenvalues of $N\times N$ matrices for the orthogonal,
unitary and symplectic chiral random matrix ensembles has been obtained
recently numerically\cite{verbaar}.
Although the model with the non-Hermitian Hamiltonian of Ref.\cite{hatano}
differs from those with random matrices, the two turn out to be closely related
to each other. In Ref.\cite{efetov}, the model with the non-Hermitian
Hamiltonian $H$ was studied using the supersymmetry method. This Hamiltonian
can be written in the form \begin{equation} H=\frac{(\hat{{\bf p}}+i
{\bf h})^2}{2m}+U({\bf r}) \; , \label{a1} \end{equation}
where $\hat{{\bf p}}=-i\nabla $, $m$ is the mass of particles, and $U\left(
{\bf r}\right) $ is a random potential. The vector ${\bf h}$ is proportional
to the component of the magnetic field perpendicular to the direction of the
line defects in the initial problem of the vortices in superconductors. The
Hamiltonian (\ref{a1}) describes a particle moving in an imaginary
vector-potential $i{\bf h}$ and a real random potential $U\left( {\bf r}%
\right) $. The distribution of complex eigenvalues on the complex plane can
be extracted from the distribution function $P(\epsilon ,y)$ defined as
follows \begin{equation}
P(\epsilon ,y)=\frac 1V\left\langle \sum_k\delta (\epsilon -\epsilon
_k^{\prime })\delta (y-\epsilon _k^{\prime \prime })\right\rangle \; ,
\label{a2} \end{equation}
where $\epsilon _k^{\prime }$ and $\epsilon _k^{\prime \prime }$ are the
real and imaginary parts of the eigenenergies, respectively and $V$ is the
volume of the system. The angle brackets stand for an averaging over the
random potential and the sum should be taken over all states.
The problem of calculation of the function $P(\epsilon ,y)$ was mapped in
Ref.\cite{efetov} onto a new supermatrix non-linear $\sigma $-model. This
model differs from the conventional ones written previously\cite{book} by
the presence of new ``effective fields''. The symmetry of the matrix $Q$
entering the new $\sigma $-model is the same as that obtained in Ref.\cite
{book} for the orthogonal ensemble. This is not accidental because the
Hamiltonian, Eq. (\ref{a1}), is real and, hence, time reversal invariant.
To violate the time reversal invariance one can add to the Hamiltonian (\ref
{a1}) a real magnetic field and/or magnetic impurities. This leads to
additional terms in the $\sigma $-model lowering the symmetry of the model.
As a result, one gets\cite{efetov} the $\sigma $-model with the
supermatrices $Q$ corresponding to the unitary ensemble. Although the real
magnetic interactions in the Hamiltonian with the imaginary vector potential
do not correspond to any physical interactions in the initial problem of the
vortices, consideration of the $\sigma $-model for the unitary ensemble was
interesting from the formal point of view because it allowed one to establish
important relations with random matrix models.
The $\sigma $-models corresponding to the Hamiltonian of Eq. (\ref{a1}) and
its extensions can be written in an arbitrary dimension. Remarkably, the
zero-dimensional version of the $\sigma $-model for the unitary ensemble is
exactly the same as the zero-dimensional $\sigma $-model derived in Ref.\cite
{fyodorov} for the weakly non-Hermitian matrices. Complex random
non-Hermitian matrices appeared in studies of dissipative quantum maps \cite
{haake,grobe}, which justifies the interest in studying the unitary ensemble.
The authors of Ref.\cite{fyodorov} used the term ``weakly non-Hermitian''
for matrices $X$ that can be represented in the form \begin{equation}
X=A+i\alpha N^{-1/2}B \; , \label{a3} \end{equation}
where $A$ and $B$ are Hermitian $N\times N$ matrices and $\alpha $ is a
parameter characterizing the non-Hermiticity.
The $\sigma $-model obtained from the ensemble of the matrices, Eq. (\ref{a3}%
), and from the Hamiltonian with the imaginary vector-potential allows us to
relate the parameters $h$ and $\alpha $ to each other. A similar
correspondence exists for the orthogonal ensemble.
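The meaning of ``weak'' non-Hermiticity in Eq. (\ref{a3}) can be illustrated numerically: the imaginary parts of the eigenvalues of $X$ are of the order of the mean level spacing of $A$, so that their ratio stays of order unity as $N$ grows. The sketch below is an illustration only, not part of the original derivation; the matrix sizes, the value $\alpha =1$, and the random seed are arbitrary choices, and $A$ and $B$ are taken from the GUE:

```python
import numpy as np

def gue(n, rng):
    # GUE matrix: Hermitian part of a matrix with iid complex Gaussian entries
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

def weak_nonherm_ratio(n, alpha, rng):
    # X = A + i alpha N^{-1/2} B, as in Eq. (a3)
    x = gue(n, rng) + 1j * alpha * n ** -0.5 * gue(n, rng)
    ev = np.linalg.eigvals(x)
    re = np.sort(ev.real)
    bulk = re[n // 4: 3 * n // 4]       # keep the bulk of the spectrum
    spacing = np.mean(np.diff(bulk))    # mean level spacing of the real parts
    return np.mean(np.abs(ev.imag)) / spacing

rng = np.random.default_rng(0)
ratios = {n: weak_nonherm_ratio(n, 1.0, rng) for n in (100, 400)}
print(ratios)  # both ratios are of order unity, roughly independent of n
```

The ratio of the imaginary parts to the level spacing is the quantity controlled by $\alpha $; for $\alpha \sim 1$ the eigenvalues are shifted off the real axis by about one level spacing, which is the regime described by the zero-dimensional $\sigma $-models.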
Study of the distributions of the complex eigenvalues revealed a striking
difference between the orthogonal and unitary ensembles. The function $%
P(\epsilon ,y)$, Eq. (\ref{a2}), is a smooth positive function of $y$ for
the unitary ensemble (provided the disorder is not very strong, this
function does not depend on $\epsilon $). It reaches its maximum at $y=0$
and decays monotonically with increasing $y$. The corresponding function
$P(\epsilon ,y)$ for the orthogonal ensemble is a sum of a smooth function
and a $\delta $-function of $y$. This means that a finite fraction of the
eigenvalues remains real at any degree of the non-Hermiticity.
In all the works done in statistical physics only the orthogonal and unitary
ensembles were considered. The symplectic ensemble has not even been
mentioned, apparently due to the absence of any applications. However, for
the random matrix ensembles applied to clarify properties of QCD models\cite
{ver,janik,stephanov,verbaar}, the symplectic ensemble is of the same
importance as the orthogonal and unitary ones. Moreover, numerical results
for distributions of complex eigenvalues presented in Ref.\cite{verbaar}
demonstrate a pronounced difference between the ensembles. The distribution
of the complex eigenvalues on the complex plane is homogeneous in the case
of the unitary ensemble, while for the orthogonal ensemble it shows an {\it %
accumulation} of the eigenvalues along the real axis. This corresponds to
the presence of the $\delta $-function in the function $P(\epsilon ,y)$,
Eq. (\ref{a2}), found in Ref.\cite{efetov}. Although the authors of
Ref.\cite{verbaar} considered chiral matrices, the dependence of the number
of eigenvalues on the real axis on a parameter characterizing the
non-Hermiticity was found to be exactly the same as in Ref.\cite{efetov}.
This shows that the phenomenon of the accumulation is quite general.
A completely different behavior was found in Ref.\cite{verbaar} for the
symplectic ensemble. The distribution function of the complex eigenvalues is
in this case smooth, but the probability of finding real eigenvalues
vanishes, which corresponds to a {\it depletion} of the eigenvalues along
the real axis. This is a new effect that clearly motivates an analytical
investigation of non-Hermitian symplectic matrices.
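The accumulation/depletion contrast can also be seen directly in a quick numerical experiment on Gaussian matrices (an illustration only; the size, seed, and tolerance below are arbitrary choices). A quaternion-real non-Hermitian matrix, built from two complex Gaussian blocks, has eigenvalues coming in complex-conjugate pairs with, generically, none of them real, while a real Gaussian matrix of the same size keeps a finite fraction of real eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # quaternion dimension; the complex matrix is 2n x 2n

# Quaternion-real (symplectic) Ginibre-type matrix: [[A, B], [-conj(B), conj(A)]]
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
q = np.block([[a, b], [-b.conj(), a.conj()]])

# Real (orthogonal) Ginibre-type matrix of the same size
r = rng.standard_normal((2 * n, 2 * n))

ev_q = np.linalg.eigvals(q)
ev_r = np.linalg.eigvals(r)

tol = 1e-8
n_real_q = int(np.sum(np.abs(ev_q.imag) < tol))  # depletion: none on the real axis
n_real_r = int(np.sum(np.abs(ev_r.imag) < tol))  # accumulation: a finite fraction is real
print(n_real_q, n_real_r)
```

The spectrum of the quaternion-real matrix is symmetric under complex conjugation, so a real eigenvalue would have to be doubly degenerate, which happens with probability zero; this is the discrete analog of the depletion discussed above.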
In the present publication the distribution function $P(\epsilon ,y)$, Eq. (%
\ref{a2}), is calculated for the ensemble of symplectic non-Hermitian
matrices. This is done by writing a proper zero-dimensional $\sigma $-model.
We are able to obtain an explicit expression for the function $P(\epsilon
,y) $ and demonstrate the depletion of the eigenvalues along the real axis.
The paper is organized as follows: In Sec. II, we introduce the notations
and remind the reader of the scheme of the derivation of the $\sigma $%
-model. In Sec. III, we present the parametrization of the supermatrices $Q$
for the symplectic ensemble. In Sec. IV, the joint probability density of
complex eigenvalues is calculated. Sec. V summarizes the results, and in the
Appendix the Jacobian of the parametrization is derived.
\section{Non-linear $\sigma $-model}
The derivation of the $\sigma $-model for the non-Hermitian orthogonal and
unitary ensembles has been comprehensively presented in Ref. \cite{efetov}.
Referring the reader to that paper for all details, we repeat some
intermediate steps, concentrating on the minor changes that have to be made
in the symplectic case.
The final goal is to derive the joint probability density of complex
eigenenergies $P(\epsilon ,y)$, Eq. (\ref{a2}). Of course, one can derive
the zero-dimensional $\sigma $-model from the ensemble of symplectic random
matrices but we prefer to start from the Hamiltonian, Eq. (\ref{a1}), adding
to it spin-orbit impurities.
Due to the non-Hermiticity of the Hamiltonian, the notions of the advanced
and retarded Green functions, $G_{\epsilon }^{A}$ and $G_{\epsilon }^{R}$,
usually used in perturbation theory and in deriving the non-linear $\sigma $%
-models become meaningless, since these functions lose their analytic
properties. The difficulty can be overcome by introducing a Hermitian
operator $\hat{M}$ of doubled size of the form
\begin{equation} \hat{M}=\left( \begin{array}{cc}
H^{\prime }-\epsilon & i(H^{\prime \prime }-y) \\
-i(H^{\prime \prime }-y) & -(H^{\prime }-\epsilon )
\end{array} \right) , \label{b1} \end{equation} where \begin{equation}
H^{\prime }=\frac{(H+H^{+})}2 \; ,\ \ \ H^{\prime \prime }=-\frac{i(H-H^{+})}2
\; . \label{b2} \end{equation}
In equations (\ref{b1}) and (\ref{b2}), $H$ is the Hamiltonian, Eq. (\ref{a1}),
and $H^{+}$ denotes its Hermitian conjugate. Instead of manipulating the
non-Hermitian operator, one can use the Hermitian operator $\hat{M}$\cite
{efetov}. Using the ``effective Hamiltonian'' $\hat{M}$, Eq. (\ref{b1}), one
can represent the complex eigenvalues distribution function $P(\epsilon ,y)$%
, Eq. (\ref{a2}), in a form of a functional integral over supervectors $\psi
\left( {\bf r}\right) $ with the weight $\exp (-{\cal L})$ with the
Lagrangian ${\cal L}$ taking the form \begin{equation}
{\cal L}=-i\int \overline{\psi }({\bf r})[{\cal H}_{0}+U({\bf r}%
)+V_{so}\left( {\bf r}\right) ]\psi ({\bf r})\,d{\bf r} \; .
\label{b3} \end{equation}
Here, $\psi ({\bf r})$ and $\overline{\psi }({\bf r})$ are the standard
supervector and its charge-conjugated counterpart, respectively, composed
from anticommuting and commuting fields \cite{book}. The matrix operator $%
{\cal {H}}_0$ consists of two terms \begin{equation}
{\cal H}_0=(H_0^{\prime }-\epsilon +i\gamma \Lambda )I+i\Lambda
_1(H_0^{\prime \prime }+y\tau _3) \; , \label{b4} \end{equation}
where $H_0^{\prime }$ and $H_0^{\prime \prime }$ have the form
\begin{equation}
H_0^{\prime }=\frac{{\bf \hat{p}}^2}{2m} \; ,\qquad H_0^{\prime \prime }=-
i\frac{%
{\bf h\hat{p}}}m \; . \label{b5} \end{equation}
In equation (\ref{b4}), $\gamma $ is a small positive number that should be put
to zero at the end of calculations. The term $V_{so}$ in Eq. (\ref{b3})
stands for spin-orbit impurities. It can be derived from the initial
Hamiltonian after a formal inclusion of the interaction $U_{so}\left( {\bf r}%
\right) $ with the spin-orbit impurities. The simplest form of this
interaction can be written as follows \begin{equation}
U_{so}({\bf r})=\mbox{\boldmath $\sigma$}\,[\nabla u_{{\rm so}}({\bf r})\times
\hat{{\bf %
p}}] \; , \label{b6} \end{equation}
where the vector \mbox{\boldmath $\sigma$} is formed from the Pauli matrices $\sigma
_x $, $\sigma _y$, and $\sigma _z$. The matrices $I$, $\Lambda $, $\Lambda _1
$ and $\tau _3$ entering Eq. (\ref{b4}) have the form
\begin{equation} I=\left( \begin{array}{cc}
{\bf 1} & 0 \\ 0 & {\bf 1} \end{array}
\right) ,\quad \Lambda _1=\left( \begin{array}{cc}
0 & {\bf 1} \\ {\bf 1} & 0 \end{array}
\right) ,\quad \Lambda =\left( \begin{array}{cc} {\bf 1} & 0 \\
0 & -{\bf 1} \end{array} \right) , \label{b7}
\end{equation} \begin{equation} \tau _3=\left(
\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}
\right) . \label{b8} \end{equation}
Due to the necessity of considering the spin variables, the supervectors $%
\psi \left( {\bf r}\right) $ now have $16$ components. The unit blocks ${\bf %
1}$ in the matrices in Eq. (\ref{b7}) have the size $8\times 8$ and the
unities $1 $ entering the matrix $\tau _3$ are $2\times 2$ matrices.
The distribution of the electric fields $\nabla u_{{\rm so}}({\bf r})$ is
assumed to be Gaussian: \begin{equation}\langle \nabla u_{{\rm so}}({\bf r})
\rangle =0 \; ,\ \ \ \langle \partial _{i}u_{%
{\rm so}}({\bf r})\,\partial _{j}u_{{\rm so}}({\bf r}^{\prime })\rangle =\frac{\delta
_{ij}\,\delta ({\bf r}-{\bf r}^{\prime })}{6\pi \nu \tau _{{\rm so}}}
\; , \label{b9} \end{equation}
where $\nu =mp_{0}/2\pi ^{2}$ is the density of states at the Fermi surface
at ${\bf h}=0$, and $\tau _{{\rm so}}$ is the spin-orbit scattering time.
Further transformations are performed according to standard rules of the
supersymmetry technique\cite{book}. Averaging over the disorder results in
an interaction term $\psi ^4$ in the Lagrangian ${\cal L}$. This term is
decoupled by integration over a supermatrix $Q$. Then, one integrates the
supervector $\psi $ out, arriving thus at an integral over $Q$ with the
weight $\exp (-F[Q])$, where $F[Q]$ is a free energy functional.
The spin-orbit interactions lead to additional ``effective fields'' of a
certain symmetry in the free energy functional $F[Q]$. These fields lower
the symmetry of the functional. As a result, a part of fluctuational modes
have a gap and their contribution at low energies can be neglected. This is
equivalent to putting certain elements of the supermatrix $Q$ to zero.
Carrying out this procedure, one comes to a matrix $Q$ with spin blocks
proportional to unit matrices. This is equivalent to considering a model
with $8\times 8$ supermatrices $Q$ having a new symmetry. These are the same
supermatrices as those used in Ref.\cite{book} for description of the
symplectic case.
Therefore, we should perform calculations similar to those of Ref.\cite
{efetov} but integrating over the supermatrices $Q$ with the symmetry
corresponding to the symplectic ensemble. After standard transformations, we
reduce the distribution function $P(\epsilon ,y)$ to the following integral
\begin{equation}
P(\epsilon ,y)=-\frac{\pi \nu }{4\Delta }\int A[Q]\exp \left( -F[Q]\right)
dQ \; , \label{b10} \end{equation} \[
A[Q]=\left( Q_{42}^{11}+Q_{42}^{22}\right) \left(
Q_{24}^{11}+Q_{24}^{22}\right) -\left( Q_{42}^{21}+Q_{42}^{12}\right) \left(
Q_{24}^{21}+Q_{24}^{12}\right) \]
with the zero-dimensional version of the free-energy functional
\begin{equation}
F[Q]=STr\left( \frac{a^{2}}{16}[Q,\Lambda _{1}]^{2}-\frac{x}{4}\Lambda
_{1}\tau _{3}Q\right) \; . \label{b11} \end{equation}
In equation (\ref{b11}), the symbol $[\,,]$ stands for the commutator, $STr$
for the supertrace, and we have introduced the following parameters: \begin{equation}
a^{2}=\frac{2\pi D_{0}h^{2}}{\Delta } \; ,\qquad x=\frac{2\pi
y}{\Delta } \; , \label{b12} \end{equation}
where $D_{0}$ is the classical diffusion coefficient and $\Delta =(2\nu
V)^{-1}$ is the mean level spacing (the factor $2$ in this expression is due
to lifting of the spin degeneracy by the spin-orbit impurities).
\section{Parametrization for the supermatrices $Q$}
To integrate over all symplectic matrices, proper variables parametrizing
the supermatrix $Q$ should be introduced. The parameters have to be chosen
so as to cover the whole set of the symplectic matrices: any symplectic
matrix has to be reached exactly once.
Although the parametrization for the unitary and the orthogonal ensembles
cannot be used for the symplectic ensemble, only minor changes have to be
made to adjust the non-Hermitian parametrization \cite{efetov} to the case
under consideration. As in Ref.\cite{efetov}, we represent the $Q$ matrix
in the form of the product \begin{equation}
Q=TYQ_0\overline{Y}\,\overline{T} \; . \label{c1} \end{equation}
To fulfill the constraint $Q^{2}=1$, the following equalities must
hold \begin{equation} Q_{0}^{2}=1\; ,\ \ T\overline{T}=1\; ,\ \
Y\overline{Y}=1 \; . \label{c2} \end{equation}
As in Ref.\cite{efetov}, the supermatrices $T$ and $Y$ are chosen to commute
with $\Lambda _{1}
\begin{equation} \lbrack T,\Lambda _{1}]=[Y,\Lambda _{1}]=0 \; . \label{c3}
\end{equation}
The next simplification facilitates the parametrization and enables one to
calculate the Jacobians quickly. Namely, we decompose the supermatrix $Y$
into the product of the matrix $Y_{0}$ containing commuting variables and
the matrices $R$ and $S$, consisting of the Grassmann ones
\begin{equation} Y=Y_{0}RS \; . \label{c4} \end{equation}
The $2\times 2$ blocks in the matrices $R$, $S$, and $Q_{0}$ are chosen to
be diagonal; the necessary symmetry of the $2\times 2$ blocks $a$, $b$ and $%
\sigma $ is achieved by a proper choice of $2\times 2$ blocks of the matrix $%
Y_{0}$. Thus, the matrices $Q_{0}$, $R$, and $S$ can be written in a form
similar to the one in \cite{efetov}
\begin{equation} Q_{0}=\left( \begin{array}{cc}
\cos \hat{\varphi} & -\tau _{3}\sin \hat{\varphi} \\
-\tau _{3}\sin \hat{\varphi} & -\cos \hat{\varphi}
\end{array} \right) ,\ \ \hat{\varphi}=\left(
\begin{array}{cc} \varphi & 0 \\ 0 & i\chi \end{array}
\right) , \label{c5} \end{equation}
where $\varphi $ and $\chi $ are proportional to the unity $2\times 2$
matrix, \begin{equation} R=\left( \begin{array}{cc} \hat{R} & 0 \\
0 & \hat{R} \end{array} \right) ,\ \ \hat{R}=\left(
\begin{array}{cc} 1-2\rho \overline{\rho } & 2\rho \\
-2\overline{\rho } & 1+2\rho \overline{\rho }
\end{array} \right) , \label{c6} \end{equation} and
\begin{equation} S=\left(
\begin{array}{cc} 1-2\hat{\sigma}^{2} & 2i\hat{\sigma} \\
2i\hat{\sigma} & 1-2\hat{\sigma}^{2}
\end{array} \right) ,\ \ \hat{\sigma}=\left(
\begin{array}{cc} 0 & \sigma \\
\overline{\sigma } & 0 \end{array}
\right) . \label{c7} \end{equation}
The matrices $\rho $ and $\sigma $ in Eqs. (\ref{c6}, \ref{c7}) have the
form \begin{equation} \rho =\left( \begin{array}{cc} \rho & 0 \\
0 & \rho ^{*} \end{array} \right) ,\ \ \sigma =\left(
\begin{array}{cc} \sigma & 0 \\ 0 & \sigma ^{*}
\end{array} \right). \label{c8} \end{equation}
The next step is to represent the supermatrix $Y_{0}$ in Eq. (\ref{c4}) as the
product \begin{equation} Y_{0}=Y_{3}Y_{2}Y_{1} \; , \label{c9}
\end{equation} where $Y_{3}$ is the diagonal matrix \begin{equation}
Y_{3}=\left( \begin{array}{cc} \exp (i\hat{\beta}/2) & 0 \\ 0 &
\exp (i\hat{\beta}/2) \end{array} \right) ,\ \ \hat{\beta}=\left(
\begin{array}{cc} \beta \tau _{3} & 0 \\ 0 & \beta _{1}\tau _{3}
\end{array} \right) . \label{c10} \end{equation}
In order to recover the symplectic symmetry we have to choose the matrices $%
Y_{1}$ and $Y_{2}$ as follows \begin{eqnarray} Y_{1} &=&\left(
\begin{array}{cc} \hat{w} & 0 \\ 0 & \hat{w} \end{array}
\right) ,\ \ \hat{w}=\left( \begin{array}{cc} 1 & 0 \\
0 & w \end{array} \right) , \label{c11} \\ w &=&\left(
\begin{array}{cc} \cosh (\mu /2) & -i\sinh (\mu /2) \\
i\sinh (\mu /2) & \cosh (\mu /2)
\end{array} \right) , \nonumber \end{eqnarray}
and \begin{eqnarray} Y_{2} &=&\left(
\begin{array}{cc}
\cos (\hat{\theta}_{2}/2) & -i\sin (\hat{\theta}_{2}/2) \\
-i\sin (\hat{\theta}_{2}/2) & \cos (\hat{\theta}_{2}/2)
\end{array} \right) , \label{c110} \\
\hat{\theta}_{2} &=&\left(
\begin{array}{cc} \theta _{2}\tau _{1} & 0 \\ 0 & 0
\end{array} \right) ,\ \ \tau _{1}=\left(
\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}
\right) . \nonumber \end{eqnarray}
Finally, the supermatrix $T$ can be taken as
\begin{equation} T=\left( \begin{array}{cc}
u & 0 \\ 0 & u \end{array} \right) \left(
\begin{array}{cc}
\cos (\hat{\theta}/2) & -i\sin (\hat{\theta}/2) \\
-i\sin (\hat{\theta}/2) & \cos (\hat{\theta}/2)
\end{array} \right) \left(
\begin{array}{cc} v & 0 \\ 0 & v \end{array} \right) , \label{c12}
\end{equation}
where \[ \hat{\theta}=\left( \begin{array}{cc} \theta & 0 \\
0 & i\theta _{1} \end{array} \right) , \]
\begin{equation} u=\left( \begin{array}{cc}
1-2\eta \overline{\eta } & 2\eta \\
-2\overline{\eta } & 1-2\overline{\eta }\eta
\end{array} \right) ,\ \ v=\left(
\begin{array}{cc}
1-2\kappa \overline{\kappa } & 2\kappa \\
-2\overline{\kappa } & 1-2\overline{\kappa }\kappa
\end{array} \right) . \label{c120} \end{equation}
The $2\times 2$ matrices $\theta $ and $\theta _{1}$ in Eq. (\ref{c12}) are
proportional to the unit matrix and the matrices $\eta $ and $\kappa $ are
\[ \eta =\left( \begin{array}{cc} \eta & 0 \\ 0 & \eta ^{*}
\end{array} \right) ,\ \ \kappa =\left(
\begin{array}{cc} \kappa & 0 \\ 0 & \kappa ^{*}
\end{array} \right) . \]
The explicit form of the supermatrix $Q$ within the parametrization
suggested, Eqs. (\ref{c5}-\ref{c120}), is very similar to that for the
orthogonal ensemble\cite{efetov} and differs from the latter by minor
changes in the matrices $\hat{w}$, $\hat{\theta}_{2}$, $\sigma $, $\rho $, $%
\eta $, and $\kappa $.
To ensure the unambiguity of the parametrization, we should specify the
variation range of the variables. This is done by comparing the compact and
the noncompact sectors with those in the standard parametrization \cite{book}%
. As a result, the variables vary in the following intervals:
\begin{eqnarray}
-\pi /2 &<&\varphi <\pi /2\; ,\ \ 0<\chi <\infty \; ,\ \ -\pi <\theta <\pi \;,\ \
-\infty <\theta _{1}<\infty \;, \label{c13} \\
0 &<&\mu <\pi \; ,\ \ 0<\theta _{2}<\infty \; ,\ \ 0<\beta <\pi \; ,\ \ 0<\beta
_{1}<2\pi \; . \nonumber \end{eqnarray}
The only thing that remains to be done before performing explicit
calculations of physical quantities is the calculation of the Jacobian of
the transformation to the variables described by Eqs. (\ref{c5}-\ref{c120}).
Its derivation presented in the Appendix leads to the following final result
for the elementary volume \begin{equation}
\lbrack dQ]=J_\varphi J_\theta J_\mu J_c\,dR_B\,dR_F \; , \label{c130}
\end{equation} where \begin{eqnarray}
J_\varphi &=&\frac 1{8\pi }\frac{\cos \varphi \cosh \chi }{(\sinh \chi
+i\sin \varphi )^2}\; ,\ \ J_\theta =\frac 1{32\pi }\frac 1{\sinh ^2\frac 12%
(\theta _1+i\theta )} \; , \label{c14} \\
J_\mu &=&\frac 1{2^8\pi ^2}\frac{\sin \theta _2\sinh \mu }{(\cos \theta
_2-\cosh \mu )^2}\; ,\ \ J_c=\frac{4\sinh ^2\chi }{(\sinh \chi -i\sin \varphi
)^2} \; . \nonumber \end{eqnarray}
and
\begin{equation}
dR_B=d\theta \,d\theta _1d\varphi \,d\chi \,d\mu \,d\theta _2d\beta \,d\beta
_1\; ,\ \ dR_F=d\eta \,d\eta ^{*}d\kappa \,d\kappa ^{*}d\sigma \,d\sigma
^{*}d\rho \,d\rho ^{*} \; . \label{c140}
\end{equation}
Equations (\ref{c1}-\ref{c14}) are sufficient for the evaluation of any
integral over the supermatrices $Q$ and, with the help of Eqs. (\ref{b10},
\ref{b11}), provide a straightforward way of calculating the distribution
function of complex eigenvalues $P(\epsilon ,y)$, Eq. (\ref{a2}).
\section{Density of complex eigenvalues}
Before starting the calculations, let us introduce more compact notations.
As will be seen in what follows, only the following combinations of the
variables describing the parametrization of the supermatrix $Q$ enter all
functions of interest \begin{equation}
t=\sin \varphi \; ,\ \ z=\sinh \chi \; ,\ \ \omega =\cosh \mu \; ,\ \ \lambda
=\cos \theta _{2} \; . \label{d1} \end{equation}
Since the matrices $T$ and $Y$ commute with $\Lambda _{1}$, the first term
in the free energy, Eq. (\ref{b11}) does not depend on them. The second term
in Eq. (\ref{b11}) does not depend on $T$. As a result, the free energy $%
F[Q] $ takes a rather simple form \begin{equation}
F[Q]=a^{2}(t^{2}+z^{2})+x[(\lambda t-i\omega z)+4(\sigma \sigma ^{*}+\rho
\rho ^{*})(\omega -\lambda )(t-iz)] \; . \label{d2} \end{equation}
The fact that $F[Q]$ does not depend on $T$ simplifies the integration over $%
Q$ in Eq. (\ref{b10}). Using the parametrization, Eqs. (\ref{c1}-\ref{c12}),
we can also represent the supermatrix $Q$ as \begin{equation}
Q=u\widetilde{Q}\overline{u} \; , \label{d3} \end{equation}
with $u$ from Eq. (\ref{c120}) and some supermatrix $\widetilde{Q}$.
Substituting Eq. (\ref{d3}) into Eq. (\ref{b10}) for the density of complex
eigenvalues $P(\epsilon ,y)$ and integrating over $\eta $ and $\eta ^{*}$,
one represents this function in the form \begin{eqnarray} P(\epsilon ,y)
&=&\frac{\pi \nu }{4\Delta }\int [Str(\tau _3\Lambda _1%
\widetilde{Q})]^2\,\exp (-F[Q])\,d\widetilde{Q} \label{d4} \\
&=&\frac{4\pi \nu }\Delta \frac{d^2}{dx^2}\int \exp (-F[Q])\,dQ \; .
\nonumber \end{eqnarray}
For the symplectic ensemble, one has in Eq. (\ref{d4}) an uncertainty of the
type $0\!\times \!\infty $, since the integrand does not contain the
variables $\kappa $ and $\kappa ^{*}$ and, on the other hand, the Jacobians $%
J_{\theta }$ and $J_{\mu }$ are singular as $\theta $, $\theta _{1}$, $%
\theta _{2}$, and $\mu \rightarrow 0$. To resolve this singularity, we can
use the regularization procedure developed for the orthogonal ensemble \cite
{efetov}. All the manipulations are identical to those of Ref.\cite{efetov},
because the free energy, Eq. (\ref{d2}) has the form similar to that of the
orthogonal ensemble. Moreover, the singularities of the Jacobians, Eqs. (\ref
{c14}), are the same as the ones for the orthogonal ensemble. We do not
repeat the procedure here and present only the final result of the
regularization with the proper changes of notations. The function $%
P(\epsilon ,y)$ can be written in the form of a sum of two terms
\begin{equation} P(\epsilon ,y)=P^{(1)}(\epsilon ,y)+P^{(2)}(\epsilon ,y) \; ,
\label{d5} \end{equation} where
\begin{equation}
P^{(1)}(\epsilon ,y)=\frac{\nu }{4\Delta }\frac{d^{2}}{dx^{2}}\int \exp
[-a^{2}(t^{2}+z^{2})-x(t-iz)]\,\frac{4z^{2}dtdz}{(t^{2}+z^{2})^{2}}
\label{d6} \end{equation}
and \begin{eqnarray}
P^{(2)}(\epsilon ,y) &=&\frac{\nu }{4\Delta }\frac{d^{2}}{dx^{2}}\int \exp
[-a^{2}(t^{2}+z^{2})-x(t\omega -i\lambda z)] \label{d7} \\
&&\times \frac{(t-iz)^{2}\,z^{2}x^{2}}{(t^{2}+z^{2})^{2}}\,dt\,dz\,d\omega
\,d\lambda \; . \nonumber \end{eqnarray}
The integration in Eqs. (\ref{d6}) and (\ref{d7}) is performed in the
intervals $-1<t<1$, $-\infty <z<\infty $, $1<\omega <\infty $, and $%
-1<\lambda <1$.
To perform the integration in Eq. (\ref{d7}) over $\lambda $, one should
introduce an infinitesimal positive $\delta $, defining $z_{-}$ according to
$z_{-}=z+i\delta \,{\rm sgn}(x)$, so that the integral becomes convergent.
After this, the integration over $\lambda $ and $\omega $ in Eq. (\ref{d7})
is easily carried out. Adding the result of the integration to Eq. (\ref{d6}%
) we obtain \begin{eqnarray}
P(\epsilon ,y) &=&\frac{\nu }{4\Delta }\,\frac{d^{2}}{dx^{2}}%
\int_{-1}^{+1}dt\,\int_{-\infty }^{+\infty }dz\exp
(-a^{2}(t^{2}+z_{-}^{2}))[(t+iz_{-})^{2}\exp (ixz_{-}-tx) \label{d8} \\
&&-(iz_{-}-t)^{2}\exp (ixz_{-}+tx)]\,\frac{z_{-}}{it(t^{2}+z_{-}^{2})^{2}} \; .
\nonumber \end{eqnarray}
Comparing Eq. (\ref{d8}) with its analog for the orthogonal ensemble, we
notice an important difference between them: The variable $z_{-}$ is present
in the numerator in Eq. (\ref{d8}), whereas it stands in the denominator of
the equation for the orthogonal ensemble. In the latter case, the
distribution function $P(\epsilon ,y)$ contains an additional contribution
of a $\delta $-function after the differentiation of $z_{-}$ with respect to
$x$. For the symplectic ensemble, the differentiation of $z_{-}$ in the
numerator leads to no singularity on the real axis, and one can take the
limit $\delta \rightarrow 0$ before calculating the integral in Eq. (\ref{d8}).
Thus, only the exponents should be differentiated with respect to $x$. This
differentiation simplifies the integrand considerably, and the integration
over $z$ can be easily carried out. After that one obtains \begin{equation}
P(\epsilon ,y)=\frac \nu \Delta \,\frac x{4a^3}\,\sqrt{\pi }\,\exp \left( -%
\frac{x^2}{4a^2}\right) \int_0^1dt\exp (-a^2t^2)\,\frac{\sinh (tx)}t \; .
\label{d9} \end{equation}
Equation (\ref{d9}) completely solves the problem posed and is the main
result of the present paper.
The following properties of the density function $P(\epsilon ,y)$ are easily
checked: It is symmetric with respect to $y$ and is properly normalized
\begin{equation} \int dy\,P(\epsilon ,y)=1 \; . \label{d10} \end{equation}
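The normalization can be verified explicitly by interchanging the order of
the $t$ and $x$ integrations in Eq. (\ref{d9}). Completing the square in the
exponent, one finds for the inner Gaussian integral
\[
\int_{-\infty }^{+\infty }x\,e^{-x^{2}/4a^{2}}\sinh (tx)\,dx=4\sqrt{\pi }%
\,a^{3}\,t\,e^{a^{2}t^{2}} \; ,
\]
so that the factors $e^{\pm a^{2}t^{2}}$ cancel and
\[
\int_{-\infty }^{+\infty }\frac{\sqrt{\pi }\,x}{4a^{3}}\,e^{-x^{2}/4a^{2}}%
\int_{0}^{1}e^{-a^{2}t^{2}}\,\frac{\sinh (tx)}{t}\,dt\,dx=\pi
\int_{0}^{1}dt=\pi \; .
\]
Restoring the prefactor $\nu /\Delta $ and using the relation $x=2\pi
y/\Delta $, Eq. (\ref{b12}), one recovers, with the conventions adopted
above for $\nu $ and $\Delta $, the normalization of Eq. (\ref{d10}).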
In the limit $a\gg 1$ (the limit of strong non-Hermiticity), one obtains
the universal asymptotics valid for all three ensembles
\begin{equation}
P(\epsilon ,y)\simeq \frac{\pi \nu }{2a^2\Delta }\left\{
\begin{array}{cc} 1, & 2a\ll |x|<2a^2 \\
0, & |x|>2a^2 \end{array} \right. \label{d11} \end{equation}
The form of the density of complex eigenvalues, Eq. (\ref{d11}), corresponds%
\cite{efetov} to the ``elliptic law'' of Refs.\cite{ginibre,girko}.
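In this strong non-Hermiticity limit the density is easy to reproduce numerically. For a matrix with iid complex Gaussian entries the elliptic law degenerates into the circular law: the eigenvalues fill a disk of radius $\sqrt{N}$ with uniform density. A minimal sketch (an illustration only; the size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
# Ginibre matrix: iid complex Gaussian entries of unit variance
g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
ev = np.linalg.eigvals(g)
rad = np.abs(ev) / np.sqrt(n)  # radii in units of the circular-law radius

frac_inside = np.mean(rad < 1.05)               # essentially all eigenvalues lie in the disk
frac_half_area = np.mean(rad < 1 / np.sqrt(2))  # uniform density: half the area, half of them
print(frac_inside, frac_half_area)
```

That roughly half of the eigenvalues fall inside the disk of half the area is a direct check of the uniformity of the density, the circular-law analog of the flat region in Eq. (\ref{d11}).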
In the opposite limit $a\ll 1$, the function $P(\epsilon ,y)$ can be written
as \begin{equation}
P(\epsilon ,y)=\frac \nu \Delta \frac{x^2}{4a^3}\sqrt{\pi }\exp \left( -%
\frac{x^2}{4a^2}\right) \; . \label{d12}
\end{equation}
The behavior of the function $P(\epsilon ,y)$, Eq. (\ref{d9}), at small $y$
(related to $x$ by Eq. (\ref{b12})) is drastically different from the
behavior of the corresponding functions for the orthogonal and unitary
ensembles\cite{efetov}. This function is small at small $y$, being
proportional to $y^2$, and turns to zero in the limit $y\rightarrow 0$. This
means that the probability that eigenvalues remain real at a finite degree
of the non-Hermiticity is zero. In other words, the distribution function of
complex eigenvalues exhibits a depletion along the real axis. The depletion
region broadens with increasing non-Hermiticity. The function $%
P(\epsilon ,y)$ is represented in Fig. 1 for several values of $a$ close to $1$.
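Equation (\ref{d9}) and its limits are also easy to check by direct numerical quadrature. The sketch below (an illustration; the integration grids are arbitrary choices) evaluates the dimensionless profile multiplying $\nu /\Delta $ in Eq. (\ref{d9}); its integral over $x$ equals $\pi $, which fixes the normalization, and it vanishes quadratically at small $x$, in accordance with Eq. (\ref{d12}):

```python
import numpy as np

def trapezoid(y, dx, axis=-1):
    # uniform-grid trapezoidal rule (kept explicit for portability)
    y = np.moveaxis(y, axis, -1)
    return dx * (y[..., :-1] + y[..., 1:]).sum(axis=-1) / 2

def profile(x, a, nt=2000):
    # dimensionless profile of Eq. (d9): P(eps, y) = (nu/Delta) * profile(x, a)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    t = np.linspace(1e-6, 1.0, nt)
    vals = np.exp(-a**2 * t**2) * np.sinh(np.outer(x, t)) / t
    inner = trapezoid(vals, t[1] - t[0])
    return np.sqrt(np.pi) * x / (4 * a**3) * np.exp(-x**2 / (4 * a**2)) * inner

a = 1.0
x = np.linspace(-12.0, 12.0, 4001)
p = profile(x, a)

norm = trapezoid(p, x[1] - x[0])                    # integrates to pi
ratio = profile(0.02, a)[0] / profile(0.01, a)[0]   # -> 4: quadratic depletion P ~ x^2
print(norm, ratio)
```

The quadratic vanishing of the profile at the origin is the depletion along the real axis discussed above; the same routine also reproduces the flat plateau of Eq. (\ref{d11}) when $a$ is taken large.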
\section{Conclusions}
In the present paper, we have studied analytically disordered non-Hermitian
models with the symplectic symmetry. This is the last of the three
universality classes, which had not been considered before. Using the
supersymmetry technique, we derived a proper non-linear $\sigma $-model
starting from a model of disorder with a direction. The zero-dimensional
version of the non-linear $\sigma $-model corresponds to the ensemble of
random non-Hermitian symplectic matrices. Within the zero-dimensional
$\sigma $-model, we calculated the joint probability density function of
complex eigenvalues. We introduced a convenient parametrization and
calculated the Jacobian corresponding to this parametrization.
All this allowed us to derive an explicit expression for the density of
complex eigenvalues. The asymptotic behavior of this function demonstrates
clearly that the basic properties of the system depend strongly on the
ensemble. Introducing the non-Hermiticity into the Hamiltonian affects the
spectra of the three ensembles very differently. Only when the
non-Hermiticity is very large does the difference become unimportant.
It is known from previous works that the eigenvalues of a system belonging
to the unitary ensemble are smoothly distributed around the real axis. The
density function for the orthogonal ensemble contains a $\delta $-function
contribution on the real axis describing an accumulation of the eigenvalues.
In contrast to the previous cases, we obtained for the symplectic ensemble a
depletion of eigenvalues along the real axis, which is in a good agreement
with the results of a numerical study \cite{verbaar}. These features
correspond to a tendency of a system from the orthogonal ensemble to
preserve localized behavior. However, after introducing spin-orbit
impurities, the system acquires delocalized features.
\section{Appendix}
The Jacobian of the parametrization specified by Eqs. (\ref{c1}-\ref{c120})
can be derived from the elementary length $Str(dQ)^{2}$. The most economical
way to proceed is to compare the parametrization involved with that for the
orthogonal ensemble \cite{efetov}. Two essential differences are easily
noticed: the $2\times 2$ blocks in matrices $Y_{1}$ and $Y_{2}$ are
interchanged and all the conjugated Grassmann variables have the opposite
sign. The last difference, however, does not lead to any change in the
calculation, so long as the contribution to the length $Str(dQ)^{2}$ from
the Grassmann variables is due to terms of the kind $\overline{\eta }%
\,d\kappa $, $\overline{\eta }\,d\eta $ etc. Taking this into account, we
can immediately reduce the elementary length to the following expression
\begin{eqnarray}
Str(dQ)^{2} &=&Str((dQ_{0})^{2}+[\delta Z,Q_{0}]^{2})\; , \label{f1} \\
\delta Z &=&\overline{S}\,\overline{R}(\overline{Y}_{0}\delta TY_{0}+dR\,%
\overline{R}+R\,dS\,\overline{S}\,\overline{R}+\delta Y_{0})R\,S \; ,
\nonumber \end{eqnarray}
where all the terms apart from the last one, $\overline{S}\,\overline{R}\delta
Y_{0}\,R\,S$, are identical to those in Ref.\cite{efetov}.
Using Eq. (\ref{c9}), we write $\delta Y_{0}$ as
\begin{equation}
\delta Y_{0}=\delta Y_{1}+\delta Y_{2}+\overline{Y_{1}}\overline{Y_{2}}%
\delta Y_{3}\,Y_{2}\,Y_{1} \; , \label{f100} \end{equation}
which can be rewritten in the form
\begin{eqnarray} \delta Y_{0} &=&{\bf 1}\frac{i}{2}\,\left[ \left(
\begin{array}{cc} d\beta _{1}\tau _{3}\cos \theta _{2} & 0 \\
0 & d\overline{w}\tau _{3}w \end{array} \right) -d\mu \left(
\begin{array}{cc} 0 & 0 \\ 0 & \tau _{2} \end{array}
\right) \right] \label{f2} \\
&&+\Lambda _{1}\frac{1}{2}\,\left[ -\left(
\begin{array}{cc} \tau _{2}d\beta _{1}\sin \theta _{2} & 0 \\ 0 & 0
\end{array} \right) +d\theta _{2}\left(
\begin{array}{cc} \tau _{1} & 0 \\ 0 & 0 \end{array}
\right) \right] , \nonumber \end{eqnarray}
where \begin{equation} \tau _{2}=\left( \begin{array}{cc}
0 & -i \\ i & 0 \end{array} \right) . \label{f3} \end{equation}
Multiplying three matrices with each other, one obtains
\begin{eqnarray}
\overline{Y}_0\delta TY_0 &=&{\bf 1\times }2\left[ \cos \frac{\theta _2}2%
\left( \begin{array}{cc} 0 & d\kappa ^{\prime } \\
-d\overline{\kappa }^{\prime } & 0 \end{array}
\right) +i\sin \frac{\theta _2}2\left(
\begin{array}{cc} 0 & \tau _1d\eta ^{\prime } \\
-d\overline{\eta }^{\prime }\tau _1 & 0
\end{array} \right) \right] \label{f4} \\
&&+2i\Lambda _1\left[ \cos \frac{\theta _2}2\left(
\begin{array}{cc} 0 & d\eta ^{\prime } \\
d\overline{\eta }^{\prime } & 0
\end{array} \right) -i\sin \frac{\theta _2}2\left(
\begin{array}{cc}
0 & \tau _1d\kappa ^{\prime } \\
d\overline{\kappa }^{\prime }\tau _1 & 0 \end{array}
\right) -\frac i2d\hat{\theta}\right] , \nonumber \end{eqnarray}
where $d\kappa ^{\prime }=d\kappa \,w\,\exp [i(\beta -\beta _1)/2]$ and $d%
\overline{\kappa }^{\prime }=\overline{w}\,d\overline{\kappa }\,\exp [i(\beta
-\beta _1)/2]$, and analogously for $d\eta ^{\prime }$ and $d\overline{\eta }%
^{\prime }$. One should keep in mind that the differentials $d\eta $ and $%
d\kappa $ in Eq. (\ref{f4}) are not the initial variables entering Eq. (\ref
{c120}) but new variables obtained from the initial ones by several
replacements and shifts common for all three ensembles. The Jacobian of
those transformations, $J_{\theta }$, is given by Eqs. (\ref{c14}).
After that we pick up the differentials of the Grassmann variables
(proportional to the unit matrix), make a shift of the differentials
analogous to the one in Ref.\cite{efetov} and introduce the matrix
differentials \begin{equation} d\sigma =\left(
\begin{array}{cc} d\sigma _1 & d\sigma _2 \\
-d\sigma _2^{*} & d\sigma _1^{*}
\end{array} \right) ,\ \ d\rho =\left(
\begin{array}{cc} d\rho _1 & d\rho _2 \\
-d\rho _2^{*} & d\rho _1^{*}
\end{array} \right) , \label{f5} \end{equation}
where \begin{equation} \begin{array}{c}
d\sigma _2=-i\cos \frac{\theta _2}2\sinh \frac \mu 2d\eta -\sin \frac{\theta
_2}2\cosh \frac \mu 2d\kappa ^{*} \\
d\sigma _2^{*}=-i\cos \frac{\theta _2}2\sinh \frac \mu 2d\eta ^{*}+\sin
\frac{\theta _2}2\cosh \frac \mu 2d\kappa \\
d\rho _2=-i\cos \frac{\theta _2}2\sinh \frac \mu 2d\kappa +\sin \frac{\theta
_2}2\cosh \frac \mu 2d\eta ^{*} \\
d\rho _2^{*}=-i\cos \frac{\theta _2}2\sinh \frac \mu 2d\kappa ^{*}-\sin
\frac{\theta _2}2\cosh \frac \mu 2d\eta \end{array} \label{f6} \end{equation}
The Jacobian of the transformation, Eq.~(\ref{f6}), from $\eta $ and $\kappa $
to $\sigma $ and $\rho $ equals \begin{equation}
\widetilde{J}_\mu =\frac 4{(\cos \theta _2-\cosh \mu )^2}\; . \label{f60}
\end{equation}
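As an independent numerical check (not part of the original derivation), Eq.~(\ref{f60}) follows from the Berezin rule that, for a linear map $A$ of purely anticommuting variables, the Jacobian is $1/\det A$. The sketch below (function name ours) writes the map (\ref{f6}) as a $4\times4$ matrix and compares with Eq.~(\ref{f60}):

```python
import numpy as np

def jacobian_f6(theta2, mu):
    """Jacobian of the map (f6) for anticommuting variables: 1/det(A)."""
    c, s = np.cos(theta2 / 2), np.sin(theta2 / 2)
    ch, sh = np.cosh(mu / 2), np.sinh(mu / 2)
    # Rows: d(sigma2), d(sigma2*), d(rho2), d(rho2*);
    # columns: d(eta), d(eta*), d(kappa), d(kappa*).
    A = np.array([[-1j * c * sh, 0, 0, -s * ch],
                  [0, -1j * c * sh, s * ch, 0],
                  [0, s * ch, -1j * c * sh, 0],
                  [-s * ch, 0, 0, -1j * c * sh]])
    return 1.0 / np.linalg.det(A)

# Compare with Eq. (f60) at a few generic points.
for theta2, mu in [(0.7, 1.3), (2.1, 0.4)]:
    target = 4.0 / (np.cos(theta2) - np.cosh(mu)) ** 2
    assert abs(jacobian_f6(theta2, mu) - target) < 1e-10
```

The agreement rests on the half-angle identity $\cos^2\frac{\theta_2}{2}\sinh^2\frac{\mu}{2}+\sin^2\frac{\theta_2}{2}\cosh^2\frac{\mu}{2}=(\cosh\mu-\cos\theta_2)/2$.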
The supermatrix $\delta Z$ from Eq. (\ref{f1}) can be represented as
\begin{equation} \delta Z=\delta Y_0^{\prime }+i\Lambda _1(2d\hat{\sigma}-
d\hat{\theta}/2)+%
{\bf 1\times }\,2k\,d\hat{\rho} \; , \label{f10} \end{equation}
where \[ k=\left(
\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) \]
and \begin{eqnarray}
\delta Y_0^{\prime } &=&-{\bf 1}\frac i2\left[ d\beta \,\sinh \mu \left(
\begin{array}{cc} 0&0 \\ 0&\tau _1 \end{array} \right) \right] +d\mu \left(
\begin{array}{cc} 0 & 0 \\ 0 & \tau _2 \end{array} \right) \label{f8} \\
&&+\Lambda _1\frac 12\left[ -d\beta _1\,\sin \theta _2\left(
\begin{array}{cc} \tau _2&0 \\ 0&0 \end{array}\right) +d\theta _2\left(
\begin{array}{cc} \tau _1 & 0 \\ 0 & 0 \end{array}
\right) \right] . \nonumber \end{eqnarray}
Calculating the length $Str(dQ)^2$ one notices that the anticommuting part
of $\delta Z$, proportional to $\Lambda _1$, decouples from the commuting
one, proportional to the unit matrix ${\bf 1}$. The contribution to the
length from the first part is the same as that of Ref. \cite{efetov} leading
to the Jacobian \begin{equation}
J_{\varphi \chi }=\frac 1{2^{24}}\frac 1{(\sin ^2\varphi +\sinh ^2\chi )^2} \; ,
\label{f9} \end{equation}
whereas the second part of the elementary length equals \begin{eqnarray}
Str[\delta Z_{\parallel },Q_0]^2 &=&4\{[(d\mu )^2+(d\beta )^2\sinh ^2\mu
]\sinh ^2\chi +(d\theta )^2\cos ^2\varphi \label{f20} \\ &&+(d\theta _1)^2\cosh
^2\chi +(d\theta _2)^2+(d\beta _1)^2\sin ^2\theta _2\} \; . \nonumber
\end{eqnarray}
Since in our parametrization the blocks from the commuting variables in the
matrices $Q_0$ and $T$ are the same as in Ref.\cite{efetov}, the Jacobian $%
J_\theta $ does not change. Combining the contributions from Eqs.~(\ref{f60}), (\ref{f9}), and (\ref{f20})
with $J_\theta $, we arrive at the elementary volume,
Eqs.~(\ref{c130})--(\ref{c140}).
\section{Introduction}
Observations of the quiescent X-ray luminosity of accreting neutron star transients have opened a new probe for exploring the physics of neutron star structure~\cite{2007PhR...442..109L,2007ApJ...660.1424H,2009ApJ...691.1035H,2015MNRAS.447.1598B}.
A transiently accreting neutron star experiences periods of outburst activity separated by long phases of relative quiescence, during which accretion is switched off or strongly suppressed.
The accreted crust is heated during outburst by electron captures, neutron emission, and pycnonuclear reactions that release 1$-$2 MeV per accreted nucleon~\cite{1990A&A...227..431H,2003A&A...404L..33H,2008A&A...480..459H}.
The energy release due to crustal heating warms the star enough to produce a quiescent light curve consistent with the observations~\cite{2003A&A...407..265Y,2015MNRAS.447.1598B,2016arXiv161009100M}.
However, it has been advocated that the shallow outer crust ($\rho\lesssim10^{10}\rm ~g~cm^{-3}$) must be heated in addition to the deeper crust to explain the temperatures observed in the first months of relaxation for several sources; for example, the light curves of KS1731-260 and MXB 1659-29 required a shallow heat source of $\approx1~\rm MeV$ per accreted nucleon in the calculation by Brown \& Cumming~\cite{2009ApJ...698.1020B}.
MAXI J0556-332 is such an accreting neutron star transient; it was discovered in January 2011 with MAXI \cite{2011ATel.3102....1M} and, after an outburst that continued for more than 16 months, returned to quiescence in May 2012 \cite{2014ApJ...795..131H}.
Follow-up observations of MAXI J0556-332 with Chandra and XMM-Newton were initiated, and the source has been analyzed using a variety of X-ray instruments. The Swift/XRT light curve indicates an exponential decay time scale of $\sim$3.3 days for the last $\approx14$ days of the outburst \cite{2014ApJ...795..131H}. The data of MAXI J0556-332 obtained with the Rossi X-ray Timing Explorer (RXTE) show similarities to the class of low-mass X-ray binaries known as ``Z-sources''~\cite{2014ApJ...795..131H,2013PASJ...65...58S}, implying that the neutron star in MAXI J0556-332 accretes at a near- or super-Eddington rate.
Furthermore, this star is thought to be the hottest quiescent neutron star in this class, whose light curve cannot be
explained by crustal heating models alone. In Deibel's detailed calculation \cite{2015ApJ...809L..31D}, an additional shallow heat source of $Q_{\rm shallow}\approx6-16~\rm MeV$ per accreted nucleon is required even when the decay of the accretion rate at the end of the outburst is considered. However, the physical origin of this shallow heating is still unknown.
Furthermore, we cannot adopt the standard relation between the photospheric temperature and the temperature at the bottom of the accreted envelope
if heating occurs within the envelope.
Regarding crustal heating, the heat flow from the crust to the inner regions has been shown to be
important for explaining observations related to X-ray bursts due to helium shell burning~\cite{2015ApJ...809L..31D}.
In the present paper, we present theoretical fits to the observed light curve of the transient source MAXI J0556-332, adopting a stellar evolutionary code that includes the outburst behavior: the accretion rate does not turn off instantaneously at the end of the outburst.
We assume a similar decay time scale obtained from the Swift/XRT observations.
In the accretion layer, nuclear burning through the hot CNO cycle~\cite{1981ApJS...45..389W} is included, because it operates when the temperature lies in the range $0.2\leq T_9\leq 0.5~$($T_9=T/10^9$~K).
During quiescence, the energy deposited through crustal heating, compressional heating, and
the hot CNO cycle is released gradually.
We performed calculations of the thermal evolution of neutron stars in hydrostatic equilibrium by using a spherically symmetric stellar evolutionary code~\cite{1984ApJ...278..813F,1984PASJ...36..199H},
which includes full general relativistic effects as formulated by Thorne~\cite{1977ApJ...212..825T}.
Basic simultaneous differential equations are written as follows:
\begin{eqnarray}
\frac{\partial M_{tr}}{\partial r} \hspace*{-2mm}& = &\hspace*{-2mm} 4\pi r^{2} \rho~, \label{eq:1} \\
\frac{\partial P}{\partial r}\hspace*{-2mm} & = &\hspace*{-2mm} -\frac{GM_{tr}\rho}{r^{2}}
\left(1+\frac{P}{\rho c^{2}}\right)
\left(1+\frac{4\pi r^{3}P}{M_{tr}c^{2}}\right) \left(1-\frac{2GM_{tr}}{c^{2}r}\right)^{-1}~, \label{eq:2} \\
\frac{\partial (L_{r}e^{2\phi/c^{2}})}{\partial M_{r}}\hspace*{-2mm} & =\hspace*{-2mm} &
e^{2\phi/c^{2}}\left(\varepsilon_{\rm n}+\varepsilon_{\rm g}-\varepsilon_{\nu}
\right)~, \label{eq:3} \\
\frac{\partial \ln T}{\partial \ln P}\hspace*{-2mm} & =\hspace*{-2mm} & {\rm min}(\nabla_{\rm rad}, \nabla_{\rm ad})~, \label{eq:4} \\
\frac{\partial M_{tr}}{\partial M_{r}}\hspace*{-2mm} & =\hspace*{-2mm} & \frac{\rho}{\rho_0}
\left(1-\frac{2GM_{tr}}{c^{2}r}\right)^{1/2}~, \label{eq:5}\\
\frac{\partial \phi}{\partial M_{tr}}\hspace*{-2mm} & =\hspace*{-2mm} & \frac{G(M_{tr}+4\pi r^{3}P/c^{2})}
{4\pi r^{4}\rho}\left(1-\frac{2GM_{tr}}{c^{2}r}\right)^{-1}. \label{eq:6}
\end{eqnarray}
Here, $M_{tr}$ and $M_r$ are the gravitational and rest masses, respectively, within radius $r$;
$\rho$ and $\rho_0$ denote the total mass energy and rest mass densities, respectively;
$P$ and $T$ are the pressure and local temperature, respectively;
$\varepsilon_{\rm n}$ and $\varepsilon_{\rm g}$ are the energy generation rates by nuclear burning and gravitational energy release, respectively.
Furthermore, $\varepsilon_\nu$ represents energy loss rate by neutrino emission;
$\nabla_{\rm rad}$ and $\nabla_{\rm ad}$ are the radiative and adiabatic gradients, respectively;
$\phi$ is the gravitational potential per unit mass.
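For orientation, the sketch below integrates Eqs.~(\ref{eq:1}) and (\ref{eq:2}) outward for a static star with a simple $\Gamma=2$ polytrope $P=K\rho^\Gamma$ in place of the tabulated EoS adopted below; the value of $K$, the Euler stepping, and the identification of $\rho$ with the polytropic density are our own illustrative choices, not those of the paper:

```python
import numpy as np

G, c = 6.674e-8, 2.998e10          # cgs
M_SUN = 1.989e33                   # g

def tov_mass_radius(rho_c, K=1.5e5, Gamma=2.0, dr=1.0e3):
    """Euler integration of Eqs. (1)-(2) outward from the center for a
    polytropic EoS P = K*rho**Gamma (illustrative stand-in for the
    tabulated EoS).  Returns gravitational mass M [g] and radius R [cm]."""
    r = dr
    rho = rho_c
    P = K * rho ** Gamma
    m = 4.0 / 3.0 * np.pi * r ** 3 * rho     # seed mass of the central sphere
    while P > 1.0e-8 * K * rho_c ** Gamma:   # crude surface cutoff
        dm = 4.0 * np.pi * r ** 2 * rho                       # Eq. (1)
        dP = (-G * m * rho / r ** 2                           # Eq. (2)
              * (1.0 + P / (rho * c ** 2))
              * (1.0 + 4.0 * np.pi * r ** 3 * P / (m * c ** 2))
              / (1.0 - 2.0 * G * m / (c ** 2 * r)))
        m += dm * dr
        P += dP * dr
        r += dr
        if P <= 0.0:
            break
        rho = (P / K) ** (1.0 / Gamma)
    return m, r

M, R = tov_mass_radius(1.0e15)     # central density ~1e15 g/cm^3
```

With these parameters the integration gives a star of roughly one to two solar masses and a radius of order 10 km, the same scale as the $M=1.54~M_\odot$, $R=12.48$ km model used later (the exact numbers depend on $K$ and are not those of the paper).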
We adopt the fraction of the rest mass $q~[=M_r/M(t)]$ as the mass coordinate, which is convenient when the stellar mass varies \cite{1981PThPS..70..115S,1984ApJ...278..813F}. The gravitational energy release $\varepsilon_{\rm g}$ in Eq.~(\ref{eq:3})
is expressed as $\varepsilon_{\rm g} = \varepsilon_{\rm g}^{\rm (nh)} + \varepsilon_{\rm g}^{\rm (h)}$, where each part on the right-hand
side is written as follows:
\begin{eqnarray}
\varepsilon_{\rm g}^{\rm (nh)}\hspace*{-2mm} & = &\hspace*{-2mm} -\exp\left(-\frac{\phi}{c^2}\right)\left(T\frac{\partial s}{\partial t}\Bigg|_q + \mu_i\frac{\partial N_i}{\partial t}\Bigg|_q\right), \label{eq:7} \\
\varepsilon_{\rm g}^{\rm (h)}\hspace*{-2mm} & = &\hspace*{-2mm} \exp\left(-\frac{\phi}{c^2}\right)\frac{\dot{M}}{M}\left(T\frac{\partial s}{\partial \ln q}\Bigg|_t+\mu_i\frac{\partial N_i}{\partial \ln q}\Bigg|_t\right), \label{eq:8}
\end{eqnarray}
where $\mu_i$ and $N_i$ are the chemical potential and number per unit mass of the $i$-th element, respectively, and
$t$ is the Schwarzschild time coordinate.
In Eq.~(\ref{eq:8}), $\dot{M}$ is the mass accretion rate.
Eqs.~(\ref{eq:7}) and (\ref{eq:8}) are called the nonhomologous and homologous terms, respectively; the latter describes homologous compression due to the accretion~\cite{1984ApJ...278..813F}.
Note that compressional heating due to the accretion contributes significantly to the heat source as well as nuclear burning.
We adopt the equation of state (EoS) of Lattimer \& Swesty~\cite{1991NuPhA.535..331L} with an incompressibility of 220 MeV in the inner layers~($\rho\geq10^{12.8} \rm~g~cm^{-3}$) and connect it to the EoS of Ref.~\citen{1971ApJ...170..299B} for the outer layers
($\rho<10^{12.8} \rm~g~cm^{-3}$).
For the neutrino emission, we include only slow cooling processes: the electron-positron pair, photo, and plasmon processes~\cite{1979ApJ...232..541F,1969PhRv..180.1227F,1967ApJ...150..979B}, the bremsstrahlung process, and the modified Urca (MURCA) process.
We do not include the neutrino emission through pion condensation.
Therefore, the dominant processes are MURCA and bremsstrahlung processes.
The corresponding energy loss rates are written approximately as follows~\cite{1979ApJ...232..541F}:
\begin{eqnarray}
\varepsilon_{\nu}^{\rm MURCA} &=& 2.6 \times 10^{20} \left( \frac{\rho}{\rho_{\rm nuc}} \right)^{2/3} T_9^{8} ~{\rm erg~cm^{-3} s^{-1}}, \\
\varepsilon_{\nu}^{\rm brems.} &=& 3.8 \times 10^{19} \left( \frac{\rho}{\rho_{\rm nuc}} \right)^{1/3} T_9^{8} ~{\rm erg~cm^{-3} s^{-1}},
\end{eqnarray}
where $\rho_{\rm nuc}$ is the nuclear density.
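Since both rates scale as $T_9^8$, the neutrino losses are extremely sensitive to temperature; a minimal sketch of the two formulas (function names ours):

```python
def eps_nu_murca(rho_over_rhonuc, T9):
    """Modified Urca energy-loss rate [erg cm^-3 s^-1]."""
    return 2.6e20 * rho_over_rhonuc ** (2.0 / 3.0) * T9 ** 8

def eps_nu_brems(rho_over_rhonuc, T9):
    """Nucleon bremsstrahlung energy-loss rate [erg cm^-3 s^-1]."""
    return 3.8e19 * rho_over_rhonuc ** (1.0 / 3.0) * T9 ** 8

# At nuclear density MURCA dominates bremsstrahlung by a factor of ~7,
# and halving the temperature suppresses both rates by 2**8 = 256.
ratio = eps_nu_murca(1.0, 1.0) / eps_nu_brems(1.0, 1.0)
suppression = eps_nu_murca(1.0, 1.0) / eps_nu_murca(1.0, 0.5)
```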
Note that these neutrino emission rates remain uncertain by a factor of ten because of the insufficient understanding of the symmetry energy and the nucleon effective mass in dense matter \cite{Yin+2017}.
Moreover, we do not include fast processes such as pion condensation.
Although pion condensation would be accompanied by strong neutrino loss rates, the effects of superfluidity may reduce the neutrino emission;
however, the critical temperature for the superfluid transition is very uncertain.
As our aim is to present a possible heat source in place of an unknown one, we neglect the strong neutrino emission for simplicity.
The energy generation includes crustal heating~\cite{1990A&A...227..431H}, compressional heating~\cite{1984ApJ...278..813F}, and the hot CNO cycle~\cite{1981ApJS...45..389W}. Crustal heating has the following form:
\begin{equation}
Q_i=6.03 \times 10^{33}~\dot{M}_{-10}~q_i~ \rm erg~s^{-1}~, \label{eq:crustheat}
\end{equation}
where $\dot{M}_{-10}=\dot{M} / (10^{-10}~M_\odot~{\rm yr^{-1}})$ is the normalized mass accretion rate, and $q_i$ is the deposited heat per nucleon in the $i$-th reaction layer. Detailed tables of $q_i$ can be found in Ref.~\citen{1990A&A...227..431H}. The energy generation rate $\varepsilon_n$ in Eq.~(\ref{eq:3}) is obtained as $Q_i/\delta M$, where $\delta M$ is the mass of the $i$-th reaction layer.
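The numerical coefficient in Eq.~(\ref{eq:crustheat}) is simply the number of nucleons accreted per second multiplied by the deposited energy per nucleon; a quick consistency check with standard constants (our own illustration, not taken from the original reference):

```python
M_SUN = 1.989e33      # g
YEAR = 3.156e7        # s
M_U = 1.661e-24       # g, atomic mass unit
MEV = 1.602e-6        # erg

def crustal_heating(mdot_m10, q_mev):
    """Heating luminosity [erg/s] for an accretion rate of
    mdot_m10 x 1e-10 M_sun/yr depositing q_mev MeV per nucleon."""
    nucleons_per_s = mdot_m10 * 1.0e-10 * M_SUN / YEAR / M_U
    return nucleons_per_s * q_mev * MEV

# Recovers the ~6.0e33 coefficient of the crustal-heating formula.
coeff = crustal_heating(1.0, 1.0)
```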
Another possible heating process is viscous heating,
which originates from the $r$-mode instability associated with rapidly rotating NSs \cite{Andersson1998,Lindblom+1998}.
This process may affect the temperature evolution of NSs \cite{Levin1999};
however, we do not include it in our study because the dimensionless amplitude of the $r$-mode is very uncertain, and the heating rate is unclear \cite{Chugunov+2017}.
The accreted matter is assumed to have a uniform chemical composition with each mass fraction ($X, Y, Z) = (0.73, 0.25, 0.02)$, where $X, Y$, and $Z$ represent the mass fractions of hydrogen, helium, and heavy elements, respectively.
We adopt the simple formula of the nuclear energy generation rate for the hot CNO cycle~\cite{1981ApJS...45..389W} for the temperature range of $0.2\leq T_9\leq 0.5$:
\begin{equation}
\varepsilon_{\rm hCNO}=5.86\times 10^{15}Z' \rm ~erg~g^{-1}~s^{-1}~, \label{eq:CNO}
\end{equation}
where $Z'$ represents the sum of the mass fractions of CNO isotopes inside the accreted envelope,
that is, $Z' = 0.02$.
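The temperature independence of Eq.~(\ref{eq:CNO}) reflects the $\beta$-limited character of the hot CNO cycle: the rate is set by the $^{14}$O and $^{15}$O half-lives. A back-of-envelope reconstruction of the coefficient (our own estimate; the retained energy of $\approx25$ MeV per cycle after neutrino losses is an assumed round number):

```python
M_U = 1.661e-24       # g
MEV = 1.602e-6        # erg
LN2 = 0.6931

def eps_hot_cno(zprime):
    """Beta-limited hot CNO rate of Eq. (CNO) [erg g^-1 s^-1]."""
    return 5.86e15 * zprime

# Each cycle is throttled by the beta decays of 14O and 15O:
tau = (70.6 + 122.2) / LN2            # summed mean lifetimes [s]
seeds_per_gram = 1.0 / (14.0 * M_U)   # CNO seeds per gram per unit Z'
q_cycle = 24.8 * MEV                  # energy retained per cycle (assumed)
estimate = seeds_per_gram * q_cycle / tau
```

The estimate lands within a few percent of the coefficient $5.86\times10^{15}$, confirming that the rate depends only on the CNO abundance $Z'$.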
We construct initial models of a neutron star accreting at around the Eddington rate $dM/dt =2.73 \times 10^{-8} M_\odot~\rm yr^{-1}$ with and without the crustal and compressional heatings and the hot CNO cycle. The initial model corresponds to a steady state,
in which the nonhomologous part of the gravitational energy release, Eq.~(\ref{eq:7}),
can be neglected~\cite{1984ApJ...278..813F,1984PASJ...36..199H}.
The gravitational mass and radius of the neutron star are $M=1.54~M_\odot$ and $R=12.48~\rm km$, respectively.
We assume a decay time scale $\tau$ ($e$-folding time) of the accretion rate at the beginning of cooling, where the time scale is chosen
to match the duration of the MAXI outburst~\cite{2014ApJ...795..131H}.
Furthermore, we construct the light curve
by tuning the accreted mass $\Delta M$ and the decay time $\tau$ of the accretion rate.
Figure~\ref{lc} shows the theoretical light curve with $\Delta M=1.2\times10^{-12}M_\odot$ and $\tau=14~\rm days$.
The curves are labeled by the heat sources included: `a' denotes the crustal heating~(\ref{eq:crustheat}),
`b' the compressional heating~(\ref{eq:8}), and `c' the heating due to the hot CNO
cycle~(\ref{eq:CNO}).
Note that the time is measured from the end of the outburst~\cite{2014ApJ...795..131H}, and
our cooling curve (`a+b+c') reproduces the observed light curve well as a whole.
The dotted curves (`a+b' and `b') show the light curves without the additional energy source of the hot CNO cycle.
We recognize that the hot CNO cycle provides significant heating that increases
the luminosity, in addition to the compressional and crustal heatings, until approximately 300 days.
After 500 days, both the compressional and crustal heatings continue to supply heat, following the decreasing accretion rate.
In particular, the light curve after 700 days is determined by these two heat sources, as shown in Fig.~\ref{lc},
in which the contribution from the compressional heating is larger than that from the crustal heating by a factor of 1.8.
\begin{figure}
\centering
\includegraphics [scale=0.35]{17310Fig1.eps}
\caption{
Model fit to the quiescent light curve of MAXI J0556-332 for a neutron star with $M=1.54~M_\odot$ and $R=12.48~\rm km$,
including the hot CNO cycle. The two data points with high effective temperatures above the theoretical light curve are considered to be
contaminated by residual accretion~\cite{2015ApJ...809L..31D}.
The observational data are taken from Ref.~\citen{2014ApJ...795..131H}.
}
\label{lc}
\end{figure}
Note that the two observational data points with high effective temperatures
that are not fitted by our light curve are attributed to increases in the accretion rate~\cite{2014ApJ...795..131H} or to contamination
by residual accretion~\cite{2015ApJ...809L..31D}.
If we included sudden increases in the accretion rate,
the temperature in the accretion layers would increase, further enhancing the nuclear burning.
However, this may result in a very complex thermal structure.
The fit is also not adequate for the last two observations
because we use the approximate nuclear energy generation rate of Eq.~(\ref{eq:CNO}) for the hot CNO cycle.
Figure \ref{trho} shows changes in the temperature distribution against the
density during quiescence. The shadowed area indicates the temperature region in which the envelope reaches the condition where the
hot CNO cycle operates significantly.
The hot CNO cycle can be seen to operate until 500 days, before the flat part of the quiescent light curve appears.
Therefore, we must at least consider the effects of changes in the abundances during the operation of the hot CNO cycle.
Furthermore, our assumption of an exponential decay of the accretion rate may be inadequate for the last two observations of the light curve.
If a reasonable initial model reflecting the thermal history of the previous accretion episodes is constructed,
detailed calculations with a large nuclear reaction network could reproduce the light curve.
\begin{figure}
\centering
\includegraphics [scale=0.38]{17310Fig2.eps}
\caption{
Changes in the temperature distribution against the density during the quiescence era.
The shadowed rectangle indicates the region where the hot CNO cycle operates effectively.
The right edge of the rectangle indicates the bottom of the accreted envelope.
}
\label{trho}
\end{figure}
We constructed a theoretical light curve of MAXI J0556-332 by using a stellar evolutionary code and fitted it to the observations.
We included nuclear burning through the hot CNO cycle in the envelope in addition to compressional and crustal heatings.
Given the accreted mass and $e$-folding time, our calculations reproduce the observed light curve as a whole.
The two observations around 32 and 85 days lie significantly above our light curve.
These two flare-like observations might be due to increases in the quiescent accretion rate~\cite{2014ApJ...795..131H}
or to contamination by residual accretion~\cite{2015ApJ...809L..31D}.
Although, in contrast to the previous study~\cite{2015ApJ...809L..31D}, we do not need to include an unknown shallow heat source,
we must discuss the light curve after 200 days.
As shown in Fig.~\ref{trho}, the hot CNO cycle expressed by Eq.~(\ref{eq:CNO}) does not operate sufficiently after 300 days
because the formula is applicable only in the temperature range $0.2\leq T_9\leq 0.5$~\cite{1981ApJS...45..389W}.
Therefore, the effective temperature of our light curve decreases rather suddenly after 350 days.
To improve the agreement between the theoretical light curve and the observations,
we need to use a nuclear reaction network coupled with the stellar evolutionary calculations,
which gives the changes in abundances with time.
However, this could lead to unstable nuclear burning and type I X-ray bursts;
this is beyond our present research because our aim in this investigation is to find the unknown energy source.
To obtain a detailed light curve, we may further need to calculate X-ray bursts and elucidate how the bursts affect the quiescent luminosity.
Note that we have not included the effects of superfluidity and the viscous heating process because
these have large uncertainties.
If they were included, we would expect the effective temperature to become higher, because superfluidity suppresses the neutrino cooling rates
and viscous heating increases the total heating rate.
If we decrease the initial mass accretion rate $dM / dt$ and choose its $e$-folding time $\tau$ accordingly,
we may explain the observations of MAXI J0556-332 without the unknown shallow heat source.
Our aim is to investigate the physical source of the shallow heating in terms of nuclear burning.
Further discussions of superfluidity and viscous heating are beyond the scope of our present study.
In fact, the quiescent light curves of KS 1731-260 and MXB 1659-29~\cite{2009ApJ...698.1020B} require a heat source of approximately 1 MeV per accreted nucleon as the shallow heat source.
The relation between energy sources in the envelope and light curves depends on the history of the accretion and on the preceding X-ray bursts.
These issues are worthwhile for studying the neutron star properties.
It is particularly important to investigate whether light curves during quiescence eras can be reproduced with nuclear burning.
Furthermore, there remain problems such as the reheating event~\cite{2014ApJ...795..131H} and Urca cooling~\cite{2014Natur.505...62S} in the crust.
It may be very difficult to include all these phenomena, which involve very uncertain nuclear processes.
\section*{Acknowledgment}
We would like to thank Kenzo Arai for useful comments.
This work was supported by JSPS KAKENHI Grant Numbers 24540278, 15K05083
and by a China Scholarship Council.
\bibliographystyle{jpsj}
\section*{S1: Atoms on a one-dimensional lattice}
\maketitle
Here we study the dispersive effect of atoms when they are trapped
in a one dimensional optical lattice, as expected for the tapered fiber trap given in the paper. We consider a situation where the atoms are not saturated with few photons $(\frac{\Gamma_{\rm{wg}}}{\Gamma_{\rm{tot}}}\ll1)$, and therefore, the system is in the linear regime and the electric field operator can be presented by its expectation value, i.e., $\langle\hat{\mathcal{E}}(x)\rangle$. Furthermore, since the atoms are periodically spaced, we can discretize the propagation of the electric field (as shown in Fig.~2 of the main text) and use the transfer matrix formalism to study multiple scattering events to all order \cite{Deutsch:1995}. In the $n$th cell, we define the forward (backward)-propagating field as $\overrightarrow{\mathcal{E}}_{n}(\overleftarrow{\mathcal{E}}_{n})$, respectively. The fields in two consecutive cells are related by:
\begin{equation}
\left(\begin{array}{c}
\overrightarrow{\mathcal{E}}_{n+1}\\
\overleftarrow{\mathcal{E}}_{n+1}\end{array}\right)=M_{\rm{cell}} \left(\begin{array}{c}
\overrightarrow{\mathcal{E}}_{n}\\
\overleftarrow{\mathcal{E}}_{n}\end{array}\right)
\end{equation}
where $M_{\rm{cell}}$ is the transfer matrix and the corresponding transmission (reflection) coefficient is given by: $t=\frac{1}{M_{22}} ~(r=\frac{M_{12}}{M_{22}}) $, respectively. The transfer matrix of each cell is the product of two terms: $M_{\rm{cell}}=M_{\rm{free}}M_{\rm{atom}}$. The first term corresponds to the free propagation between two sites:
\begin{equation}
M_{\rm{free}}=\left(\begin{array}{cc}
e^{ik_{p}a} & 0\\
0 & e^{-ik_{p}a}\end{array}\right),
\end{equation}
where $k_{p}$ is the wave number of the incoming field, and $a$ is the lattice spacing. The second represents the light scattering due to the presence of atoms:
\begin{equation}
M_{\rm{atom}}=\left(\begin{array}{cc}
1+i\zeta & i\zeta \\
-i\zeta & 1-i\zeta\end{array}\right),\end{equation}
where $\zeta$ characterizes the light scattering from the trapped atoms in a single site. In other words, the atomic transmission (reflection) coefficient is $t_a=\frac{1}{1-i\zeta} \left(r_a=\frac{i\zeta}{1-i\zeta}\right)$, respectively. Now, we study the scattering of photons from a single site (i.e., a single atom) to find $\zeta$. A generic three-level system (Fig.~\ref{fig:Three-level-system-coupled})
coupled to an electromagnetic waveguide can be described by the following
Hamiltonian \cite{Shen:05,Fan_PRA:2007,Chang:2007,Witthaut:2010}:
\begin{equation}
H=H_{atom}+H_{field}+H_{coupling}
\end{equation}
where the atomic term, in the rotating frame, is given by ($\hbar=1$):
\begin{eqnarray}
H_{atom}&=&\omega_{e}|e\rangle\langle e|+\omega_{g}|g\rangle\langle g|+(\omega_{c}+\omega_s)|s\rangle\langle s|\nonumber\\
&+&\Omega^{*}|s\rangle\langle e|+\Omega|e\rangle\langle s|
\end{eqnarray}
where $\omega_{i}$ is the energy of the state $|i\rangle$ for $i=g,e,s$, and $\omega_c$ is the frequency of the control field.
We assume the waveguide has linear dispersion, so the
Hamiltonian of freely propagating photons can be written as:
\begin{equation}
H_{field}=-ic\int_{-\infty}^{+\infty}dx\left(\mathcal{E}_{R}^{\dagger}(x)\frac{\partial}{\partial x}\mathcal{E}_{R}(x)-\mathcal{E}_{L}^{\dagger}(x)\frac{\partial}{\partial x}\mathcal{E}_{L}(x)\right),
\end{equation}
where the annihilation operator for the left- (right-) going field at position $x$ is denoted as
$\mathcal{E}_{L}(x)(\mathcal{E}_{R}(x))$, respectively, and $c$ is the group velocity in the waveguide.
Finally, the atom-field coupling is described by:
\begin{equation}
H_{coupling}=-g\sqrt{2\pi}\int_{-\infty}^{+\infty}dx\delta(x-x_{0})|g\rangle\langle e|(\mathcal{E}_{R}^{\dagger}(x)+\mathcal{E}_{L}^{\dagger}(x))+h.c.,
\end{equation}
where the atom is located at $x_{0}$ and the atom-field coupling coefficient
is $g$. Assuming that there is only one excitation inside the system,
the most general state of the system can be written as:
\begin{eqnarray}
|\psi_{k}\rangle&=&\int_{-\infty}^{+\infty}dx\left(\phi_{R}(x)\mathcal{E}_{R}^{\dagger}(x)+\phi_{L}(x)\mathcal{E}_{L}^{\dagger}(x)\right)|g,0\rangle\\
&+&P|e,0\rangle+S|s,0\rangle,
\end{eqnarray}
where the first term in the ket refers to the atomic state and the
second term refers to the photon Fock state. Now, we consider a situation
where the field is propagating to the right and scatters from the atom
at location $x_{0}$. Therefore, the photonic wave function takes
the form
\begin{eqnarray*}
\phi_{R}(x) & = & \theta(-x+x_{0})e^{ikx}+t_a\theta(x-x_{0})e^{ikx},\\
\phi_{L}(x) & = & r_a\theta(-x+x_{0})e^{-ikx},
\end{eqnarray*}
where $t _a(r_a)$ is the atomic transmission (reflection) coefficient, respectively.
By solving the equation $H|\psi_{k}\rangle=\epsilon_{k}|\psi_{k}\rangle$,
we find that:
\begin{eqnarray*}
\epsilon_{k} & = & ck\\
\epsilon_{k}P & = & (\omega_{e}-i\Gamma_{out}/2)P+\Omega S-g\sqrt{2\pi}(1+r_a)\\
-ic(t_a-1) & = & g\sqrt{2\pi}P\\
1+r_a & = & t_a\\
\epsilon_k S &=&(\omega_c+\omega_s) S+\Omega^*P.
\end{eqnarray*}
Note that we have added the effect of spontaneous emission into the
outside photonic modes by introducing a polarization decay term with
the rate $\Gamma_{out}/2$. We assume that the ground-state decay is negligible.
\begin{figure}
\includegraphics{generic_lambda}\caption{Three-level system coupled to an electromagnetic waveguide with left-
(right-) going field, $\mathcal{E}_L(\mathcal{E}_R)$, respectively. The control field
($\Omega$) is detuned from the $e \leftrightarrow s$ transition
by $\Delta$. \label{fig:Three-level-system-coupled}}
\end{figure}
Therefore, the reflection coefficient of light scattering at each
site is given by:
\begin{equation}
r_a = -\frac{\Gamma_{1D}\delta}{\delta(\Gamma_{1D}+\Gamma_{out}-2i(\delta+\Delta))+2i|\Omega|^{2}}.\\
\end{equation}
where $\Gamma_{1D}=4\pi g^2/c$ is the spontaneous emission rate into the waveguide. The total spontaneous emission rate is $\Gamma_{tot}=\Gamma_{1D}+\Gamma_{out}$. Therefore, the transfer matrix of a single cell will be:
\begin{equation}
M_{\rm{cell}}=\left(\begin{array}{cc}
(1+i\zeta)e^{ik_{p}a} & i\zeta e^{-ik_{p}a} \\
-i\zeta e^{ik_{p}a} & (1-i\zeta) e^{-ik_{p}a} \end{array}\right)\end{equation}
where $\zeta = -\frac{\Gamma_{1D}\delta}{\delta(i\Gamma_{out}+2(\delta+\Delta))-2|\Omega|^{2}}$. By defining the lattice frequency as $\omega_{lattice}=c\frac{2\pi}{a}$, we can write the probe wave number in terms of the detuning from the
atomic transition: $k_{p}a=(k_{p}-k_{a})a+k_{a}a=(\delta+\Delta)\frac{a}{c}+2\pi\frac{\omega_{a}}{\omega_{lattice}}$.
For simplicity, we assume the resonant EIT case
$(\Delta=0)$.
The band structure of a 1D array of 50,000 sites is shown in Fig.~\ref{fig:band_theory}.
Due to the periodic atomic dispersion, one can observe the appearance of a band gap and also finite-size
oscillations at the band-gap edge \cite{Deutsch:1995,Witthaut:2010}. Note that the numbers used in Fig.~\ref{fig:band_theory} are far from the regime accessible in current experiments \cite{Vetsch:2009,Hoffman:2011}.
\begin{figure}
\includegraphics[width=0.4\textwidth]{band_theory}
\caption{Transmission and reflection spectrum. Due to finite-size effects, sharp oscillations occur at the band-gap edges. For this plot, a lattice of 50,000 sites is considered, with $\omega_{lattice}=\omega_a$ and $(\Delta,\Gamma_{1D},c/a,\Omega)/\Gamma_{tot}=(0,0.1,10^6,10)$. \label{fig:band_theory}}
\end{figure}
As mentioned in the main text, for numbers close to the experimental parameters, we recover a transmission spectrum
similar to that of free space, as shown in Fig.~3 of the main text. There, the dips in the transmission spectrum correspond to dressed states
split by the Rabi frequency of the control field, in direct analogy
to EIT in free space.
\section*{S2: Adiabatic passage between two LC resonators using the dark state}
In this section, we discuss the adiabatic transfer of a single excitation from one LC resonator to another, using a dark state.
First we consider a single cell where an atomic ensemble is coupled
to a resonator. Since the system is linear, we solve the problem for
a single excitation. The result can be easily generalized to any number
of excitations, as long as the number of excitations remains smaller
than the number of atoms, to guarantee the bosonic commutation
relations \cite{Fleischhauer:2002}. The equations of motion are given
by \cite{Fleischhauer:2000kx,Zimmer:2008}:
\[
i\frac{dX}{dt}=H_{0}X,
\]
where $X=(\langle\mathcal{E}_M\rangle,\langle S \rangle, \langle P \rangle)^{T}$ represents the state of the system: $\langle\mathcal{E}_M\rangle$ is the microwave electric field, $\langle S \rangle $ is the atomic coherence of the $|c\rangle \leftrightarrow |a\rangle$ transition, and $\langle P \rangle $ is the atomic coherence of the $|b\rangle \leftrightarrow |a\rangle$ transition. The Hamiltonian is
\begin{equation}
H_{0}=-g\sqrt{N}\left(\begin{array}{ccc}
0 & 0 & 1\\
0 & 0 & \eta^{-1}\\
1 & \eta^{-1} & \delta_2
\end{array}\right)
\end{equation}
where $\eta=\frac{g\sqrt{N}}{\Omega}$ and the dimensionless detuning
is $\delta_2=\frac{\Delta_2}{g\sqrt{N}}$. We use $g\sqrt{N}$ as the unit of energy. In this case, the group velocity reduction factor
is $\frac{c}{v_{g}}=\sqrt{1+\eta^{2}}$. When the atomic medium is
inside a resonator, the slow-light effect manifests itself as a narrowing of
the resonator bandwidth \cite{Lukin:98}. The energy spectrum of the system as a function
of the control field -- which is characterized by $\eta$ -- is shown in Fig.~\ref{fig:single-EIT}.
The zero energy state is the dark state $|D\rangle=\frac{1}{\sqrt{1+\eta^{2}}}(|E\rangle-\eta|S\rangle)$, where $|E\rangle=(1,0,0)^T$ represents the photonic state and $|S\rangle=(0,1,0)^T$ represents the atomic coherence. Therefore, one can change the nature of the dark state from photonic to atomic by increasing $\eta$, or equivalently decreasing the magnitude of the control field $\Omega$.
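The statement above is easy to verify numerically: dropping the overall prefactor $-g\sqrt{N}$ (which only rescales the nonzero branches), the $3\times3$ matrix has an exact zero mode with no $|P\rangle$ component and photonic weight $1/(1+\eta^{2})$. A short sketch (function name ours):

```python
import numpy as np

def dark_state(eta, delta2=0.0):
    """Zero-energy eigenvector of H0 (prefactor -g*sqrt(N) dropped),
    in the basis (E, S, P)."""
    H0 = np.array([[0.0, 0.0, 1.0],
                   [0.0, 0.0, 1.0 / eta],
                   [1.0, 1.0 / eta, delta2]])
    w, v = np.linalg.eigh(H0)
    i0 = np.argmin(np.abs(w))          # dark state: eigenvalue closest to 0
    return w[i0], v[:, i0]

e0, d = dark_state(eta=3.0)
# No excited-state admixture; photonic weight = 1/(1 + eta**2):
photon_weight = d[0] ** 2
```

Increasing $\eta$ moves the weight from the photonic to the atomic component, exactly as described in the text.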
\begin{figure}[h]
\center
\includegraphics[width=0.4\textwidth]{single_EIT}
\includegraphics[width=0.4\textwidth]{dark_state_transfer_single}\caption{Single cell: (a) energies of the eigenstates in units of $g\sqrt{N}$. Note that the dark state has zero energy and is separated from the other states. (b) The nature of the dark state changes from photonic to atomic as the control field, characterized by $\eta$, is varied. \label{fig:single-EIT}}
\end{figure}
Now, we consider two cells to investigate the adiabatic passage
of a single excitation between them, by simply changing their respective
control fields ($\eta_1,\eta_2$). The state of the system can be represented by $X=(X_{1},X_{2})^{T}$,
and the coupling matrix is:
\begin{equation}
H=\left(\begin{array}{cc}
H_{1} & H_{c}\\
H_{c} & H_{2}
\end{array}\right)
\end{equation}
where $H_{i}=H_{0}(\eta\rightarrow\eta_{i})$ and $H_{c}=\left(\begin{array}{ccc}
-\kappa & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 0
\end{array}\right)$ represents the electric field coupling between the resonators. Before
presenting the scheme, we study the behavior of the system for different control field values
(i.e., $\eta_{1},\eta_{2}$). In particular, we investigate the situation
where they are swept inversely, $\eta_{2}=\eta_{c}^{2}\eta_{1}^{-1}$, so that the control field values cross at $\eta_{c}$.
The energy eigenstates of the system are denoted by $|e_i\rangle$ with corresponding energies $e_i$. The energy
spectrum of such a coupled system is shown in Fig.~\ref{fig:double-EIT}.
Due to coupling between the resonators, the dark states are split.
Since the coupling between dark states is given by $\frac{-\kappa}{\sqrt{(1+\eta_{1}^{2})(1+\eta_{2}^{2})}}$,
this splitting is most pronounced when $\eta_{1}\simeq\eta_{2}=1$.
If this were the only coupling mechanism, then at very large or small $\eta$'s,
the splitting would vanish. However, there
is a finite splitting between the new dark states which is constant
in those limits. The latter splitting is due to coupling of the dark
states to the bright and excited states. Using
second order perturbation theory, we find that the energy corrections
to the dark states are equal to $e_{3}\simeq-\delta\kappa^{2},e_{4}\simeq0$
for both large and small $\eta$'s. When two cells are decoupled from each other, if $\eta_{1}\ll1$ (and
$\eta_{2}\gg1$), then we have $|D^{(1)}\rangle\simeq|E^{(1)}\rangle$ and $|D^{(2)}\rangle\simeq|S^{(2)}\rangle$.
Therefore, in this limit, the coupled dark states are: $|e_3\rangle\simeq|E^{(1)}\rangle$
and $|e_4\rangle\simeq|S^{(2)}\rangle$. Similarly, for the opposite limit,
when $\eta_{2}\ll1$ (and $\eta_{1}\gg1$), we will have $|e_3\rangle\simeq|E^{(2)}\rangle$
and $|e_4\rangle\simeq|S^{(1)}\rangle$.
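The $\eta$ dependence of the direct dark--dark coupling stated above can be checked with a few lines (our own sketch; symbols follow the text):

```python
import math

def dark_photonic_amp(eta):
    # photonic (E) amplitude of the single-cell dark state |D> = (|E> - eta|S>)/sqrt(1+eta^2)
    return 1.0 / math.sqrt(1 + eta ** 2)

def dark_coupling(eta1, eta2, kappa):
    # <D1|H_c|D2>: only the photonic components couple, through -kappa
    return -kappa * dark_photonic_amp(eta1) * dark_photonic_amp(eta2)

kappa, eta_c = 0.5, 1.0
for eta1 in (0.01, 0.1, 1.0, 10.0, 100.0):
    eta2 = eta_c ** 2 / eta1  # inversely swept control fields crossing at eta_c
    print(eta1, dark_coupling(eta1, eta2, kappa))
# the magnitude peaks at eta1 = eta2 = 1 (value -kappa/2) and vanishes in both
# limits, leaving only the residual splitting mediated by the bright/excited states
```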
\begin{figure}
\includegraphics[width=0.4\textwidth]{double_EIT}
\includegraphics[width=0.4\textwidth]{dark_state_transfer_double}\caption{Double cell: (a) Energy of the eigenstates in units of $g\sqrt{N}$, (b) Probabilities of the photonic and atomic components of the fourth lowest eigenstate $|e_4\rangle$. By changing the control fields (increasing $\eta_1$ and decreasing $\eta_2$), the atomic coherence is transferred from the second cell to the first cell, characterized by $|\langle S^{(2)}\rangle|^2$ and $|\langle S^{(1)}\rangle|^2$, respectively. In these plots, $(\kappa,\delta_2)/g\sqrt{N}=(0.5,2)$ and $\eta_c=1$. \label{fig:double-EIT}}
\end{figure}
Therefore, we can adiabatically transfer an atomic excitation from
resonator 2 to resonator 1 while remaining in the dark state ($|e_4\rangle$).
The adiabaticity condition is
$\sum_{i\neq4} \frac{|\langle
e_4|\partial_\eta|e_i\rangle|}{e_4-e_i}\dot\eta =\epsilon_1 \dot\eta
\ll 1$, where the sum is over all the states except the dark state of interest. Therefore, a slower transfer is more adiabatic. However, a slower transfer leads to a higher loss of the excitation, which is characterized by: $\sum_{i=1,2}\int (\kappa_{in}
|\langle\mathcal{E}^{(i)}_M\rangle|^2+\gamma |\langle S^{(i)} \rangle|^2)
\frac{1}{\dot \eta}d\eta = \epsilon_2/ \dot \eta$, where the
first (second) term represents the photonic (atomic) loss. The photonic loss is due to the intrinsic loss of the resonator. In general, there should also be a term for the polarization ($|\langle P^{(i)} \rangle|^2$) loss; however, for the dark state transfer, the probability of the excitation being in the excited state is negligible, so one can ignore the polarization loss.
Combining these two conditions, we find that one should always satisfy $\epsilon_1 \epsilon_2 \ll 1$, regardless of the control field changing rate. We can use this condition to find optimized values of the detuning ($\Delta_2$) and the control field crossing $\eta_c$. An example of this optimization is shown in Fig.~3 of the main text.
\bibliographystyle{apsrev}
\section{Introduction}
The so-called \emph{gossip problem} is a problem about peer-to-peer information
sharing: a number of agents each start with some private information, and the
goal is to share this information among all agents, using only peer-to-peer
communication channels~\cite{Tijdeman1971:TelephoneProblem}.
For example, the agents could be autonomous sensors that need to pool their
individual measurements in order to obtain a joint observation.
Or the agents could be distributed copies of a database that can each be edited
separately, and that need to synchronize with each
other~\cite{EGKM2004:Epidemic,HPPRS2016:GossipDiscovery,Irv2016:GosSec}.
The example that is typically used in the literature, however, is a bit more
frivolous: as the name suggests, the gossip problem is usually represented as a
number of people \emph{gossiping}~\cite{Hedetniemi1988:GossipSurvey,DEPRS2015:DynamicGossip,DEPRS2017:EpistemicGossip}.
This term goes back to the oldest sources on the topic, such as~\cite{BakSho1972:GossipPhone}.
The gossip scenario gives us not only the name of the gossip problem, but also
the names of some of the other concepts that are used: the private information
that an agent starts out with is called that agent's \emph{secret}, the
communication between two agents is called a \emph{telephone call} and an agent $a$
is capable of contacting another agent $b$ if $a$ \emph{knows $b$'s telephone
number}.
These terms should not be taken too literally. Results on the
gossip problem can, in theory, be used by people who literally just want to
exchange gossip by telephone. But we model information exchange in general and
ignore all other social and fun aspects of gossip among humans --- although
these aspects can also be modeled in epistemic logic~\cite{Klein2017:LogDynGossipDEL}.
For our framework, applications where artificial agents need to synchronize
their information are much more likely. For example, recent ideas to improve
cryptocurrencies like bitcoin and other blockchain applications focus on the
peer-to-peer exchange (gossip) happening in such networks~\cite{SoLeZo2016:Spectre}
or even aim to replace blockchains with directed graphs storing the history of
communication~\cite{Baird2017:Hashgraph}.
Epistemic logic can shed new light on the knowledge of agents participating in blockchain
protocols~\cite{HalpernPass2017:KnowBlockchain,BruFluStu2017:LogicBlockchain}.
There are many different sets of rules for the gossip problem~\cite{Hedetniemi1988:GossipSurvey}.
For example, calls may be one-on-one, or may be conference calls. Multiple calls
may take place in parallel, or must happen sequentially. Agents may only be
allowed to exchange one secret per call, or exchange everything they know.
Information may go both ways during a call, or only in one direction.
We consider only the most commonly studied set of rules: calls are one-on-one,
calls are sequential, and the callers exchange all the secrets they know. So if
a call between $a$ and $b$ is followed by a call between $b$ and $c$, then in
the second call agent $b$ will also tell agent $c$ the secret of agent $a$.
The goal of gossip is that every agent knows every secret.
An agent who knows all secrets is called an \emph{expert}, so the goal is to
turn all agents into experts.
The \emph{classical} gossip problem, studied in the 1970s, assumed a total
communication network (anyone could call anyone else from the start), and focused
on optimal call sequences, i.e.\ schedules of calls which spread all the secrets
with a minimum number of calls, which happens to be $2n-4$ for $n \geq 4$
agents~\cite{Tijdeman1971:TelephoneProblem,Hurkens2000:GossipEfficiently}.
Later, this strong assumption on the network of the gossiping agents was dropped,
giving rise to studies on different network topologies (see~\cite{Hedetniemi1988:GossipSurvey}
for a survey), with $2n-3$ calls sufficing for most networks.
Unfortunately, these results about optimal call sequences only show that such
call sequences exist. They do not provide any guidance to the agents about how
to achieve an optimal call sequence. Effectively, these solutions assume a
central scheduler with knowledge of the entire network, who will come up with an
optimal schedule of calls, to be sent to the agents, who will eventually execute
it in the correct order.
Most results also rely upon synchrony so that agents can execute their calls at
the appropriate time (i.e.\ after some calls have been made, and before some
other calls are made).
The requirement that there be a central scheduler that tells the agents exactly
what to do, is against the spirit of the peer-to-peer communication that we want
to achieve. Computer science has shifted towards the study of \emph{distributed
algorithms} for the gossip problem~\cite{BLL1999:DiscovDistrib,KSSV2000:RandomRumor}.
Indeed, the gossip problem becomes more natural without a central scheduler;
the gossiping agents try to do their best with the information they have when
deciding whom to call.
Unfortunately, this can lead to sequences of calls that are redundant because
they contain many calls that are uninformative in the sense that neither agent
learns a new secret.
Additionally, the algorithm may fail, i.e., it may deadlock, get stuck in a
loop or terminate before all information has been exchanged.
For many applications it is not realistic to assume that every agent
is capable of contacting every other agent. So we assume that every agent has a
set of agents of which they ``know the telephone number'', their neighbors, so
to say, and that they are therefore able to contact.
We represent this as a directed graph, with an edge from agent $a$ to agent $b$
if $a$ is capable of calling $b$.
In classical studies, this graph is typically considered to be unchanging. In
more recent work on \emph{dynamic gossip} the agents exchange both the secrets
and the numbers of their contacts, therefore increasing the connectivity of the
network~\cite{DEPRS2015:DynamicGossip}. We focus on dynamic gossip. In
distributed protocols for dynamic gossip all agents decide on their own whom to
call, depending on their current information~\cite{DEPRS2015:DynamicGossip}, or
also depending on the expectation for knowledge growth resulting from the
call~\cite{DEPRS2017:EpistemicGossip}. The latter requires agents to represent
each other's knowledge, and thus epistemic logic.
Different protocols for dynamic gossip are successful in different classes of
gossip networks. The main challenge in designing such a protocol is to find a
good level of redundancy: we do not want superfluous calls, but the less
redundant a gossip protocol is, the more easily it fails on particular networks. Another
challenge is to keep the protocol simple. After all, a protocol that requires
the agents to solve a computationally hard problem every time they have to
decide whom to call next, would not be practical.
There is also a trade-off between the content of the message of which a call
consists, and the expected duration of gossip protocols. A nice example of that
is~\cite{HerMaf2017:ShareGossiping}, wherein the minimum number of calls to
achieve the epistemic goal is reduced from quadratic to linear order, however at
the price of more `expensive' messages, not only exchanging secrets but also
knowledge about secrets.
\bigskip
A well-studied protocol is ``Learn New Secrets'' ($\mathit{LNS}$), in which agents are
allowed to call someone if and only if they do not know the other's secret.
This protocol excludes redundant calls in which neither participant learns
any new secrets.
As a result of this property, all $\mathit{LNS}$ call sequences are finite. For small
numbers of agents, it therefore has a shorter expected execution length than
the ``Any Call'' ($\mathit{ANY}$) protocol that allows arbitrary calls at all times
and thus allows infinite call sequences~\cite{DitKokSto2017:GossipReachExpec}.
Additionally, it is easy for agents to check whom they are allowed to call
when following $\mathit{LNS}$.
However, $\mathit{LNS}$ is not always successful. On some graphs it can terminate
unsuccessfully, i.e.\ when some agents do not yet know all secrets.
In particular there are graphs where the outcome depends on how the
agents choose among allowed calls~\cite{DEPRS2015:DynamicGossip}.
\bigskip
Fortunately, it turns out that failure of $\mathit{LNS}$ can often be avoided with some
forethought by the calling agents. That is, if some of the choices available to
the agents lead to success and other choices to failure, it is often possible
for the agents to determine in advance which choices are the successful ones.
This leads to the idea of \emph{strengthening} a protocol. Suppose that $P$
is a protocol that, depending on the choices of the agents, is sometimes
successful and sometimes unsuccessful. A strengthening of $P$ is an addition
to $P$ that gives the agents guidance on how to choose among the options that
$P$ gives them.
The idea is that such a strengthening can leave good properties of a protocol
intact, while reducing the chance of failure. For example, any strengthening of
$\mathit{LNS}$ will inherit the property that there are no redundant calls: It will
still be the case that agents only call other agents if they do not know their
secrets.
\bigskip
Let us illustrate this with a small example, also featuring as a running
example in the technical sections (see Figure~\ref{figure:executionThree} on
page~\pageref{figure:executionThree}).
There are three agents $a,b,c$. Agent $a$ knows the number of $b$, and $b$ and $c$ know each other's number.
Calling agents exchange secrets and numbers, which may expand the network, and they apply the $\mathit{LNS}$ protocol, wherein you may only call other agents if you do not know their secret.
If $a$ calls $b$, it learns the secret of $b$ and the number of $c$.
All different ways to make further calls now result in all three agents knowing
all secrets. If the first call is between $b$ and $c$ (and there are no other
first calls than $ab$, $bc$, and $cb$), they learn each other's secret but no
new number. The only possible next call now is $ab$, after which $a$ and $b$
know all secrets but $c$ does not. But although $a$ now knows $c$'s number, she is
not permitted to call $c$, as she already learned $c$'s secret by calling $b$.
We are stuck. So, some executions of $\mathit{LNS}$ on this graph are successful and
others are unsuccessful.
Suppose we now strengthen the $\mathit{LNS}$ protocol into $\mathit{LNS}'$ such that $b$
and $c$ have to wait before making a call until they are called by another agent.
This means that $b$ will first receive a call from $a$.
Then all executions of $\mathit{LNS}'$ are successful on this graph.
In fact, there is only \emph{one} remaining execution: $ab;bc;ac$.
The protocol $\mathit{LNS}'$ is a \emph{strengthening} of the protocol $\mathit{LNS}$.
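To make this concrete, here is a small Python sketch (ours, independent of the Haskell implementation described in the Appendix; helper names are illustrative) that enumerates all maximal $\mathit{LNS}$ call sequences on this three-agent graph:

```python
def make_call(N, S, a, b):
    # a dynamic gossip call: a and b merge their known numbers and secrets
    N2 = {x: set(v) for x, v in N.items()}
    S2 = {x: set(v) for x, v in S.items()}
    N2[a] = N2[b] = N[a] | N[b]
    S2[a] = S2[b] = S[a] | S[b]
    return N2, S2

def lns_runs(N, S, history=()):
    # LNS: a may call b iff a knows b's number but not yet b's secret
    calls = [(a, b) for a in N for b in N[a] if a != b and b not in S[a]]
    if not calls:  # maximal sequence reached; successful iff everyone is an expert
        yield history, all(len(S[x]) == len(N) for x in S)
        return
    for a, b in calls:
        N2, S2 = make_call(N, S, a, b)
        yield from lns_runs(N2, S2, history + ((a, b),))

# a knows b's number; b and c know each other's number
N0 = {'a': {'a', 'b'}, 'b': {'b', 'c'}, 'c': {'b', 'c'}}
S0 = {'a': {'a'}, 'b': {'b'}, 'c': {'c'}}
for seq, ok in lns_runs(N0, S0):
    print(seq, 'success' if ok else 'stuck')
```

On this graph every sequence starting with $ab$ succeeds, while $bc;ab$ and $cb;ab$ get stuck, matching the analysis above.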
\bigskip
The main contributions of this paper are as follows.
We define what it means that a gossip protocol is common knowledge between all agents.
To that end we propose a logical semantics with an individual knowledge modality for protocol-dependent knowledge.
We then define various strengthenings of gossip protocols, both in the logical syntax and in the semantics.
This includes a strengthening called uniform backward induction, a form of backward induction applied to (imperfect information) gossip protocol execution trees.
We give some general results for strengthenings, but mainly apply our strengthenings to the protocol $\mathit{LNS}$: we investigate some basic gossip graphs (networks) on which we gradually strengthen $\mathit{LNS}$ until all its executions are successful on that graph.
However, no such strengthening will work for all gossip graphs. This is proved
by a counterexample consisting of a six-agent gossip graph, that requires fairly
detailed analysis. Some of our results involve the calculation and checking of
large numbers of call sequences. For this we use an implementation in Haskell.
\bigskip
Our paper is structured as follows.
In Section~\ref{sec:ELfDGP} we introduce the basic definitions to describe
gossip graphs and a variant of epistemic logic to be interpreted on them.
In particular, Subsection~\ref{subsec:protodep}
introduces a new operator for protocol-dependent knowledge.
In Section~\ref{sec:strengthening} we define semantic and --- using the new
operator --- syntactic ways to strengthen gossip protocols. We investigate how
successful those strengthenings are and study their behavior under iteration.
Section~\ref{sec:imposs} contains our main result, that strengthening $\mathit{LNS}$ to
a strongly successful protocol is impossible.
In Section~\ref{sec:generalizations} we wrap up and conclude.
The Appendix describes the Haskell code used to support our results.
\section{Epistemic Logic for Dynamic Gossip Protocols}\label{sec:ELfDGP}
\subsection{Gossip Graphs and Calls}
\emph{Gossip graphs} are used to keep track of who knows which secrets
and which telephone numbers.
\begin{definition}[Gossip Graph]\label{def:ggs}
Given a finite set of agents $A$, a \emph{gossip graph} $G$ is a triple
$(A,N,S)$ where $N$ and $S$ are binary relations on $A$ such that
$I \subseteq S \subseteq N$ where $I$ is the identity relation on $A$.
An \emph{initial gossip graph} is a gossip graph where $S = I$.
We write $N_ab$ for $(a,b) \in N$ and $N_a$ for $\{ b \in A \mid N_ab \}$, and similarly for the relation $S$.
The set of all initial gossip graphs is denoted by $\mathcal{G}$.
\end{definition}
The relations model the basic knowledge of the agents.
Agent $a$ \emph{knows the number} of $b$ iff $N_a b$
and $a$ \emph{knows the secret} of $b$ iff $S_a b$.
If we have $N_a b$ and not $S_a b$ we also say that
$a$ knows the \emph{pure number} of $b$.
\begin{definition}[Possible Call; Call Execution]\label{def:calls}
A \emph{call} is an ordered pair of agents $(a,b) \in A \times A$.
We usually write $ab$ instead of $(a,b)$.
Given a gossip graph $G$, a call $ab$ is \emph{possible} iff $N_a b$.
Given a possible call $ab$, $G^{ab}$ is the graph $(A',N',S')$ such that
$A':=A$,
$N'_a := N'_b := N_a \cup N_b$,
$S'_a := S'_b := S_a \cup S_b$, and
$N'_c := N_c$, $S'_c := S_c$ for $c \neq a,b$.
For a sequence of calls $ab;cd;\dots$ we write $\sigma$ or $\tau$.
The empty sequence is $\epsilon$.
A sequence of possible calls is a \emph{possible call sequence}.
We extend the notation $G^{ab}$ to possible call sequences by $G^\epsilon := G$ and $G^{\sigma;ab} := {(G^\sigma)}^{ab}$.
Gossip graph $G^\sigma$ is the result of \emph{executing} $\sigma$ in $G$.
\end{definition}
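The definition of call execution translates directly into code. The following Python sketch (our illustration, with hypothetical helper names) computes $G^{ab}$ and $G^\sigma$:

```python
def call(G, a, b):
    # G^{ab}: a and b pool numbers and secrets; all other agents are unchanged
    A, N, S = G
    assert b in N[a], "the call ab is only possible if N_a b holds"
    N2 = {c: set(N[c]) for c in A}
    S2 = {c: set(S[c]) for c in A}
    N2[a] = N2[b] = N[a] | N[b]
    S2[a] = S2[b] = S[a] | S[b]
    return A, N2, S2

def execute(G, sigma):
    # G^sigma, via G^epsilon = G and G^{sigma;ab} = (G^sigma)^{ab}
    for a, b in sigma:
        G = call(G, a, b)
    return G

# the three-agent initial gossip graph from the introduction
A = {'a', 'b', 'c'}
G0 = (A,
      {'a': {'a', 'b'}, 'b': {'b', 'c'}, 'c': {'b', 'c'}},
      {'a': {'a'}, 'b': {'b'}, 'c': {'c'}})
_, N1, S1 = execute(G0, [('a', 'b')])
print(sorted(N1['a']), sorted(S1['a']))  # -> ['a', 'b', 'c'] ['a', 'b']
```

After the single call $ab$, agent $a$ has learned $b$'s secret and $c$'s number, exactly as in the example below.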
To visualize gossip graphs we draw $N$ with dashed and $S$ with solid arrows.
When making calls, the property $S \subseteq N$ is preserved,
so we omit the dashed $N$ arrow if there already is a solid $S$ arrow.
\begin{example}\label{example:simpleIntro}
Consider the following initial gossip graph $G$ in which $a$ knows the number
of $b$, and $b$ and $c$ know each other's number and no other numbers are known:
\begin{center}
\begin{tikzpicture}[node distance=1.5cm,>=latex]
\node (a) [] {$a$};
\node (b) [right of=a] {$b$};
\node (c) [right of=b] {$c$};
\draw (a) [dashed,->] -- (b);
\draw (b) [dashed,<->] -- (c);
\end{tikzpicture}
\end{center}
Suppose that $a$ calls $b$. We obtain the gossip graph $G^{ab}$ in which
$a$ and $b$ know each other's secret and $a$ now also knows the number of $c$:
\begin{center}
\begin{tikzpicture}[node distance=1.5cm,>=latex]
\node (a) [] {$a$};
\node (b) [right of=a] {$b$};
\node (c) [right of=b] {$c$};
\draw (a) [<->] -- (b);
\draw (b) [dashed,<->] -- (c);
\draw[->,dashed] (a) to[bend left] (c);
\end{tikzpicture}
\end{center}
\end{example}
\subsection{Logical Language and Protocols}
\label{subsec:language}
We now introduce a logical language which we will interpret on gossip graphs.
Propositional variables $N_ab$ and $S_ab$ stand for ``agent $a$ knows the number of agent $b$'' and ``agent $a$ knows the secret of agent $b$'', and $\top$ is the `always true' proposition.
Definitions~\ref{def:language} and~\ref{def:protocol} are by simultaneous induction, as the language construct $K_a^P \phi$ refers to a protocol $P$.
\begin{definition}[Language]\label{def:language}
We consider the language $\mathcal{L}$ defined by
\[ \begin{array}{lll}
\phi & ::= & \top \mid N_ab \mid S_ab \mid \neg \phi \mid (\phi \wedge \phi) \mid K_a^P \phi \mid [\pi] \phi \\[0.5em]
\pi & ::= & ?\phi \mid ab \mid (\pi \ ; \ \pi) \mid (\pi \cup \pi) \mid \pi^\ast
\end{array} \]
where $a,b \in A$.
Members of $\mathcal{L}$ of type $\phi$ are \emph{formulas} and those of type $\pi$ are \emph{programs}.
\end{definition}
\begin{definition}[Syntactic protocol]\label{def:protocol}
A \emph{syntactic protocol}
$P$ is a program defined by
\[ P :=
{\left(\bigcup_{a \neq b \in A} \left(? (N_ab \land P_{ab}) ; ab \right)\right)}^\ast ;
?\bigwedge_{a \neq b \in A} \neg \left( N_ab \land P_{ab} \right)
\] where for all $a \neq b \in A$, $P_{ab}\in\mathcal{L}$ is a formula.
This formula is called the \emph{protocol condition} for call $ab$ of protocol $P$.
The notation $P_{ab}$ means that $a$ and $b$ are designated variables in that formula.
\end{definition}
Other logical connectives and program constructs are defined by abbreviation.
Moreover, $N_a bcd$ stands for $N_ab \wedge N_ac \wedge N_ad$, and $N_a B$ for $\bigwedge_{b \in B}N_ab$.
We use analogous abbreviations for the relation $S$.
We write $\mathit{Ex}_a$ for $S_a A$.
We then say that agent $a$ is an \emph{expert}.
Similarly, we write $\mathit{Ex}_B$ for $\bigwedge_{b \in B} \mathit{Ex}_b$, and $\mathit{Ex}$ for $\mathit{Ex}_A$: all agents are experts.
Construct $[\pi] \phi$ reads as ``after every execution of program $\pi$, $\phi$ (is true).''
For program modalities, we use the standard definition for diamonds:
$\langle\pi\rangle\phi := \lnot[\pi]\lnot\phi$, and further: $\pi^0 := {?\top}$ and for all $n\in\mathbb{N}$, $\pi^n := \pi^{n-1};\pi$.
Our protocols are \emph{gossip} protocols, but as we define no other, we omit the word `gossip'.
The word `syntactic' in syntactic protocol is to distinguish it from the semantic protocol that will be defined later.
It is also often omitted.
Our new operator $K_a^P \phi$ reads as ``given the protocol $P$, agent $a$ knows that $\phi$''.
Informally, this means that agent $a$ knows that $\phi$ on the assumption that it is common knowledge among the agents that they all use the gossip protocol $P$.
The epistemic dual is defined as $\hat{K}_a^P \phi := \lnot K_a^P \lnot \phi$ and can be read as ``given the protocol $P$, agent $a$ considers it possible that $\phi$.''
We note that the language is well-defined, in particular $K_a^P$.
The only variable parts of a protocol $P$ are the protocol conditions $P_{ab}$.
Hence, given $|A|$ agents, and the requirement that $a \neq b$, a protocol is determined by its $|A| \cdot (|A|-1)$ many protocol conditions.
We can therefore see the construct $K_a^P \phi$ as an operator with input $(|A| \cdot (|A|-1))+1$ objects of type formula (namely all these protocol condition formulas plus the formula $\phi$ in $K_a^P \phi$), and as output a more complex object of type formula (namely $K_a^P \phi$).%
\footnote{Alternatively one could define a \emph{protocol condition function} $f \colon A^2 \rightarrow \mathcal{L}$ and proceed as follows.
In the language BNF replace $K_a^P \phi$ by $K_a (\vec{\phi_{ab}}, \phi)$ where $a \neq b$ and $\vec{\phi_{ab}}$ is a vector representing $|A| \cdot (|A|-1)$ arguments, and in the definition of protocol replace $P_{ab}$ by $f(a,b)$.
That way, Definition~\ref{def:language} precedes Definition~\ref{def:protocol} and is no longer simultaneously defined.
Then, when later defining the semantics of $K_a (\vec{\phi_{ab}}, \phi)$, replace all $\phi_{ab}$ by $f(a,b)$.}
Note that this means that all knowledge operators in a call condition $P_{ab}$ of a protocol $P$ must be relative to protocols strictly simpler than $P$.
In particular, the call condition $P_{ab}$ cannot contain the operator $K_a^P$, although it may contain $K_a^{P'}$ where $P'$ is less complex than $P$.
So the language is incapable of describing the ``protocol'' $X$ given by ``$a$ is allowed to call $b$ if and only if $a$ knows, assuming that $X$ is common knowledge, that $b$ does not know $a$'s secret.''
This is intentional; the ``protocol'' $X$ is viciously circular so we do not want our language to be able to represent it.
\begin{example}\label{example:LNS}
The ``Learn New Secrets'' protocol ($\mathit{LNS}$) is the protocol with protocol conditions $\lnot S_ab$ for all $a \neq b \in A$.
This prescribes that you are allowed to call any agent whose secret you do not yet know (and whose number you already know).
The ``Any Call'' protocol ($\mathit{ANY}$) is the protocol with protocol conditions $\top$ for all $a \neq b \in A$.
You are allowed to call any agent whose number you know.
\end{example}
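The protocol conditions of $\mathit{LNS}$ and $\mathit{ANY}$ can be read as predicates on the current gossip graph. A minimal Python sketch (ours; names are illustrative):

```python
def possible(N, a, b):
    # the call ab is possible iff a knows b's number
    return a != b and b in N[a]

# protocol conditions P_ab, evaluated on the current (N, S):
LNS = lambda N, S, a, b: b not in S[a]  # call b iff you do not know b's secret
ANY = lambda N, S, a, b: True           # call anyone whose number you know

def permitted(P, N, S):
    # all P-permitted possible calls in the current gossip graph
    return sorted((a, b) for a in N for b in N[a]
                  if possible(N, a, b) and P(N, S, a, b))

# the three-agent graph from the introduction:
N = {'a': {'a', 'b'}, 'b': {'b', 'c'}, 'c': {'b', 'c'}}
S = {'a': {'a'}, 'b': {'b'}, 'c': {'c'}}
print(permitted(LNS, N, S))  # -> [('a', 'b'), ('b', 'c'), ('c', 'b')]
print(permitted(ANY, N, S))  # identical here: no secrets have been exchanged yet
```

In the initial graph the two protocols permit the same calls; they only diverge once some secrets have been learned, since $\mathit{LNS}$ then rules out calls to known secrets.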
\noindent
The standard epistemic modality is defined by abbreviation as $K_a \phi := K^\mathit{ANY}_a \phi$.
\subsection{Semantics of Protocol-Dependent Knowledge}\label{subsec:protodep}
We now define how to interpret the language $\mathcal{L}$ on gossip graphs.
A \emph{gossip state} is a pair $(G,\sigma)$ such that $G$ is an initial gossip graph and $\sigma$ a call sequence possible on $G$ (see Def.~\ref{def:calls}).
We recall that $G$ and $\sigma$ induce the gossip graph $G^\sigma = (A,N^\sigma,S^\sigma)$.
This is called the gossip graph \emph{associated} with gossip state $(G,\sigma)$.
The semantics of $\mathcal{L}$ is with respect to a given initial gossip graph $G$, and defined on the set of gossip states $(G,\sigma)$ for all $\sigma$ possible on $G$.
Definitions~\ref{def:SyncEpistRel} and~\ref{def:Semantics} are simultaneously defined.
\begin{definition}[Epistemic Relation]\label{def:SyncEpistRel}
Let an initial gossip graph $G = (A,N,S)$ and a protocol $P$ be given.
We inductively define the \emph{epistemic relation} $\sim_a^P$ for agent $a$ over gossip states $(G,\sigma)$, where $G^\sigma = (A,N^\sigma,S^\sigma)$ are the associated gossip graphs.
\begin{enumerate}
\item $(G,\epsilon) \sim_a^P (G,\epsilon)$;
\item if $(G,\sigma) \sim_a^P (G,\tau)$, $N^\sigma_b = N^\tau_b$, $S^\sigma_b = S^\tau_b$, and $ab$ is $P$-permitted at $(G,\sigma)$ and at $(G,\tau)$, then $(G,\sigma;ab) \sim_a^P (G,\tau;ab)$; \\
if $(G,\sigma) \sim_a^P (G,\tau)$, $N^\sigma_b = N^\tau_b$, $S^\sigma_b = S^\tau_b$, and $ba$ is $P$-permitted at $(G,\sigma)$ and at $(G,\tau)$, then $(G,\sigma;ba) \sim_a^P (G,\tau;ba)$;
\item if $(G,\sigma) \sim_a^P (G,\tau)$ and $c,d,e,f \neq a$ such that $cd$ is $P$-permitted at $(G,\sigma)$ and $ef$ is $P$-permitted at $(G,\tau)$, then $(G,\sigma;cd) \sim_a^P (G,\tau;ef)$.
\end{enumerate}
\end{definition}
\begin{definition}[Semantics]\label{def:Semantics}
Let initial gossip graph $G = (A,N,S)$ be given.
We inductively define the interpretation of a formula $\phi \in \mathcal{L}$ on a gossip
state $(G,\sigma)$, where $G^\sigma = (A,N^\sigma,S^\sigma)$ is the associated gossip graph.
\[
\begin{array}{lll}
G,\sigma \models \top & & \text{always} \\
G,\sigma \models N_ab & \text{iff} & N^\sigma_a b \\
G,\sigma \models S_ab & \text{iff} & S^\sigma_a b \\
G,\sigma \models \neg \phi & \text{iff} & G,\sigma \not \models \phi \\
G,\sigma \models \phi\land\psi & \text{iff} & G,\sigma \models \phi \text{ and } G,\sigma \models \psi \\
G,\sigma \models K_a^P \phi & \text{iff} & G,\sigma' \models \phi \text{ for all } (G,\sigma') \sim_a^P (G,\sigma) \\
G,\sigma \models [\pi] \phi & \text{iff} & G,\sigma' \models \phi \text{ for all } (G,\sigma') \in \llbracket \pi \rrbracket (G,\sigma) \\
\end{array}
\]
where $\llbracket \cdot \rrbracket$ is the following interpretation of programs as relations between gossip states.
Note that we write $\llbracket \pi \rrbracket(G,\sigma)$ for the set $\{(G,\sigma') \mid ((G,\sigma), (G, \sigma')) \in \llbracket \pi \rrbracket \}$.
\[
\begin{array}{rcl}
\llbracket ?\phi \rrbracket {(G,\sigma)} & := & \{ (G,\sigma) \mid G,\sigma \models \phi \} \\
\llbracket ab \rrbracket {(G,\sigma)} & := & \{ (G,(\sigma;ab)) \mid G,\sigma \models N_ab \} \\
\llbracket \pi;\pi' \rrbracket {(G,\sigma)} & := & \bigcup \{ \llbracket \pi' \rrbracket {(G,\sigma')} \mid (G,\sigma') \in \llbracket \pi \rrbracket {(G,\sigma)} \} \\
\llbracket \pi\cup\pi' \rrbracket {(G,\sigma)} & := & \llbracket \pi \rrbracket {(G,\sigma)} \cup \llbracket \pi' \rrbracket {(G,\sigma)}\\
\llbracket \pi^\ast \rrbracket {(G,\sigma)} & := & \bigcup \{ \llbracket \pi^n \rrbracket {(G,\sigma)} \mid n \in \mathbb{N} \} \\
\end{array}
\]
If $G,\sigma \models P_{ab}$ we say that $ab$ is \emph{$P$-permitted} at $(G,\sigma)$.
A \emph{$P$-permitted call sequence} consists of $P$-permitted calls.
\end{definition}
Let us first explain why the interpretation of protocol-dependent knowledge is well-defined.
The interpretation of $K^P_a \phi$ in state $(G,\sigma)$ is a function of the truth of $\phi$ in all $(G,\tau)$ accessible via $\sim_a^P$.
This is standard.
Non-standard is that the relation $\sim_a^P$ is a function of the truth of protocol conditions $P_{ab}$ in gossip states including $(G,\sigma$).
This may seem a slippery slope.
However, note that $K^P_a \phi$ cannot be a subformula of any such $P_{ab}$, as the language $\mathcal{L}$ is well-defined: knowledge cannot be self-referential.
These checks of $P_{ab}$ can therefore be performed without vicious circularity.
Let us now explain an important property of $\sim_a^P$, namely that it only relates two gossip states if both are reachable by the protocol $P$.
So if $(G,\sigma)\sim_a^P(G,\sigma')$ and $\sigma$ is a $P$-permitted call sequence, then $\sigma'$ is $P$-permitted as well.
In other words, $a$ assumes that no one will make any calls that are not $P$-permitted.
The set $\{\sim_a^P\mid a\in A\}$ of relations therefore represents the information state of the agents under the assumption that it is common knowledge that the protocol $P$ will be followed.
Given the logical semantics, a convenient primitive is the following \emph{gossip model}.
\begin{definition}[Gossip Model; Execution Tree]\label{def:ModelStatTree}
Given an initial gossip graph $G$, the \emph{gossip model} for $G$ consists of
all \emph{gossip states} $(G,\sigma)$ (where, by definition of gossip states, $\sigma$ is possible on $G$), with epistemic relations $\sim^P_a$ between gossip states.
The \emph{execution tree} of a protocol $P$ given $G$ is the submodel of the
gossip model restricted to the set of those $(G,\sigma)$ where $\sigma$ is
$P$-permitted.
\end{definition}
The relation $\sim_a^P$ is an equivalence relation on the restriction of a gossip model to the set of gossip states $(G,\sigma)$ where $\sigma$ is $P$-permitted.
This is why we use the symbol $\sim$ for the relation.
However, $\sim_a^P$ is typically not an equivalence relation on the entire domain of the gossip model, as $\sim_a^P$ is not reflexive on unreachable gossip states $(G,\sigma)$.
In our semantics, the modality $[ab]$ can always be evaluated.
There are three cases to distinguish.
$(i)$ If the call $ab$ is not possible (if $a$ does not know the number of $b$), then $\llbracket ab \rrbracket (G,\sigma) = \emptyset$, so that $[ab]\phi$ is trivially true for all $\phi$.
$(ii)$ If the call $ab$ is possible but not $P$-permitted, then $\llbracket ab \rrbracket(G,\sigma) = \{(G,\sigma ; ab)\}$ but $\sim_a^P(G,\sigma;ab) = \emptyset$, so that in such states $K^P_a \bot$ is true: the agent believes everything including contradictions. In other words, we have that $\lnot P_{ab} \rightarrow [ab] K_c^P \bot$.
$(iii)$ If the call $ab$ is possible and $P$-permitted, then $\llbracket ab \rrbracket(G,\sigma) = \{(G,\sigma ; ab)\}$ and $\sim_a^P(G,\sigma;ab) \neq \emptyset$ consists of the equivalence class of gossip states that are indistinguishable for agent $a$ after call $ab$.
In view of the above, one might want to have a modality or program strictly standing for `call $ab$ is possible and $P$-permitted'. We can enforce protocol $P$ for call $ab$ by $[?P_{ab};ab]\phi$, for ``after the $P$-permitted call $ab$, $\phi$ is true.''
Let us now make precise in what sense the gossip model is a Kripke model. Clearly, the set of gossip states $(G,\sigma)$ constitutes a \emph{domain}, and we can identify the valuation of an atomic proposition $N_ab$ (resp.\ $S_ab$) with the subset of the domain consisting of those $(G,\sigma)$ such that $(G,\sigma) \models N_ab$ (resp.\ $(G,\sigma) \models S_ab$).
The relation to the usual accessibility relations of a Kripke model is less clear.
For each agent $a$, we do not have a unique relation $\sim_a$, but parametrized relations $\sim^P_a$; therefore, in a way, there are as many relations for agent $a$ as there are protocols $P$. These relations $\sim^P_a$ are only implicitly given. Given $P$, they can be made explicit if a semantic check of $K^P_a \phi$ so requires.
Gossip models are reminiscent of the history-based models of~\cite{parikhetal:2003} and of the protocol-generated forest of~\cite{jfaketal.JPL:2009}.
A gossip model is a protocol-generated forest (and similarly, the execution trees contained in the gossip model are protocol-generated forests), although a rather small forest, namely consisting of a single tree.
An important consequence of this is that the agents initially have \emph{common knowledge of the gossip graph}.
For example, in the initial gossip graph of the introduction, depicted in Figure~\ref{figure:executionThree}, agent $a$ knows that agent $c$ only knows the number of $b$.
Other works consider uncertainty about the initial gossip graph (for example, to represent that agent $a$ is uncertain whether $c$ knows $a$'s number), such that each gossip graph initially considered possible generates its own tree \cite{DEPRS2017:EpistemicGossip}.
The gossip states $(G,\sigma)$ that are the domain elements of the gossip model carry along a \emph{history} of prior calls.
This can, in principle, be used in a protocol language to be interpreted on such models, although we do not do this in this work.
An example of such a protocol is the ``Call Once'' protocol described in~\cite{DEPRS2015:DynamicGossip}: call $ab$ is permitted in gossip state $(G,\sigma)$, if $ab$ and $ba$ do not occur in $\sigma$.
With respect to the protocol $\mathit{ANY}$ the gossip model is not restricted.
If we were only to consider the protocol $\mathit{ANY}$, then we could associate with each agent a unique epistemic relation $\sim^\mathit{ANY}_a$ in the gossip model, for which we might as well write $\sim_a$. We then have a standard Kripke model. This justifies $K_a \phi$ as a suitable abbreviation of $K^\mathit{ANY}_a \phi$.
\begin{definition}[Extension of a protocol]\label{def:extension}
For any initial gossip graph $G$ and any syntactic protocol $P$ we define the
\emph{extension of $P$ on $G$} by
\[ \begin{array}{lcl}
P_0(G) &:=& \{ \epsilon \} \\
P_{i+1}(G) &:=&
\{ \sigma;ab \mid
\sigma \in P_i(G),
\ a,b \in A,
\ G,\sigma \models P_{ab}
\} \\
P(G) &:=& \bigcup_{i<\omega} P_i(G)
\end{array} \]
The \emph{extension of $P$} is $\{ (G,P(G)) \mid G \in \mathcal{G} \}$.
\end{definition}
Recall that $\mathcal{G}$ is the set of all initial gossip graphs. We often
identify a protocol with its extension. To compare protocols we will write
$P \subseteq P'$ iff for all $G \in \mathcal{G}$ we have $P(G) \subseteq P'(G)$.
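For concreteness, the inductive definition of the extension can be prototyped as follows. This is a sketch under our own, hypothetical encoding (not from the paper): an initial gossip graph is a dict mapping each agent to the set of numbers she knows, and a protocol is represented by its condition as a Boolean function of $(G,\sigma,a,b)$. We use $\mathit{LNS}$ and the three-agent graph of the running example.

```python
# Hypothetical encoding (our own, for illustration): an initial gossip graph
# is a dict mapping each agent to the set of phone numbers she knows;
# initially every agent knows only her own secret.

def run(G, sigma):
    """Numbers N and secrets S of every agent after call sequence sigma."""
    N = {a: set(ns) for a, ns in G.items()}
    S = {a: {a} for a in G}
    for a, b in sigma:
        N[a] = N[b] = N[a] | N[b]  # a call merges the numbers ...
        S[a] = S[b] = S[a] | S[b]  # ... and the secrets of both agents
    return N, S

def lns(G, sigma, a, b):
    """Protocol condition LNS_ab: a knows b's number but not b's secret."""
    N, S = run(G, sigma)
    return b in N[a] and b not in S[a]

def extension(G, cond, limit=10):
    """P_0(G), P_1(G), ... as in the definition; `limit` cuts off protocols
    such as ANY, whose extension is infinite."""
    ext, level = {()}, [()]
    for _ in range(limit):
        level = [s + ((a, b),) for s in level
                 for a in G for b in G if a != b and cond(G, s, a, b)]
        if not level:
            break
        ext |= set(level)
    return ext

# The running example: a knows b's number; b and c know each other's numbers.
G = {"a": {"a", "b"}, "b": {"b", "c"}, "c": {"b", "c"}}
lns_ext = extension(G, lns)
```

On this graph the sketch reproduces the execution tree of Figure~\ref{figure:executionThree}: twelve $\mathit{LNS}$-permitted sequences in total, with $ab$, $bc$ and $cb$ as the possible first calls.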
\begin{definition}[Success]\label{def-success}
Given an initial gossip graph $G$ and protocol $P$, a $P$-permitted call sequence $\sigma$ is \emph{terminal} iff for all calls $ab$, $G,\sigma \not \models P_{ab}$.
We then also say that the gossip state $(G,\sigma)$ is \emph{terminal}.
A terminal call sequence is \emph{successful} iff after its execution all agents are experts.
Otherwise it is \emph{unsuccessful}.
\begin{itemize}
\item A protocol $P$ is \emph{strongly successful} on $G$ iff all terminal $P$-permitted call sequences are successful: $G, \epsilon \models [P]\mathit{Ex}$.
\item A protocol is \emph{weakly successful} on $G$ iff some terminal $P$-permitted call sequence is successful: $G, \epsilon \models \dia{P}\mathit{Ex}$.
\item A protocol is \emph{unsuccessful} on $G$ iff no terminal $P$-permitted call sequence is successful: $G, \epsilon \models [P]\lnot\mathit{Ex}$.
\end{itemize}
A protocol is \emph{strongly successful} iff it is strongly successful on all initial gossip
graphs $G$, and similarly for weakly successful and unsuccessful.
\end{definition}
Instead of `is successful' we also say `\emph{succeeds}', and instead of `terminal sequence' we also say that the sequence is \emph{terminating}.
Given a gossip graph $G$ and a $P$-permitted sequence $\sigma$ we say that the associated gossip graph $G^\sigma$ is \emph{$P$-reachable} (from $G$).
A terminal $P$-permitted sequence is also called an \emph{execution} of $P$.
Given any set $X$ of call sequences, $\overline{X}$ is the subset of the terminal sequences of $X$.
All our protocols can always be executed.
If no call is ever permitted, the protocol is executed without making any calls, and its extension contains only the empty sequence $\epsilon$.
Even then $[P]\bot$ does not hold; in fact, $[P]\bot$ never holds.
Strong success implies weak success, but not vice versa.
Formally, we have that $[P]\phi \to \langle P \rangle \phi$ is valid for all
protocols $P$, but $\langle P \rangle \phi \to [P]\phi$ is not valid in general,
because our protocols are typically non-deterministic.
We can distinguish unsuccessful termination (not all agents know all secrets) from successful termination.
In other works~\cite{DEPRS2015:DynamicGossip,AptWoj2017:GossipCK} this distinction cannot be made.
In those works termination implies success.
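On small graphs these success notions can be checked mechanically. The sketch below (again our own illustration, reusing the hypothetical encoding from before) enumerates the terminal $P$-permitted sequences and classifies a protocol on a given initial graph; on the running example it confirms that $\mathit{LNS}$ is weakly but not strongly successful.

```python
# Hypothetical encoding (our own): a graph is a dict of known numbers,
# a protocol is given by its condition on (G, sigma, a, b).

def run(G, sigma):
    N = {a: set(ns) for a, ns in G.items()}
    S = {a: {a} for a in G}
    for a, b in sigma:
        N[a] = N[b] = N[a] | N[b]
        S[a] = S[b] = S[a] | S[b]
    return N, S

def lns(G, sigma, a, b):
    N, S = run(G, sigma)
    return b in N[a] and b not in S[a]

def extension(G, cond, limit=10):
    ext, level = {()}, [()]
    for _ in range(limit):
        level = [s + ((a, b),) for s in level
                 for a in G for b in G if a != b and cond(G, s, a, b)]
        if not level:
            break
        ext |= set(level)
    return ext

def terminal(G, cond, sigma):
    """No call is cond-permitted after sigma."""
    return not any(cond(G, sigma, a, b) for a in G for b in G if a != b)

def classify(G, cond, limit=10):
    """'strongly successful', 'weakly successful' or 'unsuccessful' on G."""
    term = [s for s in extension(G, cond, limit) if terminal(G, cond, s)]
    succ = [s for s in term
            if all(run(G, s)[1][a] == set(G) for a in G)]  # all experts
    if term and len(succ) == len(term):
        return "strongly successful"
    return "weakly successful" if succ else "unsuccessful"

G = {"a": {"a", "b"}, "b": {"b", "c"}, "c": {"b", "c"}}
term = [s for s in extension(G, lns) if terminal(G, lns, s)]
```

The five terminal sequences correspond to the five leaves of Figure~\ref{figure:executionThree}: three marked $\checkmark$ and two marked $\times$.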
\begin{example}\label{example:executionThree}
We continue with Example~\ref{example:simpleIntro}.
The execution tree of $\mathit{LNS}$ on this graph is shown in Figure~\ref{figure:executionThree}.
We denote calls with gray arrows and the epistemic relation with dotted lines.
For example, agent $a$ cannot distinguish whether call $bc$ or $cb$ happened.
At the end of each branch the termination of $\mathit{LNS}$ is denoted with $\checkmark$ if successful, and $\times$ if unsuccessful.
\begin{figure}[ht!]
\centering
\tikzstyle{world} = [draw]
\scalebox{0.73}{\begin{tikzpicture}
\node (blank1) at (0, .7) {};
\node (blank2) at (5.5, 0) {};
\node (blank3) at (0, -7.2) {};
\node[world] (abbccb) at (0, 0) {\wordthreeagents {->, dashed} {<->, dashed}};
\node[world] (ABbccb) at (-4, -2) {\wordthreeagentswithABwithac{<->, dashed}};
\node[world] (BCCB) at (0, -2) {\wordthreeagents{->,dashed} {<->}};
\node[world] (CBBC) at (4, -2) {\wordthreeagents{->,dashed} {<->}};
\node[world] (ACABbccb) at (-12, -4) {\wordthreeagentswithABwithAC{<->, dashed}};
\node[world] (ABBCCB) at (-8, -4) {\wordthreeagentswithABwithBC{<->, dashed}};
\node[world] (ABCBBC) at (-4, -4) {\wordthreeagentswithABwithBC{<->, dashed}};
\node[world] (BCCBAB) at (0, -4) {\wordthreeagentswithABwithACbis{->,dashed}};
\node[world] (CBBCAB) at (4, -4) {\wordthreeagentswithABwithACbis{->,dashed}};
\node[world] (total1) at (-12, -6) {\wordthreeagentsTOTAL};
\node[world] (total2) at (-8, -6) {\wordthreeagentsTOTAL};
\node[world] (total3) at (-4, -6) {\wordthreeagentsTOTAL};
\callexample{abbccb}{ABbccb}{ab}{0}
\callexample{abbccb}{BCCB}{bc}{0}
\callexample{abbccb}{CBBC}{cb}{0}
\callexample{ABbccb}{ACABbccb}{ac}{0}
\callexample{ABbccb}{ABBCCB}{bc}{0}
\callexample{ABbccb}{ABCBBC}{cb}{0}
\callexample{BCCB}{BCCBAB}{ab}{0}
\callexample{CBBC}{CBBCAB}{ab}{0}
\callexample{ACABbccb}{total1}{bc}{0}
\callexample{ABBCCB}{total2}{ac}{0}
\callexample{ABCBBC}{total3}{ac}{0}
\indist{ABBCCB}{ABCBBC}{a}{-15}
\indist{total2}{total3}{a}{-15}
\indist{BCCB}{CBBC}{a}{-15}
\indist{BCCBAB}{CBBCAB}{a}{-15}
\node (v1) at (-12, -7) {$\checkmark$};
\node (v2) at (-8, -7) {$\checkmark$};
\node (v3) at (-4, -7) {$\checkmark$};
\node (x1) at (0, -5) {$\times$};
\node (x2) at (4, -5) {$\times$};
\end{tikzpicture}}
\caption{Example of an execution tree for $\mathit{LNS}$.}\label{figure:executionThree}
\end{figure}
To illustrate our semantics, for this graph $G$ we have:
\begin{itemize}
\item $G,\epsilon \models N_a b \land \lnot S_a b$ ---
the call $ab$ is $\mathit{LNS}$-permitted at the start.
\item $G,\epsilon \models [ab] (S_a b \land S_b a)$ ---
after the call $ab$ the agents $a$ and $b$ know each other's secret.
\item $G,\epsilon \models [ab] \langle ac \rangle \top$ ---
after the call $ab$ the call $ac$ is possible.
\item $G,\epsilon \models [ab] [LNS] \mathit{Ex}$ ---
after the call $ab$ the $\mathit{LNS}$ protocol will always terminate successfully.
\item $G,\epsilon \models [bc \cup cb] [LNS] \lnot\mathit{Ex}$ ---
after the calls $bc$ or $cb$ the $\mathit{LNS}$ protocol will always terminate unsuccessfully.
\item $G,\epsilon \models [bc \cup cb] K_a^{LNS} (S_b c \land S_c b)$ ---
after the calls $bc$ or $cb$, agent $a$ knows that $b$ and $c$ know each
other's secret.
\item $G,ab;bc;ac \models \bigwedge_{i \in \{ a, b, c \}} K_i^{LNS} \mathit{Ex}$ ---
after the call sequence $ab;bc;ac$ everyone knows that everyone is an expert.
\end{itemize}
We only have epistemic edges for agent $a$, and those are between states with identical gossip graphs.
If there are three agents, then if you are not involved in a call, you know that the other two agents must have called.
You may only be uncertain about the direction of that call.
But the direction of the call does not matter for the numbers and secrets being exchanged.
Hence all agents always know what the current gossip graph is.
For a more interesting epistemic relation, see Figure~\ref{figure:nExampleTreePart} in the Appendix.
\end{example}
\subsection{Symmetric and epistemic protocols, and semantic protocols}
Given a protocol $P$, for any $a\neq b$ and $c\neq d$, the protocol conditions $P_{ab}$ and $P_{cd}$ can be different formulas. So a protocol may require different agents to obey different rules.
Although there are settings wherein this is interesting to investigate, we want to restrict our investigation to those protocols where there is one protocol condition to rule them all.
This is enforced by the requirement of \emph{symmetry}.
Another requirement is that the calling agent should know that the protocol condition is satisfied before making a call.
That is the requirement that the protocol be \emph{epistemic}.
It is indispensable in order to see our protocols as \emph{distributed} gossip protocols.
\begin{definition}[Symmetric and epistemic syntactic protocol]\label{def:SynSymEpis}
Let a syntactic protocol $P$ be given. Protocol $P$ is \emph{symmetric} iff
for every permutation $J$ of agents, we have $\phi_{J(a)J(b)}=J(\phi_{ab})$,
where $J(\phi_{ab})$ is the natural extension of $J$ to formulas.\footnote{Formally: $J(\top):=\top$, $J(N_ab) := N_{J(a)}J(b)$, $J(S_ab) := S_{J(a)}J(b)$, $J(\neg\phi) := \neg J(\phi)$, $J(\phi\wedge\psi) := J(\phi)\wedge J(\psi)$, $J(K_a^P \psi) := K_{J(a)}^{J(P)} J(\psi)$, $J(?\phi) := {? J(\phi)}$, $J(ab) := J(a)J(b)$, $J(\pi;\pi') := J(\pi);J(\pi')$, $J(\pi \cup \pi') := J(\pi) \cup J(\pi')$, $J(\pi^*) := J(\pi)^*$.}
Protocol $P$ is \emph{epistemic} iff for every $a,b\in A$, the protocol condition $P_{ab} \rightarrow K^P_a P_{ab}$ is valid.
We henceforth require all our protocols to be symmetric and epistemic.
\end{definition}
Intuitively, a protocol is \emph{epistemic} if callers always know when to make a call, without being given instructions by a central scheduler.
This means that whenever $P_{ab}$ is true, so agent $a$ is allowed to call agent $b$, it must be the case that $a$ knows that $P_{ab}$ is true.
In other words, in an epistemic protocol $P_{ab}$ implies $K_a^PP_{ab}$.
Furthermore, by Definition~\ref{def:Semantics}, knowledge is truthful on the execution tree for protocol $P$ in the gossip model.
So except in the gossip states that cannot be reached using the protocol $P$, we also have that $K_a^PP_{ab}$ implies $P_{ab}$.
If a protocol is \emph{symmetric} the names of the agents are irrelevant and therefore interchangeable.
So a symmetric protocol is not allowed to ``hard-code'' agents to perform certain roles.
This means that, for example, we cannot tell agent $a$ to call $b$, as opposed to $c$, just because $b$ comes before $c$ in the alphabet.
But we can tell $a$ to call $b$, as opposed to $c$, on the basis that, say, $a$ knows that $b$ knows five secrets while $c$ only knows two secrets.
If a protocol $P$ is symmetric, we can think of the protocol condition as the \emph{unique} protocol condition for $P$, modulo permutation.
Epistemic and symmetric protocols capture the distributed peer-to-peer nature
of the gossip problem.
\begin{example}
The protocols $\mathit{ANY}$ and $\mathit{LNS}$ are symmetric and epistemic.
For $\mathit{ANY}$ this is trivial.
For $\mathit{LNS}$, observe that agents always know which numbers and secrets they know.
A direct consequence of clause (2.) of Definition~\ref{def:SyncEpistRel} of the epistemic relation is that for any protocol $P$, if $(G,\sigma) \sim^P_a (G, \sigma')$, then $N^\sigma_a = N^{\sigma'}_a$ and $S^\sigma_a = S^{\sigma'}_a$.
Thus, applying the clause for knowledge $K^P_a \phi$ of Definition \ref{def:Semantics},
we immediately get that the following formulas are all valid:
$N_ab \rightarrow K_a^P N_ab$,
$\neg N_ab \rightarrow K_a^P \neg N_ab$,
$S_ab \rightarrow K_a^P S_ab$,
and $\neg S_ab \rightarrow K_a^P \neg S_ab$.
Therefore, in particular this holds for $P = \mathit{LNS}$.
\end{example}
Although the numbers and secrets known by an agent before and after a call may vary, the agent always knows \emph{whether} she knows a given number or secret. Knowledge about other agents having a certain number or a secret is preserved after calls. But, of course, knowledge about other agents \emph{not} having a certain number or secret is not preserved after calls.
Not all protocols we discuss in this work are definable in the logical language.
We therefore need the additional notion of a \emph{semantic protocol}, defined by its extension.
\begin{definition}[Semantic protocol]\label{def:semanticprotocol}
A \emph{semantic protocol} is a function
$P \colon \mathcal{G} \to \mathcal{P}({(A \times A)}^*)$
mapping initial gossip graphs to sets of call sequences.
We assume semantic protocols to be closed under subsequences, i.e.\ for all $G$
we want that $\sigma;ab \in P(G)$ implies $\sigma \in P(G)$.
For a \emph{semantic protocol} $P$ we say that a call $ab$ is
\emph{$P$-permitted at $(G,\sigma$)} iff $(\sigma;ab) \in P(G)$.
\end{definition}
Given any syntactic protocol we can view its extension as a semantic protocol.
Using this definition of permitted calls for semantic protocols we can apply
Definition~\ref{def:SyncEpistRel} to get the epistemic relation with respect
to a semantic protocol $P$. Because the relation $\sim_a^P$ depends only on
which calls are allowed, the epistemic relation with respect to a (syntactic)
protocol $P$ is identical to the epistemic relation with respect to the
extension of $P$.
We also require that semantic protocols are symmetric and epistemic, adapting
the definitions of these two properties as follows.
\begin{definition}[Symmetric and epistemic semantic protocol]\label{def:SemSymEpis}
A semantic protocol $P$ is \emph{symmetric} iff
for all initial gossip graphs $G$ and for all permutations $J$ of agents we have $P(J(G)) = J(P(G))$ (where $J(P(G)) := \{ J(\sigma) \mid \sigma \in P(G) \}$).
A semantic protocol $P$ is \emph{epistemic} iff
for all initial gossip graphs $G$ and for all $\sigma \in P(G)$ we have:
$(\sigma;ab) \in P(G)$
iff
for all $\tau \sim_a^P \sigma$ we have $(\tau;ab) \in P(G)$.
\end{definition}
It is easy to verify that the syntactic definition of an epistemic protocol agrees with the semantic definition.
\begin{proposition}\label{prop:epistemic_agrees}
A syntactic protocol $P$ is epistemic if and only if its extension is epistemic.
\end{proposition}
\begin{proof}
Let $Q$ be the extension of $P$ and note that, as remarked above, the epistemic relations induced by $P$ and $Q$ are identical.
Now we have the following chain of equivalences:
\begin{center}
\begin{tabular}{cl}
& $P$ is not epistemic \\
$\Leftrightarrow$ & $\exists a,b,G,\sigma: G,\sigma\not \models P_{ab}\rightarrow K_a^P P_{ab}$\\
$\Leftrightarrow$ & $\exists a,b,G,\sigma,\tau: G,\sigma\models P_{ab}$, $G,\tau\not \models P_{ab}$ and $(G,\sigma)\sim_a^P (G,\tau)$\\
$\Leftrightarrow$ & $\exists a,b,G,\sigma,\tau: (\sigma;ab)\in Q(G)$, $(\tau;ab)\not \in Q(G)$ and $(G,\sigma)\sim_a^P (G,\tau)$ \\
$\Leftrightarrow$ & $\exists a,b,G,\sigma,\tau: (\sigma;ab)\in Q(G)$, $(\tau;ab)\not \in Q(G)$ and $(G,\sigma)\sim_a^{Q} (G,\tau)$ \\
$\Leftrightarrow$ & $Q$ is not epistemic
\end{tabular}
\end{center}
\vspace{-1em}
\end{proof}
Note that Proposition~\ref{prop:epistemic_agrees} does not imply that every epistemic
semantic protocol is the extension of a syntactic epistemic protocol, since some
semantic protocols are not the extension of any syntactic protocol.
For symmetry, the situation is slightly more complex than for being epistemic.
\begin{proposition}\label{prop:symmetric_agrees}
If a syntactic protocol $P$ is symmetric, then its extension is symmetric.
\end{proposition}
\begin{proof}
Let $Q$ be the extension of $P$. Fix any permutation $J$ and any initial gossip graph $G$.
We must show that $Q(J(G))=J(Q(G))$ (where $J$ is extended to gossip graphs in the natural way).
We show by induction that for every call sequence $\sigma$, we have $\sigma\in Q(J(G)) \Leftrightarrow \sigma \in J(Q(G))$.
As base case, note that $\epsilon\in Q(J(G))$ and $\epsilon \in J(Q(G))$.
Now, as induction hypothesis, assume that for every call sequence $\tau$ that is shorter than $\sigma$, we have $\tau\in Q(J(G)) \Leftrightarrow \tau \in J(Q(G))$.
Let $ab$ be the final call in $\sigma$, so $\sigma = (\tau;ab)$.
Then we have the following sequence of equivalences:
\begin{align*}
(\tau;ab) \in Q(J(G))
& \Leftrightarrow J(G),\tau \models P_{ab}\\
& \Leftrightarrow G,J^{-1}(\tau)\models J^{-1}(P_{ab})\\
& \Leftrightarrow G,J^{-1}(\tau)\models P_{J^{-1}(ab)}\\
& \Leftrightarrow (J^{-1}(\tau);J^{-1}(ab))\in Q(G)\\
& \Leftrightarrow (\tau;ab)\in J(Q(G)),
\end{align*}
where the equivalence on the third line is due to $P$ being symmetric.
This completes the induction step and thereby the proof.
\end{proof}
The converse of Proposition~\ref{prop:symmetric_agrees} does not hold:
if $P$ is not symmetric, it is still possible for its extension to be symmetric.
The reason for this discrepancy is that symmetry for syntactic protocols has the very strong condition that $J(P_{ab})=P_{J(ab)}$.
So if $P$ is symmetric and $P'$ is given by (i) $P'_{cd}=P_{cd}\wedge \top$ and (ii) $P'_{ab}=P_{ab}$ for $(a,b) \neq (c,d)$,
then $P'$ is not symmetric even though $P$ and $P'$ have the same extension.
We do, however, have the following slightly weaker statement.
Recall that a gossip state $(G,\sigma)$ is $P$-reachable iff the call sequence $\sigma$ is $P$-permitted at $G$.
\begin{proposition}
Let $P$ be a syntactic protocol such that, for some $P$-reachable gossip state
$(G,\sigma)$, some permutation $J$ and some $a,b$ we have
$G,\sigma\not \models P_{J(ab)}\leftrightarrow J(P_{ab})$.
Then the extension of $P$ is not symmetric.
\end{proposition}
\begin{proof}
Let $Q$ be the extension of $P$, and suppose towards a contradiction that $Q$ is symmetric.
Then we have the following sequence of equivalences:
\begin{align*}
G,\sigma\models P_{J(ab)} & \Leftrightarrow (\sigma;J(ab))\in Q(G)\\
& \Leftrightarrow (J^{-1}(\sigma);ab)\in J^{-1}(Q(G))\\
& \Leftrightarrow (J^{-1}(\sigma);ab)\in Q(J^{-1}(G))\\
& \Leftrightarrow J^{-1}(G),J^{-1}(\sigma)\models P_{ab}\\
& \Leftrightarrow G,\sigma\models J(P_{ab}),
\end{align*}
where the equivalence on the third line is due to $Q$ being symmetric.
This contradicts $G,\sigma\not \models P_{J(ab)}\leftrightarrow J(P_{ab})$, from which it follows that $Q$ is not symmetric.
\end{proof}
So while $P$ may be non-symmetric and still have a symmetric extension, this can only happen if $J(P_{ab})$ is equivalent to $P_{J(ab)}$ in all reachable gossip states.
We conclude that our syntactic and semantic definitions of symmetry agree up to logical equivalence.
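The semantic symmetry condition $P(J(G)) = J(P(G))$ lends itself to a brute-force test on small graphs. In this sketch (our own hypothetical encoding, as before; `J_graph` and `J_seq` are our names) we permute agents $b$ and $c$ on the running example and compare the two sides for the $\mathit{LNS}$ extension.

```python
# Hypothetical encoding (our own): a permutation J is a dict on agent names.

def run(G, sigma):
    N = {a: set(ns) for a, ns in G.items()}
    S = {a: {a} for a in G}
    for a, b in sigma:
        N[a] = N[b] = N[a] | N[b]
        S[a] = S[b] = S[a] | S[b]
    return N, S

def lns(G, sigma, a, b):
    N, S = run(G, sigma)
    return b in N[a] and b not in S[a]

def extension(G, cond, limit=10):
    ext, level = {()}, [()]
    for _ in range(limit):
        level = [s + ((a, b),) for s in level
                 for a in G for b in G if a != b and cond(G, s, a, b)]
        if not level:
            break
        ext |= set(level)
    return ext

def J_graph(J, G):
    """Apply permutation J to an initial gossip graph."""
    return {J[a]: {J[x] for x in ns} for a, ns in G.items()}

def J_seq(J, sigma):
    """Apply permutation J to a call sequence."""
    return tuple((J[a], J[b]) for a, b in sigma)

G = {"a": {"a", "b"}, "b": {"b", "c"}, "c": {"b", "c"}}
J = {"a": "a", "b": "c", "c": "b"}  # swap agents b and c
lhs = extension(J_graph(J, G), lns)              # P(J(G))
rhs = {J_seq(J, s) for s in extension(G, lns)}   # J(P(G))
```

Since $\mathit{LNS}$ is defined without reference to agent names, the two sets coincide, as semantic symmetry requires.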
\section{Strengthening of Protocols}\label{sec:strengthening}
\subsection{How can we strengthen a protocol?}
In our semantics it is common knowledge among the agents that they follow a certain
protocol, for example $\mathit{LNS}$. Can they use this information to prevent making
``bad'' calls that lead to an unsuccessful sequence?
If we look at the execution tree given in Figure~\ref{figure:executionThree},
then it seems easy to fix the protocol. Agents $b$ and $c$ should wait and not
make the first call. Agent $b$ should not make a call before he has received a
call from $a$. We cannot say this in our logic as we have no converse modalities
to reason over past calls. In this case however, there is a different way to
ensure the same result. We can ensure that $b$ and $c$ wait before calling by a
strengthening of $\mathit{LNS}$ that only allows a first call from $i$ to $j$ if $j$
does not know the number of $i$. To determine that a call is not the first call,
we need another property: after at least one call happened, there is an agent
who knows another agent's secret.
We can define this new protocol by protocol condition
$P_{ij} := \mathit{LNS}_{ij} \land ( \lnot N_j i \lor \bigvee_{k \neq l} S_k l)$.
Observe that this new protocol is again symmetric and epistemic:
agents always know whether $\lnot N_j i \lor \bigvee_{k \neq l} S_k l$ holds.
Because of synchronicity, not only the callers but also all other agents know
that there are agents $k$ and $l$ such that $k$ knows the secret of $l$.
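This ad-hoc strengthening can be verified mechanically. In the sketch below (our hypothetical encoding as before; `strengthened` is our name for the condition $P_{ij}$ just defined) every terminal sequence on the example graph turns out successful, so the strengthened protocol is strongly successful on this graph.

```python
# Hypothetical encoding (our own): graphs as dicts of known numbers.

def run(G, sigma):
    N = {a: set(ns) for a, ns in G.items()}
    S = {a: {a} for a in G}
    for a, b in sigma:
        N[a] = N[b] = N[a] | N[b]
        S[a] = S[b] = S[a] | S[b]
    return N, S

def lns(G, sigma, a, b):
    N, S = run(G, sigma)
    return b in N[a] and b not in S[a]

def extension(G, cond, limit=10):
    ext, level = {()}, [()]
    for _ in range(limit):
        level = [s + ((a, b),) for s in level
                 for a in G for b in G if a != b and cond(G, s, a, b)]
        if not level:
            break
        ext |= set(level)
    return ext

def strengthened(G, sigma, i, j):
    """LNS_ij and (j does not know i's number, or some agent already
    knows another agent's secret, i.e. at least one call has happened)."""
    N, S = run(G, sigma)
    some_call = any(l in S[k] for k in G for l in G if k != l)
    return lns(G, sigma, i, j) and (i not in N[j] or some_call)

G = {"a": {"a", "b"}, "b": {"b", "c"}, "c": {"b", "c"}}
term = [s for s in extension(G, strengthened)
        if not any(strengthened(G, s, a, b)
                   for a in G for b in G if a != b)]
successful = [s for s in term
              if all(run(G, s)[1][a] == set(G) for a in G)]
```

Only $ab$ is permitted as a first call, exactly as intended: $b$ and $c$ wait until $a$ has called.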
This is an ad-hoc solution specific to this initial gossip graph. Could we
also give a general definition to improve $\mathit{LNS}$ which works on more or even all
initial graphs? The answer to that is: more, yes, but all, no.
We will now discuss different ways to improve protocols by making them more
restrictive. Our goal is to rule out unsuccessful sequences while keeping at
least some successful ones.
Doing this can be difficult because we still require the strengthened protocols to
be epistemic and symmetric. Hence we are not allowed to
arbitrarily rule out specific calls using the names of agents, for example.
Whenever a call is removed from the protocol, we also have to remove all calls
to other agents that the caller cannot distinguish: it has to be done \emph{uniformly}.
But before we discuss specific ideas for strengthening, let us define it.
\begin{definition}[Strengthening]
A protocol $P'$ is a \emph{syntactic strengthening} of a protocol $P$ iff
$P_{ab}' \rightarrow P_{ab}$ is valid for all agents $a \neq b$.
A protocol $P'$ is a \emph{semantic strengthening} of a protocol $P$ iff
$P' \subseteq P$.
A \emph{syntactic strengthening procedure} is a function $\heartsuit$ that for
any syntactic protocol $P$ returns a syntactic strengthening $P^\heartsuit$ of $P$.
Analogously, we define \emph{semantic strengthening procedure}.
\end{definition}
We stress that strengthening is a relation between two protocols $P$ and $P'$
whereas strengthening procedures define a restricting transformation that
given any $P$ tells us how to obtain $P'$.
In the case of a syntactic strengthening, $P$ and $P'$ are implicitly required
to be syntactic protocols. Vice versa however, syntactic protocols can be
semantic strengthenings. In fact, we have the following.
\begin{proposition}
Every syntactic strengthening is a semantic strengthening.
\end{proposition}
\begin{proof}
Let $P'$ be a syntactic strengthening of a protocol $P$.
Let a gossip graph $G$ be given.
We show by induction on the length of $\sigma$ that
$\sigma \in P'(G)$ implies $\sigma \in P(G)$.
The base case where $\sigma=\epsilon$ is trivial.
For the induction step, consider any $\sigma = \tau;ab$.
As $\tau;ab\in P'(G)$, we also have $\tau \in P'(G)$ and $G, \tau \models P'_{ab}$.
From $\tau \in P'(G)$ and the inductive hypothesis, it follows that $\tau \in P(G)$.
From $G, \tau \models P'_{ab}$ and the validity of $P'_{ab} \rightarrow P_{ab}$ follows $G, \tau \models P_{ab}$.
Finally, by Definition~\ref{def:extension}, $\tau\in P(G)$ and $G, \tau \models P_{ab}$ imply $\tau;ab \in P(G)$.
\end{proof}
\begin{lemma}\label{lemma:str}
Suppose $P$ is a strengthening of $Q$.
Then
$K_a^Q \phi \rightarrow K_a^P \phi$
and
$\hat{K}_a^P \phi \rightarrow \hat{K}_a^Q \phi$
are both valid, for any agent $a$.
\end{lemma}
\begin{proof}
This follows immediately from the semantics of protocol-dependent knowledge
given in Definition~\ref{def:Semantics}.
\end{proof}
\subsection{Syntactic Strengthening: Look-Ahead and One-Step}
We will now present concrete examples of syntactic strengthening procedures.
\begin{definition}[Look-Ahead and One-Step Strengthenings]\label{def:strengthening}
We define four syntactic strengthening procedures as follows.
Let $P$ be a protocol.
\[ \begin{array}{llll}
\text{hard look-ahead strengthening}: &
P^\blacksquare_{ab} &:=& P_{ab} \land K_a^P [ab] \langle P \rangle \mathit{Ex} \\
\text{soft look-ahead strengthening}: &
P^\blacklozenge_{ab} &:=& P_{ab} \land \hat{K}_a^P [ab] \langle P \rangle \mathit{Ex} \\
\text{hard one-step strengthening}: &
P^\square_{ab} &:=& P_{ab} \land K_a^P [ab] (\mathit{Ex} \lor \bigvee_{i,j} (N_i j \land P_{ij}) ) \\
\text{soft one-step strengthening}: &
P^\lozenge_{ab} &:=& P_{ab} \land \hat{K}_a^P [ab] (\mathit{Ex} \lor \bigvee_{i,j} (N_i j \land P_{ij}) )
\end{array}\]
\end{definition}
The \emph{hard} look-ahead strengthening allows agents to make a call iff the call is
allowed by the original protocol and moreover they \emph{know} that making this call
yields a situation where the original protocol can still succeed.
For example, consider $\mathit{LNS}^\blacksquare$. Informally, its condition is that $a$ is
permitted to call $b$ iff $a$ does not have the secret of $b$ and $a$ knows
that after making the call to $b$, it is still possible to follow $\mathit{LNS}$ in
such a way that all agents become experts.
The \emph{soft} look-ahead strengthening allows more calls than the hard look-ahead
strengthening because it only demands that $a$ \emph{considers it possible} that
the protocol can succeed after the call. This can be interpreted as a good faith
or lucky draw assumption that the previous calls between other agents have been
made ``in a good way''. Soft look-ahead strengthening allows agents to take a risk.
The soft and the hard look-ahead strengthening include a diamond $\langle P \rangle$
labeled with the protocol $P$, which by definition contains
arbitrary iteration: the Kleene star ${}^\ast$.
To evaluate this, we need to compute the execution tree of $P$ for the initial gossip graph $G$.
In practice this can make it hard to check the protocol condition of the new protocol.
The \emph{one-step} strengthenings, in contrast, only use the protocol condition
$P_{ij}$ in their formalization and not the entire protocol $P$. This means that
they provide an easier to compute, but less reliable alternative to full
look-ahead, namely by looking only one step ahead.
We only demand that agent $a$ knows (or, in the soft version, considers it
possible) that after the call, everyone is an expert or the protocol can still go
on for at least one more step --- though it might be that all continuation
sequences will eventually be unsuccessful and thus this next call would already
have been excluded by both look-ahead strengthenings.
\bigskip
An obvious question now is, can these or other strengthenings get us from
weak to strong success? Do these strengthenings only remove unsuccessful
sequences, or will they also remove successful branches, and maybe even return
an empty and unsuccessful protocol?
In our next example everything still works fine.
\begin{example}\label{example:exagain}
Consider Example~\ref{example:executionThree} again.
It is easy to see that the soft and the hard look-ahead strengthening rule
out the two unsuccessful branches in this execution tree and keep the successful
ones. Protocol $\mathit{LNS}^\blacksquare$ only preserves alternatives that are all successful
and $\mathit{LNS}^\blacklozenge$ only eliminates alternatives if they are all unsuccessful. In
the execution tree in Figure~\ref{figure:executionThree}, the effect is the same
for $\mathit{LNS}^\blacksquare$ and $\mathit{LNS}^\blacklozenge$, because at any state the agents always know
which calls lead to successful branches.
This is typical for gossip scenarios with three agents: if a call happened, the
agent not involved in the call might be unsure about the direction of the call,
but it knows who the callers are.
The one-step strengthenings are not enough to rule out the unsuccessful
sequences. This is because the unsuccessful sequences are of length $2$ but
the one-step strengthenings can only remove the last call in a sequence. In
this case, the protocols $\mathit{LNS}^\square$ and $\mathit{LNS}^\lozenge$ rule out
the call $ab$ after $bc$ or $cb$ happened.
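The hard one-step strengthening can also be prototyped. In the sketch below (our own code, continuing the hypothetical encoding; in particular, the synchronous epistemic relation is only approximated by `views': an agent observes her own calls together with the information she then holds, and merely a tick for calls she is not involved in, which suffices for this example), the disjunct $N_i j \land P_{ij}$ reduces to $P_{ij}$ for $P = \mathit{LNS}$, since $\mathit{LNS}_{ij}$ already requires $N_i j$.

```python
# Hypothetical encoding (our own); exponential brute force, fine for 3 agents.

def run(G, sigma):
    N = {a: set(ns) for a, ns in G.items()}
    S = {a: {a} for a in G}
    for a, b in sigma:
        N[a] = N[b] = N[a] | N[b]
        S[a] = S[b] = S[a] | S[b]
    return N, S

def lns(G, sigma, a, b):
    N, S = run(G, sigma)
    return b in N[a] and b not in S[a]

def extension(G, cond, limit=10):
    ext, level = {()}, [()]
    for _ in range(limit):
        level = [s + ((a, b),) for s in level
                 for a in G for b in G if a != b and cond(G, s, a, b)]
        if not level:
            break
        ext |= set(level)
    return ext

def view(G, sigma, a):
    """Agent a's observation of sigma: her own calls with her information,
    and only a tick for other calls (a sketch of the synchronous relation)."""
    v = []
    N = {x: set(ns) for x, ns in G.items()}
    S = {x: {x} for x in G}
    for x, y in sigma:
        N[x] = N[y] = N[x] | N[y]
        S[x] = S[y] = S[x] | S[y]
        v.append(((x, y), frozenset(N[a]), frozenset(S[a]))
                 if a in (x, y) else "some call")
    return tuple(v)

def indist(G, cond, sigma, a, limit=10):
    """cond-permitted sequences that a cannot distinguish from sigma."""
    return [t for t in extension(G, cond, limit)
            if len(t) == len(sigma) and view(G, t, a) == view(G, sigma, a)]

def hard_one_step(cond):
    """P^square: keep a call iff the caller knows that afterwards everyone
    is an expert or some cond-permitted call remains (knowledge with respect
    to the original protocol cond, as in the definition)."""
    def cond_sq(G, sigma, a, b):
        if not cond(G, sigma, a, b):
            return False
        for tau in indist(G, cond, sigma, a):
            N, S = run(G, tau + ((a, b),))
            ex = all(S[x] == set(G) for x in G)
            more = any(cond(G, tau + ((a, b),), i, j)
                       for i in G for j in G if i != j)
            if not (ex or more):
                return False
        return True
    return cond_sq

G = {"a": {"a", "b"}, "b": {"b", "c"}, "c": {"b", "c"}}
sq_ext = extension(G, hard_one_step(lns))
```

As in the example, $\mathit{LNS}^\square$ rules out the call $ab$ after $bc$ or $cb$, while keeping $bc$ itself (which then terminates unsuccessfully) and the successful branches through the initial call $ab$.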
\end{example}
\subsection{Semantic Strengthening: Uniform Backward Defoliation}
We now present two semantic strengthening procedures.
They are inspired by the notion of backward induction, a well-known solution concept in decision theory and game theory~\cite{OsbRub1994:GameTheory}.
We will discuss this at greater length when defining the arbitrary iteration of these semantic strengthenings and in Section~\ref{sec:conclusion}.
In backward induction, given a game tree or search tree, a parent node is called \emph{bad} if all its children are losing or bad nodes.
Similarly, in trees with information sets of indistinguishable nodes, a parent node can be called bad if all its children are bad \emph{and if also all children from indistinguishable nodes are bad}.
Similar notions were considered in~\cite{BalSmeZve2009:KeepHoping,Perea2014:BelFutRat}.
Again, we have a soft and a hard version.
We define \emph{uniform backward defoliation} on the execution trees of dynamic gossip as follows to obtain two semantic strengthenings.
We choose the name ``defoliation'' here because a single application of this strengthening procedure only removes leaves and not whole branches of the execution tree.
The iterated versions we present later are then called \emph{uniform backward induction}.
\begin{definition}[Uniform Backward Defoliation]\label{def-UBR}
Suppose we have a protocol $P$ and an initial gossip graph $G$.
We define the \emph{Hard Uniform Backward Defoliation} $(\mathsf{HUBD})$
and \emph{Soft Uniform Backward Defoliation} $(\mathsf{SUBD})$ of $P$ as follows.
\[ \begin{array}{llll}
P^{\mathsf{HUBD}}(G) & := &
\{ \sigma \in P(G) \mid & \sigma = \epsilon, \text{ or } \sigma = \tau;ab \text{ and } \forall (G,\tau') \sim_a^P (G,\tau) \\
& & & \text{such that } (\tau';ab) \in \overline{P(G)} \text{ implies } (G,\tau';ab) \models \mathit{Ex} \} \\[0.5em]
P^{\mathsf{SUBD}}(G) & := &
\{ \sigma \in P(G) \mid & \sigma = \epsilon, \text{ or } \sigma = \tau;ab \text{ and } \exists (G,\tau') \sim_a^P (G,\tau) \\
& & & \text{such that } (\tau';ab) \in \overline{P(G)} \text{ implies } (G,\tau';ab) \models \mathit{Ex} \}
\end{array} \]
\end{definition}
In this definition, $\forall (G,\tau') \sim_a^P (G,\tau)$ implicitly stands for
``for all $\tau' \in P(G)$ such that $(G,\tau') \sim_a^P (G,\tau)$'',
because for $(G,\tau')$ to be in $\sim_a^P$ relation to another gossip state,
$\tau'$ must be $P$-permitted; similarly for the existential quantification.
The \textsf{HUBD} strengthening keeps the calls which \emph{must} lead to a
non-terminal state or a state where everyone is an expert and \textsf{SUBD}
keeps the calls which \emph{might} do so.
Equivalently, we can say that \textsf{HUBD} removes calls which may go wrong
and \textsf{SUBD} removes those calls which will go wrong --- where going
wrong means leading to a terminal node where not everyone is an expert.
We can now prove that for any gossip protocol
\emph{Hard Uniform Backward Defoliation} is the same as \emph{Hard One-Step Strengthening},
in the sense that their extensions are the same on any gossip graph,
and that \emph{Soft Uniform Backward Defoliation} is the same as \emph{Soft One-Step Strengthening}.
\begin{theorem}\label{thm:ubr-is-one-step}
$P^\square = P^{\mathsf{HUBD}}$ and $P^\lozenge = P^{\mathsf{SUBD}}$.
\end{theorem}
\begin{proof}
Note that $\epsilon$ is an element of both sides of both equations.
For any non-empty sequence we have the following chain of equivalences for the
hard versions of UBD and one-step strengthening:
\[ \def\arraystretch{1.7} \begin{array}{llr}
(\sigma;ab) \in P^\square(G) \\
\Updownarrow \text{by Definition~\ref{def:extension}} \\
G,\sigma \models P^\square_{ab} \\
\Updownarrow \text{by Definition~\ref{def:strengthening}} \\
G,\sigma \models P_{ab} \land
K_a^P [ab] \left(\bigvee_{i,j} (N_i j \land P_{ij}) \lor \mathit{Ex} \right) \\
\Updownarrow \text{by Definition~\ref{def:Semantics}} \\
(\sigma;ab) \in P(G) \text{ and }
(G,\sigma) \vDash K_a^P [ab] \left( \bigvee_{i,j} (N_i j \land P_{ij}) \lor \mathit{Ex} \right) \\
\Updownarrow \text{by Definition~\ref{def:Semantics}} \\
(\sigma;ab) \in P(G) \text{ and }
\forall (G,\sigma') \sim_a^P (G,\sigma) :
(G,\sigma';ab) \models \bigvee_{i,j} (N_i j \land P_{ij}) \lor \mathit{Ex} \\
\Updownarrow \text{by Definition~\ref{def-success}} \\
(\sigma;ab) \in P(G) \text{ and }
\forall (G,\sigma') \sim_a^P (G,\sigma) :
\sigma';ab \notin \overline{P(G)} \text{ or } (G,\sigma';ab) \models \mathit{Ex} \\
\Updownarrow \text{by Definition~\ref{def-UBR}} \\
(\sigma;ab) \in P^{\mathsf{HUBD}}(G) \\
\end{array} \]
And we have a similar chain of equivalences for the soft versions:
\[ \arraycolsep=2pt\def\arraystretch{1.5} \begin{array}{l}
(\sigma;ab) \in P^\lozenge(G) \\
\Updownarrow \text{by Definition~\ref{def:extension}} \\
G,\sigma \models P^\lozenge_{ab} \\
\Updownarrow \text{by Definition~\ref{def:strengthening}} \\
G,\sigma \models P_{ab} \land \hat{K}_a^P [ab] \left(\bigvee_{i,j} (N_i j \land P_{ij}) \lor \mathit{Ex} \right) \\
\Updownarrow \text{by Definition~\ref{def:Semantics}} \\
(\sigma;ab) \in P(G) \text{ and } (G,\sigma) \models \hat{K}_a^P [ab] \left( \bigvee_{i,j} (N_i j \land P_{ij}) \lor \mathit{Ex} \right) \\
\Updownarrow \text{by Definition~\ref{def:Semantics}} \\
(\sigma;ab) \in P(G) \text{ and }
\exists (G,\sigma') \sim_a^P (G,\sigma) :
(G,\sigma';ab) \models \bigvee_{i,j} (N_i j \land P_{ij}) \lor \mathit{Ex} \\
\Updownarrow \text{by Definition~\ref{def-success}} \\
(\sigma;ab) \in P(G) \text{ and }
\exists (G,\sigma') \sim_a^P (G,\sigma) :
\sigma';ab \notin \overline{P(G)} \text{ or } (G,\sigma';ab) \models \mathit{Ex} \\
\Updownarrow \text{by Definition~\ref{def-UBR}} \\
(\sigma;ab) \in P^{\mathsf{SUBD}}(G)
\end{array} \]
\end{proof}
Similarly to backward induction in perfect information games \cite{Aumann1995:BIandCKR}, uniform backward defoliation is \emph{rational}, in the sense that it forces an agent to avoid calls leading to unsuccessful sequences.
The strengthening SUBD avoids a call if it always leads to an unsuccessful sequence.
The strengthening HUBD avoids a call if it sometimes leads to an unsuccessful sequence.
\subsection{Iterated Strengthenings}\label{subsec:iterated-with-update}
The syntactic strengthenings we looked at are all defined in terms of the original protocol.
In $P^\blacksquare_{ab} := P_{ab} \land K_a^P[ab] \langle P \rangle \mathit{Ex}$ the given
$P$ occurs in three places.
Firstly, in the protocol condition $P_{ab}$ requiring that the call is permitted
according to the old protocol $P$ --- this ensures that the new protocol is a
strengthening of the original $P$.
Secondly, as a parameter to the knowledge operator, in $K^P_a$, which means that
agent $a$ knows that everyone followed $P$ (and that this is common knowledge).
Thirdly, in the part $\langle P \rangle$ assuming that after the considered call
everyone will continue to follow protocol $P$ in the future.
Hence we have strengthened the protocol that the agents use and thereby changed
their behavior, but not their assumptions about what protocol other agents follow.
For example, when $P = \mathit{LNS}$, all agents now act according to $\mathit{LNS}^\blacksquare$, on
the assumption that all other agents act according to $\mathit{LNS}$.
This does not mean that agents cannot determine what they know if $\mathit{LNS}^\blacksquare$
were common knowledge: each agent $a$ can check that knowledge using $K^{\mathit{LNS}^\blacksquare}_a \phi$.
But this $K^{\mathit{LNS}^\blacksquare}_a$ modality is not part of the protocol $\mathit{LNS}^\blacksquare$.
The agents do not use this knowledge to determine whether to make calls.
But why should our agents stop their reasoning here?
It is natural to iterate strengthening procedures and determine whether we can further improve our protocols by also
updating the knowledge of the agents.
For example, consider repeated hard one-step strengthening:
\[ {(P^\square)}^\square_{ab} = P^\square_{ab} \land
K_a^{P^\square} [ab] (\mathit{Ex} \lor
\bigvee_{i,j} (N_i j \land P^\square_{ij}) ) \]
In this section we investigate iterations and combinations of strengthening procedures.
In particular we investigate various combinations of hard and soft one-step and
look-ahead strengthening, in order to determine how they relate to each other.
\begin{definition}[Strengthening Iteration]\label{def:iteration}
Let $P$ be a syntactic protocol. For any of the four syntactic strengthening procedures
$\heartsuit \in \{ \blacksquare, \blacklozenge, \square, \lozenge \}$,
we define its iteration by adjusting the protocol condition as follows,
which implies $P^{\heartsuit 1} = P^\heartsuit$:
\[ \begin{array}{lll}
P^{\heartsuit 0}_{ab} & := & P_{ab} \\
P^{\heartsuit (k+1)}_{ab} & := & {(P^{\heartsuit k})}^\heartsuit_{ab}
\end{array} \]
Let now $P$ be a semantic protocol, and let $\heartsuit \in \{\mathsf{HUBD},\mathsf{SUBD}\}$.
We define their iteration, for all gossip graphs $G$, by:
\[ \begin{array}{lll}
P^{\heartsuit{0}}(G) &:=& P(G) \\
P^{\heartsuit{(k+1)}}(G) &:=& {(P^{\heartsuit{k}})}^{\heartsuit}(G)
\end{array} \]
\end{definition}
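The iteration in this definition is a plain $k$-fold composition. As a minimal Python sketch (our own illustration, with a semantic strengthening represented as a function on sets of call sequences):

```python
def iterate_strengthening(strengthen, ext, k):
    """Compute the k-fold iteration P^{<>k} of a semantic strengthening,
    where the strengthening is a function from extensions to extensions."""
    for _ in range(k):
        ext = strengthen(ext)
    return ext
```

For $k = 0$ this returns the extension unchanged, matching $P^{\heartsuit 0} = P$.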
It is easy to check that Theorem~\ref{thm:ubr-is-one-step} generalizes to the
iterated strengthenings as follows.
\begin{corollary}\label{cor:hardsoft}
For any $k \in \mathbb{N}$, we have:
\[ P^{\square k} = P^{\mathsf{HUBD}k} \text{ and }
P^{\lozenge k} = P^{\mathsf{SUBD}k} \]
\end{corollary}
\begin{proof}
By induction using Theorem~\ref{thm:ubr-is-one-step}.
\end{proof}
\begin{example}
We reconsider Examples~\ref{example:executionThree} and~\ref{example:exagain},
and we recall that $\mathit{LNS}^\square$ and $\mathit{LNS}^\lozenge$ rule out the call
$ab$ after $bc$ or $cb$ happened. To eliminate $bc$ and $cb$ as the first call,
we have to iterate one-step strengthening: ${(\mathit{LNS}^\square)}^\square$ is
strongly successful on this graph, as well as ${(\mathit{LNS}^\lozenge)}^\lozenge$,
${(\mathit{LNS}^\square)}^\lozenge$ and ${(\mathit{LNS}^\lozenge)}^\square$.
\end{example}
\begin{example}\label{example:nExample}
We consider the ``N''-shaped gossip graph shown below.
There are 21 $\mathit{LNS}$ sequences for this graph, of which 4 are successful
($\checkmark$) and 17 are unsuccessful ($\times$).
\begin{center}
\begin{tikzpicture}[>=latex,line join=bevel,node distance=15mm,baseline=1]
\node (1) {\textrm{1}};
\node (0) [right of=1, node distance=10mm] {\textrm{0}};
\node (3) [above of=1] {\textrm{3}};
\node (2) [above of=0] {\textrm{2}};
\draw [->,dashed] (3) -- (1);
\draw [->,dashed] (2) -- (0);
\draw [->,dashed] (3) -- (0);
\end{tikzpicture}
\hspace{1em}
$\begin{array}{ll}
20;30;01;31 & \times \\
20;30;31;01 & \times \\
20;31;10;30 & \times \\
20;31;30;10 & \times \\
30;01;20;31 & \times \\
30;01;31;20 & \times \\
30;20;01;21;31 & \checkmark \\
\end{array}
\hspace{1em}
\begin{array}{ll}
30;20;01;31;21 & \checkmark \\
30;20;21;01;31 & \checkmark \\
30;20;21;31;01 & \checkmark \\
30;20;31;01;21 & \times \\
30;20;31;21;01 & \times \\
30;31;01;20 & \times \\
30;31;20;01;21 & \times \\
\end{array}
\hspace{1em}
\begin{array}{ll}
30;31;20;21;01 & \times \\
31;10;20;30 & \times \\
31;10;30;20 & \times \\
31;20;10;30 & \times \\
31;20;30;10 & \times \\
31;30;10;20 & \times \\
31;30;20;10 & \times \\
\end{array}$
\end{center}
\noindent We can show the call sequences in a more compact way if we only distinguish call sequences up
to the moment when it is decided whether $\mathit{LNS}$ will succeed.
Formally, consider the set of minimal $\sigma \in \mathit{LNS}(G)$ such that for any two
terminal $\mathit{LNS}$-sequences $\tau,\tau' \in \overline{\mathit{LNS}(G)}$ extending $\sigma$,
we have $G, \tau \models \mathit{Ex}$ iff $G, \tau' \models \mathit{Ex}$.
We will use this shortening convention throughout the paper.
\[\begin{array}{ll}
20 & \times \\
30;01 & \times \\
30;20;01 & \checkmark \\
30;20;21 & \checkmark \\
30;20;31 & \times \\
30;31 & \times \\
31 & \times \\
\end{array}\]
It is pretty obvious what the agents should do here: Agent 2 should not make the
first call but let $3$ call $0$ first. The soft look-ahead strengthening works well
on this graph: It disallows all unsuccessful sequences and keeps all successful
ones. For example, after call $30$, agent $2$ considers it possible that call
$30$ happened and in this case the call $20$ can lead to success. Hence the
protocol condition of $\mathit{LNS}^\blacklozenge$ is fulfilled.
The strengthening $\mathit{LNS}^\blacklozenge$ is strongly successful on this graph.
But note that $2$ does not \emph{know} that $20$ can lead to success, because
the first call could have been $31$ as well and for agent $2$ this would be
indistinguishable from $30$.
Therefore the hard look-ahead strengthening is too restrictive here.
In fact, the only call which $\mathit{LNS}^\blacksquare$ still allows is $30$ at the beginning.
After that no more calls are allowed by the hard look-ahead strengthening.
A full list showing which call sequences are allowed by which strengthenings of
$\mathit{LNS}$ for this example is provided in Table~\ref{table:nExampleExtensions}.
``Full'' means that we continue iterating the strengthening until
$P^{\heartsuit{k}}(G) = P^{\heartsuit{(k + 1)}}(G)$ for the given graph $G$.
Such fixpoints of protocol strengthening will be formally introduced in the
next section.
\end{example}
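The counts in the example above can be checked mechanically. The following self-contained Python sketch enumerates all maximal $\mathit{LNS}$ sequences on the ``N'' graph under the usual dynamic gossip semantics (in a call both agents exchange all numbers and all secrets they know). It is an independent re-implementation for illustration, not the Haskell program from the Appendix.

```python
def initial_state(n, edges):
    """Initial gossip graph: every agent knows its own number and secret,
    plus the numbers given by the (dashed) edges."""
    numbers = {i: {i} for i in range(n)}
    secrets = {i: {i} for i in range(n)}
    for x, y in edges:
        numbers[x].add(y)
    return numbers, secrets

def lns_calls(numbers, secrets):
    """Calls ab permitted by LNS: a knows b's number but not b's secret."""
    return [(a, b) for a in numbers for b in numbers
            if a != b and b in numbers[a] and b not in secrets[a]]

def make_call(numbers, secrets, a, b):
    """Both agents exchange all numbers and all secrets they know."""
    numbers = {x: set(v) for x, v in numbers.items()}
    secrets = {x: set(v) for x, v in secrets.items()}
    merged_n, merged_s = numbers[a] | numbers[b], secrets[a] | secrets[b]
    numbers[a], numbers[b] = set(merged_n), set(merged_n)
    secrets[a], secrets[b] = set(merged_s), set(merged_s)
    return numbers, secrets

def terminal_sequences(numbers, secrets, prefix=()):
    """Enumerate all maximal LNS sequences, each with a success flag."""
    calls = lns_calls(numbers, secrets)
    if not calls:
        yield prefix, all(len(s) == len(numbers) for s in secrets.values())
        return
    for a, b in calls:
        n2, s2 = make_call(numbers, secrets, a, b)
        yield from terminal_sequences(n2, s2, prefix + ((a, b),))

# The ``N'' graph: 3 knows the numbers of 0 and 1, 2 knows the number of 0.
numbers, secrets = initial_state(4, [(3, 1), (3, 0), (2, 0)])
results = list(terminal_sequences(numbers, secrets))
# matches the example: 21 maximal sequences, of which 4 are successful
```

Pointing `initial_state` at other edge sets reproduces the other graphs discussed in this section.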
The hard look-ahead strengthening restricts the set of allowed calls based on a
full analysis of the whole execution tree. One might thus expect that applying
hard look-ahead more than once would not make a difference. However, we have
the following negative results on iterating hard look-ahead strengthening and
the combination of hard look-ahead and hard one-step strengthening.
\begin{fact}\label{fact:hard-idem-fix}
Hard look-ahead strengthening is not idempotent and does not always yield a
fixpoint of hard one-step strengthening:
\begin{enumerate}[(i)]
\item There exist a graph $G$ and a protocol $P$ for which
$P^\blacksquare(G) \neq {(P^\blacksquare)}^\blacksquare(G)$.
\item There exist a graph $G$ and a protocol $P$ for which
${(P^\blacksquare)}^\square(G) \neq P^\blacksquare(G)$.
\end{enumerate}
\end{fact}
\begin{proof} \
\begin{enumerate}[(i)]
\item Let $G$ be the ``N'' graph from Example~\ref{example:nExample} and
consider the protocol $P = \mathit{LNS}$. Applying hard look-ahead strengthening
once only allows the first call $30$ and nothing after that call.
If we now apply hard look-ahead strengthening again we get the empty set:
$P^\blacksquare(G) \neq {(P^\blacksquare)}^\blacksquare(G) = \varnothing$.
See also Table~\ref{table:nExampleExtensions}.
\item The ``diamond'' graph that we will present in Section~\ref{subsec:diamond}
can serve as an example here. We can show that the inequality holds for this
graph by exhaustive search, using our Haskell implementation described in the
Appendix.
Plain $\mathit{LNS}$ has $48$ successful and $44$ unsuccessful sequences on this
graph. Of these, $\mathit{LNS}^\blacksquare$ still includes $8$ successful and $8$
unsuccessful sequences. If we now apply hard one-step strengthening, we get
${(\mathit{LNS}^\blacksquare)}^\square$ where $4$ of the unsuccessful sequences are removed.
See also Table~\ref{table:DiamondExampleExtensions} in the Appendix.
We note that for $P = \mathit{LNS}$ there is no smaller graph to show the inequality.
This can be checked by manual reasoning or with our implementation.
\qedhere
\end{enumerate}
\end{proof}
\noindent
Similarly, we can ask whether the soft strengthenings are related to each other,
analogous to Fact~\ref{fact:hard-idem-fix}. We do not know whether there is a
protocol $P$ for which ${(P^\blacklozenge)}^\lozenge \neq P^\blacklozenge$ and leave this as
an open question.
Another interesting property that strengthenings can have is \emph{monotonicity}.
Intuitively, a strengthening is monotone iff it preserves the inclusion relation
between extensions of protocols. This property is useful for studying the fixpoint
behavior of strengthenings.
We will now define monotonicity formally and then obtain some results for it.
\begin{definition}\label{def:monotone}
A strengthening $\heartsuit$ is called \emph{monotone} iff
for all protocols $Q$ and $P$ such that $Q \subseteq P$,
we also have $Q^\heartsuit \subseteq P^\heartsuit$.
\end{definition}
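On a finite universe of call sequences, monotonicity can be checked by brute force over all pairs of comparable subsets. A small Python sketch (the helper names are our own):

```python
from itertools import combinations

def subsets(universe):
    """All subsets of a finite universe, as frozensets."""
    xs = list(universe)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def is_monotone(f, universe):
    """Check the definition directly: Q <= P implies f(Q) <= f(P)."""
    sets = subsets(universe)
    return all(f(Q) <= f(P) for Q in sets for P in sets if Q <= P)
```

This is of course only feasible for tiny examples, since the number of subsets grows exponentially with the universe.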
\begin{proposition}[Soft one-step strengthening is monotone]
Let $P$ be a protocol and $Q$ be an arbitrary strengthening of $P$, i.e. $Q \subseteq P$.
Then we also have $Q^\lozenge \subseteq P^\lozenge$.
\end{proposition}
\begin{proof}
As $Q$ is a strengthening of $P$, the formula $Q_{ab} \rightarrow P_{ab}$ is valid.
We want to show that $Q^\lozenge_{ab} \rightarrow P^\lozenge_{ab}$.
Suppose that $G,\sigma \models Q^\lozenge_{ab}$, i.e.:
\[ G,\sigma \models Q_{ab} \text{ and } G,\sigma \models
\hat{K}_a^{Q} [ab] ( \mathit{Ex} \lor \bigvee_{i,j} (N_i j \land Q_{ij}) ) \]
From the first part and the validity of $Q_{ab} \rightarrow P_{ab}$, we get $G,\sigma \models P_{ab}$.
The second part and the validity of $Q_{ij} \rightarrow P_{ij}$ give us
$G,\sigma \models \hat{K}_a^{Q} [ab] (\mathit{Ex} \lor \bigvee_{i,j} (N_i j \land P_{ij}))$.
From that and Lemma~\ref{lemma:str} it follows that
$G,\sigma \models \hat{K}_a^P [ab] (\mathit{Ex} \lor \bigvee_{i,j} (N_i j \land P_{ij}))$.
Combining these, it follows by definition of soft one-step strengthening that
we have $G,\sigma \models P^\lozenge_{ab}$.
\end{proof}
\begin{proposition}[Both hard strengthenings are not monotone]\label{prop:hard-non-mono}
Let $P$ and $Q$ be protocols. If $Q \subseteq P$, then
$(i)$ $Q^\blacksquare \subseteq P^\blacksquare$ may not hold, and also
$(ii)$ $Q^\square \subseteq P^\square$ may not hold.
\end{proposition}
\begin{proof}
(i) \emph{Hard one-step strengthening is not monotone}:
Consider the ``spaceship'' graph below with four agents 0, 1, 2 and 3
where 0 and 3 know 1's number, 1 knows 2's number, and 2 knows no numbers.
\begin{center}
\begin{tikzpicture}[>=latex,line join=bevel,node distance=1.5cm,baseline=1]
\node (0) {0};
\node (1) [right of=0,below of=0,node distance=14mm] {1};
\node (3) [left of=1,below of=1,node distance=14mm] {3};
\node (2) [right of=1] {2};
\draw [->,dashed] (0) -- (1);
\draw [->,dashed] (3) -- (1);
\draw [->,dashed] (1) -- (2);
\end{tikzpicture}
\end{center}
On this graph the $\mathit{LNS}$ sequences up to decision point are:
\[\begin{array}{ll}
01;02 & \times \\
01;12 & \times \\
01;31;02 & \times \\
\end{array}\hspace{1em}\begin{array}{ll}
01;31;12 & \checkmark \\
01;31;32 & \checkmark \\
12 & \times \\
\end{array}\hspace{1em}\begin{array}{ll}
31;01;02 & \checkmark \\
31;01;12 & \checkmark \\
31;01;32 & \times \\
\end{array}\hspace{1em}\begin{array}{ll}
31;12 & \times \\
31;32 & \times \\
\ \\
\end{array}\]
Note that
\[ \mathit{LNS}^\blacklozenge(G) = \left \{ \hspace{-3.2pt} \begin{array}{l}
(01;31;12;02;32), (01;31;12;32;02), (01;31;32;02;12), \\
(01;31;32;12;02), (31;01;02;12;32), (31;01;02;32;12), \\
(31;01;12;02;32), (31;01;12;32;02)
\end{array} \hspace{-3.2pt} \right \} \]
is strongly successful and therefore hard one-step strengthening does not
change it --- we have ${(\mathit{LNS}^\blacklozenge)}^\square(G) = \mathit{LNS}^\blacklozenge(G)$.
On the other hand, consider
\[ \mathit{LNS}^\square(G) = \left \{ \hspace{-3pt}
\begin{array}{l}
(01;02;12), (01;12;02), (01;31;02;12), (01;31;02;32), \\
(01;31;12;32;02), (01;31;32;12;02), (12;01), (12;31), \\
(31;01;02;12;32), (31;01;12;02;32), (31;01;32;02), \\
(31;01;32;12), (31;12;32), (31;32;12)
\end{array} \hspace{-3pt} \right \}
\]
and note that this is not a superset of ${(\mathit{LNS}^\blacklozenge)}^\square(G) = \mathit{LNS}^\blacklozenge(G)$,
because we have $(01;31;12;02;32) \in {(\mathit{LNS}^\blacklozenge)}^\square(G) = \mathit{LNS}^\blacklozenge(G)$
but $(01;31;12;02;32) \notin \mathit{LNS}^\square(G)$.
Together, we have $\mathit{LNS}^\blacklozenge(G) \subseteq \mathit{LNS}(G)$ but
${(\mathit{LNS}^\blacklozenge)}^\square(G) \not\subseteq \mathit{LNS}^\square(G)$.
Hence $Q = \mathit{LNS}^\blacklozenge \subseteq \mathit{LNS} = P$ is a counterexample and $\square$ is not monotone.
\bigskip
\noindent (ii) \emph{Hard look-ahead strengthening is not monotone}:
For hard look-ahead strengthening we can use the same example.
Because $\mathit{LNS}^\blacklozenge$ is strongly successful, hard look-ahead strengthening does not change it:
${(\mathit{LNS}^\blacklozenge)}^\blacksquare(G) = \mathit{LNS}^\blacklozenge(G)$.
Moreover, $\mathit{LNS}^\blacksquare(G) = \{ (01), (31) \}$ is not a superset of
${(\mathit{LNS}^\blacklozenge)}^\blacksquare(G) = \mathit{LNS}^\blacklozenge(G)$.
Together we have $\mathit{LNS}^\blacklozenge(G) \subseteq \mathit{LNS}(G)$ but
${(\mathit{LNS}^\blacklozenge)}^\blacksquare(G) \not\subseteq \mathit{LNS}^\blacksquare(G)$,
hence hard look-ahead strengthening is not monotone either.
\end{proof}
This result is relevant to our aim of pinning down how rational agents can employ common knowledge of a protocol to improve upon it.
It shows that hard look-ahead strengthening is not rational, as follows.
We consider again the ``spaceship'' graph in the proof of Proposition~\ref{prop:hard-non-mono}.
Let us define a \emph{bad call} as a call after which no successful continuation is possible.
Correspondingly, a \emph{good call} is one after which success is still possible.
The initial call could be $12$, but that is a bad call.
All successful $\mathit{LNS}$ sequences on this graph start with $01;31$ or $31;01$.
Let us place ourselves in the position of agent 3 after the call $01$ has been made.
As far as 3 can tell (if the only background common knowledge is that everyone
follows $\mathit{LNS}$), the first call may have been 12, at which point no agent can
make a good call because no continuation is successful.
In particular, the second call 31 is then bad.
So 3 will not call 1, because it is possible that the call $31$ is bad, and we are following hard look-ahead.
Symmetrically, the same reasoning is made by agent 0: even if the first call is
$31$, it could also have been $12$, after which any continuation is unsuccessful,
and therefore 0 will not call 1, which again seems irrational.
So nobody will make a call. The extension of $\mathit{LNS}^\blacksquare$ on this graph is empty.
But as all agents know that $12$ is bad, agent 1 knows this in particular, and
as agent 1 is rational herself, she would therefore not have made that call.
And agents 3 and 0 can draw that conclusion too.
It therefore seems after all irrational for 3 not to call 1, or for 0 not to call 1.
This shows that hard look-ahead strengthening is not rational.
In particular, it ignores the rationality of other agents.
\subsection{Limits and Fixpoints of Strengthenings}\label{subsec:fixpoints}
Given the iteration of strengthenings we discussed in the previous section, it
is natural to consider limits and fixpoints of strengthening procedures.
In this subsection we discuss them and give some small results.
A detailed investigation is deferred to future research.
Note that the protocol conditions of all four basic syntactic strengthenings are
conjunctions with the original protocol condition as a conjunct.
Therefore, all these four strengthenings are \emph{non-increasing}:
For all $\heartsuit \in \{ \blacksquare, \blacklozenge, \square, \lozenge \}$
and all protocols $P$, we have
$P^\heartsuit \subseteq P$.
The same holds, by definition, for semantic strengthenings.
This implies that if, on any gossip graph, we start with a protocol that only
allows finite call sequences, such as $\mathit{LNS}$, then applying strengthening
repeatedly will eventually lead to a fixpoint. This fixpoint might be the empty
set, or a non-empty set and thereby provide a new protocol.
For other protocols that allow infinite call sequences, such as $\mathit{ANY}$, we do
not know if this procedure leads to a unique fixpoint and whether fixpoints
are always reached. We therefore distinguish fixpoints from limits.
\begin{definition}[Strengthening Limit; Fixpoint]\label{def:limitProtocol}
Consider any strengthening $\heartsuit$.
The \emph{$\heartsuit$-limit} of a given protocol $P$ is the semantic protocol
$P^{\heartsuit\ast}$ defined as $\bigcap_{k \in \mathbb{N}} P^{\heartsuit{k}}$.
A given protocol $P$ is a \emph{fixpoint} of a strengthening $\heartsuit$
iff $P = P^\heartsuit$.
\end{definition}
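For a non-increasing strengthening on a finite extension, the chain $P \supseteq P^\heartsuit \supseteq P^{\heartsuit 2} \supseteq \dots$ stabilizes after finitely many steps, so the limit $\bigcap_{k} P^{\heartsuit k}$ can be computed by iterating until nothing changes, and the result is then also a fixpoint. A Python sketch of this observation (our own illustration):

```python
def strengthening_limit(strengthen, ext):
    """Limit of a non-increasing strengthening on a finite extension:
    iterate until the extension stops changing. The decreasing chain
    stabilizes, and the resulting set is a fixpoint of the strengthening."""
    while True:
        nxt = strengthen(ext)
        if nxt == ext:
            return ext
        ext = nxt
```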
\noindent
Note that limit protocols $P^{\heartsuit\ast}$ are \emph{not} in the logical
language, unlike their constituents $P^{\heartsuit k}$. We now define
$P^{\square \ast}$ as \emph{Hard Uniform Backward Induction}, and
$P^{\lozenge \ast}$ as \emph{Soft Uniform Backward Induction}.
Again using induction on Theorem~\ref{thm:ubr-is-one-step}, it follows that
Uniform Backward Induction is the same as arbitrarily often iterated Uniform
Backward Defoliation.
\begin{corollary}\label{cor:ubi}
\[ P^{\square \ast} = P^{\mathsf{HUBD}\ast} \text{ and }
P^{\lozenge \ast} = P^{\mathsf{SUBD}\ast}. \]
\end{corollary}
\begin{example}
Consider $P=\mathit{LNS}$.
The number of $\mathit{LNS}$ calls between $n$ agents is bounded by $\binom{n}{2}= n(n-1)/2$.
The limit $\mathit{LNS}^{\heartsuit\ast}$ is therefore reached after a finite number of
iterations, and expressible in the gossip protocol language:
$\mathit{LNS}^{\heartsuit n(n-1)/2} = \mathit{LNS}^{\heartsuit\ast}$.
\end{example}
As a further observation, the look-ahead strengthenings are not always the limits
of one-step strengthenings. In other words, we do \emph{not} have for all $G$
that $P^{\square\ast}(G) = P^\blacksquare(G)$ or that $P^{\lozenge\ast}(G) = P^\blacklozenge(G)$.
Counterexamples are the ``N'' graph from Example~\ref{example:nExample} and the
extension of various strengthenings relating to the example in the upcoming
Section~\ref{subsec:diamond}, as shown in Table~\ref{table:DiamondExampleExtensions}
in the Appendix.
However, we know by the Knaster-Tarski theorem~\cite{Tarski1955:LatticeThm}
that on any gossip graph soft one-step strengthening $\lozenge$ has
a unique greatest fixpoint, because $\lozenge$ is monotone and the
lattice we are working in is the powerset of the set of all call sequences and
thereby complete.
\subsection{Detailed Example: the Diamond Gossip Graph}\label{subsec:diamond}
Consider the initial ``diamond'' gossip graph below.
\begin{center}
\begin{tikzpicture}
\node (0) at (0, 2) {0};
\node (1) at (0,-2) {1};
\node (2) at (-2,0) {2};
\node (3) at (2,0) {3};
\draw [->,dashed] (2) -- (0);
\draw [->,dashed] (2) -- (1);
\draw [->,dashed] (3) -- (0);
\draw [->,dashed] (3) -- (1);
\end{tikzpicture}
\end{center}
There are 92 different terminating sequences of $\mathit{LNS}$ calls for this initial graph
of which 48 are successful and 44 are unsuccessful.
Below we also give an overview of all sequences.
For brevity we only list them in the compact way, up to the call after which
success has been decided.
\[ \begin{array}{ll}
20;01 & \times \\
20;21 & \times \\
20;30;01 & \checkmark \\
20;30;21 & \times \\
20;30;31 & \checkmark \\
20;31 & \checkmark \\
\end{array}
\hspace{2em}
\begin{array}{ll}
21;10 & \times \\
21;20 & \times \\
21;30 & \checkmark \\
21;31;10 & \checkmark \\
21;31;20 & \times \\
21;31;30 & \checkmark \\
\end{array}
\hspace{2em}
\begin{array}{ll}
30;01 & \times \\
30;20;01 & \checkmark \\
30;20;21 & \checkmark \\
30;20;31 & \times \\
30;21 & \checkmark \\
30;31 & \times \\
\end{array}
\hspace{2em}
\begin{array}{ll}
31;10 & \times \\
31;20 & \checkmark \\
31;21;10 & \checkmark \\
31;21;20 & \checkmark \\
31;21;30 & \times \\
31;30 & \times \\
\end{array} \]
Table~\ref{table:diamondStatistics} shows how many sequences are permitted
by the different strengthenings. Both soft strengthenings rule out some
unsuccessful sequences and no successful ones.
The hard look-ahead strengthening removes some successful sequences
and rules out the same number of unsuccessful sequences as the soft look-ahead
strengthening, but interestingly this is a different set.
This demonstrates that Table~\ref{table:diamondStatistics} may be misleading:
the same number of sequences does not imply the same set of sequences. Table~\ref{table:DiamondExampleExtensions} in the Appendix is more detailed and lists sequences. If a further iteration of a strengthening does not change the number and also not the set of sequences, it has the same extension, and is therefore a fixpoint.
For example, Table~\ref{table:DiamondExampleExtensions} shows that $\mathit{LNS}^{\lozenge 2}$ and $\mathit{LNS}^{\lozenge 3}$ both have $48$ successful and $32$ unsuccessful sequences on the diamond graph.
They also have the same extension, hence $\mathit{LNS}^{\lozenge 2}$ is a fixpoint of $\lozenge$ on this graph.
\begin{table}
\centering
\begin{tabular}{lrr}
Protocol & \# successful & \# unsuccessful \\
\toprule
$\mathit{LNS}$ & 48 & 44 \\
$\mathit{LNS}^\blacksquare$ & 8 & 8 \\
$\mathit{LNS}^{\blacksquare 2}$ & 0 & 4 \\
$\mathit{LNS}^{\blacksquare 3}$ & 0 & 0 \\
$\mathit{LNS}^\blacklozenge$ & 48 & 8 \\
$\mathit{LNS}^{\blacklozenge 2}$ & 48 & 8 \\
$\mathit{LNS}^{\blacklozenge 3}$ & 48 & 8 \\
$\mathit{LNS}^\square$ & 24 & 36 \\
$\mathit{LNS}^{\square 2}$ & 8 & 16 \\
$\mathit{LNS}^{\square 3}$ & 8 & 4 \\
$\mathit{LNS}^{\square 4}$ & 0 & 4 \\
$\mathit{LNS}^{\square 5}$ & 0 & 0 \\
$\mathit{LNS}^\lozenge$ & 48 & 36 \\
$\mathit{LNS}^{\lozenge 2}$ & 48 & 32 \\
$\mathit{LNS}^{\lozenge 3}$ & 48 & 32 \\
${(\mathit{LNS}^\lozenge)}^{\square 3}$ & 16 & 0 \\
${({(\mathit{LNS}^\lozenge)}^\square)}^\blacksquare$ & 16 & 0 \\
\end{tabular}
\caption{Statistics for the diamond example.}\label{table:diamondStatistics}
\end{table}
Recall that one-step strengthening is uniform backward
defoliation (Theorem~\ref{thm:ubr-is-one-step}) and that the limit of one-step strengthening is uniform
backward induction (Corollary~\ref{cor:ubi}).
Table~\ref{table:diamondStatistics} shows the difference between
the look-ahead strengthenings and the one-step/defoliation strengthenings.
Although on this ``diamond'' graph the hard strengthenings $\mathit{LNS}^{\blacksquare k}$
and $\mathit{LNS}^{\square k}$ have the same fixpoint, namely the empty extension
(reached for $k \geq 3$ and $k \geq 5$, respectively), the soft strengthenings
$\mathit{LNS}^{\blacklozenge k}$ and $\mathit{LNS}^{\lozenge k}$ have different
fixpoints. Both soft fixpoints are reached by $k=2$.
We now present two strengthenings that are strongly successful on this graph
(only successfully terminating call sequences remain).
Firstly, consider the protocol ${(\mathit{LNS}^\lozenge)}^{\square 3}$.
Its extension is as follows, see also Tables~\ref{table:diamondStatistics}
and~\ref{table:DiamondExampleExtensions}.
\[ \begin{array}{l}
20;30;01;31;21 \\
20;30;31;01;21 \\
20;31;10;30;21 \\
20;31;30;10;21 \\
\end{array}\hspace{1em}
\begin{array}{l}
21;30;01;31;20 \\
21;30;31;01;20 \\
21;31;10;30;20 \\
21;31;30;10;20 \\
\end{array}\hspace{1em}
\begin{array}{l}
30;20;01;21;31 \\
30;20;21;01;31 \\
30;21;10;20;31 \\
30;21;20;10;31 \\
\end{array}\hspace{1em}
\begin{array}{l}
31;20;01;21;30 \\
31;20;21;01;30 \\
31;21;10;20;30 \\
31;21;20;10;30 \\
\end{array} \]
Its extension has no sequences
with only four calls. There are sequences with redundant second-to-last calls, for
example $10$ in $20;31;30;10;21$.
Secondly, we present a protocol that is strongly successful on this graph and that has no redundant calls. Its description is far more involved than that of the previous protocol, but the effort seems worthwhile, as it shows that: $(i)$ for some initial gossip graphs we can strengthen $\mathit{LNS}$ to obtain extensions that are strongly successful as well as optimal; $(ii)$ the hard and soft strengthening procedures described so far merely scratch the surface, because one can easily show that the following protocol does not correspond to any of them or their iterations.
We first describe it as a semantic protocol, liberally referring to call
histories in our description (which cannot be done in our logical language)
and only then give a formalization using the syntax of our protocol logic.
Consider the following semantic protocol:
\begin{quote}
(1) agent 2 or agent 3 makes a call to either 0 or 1.
(2) the agent among 2 and 3 that did not make a call in step (1) calls either 0 or 1.
(3) the agent $x$ that made the call in step (2) now makes a second call;
if $x$ called agent 1 before then $x$ now calls 0 and vice versa.
(4) the agent $y$ that made the call in step (1) now makes a second call;
if $y$ called agent 1 before then $y$ now calls 0 and vice versa.
(5) if the agent $z$ that was called in step (2) is not yet an expert,
then $z$ calls the last remaining agent whose secret $z$ does not know.
\end{quote}
Now let us explain why this protocol is strongly successful on the ``diamond'' graph, and why it is a strengthening of $\mathit{LNS}$. There are four possibilities for the first call: 2 may call 0, 2 may call 1, 3 may call 0 or 3 may call 1. These four cases are symmetrical, so let us assume that the first call is 20. The next call will then be made by agent 3, and there are two possibilities: either 3 also calls agent 0, or 3 calls agent 1. The call sequences, and the secrets known by the agents after each call has been made, are shown in the following two tables.
\[\begin{array}{cccccc}
\multicolumn{6}{c}{\text{First case: 2 and 3 call the same agent}}\\
\text{Stage} & \text{Call} & 0 & 1 & 2 & 3\\
(1) & 20 & \{0,2\} & \{1\} & \{0,2\} & \{3\}\\
(2) & 30 & \{0,2,3\} & \{1\} & \{0,2\} & \{0,2,3\}\\
(3) & 31 & \{0,2,3\} & \{0,1,2,3\} & \{0,2\} & \{0,1,2,3\}\\
(4) & 21 & \{0,2,3\} & \{0,1,2,3\} & \{0,1,2,3\} & \{0,1,2,3\}\\
(5) & 01 & \{0,1,2,3\} & \{0,1,2,3\} & \{0,1,2,3\} & \{0,1,2,3\}
\end{array}\]
\[\begin{array}{cccccc}
\multicolumn{6}{c}{\text{Second case: 2 and 3 call different agents}}\\
\text{Stage} & \text{Call} & 0 & 1 & 2 & 3\\
(1) & 20 & \{0,2\} & \{1\} & \{0,2\} & \{3\}\\
(2) & 31 & \{0,2\} & \{1,3\} & \{0,2\} & \{1,3\}\\
(3) & 30 & \{0,1,2,3\} & \{1,3\} & \{0,2\} & \{0,1,2,3\}\\
(4) & 21 & \{0,1,2,3\} & \{0,1,2,3\} & \{0,1,2,3\} & \{0,1,2,3\}
\end{array}\]
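The bookkeeping in these two tables is easy to check mechanically. The following Python sketch (our own, tracking only the secret distribution) replays both call sequences:

```python
def run(calls, n=4):
    """Replay a call sequence, tracking only who knows whose secret:
    in each call both agents pool their secrets. Returns the list of
    secret distributions after each call."""
    secrets = [{i} for i in range(n)]
    history = []
    for a, b in calls:
        merged = secrets[a] | secrets[b]
        secrets[a], secrets[b] = set(merged), set(merged)
        history.append([set(s) for s in secrets])
    return history

# First case: 2 and 3 call the same agent (20;30;31;21;01).
first = run([(2, 0), (3, 0), (3, 1), (2, 1), (0, 1)])
# Second case: 2 and 3 call different agents (20;31;30;21).
second = run([(2, 0), (3, 1), (3, 0), (2, 1)])
```

In both cases the final distribution makes every agent an expert, as shown in the tables.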
Note that all of these calls are possible, in the sense that all callers know the number of the agent they are calling.
Agents 2 and 3 start out knowing the numbers of 0 and 1, so the calls 20, 30, 21 and 31 are possible from the start.
Furthermore, agent 0 learns the number of agent 1 from agent 2 in the first call, so after the call 20 the call 01 is also possible.
In the second case there is no fifth call, since the agent that received the call in step (2) is already an expert after step (4).
As a result, there are no redundant calls in either possible call sequence.
Furthermore, in either case, all agents become experts.
Finally, every call is to an agent whose secret is unknown to the caller before the call.
So, the described protocol is a strongly successful strengthening of $\mathit{LNS}$.
The two call sequences shown above are possible if the first call is $20$.
There are six other call sequences corresponding to the other three options for the first call.
Overall, the protocol allows the following 8 sequences.
\[ \begin{array}{l}
20;30;31;21;01 \\
20;31;30;21 \\
\end{array}\hspace{1em}
\begin{array}{l}
21;31;30;20;10 \\
21;30;31;20 \\
\end{array}\hspace{1em}
\begin{array}{l}
30;20;21;31;01 \\
30;21;20;31 \\
\end{array}\hspace{1em}
\begin{array}{l}
31;21;20;30;10 \\
31;20;21;30 \\
\end{array}\]
We can also define a syntactic protocol that has the above semantic protocol as its extension.
This syntactic protocol is not particularly elegant, but it illustrates how the logical language can be used to express more complex conditions.
The call condition $P_{ij}$ of this syntactic protocol is of the form $P_{ij}=K_i\psi_{ij}$ (where $K_i$ abbreviates $K_i^{ANY}$, as defined in Section~\ref{subsec:language}).
This guarantees that the protocol is epistemic, because Lemma~\ref{lemma:str} implies that $K_i\psi_{ij}\rightarrow K_i^P K_i\psi_{ij}$ is valid.
The formula $\psi_{ij}$ is a disjunction with the following five disjuncts, one for each of the clauses (1) -- (5) of the protocol as described above.
The formula $\phi_0 := \bigwedge_k\bigwedge_{l\not = k} \neg S_kl$ holds if and only if no calls have taken place yet. Since agents 2 and 3 are the only ones that know the number of another agent, if $\phi_0$ is true then any agent who can make a call is allowed to make that call. So $\phi_0$ is the first disjunct of $\psi_{ij}$, enabling the call in stage (1).
Defining ``exactly one call has been made'' is a bit harder, but we can do it: after the first call, there will be two agents that know two secrets, while everyone else only knows one secret.
So $\phi_1 := \bigvee_{k\not = l}(S_kl \wedge S_lk\wedge \bigwedge_{m\not \in \{k,l\}}\bigwedge_{n\not = m}\neg S_mn)$ holds if and only if exactly one call has been made.
In that case, any agent that is capable of making calls and only knows their own secret is allowed to make a call, so $\phi_1\wedge \bigwedge_{k\not = i}\neg S_ik$ is the second disjunct of $\psi_{ij}$, enabling the call in stage (2).
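As a sanity check, the state conditions $\phi_0$ and $\phi_1$ are straightforward to evaluate programmatically. The sketch below (our own helper names) represents a secret distribution as a map from agents to the sets of secrets they know, mirroring the two formulas just defined.

```python
def phi0(secrets):
    # phi_0: no call yet, i.e. nobody knows a secret other than their own
    return all(secrets[a] == {a} for a in secrets)

def phi1(secrets):
    # phi_1: exactly one call, i.e. some pair k, l know each other's secrets
    # while every other agent still knows only their own secret
    return any(
        l in secrets[k] and k in secrets[l]
        and all(secrets[m] == {m} for m in secrets if m not in {k, l})
        for k in secrets for l in secrets if k != l
    )

initial = {a: {a} for a in range(4)}
after_20 = {0: {0, 2}, 1: {1}, 2: {0, 2}, 3: {3}}  # state after the call 20

assert phi0(initial) and not phi1(initial)
assert phi1(after_20) and not phi0(after_20)
```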
In stage (3), the second caller is supposed to make another call.
We make a case distinction based on whether the first two calls were to the same agent or to different agents.
If they were to the same agent, then the second caller now knows three different secrets: $\bigvee_{k\not = i}\bigvee_{l\not \in \{i,k\}}S_ikl$.
But that holds not only for the agent who made the second call, but also for the agent that received the second call.
The difference between them is that the secret of the receiver of this call is now known by three agents, while the secret of the caller is known by only two: $\bigwedge_{k\not = i}(S_ki\rightarrow \bigwedge_{l\not \in \{i,k\}}\neg S_li)$.
If the first two calls were to different agents, the second caller knows that every agent now knows exactly two secrets: $K_i\bigwedge_k\bigvee_{l\not = k}(S_kl\wedge \bigwedge_{m\not \in \{k,l\}}\neg S_km)$.
This holds for the receiver of the second call as well, but the difference between them is that the number of the receiver is known to an agent who does not know their secret, while the number of the caller is not: $\bigwedge_{k}(N_ki\rightarrow S_ki)$.
In either case, the target of the call should be the unique agent whose number the caller knows but whose secret the caller does not know.
Since calls are always to an agent whose number is known, we only have to stipulate that the target's secret is not known.
So the third disjunct of $\psi_{ij}$ is
\begin{align*} \neg S_ij\wedge (&(\bigvee_{k\not = i}\bigvee_{l\not \in \{i,k\}}S_ikl \wedge \bigwedge_{k\not = i}(S_ki\rightarrow \bigwedge_{l\not \in \{i,k\}}\neg S_li)) \vee \\
& (K_i\bigwedge_k\bigvee_{l\not = k}(S_kl\wedge \bigwedge_{m\not \in \{k,l\}}\neg S_km)\wedge \bigwedge_{k}(N_ki\rightarrow S_ki))),\end{align*}
enabling the call in stage (3).
It is relatively easy to express when the call in stage (4) should happen: before the third call, all agents know that there is no expert yet, while after the third call all agents consider it possible that there is at least one expert.
This can be expressed as $\hat{K}_i\bigvee_{k}\mathit{Ex}_k$.
It is slightly more difficult to identify the agent who should make the call.
This agent, the one who made the call in stage (1), is the only agent who knows only two secrets and whose number is known only by agents that also know their secret.
This is captured by $\neg\bigvee_{k\not = i}\bigvee_{l\not \in \{i,k\}}S_ikl \wedge \bigwedge_{k}(N_ki\rightarrow S_ki)$.
Finally, the person who should be called in this stage is the unique agent of whom the caller knows the number but not the secret.
The fourth disjunct is therefore $\neg S_ij\wedge \hat{K}_i\bigvee_k\mathit{Ex}_k \wedge \neg\bigvee_{k\not = i}\bigvee_{l\not \in \{i,k\}}S_ikl \wedge \bigwedge_{k}(N_ki\rightarrow S_ki)$.
Finally, the call in stage (5) should only happen if there remains a non-expert agent.
This non-expert considers it possible that all other agents are experts, so the final disjunct of $\psi_{ij}$ is $\neg S_ij \wedge \hat{K}_i\bigwedge_{k\not = i}\mathit{Ex}_k$.
On the ``diamond'' graph the extension of the syntactic protocol with call condition $P_{ij}$ is the semantic protocol defined above.
Clearly, this protocol is symmetric.
We already showed that the protocol is epistemic as well.
All in all, this gives us the protocol that we were looking for.
Manually verifying the extension of the protocol is somewhat tedious, so we have also checked the extension using the model checking tool described in the Appendix.
\section{An Impossibility Result on Strengthening LNS}\label{sec:imposs}
\subsection{An Impossibility Result}
In this section we will show that there are graphs where
(i) $\mathit{LNS}$ is weakly successful and
(ii) no epistemic symmetric strengthening of $\mathit{LNS}$ is strongly successful.
Recall that we assume that the system is synchronous and that the initial
gossip graph is common knowledge.
Without such assumptions it is even easier to obtain such an impossibility
result, a matter that we will address in the final section.
\begin{theorem}\label{thm:StrongImposs}
There is no epistemic symmetric protocol that is a strongly successful
strengthening of $\mathit{LNS}$ on all graphs.
\end{theorem}
\begin{proof}
Consider the following ``candy'' graph $G$:
\begin{center}
\begin{tikzpicture}[scale=0.6]
\node (0) at (4,4) {0};
\node (1) at (-1,2) {1};
\node (2) at (2,2) {2};
\node (3) at (6,2) {3};
\node (4) at (9,2) {4};
\node (5) at (4,0) {5};
\draw[->,dashed] (0) -- (2);
\draw[->,dashed] (0) -- (3);
\draw[->,dashed] (1) -- (2);
\draw[->,dashed] (5) -- (2);
\draw[->,dashed] (5) -- (3);
\draw[->,dashed] (4) -- (3);
\end{tikzpicture}
\end{center}
$\mathit{LNS}$ is weakly successful on $G$, but there is no epistemic symmetric protocol
$P$ that is a strengthening of $\mathit{LNS}$ and that is strongly successful on $G$.
In~\cite{DEPRS2015:DynamicGossip}, it was shown that $\mathit{LNS}$ is weakly successful on any
graph that is neither a ``bush'' nor a ``double bush''. Since this graph
$G$ is neither a bush nor a double bush, $\mathit{LNS}$ is weakly successful on
it.
For example, the sequence
\begin{equation*}02;12;53;43;13;03;23;52;42\end{equation*}
is a successful $\mathit{LNS}$ sequence which makes everyone an expert.
LNS is not strongly successful on this graph, however. For example,
\begin{equation*}02;12;53;43;13;03;52;42\end{equation*}
is an unsuccessful $\mathit{LNS}$ sequence, because $5$ learns neither the number
nor the secret of $4$ and no further calls are allowed.
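Both claims can be replayed mechanically. The following sketch (our own code, independent of the model checker described in the Appendix) simulates dynamic-gossip calls on the candy graph and confirms that the first sequence makes everyone an expert, while the second strands agent 5 as a non-expert with no further calls possible.

```python
def make_state(numbers):
    # numbers[a] = numbers agent a knows initially (own number is implicit)
    return {a: (set(ns) | {a}, {a}) for a, ns in numbers.items()}

def permitted(state, x, y):
    # LNS: x may call y iff x knows y's number but not y's secret
    nx, sx = state[x]
    return x != y and y in nx and y not in sx

def call(state, x, y):
    # dynamic gossip: both agents pool their numbers and secrets
    assert permitted(state, x, y)
    n = state[x][0] | state[y][0]
    s = state[x][1] | state[y][1]
    new = dict(state)
    new[x], new[y] = (set(n), set(s)), (set(n), set(s))
    return new

def run(state, seq):
    for x, y in seq:
        state = call(state, x, y)
    return state

def experts(state):
    return {a for a, (_, s) in state.items() if s == set(state)}

def no_call_possible(state):
    return not any(permitted(state, x, y) for x in state for y in state)

# the "candy" graph: dashed arrows give the initially known numbers
candy = make_state({0: {2, 3}, 1: {2}, 2: set(), 3: set(), 4: {3}, 5: {2, 3}})

good = [(0, 2), (1, 2), (5, 3), (4, 3), (1, 3), (0, 3), (2, 3), (5, 2), (4, 2)]
bad = [(0, 2), (1, 2), (5, 3), (4, 3), (1, 3), (0, 3), (5, 2), (4, 2)]

assert experts(run(candy, good)) == {0, 1, 2, 3, 4, 5}
stuck = run(candy, bad)
assert 5 not in experts(stuck) and no_call_possible(stuck)
```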
Now, suppose towards a contradiction that $P$ is an epistemic symmetric
strengthening of $\mathit{LNS}$, and that $P$ is strongly successful on $G$.
Before we look at specific calls made by $P$, we consider a general fact.
Recall that knowing a \emph{pure number} means knowing the number of an
agent without knowing their secret. For any gossip graph and
any agent $a$, if no one has $a$'s pure number, then no call sequence will
result in anyone learning $a$'s pure number. After all, in order to learn $a$'s
number, one would have to call or be called by someone who already knows that
number, but in such a call one would also learn $a$'s secret.
In $\mathit{LNS}$, you are only allowed to call an agent if you have the number but not the
secret of that agent, i.e., if you have their pure number. It follows that if,
in a given gossip graph, no one has $a$'s pure number, then no $\mathit{LNS}$ sequence on
that graph will contain any calls where $a$ is the receiver.
In the gossip graph $G$ under consideration, agents 0, 1, 4 and 5 are
in the situation that no one else knows their number. So in particular, no one
knows the pure number of any of these agents. It follows that 2 and 3 are the
only possible targets for $\mathit{LNS}$ calls in this graph.
Now, let us consider the first call according to $P$. This call must target
$2$ or $3$. The calls $12$ and $43$ are bad calls, since they would result in
1 (resp.~4) being unable to make calls or be called, while still not being an
expert.
This means that either 0 or 5 must make the first call. By symmetry, we can
assume without loss of generality that the first call is $02$. This yields the
following situation.
\begin{center}
\begin{tikzpicture}[scale=0.6]
\node (0) at (4,4) {0};
\node (1) at (-1,2) {1};
\node (2) at (2,2) {2};
\node (3) at (6,2) {3};
\node (4) at (9,2) {4};
\node (5) at (4,0) {5};
\draw[<->] (0) -- (2);
\draw[->,dashed] (2) -- (3);
\draw[->,dashed] (0) -- (3);
\draw[->,dashed] (1) -- (2);
\draw[->,dashed] (5) -- (2);
\draw[->,dashed] (5) -- (3);
\draw[->,dashed] (4) -- (3);
\end{tikzpicture}
\end{center}
Now, let us look at the next call.
\begin{itemize}
\item The sequence $02;43$ is bad, because that would make it impossible
for 4 to ever become an expert.
\item Because of the symmetry of $P$, the initial call could have been $03$
instead of $02$. The sequence $03;12$ is bad, since 1 cannot become an
expert, so $03;12$ is not allowed by the strongly successful protocol $P$.
But agent 1 cannot tell the difference between $03$ and $02$, so from the fact
that $03;12$ is disallowed and that $P$ is epistemic it follows that $02;12$
is also disallowed.
\item The sequence $02;03$ is bad, since $0$ will not be able to make any
call afterwards. Because $0$ can also never be called, this implies that $0$
will never become an expert.
\item Consider then the sequence $02;23$. This results in the following diagram.
\begin{center}
\begin{tikzpicture}[scale=0.6]
\node (0) at (4,4) {0};
\node (1) at (-1,2) {1};
\node (2) at (2,2) {2};
\node (3) at (6,2) {3};
\node (4) at (9,2) {4};
\node (5) at (4,0) {5};
\draw[<->] (0) -- (2);
\draw[<->] (2) -- (3);
\draw[->] (3) to[bend right] (0);
\draw[->,dashed] (0) -- (3);
\draw[->,dashed] (1) -- (2);
\draw[->,dashed] (5) -- (2);
\draw[->,dashed] (5) -- (3);
\draw[->,dashed] (4) -- (3);
\end{tikzpicture}
\end{center}
This graph has the following property: it is impossible (in any $\mathit{LNS}$
sequence) for any agent to get to learn a new pure number. That is, nobody
can learn a new number without also getting to know the secret of that agent:
agents 1, 0, and 4 each know only one pure number, so they cannot teach anyone a
new number, and agent 5 knows two pure numbers (2 and 3), but those agents
already know each other's secrets.
As a result, any call that will become allowed by $\mathit{LNS}$ in the future is already
allowed now. There are 5 such calls that are currently allowed, namely 12,
52, 53, 03 and 43. Furthermore, of those calls 52 and 53 are mutually exclusive,
since calling 2 will teach 5 the secret of 3, and calling 3 will teach 5 the
secret of 2.
So any continuation of $02;23$ allowed by $\mathit{LNS}$ can only contain (in any order)
12, 03, 43 and either 52 or 53. Since $P$ is a strengthening of $\mathit{LNS}$, the same
holds for $P$. But using only those calls, there is no way to teach 3 the secret
of 1: secret 1 can reach agent 2 using the call 12, but in order for the secret
to travel any further we need the call 52. After that call only 03 and 43 are
still allowed (in particular, 53 is ruled out), so the knowledge of secret 1
remains limited to agents 1, 2 and 5.
Since $02;23$ cannot be extended to a successful $\mathit{LNS}$ sequence, $02;23$ must be disallowed.
\item Consider the call sequence $02;52$. This gives the following diagram.
\begin{center}
\begin{tikzpicture}[scale=0.6]
\node (0) at (4,4) {0};
\node (1) at (-1,2) {1};
\node (2) at (2,2) {2};
\node (3) at (6,2) {3};
\node (4) at (9,2) {4};
\node (5) at (4,0) {5};
\draw[<->] (0) -- (2);
\draw[->,dashed] (2) -- (3);
\draw[->,dashed] (0) -- (3);
\draw[->,dashed] (1) -- (2);
\draw[<->] (5) -- (2);
\draw[->] (5) -- (0);
\draw[->,dashed] (5) -- (3);
\draw[->,dashed] (4) -- (3);
\end{tikzpicture}
\end{center}
Note that in this situation, it is impossible for agents 3 and 4 to learn any
new number without also learning the secrets corresponding to those numbers:
there is no agent that knows the number of agent 3 and that also knows another
pure number, and this will remain the case whatever other calls happen.
This means that agent 3 cannot make any calls, and that agent 4 can make
exactly one call, to agent 3.
Suppose now that $02;52$ is extended to a successful $\mathit{LNS}$ sequence. This
sequence has to contain the call 43 at some point. This will be the only call
by agent 4, so in order for the sequence to be successful, agent 3 already
has to know secret 1 by the time 43 takes place.
In particular, this means that the call 12 has already happened, and that
either agent 1 or agent 2 has then called agent 3 to transmit this secret.
Whichever agent among 1 and 2 makes this call, afterwards they are unable to
make any more calls. Furthermore, this takes place before the call 43, so
whatever agent $x \in \{ 1,2 \}$ informs 3 of secret 1 does not learn secret
4. Since this agent $x$ can neither make another call nor be called, it
follows that $x$ does not become an expert.
So $02;52$ is not allowed by $P$ which we assumed to be strongly successful.
\item Finally, consider the call sequence $02;53$. By symmetry, 03 could have
been the first call as opposed to 02. Furthermore, the same reasoning that
showed 02;52 to be unsuccessful above can, with an appropriate permutation of
agents, be used to show that 03;53 is unsuccessful.
Agent 5 cannot distinguish between the first call 02 and 03 before making the
call $53$, so if $03;53$ is disallowed then so is $02;53$ because $P$ is
epistemic.
\end{itemize}
Remember that $02$ is, without loss of generality, the only initial call that
can lead to success.
We have shown that all of the $\mathit{LNS}$-permitted calls following the initial call
02 (namely, the calls 43, 12, 03, 23, 52 and 53) are disallowed by $P$.
This contradicts $P$ being a strongly successful strengthening of $\mathit{LNS}$.
\end{proof}
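The two most involved cases in the proof, that $02;23$ and $02;52$ admit no successful $\mathit{LNS}$ continuation, can also be confirmed by a brute-force search over all continuations. The following sketch is our own code, not the tool from the Appendix.

```python
def permitted(state, x, y):
    # LNS: x may call y iff x knows y's number but not y's secret
    nx, sx = state[x]
    return x != y and y in nx and y not in sx

def call(state, x, y):
    # dynamic gossip: both agents pool their numbers and secrets
    assert permitted(state, x, y)
    n = state[x][0] | state[y][0]
    s = state[x][1] | state[y][1]
    new = dict(state)
    new[x], new[y] = (set(n), set(s)), (set(n), set(s))
    return new

def run(state, seq):
    for x, y in seq:
        state = call(state, x, y)
    return state

def some_continuation_succeeds(state, agents):
    # depth-first search over all LNS continuations of the given state
    moves = [(x, y) for x in agents for y in agents if permitted(state, x, y)]
    if not moves:
        return all(state[a][1] == set(agents) for a in agents)
    return any(some_continuation_succeeds(call(state, x, y), agents)
               for x, y in moves)

# the "candy" graph as (known numbers, known secrets) per agent
candy = {0: ({0, 2, 3}, {0}), 1: ({1, 2}, {1}), 2: ({2}, {2}),
         3: ({3}, {3}), 4: ({3, 4}, {4}), 5: ({2, 3, 5}, {5})}
agents = set(candy)

# LNS is weakly successful from the initial graph ...
assert some_continuation_succeeds(candy, agents)
# ... but no continuation of 02;23 or of 02;52 can succeed
assert not some_continuation_succeeds(run(candy, [(0, 2), (2, 3)]), agents)
assert not some_continuation_succeeds(run(candy, [(0, 2), (5, 2)]), agents)
```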
\subsection{Backward Induction and Look-Ahead applied to Candy}
Given this impossibility result, it is natural to wonder what would happen if we
use the syntactic strengthenings from Definition~\ref{def:strengthening}, or
their iterations, on the ``candy'' graph $G$.
All second calls are eliminated by $\mathit{LNS}^\blacksquare$, because for any two agents $a$
and $b$ we have $G, 02 \models \lnot K^\mathit{LNS}_a [ab] \langle \mathit{LNS} \rangle \mathit{Ex}$.
By symmetry this also holds for the three other possible first calls,
hence $\mathit{LNS}^\blacksquare$ is unsuccessful on $G$.
However, the first calls \emph{are} still allowed according to $\mathit{LNS}^\blacksquare$.
There are 9468 $\mathit{LNS}$-sequences on this graph of which 840 are successful.
Using the implementation discussed in the Appendix we found out that
$\mathit{LNS}^\blacklozenge$, the soft look-ahead strengthening of $\mathit{LNS}$, is weakly successful
on this graph and allows 840 successful and 112 unsuccessful sequences.
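The sequence counts can also be recomputed by exhaustively exploring all maximal $\mathit{LNS}$ sequences on the candy graph. The sketch below is our own code; the fact that $\mathit{LNS}$ is weakly but not strongly successful here shows up as both counts being positive.

```python
def permitted(state, x, y):
    # LNS: x may call y iff x knows y's number but not y's secret
    nx, sx = state[x]
    return x != y and y in nx and y not in sx

def call(state, x, y):
    # dynamic gossip: both agents pool their numbers and secrets
    n = state[x][0] | state[y][0]
    s = state[x][1] | state[y][1]
    new = dict(state)
    new[x], new[y] = (set(n), set(s)), (set(n), set(s))
    return new

def classify(state, agents):
    # returns (successful, unsuccessful) counts over all maximal sequences
    moves = [(x, y) for x in agents for y in agents if permitted(state, x, y)]
    if not moves:
        success = all(state[a][1] == set(agents) for a in agents)
        return (1, 0) if success else (0, 1)
    good = bad = 0
    for x, y in moves:
        g, b = classify(call(state, x, y), agents)
        good, bad = good + g, bad + b
    return good, bad

# the "candy" graph as (known numbers, known secrets) per agent
candy = {0: ({0, 2, 3}, {0}), 1: ({1, 2}, {1}), 2: ({2}, {2}),
         3: ({3}, {3}), 4: ({3, 4}, {4}), 5: ({2, 3, 5}, {5})}

good, bad = classify(candy, set(candy))
print(good, "successful and", bad, "unsuccessful maximal LNS sequences")
assert good > 0 and bad > 0  # weakly but not strongly successful
```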
\section{Conclusions, Comparison, and Further Research}\label{sec:generalizations}\label{sec:conclusion}
\paragraph*{Conclusions}
We modeled common knowledge of protocols in the setting of distributed dynamic
gossip. A crucial role is played by the novel notion of protocol-dependent
knowledge. This knowledge is interpreted using an epistemic relation over
states in the execution tree of a gossip protocol in a given gossip graph. As
the execution tree consists of gossip states resulting from calls permitted by
the protocol, this requires a careful semantic framework.
We described various syntactically or semantically definable strengthenings of
gossip protocols, and investigated the combination and iteration of such
strengthenings, in view of strengthening a weakly successful protocol into one
that is strongly successful on all graphs.
In the setting of gossip, a novel notion we used in such strengthenings is that
of uniform backward induction, as a variation on backward induction in search
trees and game trees.
Finally, we proved that for the $\mathit{LNS}$ protocol, in which agents are only allowed
to call other agents if they do not know their secrets, it is impossible to
define a strengthening that is strongly successful on all graphs.
\paragraph*{Comparison}
As already described at length in the introductory section, our work builds
upon prior work on dynamic distributed
gossip~\cite{DEPRS2015:DynamicGossip,DEPRS2017:EpistemicGossip},
which itself has a prior history in the networks
community~\cite{BLL1999:DiscovDistrib,KSSV2000:RandomRumor,Haeupler2015:SimpleSpread}
and in the logic community~\cite{ADGH2014:KnowledgeGossip,AptGroHoe2015:EpisDistGos}.
Many aspects of gossip may or may not be common knowledge among agents: how many
agents there are, the time of a global clock, the gossip graph, etc. The point
of our result is that even under the strongest such assumptions, one can still
not guarantee that a gossip protocol always terminates successfully.
How common knowledge of agents is affected by gossip protocol execution is
investigated in~\cite{AptWoj2017:GossipCK}: for example, the authors demonstrate
how sender-receiver subgroup common knowledge is obtained (and lost) during calls.
However, they do not study common knowledge of gossip protocols.
We do not know of other work on that topic. Outside the area of
gossip, protocol knowledge has been well investigated in the epistemic logic
community~\cite{hoshi:phd,Wang10:phd,hvdetal.aij:2014}.
While the concept of backward induction is well-known in game theory
(see for example \cite{Aumann1995:BIandCKR}), it is only
used in perfect-information settings, where all agents know what the real world
or the actual state is. Our definition of \emph{uniform} backward induction is a
generalization of backward induction to the dynamic gossip setting, where only
partial observability is assumed. A concept akin to uniform backward induction
has been proposed in~\cite{Perea2014:BelFutRat}
(rooted in~\cite{BattiSini2002:StrongBelFwIR}), under the name of
\emph{common belief in future rationality}, with an accompanying recursive
elimination procedure called \emph{backward dominance}.\footnote{We kindly thank
Andr\'es Perea for his interactions.} As in our approach, this models a decision
rule faced with uncertainty over indistinguishable moves.
In~\cite{Perea2014:BelFutRat}, the players are utility maximizers with
probabilistic beliefs, which in our setting would correspond to
\emph{randomizing} over all indistinguishable moves/calls. As a decision rule
this is also known as the \emph{insufficient reason} (or \emph{Laplace})
criterion: all outcomes are considered equiprobable. Seeing uniform backward
induction as the combination of backward induction and a decision rule
immediately clarifies the picture.
Soft uniform backward induction applies the \emph{minimax regret} criterion for
the decision whom to call, minimizing the maximum utility loss. In contrast,
hard uniform backward induction applies the \emph{maximin utility} criterion,
maximizing the minimum utility (also known as risk-averse, pessimistic, or Wald
criterion).
In the gossip scenario, the unique minimum value is unsuccessful termination,
and the unique maximum value is successful termination.
Minimax prescribes that as long as the agent considers it possible that a call
leads to successful termination, the agent is allowed to make the call (as long
as the minimum of the maximum is success, go for it): the soft version.
Maximin prescribes that, as long as the agent considers it possible that a call
leads to unsuccessful termination, the agent should not make the call (as long as
the maximum of the minimum is failure, avoid it): the hard version.
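Abstracting away from gossip, the two decision rules differ only in how they treat the set of outcomes an agent considers possible for a call: the soft rule permits a call whenever success is among its possible outcomes, while the hard rule permits it only when failure is not. A minimal illustration (all names are our own):

```python
SUCCESS, FAILURE = "success", "failure"

def soft_permits(possible_outcomes):
    # minimax-regret style: allow the call as long as success is possible
    return SUCCESS in possible_outcomes

def hard_permits(possible_outcomes):
    # maximin style: allow the call only if failure is ruled out
    return FAILURE not in possible_outcomes

both = {SUCCESS, FAILURE}
assert soft_permits(both) and not hard_permits(both)
assert soft_permits({SUCCESS}) and hard_permits({SUCCESS})
assert not soft_permits({FAILURE}) and not hard_permits({FAILURE})
```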
Such decision criteria over uncertainty also crop up in areas overlapping with
social software and social choice, e.g.~\cite{BalSmeZve2009:KeepHoping,CoWaXi2011:ManVot,parikhetal:2013,Meir2015:PluVotUnc}.
In~\cite{BalSmeZve2009:KeepHoping} a somewhat similar concept has been called
``common knowledge of stable belief in rationality''. However, there it applies
to a weaker epistemic notion, namely belief.
\paragraph*{Further Research}
The impossibility result for $\mathit{LNS}$ is for dynamic gossip where agents
exchange both secrets and numbers, and where the network expands.
Also in the non-dynamic setting we can quite easily find a graph where static
$\mathit{LNS}$ is weakly successful but cannot be strengthened to an epistemic symmetric
strongly successful protocol.
Consider again the ``diamond'' graph of Section~\ref{subsec:diamond}, for which
we described various strongly successful strengthenings.
Also in ``static'' gossip $\mathit{LNS}$ is weakly successful on this graph, since $21;30;20;31$ is successful.
All four possible first calls are symmetric.
After $21$, the remaining possible calls are $20$, $31$ and $30$.
But $20$ is bad, since 2 will never learn secret 3 that way.
Also $31$ is bad, since agent 1 will never learn the secret of 0.
The call $30$ is safe and in fact guarantees success, but by epistemic symmetry it cannot be allowed while $31$ is disallowed.
Therefore, in the static setting it is impossible to strengthen $\mathit{LNS}$ on ``diamond'' such that it becomes strongly successful.
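This argument, too, can be checked by brute force. In the static setting only secrets are exchanged, so the number relation stays fixed; the sketch below (our own code) confirms that after $21$ the calls $20$ and $31$ doom the run, while $30$ guarantees success.

```python
def permitted(numbers, secrets, x, y):
    # static LNS: x may call y iff x initially knows y's number, not y's secret
    return x != y and y in numbers[x] and y not in secrets[x]

def call(secrets, x, y):
    # static gossip: only secrets are pooled; numbers never spread
    s = secrets[x] | secrets[y]
    new = dict(secrets)
    new[x], new[y] = set(s), set(s)
    return new

def some_continuation_succeeds(numbers, secrets, agents):
    moves = [(x, y) for x in agents for y in agents
             if permitted(numbers, secrets, x, y)]
    if not moves:
        return all(secrets[a] == set(agents) for a in agents)
    return any(some_continuation_succeeds(numbers, call(secrets, x, y), agents)
               for x, y in moves)

def every_continuation_succeeds(numbers, secrets, agents):
    moves = [(x, y) for x in agents for y in agents
             if permitted(numbers, secrets, x, y)]
    if not moves:
        return all(secrets[a] == set(agents) for a in agents)
    return all(every_continuation_succeeds(numbers, call(secrets, x, y), agents)
               for x, y in moves)

# the "diamond" graph: agents 2 and 3 know the numbers of 0 and 1
numbers = {0: {0}, 1: {1}, 2: {0, 1, 2}, 3: {0, 1, 3}}
secrets = {a: {a} for a in numbers}
agents = set(numbers)

after_21 = call(secrets, 2, 1)
# 20 and 31 are bad, while 30 guarantees success
assert not some_continuation_succeeds(numbers, call(after_21, 2, 0), agents)
assert not some_continuation_succeeds(numbers, call(after_21, 3, 1), agents)
assert every_continuation_succeeds(numbers, call(after_21, 3, 0), agents)
```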
We expect a completely different picture for strengthening ``static'' gossip protocols in similar fashion as we did here,
for dynamic gossip.
We assumed synchronicity (a global clock) and common knowledge of the initial
gossip graph. These strong assumptions were made on purpose, because without
them agents will have even less information available and will therefore not be
able to coordinate any better. Such and other parameters for gossip problems are
discussed in~\cite{DGHHK2016:GossipParameters}. It is unclear what results still
can be obtained under fully distributed conditions, where agents only know their
own history of calls and their neighbors.
We wish to determine the logic of protocol-dependent knowledge $K^P_a$, to
extend our results to fully distributed gossip protocols without a global clock,
and to generalize this approach beyond the setting of gossip.
\clearpage
\section*{Appendix: A Model Checker for Dynamic Gossip}
\addcontentsline{toc}{section}{Appendix: A Model Checker for Dynamic Gossip}
Analyzing examples of gossip graphs and their execution trees by hand is
tedious. To help us find and check the examples in this paper we wrote a
Haskell program which is available at \url{https://github.com/m4lvin/gossip}.
Our program can show and randomly generate gossip graphs, execute the protocols
we discussed and draw the resulting execution trees with epistemic edges.
The program also includes an epistemic model checker for the formal language
we introduced, similar to DEMO~\cite{JvE2007:DEMO}, but tailor-made for dynamic
gossip. For further details, see also \cite[Section~6.6]{GattingerThesis2018}.
Figure~\ref{figure:nExampleTreePart} is an example output of the implementation,
showing the execution tree for Example~\ref{example:nExample} up to two calls,
together with the epistemic edges for agent $2$, here called $c$. Note that
we use a more compact way to denote gossip graphs: lower case stands for a pure
number and capital letters for knowing the number and secret.
\begin{figure}[H]
\centering
\includegraphics[width=0.97\linewidth]{img/gossip-nExample_2_2_1.pdf}
\caption{Two levels of the execution tree for Example~\ref{example:nExample},
with epistemic edges for $c$.}\label{figure:nExampleTreePart}
\end{figure}
Our implementation can run different protocols on a given graph and output
a \LaTeX\ table showing and comparing the extension of those protocols.
Tables~\ref{table:nExampleExtensions} and~\ref{table:DiamondExampleExtensions}
have been generated in this way.
They provide details how various strengthenings behave on the gossip graphs
from Example~\ref{example:nExample} and Section~\ref{subsec:diamond}.
\begin{table}
\centering
\fontsize{8pt}{9.5pt}\selectfont
\input{data/gossip-n-table.tex}
\caption{N Example~\ref{example:nExample}: Extensions of strengthenings.}\label{table:nExampleExtensions}
\end{table}
\begin{table}
\centering
\fontsize{9pt}{11.5pt}\selectfont
\input{data/gossip-diamond-table.tex}
\caption{Diamond Example of Section~\ref{subsec:diamond}: Extensions of strengthenings,
after $20$.}\label{table:DiamondExampleExtensions}
\end{table}
\clearpage
\interlinepenalty=10000
\bibliographystyle{myplainurl}